Let's say I have a plain HTML website. More than 80% of my visitors are usually from search engines like Google, Yahoo, etc. What I want to do is to make my whole website in Flash.
However, search engines can't read information from Flash or JavaScript. That means my website would lose more than half of its visitors.
So how do I show HTML pages instead of Flash to the search engines?
Note: you can reach a specific page/category/etc. in Flash by using a PHP GET parameter; for example, you can browse through all the pages from the homepage, or link to a specific page by typing page?id=1234.
Short answer: don't make your whole site in Flash.
Longer answer: If you show humans one view and the googlebot another, you are potentially guilty of "cloaking". If the Google Gods find you guilty, you will be banished to the Supplemental Index, never to be heard from again.
Also, doing an entire site in Flash breaks the basic contract of the web, namely that you can link to specific content from other sites or in emails. If your site has just one URL and everything else is handled inside of Flash ... well, I don't know what you have, but it isn't a website anymore. Adobe may like you, but many people will not. Oh, and Flash is very unfriendly to people with handicaps.
I recommend using Flash where it is needed (videos, animations, etc.), but make it part of an honest-to-God website.
"What I want to do is to make my whole website in Flash."
"So how to accomplish this: show HTML pages instead of Flash?"
These two seem a bit contradictory.
It is important to understand the reasoning behind choosing Flash to build your entire website.
"More than 80 percent of my visitors are usually from search engines."
You did some analysis, but did you look at how many visitors access your website via a mobile device? Because apart from the SEO problem, Flash won't work on the majority of those devices.
Have you considered HTML5 as an alternative for anything you want to do with Flash?
Facebook requires you to build applications in Flash, among other technologies, rather than plain HTML. Why? I do not know, but that is their policy, and there has got to be a reason.
I have recently been developing simple social applications in Flash (*.swf), and my latest app is a website in Flash that will display in a tab of my company's Facebook page; at the same time, I also want to use that website as a regular web page for my company. The only way I could find to display HTML text within a Flash file is by changing the properties for the text, wherever I can in the CHARACTER panel, to "Render text as HTML" (look for the "<>" symbol). I think that way the search engines will be able to read your content and process your website accordingly. Good luck.
As you say, you can reach the Flash page via a GET variable using the page ID or other variables, which is good. I assume you will embed the Flash in each HTML page. Besides this, you can add all the other HTML content in hidden form, so the crawlers can reach the content while your site still appears in Flash, right?
Since no one actually gave you a straight answer (probably because your question is absolutely face-palm-esque), I'll try:
Consider using the web-development approach called progressive enhancement. Now, it's fair to say that it probably wasn't intended for the Flashification of a website, but you can make use of its principles.
Start with your standard HTML version of your website
Introduce swfobject to dynamically (important bit) swap out the HTML content for its Flash equivalent
Introduce swfaddress to allow for deep linking into your Flash movies (pseudo-URLs)
Granted, steps 2 and 3 are a little more advanced than how I've described them, and your site's size/structure/design may not suit this approach, but at least it's an answer.
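Very roughly, a page built this way might look like the sketch below. It's only an illustration: the file names, the element id and the page FlashVar are placeholders, not anything swfobject requires. The point is that the crawlable HTML is always in the document, and the script only swaps it for the SWF when Flash is actually available.

    <?php
    // page.php - hypothetical template: crawlable HTML first, Flash only if available.
    $pageId = isset($_GET['id']) ? (int) $_GET['id'] : 1;
    ?>
    <!DOCTYPE html>
    <html>
    <head>
      <title>Example page</title>
      <script src="/js/swfobject.js"></script>
      <script>
        // Replace the #content div with site.swf only when Flash 9+ is installed;
        // search engines and non-Flash visitors simply keep the HTML below.
        swfobject.embedSWF("/swf/site.swf", "content", "960", "600", "9.0.0",
            false, { page: "<?php echo $pageId; ?>" });
      </script>
    </head>
    <body>
      <div id="content">
        <h1>Page <?php echo $pageId; ?></h1>
        <p>The real, crawlable HTML content for this page goes here.</p>
      </div>
    </body>
    </html>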
All that being said, I agree with the other answers/comments questioning the need to use Flash to display your entire site - there are very, very, very few reasons anyone would do that, and there are more reasons than those already given for why not to (iOS devices etc)...
I have never understood why some people say making custom CSS for each browser is a bad thing. To keep my page size down and download times fast, it makes perfect sense to me to make a custom CSS file for each of the major browsers (especially IE in its many different forms), and a general catch-all CSS file for everything else.
If you want to send out a bloated, huge, Swiss-army-knife-of-the-CSS-world stylesheet for all situations, then go right ahead; I'm not going to stop you.
Fast detection of the browser is important when doing this. Loading a JavaScript file to detect the browser seems slow, so I would prefer to use PHP to detect the browser and send out the appropriate CSS, or at least a general browser-specific CSS file, and then use JavaScript to load a more detailed version.
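Roughly, I'm picturing something like the sketch below (the user-agent checks are deliberately crude and the file names are just examples), with the page linking to css.php instead of a static stylesheet, e.g. <link rel="stylesheet" href="css.php">:

    <?php
    // css.php - picks a stylesheet based on the User-Agent header (very simplified).
    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

    if (strpos($ua, 'MSIE 6') !== false) {
        $file = 'ie6.css';
    } elseif (strpos($ua, 'MSIE 7') !== false) {
        $file = 'ie7.css';
    } elseif (strpos($ua, 'Firefox') !== false) {
        $file = 'firefox.css';
    } else {
        $file = 'default.css';   // general catch-all for everything else
    }

    header('Content-Type: text/css');
    header('Vary: User-Agent');      // tell caches the response depends on the UA
    readfile(dirname(__FILE__) . '/css/' . $file);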
But I've read article after article about why this is a bad thing. The main reason behind each of these articles is that the user agent can be faked - or someone is using Firefox but the server thinks they're using IE7, so it sends out the wrong CSS file.
As a developer/designer of web apps, why is this my problem? If you want to use Firefox but tell my server you're using Safari or IE*, and you get a crappy-looking page, why is it my problem?
And don't throw the whole "if the user can't see your site right, they'll never come back" argument, or anything similar, at me. A normal user isn't going to be doing this; it's only going to be the people who know how to do this, and who will know what's wrong when my site looks crappy.
This is similar to looking at my site on an old Apple II (I have no clue how), and yelling at me because everything looks green.
So is there a good reason, not a personal preference, why I shouldn't use PHP to detect the browser and send out customized CSS files?
I do this mostly for the different versions of IE. It just seems like, for some sites, adding the "if IE6" and "if IE7" parts doubles or triples the size of the CSS file.
Typically, when a user intentionally fakes the user agent string, it is because something is not viewable in the user's browser that should be. For example, some sites may restrict users to IE or Firefox, but the user is using Iceweasel on Debian. Iceweasel is just Firefox renamed for trademark reasons (there are a few other changes as well), so there is no reason the site should not work.
Realize that this happens because of (bad) browser detection, not despite it. I would say you don't need to be terribly concerned about this issue. Further, if you can just make your site reasonably cross-browser compatible, it won't matter at all. If you really want to use browser-specific CSS, and you don't want to do so all in one CSS file, don't let a fake user agent stop you.
As long as the only thing you're doing is changing style sheets, there is no valid reason as far as I can tell. If you're attempting to deliver custom security measures by browser, then you'll have issues.
Not sure about PHP, but in Rails it is normal and dead-simple practice to provide different CSS files and layouts based on the user agent, particularly when you consider that your site is just as likely to be accessed by any of the myriad of available mobile devices as by the most popular desktop browsers (currently Firefox). Even writing custom MIME types, if need be, is dead simple.
IMO, not doing so is pure laziness on the coder's part, but then not all sites are developed by professional teams of developers with styling gurus at hand. Also, in languages other than Rails it might not be so simple. Sorry, I haven't a clue about PHP, so this may not be an appropriate reply.
In my opinion, starting with normalize.css and a base style sheet, then overriding the base styles as needed, usually works well, along with making sure you set appropriate fallbacks. If you really need them, a few media queries and some feature detection can go a long way.
One reason you shouldn't base things off of the browser is because major search engines like Google and Yahoo prohibit displaying different content for different browsers. GoogleBot can detect different CSS and HTML and you may get bad search positioning. Additionally, if you use any advertising services you may be in breach of their contract by displaying varying content.
I'm launching a big database-driven website (1.5+ million records), and I want to know some SEO tips before launch.
Which links do I need to tag as rel="nofollow", rel="me", etc.?
How do I prevent search engines from following links that are meant for users only, like 'login', 'post message', 'search', etc.?
Do I need to prevent search engines from entering the 'search' section of the site? If so, how?
The site is basically a database of movies and actors. How do I create a good sitemap?
Do I need to prevent search engines from reading user comments and reviews?
Is any other robots.txt or .htaccess configuration needed?
How do I use noindex the right way?
Additional tips?
Thanks!
If you just have internal links, no reason to make them nofollow
Make them buttons on forms with method="post" (that's the correct way to do it anyway)
Don't think you need to do that.
Perhaps see how IMDb does it? I'd consider just listing all actors and all movies in some sort of a sensible manner or something like that.
Why would you need to do that?
Depending on whether you want to block something (via robots.txt) or need .htaccess for something else
No idea
Remember to use semantic HTML - use h1's for page titles and so on.
Use nofollow when you don't want your link to a page to give it additional weight in Google's PageRank. So, for example, you'd use it on links to user homepages in comments or signatures. Use rel="me" when you are linking to your other "identities", e.g. your Facebook page, your MySpace account, etc.
robots.txt allows you to give web crawlers a set of rules about what they can or can't crawl and how to crawl it. nofollow also supposedly tells Google not to crawl a link. Additionally, if you have application queries that are non-idempotent (they cannot be safely called multiple times), then they should be POST requests; these include things like news/message/page deletions.
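For example, a minimal robots.txt along these lines would keep well-behaved crawlers out of user-only areas such as search and login (the paths are hypothetical; substitute whatever your site actually uses, and the Sitemap line is optional):

    User-agent: *
    Disallow: /search
    Disallow: /login
    Disallow: /post-message

    Sitemap: http://www.example.com/sitemap.xml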
Unless your searches are incredibly database-intensive (perhaps they should be cached) then you probably don't need to worry about this.
Google is intelligent enough to figure out a sitemap that you've created for your users, and that's the way you ought to be thinking instead of about SEO: e.g., how can I make my site more usable/accessible/user-friendly? All of that will indirectly optimize your site for search engines. But if you want to go the distance, there are semantic sitemap technologies you can use, like RDF sitemaps or XML sitemaps. Google Webmaster Tools also has sitemap support.
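If you do decide to generate an XML sitemap yourself, a rough sketch for a movie database might look something like this; the database credentials, table and column names, and URL scheme are all made up for illustration:

    <?php
    // sitemap.php - emits a sitemaps.org-style XML sitemap from a movies table.
    $pdo = new PDO('mysql:host=localhost;dbname=movies_db', 'user', 'password');

    header('Content-Type: application/xml; charset=utf-8');
    echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
    echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";

    foreach ($pdo->query('SELECT slug, updated_at FROM movies') as $row) {
        echo "  <url>\n";
        echo '    <loc>http://www.example.com/movies/' . htmlspecialchars($row['slug']) . "</loc>\n";
        echo '    <lastmod>' . date('Y-m-d', strtotime($row['updated_at'])) . "</lastmod>\n";
        echo "  </url>\n";
    }

    echo "</urlset>\n";

Keep in mind that the sitemap protocol caps each file at 50,000 URLs, so with 1.5+ million records you would split the output across several sitemap files listed in a sitemap index.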
No, why would you want to hide content from the search engine? Probably 90% of StackOverflow's search engine referrals are from user-generated content.
What? Configure your web server for people, not search engines.
This is easy to find the answer to.
Don't make your site spammy, such as overloading it with banners or using popup ads; use semantic markup (H1, H2, P, etc.); use good spelling/grammar; use REST-style URLs (even if it's not a RESTful application); use slugs to hide ugly URI-encoding; observe accessibility standards and guidelines; and, most importantly, make your site useful to encourage return visits and backlinks—that is the most sure fire way of attaining good search ranking.
I'm looking to translate a webpage in PHP 5 so I can save the translation and make it easily accessible via mydomain.com/lang/fr/category/article.html, rather than users having to go through Google Translate.
I've found various easy ways to translate text via cURL; however, what I'd really like to be able to do is translate an entire webpage while obviously ignoring the tags.
The problem is that Google Translate messes up all the HTML tags, class names, etc.
Does anyone know of a PHP class that can translate an entire webpage whilst ignoring the tags?
I'm guessing it may be possible via advanced regular expressions or something like that, but I'm not sure.
I can't just curl Google's response, as I'll have all the extra JS that they put in.
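To make that a bit more concrete, I was imagining something along the lines of the sketch below: walk the DOM and translate only the text nodes, leaving the markup alone. Here translate_text() is just a placeholder for whatever cURL-based call actually performs the translation; it isn't a real function.

    <?php
    // Hypothetical sketch: translate only the text nodes of a page, leaving markup intact.
    function translate_page_html($html, $targetLang)
    {
        $doc = new DOMDocument();
        @$doc->loadHTML($html);   // suppress warnings from imperfect real-world markup

        $xpath = new DOMXPath($doc);
        // Every text node except those inside <script> or <style>.
        $nodes = $xpath->query('//text()[not(ancestor::script) and not(ancestor::style)]');

        foreach ($nodes as $node) {
            $text = trim($node->nodeValue);
            if ($text !== '') {
                // translate_text() is a placeholder for the actual cURL-based translation call.
                $node->nodeValue = translate_text($text, $targetLang);
            }
        }

        return $doc->saveHTML();
    }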
Any ideas?
I know it's not quite what you asked for, but a much simpler alternative would just be to include the free Google Translate widget on all your pages. That way visitors select the language they would like to view the site in, and Google dynamically does the rest (and persists their selection throughout the site). You then don't need to worry about trying to create and keep updated dozens of different HTML files for every page, each with its own set of internal links (which, frankly, sounds like a nightmare to maintain).
A friend has asked me for help with her website design. Although I know a fair amount about the basics behind HTML, XML, PHP, ASP.NET, JavaScript, etc., I'm not really comfortable sitting down and coding from scratch. All of the work I do is in Java, C++, and so on.
My friend would like to add a vertically scrolling marquee to her site - no problem, there is code for that all over the internet. Here is the tricky part - she would like the text to be dynamically pulled from another website. This isn't like a simple text file, either - it's a list of names from a specific blog post, so there would be a lot of text processing involved to wade through all of the other markup, and extract the relevant info.
The way I see it, here are her options -
1) Write some kind of Perl script, or some such, that is set to run daily. This script will visit the blog and extract the necessary info. It will then update the HTML file's marquee text with the new info.
2) Some sort of active page written in ASP or PHP that will dynamically build the marquee (and the rest of the site) each time the site is visited, basically doing the work of the Perl script on every request. This seems like it has the potential to be somewhat slow.
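For what it's worth, I imagine option 2 would look roughly like the sketch below in PHP, with a simple file cache so the blog isn't fetched on every page view. The extract_names() function is only a placeholder; the real parsing depends entirely on how the list is marked up in the post.

    <?php
    // marquee.php - option 2 sketch: fetch the blog post, pull out the names,
    // and cache the result so the remote site is hit at most once a day.
    $url       = 'http://truthnottasers.blogspot.com/2008/04/what-follows-are-names-where-known.html';
    $cacheFile = dirname(__FILE__) . '/names-cache.txt';

    if (!file_exists($cacheFile) || (time() - filemtime($cacheFile)) > 86400) {
        $html = file_get_contents($url);
        if ($html !== false) {
            // extract_names() is a placeholder - the real parsing depends entirely
            // on how the list is marked up in the post (regex, DOMDocument, etc.).
            $names = extract_names($html);
            file_put_contents($cacheFile, implode("\n", $names));
        }
    }

    $names = file_exists($cacheFile) ? file($cacheFile, FILE_IGNORE_NEW_LINES) : array();

    foreach ($names as $name) {
        echo '<div>' . htmlspecialchars($name) . "</div>\n";   // marquee line items
    }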
Per my understanding, those are her only options. Am I correct? Is there no simple way to do this in JavaScript that I am just missing? I know you can reference an image to be dynamically pulled with the marquee, but this isn't that simple...
Thanks.
EDIT: I guess where I was going with my question was this: unless I implement this statically, this is going to be fairly involved, right? I believe it is over my head. This is why I would like to simply copy/paste the text list into the HTML document. It would need to be updated every time the blog is, but that only appears to happen every few months, so that's not a large chore. I realize this is a lazy solution, but this is coming from someone very inexperienced in web development.
For reference, this is the SPECIFIC blog post which the text will come from, and my friend would ONLY like to display that list of names that begins when you scroll several paragraphs down.
http://truthnottasers.blogspot.com/2008/04/what-follows-are-names-where-known.html
It depends what the list of names looks like, i.e. how much intelligence is needed to parse it. But this could be something that could fairly easily be pulled, parsed and displayed using Ajax, for example in the jQuery flavour.
All the blogs I have ever seen have an RSS feed. Why not just grab the feed?... Google provides JavaScript that does just this.
Google Ajax Feed API
The RSS suggestion sounds good. If you can't get it in the RSS you could screen scrape the content.
If you could do it with JavaScript, I think it would suffer the same resource issues as your once-a-day Perl script and every-load ASP/PHP methods, since it would still have to fetch the web content by making a call to the website.
Another option is to use ASP.NET and enable caching, so that when other visitors come to the site, instead of generating the page all over again it serves up the cached page. You can set this to cache for 24 hours or so. I'm sure other server languages have similar features. Basically this would be the same as your once-a-day Perl method, but it keeps everything within a web framework.
Another hacky solution would be to use an iframe and frame the content with javascript so that it only shows the content you want to show. Of course you'll have no control over the formatting (background, fonts) of the iframe and if the content gets bigger or changes position you'll have problems.
I currently run several Wordpress MU installations.
My users are asking for the ability to post video (not just Youtube, but from our own Flash Media Server).
By default, Wordpress strips out <embed> tags.
Now, I would never allow users to include PHP or JavaScript in their posts, but do I have to worry about Flash vulnerabilities?
How dangerous is the embed tag and should I worry about giving them the ability?
Thanks
Generally speaking, Flash has come a long way in terms of preventing exploits like key trapping, etc.
The safest thing you could do would be to obfuscate the embedding code and have them supply only a SWF URL; that way they couldn't pull anything fancy in the embed object, like allowing cross-scripting, etc...
In particular, you want to watch out for things like potential hackers trying to call JS functions from your blog JS files by using AS3's ExternalInterface.call() function... that would definitely be bad. However I think you can use embed techniques to turn this off.
Make sure you set allowScriptAccess="never" in the object/embed tag to deny scripting powers to third party SWFs.
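A hedged way to combine those two suggestions in WordPress (the shortcode name, attributes and dimensions are just examples, not anything WordPress prescribes) is to let authors supply only the SWF URL and generate the markup yourself with scripting disabled:

    <?php
    // Sketch of a [swf url="http://.../movie.swf"] shortcode that builds the embed
    // markup itself, so users never paste raw <embed> tags into their posts.
    function my_swf_shortcode($atts) {
        $atts = shortcode_atts(array(
            'url'    => '',
            'width'  => 480,
            'height' => 360,
        ), $atts);

        $url = esc_url($atts['url']);
        if ($url === '') {
            return '';
        }

        // allowScriptAccess="never" stops the SWF calling page JavaScript via
        // ExternalInterface; allowNetworking="internal" limits it further.
        return '<object type="application/x-shockwave-flash" data="' . $url . '"'
             . ' width="' . (int) $atts['width'] . '" height="' . (int) $atts['height'] . '">'
             . '<param name="movie" value="' . $url . '" />'
             . '<param name="allowScriptAccess" value="never" />'
             . '<param name="allowNetworking" value="internal" />'
             . '</object>';
    }
    add_shortcode('swf', 'my_swf_shortcode');

That keeps raw embed markup out of user posts entirely, which sidesteps most of the parameter-tampering worries.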
I would suggest that Flash is only as secure as the content it is presenting; and that including a Youtube video is no more or less dangerous than going to visit the same video on Youtube's website.
Flash is pretty secure. A lot of websites, big and small, have been using it for 10 years now. Of course exploits are found, as in every piece of software; no web system is 100% secure. A lot of people use Flash, and a lot of developers are working to make it secure. If you have really sensitive information, don't put it on the web in the first place. Security depends more on the developer who writes a piece of code than on the type of code (ActionScript, JavaScript, PHP or Java). Languages permit errors, and developers sometimes make errors.
My recommendation is to use it if you need it.