Multiple test pages with the same content: Google ranking / SEO (PHP)

I have some pages named index2.php, index3.php, ...
They are copies of my index.php, for testing purposes.
Should I disallow these pages in robots.txt because of the duplicate content?
Or does it not matter, because these pages are not linked to anywhere on my website?
If I get penalized by Google for the duplicate content, how bad is it?

If you never link to the test pages or share the link with anyone (who might post it elsewhere), and they don't appear in any sitemap, then crawlers will never find the pages.
I wouldn't recommend adding them to your robots.txt, because anyone can view your robots.txt and find out the location of your test pages. Usually you don't want the public to have access to test pages.
If you want to ensure no one other than you can view the test pages, add an IP check, or some other protection such as a login.
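A minimal sketch of such an IP check, to include at the top of each test page (203.0.113.5 is a placeholder for your own address):

    <?php
    // access-check.php - include at the top of index2.php, index3.php, ...
    $allowed = ['203.0.113.5'];   // placeholder: your own IP address(es)

    if (!in_array($_SERVER['REMOTE_ADDR'], $allowed, true)) {
        http_response_code(404);  // pretend the page does not exist
        exit;
    }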

Duplicate content, or anything else that can knock you down, can be very hard to recover from. Overall I would say it depends on what you're doing with your site/service. If it's all about content, then yes, it's going to matter to some extent or another. Once Google and other engines list you in a pool of domains flagged for duplicate content, or for anything else that looks like an attempt to falsely boost your rank, getting out of that pool is tough. It's like trust: once you lose someone's, it's hard to gain it back.
Then again it's hard to say. I've known some who have been plagued for months on end trying to fix similar issues, and I know some who fixed it within a week.
Again, it all boils down to what your site/service is doing and how it's doing it. There are so many factors that no single one alone will kill you.
As for the whole robots.txt thing: if you're that concerned, go for it, there's no harm in it. Engines like Google pay attention to robots.txt and respect it well enough. Without it, despite a page not being linked directly, Google and other engines sometimes find a way to find it. I've had whole subdomains that were never seen by the public end up indexed in search engines, and I've had the most obscure file names, never linked publicly, end up in search engines. It's hit or miss.

To be on the safe side, you should disallow them, and it would be even better if you removed them completely.
As for the punishment: if you have duplicate content on different pages, those pages will start competing with each other for higher rankings, and you don't want your own pages fighting each other for rankings.
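If you do go the robots.txt route (keeping in mind the caveat in the earlier answer that robots.txt is publicly readable), a minimal sketch for the copies mentioned in the question would be:

    User-agent: *
    Disallow: /index2.php
    Disallow: /index3.php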

Related

How to block a spider if it's disobeying the rules of robots.txt

Is there any way to block a crawler/spider search bot if it's not obeying the rules written in the robots.txt file? If yes, where can I find more info about it?
I would prefer an .htaccess rule; if not, then PHP.
There are ways to prevent most bots from spidering your site.
Aside from filtering by user agent and known IP addresses, you should also implement behaviour-driven blocking. That means: if it acts like a crawler, block it.
You can find multiple lists of search engine bots here. But most of the big players obey robots.txt anyway.
So the other, rather big part is blocking based on the bot's behaviour. Things get less complicated when you are using a framework like Laravel or Symfony, because you can easily set a filter to be executed before every page load. If not, you'd have to implement such a function yourself and call it before every page load.
Now there are some things to consider. A spider usually crawls as fast as it can, so you could use the session to measure the time between page loads and the number of page loads in a given time span. If some threshold X is exceeded, the client is blocked.
Sadly, this approach relies on the bot handling sessions/cookies correctly, which may not always be the case.
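A rough sketch of that session-based check in plain PHP (the window and threshold values are arbitrary examples, not recommendations):

    <?php
    // rate-check.php - include this before any output on every page
    session_start();

    $window  = 60;   // time span in seconds (example value)
    $maxHits = 30;   // allowed page loads per window (example value)

    $now  = time();
    $hits = isset($_SESSION['hits']) ? $_SESSION['hits'] : [];

    // keep only the timestamps that fall inside the current window
    $hits = array_filter($hits, function ($t) use ($now, $window) {
        return ($now - $t) < $window;
    });

    $hits[] = $now;
    $_SESSION['hits'] = $hits;

    if (count($hits) > $maxHits) {
        http_response_code(429);   // Too Many Requests
        exit('Too many requests.');
    }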
Another, or an additional, approach would be to measure the number of page loads from a given IP address. This is risky because a huge number of users may be sharing the same IP address, so it may lock out humans as well.
A third approach I can think of is to use some kind of honeypot. Create a link that leads to a specific page. That link has to be visible to computers, but not to humans: hide it away with some CSS. If someone or something accesses the page through the hidden link, you can be (close to) sure it is a program. But be aware that there are browser add-ons which preload every link they can find, so you cannot rely on this completely.
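A honeypot could look roughly like this; trap.php and the banned-IP list are placeholders you would wire into your own blocking logic:

    <!-- somewhere in your page template: hidden from humans, visible to crawlers -->
    <a href="/trap.php" style="display:none" rel="nofollow">ignore this</a>

    <?php
    // trap.php - anything requesting this URL is very likely a bot
    file_put_contents(
        __DIR__ . '/banned-ips.txt',
        $_SERVER['REMOTE_ADDR'] . PHP_EOL,
        FILE_APPEND | LOCK_EX
    );
    http_response_code(403);
    // your normal pages would then refuse clients whose IP appears in banned-ips.txt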
Depending on the nature of your site, one last approach would be to hide the complete site behind a CAPTCHA. This is a harsh measure in terms of usability, so decide carefully whether it applies to your use case.
Then there are techniques like using Flash or complicated JavaScript that most bots do not understand, but it's disgusting and I don't want to talk about it. ^^
Finally, I will now come to a conclusion.
By using a well-written robots.txt, most robots will leave you alone. In addition, you should combine some or all of the approaches mentioned above to catch the bad guys.
After all, as long as your site is publicly available, you can never evade a custom-made bot tailored specifically for your site. If a browser can parse it, a robot can too.
For a more useful answer I would need to know what you are trying to hide and why.

How are web pages scraped, and how do you protect against someone doing it?

I'm not talking about extracting text or downloading a single web page.
But I see people downloading whole web sites. For example, there is a directory called "example" that isn't even linked on the site, so how do they know it's there? How do I download ALL pages of a website? And how do I protect against this?
For example, there is "directory listing" in Apache; how do I get the list of directories under the root if there is already an index file?
This question is not language-specific. I would be happy with just a link that explains the techniques involved, or a detailed answer.
OK, so to answer your questions one by one: how do you know that a 'hidden' (unlinked) directory is on the site? Well, you don't, but you can check the most common directory names and see whether they return HTTP 200 or 404... With a couple of threads you will be able to check even thousands a minute. That being said, you should always consider the number of requests you are making in relation to the specific website and the amount of traffic it handles, because for small to mid-sized websites this could cause connectivity issues or even a short DoS, which of course is undesirable. You can also use search engines to search for unlinked content; it may have been discovered by the search engine by accident, there might have been a link to it from another site, etc. (for instance, google site:targetsite.com will list all the indexed pages).
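For illustration, a bare-bones PHP version of that 200/404 probing; the host name and word list are made-up examples, and a real check would have to throttle itself for the reasons just mentioned:

    <?php
    // probe.php - check a handful of common directory names on a target host
    $base  = 'http://targetsite.example/';        // placeholder target
    $names = ['admin', 'backup', 'test', 'old'];  // tiny example word list

    foreach ($names as $name) {
        $ch = curl_init($base . $name . '/');
        curl_setopt($ch, CURLOPT_NOBODY, true);         // HEAD request only
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);
        $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        echo "$name => $code\n";                        // 200/301 = exists, 404 = not there
    }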
How to download all pages of a website has already been answered: essentially you go to the base URL, parse the HTML for links, images and anything else that points to on-site content, and follow them. Further, you deconstruct links into their directories and check for indexes. You would also brute-force common directory and file names.
You really can't effectively protect against bots unless you limit the user experience. For instance, you could limit the number of requests per minute; but if you have an AJAX site, a normal user will also produce a large number of requests, so that really isn't the way to go. You can check the user agent and whitelist only 'regular' browsers; however, most scraping scripts will identify themselves as regular browsers, so that won't help you much either. Lastly, you can blacklist IPs, but that is not very effective: there are plenty of proxies, onion routing and other ways to change your IP.
You will get a directory listing only if a) it is not forbidden in the server config and b) there isn't a default index file (on Apache, index.html or index.php by default).
In practical terms it is a good idea not to make things easier for the scraper, so make sure your website's search function is properly sanitized etc. (it doesn't return all records on an empty query, it filters the % sign if you are using MySQL's LIKE syntax...). And of course use a CAPTCHA if appropriate, but it must be properly implemented, not a simple "what is 2 + 2" or a couple of letters in a common font on a plain background.
Another protection against scraping might be using referer checks to allow access to certain parts of the website; however, it is better to simply forbid access on the server side to any parts of the website you don't want public (using .htaccess, for example).
Lastly, from my experience scrapers will only have basic JS parsing capabilities, so implementing some kind of check in JavaScript could work; however, you would then also be excluding all visitors with JS switched off (or with NoScript or a similar browser plugin) or with an outdated browser.
To fully "download" a site you need a web crawler that, in addition to following the URLs, also saves their content (see the sketch below). The application should be able to:
Parse the "root" URL
Identify all the links to other pages on the same domain
Access and download those pages, and all the pages linked from them
Remember which links have already been parsed, in order to avoid loops
A search for "web crawler" should provide you with plenty of examples.
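As a rough illustration, a naive same-domain crawler in PHP might look like this. example.com is a placeholder, and there is no politeness delay, robots.txt handling or proper relative-URL resolution, so treat it purely as a sketch:

    <?php
    // crawl.php - naive same-domain crawler (requires allow_url_fopen)
    $start   = 'http://www.example.com/';   // placeholder root URL
    $host    = parse_url($start, PHP_URL_HOST);
    $queue   = [$start];
    $visited = [];

    @mkdir('pages');

    while ($queue) {
        $url = array_shift($queue);
        if (isset($visited[$url])) {
            continue;                        // remember parsed links, avoid loops
        }
        $visited[$url] = true;

        $html = @file_get_contents($url);
        if ($html === false) {
            continue;
        }
        file_put_contents('pages/' . md5($url) . '.html', $html);   // save the content

        $dom = new DOMDocument();
        @$dom->loadHTML($html);
        foreach ($dom->getElementsByTagName('a') as $a) {
            $href = $a->getAttribute('href');
            // very naive URL resolution: absolute links kept as-is,
            // everything else treated as relative to the root
            $abs = parse_url($href, PHP_URL_HOST) ? $href
                 : rtrim($start, '/') . '/' . ltrim($href, '/');
            if (parse_url($abs, PHP_URL_HOST) === $host && !isset($visited[$abs])) {
                $queue[] = $abs;             // only follow same-domain links
            }
        }
    }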
I don't know of many countermeasures you could adopt to avoid this: in most cases you WANT bots to crawl your website, since that is how search engines learn about it.
I suppose you could look at the traffic logs and, if you identify (by IP address) some repeat offenders, blacklist them and prevent access to the server.

External links from search engine - Redirect?

I am making a site which is designed to contain a large number of links to other sites, much like a search engine. I have seen two different approaches with regard to linking to external sites:
Link directly to the external content from the links on my own site
Redirect to the content via an internal link, such as www.site.com/r/myref123 -> www.internet.com/hello.php
Would anyone be able to tell me what the advantage is of each approach? I am stuck at a crossroads here and can't find much information on which approach I should be using.
This is a bit opinion based, so I wouldn't be surprised if the question gets closed, but I think that the second option is better by FAR.
The reason is that you are then able to track who clicks through to what. You are also able to run some extra code that the user will never see, such as internally ranking sites that generate a lot of click-through traffic when presented in a list of choices.
Lastly, and most importantly, if you are going to include links that generate some sort of income, you need to be able to track those clicks. If you simply present them and do nothing more, you will have no way to bill your advertisers.
You may want to track when, by whom and from where links are clicked. If you put one of your own pages between the original page and the linked one, you will be able to do so.
If you use the 1st approach, whenever a link is clicked the visitor goes directly to the referred page and you have no way to track it.
If you use the 2nd approach, you will be able to track the clicks made by visitors through a script located at www.site.com/r/myref....
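A minimal sketch of such a tracking redirect; the table and column names are invented for the example:

    <?php
    // r.php - reached as /r/myref123 via a rewrite rule, or as r.php?ref=myref123
    $ref = isset($_GET['ref']) ? $_GET['ref'] : '';

    $pdo = new PDO('mysql:host=localhost;dbname=example', 'user', 'pass');

    // look up the real destination for this reference code
    $stmt = $pdo->prepare('SELECT url FROM outbound_links WHERE ref = ?');
    $stmt->execute([$ref]);
    $url = $stmt->fetchColumn();

    if ($url === false) {
        http_response_code(404);
        exit;
    }

    // record the click before sending the visitor on their way
    $log = $pdo->prepare('INSERT INTO clicks (ref, ip, clicked_at) VALUES (?, ?, NOW())');
    $log->execute([$ref, $_SERVER['REMOTE_ADDR']]);

    header('Location: ' . $url, true, 302);
    exit;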

Best way to reduce URLs bots have indexed

Google, Bing and other web search engines have tons of URIs in their indexes that my site does not contain. Let's say something like http://www.mydomain.com?key=apple+banana+orange .
Even though there is no direct link to this URI on my site, it displays a good page according to my own search engine results (PHP, MySQL and other stuff). The problem is that bots are eating my server resources by heavily spidering thousands and thousands of URIs like that one. Even worse, lots of odd strings (I cannot even call them words) are indexed. All this is lowering performance and, I suspect, lowering the site's ranking.
I only want to keep the ones that really exist as links on my site, like
http://www.mydomain.com?key=apple or
http://www.mydomain.com?key=banana or
http://www.mydomain.com?key=orange (a single-word key)
and remove the others (the combinations, like the first URI).
I created a Google sitemap a year ago.
I need a solution that follows Google's rules. The only thing I have in mind is
if (strstr($_SERVER['QUERY_STRING'], '+')) { /* redirect to index.php */ }
Thank you
If you have your index page look at the query string and return a 404 Not Found for keys that don't actually exist, that should get them out of the index. Redirecting can be an indication that the URLs are actually valid.
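A sketch of what that check might look like at the top of the index page; keyExists() is just a placeholder for however you decide which single-word keys genuinely exist on your site:

    <?php
    // at the very top of index.php
    $key = isset($_GET['key']) ? $_GET['key'] : '';

    // keyExists() is a placeholder: e.g. a lookup against the single-word
    // keys that really occur as links on the site
    if ($key !== '' && !keyExists($key)) {
        http_response_code(404);   // 404 instead of a redirect
        exit('Not Found');
    }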
There are two ways I can think of to attack this issue:
1. Create a sitemap.xml (google it).
2. Open an account in Google Webmaster Tools (http://www.google.com/webmasters/) and claim ownership (a 5-minute process). After you're verified as the website owner, log in to your Webmaster account and go to: Site configuration -> Sitelinks.
There you'll have the option to demote specific links you want Google to ignore.
You could use a "robots.txt" file to give instructions about your site to web robots.
You can read about how to set it up here.
Edit
Google talks about robots.txt as well here.
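For the multi-word URIs from the question, a wildcard rule along these lines should cover them; Googlebot and Bingbot support the * wildcard in robots.txt patterns (not every crawler does), and the + is how the space between the words is encoded:

    User-agent: *
    Disallow: /*?key=*+*

It is worth verifying the pattern with the robots.txt tester in Google Webmaster Tools before relying on it.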

Google Sitemap - Should I provision for load control / caching?

I have a community site which has around 10,000 listings at the moment. I am adopting a new URL strategy, something like
example.com/products/category/some-product-name
As part of this strategy, I am implementing a sitemap. Google already has a good index of my site, but the URLs will change. I use a PHP framework which accesses the DB for each product listing.
I am concerned about the performance effects of supplying 10,000 new URLs to Google. Should I be?
A possible solution I'm looking at is rendering my PHP-generated pages as static HTML pages. I already have this functionality elsewhere on the site. That way, Google would index 10,000 HTML pages. The beauty of this system is that if a user arrives via Google at one of those HTML pages, as soon as they start navigating around the site they jump straight back into the PHP version.
My problem with this method is that I would have to append .html to my nice clean URLs...
example.com/products/category/some-product-name.html
Am I going about this the wrong way?
Edit 1:
I want to cut down on PHP and MySQL overhead. Creating the HTML pages is just a method of caching in preparation for the load spike as the search engines crawl those pages. Are there better ways?
Unless I'm missing something, I think you don't need to worry about it. I'm assuming that your list of product names doesn't change all that often -- on a scale of a day or so, not every second. The Google sitemap should be read in a second or less, and the crawler isn't going to crawl you instantly after you update it. I'd try it without any complications and measure the effect before you break your neck optimizing.
You shouldn't be worried about 10,000 new links, but you might want to analyze your current Google traffic to see how fast Google would crawl them. Caching is always a good idea (see: Memcache, or even generating static files?).
For example, I currently get about 5 requests/second from Googlebot, which means Google would crawl those 10,000 pages in a good half hour. But consider the following:
Redirect all existing links to new locations
By doing this, you ensure that links already indexed by Google and other search engines are rewritten almost immediately. The current Google rank is migrated to the new link (additional links start with a score of 0). A redirect sketch follows at the end of this answer.
Google Analytics
We have noticed that Google uses Analytics data to crawl pages it usually wouldn't find through normal crawling (JavaScript redirects, links in logged-in user content). Chances are Google would pick up on your URL change very quickly, but see point 1 (the redirects).
Sitemap
The rule of thumb for the sitemap files in our case is to only keep them updated with the latest content. Keeping 10,000 links, or even all of your links, in there is pretty pointless. How would you update this file?
It's a love & hate relationship between me and the Google crawler these days, since the links most used by visitors are pretty well cached, but the things the Google crawler requests usually are not. This is why Google causes 6x the load in 1/6th of the requests.
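For point 1, a 301 redirect sketch in PHP; mapOldToNew() is a placeholder for whatever translation you use from the old URL scheme to the new one:

    <?php
    // legacy-redirect.php - old-style URLs are routed here
    // mapOldToNew() is a placeholder for your own old-URL -> new-URL translation
    $newUrl = mapOldToNew($_SERVER['REQUEST_URI']);

    if ($newUrl !== null) {
        header('Location: ' . $newUrl, true, 301);   // permanent: rank moves to the new URL
        exit;
    }

    http_response_code(404);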
Not an answer to your main question.
You don't have to append .html. You can leave the URLs as they are. If you can't find a better way to redirect to the HTML file (which does not have to have an .html suffix), you can output it via PHP with readfile.
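A sketch of that idea: keep the clean URL and let PHP serve a pre-rendered copy when one exists (the cache directory and naming scheme are just examples):

    <?php
    // front controller: serve a pre-rendered copy if we have one, else render live
    $cacheFile = __DIR__ . '/cache/' . md5($_SERVER['REQUEST_URI']) . '.html';

    if (is_file($cacheFile)) {
        header('Content-Type: text/html; charset=utf-8');
        readfile($cacheFile);    // static copy, no PHP/MySQL work per request
        exit;
    }

    // ...otherwise fall through to the normal framework rendering, and
    // optionally write its output into $cacheFile for the next request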
I am concerned about the performance effects of supplying 10,000 new URLs to Google. Should I be?
Performance effects on Google's servers? I wouldn't worry about it.
Performance effects on your own servers? I also wouldn't worry about it. I doubt you'll get much more traffic than you used to, you'll just get it sent to different URLs.
