Google webmaster tools HTML suggestions - duplicate meta descriptions - php

When I look in my Webmaster Tools account I have hundreds of duplicate meta descriptions. When I look at each one, the duplicate URLs look like this:
/index.php?route=product/product&product_id=158?48efc520
/index.php?route=product/product&product_id=158?abc56c80
Where are these numbers after my product ID coming from?
Thanks
Pjn

That means that the <meta name="description" content="..."> is the same for several pages.
Since you're not sure how the additional parameters are added to your URL, you could use the link tag to specify the canonical URL. This needs to be added to the head of each page.
<link rel="canonical" href="http://www.example.com/product.php?item=swedish-fish" />
For more information, have a look at http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html.

To resolve this, add code to 301-redirect URLs with extraneous parameters to the canonical URL. For example, redirect /index.php?route=product/product&product_id=158?48efc520 to
/index.php?route=product/product&product_id=158
This assumes those extraneous parameters are indeed unused.
Stack Overflow does this; see how this URL redirects:
Google webmaster tools HTML suggestions - duplicate meta descriptions
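A minimal PHP sketch of such a redirect, assuming the junk is always appended as a second "?" plus a token (as in the examples above); cut the request URI at the second "?" and 301 to the result:

```php
<?php
// Return the canonical form of a request URI by cutting it at the
// second "?", where the extraneous tracking suffix begins.
function canonical_uri(string $uri): string {
    $first = strpos($uri, '?');
    if ($first === false) {
        return $uri;
    }
    $second = strpos($uri, '?', $first + 1);
    return $second === false ? $uri : substr($uri, 0, $second);
}

// At the top of index.php, before any output is sent:
$uri = $_SERVER['REQUEST_URI'] ?? '';
if (($canonical = canonical_uri($uri)) !== $uri) {
    header('Location: ' . $canonical, true, 301);
    exit;
}
```

The redirect must run before anything is echoed, otherwise the Location header cannot be sent.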

Do you have a referral system or something similar in place? Those are query strings, and without the name part of the usual name=value pair, they tend to look like referral tokens.
Is this a system you built yourself, or are you using something off the shelf?
Without more information about your setup, it will be tough to diagnose.

It's hard to say where Google "learned" those names. You can do a couple of things in Webmaster Tools to avoid the reports.
One is the canonical link element, as mentioned above. The other is in Webmaster Tools under Site Configuration/Settings: click the Parameter Handling tab and enter your exceptions.

Related

Meta Description Content Not Showing In Google Search Result Page

I think I have some critical meta tag issue on my website. When I search for my website in Google, the title and site name show correct information, but in place of the description some other content appears: not the meta description content, but parts of my home page's content. My website is built on OpenCart, a PHP-based open-source system.
I searched a lot to resolve this but found no solution. I have no previous SEO experience, so I can't tell where the error is without more information. If anyone can help me out here, that would be really great. I attached a screenshot for better understanding.
Firstly, ensure that you are following the structure which is shown in https://support.google.com/webmasters/answer/79812?hl=en
This means that your tag should look like this:
<meta name="description" content="A description of the page" />
Something that could be causing this problem is that Google does not update descriptions immediately; you may have to wait until it crawls your website again for the snippet to update (you could use Google Webmaster Tools to encourage this).
Google will sometimes use the meta description of a page in search
results snippets, if we think it gives users a more accurate
description than would be possible purely from the on-page content
https://support.google.com/webmasters/answer/35624?rd=1
Google only sometimes uses your meta description; other times it uses page content.

SEO duplicate content issue with alternative URLs

I have a PHP website where every page can be accessed either by page ID or by page name:
http://domain/page_id=ID
http://domain/page=NAME
The problem is that Google treats this as duplicated content. What is the best practice for avoiding duplicate content in this case? Would a 303 redirect be better than avoiding two different URLs for the same page entirely?
According to Google:
In the world of content management and online shopping systems, it's
common for the same content to be accessed through multiple URLs.
Therefore,
Indicate the preferred URL with the rel="canonical" link element
Suppose you want
http://blog.example.com/dresses/green-dresses-are-awesome/ to be the
preferred URL, even though a variety of URLs can access this content.
You can indicate this to search engines as follows:
Mark up the canonical page and any other variants with a
rel="canonical" link element. Add a <link> element with the attribute
rel="canonical" to the <head> section of these pages:
This indicates the preferred URL to use to access the green dress
post, so that the search results will be more likely to show users
that URL structure. (Note: We attempt to respect this, but cannot
guarantee this in all cases.)
So all you need to do is add the canonical link element, with an absolute URL, to the <head> section of your pages.
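A hedged PHP sketch of how the page template could emit one canonical URL whether the page was requested by ID or by name. The `$id_to_name` map and the base URL are placeholder assumptions standing in for your own database lookup:

```php
<?php
// Resolve the request parameters to a single canonical URL, or null
// if the page cannot be identified.
function canonical_url(array $get, array $id_to_name, string $base): ?string {
    if (isset($get['page'])) {
        $name = $get['page'];
    } elseif (isset($get['page_id'])) {
        // Map the numeric ID back to the page name.
        $name = $id_to_name[(int) $get['page_id']] ?? null;
    } else {
        $name = null;
    }
    return $name === null ? null : $base . '/page=' . rawurlencode($name);
}
```

In the template's `<head>`, echo the result into a `<link rel="canonical" href="..."/>` element (HTML-escaped with `htmlspecialchars`), so both URL variants declare the same preferred address.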

How to prevent duplicate title tags on dynamic content

Links on the website I am making currently look like this:
http://www.example.net/blogs/151/This-is-a-title-in-a-url
My PHP system pulls out the ID (say, 151) and uses it to pull the content from my database. The text afterwards is effectively ignored (much as Stack Overflow does).
Now my problem is that this creates duplicate titles that Google will sometimes index and I lose SEO as a result:
http://www.example.net/blogs/151/This-is
http://www.example.net/blogs/151/
What is the best way to make it so that google and other search engines only see the correct full link so that I don't end up with duplicates and get the best ranking possible?
EDIT: I notice that on Stack Overflow you get dynamically redirected to another page. How do they do that?
Pick a URI to be canonical.
When you get a request for http://example.com/123/anything then, instead of ignoring the anything, compare it to the canonical URI.
If it doesn't match, issue a 301 Moved Permanently redirect.
A less optimal approach would be to specify the canonical URI in the page instead of redirecting:
<link rel="canonical" href="http://example.com/123/anything"/>
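The redirect approach above can be sketched in PHP. The canonical slug is assumed to come from your own database lookup for the given ID; compare it against the requested path and 301 on mismatch:

```php
<?php
// Build the canonical path for a blog post.
function canonical_path(int $id, string $slug): string {
    return '/blogs/' . $id . '/' . $slug;
}

// Return the canonical path if the request differs from it,
// or null if the request is already canonical.
function needs_redirect(string $requestPath, int $id, string $slug): ?string {
    $canonical = canonical_path($id, $slug);
    return $requestPath === $canonical ? null : $canonical;
}

// Usage in the front controller, before any output:
// if (($target = needs_redirect($_SERVER['REQUEST_URI'], $id, $slug)) !== null) {
//     header('Location: ' . $target, true, 301);
//     exit;
// }
```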

price comparison website - crawler

I have a price comparison website.
Visitors can click the link for an offer, and I get $1 from the shop.
The problem is that crawlers crawl the whole website, so they "click" the links.
How can I prevent them from clicking? JavaScript is a bad solution.
Thank you!
I've been thinking about this the wrong way.
I agree with everything that @yttriuszzerbus says above: add a robots.txt file, add rel="nofollow" to links, and block the user agents you know about.
So if you've got someone who's now trying to click on a link, it's either a live person, or a badly behaved bot that you don't want clicking.
So how about doing something strange to create the links to the shop sites? Normally, you'd never, ever do this, as it makes your site impossible to index. But that's not an issue - all the well-behaved bots won't be indexing those links because they'll be obeying the robots.txt file.
I'm thinking of something like not having an <a href=...> tag at all: instead, generate the link text, underline it with a stylesheet so it looks like a link to a normal user, and attach a JavaScript onclick handler that redirects the user when they click. Bots won't see it as a link, and users won't notice a thing.
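A sketch of that idea (class name, data attribute, and shop URL are all made up for illustration); a styled span with no href, redirected via JavaScript:

```html
<!-- Looks like a link to users, but there is no href for a bot to follow. -->
<span class="shoplink" data-url="https://shop.example.com/offer/123">View offer</span>

<style>
  .shoplink { text-decoration: underline; color: #06c; cursor: pointer; }
</style>

<script>
  // Attach a click handler to every pseudo-link on the page.
  document.querySelectorAll('.shoplink').forEach(function (el) {
    el.addEventListener('click', function () {
      window.location.href = el.dataset.url;
    });
  });
</script>
```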
You could:
Use "rel=nofollow" to instruct crawlers not to follow your links.
Block certain user-agent strings
Use robots.txt to exclude parts of your site.
Unfortunately, none of the above will exclude badly-behaved crawlers. The only solution to actually prevent crawlers is some kind of JavaScript link or a CAPTCHA.
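For example, assuming the outbound offer links all live under a path like /go/ (a made-up prefix for illustration), the robots.txt exclusion would look like:

```
User-agent: *
Disallow: /go/
```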
I have a similar project.
My problem was solved only by blocking certain user-agent strings.
Another problem is that I don't know every "bad" user agent, so when a new crawler enters the site, I add it to the blacklist and retroactively remove its visits from my statistics.
rel="nofollow" and robots.txt did not work at all for me.

How to make google search dynamic pages of my site

I am planning an informational site in PHP with MySQL.
I have read about Google sitemaps and Webmaster Tools.
What I did not understand is whether Google will be able to index the dynamic pages of my site using any of these tools.
For example, I have URLs like www.domain.com/articles.php?articleid=103
This page will always have the same title and meta information, but the content changes according to the article ID. So how will Google learn about the article on the page in order to display it in search results?
Is there some way I can get Google rankings for these pages?
A URL is a URL, Google doesn't give up when it sees a question mark in one (although excessive parameters may get ignored, but you only have one). All you need is a link to a page.
You could alternatively make the URL SEO-friendly with mod_rewrite: www.domain.com/articles/103
RewriteRule ^articles/(\d+)$ articles.php?articleid=$1 [L]
I also suggest you give each individual page relevant meta tags (no more than 80 characters) and don't place the article content inside a table tag, since Google's placement algorithm is strict; random unrelated links will also harm the rank.
You have to link to a page for Google to notice it, and the more links you have, the higher your page will appear in Google's results. A smart thing to do is create a page that links to all of your pages; that way Google will find them and rank them higher than if you linked to each of them only once.
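One concrete way to make sure Google discovers every dynamic page is to generate an XML sitemap from the database. A minimal PHP sketch; the article IDs are assumed to come from a MySQL query such as `SELECT articleid FROM articles`:

```php
<?php
// Build a sitemaps.org <urlset> document from a list of article IDs.
function build_sitemap(array $ids, string $base): string {
    $xml  = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
    $xml .= "<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">\n";
    foreach ($ids as $id) {
        $loc  = htmlspecialchars("$base/articles.php?articleid=$id");
        $xml .= "  <url><loc>$loc</loc></url>\n";
    }
    return $xml . "</urlset>\n";
}

// Write the sitemap, then submit it in Google Webmaster Tools:
// file_put_contents('sitemap.xml',
//     build_sitemap([101, 102, 103], 'http://www.domain.com'));
```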