I have an issue with my website. Looking up my site in search engines and viewing the cached version of the index page shows only the header and footer of the page; everything in between is omitted. The issue appears only on the index page. Other pages on the site are fine.
Here is the cached page
Here's a direct link to the page.
Things to note:
This issue is not limited to Google search; it also happens in Bing and Yahoo.
In the Google cache, it's possible to display the text-only version of the page, which SHOWS the page just fine, including the omitted content, though without styling.
In Google Webmaster Tools, the page preview of the index page does not have this issue; it renders just fine.
The index page uses a jQuery plugin to display the car brands and allows sorting by region/country. I don't know if this is the culprit.
My site is in Arabic. Sorry if you don't understand anything .-.
Google seems to have trouble parsing your document. You have two sets of <html></html> tags in your document, which can confuse Google's parser and cause errors.
I would strongly recommend fixing your HTML errors using the W3C Validator; this should allow Google to parse your document without errors:
http://validator.w3.org/check?uri=http%3A%2F%2Fwww.mrkabat.com%2F&charset=%28detect+automatically%29&doctype=Inline&group=0
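If the duplicate root element comes from a server-side include, make sure only one file emits the document shell. A minimal sketch, assuming a PHP page with a hypothetical content.php include:

<?php // index.php: emit the document shell exactly once ?>
<!DOCTYPE html>
<html dir="rtl" lang="ar">
<head>
  <meta charset="utf-8">
  <title>Example</title>
</head>
<body>
<?php
// Hypothetical include: body markup only, never its own
// <html>, <head>, or <body> elements.
require 'content.php';
?>
</body>
</html>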
I think I have a critical meta tag issue on my website. When I search for my website in Google, the title shows the correct information, but in place of the description, the result shows content that is not my meta description but parts of my home page's content. My website is built with OpenCart, a PHP-based open-source platform.
I searched a lot to resolve it but found no solution. I have no previous experience in SEO, so I can't tell where the error is without more information. If anyone can help me out here, that would be really great. I attached a screenshot for better understanding.
First, ensure that you are following the structure shown in https://support.google.com/webmasters/answer/79812?hl=en
This means that your tag should look like this:
<meta name="description" content="A description of the page" />
Something that could be causing this problem is that Google does not update descriptions immediately, so you may have to wait until it crawls your website again for the description to update (you can use Google Webmaster Tools to encourage this).
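If your template emits the tag itself, a minimal PHP sketch looks like this (variable name and text are hypothetical; OpenCart normally fills this from the meta description fields in its admin panel):

<?php
// Hypothetical per-page description text.
$metaDescription = 'Hand-picked deals on electronics and accessories.';
?>
<meta name="description" content="<?php echo htmlspecialchars($metaDescription, ENT_QUOTES); ?>" />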
Google will sometimes use the meta description of a page in search
results snippets, if we think it gives users a more accurate
description than would be possible purely from the on-page content
https://support.google.com/webmasters/answer/35624?rd=1
In other words, Google only sometimes uses your meta description; other times, it uses page content.
I'm working on an AJAX-crawlable website (per Google's AJAX crawling scheme), but some things are unclear to me. On the back end of the application I check for the _escaped_fragment_ parameter and return an HTML snapshot as expected.
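For illustration, the back end does roughly this (simplified sketch; file paths hypothetical):

<?php
// Serve a pre-rendered HTML snapshot when the crawler rewrites
// animals#!dogs into animals?_escaped_fragment_=dogs.
if (isset($_GET['_escaped_fragment_'])) {
    $fragment = basename($_GET['_escaped_fragment_']); // e.g. "dogs"
    readfile(__DIR__ . '/snapshots/' . $fragment . '.html');
    exit;
}
// Normal visitors get the JavaScript-driven page.
readfile(__DIR__ . '/animals.html');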
When calling the URLs manually as shown below, there are no problems:
(1) animals#!dogs
(2) animals?_escaped_fragment_=dogs
When viewing the page source with option (1), the content is loaded dynamically; with option (2), the page source contains the HTML snapshot. So far so good.
The problem is that when using Fetch as Google as suggested, the spider only seems to crawl option (1), as if the hashbang (#!) never gets converted by the AJAX crawler. Even when hard-coding die("AJAX test"); inside the function dealing with _escaped_fragment_, this is not reflected in the result generated by the spider.
So far I have done everything according to Google's guidelines, and the only lead I have on this problem is a thread on the Google forums: Fetch as Google ignoring my hashtag. If that is the case, it would mean there is no accurate way of testing what Googlebot would see until the changes have gone live and the page is re-indexed?
Other pages, such as How to Test If Googlebot Can Access Your AJAX Content and the Google page itself, suggest that this can be tested using Fetch as Google.
The information seems to contradict itself, and I have no idea whether my AJAX content will be crawled correctly by Googlebot. Hopefully someone with more knowledge of the subject can help me out.
Hashbangs have been abandoned; Google has since deprecated the AJAX crawling scheme. pushState (the HTML5 History API) is the friendlier alternative.
Hi, I've got a site I'm working on. It's just an HTML site with WordPress integrated to allow a few dynamic features for the client. For some reason, IE with the Google Toolbar enabled truncates the information and strips the CSS; it's as if nothing is read until halfway down the page. When you refresh a few times, it changes what it does and doesn't display. If I remove the WordPress code fragment at the top (see below), it works, but of course all the dynamic content goes away. The weird thing is that a WP-generated page works just fine.
<?php
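// Bootstrap WordPress so its template tags are available on this standalone page.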
require('./cms/wp-blog-header.php');
?>
I'm a little lost on this. I'm not doing anything I haven't done several times before with fine results and I can't find anything about it online. Any help would be great, thanks.
I've tried a bunch of techniques to crawl this URL (see below), and for some reason the title comes back incorrect. If I inspect the page with Firebug, I can see the correct title tag; however, if I view the page source, it's different.
Using several PHP techniques, I get the same result. Digg, however, is able to crawl the page and parse the correct title.
Here's the link: http://lifehacker.com/#!5772420/how-to-make-ios-more-like-android
The correct title is "How to Make Your iPhone (or Other iOS Device) More Like Android"
The parsed title is "Lifehacker, tips and downloads for getting things done"
Is this normal? How are they doing this? Is there a way to get the correct title?
That's because when you request it using PHP (without any JS support), you're getting the main page of Lifehacker, which is lifehacker.com.
Lifehacker recently switched their CMS so that all requests go to an initial page, and a JS script on that page reads everything after the hashbang to figure out which page needs to be served. You need to modify your program to take this into account.
EDIT
Have a gander at these links
http://code.google.com/web/ajaxcrawling/docs/getting-started.html
http://www.tbray.org/ongoing/When/201x/2011/02/09/Hash-Blecch
Found the answer:
http://lifehacker.com/#!5772420/how-to-make-ios-more-like-android
becomes:
http://lifehacker.com/?_escaped_fragment_=5772420/how-to-make-ios-more-like-android
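In PHP, that mapping can be applied before fetching the page; a minimal sketch with no error handling:

<?php
// Rewrite the hashbang URL into its crawlable _escaped_fragment_ form,
// then fetch the server-rendered snapshot.
$url       = 'http://lifehacker.com/#!5772420/how-to-make-ios-more-like-android';
$crawlable = str_replace('#!', '?_escaped_fragment_=', $url);
$html      = file_get_contents($crawlable);

// The snapshot should contain the real article title.
if (preg_match('/<title>(.*?)<\/title>/is', $html, $m)) {
    echo trim($m[1]);
}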
I have a classifieds website.
It has an index.html, which consists of a form. This form is what users use to search for classifieds. The results of the search are displayed in an iframe in index.html, so the page won't reload. However, the action of the form is a PHP page, which does the work of fetching the classifieds.
Very simple.
My problem is that Google hasn't indexed any of the search results yet.
Must the links be on index.html itself for Google to index the search results (since they are currently displayed in an iframe)?
Or is it because the content is dynamic?
I have a working sitemap containing all the URLs to the classifieds, but they are still not indexed.
I also have this robots.txt:
Disallow: /bincgi/
The PHP code is inside the /bincgi/ folder; could this be the reason it isn't being indexed?
I have used URL rewriting to rewrite the classified URLs to
/annons/classified_title_here
The sitemap is built using these rewritten URLs.
Any ideas why this isn't working?
Thanks
If you need more input let me know.
If the content is entirely dynamic and there is no other way to get to that content except by submitting the form, then Google is likely not indexing the results because of that. Like I mentioned in a comment elsewhere, Google did some experimental form submission on large sites in 2008, but I really have no idea if they expanded on that.
However, if you have a valid and accessible Google Sitemap, Google should index your classifieds fine. I suggest using Google Webmaster Tools to find out how Google treats your site and to diagnose any potential crawling problems.
eBay is probably a bad example, as it's not impossible that Google uses custom rules for such a popular site.
It is worth noting, though, that eBay has text links to categories and subcategories of auction types, so it is possible to find auction items without actually filling in a form.
Personally, I'd get rid of the iframe; it's not unreasonable for a form submission to load a new page.
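As a sketch (field name and path hypothetical), a plain GET form gives every search a crawlable URL instead of hiding results in an iframe:

<form action="/bincgi/search.php" method="get">
  <input type="text" name="q">
  <input type="submit" value="Search">
</form>

Note that with Disallow: /bincgi/ in robots.txt, crawlers are blocked from fetching URLs like /bincgi/search.php?q=volvo, so you would also need to move the script or relax that rule.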
That question is not answerable with the information given; too many details are open. It would help if you posted your site domain and the URLs that you want to get indexed.
Depending on how you use GWT, it can produce unindexable content.
Switch all parameters to GET,
make HTML links to those search queries on pages already known to Googlebot,
and they'll be indexed (see the sketch below).
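A minimal PHP sketch of the second step (query list and path hypothetical): emit ordinary anchor tags to the GET search URLs on a normally linked page.

<?php
// Hypothetical list of popular searches to expose as plain links
// that Googlebot can follow.
$queries = array('volvo', 'saab', 'bmw');
foreach ($queries as $q) {
    echo '<a href="/search.php?q=' . urlencode($q) . '">'
       . htmlspecialchars($q) . "</a>\n";
}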