How to make Google index dynamic pages of my site - PHP

I am planning an informational site in PHP with MySQL.
I have read about Google sitemaps and Webmaster Tools.
What I did not understand is whether Google will be able to index the dynamic pages of my site using any of these tools.
For example, if I have URLs like www.domain.com/articles.php?articleid=103
Obviously this page will always have the same title and the same meta information, but the content will change according to articleid. So how will Google know about the article on the page in order to display it in search results?
Is there some way I can get Google rankings for these pages?

A URL is a URL; Google doesn't give up when it sees a question mark in one (excessive parameters may get ignored, but you only have one). All you need is a link to the page.

You could alternatively make the URL SEO-friendly with mod_rewrite: www.domain.com/articles/103
RewriteRule ^articles/([0-9]+)$ articles.php?articleid=$1 [L]
I also suggest you give each individual page relevant meta tags (no more than 80 characters) and don't place the article content within a table tag, as Google's placement algorithm is strict; random unrelated links will also harm your rank.
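The per-page meta tags the answer recommends can be generated from the article row itself. A minimal sketch, assuming a hypothetical article record with `title` and `summary` fields (the column names are made up, not from the original question):

```php
<?php
// Sketch: emit a unique <title> and meta description per article,
// so dynamic pages don't all share the same head tags.
// $article stands in for a row fetched by articleid from the database.
function render_head_tags(array $article): string
{
    $title = htmlspecialchars($article['title'], ENT_QUOTES);
    // Keep the description short, as the answer suggests (~80 chars).
    $description = htmlspecialchars(substr($article['summary'], 0, 80), ENT_QUOTES);
    return "<title>{$title}</title>\n"
         . "<meta name=\"description\" content=\"{$description}\">";
}

$article = ['title' => 'Begonia care', 'summary' => 'How to grow begonias indoors.'];
echo render_head_tags($article);
```

In articles.php you would fetch the row for the requested articleid and print these tags inside `<head>` before the shared template markup.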

You have to link to the page for Google to notice it, and the more links you have, the higher up in Google's result list your page will get. A smart thing to do is to create a page that links to all of your pages. This way Google will find them and give them a higher ranking than if you only link to them once.

Related

How to index custom URLs based on terms searched on Google

Sometimes I see on Google links with my search terms as a parameter. For example, if I search for "StrangeWord", I can see in the results:
example.com/p=StrangeWord
I'm pretty sure it is generated automatically, how to do it? I'm using PHP with Nginx.
It isn't generated automatically. If that page is in the index, it's because there was a crawlable link to that page - whether intentionally done by the webmaster or not - and Google happened to crawl that link.
Links can get generated by users sharing such a page, bookmarking it, or even linking to it from their own sites or social profiles.

How to handle Google indexing 'pages' that do not exist

I build dynamic websites whose structure is stored hierarchically in the database (my own CMS). I am using the adjacency list model to manage the database tables (PHP and MySQL through PDO).
I detected that Google is indexing pages that it should not.
An example of a tree structure used for navigation:
home
about us
products
productgroup 1
productgroup 2
contact
support
sales
Imagine this structure in a pulldown menu with links to the pages. When I select products > productgroup 1, I get a URL like www.domain.com/products/productgroup-1, which pulls the data from the database (based on the last URI element, productgroup-1, a slug version of the title) and shows it in my template. I do not query all elements, only the last (I should, I know).
So far so good. Google is indexing this page as expected:
http://www.domain.com/products/productgroup-1
But... when I use Google Webmaster Tools, I see a lot of pages indexed with 404s, like:
http://www.domain.com/products
http://www.domain.com/contact
And so forth.
These pages are empty and have no link in the navigation structure.
I have designed my structure so that these pages return a 404 error. Webmaster Tools confirms this but keeps indexing these pages. I know I can use robots.txt to disallow Google's search bot and keep it from indexing URLs. Is there another way to do this? Should I return a 403 instead of a 404?
I am in the dark here.
You should do a few things:
Use a 301 permanent redirect to point these empty pages to a relevant page:
Even if Google does not crawl http://www.domain.com/products, some people may still access this link by removing the last segment of the URL in the browser. You probably don't want to show them a 404 but some relevant information.
For example, you can redirect http://www.domain.com/products AND http://www.domain.com/products/ to http://www.domain.com/products/productgroup-1
Learn more about 301 redirection from Moz
It is possible to use mod_rewrite to do 301 redirects instead of doing it at the code level.
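A minimal .htaccess sketch of the mod_rewrite approach, using the example URLs from above (the chosen redirect target is just the example page, not a requirement):

```apache
RewriteEngine On
# Permanently redirect the empty parent page (with or without a
# trailing slash) to a relevant child page.
RewriteRule ^products/?$ /products/productgroup-1 [R=301,L]
```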
Submit a sitemap to google webmaster tools.
This is a definitive list of URLs in your site.
Having a sitemap will not remove the 404 URLs already indexed by Google, but it will inform Google of all the "official" URLs on your site and the intended crawl frequency.
Read more from Google webmaster tools here.
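A minimal sitemap for this example would list only the real pages, in the standard sitemaps.org format (the changefreq value here is just a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.domain.com/products/productgroup-1</loc>
    <changefreq>weekly</changefreq>
  </url>
</urlset>
```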
Check your HTML code for references to "/products" or "/contact". Googlebot will not be crawling these URLs otherwise.
A 301 redirect is the best option for the pages you don't want indexed, and you can also disallow those pages in robots.txt.

URL Rewrite: multiple addresses per article

I have a Joomla! website with rewrite rules activated. My article URL is mysite.com/category/ID-alias.html. The only part of this URL that matters is the ID, because I can access the article with any text as "category" and any text as "alias".
Let's show a concrete example:
My article URL: mysite.com/flowers/15-begonia.html
I can access the same by changing category name and alias directly from url:
mysite.com/tralala/15-anything.html //Shows the same article as above.
Is this bad for SEO? If one of my visitors wants to destroy my website's SEO, can he open my articles with different addresses (like above) so that Google will say the articles are duplicated? Does Google know when a visitor goes to a webpage to which no link exists anywhere?
Hope my question is clear.
Thanks.
Google does a good job of deciding which is the "right" version of a page; it is worth watching this video to see how they handle this situation:
http://www.youtube.com/watch?v=mQZY7EmjbMA
Since these wrong URLs should not be linked to from anywhere, it is unlikely they will be indexed by mistake.
However, should they index the wrong version of a page, setting a sitemap with the right one will usually fix it.
A visitor could not harm your SEO with this knowledge. The worst they could do would be to provide good links to a non-indexed page, which would cause the wrong URL to be indexed. However, it would then be very easy for you to 301 redirect that page and turn their attempts at harm into an SEO benefit.
I personally think Joomla should look into adding the canonical tag, but if you want it now, you must use an extension like this:
http://extensions.joomla.org/extensions/site-management/seo-a-metadata/url-canonicalization-/25795
(NB I have never used this extension so cannot guarantee its quality - the reviews are good, though)
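For reference, a canonical tag in the article template is a single line in the page head, pointing every URL variant at the one "right" address (using the example URL from the question):

```html
<head>
  <link rel="canonical" href="http://mysite.com/flowers/15-begonia.html">
</head>
```

With this in place, mysite.com/tralala/15-anything.html would still render, but it would tell Google that the flowers/15-begonia.html URL is the one to index.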

Any idea how websites capture Google search keywords in their title/URL dynamically?

To be more specific, let me give an example: if I search for the keyword "rankog" on Google, I get the website rankog.com in the search results, but I also find results like (a) www.markosweb.com/www/rankog.com/ and (b) www.tracedomain.com/rankog.com. I know these are SEO tools which give domain information.
My question in one line: how do such websites (a and b) capture the search terms in their title/URL?
If I want to do the same thing (capture a Google search term in the title/URL of my page), how should I do it? Say I have 1000 keywords and I want to capture them in my page URLs, as done in (a) and (b); making 1000 pages is not the solution, I guess. How do these websites capture thousands of keywords in their URLs and titles?
This is done by parsing the referrer URL. Most browsers will send the prior URL in their request headers. You can parse this and figure out what the search terms were.
$_SERVER['HTTP_REFERER'];
http://php.net/manual/en/reserved.variables.server.php
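A sketch of that referrer parsing in PHP, assuming the referrer still carries the query in a `q` parameter (note that Google has long since moved searches to HTTPS and generally no longer passes the query in the referrer, so this works mostly for older or other search engines):

```php
<?php
// Sketch: extract the search terms from a referrer URL's "q" parameter.
// Returns null when there is no query string or no "q" parameter.
function search_terms_from_referer(string $referer): ?string
{
    $query = parse_url($referer, PHP_URL_QUERY);
    if ($query === null || $query === false) {
        return null;
    }
    parse_str($query, $params);
    return $params['q'] ?? null;
}

// In a live request you would pass $_SERVER['HTTP_REFERER'] ?? ''.
echo search_terms_from_referer('http://www.google.com/search?q=rankog');
```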
Now, getting your page indexed by Google is a whole other story. You can sniff for their user agent and dynamically create a bunch of fake pages, but if you do that, everyone will hate you and won't spend much time on your sites anyway.
If you want your site to show up in Google listings, the best way to do that is to have great content that others will link to.

How to make search engines index search results on my website?

I have a classifieds website.
It has an index.html, which consists of a form. This form is the one users use to search for classifieds. The results of the search are displayed in an iframe in index.html, so the page won't reload. However, the action of the form is a PHP page, which does the work of fetching the classifieds, etc.
Very simple.
My problem is that Google hasn't indexed any of the search results yet.
Must the links be on the same page as index.html for Google to index the search results? (They are currently displayed in an iframe.)
Or is it because the content is dynamic?
I have a sitemap that works, with all the URLs to the classifieds in it, but they are still not indexed.
I also have this robots.txt:
Disallow: /bincgi/
The PHP code is inside the /bincgi/ folder; could this be the reason why it isn't being indexed?
I have used URL rewriting to rewrite the classifieds' URLs to
/annons/classified_title_here
and that is how the sitemap is made up, using the rewritten URLs.
Any ideas why this isn't working?
Thanks
If you need more input let me know.
If the content is entirely dynamic and there is no other way to get to that content except by submitting the form, then Google is likely not indexing the results because of that. Like I mentioned in a comment elsewhere, Google did some experimental form submission on large sites in 2008, but I really have no idea if they expanded on that.
However, if you have a valid and accessible Google Sitemap, Google should index your classifieds fine. I suggest to use the Google Webmaster Tools to find out how Google treats your site and to diagnose any potential problems with crawling.
eBay is probably a bad example, as it's not impossible that Google uses custom rules for such a popular site.
Although it is worth considering that eBay has text links to categories and subcategories of auction types, so it is possible to find auction items without actually filling in a form.
Personally, I'd get rid of the iframe, it's not unreasonable when submitting a form to load a new page.
That question is not answerable with the information given; there are too many open details. It would help if you posted your site domain and the URLs that you want indexed.
Depending on how you use GWT, it can produce unindexable content.
Switch every parameter to GET.
Make HTML links to those search queries on webpages already known by Googlebot,
and they'll be indexed.
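The last suggestion (plain GET links that Googlebot can crawl) can be sketched in PHP. The category names here are made up, and the /annons/ prefix is taken from the rewritten URLs mentioned in the question:

```php
<?php
// Sketch: expose search results as plain HTML GET links so Googlebot
// can reach them without submitting the form.
function category_links(array $categories): string
{
    $html = '';
    foreach ($categories as $slug) {
        $href = '/annons/' . rawurlencode($slug);
        $html .= '<a href="' . $href . '">' . htmlspecialchars($slug) . "</a>\n";
    }
    return $html;
}

// A static "browse all" page built this way gives the crawler an
// entry point into every classified listing.
echo category_links(['cars', 'furniture', 'summer houses']);
```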
