I update my website content daily, around 15 to 20 new pages.
From my Webmaster Tools account, I "Fetch as Google" each page, which is a long process.
Is there a way to automate it with PHP?
Can PHP auto-submit my new pages for me (the new links are in a MySQL database) to "Fetch as Google"?
Please help.
Thank you.
It is wrong to do that "by hand".
I will cite another answer:
I would say it is not the preferred way to alert Google when you have a new page, and it is pretty limited. What is better, and frankly more effective, is to do things like:
add the page to your XML sitemap (make sure the sitemap is submitted to Google)
add the page to your RSS feeds (make sure your RSS is submitted to Google)
add a link to the page on your home page or other "important" page on your site
tweet about your new page
post a status update on Facebook about your new page
share your new page on Google Plus
feature your new page in your email newsletter
Obviously, depending on the page you may not be able to do all of
these, but normally, Google will pick up new pages in your sitemap. I
find that G hits my sitemaps almost daily (your mileage may vary).
I only use fetch if I am trying to diagnose a problem on a specific
page and even then, I may just fetch but not submit. I have only
submitted when there was some major issue with a page that I could not
wait for Google to update as a part of its regular crawl of my site.
As an example, we had a release go out with a new section, and that section was blocked by our robots.txt. I went ahead and submitted the robots.txt to encourage Google to update it sooner, so that our new section would be "live" to Google, as G does not hit our robots.txt as often. Otherwise, for 99.5% of the other pages on my sites, the options above work well.
The other thing is that you get very few fetches a month, so you are still very limited in what you can do. Your sitemaps can include thousands of pages each, while Google fetch is limited, which is another reason I reserve it for time-sensitive emergencies.
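To connect this back to the original question: since the new links are already stored in MySQL, a practical approach is to regenerate the sitemap from that table whenever content is published. Below is a minimal PHP sketch, assuming a hypothetical pages table with url and updated_at columns (the table and column names are illustrative; adjust them to your schema):

&lt;?php
// Minimal sitemap generator: one &lt;url&gt; entry per row in a hypothetical
// `pages` table. Table and column names are illustrative, not prescribed.
$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'pass');

$xml = new XMLWriter();
$xml->openURI('sitemap.xml');
$xml->startDocument('1.0', 'UTF-8');
$xml->startElement('urlset');
$xml->writeAttribute('xmlns', 'http://www.sitemaps.org/schemas/sitemap/0.9');

foreach ($pdo->query('SELECT url, updated_at FROM pages') as $row) {
    $xml->startElement('url');
    $xml->writeElement('loc', $row['url']);
    $xml->writeElement('lastmod', date('Y-m-d', strtotime($row['updated_at'])));
    $xml->endElement(); // </url>
}

$xml->endElement(); // </urlset>
$xml->endDocument();
$xml->flush();

Once the sitemap is registered in Webmaster Tools, Google re-reads it on its own schedule; you can also ping http://www.google.com/ping?sitemap=http://www.example.com/sitemap.xml after regenerating it to nudge a re-read.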
I have a webpage I'm trying to promote via ad banners. I want to associate a UTM code with those links, so when a visitor lands on my website, I'll be able to track where they came from (mysite.com?utm_campaign=adXYZ).
Normally these ad banners lead to a single webpage with a single point of conversion, where I'm able to capture the utm_campaign ID and gauge how effective my marketing is. However, I'm now leading users to a full website with many pages and many points of conversion. I'm hoping to keep that utm_campaign ID across multiple pages using some crafty JS or PHP.
For example:
user clicks ad banner to mysite.com?utm_campaign=adXYZ
user lands on mysite.com but wants to go to mysite.com/features
user goes from mysite.com/features to mysite.com/pricing
By the time the user reaches mysite.com/pricing, I want ?utm_campaign=adXYZ to still be there in the URL.
I know there are ways via analytics and whatnot to track a session/conversion, but I specifically need to capture the referral UTM code in an HTML form down the road. Can anyone point me in the right direction? Thanks a bunch!
Edit: An important point to note: the site should still be accessible organically via search, bookmarks, links, etc., without the trailing campaign ID in the URL. Only when a user visits the site from an ad banner should the campaign ID be present, on that page and all subsequent pages.
I would set a cookie containing the relevant information the first time the user enters your website.
Otherwise you have to pass the information again with every request (GET/POST). That solution will work even if the user doesn't allow cookies. Murat Cem YALIN wrote how this works in detail. But if you want to use the JS method: be aware that the user must have JS activated!
The third option might be using PHP sessions.
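A minimal sketch of the cookie approach in PHP, included at the top of every page before any output (the cookie name and 30-day lifetime are illustrative choices):

&lt;?php
// Capture the campaign ID on first entry and persist it in a cookie.
// Cookie name and 30-day lifetime are illustrative, not prescribed.
if (isset($_GET['utm_campaign'])) {
    setcookie('utm_campaign', $_GET['utm_campaign'], time() + 30 * 24 * 3600, '/');
    $_COOKIE['utm_campaign'] = $_GET['utm_campaign']; // visible on this request too
}

// Later, wherever a conversion form is rendered, pre-fill a hidden field:
$campaign = isset($_COOKIE['utm_campaign'])
    ? htmlspecialchars($_COOKIE['utm_campaign'], ENT_QUOTES)
    : '';
echo '&lt;input type="hidden" name="utm_campaign" value="' . $campaign . '"&gt;';

This also satisfies the edit in the question: organic visitors never see the parameter in the URL, yet the form can still capture the campaign for ad-driven visits.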
You can do it with both PHP and JS.
In PHP, use Simple HTML DOM (http://simplehtmldom.sourceforge.net/) to access all the links in the HTML output and append ?utm_campaign=adXYZ to each of them just before outputting the rendered HTML.
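For instance, a sketch of that approach, assuming the Simple HTML DOM library is available and $output holds the rendered page HTML (the separator check keeps links that already carry a query string valid):

&lt;?php
// Sketch only: rewrite every link to carry the campaign ID,
// assuming $output holds the rendered page HTML.
require 'simple_html_dom.php';

$campaign = 'adXYZ';
$html = str_get_html($output);

foreach ($html->find('a') as $link) {
    $sep = (strpos($link->href, '?') === false) ? '?' : '&';
    $link->href .= $sep . 'utm_campaign=' . urlencode($campaign);
}

echo $html;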
In JS, you can use jQuery to do the same once the document has loaded, e.g.:
$("a").each(function() {
    var href = $(this).attr("href");
    var sep = href.indexOf("?") === -1 ? "?" : "&";
    $(this).attr("href", href + sep + "utm_campaign=adXYZ");
});
I am about to create an online shopping site for one of my clients. I have to make this site SEO-friendly, and therefore I must understand a few things before I proceed to build a custom CMS-based website.
As I said, I am going to make a custom CMS-based website so that my client will be able to add new content through the CMS, but I don't understand a few things.
For example: I have an index.php page which has many links to different products, and all of these links are created from the database using PHP. A site link looks like:
http://www.def.com/shoes/Men-Shoes
My Questions:
1) I want to know: when Googlebot crawls my site, will it also open my dynamically created links and index them? Will Googlebot also index the content behind my dynamic links?
2) Do I have to create separate pages for all of the products on the site and store them on my server? Or just a single page which serves dynamically according to the user's query for every product?
I read this:
"It functions much like your web browser, by sending a request to a web server for a web page, downloading the entire page, then handing it off to Google's indexer."
Is it right?
My URL above actually looked like this, and I used an .htaccess file to make it pretty:
http://www.def.com/shoes.php?type=Men-Shoes
So is that right, and will Google crawl it and index it?
SEO is a complex science in itself, and Google is always moving the goal posts and modifying its algorithm.
While you don't need to create separate pages for each product, creating friendly URLs using the .htaccess file can make them look better and easier to navigate. Also, creating a sitemap and submitting it to Google via their Webmaster Tools will help them know which pages to index.
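For reference, the kind of friendly URL in the question is usually produced by a short mod_rewrite rule in .htaccess. A sketch, assuming shoes.php sits at the site root (adjust the pattern to your URL scheme):

RewriteEngine On
RewriteRule ^shoes/([A-Za-z0-9-]+)/?$ shoes.php?type=$1 [L,QSA]

Googlebot then sees and indexes the pretty URL (/shoes/Men-Shoes); the rewrite happens internally on the server, so the query-string version never needs to appear in your links.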
Googlebot will follow the links in your site, including dynamically created ones, but it is important not to try to game the system using blackhat methods if long-term success is your aim.
Also, use social media (Twitter, Facebook, Google+) to help promote your brand, and make sure you follow Google's guidelines with regard to SEO and on-page optimisation.
There is a huge amount of information on the internet on this subject, but be careful what advice you follow.
Google and other search engines index dynamic links too. One way to avoid duplicate content is to use the "Crawl" -> "URL Parameters" tool in Google Webmaster Tools; you can read more about how it works here: https://support.google.com/webmasters/answer/6080548?rd=1. Set the "Crawl" field to "No URLs". This way you can hide dynamic links from search, but you need a list of all the dynamic links of your website/CMS so that you don't accidentally hide important content. The "URL Parameters" feature is also available in Bing Webmaster Tools: http://searchenginewatch.com/sew/how-to/2195777/bing-webmaster-tools-an-overview#.
I am new to search engines, and I find Google News very interesting.
I would like to write a simple crawler which:
parses only the article links of three different news sites.
saves the links in a database (MySQL) with the timestamp at which the link was published on the website (not the time at which the link was detected by the crawler).
As you know, news websites generate links on a daily basis, and I would basically like to parse all their links: not just those published today, but also all the links generated before, which are kept in the news website's database.
I don't know which database is used by the news websites I want to crawl, and I also don't have permission to access it.
So how is Google News able to parse all the article links of all news sites, including links that were generated a long time ago? Does Google News have access to all those websites' databases?
How does a crawler know that a NEW link has been added to a website? If, for example, a news site posts a new article and I want my crawler to parse the link immediately, how can the crawler know that? (Google News is also able to do this, so how?) That is, does the crawler know about the new article link immediately, or does Google just crawl the website at a fixed interval (every hour, etc.)?
How does the Google News crawler know when a new website has been launched?
Does the crawler automatically look for new websites, or do Google engineers basically maintain a fixed list of news websites to crawl?
The same question can be asked about the Google Search crawler: the crawler should be aware that a new domain has been launched so it can crawl it, and thereby make sure Google's database reflects the most up-to-date state of the world wide web.
So is there any open worldwide database which keeps all the domains ever launched, which Google basically crawls?
What would be the best tool to implement my news website crawler?
Apache Lucene, Nutch, Solr, ElasticSearch?
Maybe http://phpcrawl.cuab.de/?
I am REALLY curious about the answers to the above four questions.
Please assist.
Thanks in advance.
You have some key questions here which I'll answer, but first you should understand what a crawler is.
What is a crawler?
The crawler's job is to scan the internet by reading a page, collecting all the links it contains, and then reading those pages as well. The main purpose of this is to find new content automatically. A good crawler will start by crawling a few big, familiar websites that update often; this way it can keep those sites indexed while also picking up new content and new sites quickly (because big websites often contain links to other sites).
Regarding your questions:
Does googlenews have access to all those websites databases?
No. If you had access to the database, there would be no need for a crawler.
How does a crawler know that a NEW link has been added to the website?
Google crawls every site once in a while and searches for new links inside the site. Usually a new page or article will be linked from the main page, which is already stored in Google's database.
How does the Google News crawler know when a new website has been launched?
The simple answer is: the crawler finds a link to the new website, checks whether the website is in the system and, if not, adds it.
How do they get the links of the old articles?
Easy: they save those links in a huge database. Google started crawling the internet years ago, so old links probably wouldn't show up if Google started crawling the internet all over again today.
How do I get the time at which the site posted the article?
That depends on the site you're crawling. If each article has a date, you need to parse the page and extract it. An article with a date at the top is easy to handle: find it in the HTML DOM by searching for the date class, e.g. <span class="date">6 June 2014</span>.
If the date does not appear, you won't have a way to know when it was published.
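A sketch of that extraction in PHP, assuming $articleHtml holds the fetched page and the date sits in a <span class="date"> element as in the example above:

&lt;?php
// Sketch: pull the publication date out of a fetched article page,
// assuming it sits in &lt;span class="date"&gt;...&lt;/span&gt; as above.
$dom = new DOMDocument();
@$dom->loadHTML($articleHtml); // suppress warnings from messy real-world HTML

$xpath = new DOMXPath($dom);
$nodes = $xpath->query('//span[contains(@class, "date")]');

if ($nodes->length > 0) {
    $published = strtotime(trim($nodes->item(0)->textContent));
    echo date('Y-m-d', $published); // e.g. 2014-06-06
}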
As a developer, you can make Google's life easier by asking Google to crawl your new website via Google Webmaster Tools.
While crawling the web, Google also counts how many links lead to a page; this affects the page's ranking. Many links to your site indicate that you have valuable content, so you should appear higher in the search results.
Writing a simple crawler is easy. You get a page's content with PHP cURL or file_get_contents, parse it, select and save the data you want, extract all the links on the page, and then recursively crawl the links you found.
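A minimal sketch of that loop, restricted to same-host links and with no politeness features (no robots.txt handling, rate limiting, or error recovery, all of which a real crawler needs):

&lt;?php
// Minimal recursive crawler sketch: fetch, extract links, recurse.
// No robots.txt, rate limiting, or error handling -- illustration only.
function crawl($url, &$visited, $depth = 2) {
    if ($depth === 0 || isset($visited[$url])) {
        return;
    }
    $visited[$url] = true;

    $html = @file_get_contents($url);
    if ($html === false) {
        return;
    }

    // ... parse and save whatever data you want from $html here ...

    $dom = new DOMDocument();
    @$dom->loadHTML($html);
    foreach ($dom->getElementsByTagName('a') as $a) {
        $href = $a->getAttribute('href');
        // Follow only absolute links on the same host (sketch-level check;
        // relative links have no host and are skipped here).
        if (parse_url($href, PHP_URL_HOST) === parse_url($url, PHP_URL_HOST)) {
            crawl($href, $visited, $depth - 1);
        }
    }
}

$visited = [];
crawl('http://example.com/', $visited);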
I've looked all over Google and the internet for how to perform in-page A/B testing.
What I am trying to do is perform A/B testing on a single page, but that page's content varies based on the referring URL, implemented through <?php include.
Say if you come from Google, it displays 'Hey, are you new here?!', or if you come from another page on our website, it displays 'Let's get you started'. The goal is then to see which version yields a longer visit duration.
Does anyone know of how to do this through Google analytics/Optimizely or any other analytics plugin?
Ryan,
I believe this shouldn't be too difficult to do... it just depends on the tools you use :-).
Personally, I can recommend Visual Website Optimizer, which allows you to target a running test at a specific segment based on various conditions. Referring URL is one of them.
However, you can then use only one variation of the page, so if you have more segments that you would like to test, you would need to:
Duplicate the test itself,
Change the copy in the variation,
Set up the segment rules according to your needs,
Follow this procedure with every segment :)
I have done this, but I can't say it had much impact. It was too much work, and I personally prefer segmenting based on customer data (new/existing customer, etc.), where you can see much more impact and it's also "easier" to report on, since the differences are quite noticeable.
Hope this helps!
You should be able to set this up in any modern A/B testing tool on the market. Here’s how to do it in Optimizely:
Create a new experiment and go to Options and Targeting. Select the page that the experiment should run on, and select the referrers it should run for in Visitor Conditions.
Make sure that the messaging isn't displayed for the Original / Control / A in the experiment.
For the Variation / B, use the visual editor to add an element with the messaging, or select an existing element on the page to change its text. You can also write JavaScript code to insert the element (via ‘Edit Code’).
If you want to display different messages for different referrers, click the ‘Edit Code’ ribbon in Optimizely and wrap the JavaScript in if/else clauses for each referrer (with a backup message), like so:
if (document.referrer.match(/^https?:\/\/([^\/]+\.)?reddit\.com(\/|$)/i)) {
    $('#welcome-message').text('Hi redditor!');
} else if (document.referrer.match(/^https?:\/\/([^\/]+\.)?huffingtonpost\.com(\/|$)/i)) {
    $('#welcome-message').text('Hi Huffington Post reader!');
} else {
    $('#welcome-message').text('Hi! I’m a backup message, just in case!');
}
Select Options and Analytics Integration. Enable Google Analytics.
Start the experiment. Within a few minutes you should see the first results in Optimizely. In a few hours, results will be available in Google Analytics, where you can drill down to see how things like visit duration, pageviews per visit, and bounce rate changed, based on custom segments (that's how the variations are displayed in Google Analytics).
I'd like to pause the music when switching from the home page to another page, and resume it where it left off when returning. The music shouldn't restart from the beginning when I get back to the home page.
We are using flashplayer found from this site:
http://flashnifties.com/products/nifty-audio-player/documentation/
However, we have not found any script which fulfills our need.
Please help if anyone has a solution to this problem.
The critical part of this problem is that you are reloading the page, which completely resets the Flash player within it. You are left with two options:
Implement your site as a single-page application and use AJAX to refresh the content. This means that the Flash player will not be reloaded when the user requests more content (this is what we do on our site).
-or-
Use a frameset, with the player loaded in one frame and the rest of the website in the other. This is a bad choice of architecture... frames are evil.
It looks like you can pause with this:
audioPlayer.pauseAudio()
I would add this JavaScript to the button or object that navigates to a new URL (page). It listens for the click and pauses the player before moving on:
document.getElementById('newPageButton').onclick = function() {
    audioPlayer.pauseAudio(); // pause the Flash player
    // cache the song's time position here, then run the go-to-next-page code
};
You'll probably need to write your own ActionScript to save the position where the song was paused; looking at the documentation, there isn't a way to see exactly where the song was paused.
After you figure that out, store the position in some sort of cache (a cookie or localStorage, for example), and load it when you come back to the page. Though, looking at the documentation again, it looks like you'll also need to write some ActionScript to get resuming at a specific time working.
This might not be the best flash player for what you wish to achieve.