I am getting the source code for a particular web page.
But I am not getting some dynamic content.
Is there any way to wait until the page loads and then get the source code? I need a solution in PHP.
I am getting the source code for a particular web page. But I am not getting some dynamic content.
Dynamic content is usually loaded only after the page makes an AJAX call. If you want to get that data using cURL, you should inspect the network call the webpage makes and replicate that call in cURL.
Is there any way to wait until the page loads and then get the source code?
Even if you get the full source code of the page via cURL, you won't get the dynamically loaded content.
Alternatively, you can use tools like Selenium to get that data.
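A minimal cURL sketch of the first approach, replicating an XHR request found in the browser's Network tab. The placeholder URL in the usage comment and the header set are assumptions, not details from the question:

```php
<?php
// Sketch: replicate an AJAX (XHR) request with cURL instead of loading the
// whole page. Copy the real request URL, method, and headers from the
// Network tab of the browser dev tools.
function fetch_xhr(string $url): string|false
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,         // return the body instead of printing it
        CURLOPT_FOLLOWLOCATION => true,         // follow redirects
        CURLOPT_HTTPHEADER     => [
            'X-Requested-With: XMLHttpRequest', // many endpoints check for this header
            'Accept: application/json',
        ],
    ]);
    $body = curl_exec($ch);
    curl_close($ch);
    return $body; // false on failure
}

// Usage (placeholder URL -- substitute the real XHR endpoint):
// $json = fetch_xhr('https://example.com/api/data');
```

The response is usually JSON or an HTML fragment, which you then parse as needed.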
There is a site that I want to scrape: https://tse.ir/MarketWatch.html
I know that I have to use:
file_get_contents("https://examplesite.html")
to get the HTML of the site, but how can I find a specific part of the site, for example this part in the text file:
<td title="دالبر" class="txtnamad">دالبر</td>
When I open the text file, I never see this part, and I think it is because the website loads it with a JavaScript file. How can I get all the information on the website, including every part I want?
The content is loaded by an AJAX request via JavaScript. This means you can't get this data by simply grabbing the page contents.
There are two ways of collecting the data you need:
Use a solution based on Selenium WebDriver to load the page in a real browser (which will execute the JS), and collect the data from the rendered DOM.
Research what kind of requests the website sends to get this data. You can use the network activity tab in the browser dev tools. Here is an example for Chrome; for other browsers it is the same or similar. Then you send the same request and parse the response according to your needs.
In your specific case, you could probably use this URL: https://tseest.ir/json/MarketWatch/data_211111.json to access the JSON object with the data you need.
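Assuming that endpoint returns JSON, the decoding step might look like the sketch below. The hard-coded sample payload, with its "symbol" and "price" fields, is invented for illustration; inspect the real response to see its actual structure:

```php
<?php
// Sketch: decode a JSON response from a market-data endpoint. In practice
// the string would come from file_get_contents() or cURL on the URL above;
// here an invented sample stands in for the live response.
function decode_market_json(string $json): ?array
{
    $data = json_decode($json, true); // true => associative arrays
    return is_array($data) ? $data : null; // null on malformed JSON
}

// Invented sample payload:
$sample = '[{"symbol":"ABC","price":1234}]';
$rows = decode_market_json($sample);
echo $rows[0]['symbol']; // prints ABC
```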
You have three options for scraping the data:
There's an export to an Excel file: https://tse.ir/json/MarketWatch/MarketWatch_1.xls?1582392259131. Parse through it; just remember that the number in the query string is a Unix timestamp in milliseconds, and its first 10 digits are the timestamp in seconds.
Also, there are probably refresh function(s) for the market data somewhere in the .js files loaded by the page. Find them and see if you can connect directly to the source (usually a .json).
Download the page at your specific interval and scrape each table row using PHP's DOMXPath::query.
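The third option can be sketched as follows. The markup here is a made-up stand-in for one market-watch row, and the class names are invented; adjust the XPath expression to the real page structure:

```php
<?php
// Sketch: extract table cells with DOMXPath::query. The HTML below is an
// invented stand-in for a downloaded page; in practice it would come from
// file_get_contents() on the target URL.
$html = '<table><tr><td class="symbol">ABC</td><td class="price">1,234</td></tr></table>';

$doc = new DOMDocument();
libxml_use_internal_errors(true);  // silence warnings about real-world HTML
$doc->loadHTML($html);
libxml_clear_errors();

$xpath = new DOMXPath($doc);
$cells = $xpath->query('//td[@class="symbol"]'); // adjust to the real markup
echo $cells->item(0)->textContent; // prints ABC
```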
Please let me know, is it possible to scrape some info loaded by AJAX with PHP? I have only used SIMPLE_HTML_DOM for static pages.
Thanks for the advice.
Scraping the entire site
Scraping dynamic content requires you to actually render the page. A PHP server-side scraper will just do a simple file_get_contents or similar. Most server-based scrapers won't render the entire site and therefore don't load the dynamic content generated by the AJAX calls.
Something like Selenium should do the trick. A quick Google search turns up numerous examples of how to set it up.
Scraping JUST the Ajax calls
Though I wouldn't consider this scraping, you can always examine an AJAX call using your browser's dev tools. In Chrome, while on the site, hit F12 to open the dev tools console.
Hit the Network tab and then Chrome's refresh button; this will show every request made between your browser and the site. You can then filter for specific requests.
For example, if you are interested in AJAX calls, you can select XHR.
You can then click on any of the listed items in the tabled section to get more information.
Using file_get_contents on an AJAX call
Depending on how robust the APIs are on these ajax calls you could do something like the following.
<?php
// Hypothetical AJAX endpoint; replace with the real request URL.
$url = "http://www.example.com/test.php?ajax=call";
$content = file_get_contents($url);
if ($content === false) {
    die("Request failed");
}
?>
If the response is JSON, then add:
$data = json_decode($content);
However, you are going to have to do this for each AJAX request on the site. Beyond that, you are going to have to use a solution similar to the ones presented [here].
Finally, you can also use PhantomJS to render an entire site.
Summary
If all you want is the data returned by specific AJAX calls, you might be able to get it using file_get_contents. However, if you are trying to scrape an entire site that also uses AJAX to manipulate the document, then you will NOT be able to use SIMPLE_HTML_DOM alone.
Finally, I worked around my problem. I just took the POST URL with all the parameters from the AJAX call and made the same request using the SIMPLE_HTML_DOM class.
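The same workaround can be sketched with plain PHP streams: send the POST request that the AJAX call makes, then parse whatever comes back. The URL and parameters in the usage comment are placeholders; copy the real ones from the dev-tools Network tab:

```php
<?php
// Sketch: replay an AJAX POST request with an HTTP stream context.
function post_and_fetch(string $url, array $params): string|false
{
    $context = stream_context_create([
        'http' => [
            'method'  => 'POST',
            'header'  => 'Content-Type: application/x-www-form-urlencoded',
            'content' => http_build_query($params), // form-encode the parameters
        ],
    ]);
    return file_get_contents($url, false, $context); // false on failure
}

// Usage (placeholder URL and parameters):
// $html = post_and_fetch('https://example.com/ajax/list', ['page' => 1]);
// The returned fragment can then be fed to SIMPLE_HTML_DOM or DOMDocument.
```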
I'm looking to get information from an external website by taking it from a div in their code. But the file_get_contents() method doesn't work, because the information isn't in the source code for the page; it only shows up after the page loads (it's available if you use inspect element in the web browser).
Is there a way to do this, or am I just out of luck?
Basically, a page generates some dynamic content, and I want to get that dynamic content, not just the static HTML. I am not able to do this with cURL. Help, please.
You can't with just cURL.
cURL will grab the specific raw (static) files from the site, but to get JavaScript-generated content, you would have to put that content into a browser-like environment that supports JavaScript and all the other host objects the JavaScript uses, so the script can run.
Then once the script runs, you would have to access the DOM to grab whatever content you wanted from it.
This is why most search engines don't index javascript-generated content. It's not easy.
If this is one specific site you're trying to gather info on, you may want to look into exactly how the site gets the data itself and see if you can get the data directly from that source. For example, is the data embedded in JS in the page (in which case you can just parse that JS out), or is it obtained from an AJAX call (in which case you can maybe just make that AJAX call directly), or some other method?
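For the "data embedded in JS" case, one approach is to pull a JSON blob out of an inline script with a regular expression. The variable name "initialData" and the markup below are invented for illustration:

```php
<?php
// Sketch: extract a JSON object assigned to a JS variable in the page source.
// Invented stand-in for the downloaded page:
$page = '<script>var initialData = {"items":[{"id":7,"name":"widget"}]};</script>';

// Lazily match from the assignment up to the closing "};".
if (preg_match('/var initialData = (\{.*?\});/s', $page, $m)) {
    $data = json_decode($m[1], true);
    echo $data['items'][0]['name']; // prints widget
}
```

This is fragile (it breaks if the variable name or formatting changes), so treat it as a last resort next to calling the AJAX endpoint directly.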
You could try Selenium at http://seleniumhq.org, which supports JS.
How is it possible to grab the web page source from an AJAX-type web page?
cURL doesn't seem to be able to get AJAX-generated source.
Sorry if this is a duplicate, but looking through the questions I didn't find an answer.
If the page you want to grab uses AJAX to compose different parts of it, then the full content does not exist until all the loading is done.
You can't do this with cURL, as cURL acts as a client requesting only the URL you give it; it has no JavaScript engine to interpret the script and load the other parts of the page.
If the content you are looking for is in one of the parts loaded through AJAX, you should use the Chrome inspector's Network tab to find the exact URL of the loaded resource, then load that URL using cURL.