My scraping code works for just about every site I've come across while testing... except for nytimes.com articles. I use AJAX with the following PHP code (I've left out some details to focus on my specific problem):
$link = "http://www.nytimes.com/2014/02/07/us/huge-leak-of-coal-ash-slows-at-north-carolina-power-plant.html?hp";
$article = new DOMDocument;
$article->loadHTMLFile($link);
//generate image array
$images = $article->getElementsByTagName("img");
foreach ($images as $image) {
    $source = $image->getAttribute("src");
    echo '<img src="' . $source . '" alt="alt"><br><br>';
}
My problem is that the main images on nytimes.com pages don't even seem to get picked up by getElementsByTagName. Pinterest manages to scrape the main images from this site, for example: http://www.nytimes.com/2014/02/07/us/huge-leak-of-coal-ash-slows-at-north-carolina-power-plant.html?hp whereas I cannot. Any suggestions?
OK, so this is what I have tried so far, as I found your question interesting.
When I do this in the browser console using jQuery, I do get results for the images. My query was:
var a = new Array();
$('img[src]').each(function(){ a.push($(this).attr('src')); });
console.log(a);
Note that console.log(arrayname) works in the Chrome browser console.
So ideally your code should work. Please consider adding a check on the XPath result like I've done below (note that DOMXPath::query() returns false on error, not null).
Below is the code where I try loading the URL using a different approach (perhaps a better one, too) to get to the root cause of why you only get the single NYT logo image.
<?php
$html = file_get_contents("http://www.nytimes.com/2014/02/07/us/huge-leak-of-coal-ash-slows-at-north-carolina-power-plant.html?hp");
echo $html; // dump the raw HTML so we can see what the server actually returned
$doc = new DOMDocument();
$doc->strictErrorChecking = false;
$doc->recover = true;
@$doc->loadHTML("<html><body>" . $html . "</body></html>");
$xpath = new DOMXpath($doc);
$images = $xpath->query("//img");
if ($images !== false) {
    echo $images->length;
    foreach ($images as $image) {
        $source = $image->getAttribute('src');
        echo '<img src="' . $source . '" alt="alt"><br><br>';
    }
}
?>
You can't get the content via the feed unless you are authenticated.
You can try:
- using the context parameter of the file_get_contents method (a sketch follows below);
- consuming the RSS/ATOM feeds of the article;
- downloading the page as HTML yourself and then loading it with file_get_contents. PS: This works.
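Here is a minimal sketch of the first option (my own, untested; the header values are just examples, and $link is the URL from the question): some servers return different markup, or block the request outright, when PHP's default user agent is used, so you can send browser-like headers through a stream context:
// Build a stream context that sends browser-like request headers
$context = stream_context_create(array(
    'http' => array(
        'header' => "User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64)\r\n" .
                    "Accept: text/html\r\n",
    ),
));
// Pass the context as the third argument to file_get_contents
$html = file_get_contents($link, false, $context);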
An API returns me a chunk of HTML code (only part of the body, not a full HTML document) and I want to change all the images' src attributes to other values.
I get and set the attributes, and if I echo them inside the foreach loop I see the old and new values, but when I try to save with saveHTML() and then dump the full HTML block returned from the API, I don't see the replaced paths.
$page = json_decode($page);
$page = (array) $page->rows;
$page = ($page[0]->_->content);
$dom = new \DOMDocument();
$dom->loadHTML($page);
$tag = $dom->getElementsByTagName('img');
foreach ($tag as $t) {
    echo $t->getAttribute('src') . '<br>'; // showing old src
    $t->setAttribute('src', 'bla');
    echo $t->getAttribute('src') . '<br>'; // showing new src
}
$dom->saveHTML();
var_dump($page); // nothing is changed
My friend, this is not how it works.
You should have your edited HTML in the result of saveHTML(), so:
$editedHtml = $dom->saveHTML();
var_dump($editedHtml);
Now you should see your changed HTML.
The explanation is that $page is a plain string that is completely separate from the $dom object, so changing the DOM never updates it.
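A side note of my own (based on documented DOMDocument behavior, not on the original answer): loadHTML() wraps a fragment in <html><body>...</body></html>, and since PHP 5.3.6 saveHTML() accepts an optional node, so you can serialize just the fragment without that added wrapper:
$body = $dom->getElementsByTagName('body')->item(0);
$editedHtml = '';
foreach ($body->childNodes as $child) {
    // serialize each top-level node of the original fragment
    $editedHtml .= $dom->saveHTML($child);
}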
Cheers!
I want to fetch images from Google using PHP, so I tried to get help from the net and found a script that does what I need, but it shows this fatal error:
Fatal error: Call to a member function find() on a non-object in C:\wamp\www\nq\qimages.php on line 7
Here is my script:
<?php
include "simple_html_dom.php";
$search_query = "car";
$search_query = urlencode( $search_query );
$html = file_get_html( "https://www.google.com/search?q=$search_query&tbm=isch" );
$image_container = $html->find('div#rcnt', 0);
$images = $image_container->find('img');
$image_count = 10; //Enter the amount of images to be shown
$i = 0;
foreach($images as $image){
    if($i == $image_count) break;
    $i++;
    // Do with the image whatever you want here (the image element is '$image'):
    echo $image;
}
?>
I am also using Simple HTML DOM.
Look at my example that works and gets the first image from the Google results:
<?php
$url = "https://www.google.hr/search?q=aaaa&biw=1517&bih=714&source=lnms&tbm=isch&sa=X&ved=0CAYQ_AUoAWoVChMIyKnjyrjQyAIVylwaCh06nAIE&dpr=0.9";
$content = file_get_contents($url);
libxml_use_internal_errors(true);
$dom = new DOMDocument;
@$dom->loadHTML($content);
$images_dom = $dom->getElementsByTagName('img');
foreach ($images_dom as $img) {
    if ($img->hasAttribute('src')) {
        $image_url = $img->getAttribute('src');
        break; // stop at the first <img> that actually has a src
    }
}
// this is the first image on the page
echo $image_url;
This error usually means that $html isn't an object.
It's odd that you say this seems to work. What happens if you output $html? I'd imagine that the URL isn't available and that $html is null.
Edit: It looks like this may be an error in the parser. Someone has submitted a bug report and added a check to their code as a workaround.
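Along those lines, a minimal guard (my sketch, not part of the original script): Simple HTML DOM's file_get_html() returns false when the page cannot be fetched or parsed, so test it before calling find():
$html = file_get_html("https://www.google.com/search?q=$search_query&tbm=isch");
if (!$html) {
    // bail out instead of calling find() on a non-object
    die('Could not fetch or parse the search results page.');
}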
I'm scraping some HTML from a website using PHP Simple HTML DOM, and it includes several images. However, the images are not pointing correctly to the website. Below is an example of one of the images, where you can see it is not pointing to the website. Is it possible to dynamically change the URLs to point to the website, for instance:
http://www.url.com/bilder/flags_long/United States.gif
html example
<img src="/bilder/flags_long/United States.gif" align="absmiddle" title="United States" alt="United States" border="0">
sample code:
include('simple_html_dom.php');
$sum_gosu = file_get_html("http://www.gosugamers.net/counterstrike/news/30995-starladder-is-back-with-the-thirteenth-edition-of-starseries");
$gosu_full = $sum_gosu->find("//div[@class='content light']/div[@class='text clearfix']/div", 0);
How about concatenating the actual URL you fetched the document from and the relative image paths? Just to give an idea (this is not tested, and you should definitely check whether the image src attribute is relative or maybe absolute in some cases; a sketch of such a check follows the example below):
<?php
$url = 'http://www.url.com/';
$html = file_get_html($url);
$images = array();
foreach($html->find('img') as $img) {
    // Option 1: Fill your images array (in case you only need the images)
    $images[] = rtrim($url, '/') . '/' . ltrim($img->src, '/');
    // Option 2: Update $img->src inside your $html document
    $img->src = rtrim($url, '/') . '/' . ltrim($img->src, '/');
}
?>
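Regarding the relative-versus-absolute caveat above, here is a minimal sketch of such a check (untested; the resolve_src helper is my own name, not part of Simple HTML DOM):
// Only prefix src values that are not already absolute or protocol-relative
function resolve_src($base, $src) {
    if (parse_url($src, PHP_URL_SCHEME) !== null || strpos($src, '//') === 0) {
        return $src; // already absolute, e.g. "http://..." or "//cdn.example.com/..."
    }
    return rtrim($base, '/') . '/' . ltrim($src, '/');
}
// Usage inside the loop: $img->src = resolve_src($url, $img->src);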
UPDATE: Based on your sample code, my example could look as follows:
<?php
include('simple_html_dom.php');
$sum_gosu_url = "http://www.gosugamers.net/counterstrike/news/30995-starladder-is-back-with-the-thirteenth-edition-of-starseries";
$sum_gosu = file_get_html($sum_gosu_url);
$gosu_full = $sum_gosu->find("//div[@class='content light']/div[@class='text clearfix']/div", 0);
// The src values start with "/", so prefix them with the site root
// (scheme + host) rather than with the full article URL
$base = parse_url($sum_gosu_url, PHP_URL_SCHEME) . '://' . parse_url($sum_gosu_url, PHP_URL_HOST);
foreach ($gosu_full->find('img') as $img) {
    $img->src = $base . $img->src;
}
?>
After that, the img src attributes inside your $gosu_full document should be fixed and resolvable (downloadable by a client). Hope that helps and that I'm actually understanding your problem :)
$url="http://www.url.com";
$Chtml = file_get_html($url);
$imgurl=Chtml->find("img",0)->src;
echo $url.$imgurl;
I am using the PHP Simple DOM parser to extract all of the image sources on a given page like so:
// Include the library
include('simple_html_dom.php');
// Retrieve the DOM from a given URL
$html = file_get_html('http://google.com/');
// Retrieve all images and print their SRCs
foreach($html->find('img') as $e)
    echo $e->src . '<br>';
Instead of using Google.com, I wish to use a page in WordPress's admin (backend) area. These pages are PHP pages, not HTML (but each page renders standard HTML throughout). How would I use the current page as the $html variable? PHP newbie over here.
This uses the dxtool WebGet library.
Login
require 'WebGet.php';
$w = new WebGet();
// using cache to prevent repetitive download
$w->useCache = true;
$w->cacheLocation = '/tmp';
$w->cacheMaxAge = 3600;
$w->cookieFile = '/tmp/cookie.txt';
// $login_get_data and $login_post_data are associative arrays
$login = $w->requestContent($login_url, $login_get_data, $login_post_data);
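For illustration, the login variables for a default WordPress login form might look like this (hypothetical values; the field names log, pwd and wp-submit come from the standard wp-login.php form, but verify them against your own form):
$login_url = 'http://example.com/wp-login.php';
$login_get_data = array();
$login_post_data = array(
    'log'         => 'your_username',      // WordPress username field
    'pwd'         => 'your_password',      // WordPress password field
    'wp-submit'   => 'Log In',
    'redirect_to' => 'http://example.com/wp-admin/',
);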
Visiting the image-containing page
// $image_page_url is the url of the page where your images exist.
$image_page = $w->requestContent($image_page_url);
Parse images and display
$dom = new DOMDocument();
$dom->loadHTML($image_page);
$imgs = $dom->getElementsByTagName("img");
foreach($imgs as $img){
    echo $img->getAttribute("src");
}
Disclaimer: I am the author of this class
I have the following code that replaces all <img> tags on a page and adds the nCode image resizer to them. The code is as follows:
function ncode_the_content($content) {
    return preg_replace("/<img([^`|>]*)>/im", "<img onload=\"NcodeImageResizer.createOn(this);\"$1>", $content);
}
What I need to do is make it so that the preg_replace skips any image that has the class "noresize".
I have only managed to get it so that if there is the "noresize" class anywhere on the page it stops resizing all images instead of just the one with the correct class.
Any suggestions?
UPDATE:
Am I even remotely in the right ballpark with this?
function ncode_the_content($content) {
    //Load the HTML page
    $html = file_get_contents($content);
    //Parse it. Here we use loadHTML as a static method
    //to parse the HTML and create the DOM object in one go.
    @$dom = DOMDocument::loadHTML($html);
    //Init the XPath object
    $xpath = new DOMXpath($dom);
    //Query the DOM
    $linksnoresize = $xpath->query('//img[@class="noresize"]');
    $links = $xpath->query('//img[not(@class="noresize")]');
    //Display the results as in the previous example
    foreach ($links as $link) {
        echo $link->getAttribute('onload'), 'NcodeImageResizer.createOn(this);';
    }
    foreach ($linksnoresize as $link) {
        echo $link->getAttribute('onload'), '';
    }
}
Here's some untested code:
$dom = DOMDocument::loadHTML($content);
$images = $dom->getElementsByTagName("img");
foreach ($images as $image) {
    if (!strstr($image->getAttribute("class"), "noresize")) {
        $image->setAttribute("onload", "NcodeImageResizer.createOn(this);");
    }
}
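For completeness, here is a minimal sketch (my own completion, also untested) of how that snippet might be wired into the filter, assuming $content is the post HTML that WordPress passes to the_content filters:
function ncode_the_content($content) {
    $dom = new DOMDocument();
    // Suppress warnings from real-world, imperfect post HTML
    @$dom->loadHTML($content);
    foreach ($dom->getElementsByTagName('img') as $image) {
        // Skip images explicitly marked with the "noresize" class
        if (!strstr($image->getAttribute('class'), 'noresize')) {
            $image->setAttribute('onload', 'NcodeImageResizer.createOn(this);');
        }
    }
    // Hand the modified HTML back to WordPress
    return $dom->saveHTML();
}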
But, if it were me, I would eschew any such inline event handler and instead just find the appropriate elements with JavaScript.
I ended up just using pure CSS and adding a <div> around the images I didn't want to be resized. I forced the width and height of that div back to auto and then removed the warning message that was displayed above them. Seems to work fine. Thanks for your help :)