Example: at this domain, http://www.example.com/234234/go.html, there is only one iframe.
How can I get the URL from the iframe's src attribute?
go.html:
<iframe style="width: 99%;height:80%;margin:0 auto;border:1px solid grey;" src="i want this url" scrolling="auto" id="iframe_content"></iframe>
I have this snippet, but it's very badly coded:
function downloadlink($d_id)
{
    $res = @get_url('http://www.example.com/' . $d_id . '/go.html');
    $re = explode('<iframe', $res);
    $re = explode('src="', $re[1]);
    $re = explode('"', $re[1]);
    $url = $re[0];
    return $url;
}
thank you!
Use an HTML parser such as simple_html_dom to parse the HTML:
include('simple_html_dom.php');
$html = file_get_html('http://www.example.com/');
// Find all iframes and print their src attributes
foreach($html->find('iframe') as $element)
    echo $element->src . '<br>';
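If the page really contains just that one iframe, you can also grab it directly instead of looping; a minimal sketch, assuming simple_html_dom.php is available:
include('simple_html_dom.php');
// Fetch the page and read the src of the first (only) iframe
$html = file_get_html('http://www.example.com/234234/go.html');
$url = $html->find('iframe', 0)->src;
echo $url;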
I don't know what scope you have here - is it just that snippet, or are you browsing whole pages?
If you're browsing whole pages, you could use the PHP Simple HTML DOM Parser.
A slightly modified example from their site:
// Create DOM from URL or file
$html = file_get_html('http://www.google.com/');
// Find all iframes
foreach($html->find('iframe') as $element)
echo $element->src . '<br>';
This sample code goes through all iframes on the page, and outputs their src property.
PHP has built-in functions for this as well (like SimpleXML), but I find the DOM Parser very nice and easy to handle (as you can see).
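For reference, the same lookup with the built-in DOM extension; a minimal sketch, not tested against the actual page:
$doc = new DOMDocument;
// Suppress warnings that sloppy real-world HTML tends to trigger
@$doc->loadHTMLFile('http://www.example.com/234234/go.html');
foreach ($doc->getElementsByTagName('iframe') as $iframe) {
    echo $iframe->getAttribute('src') . '<br>';
}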
Related
I'm scraping some HTML from a website using PHP Simple HTML DOM, which includes several images. However, the images are not pointing correctly to the website. For example, below is one of the images where you can see it is not pointing to the website. Is it possible to dynamically change the URLs to point to the website, for instance:
http://www.url.com/bilder/flags_long/United States.gif
HTML example:
<img src="/bilder/flags_long/United States.gif" align="absmiddle" title="United States" alt="United States" border="0">
Sample code:
include('simple_html_dom.php');
$sum_gosu = file_get_html("http://www.gosugamers.net/counterstrike/news/30995-starladder-is-back-with-the-thirteenth-edition-of-starseries");
$gosu_full = $sum_gosu->find("//div[@class='content light']/div[@class='text clearfix']/div", 0);
How about concatenating the URL you fetched the document from with the relative image paths? Just to give an idea (this is not tested, and you should definitely check whether the image src attribute is relative or absolute in some cases):
<?php
$url = 'http://www.url.com/';
$html = file_get_html($url);
$images = array();
foreach($html->find('img') as $img) {
// Option 1: Fill your images array (in case you only need the images)
$images[] = rtrim($url, '/') . '/' . ltrim($img->src, '/');
// Option 2: Update $img->src inside your $html document
$img->src = rtrim($url, '/') . '/' . ltrim($img->src, '/');
}
?>
UPDATE: Based on your sample code, my example could look like this:
<?php
include('simple_html_dom.php');
$sum_gosu_url = "http://www.gosugamers.net/counterstrike/news/30995-starladder-is-back-with-the-thirteenth-edition-of-starseries";
$sum_gosu = file_get_html($sum_gosu_url);
$gosu_full = $sum_gosu->find("//div[@class='content light']/div[@class='text clearfix']/div", 0);
foreach($gosu_full->find('img') as $img) {
$img->src = $sum_gosu_url . $img->src;
}
?>
After that the img src attributes inside your $gosu_full document should be fixed and resolvable (downloadable by a client). Hope that helps and that I'm actually understanding your problem :)
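One caveat: concatenating the full article URL only works if the src values are relative to that exact path. For root-relative paths like /bilder/..., you would build the base from the host instead; a hedged sketch using parse_url on top of the example above:
$parts = parse_url($sum_gosu_url);
$base = $parts['scheme'] . '://' . $parts['host']; // e.g. http://www.gosugamers.net
foreach ($gosu_full->find('img') as $img) {
    // Only prefix root-relative paths; leave absolute and protocol-relative URLs untouched
    if (substr($img->src, 0, 1) === '/' && substr($img->src, 0, 2) !== '//') {
        $img->src = $base . $img->src;
    }
}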
$url="http://www.url.com";
$Chtml = file_get_html($url);
$imgurl=Chtml->find("img",0)->src;
echo $url.$imgurl;
I am a PHP newb, but I am pretty sure this will be hard to accomplish and very resource-consuming for the server. But I want to ask and get the opinion of much smarter users than myself.
Here is what I am trying to do:
I have a list of URL's, an array of URL's actually.
For each URL, I want to count the outgoing links - which DO NOT HAVE REL="nofollow" attribute - on that page.
So, in a way, I'm afraid I'll have to make PHP load each page and match all the links with regular expressions?
Would this work if I had, let's say, 1000 links?
Here is what I am thinking, putting it in code:
$homepage = file_get_contents('http://www.site.com/');
$homepage = htmlentities($homepage);
// Do a preg_match for http:// and count the number of appearances:
$urls = preg_match();
// Do a preg_match for rel="nofollow" and count the nr of appearances:
$nofollow = preg_match();
// Do a preg_match for the number of "domain.com" appearances so we can subtract the website's internal links:
$internal_links = preg_match();
// Subtract and get the final result:
$result = $urls - $nofollow - $internal_links;
Hope you can help, and if the idea is right maybe you can help me with the preg_match functions.
You can use PHP's DOMDocument class to parse the HTML and parse_url to parse the URLs:
$url = 'http://stackoverflow.com/';
$pUrl = parse_url($url);
// Load the HTML into a DOMDocument
$doc = new DOMDocument;
@$doc->loadHTMLFile($url);
// Look for all the 'a' elements
$links = $doc->getElementsByTagName('a');
$numLinks = 0;
foreach ($links as $link) {
// Exclude if not a link or has 'nofollow'
preg_match_all('/\S+/', strtolower($link->getAttribute('rel')), $rel);
if (!$link->hasAttribute('href') || in_array('nofollow', $rel[0])) {
continue;
}
// Exclude if internal link
$href = $link->getAttribute('href');
if (substr($href, 0, 2) === '//') {
// Deal with protocol relative URLs as found on Wikipedia
$href = $pUrl['scheme'] . ':' . $href;
}
$pHref = @parse_url($href);
if (!$pHref || !isset($pHref['host']) ||
strtolower($pHref['host']) === strtolower($pUrl['host'])
) {
continue;
}
// Increment counter otherwise
echo 'URL: ' . $link->getAttribute('href') . "\n";
$numLinks++;
}
echo "Count: $numLinks\n";
You can use SimpleHTMLDOM:
// Create DOM from URL or file
$html = file_get_html('http://www.site.com/');
// Find all links
foreach($html->find('a[href][rel!=nofollow]') as $element) {
echo $element->href . '<br>';
}
As I'm not sure that SimpleHTMLDOM supports a :not selector, and [rel!=nofollow] might only return a tags that have a rel attribute present (and not ones where it is absent), you may have to:
foreach($html->find('a[href][!rel][rel!=nofollow]') as $element)
Note the added [!rel]. Or, do it manually instead of with a CSS attribute selector:
// Find all links
foreach($html->find('a[href]') as $element) {
if (strtolower($element->rel) != 'nofollow') {
echo $element->href . '<br>';
}
}
I have this function to get the title of a website:
function getTitle($Url){
    $str = file_get_contents($Url);
    if(strlen($str)>0){
        preg_match("/\<title\>(.*)\<\/title\>/",$str,$title);
        return $title[1];
    }
}
However, this function makes my page take too much time to respond. Someone told me to get the title from the request headers only, so the whole file isn't read, but I don't know how. Can anyone please tell me which code and function I should use to do this? Thank you very much.
Using regex is not a good idea for HTML; use the DOM parser instead:
$html = new simple_html_dom();
$html->load_file('****'); // put URL or filename here
$title = $html->find('title', 0); // find() returns an array, so take the first match
echo $title->plaintext;
or
// Create DOM from URL or file
$html = file_get_html('*****');
// Find the title elements and print their text
foreach($html->find('title') as $element)
    echo $element->plaintext . '<br>';
Good read
RegEx match open tags except XHTML self-contained tags
Use jQuery instead to get the title of your page:
$(document).ready(function() {
alert($("title").text());
});
Demo : http://jsfiddle.net/WQNT8/1/
Try this, it will surely work:
include_once 'simple_html_dom.php';
$oHtml = str_get_html($url);
$title = array_shift($oHtml->find('title'))->innertext;
$description = array_shift($oHtml->find("meta[name='description']"))->content;
$keywords = array_shift($oHtml->find("meta[name='keywords']"))->content;
echo $title;
echo $description;
echo $keywords;
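Regarding the original performance concern: the page title is not in the HTTP response headers, but you can avoid downloading the whole document by reading the response in chunks and stopping once </title> appears; a hedged sketch (the function name and chunk size are mine, and the regex is only applied to the small buffer):
function getTitleFast($url) {
    $fp = @fopen($url, 'r');
    if (!$fp) {
        return null;
    }
    $buffer = '';
    while (!feof($fp)) {
        $buffer .= fread($fp, 1024);
        // Stop reading as soon as the closing title tag has arrived
        if (stripos($buffer, '</title>') !== false) {
            break;
        }
    }
    fclose($fp);
    if (preg_match('/<title[^>]*>(.*?)<\/title>/is', $buffer, $m)) {
        return trim($m[1]);
    }
    return null;
}
echo getTitleFast('http://www.example.com/');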
Just wondering if someone can help me further with the following. I want to parse the URLs on this website: http://www.directorycritic.com/free-directory-list.html?pg=1&sort=pr
I have the following code:
<?PHP
$url = "http://www.directorycritic.com/free-directory-list.html?pg=1&sort=pr";
$input = @file_get_contents($url) or die("Could not access file: $url");
$regexp = "<a\s[^>]*href=(\"??)([^\" >]*?)\\1[^>]*>(.*)<\/a>";
if(preg_match_all("/$regexp/siU", $input, $matches)) {
// $matches[2] = array of link addresses
// $matches[3] = array of link text - including HTML code
}
?>
This does nothing at present. What I need it to do is scrape all the URLs in the table across all 16 pages and output the URLs into a text file. I would really appreciate some help with how to amend the above to do that.
Use an HTML DOM parser:
include('simple_html_dom.php');
$html = file_get_html('http://www.example.com/');
// Find all links
$links = array();
foreach($html->find('a') as $element)
$links[] = $element->href;
The $links array now contains all URLs of the given page, and you can use these URLs to parse further.
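To cover all 16 pages and write the result to a text file, as the question asks, you can loop over the pg parameter; a hedged sketch, assuming the pagination really does run from 1 to 16:
include('simple_html_dom.php');
$links = array();
for ($pg = 1; $pg <= 16; $pg++) {
    $html = file_get_html("http://www.directorycritic.com/free-directory-list.html?pg=$pg&sort=pr");
    if (!$html) {
        continue; // skip pages that fail to load
    }
    foreach ($html->find('a') as $element) {
        $links[] = $element->href;
    }
}
// One URL per line in the output file
file_put_contents('urls.txt', implode("\n", $links));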
Parsing HTML with regular expressions is not a good idea. Here are some related posts:
Using regular expressions to parse HTML: why not?
RegEx match open tags except XHTML self-contained tags
EDIT:
Some Other HTML Parsing tools as described by Gordon in comments below:
phpQuery
Zend_Dom
QueryPath
FluentDom
You really shouldn’t use regular expressions to parse HTML, as it’s too error-prone.
Better to use an HTML parser like PHP’s DOM library:
$code = file_get_contents($url);
$doc = new DOMDocument();
$doc->loadHTML($code);
$links = array();
foreach ($doc->getElementsByTagName('a') as $element) {
if ($element->hasAttribute('href')) {
$links[] = $element->getAttribute('href');
}
}
Note that this will collect the URI references as they appear in the document, not as absolute URIs. You might want to resolve them beforehand.
It seems that PHP doesn’t provide an appropriate library (or I haven’t found it yet). But see RFC 3986 – Reference Resolution and my answer on Convert a relative URL to an absolute URL with Simple HTML DOM? for further details.
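For the common cases a small helper goes a long way; a hedged sketch covering absolute, protocol-relative, root-relative, and path-relative references, without full RFC 3986 resolution (no ./ or ../ handling, no query-only references):
function resolveHref($base, $href) {
    $p = parse_url($base);
    if (parse_url($href, PHP_URL_SCHEME)) {
        return $href; // already absolute
    }
    if (substr($href, 0, 2) === '//') {
        return $p['scheme'] . ':' . $href; // protocol-relative
    }
    if (substr($href, 0, 1) === '/') {
        return $p['scheme'] . '://' . $p['host'] . $href; // root-relative
    }
    // Path-relative: append to the directory of the base path
    $dir = isset($p['path']) ? rtrim(dirname($p['path']), '/') : '';
    return $p['scheme'] . '://' . $p['host'] . $dir . '/' . $href;
}
echo resolveHref('http://stackoverflow.com/questions/tagged/php', '../users');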
Try this method
function getinboundLinks($domain_name) {
ini_set('user_agent', 'NameOfAgent (http://localhost)');
$url = $domain_name;
$url_without_www=str_replace('http://','',$url);
$url_without_www=str_replace('www.','',$url_without_www);
$url_without_www= str_replace(strstr($url_without_www,'/'),'',$url_without_www);
$url_without_www=trim($url_without_www);
$input = @file_get_contents($url) or die("Could not access file: $url");
$regexp = "<a\s[^>]*href=(\"??)([^\" >]*?)\\1[^>]*>(.*)<\/a>";
$inbound=0;
$outbound=0;
$nonfollow=0;
if(preg_match_all("/$regexp/siU", $input, $matches, PREG_SET_ORDER)) {
foreach($matches as $match) {
# $match[2] = link address
# $match[3] = link text
//echo $match[3].'<br>';
if(!empty($match[2]) && !empty($match[3])) {
if(strstr(strtolower($match[2]),'url:')) { // the string is already lowercased, so one check is enough
$nonfollow +=1;
} else if (strstr(strtolower($match[2]),$url_without_www) || !strstr(strtolower($match[2]),'http://')) {
$inbound += 1;
echo '<br>inbound '. $match[2];
}
else if (!strstr(strtolower($match[2]),$url_without_www) && strstr(strtolower($match[2]),'http://')) {
echo '<br>outbound '. $match[2];
$outbound += 1;
}
}
}
}
$links['inbound']=$inbound;
$links['outbound']=$outbound;
$links['nonfollow']=$nonfollow;
return $links;
}
// ************************Usage********************************
$Domain='http://zachbrowne.com';
$links=getinboundLinks($Domain);
echo '<br>Number of inbound Links '.$links['inbound'];
echo '<br>Number of outbound Links '.$links['outbound'];
echo '<br>Number of Nonfollow Links '.$links['nonfollow'];
I'm using the SimpleHTMLDOM parser to scrape a website and I would like to know if there's any error-handling method. For example, if the link is broken, there is no use in advancing in the code and searching the document.
Thank you.
<?php
$html = file_get_html('http://www.google.com/');
foreach($html->find('a') as $element)
{
if(empty($element->href))
{
continue; //will skip <a> without href
}
echo $element->href . "<br>\n";
}
?>
a loop and continue?
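If "broken link" means the page itself fails to load, you can also check the return value of file_get_html before searching the document; a minimal sketch:
$html = file_get_html('http://www.google.com/');
if ($html === false) {
    die('Could not load the page; no point in searching it.');
}
// ...continue with the loop above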