I am trying to make a site scraper. I made it on my local machine and it works fine there. When I run the same script on my server, it shows a 403 Forbidden error.
I am using the PHP Simple HTML DOM Parser. The error I get on the server is this:
Warning: file_get_contents(http://example.com/viewProperty.html?id=7715888) [function.file-get-contents]: failed to open stream: HTTP request failed! HTTP/1.1 403 Forbidden in /home/scraping/simple_html_dom.php on line 40
The line of code triggering it is:
$url="http://www.example.com/viewProperty.html?id=".$id;
$html=file_get_html($url);
I have checked php.ini on the server and allow_url_fopen is On. A possible workaround would be to use cURL, but I'd like to know where I am going wrong.
I know it's quite an old thread, but I thought I'd share some ideas.
Most likely, if you don't get any content when accessing a webpage, the site doesn't want a script to be able to get the content. So how does it identify that a script, not a human, is trying to access the page? Generally, it is the User-Agent header in the HTTP request sent to the server.
So to make the website think that the script accessing it is also a human, you must change the User-Agent header for the request. Most web servers will likely allow your request if you set the User-Agent header to a value used by a common web browser.
Common user agents used by browsers are listed below:
Chrome: 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'
Firefox: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:75.0) Gecko/20100101 Firefox/75.0'
etc...
$context = stream_context_create(
    array(
        "http" => array(
            "header" => "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"
        )
    )
);
echo file_get_contents("https://www.google.com", false, $context);
This piece of code fakes the user agent and sends the request to https://www.google.com.
References:
stream_context_create
Cheers!
This is not a problem with your script, but with the resource you are requesting. The web server is returning the "forbidden" status code.
It could be that it blocks PHP scripts to prevent scraping, or blocks your IP if you have made too many requests.
You should probably talk to the administrator of the remote server.
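If you want to see exactly what the remote server is sending back, a quick diagnostic (just a sketch, reusing the URL from the question) is to look at the status line PHP records for you:
// suppress the warning and inspect the response headers instead
$body = @file_get_contents("http://www.example.com/viewProperty.html?id=7715888");
// the http:// wrapper fills $http_response_header whenever a response came back
if (isset($http_response_header)) {
    echo $http_response_header[0]; // e.g. "HTTP/1.1 403 Forbidden"
}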
Add this after you include simple_html_dom.php:
ini_set('user_agent', 'My-Application/2.5');
Alternatively, you can change the parser class itself like this, from line 35 onwards:
function curl_get_contents($url)
{
    // fetch the page with cURL instead of file_get_contents()
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the body instead of printing it
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // follow redirects
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
function file_get_html()
{
    // same entry point as before, but the HTML now comes from curl_get_contents()
    $dom = new simple_html_dom;
    $args = func_get_args();
    $dom->load(call_user_func_array('curl_get_contents', $args), true);
    return $dom;
}
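With this override in place, the rest of the scraper can stay unchanged; for example, the call from the question keeps working (assuming $id is set as before):
$url = "http://www.example.com/viewProperty.html?id=" . $id;
$html = file_get_html($url); // now fetched through cURL under the hood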
Have you tried another site?
It seems that the remote server has some kind of blocking. It may be by user agent; if that's the case, you can try using cURL to simulate a web browser's user agent like this:
$url="http://www.example.com/viewProperty.html?id=".$id;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
$html = curl_exec($ch);
curl_close($ch);
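If you still want the Simple HTML DOM object afterwards, you can hand the markup fetched by cURL to str_get_html() instead of file_get_html() (a small sketch, assuming $html above is non-empty):
$dom = str_get_html($html);
// query it as usual, e.g. grab the page title
$title = $dom->find('title', 0);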
Write this in simple_html_dom.php; for me it worked:
function curl_get_contents($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13');
    $data = curl_exec($ch); // execute the request once and keep the body
    curl_close($ch);
    return $data;
}
function file_get_html($url, $use_include_path = false, $context = null, $offset = -1, $maxLen = -1, $lowercase = true, $forceTagsClosed = true, $target_charset = DEFAULT_TARGET_CHARSET, $stripRN = true, $defaultBRText = DEFAULT_BR_TEXT, $defaultSpanText = DEFAULT_SPAN_TEXT)
{
    $dom = new simple_html_dom;
    $args = func_get_args();
    $dom->load(call_user_func_array('curl_get_contents', $args), true);
    return $dom;
    //$dom = new simple_html_dom(null, $lowercase, $forceTagsClosed, $target_charset, $stripRN, $defaultBRText, $defaultSpanText);
}
I realize this is an old question, but...
I was just setting up my local sandbox on Linux with PHP 7 and ran across this. When you run scripts from the terminal, PHP uses the php.ini for the CLI. I found that the "user_agent" option was commented out there. I uncommented it, added a Mozilla user agent, and now it works.
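For reference, the relevant lines in the CLI php.ini look roughly like this (the path and the user-agent string below are only examples, they will differ on your system):
; e.g. in /etc/php/7.0/cli/php.ini
; user_agent = "PHP"   <- shipped commented out
user_agent = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"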
Did you check the permissions on the file? I set 777 on my file (on localhost, obviously) and that fixed the problem.
You may also need some additional information in the context to make the website believe that the request comes from a human. What I did was open the website in a browser and copy any extra headers that were sent in the HTTP request.
$context = stream_context_create(
    array(
        "http" => array(
            'method' => "GET",
            // keep each header on a single line, separated by \r\n
            "header" => "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36\r\n" .
                "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3\r\n" .
                "Accept-Language: es-ES,es;q=0.9,en;q=0.8,it;q=0.7\r\n"
            // note: only add "Accept-Encoding: gzip, deflate, br" if you are prepared
            // to decompress the response yourself; file_get_contents() will not do it
        )
    )
);
In my case, the server was rejecting the HTTP/1.0 protocol via its .htaccess configuration. It seems file_get_contents uses HTTP/1.0 by default.
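If that is the cause, you can tell the http stream wrapper to speak HTTP/1.1 instead (a minimal sketch; the URL is a placeholder):
$context = stream_context_create(array(
    "http" => array(
        "protocol_version" => 1.1,
        // HTTP/1.1 defaults to keep-alive, which PHP's wrapper does not handle well
        "header" => "Connection: close\r\n"
    )
));
echo file_get_contents("http://www.example.com/viewProperty.html?id=7715888", false, $context);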
Use the code below.
If you use file_get_contents:
$context = stream_context_create(
    array(
        "http" => array(
            "header" => "User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"
        )
    ));
If you use cURL (note that CURLOPT_USERAGENT takes just the value, without the "User-Agent: " prefix):
curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36');
So here is the problem:
I've made some PHP code to register page views (with a lot of help from Stack Overflow). I specifically want to avoid using cookies for this. I would also prefer not to use an SQL database if a workable solution is possible without one.
To deal with browser behaviour like prefetching, I am trying to filter out the extra page views with an if/elseif/else statement.
The problem in practice is that page views are sometimes written twice to the log file, or there is a timing issue between the if statement and the rest of the code.
Here is the code I have:
<?php
/*set variables for log file */
$useragnt = $_SERVER['HTTP_USER_AGENT']; //get user agent
$ipaddrs = $_SERVER['REMOTE_ADDR']; //get ipaddress
$filenameLog = "besog/" . date("Y-m-d") . "LOG.txt";
date_default_timezone_set('Europe/Copenhagen');
$infoToLog = $ipaddrs . "\t" . $useragnt . "\t" . date('H:i:s') . "\n";
$file_arr = file($filenameLog);
$last_row = $file_arr[count($file_arr) - 1];
$arr = explode( "\t", $last_row);
$tidForSidsteLogLinje = strtotime($arr[2]);
$tidNu = strtotime(date('H:i:s'));
//write ip, useragent and time of page view to log file logfil, but only if the same visitor has not viewed the page within the last 10 seconds
if ($arr[0] == $ipaddrs and $arr[1] == $useragnt and $tidNu - $tidForSidsteLogLinje > 10){
//write ip and user agent to textfile
$file = fopen($filenameLog, "a+");
fwrite($file, $infoToLog);
fclose($file);
}
elseif ($arr[0] == $ipaddrs and $arr[1] == $useragnt and $tidNu - $tidForSidsteLogLinje < 10){
die;
}
else {
//Write ip and user agent to textfile
$file = fopen($filenameLog, "a+");
fwrite($file, $infoToLog);
fclose($file);
}
?>
Here are examples of the duplicate entries in the log (I have masked parts of the IP addresses):
xxx.x.95.240 Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko 12:52:33
xx.xxx.229.91 Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 12:52:45
xx.xxx.229.91 Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36 12:52:45
xxx.xx.154.83 ServiceTester/4.4.64.1514 12:53:03
xxx.xx.91.126 Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/603.2.5 (KHTML, like Gecko) Version/10.1.1 Safari/603.2.5 12:53:05
xx.xxx.35.3 Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 12:53:09
xxx.xxx.130.34 Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko 12:53:56
xxx.xxx.130.34 Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko 12:53:56
xx.xxx.211.101 Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 12:54:11
x.xxx.54.4 Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/601.6.17 (KHTML, like Gecko) Version/9.1.1 Safari/601.6.17 12:54:33
If my if statements were working as intended, it should not be possible to see duplicate lines in the log like the ones above.
How do I improve the code to eliminate these duplicate entries?
Any help or suggestions are much appreciated!
We use a complex website visitor tracking/logging system in our application.
I would recommend that you store these values in a database and set the IP address field as unique.
You can set a cookie ID like:
Cookie::set('__id', time());
and then go like:
if (isset($_COOKIE['__id'])) {
    // with MySQL you go like this; the HTTP_USER_AGENT, referrer and any other
    // information you want to store can be added the same way
    $db->Execute("INSERT IGNORE INTO VisitorTable (hash, ip, ...)
                  VALUES ('{$_COOKIE['__id']}', '{$_SERVER['REMOTE_ADDR']}')");
}
This way the visitor only exists once in your list. See INSERT IGNORE for more.
Now you can easily make another function to save the pages the user visits.
In a script that gets executed every time, you go like:
$db->Execute("INSERT INTO VisitorActivity (visitorID, page, ...) VALUES ('{$_COOKIE['__id']}', '{$_SERVER['..']}')");
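For INSERT IGNORE to actually deduplicate, the column you key on needs a UNIQUE index. A rough sketch of what such a table could look like (the table and column names just follow the example above and are not prescriptive):
$db->Execute("CREATE TABLE IF NOT EXISTS VisitorTable (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    hash VARCHAR(32)  NOT NULL,
    ip   VARCHAR(45)  NOT NULL,  -- 45 characters also fits IPv6 addresses
    UNIQUE KEY uniq_ip (ip)      -- rows with a duplicate ip are silently skipped by INSERT IGNORE
)");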
I'm trying to get a timestamp into the record for a reason on my website, but the way I have coded it right now the timestamp ends up inside the user agent value. The current code is as follows:
$message['HTTP_USER_AGENT'] = $_SERVER['HTTP_USER_AGENT'].' Timestamp : ' . $orgtimestamp;
$sql = 'INSERT INTO imp_table (message) VALUES("'.mysql_real_escape_string(serialize($message)).'");';
echo(mysql_real_escape_string(serialize($message)))."\n";
The output is like this:
a:1:{s:15:\"HTTP_USER_AGENT\";s:106:\"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:32.0) Gecko/20100101 Firefox/32.0 Timestamp : 2014-09-15 09:37:58am\";}
Can anybody help me get an output where the timestamp appears like I have shown below?
a:1: Timestamp : 2014-09-15 09:37:58am :{s:15:\"HTTP_USER_AGENT\";s:106:\"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:32.0) Gecko/20100101 Firefox/32.0 \";}
That is not a valid serialized string.
$message['HTTP_USER_AGENT'] = $_SERVER['HTTP_USER_AGENT'];
$message['Timestamp'] = $orgtimestamp;
echo(mysql_real_escape_string(serialize($message)))."\n";
The code above first builds the array and then serializes it, so it should look like this:
a:2:{s:15:\"HTTP_USER_AGENT\";s:72:\"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:32.0) Gecko/20100101 Firefox/32.0\";s:9:\"Timestamp\";s:21:\"2014-09-15 09:37:58am\";}
Try
$message['HTTP_USER_AGENT'] = 'Timestamp : '.$orgtimestamp.' '.$_SERVER['HTTP_USER_AGENT'];
What am I missing here? All I get returned is "Location: 0".
ini_set("user_agent","Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1");
$url = "http://ebird.org/ws1.1/data/notable/region/recent?rtype=subnational1&r=US-AZ";
$xml = simplexml_load_file($url);
$locname = $xml->response->result->sighting->loc-id;
echo "Location: ".$locname . "<br/>";
The problem is with the "-": PHP thinks that you want to subtract id from $xml->response->result->sighting->loc.
The solution is to change:
$locname = $xml->response->result->sighting->loc-id;
to:
$locname = $xml->result[0]->sighting[0]->{'loc-id'};
It works for me. I hope this helps you.
Note: I dropped the response node because it is the root element, and I chose the first element because the file contains many nodes.
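Putting the fix back into the original snippet, the whole thing would look roughly like this (same feed URL as in the question; only the property access changes):
ini_set("user_agent", "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1");
$url = "http://ebird.org/ws1.1/data/notable/region/recent?rtype=subnational1&r=US-AZ";
$xml = simplexml_load_file($url);
// $xml already points at the root <response> element, and hyphenated
// element names need the {'...'} syntax
$locname = $xml->result[0]->sighting[0]->{'loc-id'};
echo "Location: " . $locname . "<br/>";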
I'm using a Wikipedia API link to get the main image for some well-known characters/events.
Example: (Stanislao Mattei)
This would show as follows.
Now my question:
I'd like to parse the XML to get the image URL so it can be shown.
Here is the code I'm willing to use, if it is right (thanks to ccKep):
<?PHP
ini_set("user_agent","Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1");
$url = "http://en.wikipedia.org/w/api.php?action=query&list=allimages&aiprop=url&format=xml&ailimit=1&aifrom=Stanislao Mattei";
$xml = simplexml_load_file($url);
$extracts = $xml->xpath("/api/query/allimages");
var_dump($extracts);
?>
It should give results as an array.
How can I get from it the exact URL of the image to be shown, which should be:
http://upload.wikimedia.org/wikipedia/en/a/a1/Stanislaus.jpg
so I can put it in HTML code:
<img src="http://upload.wikimedia.org/wikipedia/en/a/a1/Stanislaus.jpg">
Thanks a lot.
Did you try $xml->query->allimages->img->attributes()->url ?
Your code will look like this:
<?php
ini_set("user_agent","Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1");
$url = "http://en.wikipedia.org/w/api.php?action=query&list=allimages&aiprop=url&format=xml&ailimit=1&aifrom=Stanislao Mattei";
$xml = simplexml_load_file($url);
$url = $xml->query->allimages->img->attributes()->url;
echo "URL: ".$url . "<br/>";
echo '<img src="'.$url.'">';
?>