Error when copying an image with a cURL script (PHP)

I made a nice, simple userscript: when I browse the web, I can "bookmark" any image in one click.
My userscript:
Grab the img src
Grab the URL of the webpage
Copy the .jpg / .png / .gif to my server
Everything works perfectly, BUT in some cases the script cannot copy the file.
The file is actually created, but it does not contain the image data; it only contains the content of an error page:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /data/x/xxx_xxx_x.jpg on this server.</p>
<p>Additionally, a 403 Forbidden
error was encountered while trying to use an ErrorDocument to handle the request.</p>
<hr>
<address>Apache Server at xxxxxxxx.net Port 80</address>
</body></html>
The "copy" code (PHP):
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $urlimg);
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
set_time_limit(300);                    # 5 minutes for PHP
curl_setopt($ch, CURLOPT_TIMEOUT, 300); # and also for cURL

$path = $dirpix.'/'.$aa.'/'.$mm;
if ( ! is_dir($path)) {
    mkdir($path);
}

$outfile = fopen($path.'/'.$id.'.'.$ext, 'wb');
curl_setopt($ch, CURLOPT_FILE, $outfile);
curl_exec($ch);
fclose($outfile);
curl_close($ch);
Maybe the website blocks that kind of "copy" script?
Thanks!

Two things I can think of here:
Set a user agent on your cURL request. From what you say, you can view the image in the browser but cURL gets a 403 error; it could very well be user-agent filtering on the server side.
Add a referer to your cURL request. You can send the referer information from your userscript to the PHP script; you'd have to POST or GET the value of window.location.href.
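A minimal sketch of both suggestions combined (the function name, URL, and user-agent string are illustrative placeholders, not part of the original script):

```php
<?php
// Hypothetical helper: fetch an image while presenting browser-like headers.
// $referer would come from the userscript (the value of window.location.href).
function fetch_image($urlimg, $referer) {
    $ch = curl_init($urlimg);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    // Any realistic desktop-browser user-agent string works here.
    curl_setopt($ch, CURLOPT_USERAGENT,
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Gecko/20100101 Firefox/115.0');
    // Claim the request comes from the page the image was embedded in.
    curl_setopt($ch, CURLOPT_REFERER, $referer);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data; // false on failure, raw image bytes on success
}
```

If the server filters on either header, this is usually enough to turn the 403 into a 200.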

Try the code below; it is tested and works fine on my server:
<?php
$img[] = 'http://i.indiafm.com/stills/celebrities/sada/thumb1.jpg';
$img[] = 'http://i.indiafm.com/stills/celebrities/sada/thumb5.jpg';
$path = "images/";
foreach ($img as $i) {
    save_image($i, $path);
    if (getimagesize($path.basename($i))) {
        echo '<h3 style="color: green;">Image ' . basename($i) . ' Downloaded OK</h3>';
    } else {
        echo '<h3 style="color: red;">Image ' . basename($i) . ' Download Failed</h3>';
    }
}

// Alternative image saving using cURL, seeing as allow_url_fopen is disabled - bummer
function save_image($img, $fullpath = 'basename') {
    if ($fullpath != 'basename') {
        $fullpath = $fullpath.basename($img);
    }
    $ch = curl_init($img);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
    $rawdata = curl_exec($ch);
    curl_close($ch);
    if (file_exists($fullpath)) {
        unlink($fullpath);
    }
    $fp = fopen($fullpath, 'x');
    fwrite($fp, $rawdata);
    fclose($fp);
}

For it to work correctly, add a user agent. Note that CURLOPT_USERAGENT should contain only the user-agent string; an Accept header belongs in CURLOPT_HTTPHEADER instead:
$agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11';
curl_setopt($ch, CURLOPT_USERAGENT, $agent);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'));

I had a hard time accessing my D-Link camera using this method, but I finally found the issue: authentication. Don't forget authentication.
This is the solution that worked for me; thanks to all contributors.
<?php
function download_image1($image_url, $image_file) {
    $ch = curl_init($image_url);
    // CURLOPT_USERAGENT takes only the user-agent string;
    // the Accept header goes in CURLOPT_HTTPHEADER.
    $agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11';
    curl_setopt($ch, CURLOPT_USERAGENT, $agent);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept: image/jpeg,text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 400);
    curl_setopt($ch, CURLOPT_AUTOREFERER, false);
    curl_setopt($ch, CURLOPT_REFERER, "http://google.com");
    curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE); // follow redirect responses
    curl_setopt($ch, CURLOPT_USERPWD, "user:password"); // the authentication part
    $raw = curl_exec($ch);
    if ($raw === false) {
        trigger_error(curl_error($ch));
    }
    curl_close($ch);
    if (file_exists($image_file)) {
        unlink($image_file);
    }
    $fp = fopen($image_file, 'x'); // 'x' mode fails if the file still exists
    fwrite($fp, $raw);
    fclose($fp);
}
download_image1("http://url_here/image.jpg", "/path/filename.jpg"); // to access D-Link cameras
// be sure you have write permission on the path
?>
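If plain CURLOPT_USERPWD is still rejected, the camera may require digest rather than basic authentication. A hedged sketch of letting libcurl negotiate the scheme (the URL and credentials are placeholders):

```php
<?php
$ch = curl_init('http://camera.local/image.jpg'); // placeholder camera URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// CURLAUTH_ANY lets libcurl pick basic, digest, or NTLM,
// depending on what the server advertises in its 401 response.
curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
curl_setopt($ch, CURLOPT_USERPWD, 'user:password'); // placeholder credentials
$raw = curl_exec($ch);
curl_close($ch);
```

This costs one extra round trip (the 401 challenge) but works regardless of which scheme the device uses.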

Related

Translate curl command to PHP

Hello, I have this Linux command that downloads a compressed file:
curl -L -O http://www.url.com
The problem is that when I do the same with cURL inside PHP, I get the HTML code instead of the compressed file.
The PHP code is this:
$url = 'https://www.example.com';
$filePath = '/app/storage/temp/' . $fileName;
$fp = fopen($filePath . 'me', "w");
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_FTPAPPEND, 0);
curl_setopt($ch, CURLOPT_FILE, $fp);
$data = curl_exec($ch);
fwrite($fp, $data);
curl_close($ch);
fclose($fp);
I can't share the real URL since it contains secret keys.
EDIT
The plain curl command downloaded the same HTML file as my PHP code; when I added the -L -O options to the curl command, it started working. So the question is: how can I do the equivalent in PHP?
Using CURLOPT_FILE means that the output is written to the file handle. You don't need to also use fwrite (especially since, without CURLOPT_RETURNTRANSFER set, the return value of curl_exec is just true or false).
If it is indeed possible to load from this URL, either remove the fwrite, or remove the CURLOPT_FILE and use:
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
That way, the return value of curl_exec will be the loaded data.
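Putting that together, a minimal sketch of a `curl -L -O` equivalent (the function name is illustrative; URL and file path come from the caller):

```php
<?php
// Download $url to $filePath, following redirects, like `curl -L -O`.
function download_file($url, $filePath) {
    $fp = fopen($filePath, 'w');
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // the -L part: follow redirects
    curl_setopt($ch, CURLOPT_FILE, $fp);            // the -O part: write body to a file
    $ok = curl_exec($ch); // returns true/false when CURLOPT_FILE is set
    curl_close($ch);
    fclose($fp);
    return $ok;
}
```

Note there is no fwrite here: CURLOPT_FILE already streams the body into the handle, which also avoids holding a large download in memory.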
This downloads the file and writes it out as a zip file next to the script:
function downloadFile($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);
    curl_setopt($ch, CURLOPT_TIMEOUT, 60);
    set_time_limit(65);
    $rawFileData = curl_exec($ch);
    $info = curl_getinfo($ch);
    if (curl_errno($ch)) {
        return curl_error($ch);
    }
    curl_close($ch);
    $filepath = dirname(__FILE__) . '/testfile.zip';
    file_put_contents($filepath, $rawFileData); // put the downloaded content in the file
    return $info;
}
Hope this helps!
Guys, sorry for making this hard for you; I couldn't give much information about my problem.
The problem is solved. I realized that the URL did work with
curl -O -L http://www.example.com
and also in the web browser.
That last test was actually the one that put me on the right path:
Open the web browser
Press F12
Paste the URL and hit Enter
I came to realize I needed to add some headers. The headers were these:
accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
accept-encoding:gzip, deflate, br
accept-language:en-US,en;q=0.8,es-MX;q=0.6,es;q=0.4
user-agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36
After I added these headers to the cURL request inside PHP, the result was the compressed zip file I was searching for.
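A sketch of adding those browser-style headers in PHP (the URL is a placeholder; header values are copied from above). One caveat: if you send accept-encoding: gzip by hand you must decompress the response yourself, so it is usually easier to let cURL manage it via CURLOPT_ENCODING:

```php
<?php
$ch = curl_init('https://www.example.com'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // the -L behaviour
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'accept-language: en-US,en;q=0.8,es-MX;q=0.6,es;q=0.4',
));
// Empty string = advertise every encoding cURL supports and decompress automatically.
curl_setopt($ch, CURLOPT_ENCODING, '');
curl_setopt($ch, CURLOPT_USERAGENT,
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36');
$data = curl_exec($ch);
curl_close($ch);
```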

cURL isn't working with Nagios

I'm currently developing a Nagios plugin with PHP and cURL.
My problem is that my script works well when I run it with PHP directly, like this:
#php /usr/local/nagios/plugins/script.php
I mean, it returns a 200 HTTP code.
But under Nagios it returns a 0 HTTP code. It's strange, because PHP itself works under Nagios (I can read variables...); it's only cURL that Nagios can't use.
Can someone give me a clue? Thanks.
Here you can see my code.
<?php
$widgeturl = "http://google.com";
$agent = "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.12) Gecko/2009070611 Firefox/3.0.12";
if (!function_exists("curl_init")) die("pushMeTo needs the cURL module, please install cURL for your PHP.");
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $widgeturl);
curl_setopt($ch, CURLOPT_USERAGENT, $agent);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // so $page holds the body, not just true/false
$page = curl_exec($ch); //or die("Curl exe failed");
$code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch); // close before exit(), otherwise this is never reached
if ($code == 200) {
    fwrite(STDOUT, $page.'Working well : '.$code);
    exit(0);
} else {
    fwrite(STDOUT, $page.'not working : '.$code);
    exit(1);
}
Solution:
The proxy was set at the OS level (CentOS), but Nagios, unlike PHP run from the shell, was not using it. So I just had to add:
curl_setopt($ch, CURLOPT_PROXY, 'myproxy:8080');
curl_setopt($ch, CURLOPT_PROXYUSERPWD, "user:pass");
Hope this helps someone.
Can you try making the cURL request like this (i.e. a header-only request):
<?php
// config
$url = 'http://www.google.com/';

// make request & parse response
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, $url);
curl_setopt($curl, CURLOPT_FILETIME, true);
curl_setopt($curl, CURLOPT_NOBODY, true);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
$response_header = curl_exec($curl);
$response_info = curl_getinfo($curl);
curl_close($curl);

// debug
echo "<b>response_header</b>\r\n";
var_dump($response_header);
echo "<b>response_info</b>\r\n";
var_dump($response_info);
The above dumps the response header and the request info, which should show where the request is going wrong.

Retrieving an RSS feed with jFeed and cURL?

I have been fighting with this for hours now. I am trying to retrieve an RSS feed from MaxHire (rsslink), parse the content, and display it using jFeed. I am aware that Ajax does not allow cross-domain requests, and I have been using the proxy.php that jFeed comes packaged with, but to no avail: it just tells me there are too many redirects in the URL. So I have increased them like so:
<?php
header('Content-type: text/html');
$context = array(
    'http' => array('max_redirects' => 99)
);
$context = stream_context_create($context);
// hand over the context to fopen()
$handle = fopen($_REQUEST['url'], "r", false, $context);
if ($handle) {
    while (!feof($handle)) {
        $buffer = fgets($handle, 4096);
        echo $buffer;
    }
    fclose($handle);
}
?>
But still no luck; it just returns a message telling me that the object has been moved. So I have moved on to using cURL like so:
$ch = curl_init('http://www.maxhire.net/cp/?EC5A6C361E43515B7A591C6539&L=EN');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, false);
$result = curl_exec($ch);
var_dump($result);
to retrieve the XML page locally, but it just returns the same "object moved" error:
<body>string(237) "<title>Object moved</title>
<h2>Object moved to here.</h2>
"
</body>
then redirects me to a local URL with &AspxAutoDetectCookieSupport=1 added to the end.
Can someone please explain what I'm doing wrong?
Right, I managed to get cURL working by faking the user agent and the cookies, and I am using a custom meta field in WordPress to assign the URL, like so:
<?php
$mykey_values = get_post_custom_values('maxhireurl');
foreach ($mykey_values as $key => $value) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $value);
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.6 (KHTML, like Gecko) Chrome/16.0.897.0 Safari/535.6');
    curl_setopt($ch, CURLOPT_HEADER, false);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_COOKIEFILE, "cookie.txt");
    curl_setopt($ch, CURLOPT_COOKIEJAR, "cookie.txt");
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
    curl_setopt($ch, CURLOPT_REFERER, "http://www.maxhire.net");
    $html = curl_exec($ch);
    curl_close($ch);
    echo $html;
}
?>

CURL server data transfer timeout

I am using the following code to get the XML data from icecat.biz:
set_time_limit(0);
$login = "Arpan";
$password = "arpan";
//$url = "http://data.icecat.biz/export/freexml.int/EN/files.index.xml";
$url = "http://data.icecat.biz/export/level4/EN";
//$url = "http://data.icecat.biz/export/freexml.int/DE/10.xml";
$user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; ru; rv:1.8.0.9) Gecko/20061206 Firefox/1.5.0.9';
$header = array(
    "Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5",
    "Accept-Language: ru-ru,ru;q=0.7,en-us;q=0.5,en;q=0.3",
    "Accept-Charset: windows-1251,utf-8;q=0.7,*;q=0.7",
    "Keep-Alive: 300");
$local_path = "myxml.xml";
$file_handle = fopen($local_path, "w");
ob_start();
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_FILE, $file_handle);
curl_setopt($ch, CURLOPT_HEADER, 0);
//curl_setopt($ch, CURLOPT_HTTPHEADER, $header);
curl_setopt($ch, CURLOPT_USERPWD, $login . ":" . $password);
curl_setopt($ch, CURLOPT_TIMEOUT, 0); // 0 = never time out on the cURL side
//curl_setopt($ch, CURLOPT_NOBODY, TRUE); // remove body
//curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$result = curl_exec($ch);
if (curl_errno($ch)) {
    echo 'Curl error: ' . curl_error($ch);
}
curl_close($ch);
ob_end_clean();
fclose($file_handle);

$xmlStr = file_get_contents($local_path);
$xmlObj = simplexml_load_string($xmlStr);
print "<pre>";
//print_r($xmlObj->Product->ProductRelated->attributes()->ID);
print_r($xmlObj);
exit;
The script is allowed to run for an unlimited time, but the XML file stops being updated after 10 to 20 seconds, and the resulting XML is incomplete. I think that after a certain time the server stops responding or the data stops being transferred. Note that the source XML on the icecat server is large.
What is the problem and how do I fix it?
It sounds like you are not giving the request enough time to download completely.
Uncomment your
//curl_setopt($c, CURLOPT_TIMEOUT, 2);
line (note the variable should be $ch, not $c) and set the timeout to 600 for a test.
Beyond that, your request looks fine. You could also check whether the server is caching responses. The last thing I've seen recently is users behind reverse proxies that cache their normal operations: some truncated responses got cached, and that is all they got back for a 24-hour period, although that may not be related to your case.
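A sketch of timeout settings suited to a large download (the values here are examples, not magic numbers): a finite connect timeout, a generous overall timeout, and a low-speed abort so a stalled transfer fails cleanly instead of hanging forever:

```php
<?php
$ch = curl_init('http://data.icecat.biz/export/level4/EN'); // URL from the question
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30); // give up if no connection within 30 s
curl_setopt($ch, CURLOPT_TIMEOUT, 600);       // allow up to 10 min for the whole transfer
// Abort if throughput stays below 1 KB/s for 60 s (a stall, not just a slow link).
curl_setopt($ch, CURLOPT_LOW_SPEED_LIMIT, 1024);
curl_setopt($ch, CURLOPT_LOW_SPEED_TIME, 60);
```

The low-speed pair is what distinguishes "server went silent" from "download is merely slow", which matches the symptom described above.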

How can I set a cookie in cURL?

I am fetching a page from another site, but it displays nothing and the URL address changes.
For example, I typed
http://localhost/sushant/EXAMPLE_ROUGH/curl.php
and in that cURL page my code is:
$fp = fopen("cookie.txt", "w");
fclose($fp);
$agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; pl; rv:1.9) Gecko/2008052906 Firefox/3.0';
$ch = curl_init();
curl_setopt($ch, CURLOPT_USERAGENT, $agent);
// 2. set the options, including the url
curl_setopt($ch, CURLOPT_URL, "http://www.fnacspectacles.com/place-spectacle/manifestation/Grand-spectacle-LE-ROI-LION-ROI4.htm");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_COOKIEJAR, 'cookie.txt');
curl_setopt($ch, CURLOPT_COOKIEFILE, "cookie.txt");
// 3. execute once, keep the result, and fetch the resulting HTML output
$output = curl_exec($ch);
if ($output === false) {
    echo 'Curl error: ' . curl_error($ch);
} else {
    echo $output;
}
// 4. free up the curl handle
curl_close($ch);
But it changes the URL to something like this:
http://localhost/aide.do?sht=_aide_cookies_
and shows "object not found".
How can I solve this problem? Please help.
It looks like you're both trying to save cookies to cookie.txt and read them from there in the same request. What you would normally do is have cURL save the cookies to a file on the first URL you visit, then supply that file for subsequent requests.
I'm not sure about the PHP aspects, but from the cURL side it looks like you're trying to read a cookie file that doesn't exist yet.
Edit: oh, and if you're only doing one request, you shouldn't even need cookies.
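The two-step flow described above might look like this (the URLs are placeholders):

```php
<?php
// Step 1: the first request writes any Set-Cookie headers into the jar file.
$ch = curl_init('http://www.example.com/');        // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEJAR, 'cookie.txt'); // cookies saved when handle closes
curl_exec($ch);
curl_close($ch);

// Step 2: subsequent requests read the jar back and send the cookies.
$ch = curl_init('http://www.example.com/page2');    // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEFILE, 'cookie.txt'); // send previously saved cookies
$html = curl_exec($ch);
curl_close($ch);
```

Note that COOKIEJAR is only flushed to disk when the handle is closed, which is why the two requests use separate handles here.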
It seems there is JavaScript in the output which is causing the redirect.
So, for testing purposes, instead of using:
echo $output = curl_exec($ch);
use:
$output = curl_exec($ch);
echo strip_tags($output);
Update:
The code below will put the contents into content.htm; everything you need for parsing should be in there and in the $output variable.
$output = curl_exec($ch);
if ($output === false) {
    echo 'Curl error: ' . curl_error($ch);
} else {
    $fp2 = fopen("content.htm", "w");
    fwrite($fp2, $output);
    fclose($fp2);
}
