Translate curl command to PHP

Hello, I have this Linux command that downloads a compressed file:
curl -L -O http://www.url.com
The problem is that when I run curl inside PHP I get the HTML code instead of the compressed file.
The PHP code is this:
$url = 'https://www.example.com';
$filePath = '/app/storage/temp/' . $fileName;
$fp = fopen($filePath . 'me', "w");
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_FTPAPPEND, 0);
curl_setopt($ch, CURLOPT_FILE, $fp);
$data = curl_exec($ch);
fwrite($fp, $data);
curl_close($ch);
fclose($fp);
I can't share the real URL since it contains secret keys.
EDIT
The PHP curl call downloaded the same HTML file that the command-line curl did before I added the -L -O options; once I added those options, the command started working. So the question is: how can I set the equivalent of those options in PHP?

Using CURLOPT_FILE means that the output is written to the file handle, so you don't need to also call fwrite (especially since, without CURLOPT_RETURNTRANSFER set, the return value of curl_exec is just true or false).
If it is indeed possible to load from this URL, either remove the fwrite, or remove the CURLOPT_FILE and use:
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
That way, the return value of curl_exec will be the loaded data.
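A minimal sketch of the first approach (writing straight to the file handle, with the fwrite removed); the URL and output path are placeholders for your real values:
$url = 'https://www.example.com';                     // placeholder for the real URL
$fp  = fopen('/app/storage/temp/download.zip', 'w');  // placeholder path
$ch  = curl_init($url);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // PHP equivalent of curl -L
curl_setopt($ch, CURLOPT_FILE, $fp);             // write the body to $fp, like curl -O writes to a file
$ok = curl_exec($ch);                            // returns true/false here; the data goes to $fp
curl_close($ch);
fclose($fp);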

Create an empty zip file where you want to download your file, then fill it with a function like this:
function downloadFile($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);
    curl_setopt($ch, CURLOPT_TIMEOUT, 60);
    set_time_limit(65);
    $rawFileData = curl_exec($ch);
    $info = curl_getinfo($ch);
    if (curl_errno($ch)) {
        return curl_error($ch);
    }
    curl_close($ch);
    $filepath = dirname(__FILE__) . '/testfile.zip';
    file_put_contents($filepath, $rawFileData); // Put the downloaded content in the file
    return $info;
}
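For instance, a hypothetical call might look like this:
$result = downloadFile('https://www.example.com/backup.zip'); // hypothetical URL
print_r($result); // transfer info array on success, or a curl error string on failure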
Hope this helps!

Guys, sorry for making this hard for you, since I couldn't give much information about my question.
The problem is solved. I realized that the URL did work with
curl -O -L http://www.example.com
and also in the web browser.
That last test was actually the one that gave me the clue:
Open the web browser
Press F12
Paste the URL and hit enter
I came to realize I needed to add some headers. The headers were these:
accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
accept-encoding:gzip, deflate, br
accept-language:en-US,en;q=0.8,es-MX;q=0.6,es;q=0.4
user-agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36
After I added these headers to the curl call inside PHP, the result was the compressed zip file I was looking for.
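A minimal sketch of how those headers can be attached with CURLOPT_HTTPHEADER (the URL and output path are placeholders, and any browser user-agent string will do):
$ch = curl_init('https://www.example.com'); // placeholder URL
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Language: en-US,en;q=0.8,es-MX;q=0.6,es;q=0.4',
));
// Accept-Encoding is best set via CURLOPT_ENCODING so curl decompresses the transfer for you
curl_setopt($ch, CURLOPT_ENCODING, 'gzip, deflate');
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36');
$zipData = curl_exec($ch);
curl_close($ch);
file_put_contents('/app/storage/temp/download.zip', $zipData); // placeholder path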

Related

PHP curl multiple queries

I would like to open all the page ids of the website, starting with http://website.com/page.php?id=1 and ending with id=1000, take the data via preg_match, and record it somewhere, in a .txt or .sql file.
Below is the curl function I'm using at the moment. Please kindly advise the full code that will get this job done.
function curl($url)
{
    $POSTFIELDS = 'name=admin&password=guest&submit=save';
    $reffer = "http://google.com/";
    $agent = "Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.4) Gecko/20030624 Netscape/7.1 (ax)";
    $cookie_file_path = "C:/Inetpub/wwwroot/spiders/cookie/cook"; // Please set your cookie file path. This file must have CHMOD 777 (full read/write).
    $ch = curl_init(); // Initialize a cURL session.
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_URL, $url); // The URL to fetch. You can also set this when initializing a session with curl_init().
    curl_setopt($ch, CURLOPT_USERAGENT, $agent); // The contents of the "User-Agent: " header to be used in the HTTP request.
    curl_setopt($ch, CURLOPT_POST, 1); // TRUE to do a regular HTTP POST (application/x-www-form-urlencoded, as used by HTML forms).
    curl_setopt($ch, CURLOPT_POSTFIELDS, $POSTFIELDS); // The full data to post in an HTTP "POST" operation.
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // TRUE to return the transfer as the return value of curl_exec() instead of outputting it directly.
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // TRUE to follow any "Location: " header the server sends (recursive unless CURLOPT_MAXREDIRS is set).
    curl_setopt($ch, CURLOPT_REFERER, $reffer); // The contents of the "Referer: " header to be used in the HTTP request.
    curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie_file_path); // The file containing the cookie data (Netscape format or plain HTTP-style headers).
    curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie_file_path); // The file to save all internal cookies to when the connection closes.
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
You can try it with the function file_put_contents and a loop calling your function:
$file = "data.txt";
$website_url = "http://website.com/page.php?id=";
for ($i = 1; $i <= 1000; $i++) {
    file_put_contents($file, curl($website_url . $i), FILE_APPEND);
}
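If you also want the preg_match extraction step the question asks for, here is a hedged sketch; the pattern and capture group are hypothetical, so adjust them to the markup you are actually scraping:
$file = "data.txt";
$website_url = "http://website.com/page.php?id=";
for ($i = 1; $i <= 1000; $i++) {
    $html = curl($website_url . $i);
    // Hypothetical pattern: grab the contents of each page's <title> tag
    if (preg_match('/<title>(.*?)<\/title>/is', $html, $m)) {
        file_put_contents($file, $m[1] . "\n", FILE_APPEND);
    }
}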

Curl isn't working with Nagios

I'm currently developing a Nagios plugin with PHP and cURL.
My problem is that my script works well when I run it with PHP directly, like this:
#php /usr/local/nagios/plugins/script.php
I mean, it returns me a 200 HTTP code.
But with Nagios it returns a 0 HTTP code. It's strange, because the PHP itself works under Nagios (I can read variables...). So the problem seems to be that Nagios can't use cURL.
Can someone give me a clue? Thanks.
Here you can see my code.
<?php
$widgeturl = "http://google.com";
$agent = "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.12) Gecko/2009070611 Firefox/3.0.12";
if (!function_exists("curl_init")) die("pushMeTo needs CURL module, please install CURL on your php.");
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $widgeturl);
$page = curl_exec($ch); //or die("Curl exe failed");
$code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch); // close the handle before exit(), otherwise this line is never reached
if ($code == 200) {
    fwrite(STDOUT, $page.'Working well : '.$code);
    exit(0);
} else {
    fwrite(STDOUT, $page.'not working : '.$code);
    exit(1);
}
Solution:
It was because the proxy was set at the OS level (CentOS), but Nagios was not using it, unlike PHP. So I just had to add:
curl_setopt($ch, CURLOPT_PROXY, 'myproxy:8080');
curl_setopt($ch, CURLOPT_PROXYUSERPWD, "user:pass");
Hope it could help someone.
Can you try making the cURL request like this (i.e. a header-only request):
<?php
// config
$url = 'http://www.google.com/';
// make request & parse response
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, $url);
curl_setopt($curl, CURLOPT_FILETIME, true);
curl_setopt($curl, CURLOPT_NOBODY, true);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
$response_header = curl_exec($curl);
$response_info = curl_getinfo($curl);
curl_close($curl);
// debug
echo "<b>response_header</b>\r\n";
var_dump($response_header);
echo "<b>response_info</b>\r\n";
var_dump($response_info);
The above will output the raw response header as a string and the parsed response info array.

Error when copying img with a cURL script (php)

I made a nice simple userscript:
When I browse the web, I can "bookmark" any image in one click.
My userscript:
Grab the img src
Grab the URL of the webpage
Copy the .jpg/.png/.gif to my server
Everything works perfectly, BUT in some cases the script cannot copy the file...
Actually the file is created but does not contain the image data; it only contains the content of an error webpage:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /data/x/xxx_xxx_x.jpg on this server.</p>
<p>Additionally, a 403 Forbidden
error was encountered while trying to use an ErrorDocument to handle the request.</p>
<hr>
<address>Apache Server at xxxxxxxx.net Port 80</address>
</body></html>
The "copy" code (php):
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $urlimg);
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
set_time_limit(300); # 5 minutes for PHP
curl_setopt($ch, CURLOPT_TIMEOUT, 300); # and also for CURL
$path = $dirpix.'/'.$aa.'/'.$mm;
if (!is_dir($path)) {
    mkdir($path);
}
$outfile = fopen($path.'/'.$id.'.'.$ext, 'wb');
curl_setopt($ch, CURLOPT_FILE, $outfile);
curl_exec($ch);
fclose($outfile);
curl_close($ch);
Maybe the website blocks that kind of "copy" script?
Thanks!
Two things I can think of here:
Set a user agent on your curl request. Since you can view the image in a browser but curl gets a 403, it could very well be user-agent filtering on the server side.
Add a referer to your curl request. You can send the referer information from your userscript to the PHP script; you'd have to POST or GET the value of window.location.href. A sketch of both options follows.
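A minimal sketch of both suggestions, assuming $urlimg is the image URL and $pageUrl is the page the userscript was on (both passed in by your script); the user-agent string is just an example:
$ch = curl_init($urlimg);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// Pretend to be a regular browser; some servers return 403 to unknown user agents
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Gecko/20100101 Firefox/60.0');
// Send the page the image was embedded in as the referer; some servers block hotlink-style requests without it
curl_setopt($ch, CURLOPT_REFERER, $pageUrl);
$imgData = curl_exec($ch);
curl_close($ch);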
Try the code below; it is working fine on my server and is tested:
<?php
$img[] = 'http://i.indiafm.com/stills/celebrities/sada/thumb1.jpg';
$img[] = 'http://i.indiafm.com/stills/celebrities/sada/thumb5.jpg';
$path = "images/";
foreach ($img as $i) {
    save_image($i, $path);
    if (getimagesize($path.basename($i))) {
        echo '<h3 style="color: green;">Image ' . basename($i) . ' Downloaded OK</h3>';
    } else {
        echo '<h3 style="color: red;">Image ' . basename($i) . ' Download Failed</h3>';
    }
}

// Alternative image saving using cURL, seeing as allow_url_fopen is disabled - bummer
function save_image($img, $fullpath = 'basename') {
    if ($fullpath != 'basename') {
        $fullpath = $fullpath.basename($img);
    }
    $ch = curl_init($img);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
    $rawdata = curl_exec($ch);
    curl_close($ch);
    if (file_exists($fullpath)) {
        unlink($fullpath);
    }
    $fp = fopen($fullpath, 'x');
    fwrite($fp, $rawdata);
    fclose($fp);
}
For it to work correctly, add a user agent:
$agent= 'Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11';
curl_setopt($ch, CURLOPT_USERAGENT, $agent);
I had a hard time accessing my DLink camera using this method.
But finally I found the issue: authentication.
Don't forget authentication.
This is the solution that worked for me, thanks to all contributors.
<?php
function download_image1($image_url, $image_file) {
    $fp = fopen($image_file, 'w+'); // open file handle
    $ch = curl_init($image_url);
    $agent = 'Accept:image/jpeg,text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11';
    curl_setopt($ch, CURLOPT_USERAGENT, $agent);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 400);
    curl_setopt($ch, CURLOPT_AUTOREFERER, false);
    curl_setopt($ch, CURLOPT_REFERER, "http://google.com");
    curl_setopt($ch, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE); // Follows redirect responses.
    curl_setopt($ch, CURLOPT_USERPWD, "user:password");
    $raw = curl_exec($ch);
    if ($raw === false) {
        trigger_error(curl_error($ch));
    }
    curl_close($ch);
    $localName = $image_file; // The file name of the source can be used locally
    if (file_exists($localName)) {
        unlink($localName);
    }
    $fp = fopen($localName, 'x');
    fwrite($fp, $raw);
    fclose($fp);
}
download_image1("http://url_here/image.jpg", "/path/filename.jpg"); // to access DLink cameras
// be sure you have rights to the path
?>
The code above probably has some redundancy, since I am opening the file twice with fopen. To be honest, I won't correct it, since it is working!

Downloading files via PHP

I have a problem with downloading files via PHP.
The funny thing is that I cannot trace the problem. The code works well for some websites and not for others. It is a loop in PHP that downloads backup files from websites (there is a delay with sleep before requests).
Why can't I trace the problem?
Because when I run the code manually, it works (downloads the file). When it is run by CRON, sometimes it downloads the file and sometimes it does NOT (it only downloads 2 empty new lines).
The download is done with curl (I have also tried different code with fsockopen and fread).
Does anyone have an idea how I can solve this?
Headers are removed with cURL by setting the correct option.
function fetch_url($url, $cookiejar = '') {
    $c = curl_init();
    curl_setopt($c, CURLOPT_URL, $url);
    curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($c, CURLOPT_TIMEOUT, 20);
    if ($cookiejar != '') {
        curl_setopt($c, CURLOPT_COOKIEJAR, $cookiejar);
        curl_setopt($c, CURLOPT_COOKIEFILE, $cookiejar);
    }
    curl_setopt($c, CURLOPT_HEADER, false);
    curl_setopt($c, CURLOPT_SSL_VERIFYHOST, false);
    curl_setopt($c, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($c, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($c, CURLOPT_AUTOREFERER, true);
    curl_setopt($c, CURLOPT_USERAGENT, 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12');
    $con = curl_exec($c);
    curl_close($c);
    return $con;
}
echo fetch_url('http://www.example.com/zip.zip');
Try using curl_getinfo (http://www.php.net/manual/en/function.curl-getinfo.php) to display information about the curl request. Note that these must be called before curl_close:
echo curl_errno($c);
print_r(curl_getinfo($c));
Also, maybe it's elsewhere in your code, but I'm not seeing any content-type headers for your echoing of the file:
$file = fetch_url('http://www.example.com/zip.zip');
header('Content-type: application/zip');
header('Content-Disposition: attachment; filename="zip.zip"');
header("Content-length: " . strlen($file));
echo $file;

How can I set a cookie in cURL

I am fetching some site's page, but it displays nothing and the URL address changes.
For example, I typed:
http://localhost/sushant/EXAMPLE_ROUGH/curl.php
In the curl page my code is:
$fp = fopen("cookie.txt", "w");
fclose($fp);
$agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; pl; rv:1.9) Gecko/2008052906 Firefox/3.0';
$ch = curl_init();
curl_setopt($ch, CURLOPT_USERAGENT, $agent);
// 2. set the options, including the url
curl_setopt($ch, CURLOPT_URL, "http://www.fnacspectacles.com/place-spectacle/manifestation/Grand-spectacle-LE-ROI-LION-ROI4.htm");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_COOKIEJAR, 'cookie.txt');
curl_setopt($ch, CURLOPT_COOKIEFILE, "cookie.txt");
// 3. execute and fetch the resulting HTML output
if (curl_exec($ch) === false) {
    echo 'Curl error: ' . curl_error($ch);
} else {
    echo $output = curl_exec($ch);
}
// 4. free up the curl handle
curl_close($ch);
But it changes the URL to something like this:
http://localhost/aide.do?sht=_aide_cookies_
Object not found.
How can I solve this problem? Please help.
It looks like you're both trying to save cookies to cookie.txt and read them from there. What you would normally do is have curl save the cookies from the first URL you visit to a file; then, for subsequent requests, you supply that file.
I'm not sure of the PHP aspects, but from the curl aspects it looks like you're trying to read a cookie file that doesn't exist yet.
edit: oh, and if you're only doing one request, you shouldn't even need cookies.
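A minimal sketch of that save-then-supply flow, assuming the first page sets the cookies the second request needs (the URLs are placeholders):
$jar = '/tmp/cookie.txt'; // curl creates this file on the first request

// First request: the server's Set-Cookie headers are written to the jar
$ch = curl_init('http://www.example.com/start');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEJAR, $jar);
curl_exec($ch);
curl_close($ch);

// Second request: the saved cookies are sent back via CURLOPT_COOKIEFILE
$ch = curl_init('http://www.example.com/page');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEFILE, $jar);
$html = curl_exec($ch);
curl_close($ch);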
Seems like there is JavaScript in the output, which is causing the redirect.
So for testing purposes, instead of using:
echo $output = curl_exec($ch);
Use:
$output = curl_exec($ch);
echo strip_tags($output);
Update:
The code below will put the contents into content.htm. Everything you need for parsing should be in there and in the output variable.
$output = curl_exec($ch);
if ($output === false) {
    echo 'Curl error: ' . curl_error($ch);
} else {
    $fp2 = fopen("content.htm", "w");
    fwrite($fp2, $output);
    fclose($fp2);
}
