How does cURL work for file upload - PHP

I have cURL code that downloads a file from a remote host, and it works fine. What I need is to make sure the file has been downloaded completely before the next lines run, but I'm not sure how cURL behaves. I want to be certain that no other lines of code execute while cURL is still downloading the file from the remote server.
This is the code:
// $fh is a writable file handle opened earlier with fopen()
$options = array(
    CURLOPT_FILE    => $fh,                // write the response body straight to this file
    CURLOPT_TIMEOUT => 28800,              // 8 hours, so we don't time out on big files
    CURLOPT_URL     => $target_zip . '.zip',
);

$ch = curl_init();
curl_setopt_array($ch, $options);
curl_exec($ch);
curl_close($ch);
Thanks

curl_exec() is a blocking call: your script stops at that line until the transfer finishes (or fails). Only after curl_exec($ch) returns and you call curl_close($ch) does the script move on to the next line. In a plain PHP script like yours, nothing after curl_exec() runs while the download is still in progress.
If instead you want to push the cURL work into the background, do other work in the meantime, and have a specific piece of code (maybe a function) run when the transfer finishes, you're describing asynchronous behaviour; in a browser context that's what Ajax with a callback gives you.
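Since curl_exec() blocks, the usual pattern is simply to check its return value (and curl_errno()) before touching the downloaded file. A minimal sketch along those lines, reusing your options (the local path is illustrative):
$fh = fopen('/tmp/download.zip', 'w');

$ch = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_FILE    => $fh,
    CURLOPT_TIMEOUT => 28800,
    CURLOPT_URL     => $target_zip . '.zip',
));

$ok  = curl_exec($ch);   // blocks here until the transfer ends
$err = curl_errno($ch);  // 0 on success
curl_close($ch);
fclose($fh);

if ($ok === false || $err !== 0) {
    die('Download failed: cURL error ' . $err);
}

// only reached once the file has been fully written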

Related

PHP handling of unresponsive cURL

I have a PHP script that fetches data from external sites using cURL, then, after three minutes, reloads itself, fetches new data, and displays updates. It works fine, but if there is a network failure (I presume it's cURL not getting responses), PHP just hangs without returning errors or anything. These hanging processes then need to be killed manually.
How can I deal with this situation? Tweak cURL options? Modify the PHP script so that it watches for an unresponsive cURL? Or handle everything from the browser through Ajax, including firing off a script that kills hanging PHP processes?
Solution: I've added
curl_setopt($ch, CURLOPT_FAILONERROR, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
to my cURL calls and added a catch for these errors to my response-checking code. Conceptually, that's all that was needed; CURLOPT_CONNECTTIMEOUT doesn't seem to be necessary because I already have reloading set up in case of errors.
It works with a manual disconnect, but I haven't yet seen how the script handles real-life network failures. It should be okay.
To handle network issues, use the CURLOPT_CONNECTTIMEOUT option to set a limit in seconds. cURL will wait at most that many seconds to establish a connection to the target host.
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
And use the CURLOPT_TIMEOUT option to cap the number of seconds the whole cURL operation may take. This helps when the target server accepts the connection but never releases it.
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
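Putting the two timeouts together with the error catch mentioned in the solution above, a minimal sketch (the URL and limits are illustrative):
$ch = curl_init('http://example.com/data');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FAILONERROR, true);   // treat HTTP errors (>= 400) as failures
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);  // give up if no connection within 10s
curl_setopt($ch, CURLOPT_TIMEOUT, 30);         // give up if the whole transfer exceeds 30s

$data = curl_exec($ch);

if ($data === false) {
    // curl_errno() is CURLE_OPERATION_TIMEDOUT (28) on a timeout
    error_log('Fetch failed - ' . curl_errno($ch) . ': ' . curl_error($ch));
    // schedule the reload instead of hanging
}
curl_close($ch);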

Downloading remote files with PHP/cURL: a bit more robustly

I have a script that pulls URLs from the database and downloads them (PDF or JPG) to a local file.
Code is:
$cp = curl_init($remote_url);
$fp = fopen($dest_temp, 'w');
curl_setopt($cp, CURLOPT_FILE, $fp);      // write the response body to the file
curl_setopt($cp, CURLOPT_HEADER, false);  // keep headers out of the downloaded file
curl_exec($cp);
curl_close($cp);
fclose($fp);
If the remote file is there, it works fine. If the remote file is not there, the request just bombs and the browser hangs forever.
What's the best approach to handling this? Should I somehow check for the file first, or can I set options above that will handle it? I tried setting timeouts, but that had no effect.
This is my first experience using cURL.
I used to use wget much as you're using cURL, and I got frustrated by the lack of insight into what was going on, because it's essentially calling out to an external program.
I use Perl's WWW::Mechanize, and the link below is a PHP version of it, which might be a bit more robust for dealing with such cases:
http://www.compasswebpublisher.com/php/www-mechanize-for-php
Hope this helps.
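If you'd rather stay with plain cURL, you can detect a missing remote file instead of hanging: tell cURL to fail on HTTP errors, set timeouts, and check the status code afterwards. A minimal sketch, reusing $remote_url and $dest_temp from the question (the timeout values are illustrative):
$cp = curl_init($remote_url);
$fp = fopen($dest_temp, 'w');

curl_setopt($cp, CURLOPT_FILE, $fp);
curl_setopt($cp, CURLOPT_FAILONERROR, true);   // a 404 makes curl_exec() return false
curl_setopt($cp, CURLOPT_CONNECTTIMEOUT, 10);
curl_setopt($cp, CURLOPT_TIMEOUT, 60);

$ok     = curl_exec($cp);
$status = curl_getinfo($cp, CURLINFO_HTTP_CODE);
curl_close($cp);
fclose($fp);

if ($ok === false || $status != 200) {
    unlink($dest_temp);  // drop the empty/partial file
    // log the failure and move on to the next URL
}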

PHP cURL timeout ignored

Using curl_setopt() I have set CURLOPT_CONNECTTIMEOUT_MS to 1000 (1 second), and for testing purposes I have set up another script that sleeps for 5 seconds (using sleep()) before responding 200 OK. My script always waits for the full response, even though this should result in a cURL timeout error.
How do I make the timeout work as expected and interrupt the request?
$ch = curl_init($url);
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER    => TRUE,
    CURLOPT_FOLLOWLOCATION    => TRUE,
    CURLOPT_NOBODY            => TRUE,
    CURLOPT_PROTOCOLS         => CURLPROTO_HTTP | CURLPROTO_HTTPS,
    CURLOPT_CONNECTTIMEOUT_MS => 1000,
    CURLOPT_MAXREDIRS         => 5,
    CURLOPT_USERAGENT         => 'Linkit/2.x Drupal/7.x',
));
$document = curl_exec($ch);
I have also tried CURLOPT_TIMEOUT_MS and also the variants without the _MS suffixes.
I'm using PHP 5.3.4 with cURL 7.19.7 on OS X 10.6, XAMPP.
CURLOPT_CONNECTTIMEOUT (or CURLOPT_CONNECTTIMEOUT_MS) defines the maximum amount of time cURL may take to connect to the server. In your case the connection succeeds, so that timeout no longer applies.
You need CURLOPT_TIMEOUT (or CURLOPT_TIMEOUT_MS), which defines the maximum amount of time the whole cURL operation may take.
For a complete list of options supported by PHP, look at the curl_setopt documentation.
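Applied to your snippet, that means adding a total-operation timeout alongside the connect timeout. One caveat worth knowing: with some libcurl builds, sub-second *_MS timeouts only behave correctly when CURLOPT_NOSIGNAL is set, because the signal-based DNS resolver works in whole seconds. A sketch (the 3000 ms value is illustrative):
curl_setopt_array($ch, array(
    CURLOPT_NOSIGNAL          => TRUE,  // allow sub-second timeouts on signal-based resolvers
    CURLOPT_CONNECTTIMEOUT_MS => 1000,  // max time to establish the connection
    CURLOPT_TIMEOUT_MS        => 3000,  // max time for the entire request
));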
The cURL extension hands the transfer to libcurl, which performs it independently of your PHP logic (side note: that's why the curl_multi_* functions can run several transfers in parallel even though PHP itself doesn't support threading). So if the remote script sleep()s, cURL just keeps waiting for it.
Also, the timeout setting controls how long cURL waits on the request, not how long your script runs. For instance, if I make a cURL request to google.com and google.com takes forever to respond, the timeout setting tells cURL how long to sit around and wait for google.com to answer.
edit:
Okay, so your cURL script makes a request to another script, and that script has the sleep() in it. The CURLOPT_CONNECTTIMEOUT (or _MS) setting tells cURL how long to wait for a connection to the requested server to be established. When your request is made, the connection succeeds quickly; the sleep() merely delays the response body. So "wait for a connection" is not the same as "how long until the whole cURL execution times out".
What you want is CURLOPT_TIMEOUT or CURLOPT_TIMEOUT_MS.
Well, I had the same problem and wasted a lot of time looking for a solution, so I'm sharing the one that finally worked in case it helps someone in the future.
I simply used both options, with 4 seconds and 8 seconds respectively:
curl_setopt($curl_session, CURLOPT_CONNECTTIMEOUT, 4);
curl_setopt($curl_session, CURLOPT_TIMEOUT, 8);

php file_get_contents fails sometimes during cronjob

I am trying to run a php script via a cronjob and sometimes (about half the time) I get the following warning:
PHP Warning: file_get_contents(http://url.com): failed to open stream: HTTP request failed! in /path/myfile.php on line 285
The program continues to run after that, which makes me think it is not a timeout or memory issue (the timeout is set to 10 minutes and memory to 128M), but the variable I store that function call's result in ends up empty. The weird part is that I make several other calls to this same website with other URL parameters and they never have a problem. The only difference with this call is that the file it downloads is about 70 MB, while the others are all around 300 KB.
Also, I never get this warning if I SSH into the web server and run the PHP script manually, only when it runs from cron.
I have also tried using cURL instead of file_get_contents but then I run out of memory.
Thanks, any help here would be appreciated.
Perhaps the remote server on URL.com is sometimes timing out or returning an error for that particular (large) request?
I don't think you should be trying to store 70 MB in a variable.
You can configure cURL to download directly to a file. Something like:
$file = fopen('my.file', 'w');
$c = curl_init('http://url.com/whatever');
curl_setopt($c, CURLOPT_FILE, $file);  // stream the response to disk instead of memory
curl_exec($c);
curl_close($c);
fclose($file);
If nothing else, curl should provide you with much better errors about what's going wrong.
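Since this runs from cron, you'll want those errors logged rather than printed. A small sketch of surfacing them (the path and URL are illustrative):
$file = fopen('/tmp/big-download', 'w');
$c = curl_init('http://url.com/whatever');
curl_setopt($c, CURLOPT_FILE, $file);
curl_setopt($c, CURLOPT_FAILONERROR, true);  // HTTP errors make curl_exec() return false

if (curl_exec($c) === false) {
    error_log('Download failed: ' . curl_errno($c) . ' ' . curl_error($c));
}

curl_close($c);
fclose($file);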
From another answer: double-check that this issue isn't caused some of the time by the URL parameters you're using:
Note: If you're opening a URI with special characters, such as spaces, you need to encode the URI with urlencode() - http://docs.php.net/file%5Fget%5Fcontents

How do I transfer data to the next server

I am trying to pass some information from the server the script runs on to the next URL/server.
I tried using cURL and ran into the following two problems:
if it cannot locate the file, it reports a "file not found" error
it waits until the remote file has finished executing
How can I overcome both of these, either with cURL or with other commands?
Edit:
Now I would like to suppress the "file not found" error message that cURL displays when the file really doesn't exist.
I don't want any output from the destination page, so I don't want to wait until its execution finishes. I just want to trigger the code and continue with the rest of my scripts.
Example:
I am trying to build a logging system that lives entirely on another web server. A client website that implements the log system sends the data the system needs by calling a file on my web server.
Code I am using:
// create a new cURL resource
$ch = curl_init();
// set URL and other appropriate options
curl_setopt($ch, CURLOPT_URL, "http://example.com/log.php?data=data");
curl_setopt($ch, CURLOPT_HEADER, 0);
// grab URL and pass it to the browser
curl_exec($ch);
// close cURL resource, and free up system resources
curl_close($ch);
Why don't you want to use standard PHP features?
<?php
$logpage = file_get_contents('http://example.com/log.php?data=data');
echo $logpage;
?>
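Note that file_get_contents() also waits for the response. If you do stay with it, a stream context at least lets you cap the wait time, and the @ operator silences the warning when the file doesn't exist (the 2-second timeout is illustrative):
<?php
$context = stream_context_create(array(
    'http' => array(
        'timeout' => 2,  // seconds to wait before giving up
    ),
));

$logpage = @file_get_contents('http://example.com/log.php?data=data', false, $context);
?>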
