PHP handling of unresponsive cURL

I have a PHP script that fetches data from external sites using cURL, then, after three minutes, reloads itself, fetches new data and displays the updates. It works fine, but if there is a network failure (I presume it's cURL not getting responses), PHP just hangs without returning errors or anything. These hanging processes then need to be killed manually.
How can I deal with this situation? Tweak cURL options? Modify the PHP script so that it watches for unresponsive cURL? Or handle everything from the browser through AJAX, including firing off a script that kills hanging PHP processes?
Solution: I've added
curl_setopt($ch, CURLOPT_FAILONERROR, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
to my cURL handle and added a catch for these errors to my response-checking code. Conceptually, that's all that was needed; CURLOPT_CONNECTTIMEOUT doesn't seem to be necessary because I already have reloading set up in case of errors.
It works with a manual disconnect, but I haven't yet seen how the script handles real-life network failures. Should be okay.
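For reference, a minimal sketch of that pattern; the URL and the retry handling are placeholders, not the exact production code:
<?php
// Minimal sketch of the timeout-plus-error-check approach described above.
// $url stands in for whichever external site is being polled.
$url = 'http://example.com/data';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FAILONERROR, true); // treat HTTP >= 400 as a cURL error
curl_setopt($ch, CURLOPT_TIMEOUT, 10);       // abort the whole transfer after 10 seconds

$data = curl_exec($ch);
if ($data === false) {
    // curl_errno() returns 28 (operation timed out) when the connection hangs
    $error = curl_error($ch);
    // ... log $error and let the page's reload mechanism retry ...
}
curl_close($ch);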

To handle network issues, use the CURLOPT_CONNECTTIMEOUT option to set a limit in seconds. cURL will wait at most that many seconds to connect to the target host.
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
And use the CURLOPT_TIMEOUT option to define the number of seconds you want to allow for the whole operation. This helps when the target server accepts the connection but never releases it.
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
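Putting the two together on one handle (a sketch; the URL and limits are illustrative, and error code 28 is libcurl's CURLE_OPERATION_TIMEDOUT):
<?php
// Sketch: both limits on one handle; values are examples only.
$ch = curl_init('http://example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10); // give up if no connection within 10s
curl_setopt($ch, CURLOPT_TIMEOUT, 30);        // give up if the whole transfer exceeds 30s
$response = curl_exec($ch);
if ($response === false && curl_errno($ch) === 28) {
    // timed out: either the connect or the overall transfer exceeded its limit
}
curl_close($ch);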

Related

How to make a webapp (PHP) receive a response from an API (Python) after a long wait time?

I have a web application written in PHP, running on a Linux Azure virtual machine with NGINX. The application is connected to an API (written in Python) on a separate server with NGINX (a similar Linux Azure virtual machine). This API performs a complex operation which takes between 30 seconds and 20 minutes to complete, so the application has to wait for it.
The problem is that with long wait times, the API response is not registered in the web app. I have tried the following:
— verified in the API endpoint and its logs that the API provides a response after long processing times (it does)
I suspect it is a timeout issue, so I have tried:
— fixed the PHP timeout settings and the timeout for the /login_c/check_login endpoint
— checked the code for the request and response sent to and received from the API, where I am using cURL. These are the timeout parameters for cURL:
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 0);
curl_setopt($ch, CURLOPT_TIMEOUT, 2100);
The command executes in the background via exec():
exec($command);
The following articles did not provide a solution:
Setting Curl's Timeout in PHP
PHP cURL methods time out on some URLs, but command line always works
Any advice on how to solve this problem?
You must edit php.ini or add this to the PHP script:
ini_set("max_execution_time", 1800); // for a 30-minute request
It seems that this solved the problem:
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 0);
curl_setopt($ch, CURLOPT_TIMEOUT, 2100);
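So both limits end up raised: PHP's own execution limit and cURL's transfer timeout. A sketch of the combination, using the values from the question (the URL is a placeholder):
<?php
ini_set("max_execution_time", 1800);  // let the PHP script itself run for 30 minutes

$ch = curl_init('http://api.example.com/endpoint'); // placeholder for the API URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 0); // 0 = no separate connect limit
curl_setopt($ch, CURLOPT_TIMEOUT, 2100);     // allow up to 35 minutes for the response
$response = curl_exec($ch);
curl_close($ch);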

PHP cURL timeout ignored

Using curl_setopt() I have set CURLOPT_CONNECTTIMEOUT_MS to 1000 (1 second), and for testing I have set up another script that sleeps for 5 seconds (using sleep()) and then responds 200 OK. My script always waits for the response, even though it should result in a cURL timeout error.
How do I make the timeout work as expected and interrupt the request?
$ch = curl_init($url);
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => TRUE,
    CURLOPT_FOLLOWLOCATION => TRUE,
    CURLOPT_NOBODY => TRUE,
    CURLOPT_PROTOCOLS => CURLPROTO_HTTP | CURLPROTO_HTTPS,
    CURLOPT_CONNECTTIMEOUT_MS => 1000,
    CURLOPT_MAXREDIRS => 5,
    CURLOPT_USERAGENT => 'Linkit/2.x Drupal/7.x',
));
$document = curl_exec($ch);
I have also tried CURLOPT_TIMEOUT_MS and also the variants without the _MS suffixes.
I'm using PHP 5.3.4 with cURL 7.19.7 on OS X 10.6, XAMPP.
CURLOPT_CONNECTTIMEOUT and CURLOPT_CONNECTTIMEOUT_MS define the maximum amount of time cURL may take to connect to the server. In your case the connection succeeds, so that timeout no longer applies.
You need CURLOPT_TIMEOUT or CURLOPT_TIMEOUT_MS, which define the maximum amount of time the entire transfer may take.
For a complete list of options supported by PHP, look at the curl_setopt documentation.
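Applied to the snippet above, that would look something like this (keeping the original options and adding an overall limit; the 5000 ms value is illustrative):
$ch = curl_init($url);
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => TRUE,
    CURLOPT_FOLLOWLOCATION => TRUE,
    CURLOPT_NOBODY => TRUE,
    CURLOPT_PROTOCOLS => CURLPROTO_HTTP | CURLPROTO_HTTPS,
    CURLOPT_CONNECTTIMEOUT_MS => 1000, // limit on establishing the connection
    CURLOPT_TIMEOUT_MS => 5000,        // limit on the whole request, response included
    CURLOPT_NOSIGNAL => 1,             // often needed for sub-second *_MS timeouts on older curl builds
    CURLOPT_MAXREDIRS => 5,
    CURLOPT_USERAGENT => 'Linkit/2.x Drupal/7.x',
));
$document = curl_exec($ch);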
The curl library makes a system call and operates independently of PHP (side note: that's why it's possible to take advantage of multi-threading with cURL, even though PHP itself doesn't support threading). So if you make the cURL call and then sleep(), curl still runs.
Also, the timeout setting is for how long to wait for the request to time out, not for your script. For instance, if I make a cURL request to google.com and google.com takes forever to respond, the timeout setting lets me tell cURL how long to sit around and wait for google.com to respond.
edit:
Okay, so you are saying you have a cURL script that makes a request to another script, and that script has the sleep() in it. Well, the CURLOPT_CONNECTTIMEOUT (or _MS) setting tells cURL how long to wait for a connection to the requested server to be made. When the cURL request is made, it gets a response that a connection was made; the sleep() is then just delaying the output. So basically, "wait for a connection" is not the same as "how long before the whole cURL execution times out".
What you want to use is CURLOPT_TIMEOUT or CURLOPT_TIMEOUT_MS.
Well, I had the same problem and wasted a lot of time looking for the solution, but found a working one in the end.
I thought I should share it here, as it might be helpful for someone in the future.
I simply used both options, with 4 seconds and 8 seconds respectively.
curl_setopt($curl_session, CURLOPT_CONNECTTIMEOUT, 4);
curl_setopt($curl_session, CURLOPT_TIMEOUT, 8);

Can I do a CURL request to the same server?

I need to implement a way to make POST calls to pages located on the same server or on another server. We cannot use include because the files we are calling usually connect to different databases or have functions with the same name.
I've been trying to implement this using cURL, and while it works perfectly when calling files on another server, I get absolutely nothing when calling a file on the same server.
EDIT TO ADD SOME CODE:
A simplified version of what I'm doing:
File1.php
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "www.myserver.com/File2.php");
curl_setopt($ch, CURLOPT_FRESH_CONNECT, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_VERBOSE, true);
$result = curl_exec($ch);
curl_close($ch);
echo $result;
?>
File2.php
<?php
echo "I'M IN!!";
?>
After calling File1.php I get nothing, but if File2.php is on another server then I get a result.
Any help?
I tried using both the server URL (http...) and the full path of the files (/home/wwww....)
Be aware that if you're issuing the cURL request to your own site, you're using the default session handler, and the page you're requesting via cURL uses the same session as the page generating the request, you'll run into a deadlock.
The default session handler locks the session file for the duration of the page request. When you try to request another page using the same session, that subsequent request will hang until the request times out or the session file becomes available. Since you're doing an internal cURL request, the script running cURL holds a lock on the session file, and the cURL request can never complete because the target page can never open the session.
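If that's the case, one common workaround is to release the session lock before issuing the request (a sketch, assuming the default file-based session handler):
<?php
session_start();
// ... read whatever session data this page needs ...

session_write_close(); // release the session file lock

$ch = curl_init("http://www.myserver.com/File2.php");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
curl_close($ch);

session_start(); // re-open the session if the rest of the script needs it
echo $result;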
This can also happen because when you request the local server through its public IP, Apache cannot resolve it to the local domain. Check which local IP Apache is using for that domain, then edit the /etc/hosts file and add a new row with that local IP plus your domain. For example:
My local IP for that domain in Apache's virtual host is 172.190.1.120 and my domain is mydomain.com.
So I will add:
172.190.1.120 mydomain.com
Then your curl will work properly.
You should refactor your code. In addition to what Marc B mentioned, this approach will unnecessarily slow down your script (potentially by a large margin) and cause lots of confusion. No offense, but this is just an incredibly hacky fix for bad logic.

PHP curl post to login to wordpress

I am using PHP cURL to log in to WordPress behind the scenes, as described here:
Wordpress autologin using CURL or fsockopen in PHP
However, my script is not setting the cookies necessary to retain the WordPress session. Instead they are being sent back to my script and stored in cookies.txt.
Both the cURL script and the WordPress login are on the same server, in different directories.
Do I need to write another curl script to manually set the wordpress cookies? Is that possible?
If you're just using the posted code as-is, it won't work, because the subsequent requests won't send the cookies back. Adding curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie); should help, at least if the cookies actually get saved (otherwise look into file permissions), depending on your usage scenario.
You could also check out 10 awesome things to do with cURL for some neat examples on how to use curl (example 4 might just be what you are looking for).
BTW, if this script is intended for multiple (concurrent) users, you shouldn't use a static filename; create a temporary file for each user instead.
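A sketch of the full cookie round-trip (the filename and blog URL are placeholders; as noted above, prefer a per-user temporary file):
<?php
$cookie = "/tmp/cookies.txt"; // placeholder; e.g. tempnam(sys_get_temp_dir(), "wpck") per user

$ch = curl_init("http://www.example.com/wp-login.php"); // placeholder blog URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie);  // write received cookies here
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie); // send them back on later requests
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    "log" => "username", // WordPress login field names
    "pwd" => "password",
)));
curl_exec($ch);
curl_close($ch);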
I needed the cookies to be sent to the browser, not back to my cURL script. The cURL script was triggered by a PHP script running in the browser.
I solved the problem as follows:
Added these params to the cURL handle:
curl_setopt($ch, CURLOPT_HEADER, 1);
curl_setopt($ch, CURLOPT_HEADERFUNCTION, 'read_header');
Added a PHP function called read_header which parses out the cookie data
Used setcookie() to manually set the cookies
If anyone wants the full details, please comment.
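For reference, a sketch of what such a header callback can look like; the parsing is deliberately naive (cookie attributes such as path and expiry are dropped), and it assumes the $ch handle from the snippet above, so treat it as an illustration rather than the exact code used:
// cURL calls this once per response header line; it must return
// the number of bytes it handled.
function read_header($ch, $header)
{
    if (stripos($header, "Set-Cookie:") === 0) {
        // "Set-Cookie: name=value; path=/; ..." -> keep only "name=value"
        $pair = trim(substr($header, strlen("Set-Cookie:")));
        $pair = trim(strtok($pair, ";")); // drop the cookie attributes
        if (strpos($pair, "=") !== false) {
            list($name, $value) = explode("=", $pair, 2);
            setcookie($name, urldecode($value), 0, "/"); // relay to the browser; setcookie() re-encodes
        }
    }
    return strlen($header);
}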

How do I transfer data to next server

I am trying to pass some information from the server the script is running on to another URL or server.
I tried using cURL, which has the following two disadvantages:
if it cannot locate the file, it reports that the file was not found
it waits until the remote file has finished executing
How can I overcome both of the things either by using curl or other commands?
Edit:
Now I would like to suppress the file-not-found error message displayed by cURL, even if the file really doesn't exist.
I don't want output from the destination page, so I don't want to wait until the destination page has finished executing. I just want to trigger the code and continue with the rest of my script.
Example:
I am trying to build a logging system that keeps everything on another web server. The client website that implements the logging system sends the data required by the system by calling a file on my web server.
Code I am using:
// create a new cURL resource
$ch = curl_init();
// set URL and other appropriate options
curl_setopt($ch, CURLOPT_URL, "http://example.com/log.php?data=data");
curl_setopt($ch, CURLOPT_HEADER, 0);
// grab URL and pass it to the browser
curl_exec($ch);
// close cURL resource, and free up system resources
curl_close($ch);
Why don't you want to use standard PHP features?
<?php
$logpage = file_get_contents('http://example.com/log.php?data=data');
echo $logpage;
?>
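That still blocks until log.php finishes, though. If the response genuinely doesn't matter, a fire-and-forget sketch along these lines sends the request and returns immediately; the host and path are placeholders, and a 404 is simply never read, which addresses the first complaint:
<?php
// Open a socket, write the request, and close without waiting
// for the remote script to finish.
$host = "example.com";
$path = "/log.php?data=data";

$fp = @fsockopen($host, 80, $errno, $errstr, 5); // 5-second connect timeout
if ($fp) {
    $out  = "GET $path HTTP/1.1\r\n";
    $out .= "Host: $host\r\n";
    $out .= "Connection: Close\r\n\r\n";
    fwrite($fp, $out);
    fclose($fp); // don't read the response; just move on
}
Depending on the server, the target log.php may also need ignore_user_abort(true) so it keeps running after the client disconnects.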
