I have a PHP file that invokes another PHP file via cURL. I am trying to have the second file send a response back to the first to let it know that it started. The problem is that the first file can't wait for the second to finish execution, because that can take a minute or more; I need it to send a response immediately and then go about its regular business. I tried using an echo at the top of the second file, but the first doesn't receive that as a response.
How do I send back a response without finishing execution?
file1.php
<?php
$url = 'file2.php';
$params = array('data' => $data, 'moredata' => $moredata);
$options = array(
    CURLOPT_RETURNTRANSFER => true,                      // return web page
    CURLOPT_HEADER         => false,                     // don't return headers
    CURLOPT_FOLLOWLOCATION => true,                      // follow redirects
    CURLOPT_ENCODING       => "",                        // handle all encodings
    CURLOPT_USERAGENT      => "Mozilla",                 // who am i
    CURLOPT_AUTOREFERER    => true,                      // set referer on redirect
    CURLOPT_CONNECTTIMEOUT => 120,                       // timeout on connect
    CURLOPT_MAXREDIRS      => 10,                        // stop after 10 redirects
    CURLOPT_TIMEOUT        => 10,                        // don't wait too long for a response
    CURLOPT_POST           => true,                      // use method POST (not GET)
    CURLOPT_POSTFIELDS     => http_build_query($params)
);
$ch = curl_init($url);
curl_setopt_array($ch, $options);
$response = curl_exec($ch); // See that the page started.
curl_close($ch);
echo 'Response: ' . $response;
?>
file2.php
<?php
/* This is the top of the file. */
echo 'I started.';
// ... other code that can take a minute or more ...
?>
When I run file1.php the result is 'Response: ', but I expect it to be 'Response: I started.' I know that file2.php gets started because the other code gets executed, but the echo doesn't get sent back to file1.php. Why?
This could be just what you're looking for. Forking in PHP:
http://framework.zend.com/manual/en/zendx.console.process.unix.overview.html
A process divides in two: one is the parent of the other. The parent can tell the client that the job has just begun while the child does the work. When the child finishes, it can report back to the parent, which can in turn report to the client (see the sketch after the requirements list below).
Keep in mind there are many requirements for this to run:
Linux
CLI or CGI interface
shmop, pcntl and posix extensions (require recompiling)
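A minimal sketch of the idea, assuming the pcntl extension is available and the script runs from the CLI (the flag-file path is made up for illustration):
<?php
// The parent acknowledges immediately; the child does the long-running job.
$pid = pcntl_fork();
if ($pid === -1) {
    exit('Could not fork');
} elseif ($pid > 0) {
    // Parent process: reply right away.
    echo 'I started.';
} else {
    // Child process: do the heavy lifting without blocking the parent.
    // ... long-running work here ...
    file_put_contents('/tmp/job_finished.flag', 'done'); // hypothetical completion marker
}
?>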
The answer ended up being that CURL does not behave like a browser:
PHP Curl output buffer not receiving response
I ended up running my 2nd file first and my 1st file second. The 2nd file waited for a 'finished' flag file that the 1st file wrote once it had, obviously, finished.
At this point, it seems like a database would be a better place to store messages for the files to pass between each other, but a flag file also works for a quick and dirty job.
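A rough sketch of that flag-file handshake, with a made-up file name; the waiting script polls until the other script writes the marker:
<?php
// Waiting script (the "2nd file"): poll until the other script writes the marker.
$flag = __DIR__ . '/finished.flag';
while (!file_exists($flag)) {
    sleep(1); // check once per second
}
unlink($flag); // clean up the marker, then carry on with the rest of the work

// The finishing script (the "1st file") simply ends with:
// file_put_contents(__DIR__ . '/finished.flag', 'finished');
?>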
Related
I am sending about 600 cURL requests to different websites, and at some point my page stops/breaks and here is the error I am getting.
Website.com unexpectedly closed the connection.
ERR_INCOMPLETE_CHUNKED_ENCODING
I am looping the function below through all my 600 websites.
function GetCash($providerUrl, $providerKey){
    $url = check_protocol($providerUrl);
    $post = [
        'key'    => Decrypt($providerKey),
        'action' => 'balance'
    ];
    // Sets our options array so we can assign them all at once
    $options = [
        CURLOPT_URL            => $url,
        //CURLOPT_POST         => false,
        CURLOPT_POSTFIELDS     => $post,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CONNECTTIMEOUT => 5,
        CURLOPT_TIMEOUT        => 5,
    ];
    // Initiates the cURL object
    $curl = curl_init();
    curl_setopt_array($curl, $options);
    $json = curl_exec($curl);
    curl_close($curl);
    // Big variable of all the values
    $services = json_decode($json, true);
    // Check for invalid API response
    if($services['error'] == "Invalid API key"){
        return FALSE;
    }else{
        return $services['balance'];
    }
    return FALSE;
}
If you are sending requests to 600 different websites in synchronous fashion, it is very likely that the request is simply exceeding PHP's time limit. Depending on what the page was outputting, it may abruptly truncate the data, resulting in this error. To see if this is the case, try only querying a few websites.
You may be able to run set_time_limit(0) in your PHP code to remove the time limit, but it might still hit some sort of browser timeout. For that reason, it is generally best to run long-running tasks from the command line, which has no time limit, e.g. php /path/to/script.php.
If you still need the results to show up on an HTML page, you may want to consider spawning a background task, having it save its progress to a text file or database of some sort, and use AJAX requests to continually check the progress.
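As a rough illustration of that pattern (the file names and paths below are made up), a small launcher could kick off the long job in the background:
<?php
// launch.php (hypothetical): start the long job in the background and return immediately.
exec('php /path/to/script.php > /dev/null 2>&1 &');
echo 'Job started';
?>
and a small endpoint could serve whatever the job last reported for the AJAX polling:
<?php
// progress.php (hypothetical): report whatever the background job last wrote to its progress file.
header('Content-Type: application/json');
$progressFile = '/tmp/job_progress.txt';
$status = is_readable($progressFile) ? trim(file_get_contents($progressFile)) : 'not started';
echo json_encode(['status' => $status]);
?>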
I have two PHP files: one for "heavy lifting" and one for quick responses that marshals the request to the heavy lifter, so that the quick-response file can reply to the server request immediately (at least, that is the goal). The premise for this is Slack slash commands, which expect an instant 200 response to let the user know the command is running.
<?php
echo("I want this text to reply to server instantly");
ini_set('display_errors', 1);
ini_set('display_startup_errors', 1);
error_reporting(E_ALL);
$code = '200';
$curl = curl_init();
curl_setopt_array($curl, array(
    CURLOPT_URL            => "http://myheavyliftingfile.php",
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_ENCODING       => "",
    CURLOPT_MAXREDIRS      => 10,
    CURLOPT_TIMEOUT        => 30,
    CURLOPT_HTTP_VERSION   => CURL_HTTP_VERSION_1_1,
    CURLOPT_CUSTOMREQUEST  => "POST",
    CURLOPT_POSTFIELDS     => "datatobeusedbyheavylifter:data",
    CURLOPT_HTTPHEADER     => array(
        "cache-control: no-cache",
        "content-type: application/x-www-form-urlencoded",
        "postman-token: 60757c65-a11e-e524-e909-4bfa3a2845fb"
    ),
));
$response = curl_exec($curl);
?>
What seems to be happening is that my response/echo doesn't get sent to Slack until my heavylifting.php curl finishes, even though I want my response to go out immediately while the heavy-lifting process runs separately. How can I have one PHP file acknowledge the request, kick off another process in a different file, and respond without waiting for the long process to finish?
Update
I do not wish to run multiple curls at once; I just wish to execute one curl but not wait for it to return before sending a message back to Slack to say I received the request. My curl sends data to my other PHP file that does the heavy lifting. If this is still the same issue as defined in the duplicate, feel free to flag it again and I won't reopen.
The reason this does not work is that PHP curl calls are always synchronous, and your timeout is set to 30 seconds, which far exceeds the maximum of 3 seconds allowed for Slash commands.
But there is a fix to make this work. You just need these small changes:
Set the curl timeout to a smaller value to ensure your first script is completing below the 3 second threshold, e.g. set CURLOPT_TIMEOUT_MS to 400, which defines a timeout of 400 ms.
Set CURLOPT_NOSIGNAL to 1 in your first script. This is required for the timeout to work in UNIX based systems.
Make sure to ignore timeout-errors (CURL ERROR 28) in your first script, since your curl should always return a timeout error.
Make sure your second script is not aborted by the forced timeout by adding this line: ignore_user_abort(true);
See also this answer for a full example.
P.S.: You do not need any buffer flushing for this approach.
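A minimal sketch of that approach, with a placeholder URL and payload, assuming the heavy lifter is reachable over HTTP:
<?php
// Script 1: fire the request, give up after 400 ms, and treat the expected timeout as success.
$ch = curl_init('https://example.com/heavy-lifter.php'); // placeholder URL
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query(array('job' => 'start')),
    CURLOPT_TIMEOUT_MS     => 400, // abort quickly so we answer Slack in time
    CURLOPT_NOSIGNAL       => 1,   // required for sub-second timeouts on UNIX-based systems
));
curl_exec($ch);
$err = curl_errno($ch);
curl_close($ch);
if ($err !== 0 && $err !== 28) { // 28 = CURLE_OPERATION_TIMEDOUT, which is expected here
    // handle a genuine error
}
echo 'Acknowledged'; // goes back to Slack well within the 3-second limit

// Script 2 (the heavy lifter) should start with:
// ignore_user_abort(true); // keep running after the caller disconnects
?>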
We have a RESTful API and RESTful clients, both in PHP. The client connects to the server via cURL HTTP requests.
$handler = curl_init(self::API_ENDPOINT_URI . $resource);
$options = [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_CUSTOMREQUEST  => $method,
    CURLOPT_TIMEOUT        => 6000,
];
curl_setopt_array($handler, $options);
$result = curl_exec($handler);
curl_close($handler);
Then in model somewhere we call it:
$request = json_decode($this->_doRequest('/client/some_id'));
There is a JSON response and we parse it. Everything is OK until some users start creating multiple requests and PHP hangs. For example, we have a client page which makes ~5 requests to the API server. When a user opens 10 tabs in the browser with 10 different clients, that's ~50 requests going one by one. That means the other tabs won't start their work before the first tab has finished.
Is it any way to fix this issue in simple way?
We would like to use the cURL multi handle for this but are not sure how to get responses immediately.
Thanks.
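A minimal curl_multi sketch, with a placeholder endpoint and resources, showing several API calls issued in parallel rather than one by one:
<?php
// Placeholder endpoint and resources; the requests run concurrently on one multi handle.
$base = 'https://api.example.com';
$resources = array('/client/1', '/client/2', '/client/3');
$mh = curl_multi_init();
$handles = array();
foreach ($resources as $resource) {
    $ch = curl_init($base . $resource);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // wait for activity instead of busy-looping
} while ($running > 0);
$responses = array();
foreach ($handles as $ch) {
    $responses[] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
?>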
Currently I'm writing a PHP script that is supposed to check whether a URL is current (returns an HTTP 200 code or redirects to such a URL).
Since several of the URLs to be tested return a file, I'd like to avoid using a normal GET request, so as not to actually download the file.
I would normally use the HTTP HEAD method; however, tests show that many servers don't recognize it and return a different HTTP code than the corresponding GET request.
My idea was now to make a GET request and use CURLOPT_HEADERFUNCTION to define a callback function that checks the HTTP code in the first line of the header and then immediately terminates the request by having the callback return 0 (instead of the length of the header) if it's not a redirect code.
My question is: Is it ok, to terminate a HTTP request like that? Or will it have any negative effects on the server? Will this actually avoid the unnecessary download?
Example code (untested):
$url = "http://www.example.com/";
$ch = curl_init($url);
curl_setopt_array($ch, array(
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_HEADER         => true,
    CURLINFO_HEADER_OUT    => true,
    CURLOPT_HTTPGET        => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HEADERFUNCTION => 'requestHeaderCallback',
));
$curlResult = curl_exec($ch);
curl_close($ch);

function requestHeaderCallback($ch, $header) {
    $matches = array();
    // Abort the transfer (by returning 0) unless the status line is a 3xx redirect.
    if (preg_match("#^HTTP/\d\.\d (\d{3}) #", $header, $matches)) {
        if ($matches[1] < 300 || $matches[1] >= 400) {
            return 0;
        }
    }
    return strlen($header);
}
Yes, it is fine, and yes, it will stop the transfer right there.
It will also cause the connection to be disconnected, which is only a concern if you intend to make many requests to the same host, as keeping the connection alive could then be a performance benefit.
We've gotten permission to periodically copy a webcam image from another site. We use cURL functions elsewhere in our code, but when trying to access this image, we are unable to.
I'm not sure what is going on. The code we use for many other cURL functions is like so:
$image = 'http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard'
$options = array(
    CURLOPT_URL            => $image,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_CONNECTTIMEOUT => 120,
    CURLOPT_TIMEOUT        => 120,
    CURLOPT_MAXREDIRS      => 10
);
$ch = curl_init();
curl_setopt_array($ch, $options);
$cURL_source = curl_exec($ch);
curl_close($ch);
This code doesn't work for the following URL (webcam image), which is accessible in a browser from our location: http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard
When I run a test cURL, it just seems to hang for the length of the timeout. $cURL_source never has any data.
I've tried some other cURL examples online, but to no avail. I'm assuming there's a way to build the cURL request to get this to work, but nothing I've tried seems to get me anywhere.
Any help would be greatly appreciated.
Thanks
I don't see any problems with your code. You can sometimes get errors because of network problems. You can retry in a loop to increase the chances of success.
Something like:
$image = 'http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard';
$tries = 3;       // max tries to get a good response
$retry_after = 5; // seconds to wait before a new try
while($tries > 0) {
    $options = array(
        CURLOPT_URL            => $image,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_CONNECTTIMEOUT => 10,
        CURLOPT_TIMEOUT        => 10,
        CURLOPT_MAXREDIRS      => 10
    );
    $ch = curl_init();
    curl_setopt_array($ch, $options);
    $cURL_source = curl_exec($ch);
    curl_close($ch);
    if($cURL_source !== false) {
        break;
    }
    else {
        $tries--;
        sleep($retry_after);
    }
}
Can you fetch the URL from the server where this code is running? Perhaps it has firewall rules in place? You are fetching from a non-standard port: 10202. It must be allowed by your firewall.
I, like the others, found it easy to fetch the image with curl/php.
As was said before, I can't see any problem with the code either. However, maybe you should consider setting a longer timeout for curl, to be sure that this slow-loading picture finally gets loaded. So, as a possibility, try increasing CURLOPT_TIMEOUT to a much larger number, as well as the corresponding timeout for PHP script execution. It may help.
Perhaps the best option is to combine the previous answer's approach with this one.
I tried wget on the image URL and it downloads the image and then seems to hang - perhaps the server isn't correctly closing the connection.
However I got file_get_contents to work rather than curl, if that helps:
<?php
$image = 'http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard';
$imageData = base64_encode(file_get_contents($image));
$src = 'data: '.mime_content_type($image).';base64,'.$imageData;
echo '<img src="',$src,'">';
Are you sure it's not working? Your code is working fine for me (after adding the missing semicolon after $image = ...).
The reason it might be giving you trouble is that it's not actually a static image, it's an MJPEG stream. It uses an HTTP session that is kept open, with multipart content (similar to what you see in MIME email), and the server pushes a new JPEG frame to replace the last one at an interval. cURL seems to be happy just giving you the first frame, though.