We've gotten permission to periodically copy a webcam image from another site. We use cURL functions elsewhere in our code, but when trying to access this image, we are unable to.
I'm not sure what is going on. The code we use for many other cURL functions is like so:
$image = 'http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard'
$options = array(
CURLOPT_URL => $image,
CURLOPT_RETURNTRANSFER => true,
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_CONNECTTIMEOUT => 120,
CURLOPT_TIMEOUT => 120,
CURLOPT_MAXREDIRS => 10
);
$ch = curl_init();
curl_setopt_array($ch, $options);
$cURL_source = curl_exec($ch);
curl_close($ch);
This code doesn't work for the following URL (webcam image), which is accessible in a browser from our location: http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard
When I run a test cURL, it just seems to hang for the length of the timeout. $cURL_source never has any data.
I've tried some other cURL examples online, but to no avail. I'm assuming there's a way to build the cURL request to get this to work, but nothing I've tried seems to get me anywhere.
Any help would be greatly appreciated.
Thanks
I don't see any problems with your code. You can sometimes get errors because of transient network problems. You could retry in a loop to increase the chances of getting a good response.
Something like:
$image = 'http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard';
$tries = 3; // max tries to get good response
$retry_after = 5; // seconds to wait before new try
while($tries > 0) {
$options = array(
CURLOPT_URL => $image,
CURLOPT_RETURNTRANSFER => true,
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_CONNECTTIMEOUT => 10,
CURLOPT_TIMEOUT => 10,
CURLOPT_MAXREDIRS => 10
);
$ch = curl_init();
curl_setopt_array($ch, $options);
$cURL_source = curl_exec($ch);
curl_close($ch);
if($cURL_source !== false) {
break;
}
else {
$tries--;
sleep($retry_after);
}
}
Can you fetch the URL from the server where this code is running? Perhaps it has firewall rules in place? You are fetching from a non-standard port: 10202. It must be allowed by your firewall.
I, like the others, found it easy to fetch the image with curl/php.
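If it does hang when run from your server, one way to see where it stalls is to turn on cURL's verbose output. A minimal sketch of that kind of debugging (the log path is just an example):
<?php
// Log cURL's verbose connect/transfer details so you can see whether it is
// DNS, the connect on port 10202, or the transfer itself that hangs.
$image = 'http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard';
$log = fopen('/tmp/curl_debug.log', 'w'); // example path
$ch = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_URL => $image,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_CONNECTTIMEOUT => 10,
    CURLOPT_TIMEOUT => 10,
    CURLOPT_VERBOSE => true, // emit handshake/transfer details
    CURLOPT_STDERR => $log,  // write them to the log file
));
$data = curl_exec($ch);
if ($data === false) {
    echo 'cURL error: ' . curl_error($ch);
}
curl_close($ch);
fclose($log);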
As was said before, I can't see any problem with the code either. However, maybe you should consider setting a longer timeout for cURL, to make sure this slow-loading picture eventually gets loaded. So, as a possibility, try increasing CURLOPT_TIMEOUT to a very large number, as well as the corresponding timeout for PHP script execution. It may help.
Perhaps the best approach is to combine the previous answer's retry loop with this one.
I tried wget on the image URL and it downloads the image and then seems to hang - perhaps the server isn't correctly closing the connection.
However, I got file_get_contents() to work where cURL didn't, if that helps:
<?php
$image = 'http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard';
$imageData = base64_encode(file_get_contents($image));
$src = 'data: '.mime_content_type($image).';base64,'.$imageData;
echo '<img src="',$src,'">';
Are you sure it's not working? Your code is working fine for me (after adding the missing semicolon after $image = ...).
The reason it might be giving you trouble is that it's not actually a static image; it's an MJPEG stream. It uses an HTTP connection that is kept open, with multipart content (similar to what you see in MIME email), and the server pushes a new JPEG frame to replace the last one at an interval. cURL seems to be happy just giving you the first frame, though.
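If that's what's happening, one possible workaround is to read the stream with a write callback and abort as soon as a complete JPEG has arrived (a JPEG ends with the byte pair FF D9). This is only a sketch of that idea and hasn't been tested against this particular camera:
<?php
// Collect data from the MJPEG stream until the first JPEG end-of-image
// marker (0xFFD9) shows up, then abort the transfer.
$url = 'http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard';
$buffer = '';
$ch = curl_init($url);
curl_setopt_array($ch, array(
    CURLOPT_CONNECTTIMEOUT => 10,
    CURLOPT_WRITEFUNCTION => function ($ch, $chunk) use (&$buffer) {
        $buffer .= $chunk;
        // Returning anything other than the chunk length aborts the transfer.
        if (strpos($buffer, "\xFF\xD9") !== false) {
            return 0;
        }
        return strlen($chunk);
    },
));
curl_exec($ch); // reports "aborted by callback" once we stop it - that's expected
curl_close($ch);
// Keep only the bytes between the JPEG start (FFD8) and end (FFD9) markers.
$start = strpos($buffer, "\xFF\xD8");
$end = strpos($buffer, "\xFF\xD9");
if ($start !== false && $end !== false) {
    file_put_contents('frame.jpg', substr($buffer, $start, $end - $start + 2));
}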
Related
I am sending about 600 cURL requests to different websites, and at some point my page stops/breaks with the following error:
Website.com unexpectedly closed the connection.
ERR_INCOMPLETE_CHUNKED_ENCODING
I am looping the function below through all my 600 websites.
function GetCash($providerUrl, $providerKey){
$url = check_protocol($providerUrl);
$post = [
'key' => Decrypt($providerKey),
'action' => 'balance'
];
// Sets our options array so we can assign them all at once
$options = [
CURLOPT_URL => $url,
//CURLOPT_POST => false,
CURLOPT_POSTFIELDS => $post,
CURLOPT_RETURNTRANSFER => true,
CURLOPT_CONNECTTIMEOUT => 5,
CURLOPT_TIMEOUT => 5,
];
// Initiates the cURL object
$curl = curl_init();
curl_setopt_array($curl, $options);
$json = curl_exec($curl);
curl_close($curl);
//Big variable of all the values
$services = json_decode($json, true);
//Check for invalid API response
if($services['error'] == "Invalid API key"){
return FALSE;
}else{
return $services['balance'];
}
return FALSE;
}
If you are sending requests to 600 different websites in synchronous fashion, it is very likely that the request is simply exceeding PHP's time limit. Depending on what the page was outputting, it may abruptly truncate the data, resulting in this error. To see if this is the case, try only querying a few websites.
You may be able to run set_time_limit(0) in your PHP code to remove the time limit, but it still might hit some sort of browser timeout. For that reason, it is generally best to run long-running tasks from the command line, which has no time limits, like php /path/to/script.php.
If you still need the results to show up on an HTML page, you may want to consider spawning a background task, having it save its progress to a text file or database of some sort, and use AJAX requests to continually check the progress.
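As a rough sketch of that background-task idea (the file name, the load_sites() helper, and the CLI path are only examples, not part of your code):
<?php
// worker.php - run from the command line: php /path/to/worker.php
set_time_limit(0); // no PHP time limit while looping over all 600 sites
$progressFile = '/tmp/balance_progress.json'; // example location
$sites = load_sites(); // hypothetical helper returning the 600 provider URLs/keys
foreach ($sites as $i => $site) {
    $balance = GetCash($site['url'], $site['key']); // the function from the question
    // Record how far we are; an AJAX endpoint can simply read this file.
    file_put_contents($progressFile, json_encode(array(
        'done' => $i + 1,
        'total' => count($sites),
        'last_balance' => $balance,
    )));
}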
Currently I'm writing a PHP script that is supposed to check whether a URL is current (returns an HTTP 200 code or redirects to such a URL).
Since several of the URLs to be tested return a file, I'd like to avoid using a normal GET request so as not to actually download the file.
I would normally use the HTTP HEAD method, however tests show, that many servers don't recognize it and return a different HTTP code than the corresponding GET request.
My idea was now to make a GET request and use CURLOPT_HEADERFUNCTION to define a callback function that checks the HTTP code in the first line of the header and then immediately terminates the request by having the callback return 0 (instead of the length of the header) if it's not a redirect code.
My question is: is it OK to terminate an HTTP request like that? Or will it have any negative effects on the server? Will this actually avoid the unnecessary download?
Example code (untested):
$url = "http://www.example.com/";
$ch = curl_init($url);
curl_setopt_array($ch, array(
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_HEADER => true,
CURLINFO_HEADER_OUT => true,
CURLOPT_HTTPGET => true,
CURLOPT_RETURNTRANSFER => true,
CURLOPT_HEADERFUNCTION => 'requestHeaderCallback',
));
$curlResult = curl_exec($ch);
curl_close($ch);
function requestHeaderCallback($ch, $header) {
$matches = array();
if (preg_match("/^HTTP/\d.\d (\d{3}) /")) {
if ($matches[1] < 300 || $matches[1] >= 400) {
return 0;
}
}
return strlen($header);
}
Yes, it is fine, and yes, it will stop the transfer right there.
It will also cause the connection to get disconnected, which is only a concern if you intend to make many requests to the same host, since keeping the connection alive could then be a performance benefit.
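For context, here is a minimal sketch of what reusing one handle (and therefore the connection) looks like when checking many URLs on the same host; the URLs are made up:
<?php
// Reusing a single handle lets libcurl keep the TCP connection alive
// between consecutive requests to the same host.
$urls = array(
    'http://www.example.com/a',
    'http://www.example.com/b',
    'http://www.example.com/c',
);
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
foreach ($urls as $url) {
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_exec($ch);
    echo $url . ' -> ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . "\n";
}
curl_close($ch); // close only once all requests are done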
I was running my WebServer for months with the same Algorithm where I got the content of a URL by using this line of code:
$response = file_get_contents('http://femoso.de:8019/api/2/getVendorLogin?' . http_build_query(array('vendor'=>$vendor,'user'=>$login,'pw'=>$pw),'','&'));
But now something must have changed, as all of a sudden it stopped working.
In earlier days the requested URL looked the way it should:
http://femoso.de:8019/api/2/getVendorLogin?vendor=100&user=test&pw=test
but now I get an error in my nginx log saying that I requested the following URL which returned a 403
http://femoso.de:8019/api/2/getVendorLogin?vendor=100&amp;user=test&amp;pw=test
I know that something changed on the target server, but I don't think that should affect me, should it?!
I already spent hours and hours of reading and searching through Google and Stackoverflow, but all the suggested ways as
urlencode() or
htmlspecialchars() etc...
didn't work for me.
For your information, the environment is a zend application with a nginx server on my end and a php webservice with apache on the other end.
Like I said, it changed without any change on my side!
Thanks
Let's find out the culprit!
1) Is it http_build_query ? Try replacing:
'http://femoso.de:8019/api/2/getVendorLogin?' . http_build_query(array('vendor'=>$vendor,'user'=>$login,'pw'=>$pw))
with:
"http://femoso.de:8019/api/2/getVendorLogin?vendor={$vendor}&user={$login}&pw={$pw}"
2) Is some kind of post-processing in place? Try replacing '&' with chr(38).
3) Maybe give a try and play a little bit with cURL?
$ch = curl_init();
curl_setopt_array($ch, array(
CURLOPT_URL => 'http://femoso.de:8019/api/2/getVendorLogin?' . http_build_query(array('vendor'=>$vendor,'user'=>$login,'pw'=>$pw)),
CURLOPT_RETURNTRANSFER => true,
CURLOPT_HEADER => true, // include response header in result
//CURLOPT_FOLLOWLOCATION => true, // uncomment to follow redirects
CURLINFO_HEADER_OUT => true, // track request header, see var_dump below
));
$data = curl_exec($ch);
var_dump($data, curl_getinfo($ch, CURLINFO_HEADER_OUT));
curl_close($ch);
exit;
Sounds like your arg_separator.output is set to "&amp;" in your php.ini. Either comment that line out or change it to just "&".
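You can check (and, if needed, override) that setting from PHP itself; a quick sketch:
<?php
// If this prints "&amp;", that would explain the broken URLs in the log.
var_dump(ini_get('arg_separator.output'));
// Override it for this script, or fix it in php.ini directly.
ini_set('arg_separator.output', '&');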
I'm no expert, but that's the way the computer reads the address, since & is a special character; it's something to do with encoding. A simple fix would be to filter it with str_replace(), something along those lines.
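For example, something like this (just a sketch of that idea):
<?php
// Undo an HTML-entity-encoded separator before firing the request.
$query = http_build_query(array('vendor' => $vendor, 'user' => $login, 'pw' => $pw));
$query = str_replace('&amp;', '&', $query); // replace the encoded separator
$response = file_get_contents('http://femoso.de:8019/api/2/getVendorLogin?' . $query);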
I have a PHP file that invokes another PHP file via cURL. I am trying to have the second file send a response back to the first to let it know that it started. The problem is that the first can't wait for the second to finish execution, because that can take a minute or more; I need it to send a response immediately and then go about its regular business. I tried using an echo at the top of the second file, but the first doesn't get that as a response.
How do I send back a response without finishing execution?
file1.php
<?php
$url = 'file2.php';
$params = array('data'=>$data,'moredata'=>$moredata);
$options = array(
CURLOPT_RETURNTRANSFER => true, // return web page
CURLOPT_HEADER => false, // don't return headers
CURLOPT_FOLLOWLOCATION => true, // follow redirects
CURLOPT_ENCODING => "", // handle all encodings
CURLOPT_USERAGENT => "Mozilla", // who am i
CURLOPT_AUTOREFERER => true, // set referer on redirect
CURLOPT_CONNECTTIMEOUT => 120, // timeout on connect
CURLOPT_MAXREDIRS => 10, // stop after 10 redirects
CURLOPT_TIMEOUT => 10, // don't wait too long for the response
CURLOPT_POST => true, // Use Method POST (not GET)
CURLOPT_POSTFIELDS => http_build_query($params)
);
$ch = curl_init($url);
curl_setopt_array( $ch, $options );
$response = curl_exec($ch); // See that the page started.
curl_close($ch);
echo 'Response: ' . $response;
?>
file2.php
<?php
/* This is the top of the file. */
echo 'I started.';
// ...
// Other CODE
// ...
?>
When I run file1.php it results in: 'Response: ', but I expect it to be 'Response: I started.' I know that file2.php gets started because 'Other CODE' gets executed, but the echo doesn't get sent back to file1.php. Why?
This could be just what you're looking for. Forking in PHP:
http://framework.zend.com/manual/en/zendx.console.process.unix.overview.html
A process divides in two; one is the parent of the other. The parent can tell the client that the job has just begun, and the child can do the actual work. When the child finishes, it can report back to the parent, which in turn can report to the client (see the sketch after the requirements list below).
Keep in mind there are many requirements for this to run:
Linux
CLI or CGI interface
shmop, pcntl and posix extensions (require recompiling)
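A rough sketch of what the fork itself looks like with the pcntl extension (CLI only; do_long_running_work() is a placeholder for the real job, not code from the question):
<?php
// The parent answers immediately, the child does the slow work.
$pid = pcntl_fork();
if ($pid == -1) {
    die('Could not fork');
} elseif ($pid) {
    // Parent process: report that the job has started and carry on.
    echo "I started.\n";
} else {
    // Child process: do the long-running work without blocking the parent.
    do_long_running_work(); // placeholder for the "Other CODE" in file2.php
    exit(0);
}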
The answer ended up being that CURL does not behave like a browser:
PHP Curl output buffer not receiving response
I ended up running my 2nd file first and my 1st file second. The 2nd file waited for a 'finished' file that the 1st file wrote once it, obviously, finished.
At this point, it seems like a database would be a better place to store the messages the scripts pass between each other, but a file also works for a quick and dirty job.
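As a minimal sketch of that file-based handshake (the flag file path is just an example):
<?php
// In the long-running worker script, the very last thing it does:
file_put_contents('/tmp/job_finished.flag', 'done'); // signal completion

// In the waiting script, started first:
$flag = '/tmp/job_finished.flag';
@unlink($flag); // clear any stale flag before kicking off the worker
// ... start the worker here (via cURL, the CLI, cron, etc.) ...
while (!file_exists($flag)) {
    sleep(1); // poll until the worker reports it has finished
}
echo "Worker finished.\n";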
So I've been finding a lot of posts here and other places on the interwebs regarding PHP, cURL, and SSL. I've got a problem that I haven't seen addressed anywhere.
Obviously, if I set SSL_VERIFYPEER/HOST to blindly accept I can get this to work, but I would like to use my cert to verify the connection.
So here is some code:
$options = array(
CURLOPT_URL => $oAuthResult['signed_url'],
CURLOPT_RETURNTRANSFER => TRUE,
CURLOPT_HEADER => 0,
CURLOPT_SSL_VERIFYPEER => TRUE,
CURLOPT_SSL_VERIFYHOST => 2,
CURLOPT_CAINFO => getcwd() . '\application\third_party\certs\rootCerr.crt'
);
$ch = curl_init();
curl_setopt_array($ch, $options);
try {
$result = curl_exec($ch);
$errCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
if (curl_getinfo($ch, CURLINFO_HTTP_CODE) != 200) {
throw new Exception('<strong>Error trying to ExecuteWebRequest, returned: '.$errCode .'<br>URL:'.$url . '<br>POST data (if any):</strong><br>');
}
curl_close($ch);
} catch (Exception $e) {
//print the error stuff
}
The error code that is returned is 0...which means that everything is A-OK...but since nothing comes back to the screen...I'm pretty sure it's not working.
Anyone?
The $errCode you extract is the HTTP code which is 200-299 when OK. Getting 0 means it was never set due to a problem or similar.
You should rather use curl_errno() after curl_exec() to figure out if things went fine or not. (You can't check the curl_exec() return code for errors as easily, as you have CURLOPT_RETURNTRANSFER enabled which makes that function instead return the contents of the transfer it is set to get. Of course, getting no contents at all returned should also be a good indicator that something failed.)
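A small sketch of that check, using the handle from your code:
$result = curl_exec($ch);
if (curl_errno($ch) !== 0) {
    // Non-zero means the transfer itself failed, e.g. a CA/peer
    // verification problem, before any HTTP status was received.
    echo 'cURL error ' . curl_errno($ch) . ': ' . curl_error($ch);
} else {
    echo 'Transfer OK, HTTP status ' . curl_getinfo($ch, CURLINFO_HTTP_CODE);
}
curl_close($ch);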
I've implemented libcurl certs by using CURLOPT_CAINFO as you have indicated.
However, providing just the file name by itself wasn't good enough; it had failed on me too.
For me, the file was referenced by a relative path. Additionally, I had to make sure the cert was in Base64 (PEM) format too. Then everything went through without a hitch.