New Spin on cURL not working over SSL - php

So I've been finding a lot of posts here and other places on the interwebs regarding PHP, cURL and SSL, but I've got a problem that I'm not seeing covered.
Obviously, if I set CURLOPT_SSL_VERIFYPEER/CURLOPT_SSL_VERIFYHOST to blindly accept, I can get this to work, but I would like to use my cert to verify the connection.
So here is some code:
$ch = curl_init();
$options = array(
    CURLOPT_URL            => $oAuthResult['signed_url'],
    CURLOPT_RETURNTRANSFER => TRUE,
    CURLOPT_HEADER         => 0,
    CURLOPT_SSL_VERIFYPEER => TRUE,
    CURLOPT_SSL_VERIFYHOST => 2,
    CURLOPT_CAINFO         => getcwd() . '\application\third_party\certs\rootCerr.crt'
);
curl_setopt_array($ch, $options);
try {
    $result = curl_exec($ch);
    $errCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    if ($errCode != 200) {
        throw new Exception('<strong>Error trying to ExecuteWebRequest, returned: ' . $errCode . '<br>URL:' . $url . '<br>POST data (if any):</strong><br>');
    }
    curl_close($ch);
} catch (Exception $e) {
    // print the error stuff
}
The error code that is returned is 0...which means that everything is A-OK...but since nothing comes back to the screen...I'm pretty sure it's not working.
Anyone?

The $errCode you extract is the HTTP response code, which is 200-299 when things are OK. Getting 0 means it was never set, due to a connection problem or similar.
You should rather use curl_errno() after curl_exec() to figure out whether things went fine or not. (You can't check the curl_exec() return value for errors as easily, since you have CURLOPT_RETURNTRANSFER enabled, which makes that function return the contents of the transfer instead. Of course, getting no contents back at all should also be a good indicator that something failed.)
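A minimal sketch of that check, adapted to the snippet above (the handle $ch and its options are assumed to be set up as in the question):

$result = curl_exec($ch);

if (curl_errno($ch) !== 0) {
    // Transport-level failure: DNS, TLS handshake, timeout, etc.
    // A bad CURLOPT_CAINFO path typically surfaces here as a
    // certificate verification error.
    echo 'cURL error ' . curl_errno($ch) . ': ' . curl_error($ch);
} else {
    // The transfer succeeded, so the HTTP status code is now meaningful.
    $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    if ($httpCode != 200) {
        echo 'HTTP error: ' . $httpCode;
    }
}
curl_close($ch);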

I've implemented libcurl certs by using CURLOPT_CAINFO as you have indicated...
However, providing the file name by itself wasn't good enough... it had crashed on me too.
For me, the file had to be referenced by a proper relative path, not just the name... Additionally, I had to make sure the cert was in Base64 (PEM) format too. Then everything went through without a hitch...
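A minimal sketch of those two fixes, reusing the bundle name from the question (the directory layout is an assumption; adjust to wherever your cert actually lives):

// __DIR__ is the directory of the current script, so the path no longer
// depends on whatever getcwd() happens to return at run time.
$caFile = __DIR__ . '/application/third_party/certs/rootCerr.crt';

if (!is_readable($caFile)) {
    die('CA bundle not found or unreadable: ' . $caFile);
}

curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
// The file must be a Base64-encoded (PEM) certificate, not binary DER.
curl_setopt($ch, CURLOPT_CAINFO, $caFile);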

Related

CURL is Giving different results on two different servers

This is a pretty simple example of using cURL in my PHP script to test results from two servers. All of a sudden my code broke and I have no idea what happened. Here is the scenario: we have two (or so I thought) identical servers set up with WHM/cPanel and two identical sets of cloned git repositories. One is our staging server, the other our production box.
My problem is that the staging server returns the expected results for the simple scripts below, while our production box just returns nulls. I checked the configuration on both servers with phpinfo() and cURL is installed correctly.
My question is: has anyone out there had this problem before? I would really like to figure this out, and it will probably fix the program we so desperately need.
Thanks again for any responses. Note the code below is only meant to show cURL working, not to validate any responses or errors that may have occurred; however, we will display any that are present.
We tested the same code on the two servers: stagingpinnaclemedplus.com works, pmpcustomer.com returns null values.
// pageCurl.php
$data['key'] = $_POST['key'];
$data['pdf'] = $_POST['pdf'];
$data['session_id']= $_POST['session_id'];
echo json_encode($data);
// pagetestcurl.php
session_start();
$url = 'http://stagingpinnaclemedplus.com/pageCurl.php';
$postData['key'] = 'LABEL_PATH';
$postData['pdf'] = 'off'; // Tells the PHP page to create a PDF file rather than show it in the browser
$postData['print_mode'] = 'c';
$postData['session_id'] = session_id();
$ch = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_URL            => $url,
    CURLOPT_POST           => true,
    CURLOPT_RETURNTRANSFER => 1,
    CURLOPT_POSTFIELDS     => $postData,
    CURLOPT_FOLLOWLOCATION => true
));
$result = curl_exec($ch);
$error = curl_error($ch);
curl_close($ch);
echo "</br>";
// Note: print_r() without a second argument echoes its argument directly
// and returns true, which is why "Result:1" shows up in the output below.
echo "Result:" . print_r($result) . '</br>';
if ($error)
    echo "Error:" . var_dump($error); // var_dump() also echoes directly
When I run the code on the staging server, I get what I expect:
{"key":"LABEL_PATH","pdf":"off","session_id":"r2jkkmbhd73maj9e8o72mdvqq3"} Result:1
When I run it on my production box (note the URL host name is changed to pmpcustomer.com) in pagetestcurl.php, I get this result:
[Result: {"key":null,"pdf":null,"session_id":null}
string(0) "" Error:]
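For anyone chasing a similar staging-vs-production split, cURL can report a lot about what actually happened on the failing box; a minimal diagnostic sketch (the production URL is taken from the question; the options are standard cURL introspection):

$ch = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_URL            => 'http://pmpcustomer.com/pageCurl.php',
    CURLOPT_POST           => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POSTFIELDS     => $postData,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_VERBOSE        => true, // dump the whole dialogue to STDERR
    CURLINFO_HEADER_OUT    => true, // record the request headers actually sent
));
$result = curl_exec($ch);

// Differences in these between the two boxes (HTTP code, redirects,
// effective URL) usually point at vhost, rewrite or firewall issues.
var_dump(curl_errno($ch), curl_error($ch));
var_dump(curl_getinfo($ch, CURLINFO_HTTP_CODE));
var_dump(curl_getinfo($ch, CURLINFO_EFFECTIVE_URL));
var_dump(curl_getinfo($ch, CURLINFO_HEADER_OUT));
curl_close($ch);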

GET request with headers using cURL PHP

I'm trying to make a GET request with headers using cURL PHP. I am getting an empty response from the server. I would like to know if I have made this request correctly using cURL PHP.
// curl GET request with headers
$url = $sendMailURL;
$requestHeaders = array(
    $hConLength_ . ':' . $conLengthValue,
    $hConType_   . ':' . $conTypeValue,
    $hHost_      . ':' . $conHostValue,
    $hDate_      . ':' . $conAmzDateValue
);
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_HTTPHEADER, $requestHeaders);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$output = curl_exec($ch);
if (curl_error($ch)) {
    echo 'curl error: ' . curl_error($ch);
} else {
    print_r($output);
}
Output of print_r($requestHeaders):
Array ( [0] => Content-Length:207
[1] => Content-Type:application/x-www-form-urlencoded
[2] => Host:email.eu-west-1.amazonaws.com
[3] => X-amz-date:20180115T224433Z )
I'm positive that you have found your answer by now, but this is in case there are others who want to figure out how to check the headers, or want to go down the rabbit hole trying to figure out how SignatureV4 works.
You need to set the cURL option CURLINFO_HEADER_OUT.
I used the following to set all the options in one go, but I don't see why you can't set them one by one.
$conf = [
    \CURLOPT_CUSTOMREQUEST  => $method,          // method
    \CURLOPT_URL            => $url,             // url
    \CURLOPT_HTTPHEADER     => $requestHeaders,  // headers
    \CURLOPT_RETURNTRANSFER => true,
    \CURLOPT_HEADER         => true,
    \CURLOPT_CONNECTTIMEOUT => $timeout,         // set to anything that fits
    \CURLOPT_PROTOCOLS      => \CURLPROTO_HTTPS, // whatever your protocol is
    \CURLOPT_HTTP_VERSION   => \CURL_HTTP_VERSION_1_1, // normally 1.1
    \CURLINFO_HEADER_OUT    => 1,                // this makes the request headers show up
];
curl_setopt_array($this->curl, $conf);
After running your request, and before closing cURL, you can check your request headers and the response headers all together.
$result = curl_exec($this->curl);
$info = curl_getinfo($this->curl, CURLINFO_HEADER_OUT); // run after exec before close
var_dump($info);
curl_close($this->curl);
You don't have to specify CURLINFO_HEADER_OUT when calling curl_getinfo(), but then it returns a lot more info, which makes it a little harder to find what you are looking for.
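In other words, a quick sketch of the two call styles:

// With the constant: just the raw request headers that went out.
$sentHeaders = curl_getinfo($this->curl, CURLINFO_HEADER_OUT);

// Without it: the full associative array (http_code, total_time,
// request_header, ...), from which you pick the key yourself.
$allInfo = curl_getinfo($this->curl);
$sentHeaders = $allInfo['request_header'];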
Since you mentioned in the comments that you are trying to build SignatureV4 for AWS, and I had the same issue, a few notes:
Although HTTP headers are not case sensitive, when calculating the hash they should be in lowercase, and they have to be specified in the authorization token in lowercase too. To make it easier, I just included the headers in both the request and the hash as lowercase.
You don't need to hash all the headers, since you have to state in the authorization token which headers are included in the hash.
Someone at AWS thought it was cool to sort querystring items alphabetically before calculating the hash, so if you are sending a querystring, make sure the tokens are sorted before hashing, e.g.
Engine=standard&LanguageCode=en-US // will work
LanguageCode=en-US&Engine=standard // will not work
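A minimal sketch of that sorting step (assuming the parameters start out in an ordinary PHP array; ksort() puts the keys in byte order before the canonical querystring is built and hashed):

$params = array(
    'LanguageCode' => 'en-US',
    'Engine'       => 'standard',
);

// Sort by parameter name before building the canonical querystring.
ksort($params);
$canonicalQuery = http_build_query($params);
// "Engine=standard&LanguageCode=en-US" -- the order that hashes correctly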
The only two headers that need to be in the hash are x-amz-date and host (at least that was the case for Polly), and they need to look like this:
host:service.region.amazonaws.com
x-amz-date:20210920T180242Z
authorization:AWS4-HMAC-SHA256 Credential=xxxxxxxxxxxxxxxxxxxx/20210920/region/service/aws4_request, SignedHeaders=host;x-amz-date, Signature=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
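For reference, the Signature value at the end of that authorization header comes out of AWS's documented HMAC-SHA256 chain; a condensed sketch ($secretKey, $region, $service and $stringToSign are placeholders you'd fill in yourself):

// Derive the signing key from the secret, narrowing scope step by step.
$kDate    = hash_hmac('sha256', '20210920',     'AWS4' . $secretKey, true);
$kRegion  = hash_hmac('sha256', $region,        $kDate, true);
$kService = hash_hmac('sha256', $service,       $kRegion, true);
$kSigning = hash_hmac('sha256', 'aws4_request', $kService, true);

// $stringToSign embeds the algorithm, timestamp, credential scope and
// the SHA-256 of the canonical request (lowercased headers, sorted
// querystring, as described above).
$signature = hash_hmac('sha256', $stringToSign, $kSigning);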

Is it OK to terminate an HTTP request in the callback function set by CURLOPT_HEADERFUNCTION?

Currently I'm writing a PHP script that is supposed to check whether a URL is current (returns an HTTP 200 code or redirects to such a URL).
Since several of the URLs that are to be tested return a file, I'd like to avoid a normal GET request, in order not to actually download the file.
I would normally use the HTTP HEAD method; however, tests show that many servers don't recognize it and return a different HTTP code than the corresponding GET request.
My idea was now to make a GET request and use CURLOPT_HEADERFUNCTION to define a callback function which checks the HTTP code in the first line of the header and then immediately terminates the request by returning 0 (instead of the length of the header) if it's not a redirect code.
My question is: is it OK to terminate an HTTP request like that? Or will it have any negative effects on the server? Will this actually avoid the unnecessary download?
Example code (untested):
$url = "http://www.example.com/";
$ch = curl_init($url);
curl_setopt_array($ch, array(
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_HEADER => true,
CURLINFO_HEADER_OUT => true,
CURLOPT_HTTPGET => true,
CURLOPT_RETURNTRANSFER => true,
CURLOPT_HEADERFUNCTION => 'requestHeaderCallback',
));
$curlResult = curl_exec($ch);
curl_close($ch);
function requestHeaderCallback($ch, $header) {
$matches = array();
if (preg_match("/^HTTP/\d.\d (\d{3}) /")) {
if ($matches[1] < 300 || $matches[1] >= 400) {
return 0;
}
}
return strlen($header);
}
Yes, it is fine, and yes, it will stop the transfer right there.
It will also cause the connection to get disconnected, which is only a concern if you intend to do many requests to the same host, since keeping the connection alive could then be a performance benefit.
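One detail worth adding (a small sketch; the error code is standard libcurl behaviour): aborting from the callback makes curl_exec() report failure, so you'll want to tell a deliberate abort apart from a genuine network error before closing the handle.

$curlResult = curl_exec($ch);

if ($curlResult === false) {
    // Returning 0 from the header callback aborts the transfer with
    // CURLE_WRITE_ERROR (23), so this errno means "stopped on purpose".
    if (curl_errno($ch) == CURLE_WRITE_ERROR) {
        $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE); // status we saw before aborting
    } else {
        echo 'real cURL error: ' . curl_error($ch); // DNS, timeout, etc.
    }
}
curl_close($ch);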

Unable to cURL image. Not sure what to do

We've gotten permission to periodically copy a webcam image from another site. We use cURL functions elsewhere in our code, but when trying to access this image, we are unable to.
I'm not sure what is going on. The code we use for many other cURL functions is like so:
$image = 'http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard'
$options = array(
CURLOPT_URL => $image,
CURLOPT_RETURNTRANSFER => true,
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_CONNECTTIMEOUT => 120,
CURLOPT_TIMEOUT => 120,
CURLOPT_MAXREDIRS => 10
);
$ch = curl_init();
curl_setopt_array($ch, $options);
$cURL_source = curl_exec($ch);
curl_close($ch);
This code doesn't work for the following URL (webcam image), which is accessible in a browser from our location: http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard
When I run a test cURL, it just seems to hang for the length of the timeout. $cURL_source never has any data.
I've tried some other cURL examples online, but to no avail. I'm assuming there's a way to build the cURL request to get this to work, but nothing I've tried seems to get me anywhere.
Any help would be greatly appreciated.
Thanks
I don't see any problems with your code. You can sometimes get errors because of transient network problems. You can retry in a loop to increase the chances of success.
Something like:
$image = 'http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard';
$tries = 3; // max tries to get good response
$retry_after = 5; // seconds to wait before new try
while ($tries > 0) {
    $options = array(
        CURLOPT_URL            => $image,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_CONNECTTIMEOUT => 10,
        CURLOPT_TIMEOUT        => 10,
        CURLOPT_MAXREDIRS      => 10
    );
    $ch = curl_init();
    curl_setopt_array($ch, $options);
    $cURL_source = curl_exec($ch);
    curl_close($ch);
    if ($cURL_source !== false) {
        break;
    } else {
        $tries--;
        sleep($retry_after);
    }
}
Can you fetch the URL from the server where this code is running? Perhaps it has firewall rules in place? You are fetching from a non-standard port: 10202. It must be allowed by your firewall.
I, like the others, found it easy to fetch the image with curl/php.
As was said before, I can't see any problem with the code either. However, maybe you should consider setting a longer timeout for cURL, to be sure that this slow-loading picture finally gets loaded. So, as a possibility, try increasing CURLOPT_TIMEOUT to a weirdly big number, as well as the corresponding timeout for PHP script execution. It may help.
Maybe the best variant is to mix the previous author's approach and this one.
I tried wget on the image URL and it downloads the image and then seems to hang - perhaps the server isn't correctly closing the connection.
However, I got file_get_contents to work rather than curl, if that helps:
<?php
$image = 'http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard';
$imageData = base64_encode(file_get_contents($image));
$src = 'data: '.mime_content_type($image).';base64,'.$imageData;
echo '<img src="',$src,'">';
Are you sure it's not working? Your code is working fine for me (after adding the missing semicolon after $image = ...).
The reason it might be giving you trouble is because it's not actually a static image; it's an MJPEG stream. It uses an HTTP connection that's kept open, with multipart content (similar to what you see in MIME email), and the server pushes a new JPEG frame to replace the last one on an interval. cURL seems to be happy just giving you the first frame, though.
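If cURL hangs rather than returning, one possible workaround (a sketch only, untested against this camera) is the body-callback cousin of the header-callback trick from the earlier question: watch the stream with CURLOPT_WRITEFUNCTION and abort once the first JPEG's end-of-image marker has arrived.

$frame = '';
$ch = curl_init('http://island-alpaca.selfip.com:10202/SnapShotJPEG?Resolution=640x480&Quality=Standard');
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $chunk) use (&$frame) {
    $frame .= $chunk;
    // Every JPEG ends with the EOI marker 0xFFD9; once it shows up we
    // have a full frame, so abort instead of waiting for the next one.
    if (strpos($frame, "\xFF\xD9") !== false) {
        return 0; // abort; curl_exec() will report CURLE_WRITE_ERROR
    }
    return strlen($chunk);
});
curl_exec($ch);
curl_close($ch);
// $frame now contains the first frame (plus any multipart headers the
// server sent before it, which you'd need to strip off).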

'&' becomes '&amp;' when trying to get contents from a URL

I was running my web server for months with the same algorithm, where I got the content of a URL by using this line of code:
$response = file_get_contents('http://femoso.de:8019/api/2/getVendorLogin?' . http_build_query(array('vendor'=>$vendor,'user'=>$login,'pw'=>$pw),'','&'));
But now something must have changed, as all of a sudden it stopped working.
In earlier days the URL looked like it should have:
http://femoso.de:8019/api/2/getVendorLogin?vendor=100&user=test&pw=test
but now I get an error in my nginx log saying that I requested the following URL, which returned a 403:
http://femoso.de:8019/api/2/getVendorLogin?vendor=100&amp;user=test&amp;pw=test
I know that something changed on the target server, but I think that shouldn't affect me, should it?!
I already spent hours and hours reading and searching through Google and Stack Overflow, but all the suggested approaches, such as
urlencode() or
htmlspecialchars() etc...
didn't work for me.
For your information, the environment is a Zend application with an nginx server on my end and a PHP web service with Apache on the other end.
Like I said, it changed without any change on my side!
Thanks
Let's find out the culprit!
1) Is it http_build_query? Try replacing:
'http://femoso.de:8019/api/2/getVendorLogin?' . http_build_query(array('vendor'=>$vendor,'user'=>$login,'pw'=>$pw))
with:
"http://femoso.de:8019/api/2/getVendorLogin?vendor={$vendor}&user={$login}&pw={$pw}"
2) Is some kind of post-processing in place? Try replacing '&' with chr(38).
3) Maybe give cURL a try and play a little bit with it?
$ch = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_URL => 'http://femoso.de:8019/api/2/getVendorLogin?' . http_build_query(array('vendor'=>$vendor,'user'=>$login,'pw'=>$pw)),
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HEADER => true, // include response header in result
    //CURLOPT_FOLLOWLOCATION => true, // uncomment to follow redirects
    CURLINFO_HEADER_OUT => true, // track request header, see var_dump below
));
$data = curl_exec($ch);
// Read the info before closing the handle; curl_getinfo() needs a live handle.
var_dump($data, curl_getinfo($ch, CURLINFO_HEADER_OUT));
curl_close($ch);
exit;
Sounds like your arg_separator.output is set to "&amp;" in your php.ini. Either comment that line out or change it to just "&".
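A quick sketch of the code-level equivalents, in case editing php.ini isn't an option (both are standard PHP; note the question's own call already passes '&' as the third argument, which is exactly this fix):

// Option 1: override the ini setting for this request only.
ini_set('arg_separator.output', '&');

// Option 2: force the separator per call; the third argument of
// http_build_query() takes precedence over arg_separator.output.
$query = http_build_query(
    array('vendor' => $vendor, 'user' => $login, 'pw' => $pw),
    '',  // numeric prefix (unused here)
    '&'  // explicit separator instead of the ini default
);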
I'm no expert, but that's the way the computer reads the address, since '&' is a special character. Something with encoding. A simple fix would be to filter it using str_replace(). Something along those lines.
