We have a RESTful API and RESTful clients, both in PHP. The client connects to the server via cURL HTTP requests.
$handler = curl_init(self::API_ENDPOINT_URI . $resource);
$options = [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_CUSTOMREQUEST  => $method,
    CURLOPT_TIMEOUT        => 6000,
];
curl_setopt_array($handler, $options);
$result = curl_exec($handler);
curl_close($handler);
Then somewhere in a model we call it:
$request = json_decode($this->_doRequest('/client/some_id'));
There is a JSON response and we parse it. Everything is OK until some users start creating multiple requests and PHP hangs. For example, we have a client page which makes ~5 requests to the API server. When a user opens 10 tabs in the browser with 10 different clients, that's ~50 requests, which are executed one by one. That means that until the first tab finishes its work, the other tabs won't start theirs.
Is there any way to fix this issue in a simple way?
We would like to use the cURL multi handle for this, but we're not sure how to get the responses immediately.
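Something like the following sketch is what we have in mind; $base stands in for self::API_ENDPOINT_URI, and the client IDs are placeholders:
$base = 'https://api.example.com';
$urls = [
    $base . '/client/1',
    $base . '/client/2',
    // ... one URL per request the page needs
];

$mh = curl_multi_init();
$handles = [];

foreach ($urls as $i => $url) {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_TIMEOUT        => 6000,
    ]);
    curl_multi_add_handle($mh, $ch);
    $handles[$i] = $ch;
}

// Drive all transfers until none are still running.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); // wait for activity instead of busy-looping
    }
} while ($running && $status === CURLM_OK);

// Collect the responses and clean up.
$responses = [];
foreach ($handles as $i => $ch) {
    $responses[$i] = json_decode(curl_multi_getcontent($ch));
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
Here curl_multi_select() blocks until at least one handle has activity, so the loop doesn't spin at full CPU while waiting.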
Thanks.
Is it possible to query the WooCommerce API using cURL?
I am trying to do it but with no success. The API works in Insomnia and Postman.
The request I am trying to reproduce:
curl https://example.com/wp-json/wc/v3/products -u consumer_key:consumer_secret
The following is what I am doing:
$URL = "https://www.example.com/wp-json/wc/v3/products";
$Consumer_Key = "ck_111111";
$Consumer_Secret = "cs_222222";

$options = array(
    CURLOPT_URL            => $URL,
    CURLOPT_CUSTOMREQUEST  => "GET",
    CURLOPT_RETURNTRANSFER => true, // needed so curl_exec() returns the body
    CURLOPT_USERPWD        => $Consumer_Key . ":" . $Consumer_Secret
);

$ch = curl_init();
curl_setopt_array($ch, $options);
// Execute the request and store the response
$response = curl_exec($ch);
curl_close($ch);
print_r($response);
And the error I am getting is:
Forbidden
You don't have permission to access this resource.
Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.
I don't know much PHP, but I have the same call working well in Angular:
getCategories() {
  this.http.get(this.cUrl + "/wp-json/wc/v3/products/categories?per_page=100&consumer_key=" + this.wooApiClie + "&consumer_secret=" + this.wooApiSec)
    .subscribe(res => {
      this.categories = res;
      console.log(this.categories);
    });
}
This one is for the categories, but it's the same for the products. Maybe it's because your variables $Consumer_Key and $Consumer_Secret are capitalized? Again, I don't know much PHP.
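For what it's worth, a PHP sketch of the same approach as that Angular call, passing the keys as query parameters instead of Basic Auth (the URL and keys are the question's placeholders; WooCommerce accepts query-string keys over HTTPS):
$Consumer_Key = "ck_111111";
$Consumer_Secret = "cs_222222";
$URL = "https://www.example.com/wp-json/wc/v3/products"
     . "?consumer_key=" . $Consumer_Key
     . "&consumer_secret=" . $Consumer_Secret;

$ch = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_URL            => $URL,
    CURLOPT_RETURNTRANSFER => true, // capture the body instead of echoing it
));
$response = curl_exec($ch);
curl_close($ch);
print_r(json_decode($response, true));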
I am sending about 600 cURL requests to different websites, and at some point my page stops/breaks. Here is the error I am getting:
Website.com unexpectedly closed the connection.
ERR_INCOMPLETE_CHUNKED_ENCODING
I am looping the function below through all my 600 websites.
function GetCash($providerUrl, $providerKey){
    $url = check_protocol($providerUrl);
    $post = [
        'key'    => Decrypt($providerKey),
        'action' => 'balance'
    ];
    // Set our options in an array so we can assign them all at once
    $options = [
        CURLOPT_URL            => $url,
        CURLOPT_POSTFIELDS     => $post,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CONNECTTIMEOUT => 5,
        CURLOPT_TIMEOUT        => 5,
    ];
    // Initiate the cURL handle
    $curl = curl_init();
    curl_setopt_array($curl, $options);
    $json = curl_exec($curl);
    curl_close($curl);
    // Decode the whole response into an array
    $services = json_decode($json, true);
    // Check for an invalid API response
    if (isset($services['error']) && $services['error'] == "Invalid API key") {
        return FALSE;
    }
    return isset($services['balance']) ? $services['balance'] : FALSE;
}
If you are sending requests to 600 different websites in a synchronous fashion, it is very likely that the script is simply exceeding PHP's time limit. Depending on what the page was outputting, it may abruptly truncate the data, resulting in this error. To see if this is the case, try only querying a few websites.
You may be able to run set_time_limit(0) in your PHP code to remove the time limit, but it still might hit some sort of browser timeout. For that reason, it is generally best to run long-running tasks from the command line, which has no time limit, like php /path/to/script.php.
If you still need the results to show up on an HTML page, you may want to consider spawning a background task, having it save its progress to a text file or database of some sort, and using AJAX requests to continually check the progress; a rough sketch follows.
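A minimal sketch of that pattern, assuming a hypothetical fetch_balance() helper that wraps the per-site cURL call (both the helper and the file layout are assumptions):
<?php
// worker.php — run from the command line: php worker.php
// Writes progress to progress.json so an AJAX endpoint can poll it.
set_time_limit(0); // no-op on the CLI, harmless elsewhere

$sites = ['https://example-a.com', 'https://example-b.com']; // ... up to 600
$total = count($sites);
$results = [];

foreach ($sites as $i => $site) {
    $results[$site] = fetch_balance($site); // hypothetical per-site cURL wrapper
    file_put_contents('progress.json', json_encode([
        'done'  => $i + 1,
        'total' => $total,
    ]));
}

file_put_contents('results.json', json_encode($results));
The HTML page then requests progress.json every few seconds until done equals total, and finally loads results.json.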
I have a PHP script that connects to a URL through cURL and then does something, depending on the returned HTTP status code:
$ch = curl_init();
$options = array(
CURLOPT_RETURNTRANSFER => true,
CURLOPT_URL => $url,
CURLOPT_USERAGENT => "What?!?"
);
curl_setopt_array($ch, $options);
$out = curl_exec($ch);
$code = curl_getinfo($ch)["http_code"];
curl_close($ch);
if ($code == "200") {
echo "200";
} else {
echo "not 200";
}
Some web servers are slow to reply, and although the page loads in my browser after a few seconds, my script, when it tries to connect to that server, tells me that it did not receive a positive ("200") reply. So, apparently, the connection initiated by cURL timed out.
But why? I don't set a timeout in my script, and according to other answers on this site the default timeout for cURL is definitely longer than the three or four seconds it takes for the page to load in my browser.
So why does the connection time out, and how can I get it to last longer, if, apparently, it is already set to infinite?
Notes:
The same URL doesn't always time out. So sometimes cURL can connect.
It is not one specific URL that sometimes times out, but different URLs at different times.
I'm on a shared server, so I don't have root access to any files.
I tried to look at curl_getinfo($ch) and curl_error($ch) – as per #drew010's suggestion in the comments – but both were empty whenever the problem happened.
The whole script runs for a little more than one minute. In this time it connects to 300+ URLs successfully. Even when one of the URLs fails, the other connections are successfully made. So the script does not time out.
cURL does not time out either, because when I try to connect to a URL with a script sleeping for 59 seconds, cURL successfully connects. So apparently the slowness of the failing URL is not a problem in itself for cURL.
Update
Following #Karlos' suggestion in his answer, I used:
CURLOPT_VERBOSE => 1,
CURLOPT_STDERR => $curl_log
(using code from this answer) and found the following in $curl_log when a URL failed (URL and IP changed):
* About to connect() to www.somesite.com port 80 (#0)
* Trying 104.16.37.249... * connected
* Connected to www.somesite.com (104.16.37.249) port 80 (#0)
GET /wp_german/?feed=rss2 HTTP/1.1
User-Agent: myURL
Host: www.somesite.com
Accept: */*
* Recv failure: Connection reset by peer
* Closing connection #0
So, I have found the why – thank you #Karlos! – and apparently #Axalix was right and it is a network problem. I'll now follow suggestions given on this site for that kind of failure. Thanks to everyone for their help!
My experience working with cURL has shown me that sometimes, when using the option:
CURLOPT_RETURNTRANSFER => true
the server might not give a successful reply, or at least not a successful reply within the timeframe that cURL has to receive and buffer the response before returning the result into the variable you assign. In your code:
$out = curl_exec($ch);
In the Stack Overflow question CURLOPT_RETURNTRANSFER set to true doesnt work on hosting server, you can see that the option CURLOPT_RETURNTRANSFER is directly affected by the requested host's web server implementation.
As you are not explicitly using the response body, and your code relies only on the response headers, a good way to solve this might be to set:
CURLOPT_RETURNTRANSFER => false
and execute the cURL code to work on the response headers.
Once you have the header with the code you are interested in, you could run a PHP script that echoes the cURL response and parse it yourself:
<?php
$url = isset($_GET['url']) ? $_GET['url'] : 'http://www.example.com';
$ch = curl_init();
$options = array(
    CURLOPT_RETURNTRANSFER => false,
    CURLOPT_URL => $url,
    CURLOPT_USERAGENT => "myURL"
);
curl_setopt_array($ch, $options);
curl_exec($ch);
curl_close($ch);
?>
In any case, as to why your request does not produce an error, I suspect that the option CURLOPT_NOSIGNAL and the different timeout options explained in the curl_setopt PHP manual might get you closer to an answer.
In order to dig further, the option CURLOPT_VERBOSE might help you obtain extra information about the request's behavior through STDERR.
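As a sketch of that last suggestion (the URL is a placeholder), the verbose log can be captured into a stream and read back afterwards, which is essentially what the question's update does:
$curl_log = fopen('php://temp', 'w+'); // memory-backed stream for the log

$ch = curl_init('http://www.example.com/');
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_VERBOSE        => 1,
    CURLOPT_STDERR         => $curl_log,
));
curl_exec($ch);
curl_close($ch);

rewind($curl_log);
echo stream_get_contents($curl_log); // the full connection trace, failures included
fclose($curl_log);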
The reason may be that your hosting provider is imposing some limits on outgoing connections.
Here is what can be done to make your script resilient:
Create a queue in the DB with all the URLs that need to be fetched.
Run cron every minute or 5 minutes; take a few URLs from the DB and mark them as in progress.
Try to fetch those URLs. Mark every fetched URL as success in the DB.
Increment the failure count for unsuccessful ones.
Continue going through the queue until it's empty.
If you implement such a solution, you will be able to process every single URL under any unfavourable conditions; a rough sketch of such a worker follows.
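A cron-driven worker along those lines might look like this; the urls table layout (id, url, status, failures), the DSN, and the single-worker assumption are all assumptions of the sketch:
<?php
// cron_fetch.php — run every minute or five: * * * * * php /path/to/cron_fetch.php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Take a few URLs from the queue and mark them as in progress (MySQL syntax).
$pdo->exec("UPDATE urls SET status = 'in_progress'
            WHERE status = 'pending' LIMIT 10");
$batch = $pdo->query("SELECT id, url FROM urls WHERE status = 'in_progress'");

foreach ($batch as $row) {
    $ch = curl_init($row['url']);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CONNECTTIMEOUT => 5,
        CURLOPT_TIMEOUT        => 5,
    ]);
    $ok = curl_exec($ch) !== false;
    curl_close($ch);

    if ($ok) {
        // Mark fetched URLs as success.
        $stmt = $pdo->prepare("UPDATE urls SET status = 'success' WHERE id = ?");
    } else {
        // Put failures back in the queue and bump their failure count.
        $stmt = $pdo->prepare("UPDATE urls
                               SET status = 'pending', failures = failures + 1
                               WHERE id = ?");
    }
    $stmt->execute([$row['id']]);
}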
Currently I'm writing a PHP script that is supposed to check whether a URL is current (returns an HTTP 200 code or redirects to such a URL).
Since several of the URLs to be tested return a file, I'd like to avoid using a normal GET request, so as not to have to actually download the file.
I would normally use the HTTP HEAD method; however, tests show that many servers don't recognize it and return a different HTTP code than the corresponding GET request.
My idea was now to make a GET request and use CURLOPT_HEADERFUNCTION to define a callback function which checks the HTTP code in the first line of the header and then immediately terminates the request by having the callback return 0 (instead of the length of the header) if it's not a redirect code.
My question is: is it OK to terminate an HTTP request like that? Or will it have any negative effects on the server? Will this actually avoid the unnecessary download?
Example code (untested):
$url = "http://www.example.com/";
$ch = curl_init($url);
curl_setopt_array($ch, array(
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_HEADER => true,
CURLINFO_HEADER_OUT => true,
CURLOPT_HTTPGET => true,
CURLOPT_RETURNTRANSFER => true,
CURLOPT_HEADERFUNCTION => 'requestHeaderCallback',
));
$curlResult = curl_exec($ch);
curl_close($ch);
function requestHeaderCallback($ch, $header) {
$matches = array();
if (preg_match("/^HTTP/\d.\d (\d{3}) /")) {
if ($matches[1] < 300 || $matches[1] >= 400) {
return 0;
}
}
return strlen($header);
}
Yes, it is fine, and yes, it will stop the transfer right there.
It will also cause the connection to get disconnected, which is only a concern if you intend to make many requests to the same host, as keeping the connection alive could then be a performance benefit.
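To illustrate that last point, reusing a single handle lets libcurl keep the connection to the host alive between requests (the URLs are placeholders):
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

foreach (array('http://example.com/a', 'http://example.com/b') as $url) {
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_exec($ch); // subsequent requests reuse the TCP connection when possible
}
curl_close($ch);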
My PHP/Yii application interacts with Twilio. I know the SID of a queue. I want to get the current size of that queue. The thing is, I can't use the Twilio PHP library (I don't want to get into the details). I'm using cURL, but I keep getting 401 errors.
This is my code:
$curl = curl_init();
curl_setopt_array($curl, array(
    CURLOPT_RETURNTRANSFER => 1,
    CURLOPT_URL => 'https://api.twilio.com/2010-04-01/Accounts/AccountId/Queues/QueueID.json',
    CURLOPT_USERPWD => 'token:{AuthToken}'
));
curl_exec($curl);
I don't know what I'm doing wrong. I'm trying to follow the documentation:
http://www.twilio.com/docs/api/rest/queue
EDIT: I turned it into a GET request, from a POST request.
Also, I got a 401 Unauthorized error, not a 411. Sorry about that; typo.
SECOND EDIT:
So, I figured it out in a conversation with Kevin. Turns out that I needed:
CURLOPT_USERPWD => 'AccountID:Token'
If you are just trying to retrieve the size of a queue, you want to make a GET request, not a POST. It looks like you are setting CURLOPT_POST in your cURL request. A sketch combining this with the fix from the second edit follows.
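Putting it together, a sketch of the working request; {AccountSid}, {QueueSid}, and {AuthToken} are placeholders for your own credentials and queue SID, and current_size is the field the Queue resource documentation describes:
$curl = curl_init();
curl_setopt_array($curl, array(
    CURLOPT_RETURNTRANSFER => 1,
    CURLOPT_URL => 'https://api.twilio.com/2010-04-01/Accounts/{AccountSid}/Queues/{QueueSid}.json',
    CURLOPT_USERPWD => '{AccountSid}:{AuthToken}', // account SID and auth token
));
$json = curl_exec($curl);
curl_close($curl);

$queue = json_decode($json, true);
echo $queue['current_size']; // the queue's current size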