PHP cURL GET request to CloudFront URL: very high TTFB

I have a very simple PHP cURL GET request to retrieve a JSON file from my AWS CloudFront distribution.
public function get_directory() {
    $json = 'https://d108fh6x7uy5wn.cloudfront.net/themes.json';
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, $json);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    $array = curl_exec($curl);
    curl_close($curl);
    $array = json_decode($array, true);
    $array = $array['themes'];
    return $array;
}
I then have a foreach loop which displays the information contained in the json file.
However, the page takes 3.6 minutes to load; it's VERY slow. If I use my S3 origin URL instead of the CloudFront URL, the page loads instantly.
I am using this same method on other servers without any issues. The issue only occurs on this server, which is using the InterWorx hosting control panel software.
Here is the result of the cURL verbose log:
* About to connect() to d108fh6x7uy5wn.cloudfront.net port 443 (#0)
* Trying 2600:9000:2132:4e00:14:45cf:19c0:21...
* Connection timed out
* Trying 2600:9000:2132:7000:14:45cf:19c0:21...
* Connection timed out
* Trying 2600:9000:2132:2400:14:45cf:19c0:21...
* Connection timed out
* Trying 2600:9000:2132:2c00:14:45cf:19c0:21...
* Connection timed out
* Trying 2600:9000:2132:9600:14:45cf:19c0:21...
* Connection timed out
* Trying 2600:9000:2132:3000:14:45cf:19c0:21...
* Connection timed out
* Trying 2600:9000:2132:aa00:14:45cf:19c0:21...
* Connection timed out
* Trying 2600:9000:2132:5c00:14:45cf:19c0:21...
* Connection timed out
* Trying 13.224.38.161...
* Connected to d108fh6x7uy5wn.cloudfront.net (13.224.38.161) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=*.cloudfront.net
* start date: Mar 19 00:00:00 2021 GMT
* expire date: Mar 17 23:59:59 2022 GMT
* common name: *.cloudfront.net
* issuer: CN=Amazon,OU=Server CA 1B,O=Amazon,C=US
> GET /themes.json HTTP/1.1
Host: d108fh6x7uy5wn.cloudfront.net
Accept: */*
< HTTP/1.1 200 OK
< Content-Type: application/json
< Content-Length: 68529
< Connection: keep-alive
< Last-Modified: Tue, 21 Dec 2021 12:05:12 GMT
< Accept-Ranges: bytes
< Server: AmazonS3
< Date: Thu, 20 Jan 2022 09:52:02 GMT
< ETag: "8d0a0de6a39c797d137171347dd8ef52"
< X-Cache: Hit from cloudfront
< Via: 1.1 ce475d5a085e50a2b454f6aec0f8826e.cloudfront.net (CloudFront)
< X-Amz-Cf-Pop: YTO50-C1
< X-Amz-Cf-Id: Bmq-SnHe7-eqfiBf6a6qdoRMQExdhweFLipeL_PrZqM2slQG9OGWzg==
< Age: 34974
<
* Connection #0 to host d108fh6x7uy5wn.cloudfront.net left intact
It looks like the IPv6 connections keep timing out before curl falls back to the IPv4 address.
It's probably an issue with my CloudFront distribution settings; however, I'm a bit of a noob when it comes to AWS and am at a loss.
If anybody can point me in the right direction that would be greatly appreciated.

I had been trying to resolve this issue for 2 days, and 5 minutes after posting my question I decided to change (what should have been obvious) a setting in my CloudFront distribution.
SOLUTION: Turn off the IPv6 setting on the distribution.
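The log above explains the delay: curl walks through eight unreachable IPv6 addresses, each waiting out a connect timeout, before it finally connects over IPv4. If disabling IPv6 on the distribution is not an option, curl can also be told to resolve IPv4 addresses only. A client-side sketch (the function name, timeout values, and null return are my choices, not from the original code):

```php
<?php
// Workaround sketch: skip AAAA records entirely so curl never tries the
// unreachable IPv6 addresses, and fail fast instead of hanging.
function get_directory_ipv4(string $url): ?array
{
    $curl = curl_init($url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4); // IPv4 only
    curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 5);            // seconds
    curl_setopt($curl, CURLOPT_TIMEOUT, 15);
    $body = curl_exec($curl);
    curl_close($curl);
    if ($body === false) {
        return null; // connection or transfer error
    }
    $decoded = json_decode($body, true);
    return $decoded['themes'] ?? null;
}
```

This only hides the symptom, of course; fixing the IPv6 routing (or the distribution setting) remains the proper cure.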

PHP curl on https returns error 400, but on http => 301 => https 200, mangled GET path?

(I edited the title, see EDIT below: GET path in the response contains spaces, requested path does not.)
I am working for a company that uses a "template system" to let its markets (different countries) host small web apps that "feel" like part of the main market website by pulling the header, navigation, and footer of the main site via curl:
A PHP script uses curl to pull a special empty CMS page that contains only the relevant elements and placeholders for the web app to replace. This way, the web apps always have the current legal footer, header, and navigation links to the main sections of the CMS, and feel like part of the CMS despite being hosted on their own somewhere else.
In this case, depending on an i18n parameter in the URL, the script curls a different template for each of ~50 markets.
Now the curious case: this works for all templates but one, which returns error 400 when curled via https.
But when curled via http, a 301 is returned, then another request is made to the previously faulty https URL, and this time it returns 200.
Does not work:
* Trying [server_ip]...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x562a5065a130)
* Connected to www.[company_name].pt ([server_ip]) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=*.[company_name].pt
* start date: Aug 23 19:02:44 2022 GMT
* expire date: Nov 21 19:02:43 2022 GMT
* subjectAltName: host "www.[company_name].pt" matched cert's "*.[company_name].pt"
* issuer: C=US; O=Let's Encrypt; CN=R3
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x562a5065a130)
> GET [path] HTTP/2
Host: www.[company_name].pt
User-Agent:Mozilla/5.0 (compatible; Integrator https://[app_name].[company_name].com)
Accept:text/html
Accept-Encoding:gzip,deflate
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive:115
Connection:keep-alive
Cache-Control:max-age=0
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
* The requested URL returned error: 400
* stopped the pause stream!
* Connection #0 to host www.[company_name].pt left intact
Does work:
* Trying [server_ip]...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x562a5066c0f0)
* Connected to www.[company_name].pt ([server_ip]) port 80 (#0)
> GET [path] HTTP/1.1
Host: www.[company_name].pt
User-Agent:Mozilla/5.0 (compatible; Integrator https://[app_name].[company_name].com)
Accept:text/html
Accept-Encoding:gzip,deflate
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive:115
Connection:keep-alive
Cache-Control:max-age=0
< HTTP/1.1 301 Moved Permanently
< Content-Type: text/html
< Content-Length: 185
< Connection: keep-alive
< Location: https://www.[company_name].pt/[path]
<
* Ignoring the response-body
* Connection #0 to host www.[company_name].pt left intact
* Issue another request to this URL: 'https://www.[company_name].pt/[path]'
* Trying [server_ip]...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x562a5066c0f0)
* Connected to www.[company_name].pt ([server_ip]) port 443 (#1)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=*.[company_name].pt
* start date: Aug 23 19:02:44 2022 GMT
* expire date: Nov 21 19:02:43 2022 GMT
* subjectAltName: host "www.[company_name].pt" matched cert's "*.[company_name].pt"
* issuer: C=US; O=Let's Encrypt; CN=R3
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x562a5066c0f0)
> GET [path]
Host: www.[company_name].pt
User-Agent:Mozilla/5.0 (compatible; Integrator https://[app_name].[company_name].com)
Accept:text/html
Accept-Encoding:gzip,deflate
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive:115
Connection:keep-alive
Cache-Control:max-age=0
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 200
< content-type: text/html; charset=utf-8
< content-length: 15305
< date: Thu, 22 Sep 2022 09:25:35 GMT
< server: Apache
< x-robots-tag: noindex
< strict-transport-security: max-age=31536000
< x-content-type-options: nosniff
< x-frame-options: SAMEORIGIN
< x-xss-protection: 1; mode=block
< referrer-policy: strict-origin-when-cross-origin
< cache-control: public, max-age=3600
< expires: Thu, 22 Sep 2022 10:25:35 GMT
< pragma: public
< accept-ranges: bytes
< content-encoding: gzip
<
* Connection #1 to host www.[company_name].pt left intact
Calling the template in the browser works, with http and https.
curl from CLI works also for both http and https and returns 200, the error only occurs with PHP.
The script (a bit blown up for error search):
function curl($url) {
    $headers[] = "User-Agent:Mozilla/5.0 (compatible; Integrator https://[app_name].[company_name].com)";
    $headers[] = "Accept:text/html";
    $headers[] = "Accept-Encoding:gzip,deflate";
    $headers[] = "Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.7";
    $headers[] = "Keep-Alive:115";
    $headers[] = "Connection:keep-alive";
    $headers[] = "Cache-Control:max-age=0";
    /**/debug_to_console("init curl");
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, $url);
    curl_setopt($curl, CURLOPT_HTTPHEADER, $headers);
    curl_setopt($curl, CURLOPT_ENCODING, "gzip");
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($curl, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($curl, CURLOPT_SSLVERSION, 6); // 6 = CURL_SSLVERSION_TLSv1_2
    curl_setopt($curl, CURLOPT_FAILONERROR, 1);
    curl_setopt($curl, CURLOPT_VERBOSE, 1);
    curl_setopt($curl, CURLOPT_STDERR, $verbose = fopen('php://temp', 'rw+'));
    curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
    $data = curl_exec($curl);
    if (curl_errno($curl)) {
        debug_to_console("curl url: " . $url);
        debug_to_console("curl error: " . curl_error($curl));
        //echo 'Request Error:' . curl_error($curl);
    }
    echo "Verbose information:\n<pre>", !rewind($verbose), htmlspecialchars(stream_get_contents($verbose)), "</pre>\n";
    curl_close($curl);
    return $data;
}
As mentioned, this works fine with templates from other markets (on the same server). I was told the instances are duplicated, so they should all have the same settings. Again, they are all on the same server. Unfortunately, I can't get the server team to check further until I can point them in the right direction and tell them what to look at.
Tests with ssllabs.com showed no differences between the instances.
Being a front-end dev, I'm really not getting any further at this point, after trying multiple suggestions for similar errors. I suspect a wrong setting on that instance, but what instructions/hints can I give the server team?
Sorry I had to remove IPs and company/app names.
Thank you.
Edit: Not sure if this is relevant, but for some reason the line > GET /[path] HTTP/2 is different in the faulty curl attempt:
Every successful curl has one space between the path and HTTP/2; the faulty one has three spaces between the path and HTTP/2.
When using http instead of https, the line also has three spaces (> GET /[path]   HTTP/1.1), gets the 301 response, and after the redirect the new attempt has only one space (> GET /[path] HTTP/2).
The URL is not the problem; it is absolutely identical to the ones used on the other instances where there is no problem. Does the server add a space character itself on the initial curl attempt? What could cause the added spaces? This has to be server-side; the other instances don't mangle the initial GET path.
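One observation on the spaces: curl builds the request line itself and does not insert spaces on its own, so extra spaces there almost always mean the URL string handed to curl already contains whitespace (e.g. picked up from the per-market i18n lookup). It may be worth sanitizing the URL defensively before the request; a small sketch (the helper name is mine, not from the original script):

```php
<?php
// Hypothetical guard: strip stray whitespace around the URL and
// percent-encode any spaces inside it, since literal spaces in the
// request line can provoke exactly this kind of 400 response.
function clean_url(string $url): string
{
    $url = trim($url);                    // drop leading/trailing spaces, \r, \n
    return str_replace(' ', '%20', $url); // encode interior spaces
}
```

Logging `var_dump($url)` with `strlen($url)` just before `curl_setopt($curl, CURLOPT_URL, ...)` would confirm or rule this out for the one faulty market.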

Curl works 90% of the time, 10% it fails with OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to api.site.com:443

I have a bug I am hoping someone can shed some insight on. I am fetching data from a client's API, and it works...90% of the time. About 1/10 times (but random frequency), curl_exec returns false with error:
OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to api.site.com:443
I've checked numerous threads on this error, and in all cases I see the issue is happening 100% of the time for the person. Has anyone dealt with something like this? If it's working sometimes and failing other times, where is the issue likely to be (the site/server or the api)?
I have experimented with different settings and nothing has fixed the issue. I've tried with and without CURLOPT_PINNEDPUBLICKEY, with and without CURLOPT_SSL_VERIFYHOST => 0 and CURLOPT_SSL_VERIFYPEER => 0, and with and without forcing HTTP and SSL versions. It always fails when I set these incorrectly, so I am pretty sure I have the right versions selected. The code lives in a custom Joomla component which had been stable before this API switch. Here is my code:
$headers = array(
    'X-apikey: ' . $this->apikey,
    "Cache-Control: no-cache",
);
$url = $this->url;
$curl = curl_init();
curl_setopt_array($curl, array(
    CURLOPT_URL => $url,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_MAXREDIRS => 10,
    CURLOPT_TIMEOUT => 30,
    CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
    CURLOPT_HTTPHEADER => $headers,
    CURLOPT_SSL_VERIFYHOST => 0,
    CURLOPT_SSL_VERIFYPEER => 0,
    CURLOPT_SSLVERSION => CURL_SSLVERSION_TLSv1_2,
    CURLOPT_PINNEDPUBLICKEY => '/path/to/api.site.com.pubkey.der',
));
$data = curl_exec($curl);
$result = json_decode($data);
curl_close($curl);
PHP version: 7.3.10
curl version: 7.66.0
openSSL library version: OpenSSL 1.1.1d 10 Sep 2019
Output from hanshenrik comment (removed company info):
Error output:
string(385) "* Trying xxx.xxx.xxx.xxx:443... * TCP_NODELAY set * Connected to api.site.com (xxx.xxx.xxx.xxx) port 443 (#0) * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /usr/local/etc/openssl#1.1/cert.pem CApath: /usr/local/etc/openssl#1.1/certs * OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to api.site.com:443 * Closing connection 0 "
Success Output
string(1324) "* Trying xxx.xxx.xxx.xxx:443... * TCP_NODELAY set * Connected to api.site.com (xxx.xxx.xxx.xxx) port 443 (#0) * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /usr/local/etc/openssl#1.1/cert.pem CApath: /usr/local/etc/openssl#1.1/certs * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384 * ALPN, server did not agree to a protocol * Server certificate: * subject: C=US; ST=Illinois; L=Chicago; O=Company Name; CN=*.site.com * start date: May 14 15:20:46 2019 GMT * expire date: Aug 8 15:41:00 2020 GMT * issuer: C=US; ST=Arizona; L=Scottsdale; O=GoDaddy.com, Inc.; OU=http://certs.godaddy.com/repository/; CN=Go Daddy Secure Certificate Authority - G2 * SSL certificate verify ok. > GET /endpoint/id?fields=field1,field2,etc HTTP/1.1 Host: api.site.com Accept: / X-apikey: xxxxx Cache-Control: no-cache * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Tue, 08 Oct 2019 21:21:45 GMT < Server: Apache < Content-Length: 614 < Content-Type: application/json < * Connection #0 to host api.site.com left intact "
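A ~10% failure rate with an otherwise-correct configuration usually points at something between the client and the API (a flaky node behind a load balancer, rate limiting, an unstable middlebox) rather than at this code, since the failure happens during the TLS handshake itself. Until the root cause is found, a bounded retry is a pragmatic mitigation. A generic sketch (the helper name and parameters are my choices, not from the question):

```php
<?php
// Retry wrapper sketch: run $fn up to $attempts times, sleeping with a
// linear backoff between failures. $fn should behave like curl_exec():
// return the response body on success and false on failure.
function with_retries(callable $fn, int $attempts = 3, int $backoffMs = 200)
{
    for ($i = 1; $i <= $attempts; $i++) {
        $result = $fn();
        if ($result !== false) {
            return $result;
        }
        if ($i < $attempts) {
            usleep($backoffMs * 1000 * $i); // 200ms, 400ms, ...
        }
    }
    return false; // every attempt failed
}
```

Wrapping only the curl_exec() call (reusing the already-configured handle) gives the intermittent handshake a second chance without rebuilding the request.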

PHP CURL "NSS: client certificate not found (nickname not specified)" Issue

A request to one specific https endpoint fails on one of my servers with the "NSS: client certificate not found (nickname not specified)" error from PHP cURL.
When I ran the same query from the command line with the verbose option, I got the same error, but after it I also got a valid response (after the error, within the same cURL request).
Then I enabled the verbose option for my PHP cURL request and displayed it for testing like this:
$error = curl_error($curl);
if ($error) {
    echo "cURL Error #:" . $error;
    rewind($verbose);
    $verboseLog = htmlspecialchars(stream_get_contents($verbose));
    echo "<pre>";
    print_r($verboseLog);
} else {
    echo $response;
}
And got this as output:
cURL Error #:NSS: client certificate not found (nickname not specified)
* About to connect() to <address> port 8001 (#0)
* Trying <ip>... * connected
* Connected to <address> (<ip>) port 8001 (#0)
* skipping SSL peer certificate verification
* NSS: client certificate not found (nickname not specified)
* SSL connection using TLS_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=*.<address>m,O=XXX A.S.,OU=IT,L=Umraniye,ST=Istanbul,C=TR
* start date: Jun 11 12:52:03 2015 GMT
* expire date: Jun 09 14:16:13 2018 GMT
* common name: *.<address>
* issuer: CN=GlobalSign Organization Validation CA - SHA256 - G2,O=GlobalSign nv-sa,C=BE
> POST /<url_part> HTTP/1.1
Host: <address>:8001
Accept: */*
Accept-Encoding: deflate, gzip
authorization: Basic UlBlahBlahnJrcjEyMzQ1Ng==
cache-control: no-cache
content-type: application/json
postman-token: eeefe018-7cac-1706-7b6d-847800a7ad0f
Content-Length: 333
< HTTP/1.1 200 OK
< set-cookie: sap-usercontext=sap-client=100; path=/
< set-cookie: SAP_SESSIONID_CFP_100=ohZvfO5ZOwq_LTE76Zgz9L-C0NdhlRHngM0AF6R3IFY%3d; path=/
< content-type: application/json; charset=utf-8
< content-length: 1150
< cache-control: max-age=0
< sap-cache-control: +180
< sap-isc-uagent: 0
<
* Connection #0 to host <address> left intact
* Closing connection #0
So, the cURL request finished with the error, but the verbose log shows that a valid response was also fetched after the error.
Here are my questions:
Is there any way to overcome this NSS issue, given that other https endpoints work completely fine? Is it connected with the endpoint's server configuration, and can it be resolved there?
If the first item fails, is it possible not to throw the cURL NSS error, but to get the response instead? The CURLOPT_FAILONERROR option does not help.
CURL was built with NSS:
curl 7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
Protocols: tftp ftp telnet dict ldap ldaps http file https ftps scp sftp
Features: GSS-Negotiate IDN IPv6 Largefile NTLM SSL libz
Update:
The second question can be closed, because this error is really strange: curl_error() returns the error, but curl_exec() returns a valid response at the same time.
Try giving an absolute path to the certificate, or add a "./" prefix, for example "./path-to-your-cert". An NSS-built libcurl treats a bare certificate name as a nickname in the NSS database; a value containing a slash is treated as a file path.
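For context on why a prefix helps: when libcurl is built against NSS, a CURLOPT_SSLCERT value without a slash is looked up as a certificate nickname in the NSS database rather than as a file, which produces exactly the "nickname not specified" wording above. A sketch, assuming the endpoint expects a client certificate (the host, port, and cert paths are placeholders, not from the question):

```php
<?php
// Sketch for an NSS-built libcurl: point it at certificate *files*.
// The "./" prefix (or an absolute path) forces file-path interpretation
// instead of an NSS database nickname lookup. Host and paths are
// placeholders only.
$curl = curl_init('https://endpoint.invalid:8001/service');
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 5);
curl_setopt($curl, CURLOPT_SSLCERT, './client-cert.pem');
curl_setopt($curl, CURLOPT_SSLKEY, './client-key.pem');
$response = curl_exec($curl); // false here, since the host is a placeholder
curl_close($curl);
```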

Connect to site with unknown scheme via Guzzle

I have a list of URLs without the scheme specified, e.g.:
github.com (works only with https);
what.ever (works only with http);
google.com (supports both schemes).
I need to get the contents of each root path (/) using Guzzle (v6), but I do not know the scheme: http or https.
Can I solve this task without making 2 requests?
Guzzle follows redirects by default, so unless you have an explicit list of URLs that are https, I would prepend http if missing and let the website redirect if it only accepts https requests (which is what it should do).
<?php
require 'vendor/autoload.php';
use GuzzleHttp\Client;
$response = (new Client)->get('http://github.com/', ['debug' => true]);
Response:
> GET / HTTP/1.1
Host: github.com
User-Agent: GuzzleHttp/6.2.1 curl/7.51.0 PHP/5.6.30
< HTTP/1.1 301 Moved Permanently
< Content-length: 0
< Location: https://github.com/
< Connection: close
<
* Curl_http_done: called premature == 0
* Closing connection 0
* Trying 192.30.253.112...
* TCP_NODELAY set
* Connected to github.com (192.30.253.112) port 443 (#1)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: github.com
* Server certificate: DigiCert SHA2 Extended Validation Server CA
* Server certificate: DigiCert High Assurance EV Root CA
> GET / HTTP/1.1
Host: github.com
User-Agent: GuzzleHttp/6.2.1 curl/7.51.0 PHP/5.6.30
< HTTP/1.1 200 OK
< Server: GitHub.com
< Date: Wed, 31 May 2017 15:46:59 GMT
< Content-Type: text/html; charset=utf-8
< Transfer-Encoding: chunked
< Status: 200 OK
Generally, no: you cannot solve the problem without two requests, because there might be no redirect (an http-only site will never redirect you to https).
You can make the 2 requests asynchronously with Guzzle; you will probably spend the same time, but with a proper general solution.
Just create two requests and wait for both:
$httpResponsePromise = $client->getAsync('http://' . $url);
$httpsResponsePromise = $client->getAsync('https://' . $url);
list($httpResponse, $httpsResponse) = \GuzzleHttp\Promise\all([
    $httpResponsePromise,
    $httpsResponsePromise,
])->wait(); // all() returns a promise; wait() unwraps both responses
That's all; now you have two responses (one for each protocol), and you made the requests in parallel.
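One caveat: \GuzzleHttp\Promise\all() rejects as soon as either promise rejects, which will happen here for any host that answers on only one scheme. \GuzzleHttp\Promise\settle() waits for both outcomes instead. A sketch for Guzzle 6 (the hostname is a placeholder, and this assumes a composer install of Guzzle):

```php
<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;

// settle() resolves even when one request fails, so an http-only or
// https-only host does not reject the whole batch.
$client = new Client(['timeout' => 10]);
$url = 'example.com'; // placeholder

$results = \GuzzleHttp\Promise\settle([
    'http'  => $client->getAsync('http://' . $url),
    'https' => $client->getAsync('https://' . $url),
])->wait();

foreach ($results as $scheme => $outcome) {
    // each outcome is ['state' => 'fulfilled', 'value' => Response]
    // or ['state' => 'rejected', 'reason' => Exception]
    if ($outcome['state'] === 'fulfilled') {
        echo $scheme . ' works: ' . $outcome['value']->getStatusCode() . PHP_EOL;
    }
}
```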

php file not running in command line... but works when run on localhost

I created a really simple small PHP script that I was hoping to run from the command line in my macOS terminal.
If I just echo "test", that gets output just fine.
Then I added a curl request to the script to grab some data from an API and print_r() the result.
Unfortunately, the result is never output to the command line. If I run the exact same PHP file on my MAMP localhost, everything works fine and the content is output correctly.
This is my code:
require 'connection.php';
$store = new connection('danny', 'https://store-bgf5e.mybigcommerce.com/api/v2/', 'XXXXX');
$customers = $store->get('/customers');
foreach ($customers as $customer) {
    print "------ Getting Addresses for: " . $customer['first_name'] . ' ' . $customer['last_name'] . ' -------';
    if (isset($customer['addresses']['resource'])) {
        print_r($store->get($customer['addresses']['resource']));
    }
}
exit;
I know that the API returns the addresses just fine so the call definitely works.
This is what Curl is saying:
* Trying 63.141.159.120...
* TCP_NODELAY set
* Connected to store-bgf5e.mybigcommerce.com (63.141.159.120) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /Applications/MAMP/Library/OpenSSL/cert.pem CApath: none
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: C=AU; ST=New South Wales; L=Ultimo; O=Bigcommerce Pty Ltd; CN=*.mybigcommerce.com
* start date: Jun 4 00:00:00 2015 GMT
* expire date: Sep 1 12:00:00 2018 GMT
* subjectAltName: host "store-bgf5e.mybigcommerce.com" matched cert's "*.mybigcommerce.com"
* issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=DigiCert SHA2 High Assurance Server CA
* SSL certificate verify ok.
> GET /api/v2/customers HTTP/1.1
Host: store-bgf5e.mybigcommerce.com
Authorization: Basic ZGFubnk6M2QyNjE5NTdjZGNjMTNlYmU1MTJiNDRiMjE1NGJiZGZmMGRjNTUxMw==
Accept: application/json
Content-Type: application/json
< HTTP/1.1 200 OK
< Server: openresty
< Date: Fri, 05 May 2017 19:26:27 GMT
< Content-Type: application/json
< Transfer-Encoding: chunked
< Connection: keep-alive
< Set-Cookie: fornax_anonymousId=0ff167a5-e158-4ff8-8962-67b19bd4271b; expires=Mon, 03-May-2027 19:26:27 GMT; path=/; domain=.store-bgf5e.mybigcommerce.com
< Last-Modified: Thu, 13 Apr 2017 22:34:54 +0000
< X-BC-ApiLimit-Remaining: 2147483622
< X-BC-Store-Version: 7.6.0
<
* Curl_http_done: called premature == 0
* Connection #0 to host store-bgf5e.mybigcommerce.com left intact
To me it looks like Curl is executing everything correctly. It's almost like the successful curl request prevents any other PHP code from running. I would expect to see addresses being printed out to the command line because of "print_r".
Thanks in advance for any tips you might have.
Maybe you can try to var_dump your result and see what happens. In your curl response I did not see any Content-Length header; I think curl returned an empty string.
And please, can you post your PHP code?
Figured this out. I was using this framework to connect to the BigCommerce API: https://github.com/adambilsing/PHP-cURL-lib-for-Bigcommerce-API/blob/master/connection.php
In the http_parse_headers method of the connection class there is an if statement on line 54:
if ($retVal['X-Bc-Apilimit-Remaining'] <= 100) {
    sleep(300);
}
This would put the script to sleep even when $retVal['X-Bc-Apilimit-Remaining'] is null, because null <= 100 evaluates to true under PHP's loose comparison rules.
Changing it to this fixed the issue:
if (isset($retVal['X-Bc-Apilimit-Remaining']) && $retVal['X-Bc-Apilimit-Remaining'] <= 100) {
    sleep(300);
}
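The guard matters because of PHP's loose comparisons: when the header is absent, the lookup yields null, and null <= 100 evaluates to true, so the script slept for five minutes whenever the header was missing. This can be checked in isolation:

```php
<?php
// With the header missing, the unguarded comparison is still true,
// because null <= 100 is true under PHP's loose comparison rules.
$retVal = [];                                      // no X-Bc-Apilimit-Remaining key
$remaining = $retVal['X-Bc-Apilimit-Remaining'] ?? null;
var_dump($remaining <= 100);                       // bool(true)  -> sleep(300) fires
var_dump(isset($remaining) && $remaining <= 100);  // bool(false) -> guarded version skips it
```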
It's pretty obvious once you look into it. I wasn't even aware that there is a sleep() function in PHP.
Thanks for everybody who was trying to make sense of this :)
