After running an apt upgrade and restarting my Ubuntu server, cURL (via Guzzle) reports an error saying the host cannot be resolved.
cURL error 6: Could not resolve host: xx.xx (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
My code is:
use GuzzleHttp\Client;
use GuzzleHttp\RequestOptions;

$client = new Client();
$response = $client->post("https://xx.xx?r=/center/api", [
    RequestOptions::HEADERS => [
        'X-Requested-With' => 'XMLHttpRequest'
    ]
]);
This happens randomly and with multiple domains. Meanwhile, I was running pings for those same domains in the terminal and they resolved fine.
On Stack Overflow and Google I could only find solutions that add the host to the hosts file, but that does not seem like a real solution to me.
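(As a stopgap that at least avoids editing /etc/hosts, libcurl can be told which IP to use for a single request via CURLOPT_RESOLVE, which Guzzle passes through its curl request option. This only hides the symptom rather than fixing the resolver, and the IP below is a placeholder.)

use GuzzleHttp\Client;
use GuzzleHttp\RequestOptions;

// Sketch of a per-request workaround: pin the hostname to a known IP in the
// form "host:port:ip", bypassing the system resolver for this request only.
// 203.0.113.10 is a placeholder - substitute the real address of xx.xx.
$client = new Client();
$response = $client->post("https://xx.xx?r=/center/api", [
    RequestOptions::HEADERS => ['X-Requested-With' => 'XMLHttpRequest'],
    'curl' => [
        CURLOPT_RESOLVE => ['xx.xx:443:203.0.113.10'],
    ],
]);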
Related
My code fails to reach the Docker socket:
$client = new GuzzleHttp\Client();
$test = $client->request('GET', 'http://v1.40/containers/json', [
    'curl' => [CURLOPT_UNIX_SOCKET_PATH => '/var/run/docker.sock']
]);
I only get a generic cURL error 7 from that, and I've checked that the socket is available and working inside the container using the curl command from the command line. It's only when I try to connect via PHP that it fails, and frankly I'm out of ideas.
So, just in case someone stumbles upon this in the future with the same or a similar problem:
Guzzle was not the problem in this case, but php-fpm. I did not realize that the php-fpm workers in the official PHP Docker image run as the www-data user by default. The fix I used was to change the user in www.conf (by default /usr/local/etc/php-fpm.d/www.conf in Docker):
user = root
group = root
You will also have to append the -R flag to the php-fpm command to allow running the workers as root.
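If you want to confirm which user the workers are actually running as, a quick sketch like the one below, dropped into any request, prints the effective user and whether it can read the socket (it assumes the posix extension is available, which the official image ships enabled):

// Sketch: show the effective user of the php-fpm worker and check whether it
// has read access to the Docker socket.
$user = posix_getpwuid(posix_geteuid());
var_dump($user['name']);
var_dump(is_readable('/var/run/docker.sock'));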
I have a local WordPress installation running at https://catalogue3.test.
Note that all .test domains should resolve to localhost, as I use Laravel Valet. However, when I execute the following code in my Laravel project, I get the exception shown below.
$client = new \GuzzleHttp\Client();
$response = $client->request('GET', "https://catalogue3.test", ['verify' => false]);
ConnectException
cURL error 6: Could not resolve: catalogue3.test (Domain name not found) (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
When I run the command below in the terminal, the WordPress page is displayed.
curl https://catalogue3.test/ --insecure
Add
127.0.0.1 catalogue3.test
to your /etc/hosts file. Since Valet serves the site locally, the IP is 127.0.0.1.
I tried adding the domain to the hosts file and changing the DNS in my network settings; this answer is what worked for me.
A quick way to check whether this is your problem is to compare the output of:
curl --version
php --ri curl
The versions should match. If they don't, it's probably because brew has installed curl-openssl. That can be removed with:
brew uninstall curl-openssl --ignore-dependencies
Maybe there's a way to configure the installed curl-openssl properly; I haven't investigated that yet.
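If you'd rather compare from inside PHP, curl_version() reports the libcurl build PHP is actually linked against:

// Sketch: print the libcurl version and SSL backend PHP is linked against,
// to compare with what `curl --version` reports on the command line.
$v = curl_version();
echo $v['version'] . ' (' . $v['ssl_version'] . ')' . PHP_EOL;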
I solved this by adding catalogue3.test to /etc/hosts, even though I was using Dnsmasq and, in theory, shouldn't need it.
In my case (on macOS) I had to add 127.0.0.1 as the first DNS server in my Wi-Fi settings.
Some useful info here too: https://github.com/laravel/valet/issues/736
I updated my curl to work with HTTPS. It works in the terminal, but when I use cURL in PHP it doesn't work for any HTTPS URL.
The error code I get is 77. I have looked into other solutions, but none of them work.
I have already tried setting verify host, SSL version 6, and return transfer; nothing works.
A simple example:
$ch = curl_init("https://www.google.com");
$response = curl_exec($ch);
$error = curl_error($ch);
$number = curl_errno($ch);
curl_close($ch);

$response = array(
    'Result' => array(
        'error' => $error,
        'number' => $number,
        //'message' => $fields,
        'count' => $response
    )
);
$this->jsonOutput($response);
In the terminal, curl https://www.google.com works fine.
What is going on? The curl in PHP was working just fine before.
As you are using yum, I assume you are working on a CentOS distro.
I did some brief research and it seems it could be an issue with the CentOS NSS package, triggered by your yum update. You could try some basic process restarts.
Try to restart your httpd service:
service httpd restart
Or via apachectl:
apachectl stop
apachectl start
And restart your php-fpm:
sudo service php-fpm restart
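If the restarts don't help: cURL error 77 is CURLE_SSL_CACERT_BADFILE, meaning curl could not read its CA certificate bundle, so it's worth checking which bundle PHP's curl is pointed at. A rough sketch (the bundle path is a guess for CentOS, adjust for your system):

// Sketch: show PHP's CA bundle settings, then retry with an explicit bundle.
// /etc/pki/tls/certs/ca-bundle.crt is the usual CentOS location.
var_dump(ini_get('curl.cainfo'), ini_get('openssl.cafile'));

$ch = curl_init("https://www.google.com");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CAINFO, '/etc/pki/tls/certs/ca-bundle.crt');
$ok = curl_exec($ch);
echo $ok === false ? curl_errno($ch) . ': ' . curl_error($ch) : 'TLS OK';
curl_close($ch);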
I have to get the response from an external API from within a Vagrant machine. The API URL is:
http://local.foobar.vhost/controller/action/
Typing the URL on the host machine, with MAMP turned on and vhosts set properly, I get the proper response:
{"response": "success", "message": "not set"}
And the same goes for the cURL command:
curl 'http://local.foobar.vhost/controller/action/'
I get the same response here, so everything is ok.
But then I open the Vagrant box with the vagrant ssh command, try to get the same response using the curl command, and I get the following error:
Couldn't resolve host local.foobar.vhost
I also tried to add the port that MAMP uses, which is 80, so:
curl 'http://local.foobar.vhost:80/controller/action/'
But it gives me the same error.
How can I make this work?
Thanks!
From this code I'm getting the error below.
require "vendor/autoload.php";

use Aws\Common\Aws;
use Aws\Common\Enum\Region;
use Aws\DynamoDb\DynamoDbClient;
use Aws\DynamoDb\Enum\ComparisonOperator;
use Aws\DynamoDb\Enum\KeyType;
use Aws\DynamoDb\Enum\Type;

$aws = Aws::factory(array(
    'key' => '[clipped]',
    'secret' => '[clipped]',
    'region' => Region::US_WEST_1
));

$client = $aws->get("dynamodb");
$tableName = "ExampleTable";

$result = $client->createTable(array(
    "TableName" => $tableName,
    "AttributeDefinitions" => array(
        array(
            "AttributeName" => "Id",
            "AttributeType" => Type::NUMBER
        )
    ),
    "KeySchema" => array(
        array(
            "AttributeName" => "Id",
            "KeyType" => KeyType::HASH
        )
    ),
    "ProvisionedThroughput" => array(
        "ReadCapacityUnits" => 5,
        "WriteCapacityUnits" => 6
    )
));

print_r($result->getPath('TableDescription'));
I'm getting the following error when trying to add a table into AWS's DynamoDB:
PHP Fatal error: Uncaught Aws\DynamoDb\Exception\DynamoDbException: AWS Error Code: InvalidSignatureException, Status Code: 400, AWS Request ID: [clipped], AWS Error Type: client, AWS Error Message: Signature expired: 20130818T021159Z is now earlier than 20130818T021432Z (20130818T022932Z - 15 min.), User-Agent: aws-sdk-php2/2.4.3 Guzzle/3.7.2 curl/7.21.6 PHP/5.3.6-13ubuntu3.9
thrown in /var/www/vendor/aws/aws-sdk-php/src/Aws/Common/Exception/NamespaceExceptionFactory.php on line 91
So far I've:
Checked whether the authentication key and secret key were correct; they were.
Updated cURL.
Put in false authentication credentials; the error didn't change.
It seems that your local system time might be incorrect. I've had a similar problem with AWS S3, where my system clock was skewed by 30 minutes.
If you're running Ubuntu, try updating your system time:
sudo ntpdate ntp.ubuntu.com
You can also restart your date service to solve the problem if you've already got ntpdate installed:
sudo service ntpdate stop
sudo service ntpdate start
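If you want to confirm the skew before touching NTP, a rough check is to compare the Date header an AWS endpoint returns against the local clock (the endpoint below is just the us-west-1 DynamoDB one from the question; any server with a correct clock would do):

// Sketch: fetch the Date header from the AWS endpoint and compare it with the
// local clock. Skew beyond roughly 15 minutes causes InvalidSignatureException.
$ch = curl_init('https://dynamodb.us-west-1.amazonaws.com');
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HEADER => true,
    CURLOPT_NOBODY => true,
));
$headers = curl_exec($ch);
curl_close($ch);

if (preg_match('/^Date:\s*(.+)$/mi', $headers, $m)) {
    $skew = abs(time() - strtotime(trim($m[1])));
    echo "Clock skew vs AWS: {$skew} seconds\n";
}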
If you are using docker-machine on Mac, you can resolve with this command:
docker-machine ssh default 'sudo ntpclient -s -h pool.ntp.org'
Quick note for vagrant projects: this is usually resolved by vagrant reload.
Not exactly the OP's question, but this is the top Google result for "InvalidSignatureException DynamoDB", which has many underlying causes.
For me, it was because my request body contained emoji; 100% reproducible. I worked around it by encoding the body (in my case stringified JSON) using encodeURIComponent.