I'm trying to run both my API and my client in the same Vagrant VM. In the client I'd like to use Guzzle. When I try to set up a simple test, I get the following error from curl:
Fatal error: Uncaught exception 'GuzzleHttp\Exception\RequestException' with message '[curl] (#6) See http://curl.haxx.se/libcurl/c/libcurl-errors.html for an explanation of cURL errors [url]
When I use a GitHub URL instead, it all works fine. One thing I'm sure of is that there is no typo in my URL.
I have both the client and the API pointing to the IP address of my VM, and both run fine separately.
I also came across a thread suggesting that a cacert.pem certificate be specified in php.ini, which I have tried, but it didn't work.
Does anyone know how to solve this?
Stupid of me. I had to add 127.0.0.1 api.dev to the hosts file on my VM.
The error number 6 (CURLE_COULDNT_RESOLVE_HOST) means that libcurl failed to resolve a host name to an IP address.
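A quick way to check for this from PHP is to try resolving the hostname directly. This is only a minimal sketch; api.dev is the hypothetical hostname from the example above.

```php
<?php
// gethostbyname() returns its argument unchanged when resolution
// fails -- the same condition cURL reports as error 6.
function resolves(string $host): bool
{
    return gethostbyname($host) !== $host;
}

$host = 'api.dev'; // hypothetical hostname from the example above
if (resolves($host)) {
    echo "$host resolves; cURL error 6 has another cause\n";
} else {
    echo "Cannot resolve $host -- add '127.0.0.1 $host' to /etc/hosts\n";
}
```

If the name fails to resolve here, fixing the hosts file (or DNS) will also fix the Guzzle request.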
I am using Sentry (version 2.10) with Laravel v6.20.27 and PHP v7.4.19. I followed the steps in the documentation (https://docs.sentry.io/platforms/php/guides/laravel/other-versions/laravel5-6/). However, I am not able to proceed, as I am getting the error below:
There was an error sending the event.
SDK: Failed to send the event to Sentry. Reason: "SSL peer certificate or SSH remote key was not OK for "https://oXXXXXXX.ingest.sentry.io/api/XXXXXXXX/store/".".
Please check the error message from the SDK above for further hints about what went wrong.
Please help me to solve this issue.
I faced the same issue locally. I'm using Laragon on Windows.
I downloaded a new cacert.pem file using this link.
Replace the one currently in your ssl folder with it. Hope this fixes your issue.
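For reference, the php.ini directives that point PHP at the bundle look like this; the path is a hypothetical stand-in for wherever your Laragon install keeps its ssl folder:

```ini
; php.ini -- point both cURL and the OpenSSL stream wrappers at the
; freshly downloaded bundle. Adjust the path to your install.
curl.cainfo = "C:\laragon\etc\ssl\cacert.pem"
openssl.cafile = "C:\laragon\etc\ssl\cacert.pem"
```

Restart the web server after editing php.ini so the change takes effect.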
When you generate an SSL certificate, you have to specify the host or server name. If it doesn't match your real host or server name, you see this error.
Check this repo: https://gitlab.com/damp-stack/mysql-ssl-docker/-/blob/master/gencerts.sh. The fake-server value there should match your server name.
My dilemma is that I have two domains running on localhost, domain_a and domain_b. They're both running nginx, apache, and php-fpm. domain_a is running CodeIgniter 3.0.0, and domain_b is running CodeIgniter 4. In another VM, I had domain_a in a Docker container, and was able to hit the API endpoints in domain_b without any issues. Development work made it a requirement to have them both be on the same server, as it's close to how it will be in other environments.
For specifics, we're using the PHP oAuth module, and it throws an error that "making the request failed (dunno why)", which is extremely helpful. After some digging, I found that I could hit other endpoints without issue (such as google.com and a known endpoint outside these domains). I attempted to use cURL in place of oAuth (just a simple test to hit the endpoint), and I consistently get the same error.
tls_process_server_certificate:certificate verify failed
The certs I use are all self-signed for both domains, and I'm able to reach both domains from within the browser without issue. If it matters, both domains have user certs when logging in, but the users aren't the same, as each domain has their own self-signed CA.
My current code is this:
$conn = new OAuth($consumer_key, $consumer_secret, $oauth_sig_method);
$conn->enableDebug();
/*
if (is_on_local()) {
$conn->setCAPath('path/to/cert.cert');
}*/
$conn->disableSSLChecks();
$token = $conn->getRequestToken($auth_url);
I left the commented-out part in to show what I've tried: I've pointed that at the system cert, the domain_a CA, and the domain_b CA, none of which worked. It looks like (for some reason) $conn->disableSSLChecks() isn't working, but I'm not sure of that. The error is thrown in the call to getRequestToken().
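As a cross-check outside the OAuth extension, a plain cURL request pointed explicitly at the self-signed CA can tell you whether the certificate chain itself is the problem. This is only a sketch; the URL and CA path are hypothetical stand-ins for this setup.

```php
<?php
// Minimal sketch: verify domain_b's certificate against its own
// self-signed CA instead of the system trust store.
function fetch_with_ca(string $url, string $caFile): string|false
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_SSL_VERIFYPEER => true,    // keep verification on
        CURLOPT_SSL_VERIFYHOST => 2,       // and check the hostname
        CURLOPT_CAINFO         => $caFile, // trust the self-signed CA
    ]);
    $body = curl_exec($ch);
    if ($body === false) {
        error_log('curl: ' . curl_error($ch));
    }
    curl_close($ch);
    return $body;
}

// Hypothetical endpoint and CA bundle path:
$body = fetch_with_ca('https://domain_b.tld/oauth/access', '/etc/pki/tls/domain_b_ca.pem');
```

If this succeeds while the OAuth call fails, the problem is in how the OAuth extension (or its stream wrapper) is configured rather than in the certificates.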
My /etc/hosts file:
127.0.0.1 domain_a.tld
127.0.0.1 domain_b.tld
The actual TLD isn't tld, but again, they work in the browser and it worked before when domain_a was in Docker.
I've already tweaked domain_a enough so CI 3 works with PHP 8, so I'm convinced the problem is talking from one domain to the other. I'm running RHEL 8, and I've already got SELinux set to Permissive (actually disabled, I think, for development). There's nothing in httpd, nginx, php-fpm, or firewall logs. The only indicator I have is what I get from CI 3 logs in domain_a:
Severity: Warning --> OAuth::getRequestToken(): SSL operation failed with code 1.
OpenSSL Error messages:error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed in /path/to/file
Severity: Warning --> OAuth::getRequestToken(): Failed to enable crypto in /path/to/file
Severity: Warning --> OAuth::getRequestToken(domain_b/oauth/access): Failed to open stream: operation failed in /path/to/file
I feel like the answer is right there, I'm just not seeing it.
As usual, shortly after explaining my issue I found the fix.
Currently, I have the endpoint as https://domain_b.tld/oauth/access. After some tinkering, I got a different error about the SSL version. That put me on the track to the correct answer:
http://domain_b.tld:port/oauth/access. I'm able to hit the endpoint now without issue. Even though both domains are on the same port in my virtual hosts file, I had to specify the port or the call fails.
If anyone else runs into this issue, check the base URL. I never would have thought about hitting http rather than https as a solution.
I am learning to work with the YouTube Data API v3 (using PHP). I downloaded the sample API code and managed to download and install Composer (version 1.4.x) in my working directory successfully.
After this, I ran the search.php script and it shows the following error:
Fatal error: Uncaught exception 'GuzzleHttp\Exception\RequestException' with message 'cURL error 60: SSL certificate problem: unable to get local issuer certificate (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)' in C:\wamp\www\youtube feeds\vendor\guzzlehttp\guzzle\src\Handler\CurlFactory.php on line 187
( ! ) GuzzleHttp\Exception\RequestException: cURL error 60: SSL certificate problem: unable to get local issuer certificate (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) in C:\wamp\www\youtube feeds\vendor\guzzlehttp\guzzle\src\Handler\CurlFactory.php on line 187 .
I am using WAMP with PHP 5.5.12 and Apache 2.4.9. I have also enabled the curl extension from the tray and in the php.ini file.
If you're just starting out, don't try to jump into the deep end.
Start with the plain REST API side of things.
As an example, you can do this:
$url_link = 'https://www.googleapis.com/youtube/v3/videos?part=snippet&id=[VIDEO_ID]&key=[API_KEY]';
$video = file_get_contents($url_link);
$data= json_decode($video, true);
Then you can grab the required info from that call as you like. Note that the videos endpoint wraps results in an items array, so the first video's ID is:
$vid = $data['items'][0]['id'];
Libraries are good for streamlining large programs, but they're not always needed.
The issue is due to a missing "cacert.pem" file (or one provided by the host operating system that runs PHP). This file lists trusted certificate authorities, so that cURL can connect to YouTube securely (and know it's YouTube, not a man-in-the-middle attacker).
You can download this file manually and specify it in your php.ini, but the better option is to use the "certainty" PHP package to manage it. I would advise using Composer; it's very easy to start using.
So I have a website built in PHP that was working perfectly on one server. I then moved it to a server I have on DigitalOcean and am running into several errors. They seem to be based around HTTP request failures while using the Imagick library.
I was hoping not to have to start debugging this from a code point of view, as it was already working perfectly, and would prefer to change server settings.
Fatal error: Uncaught exception 'ImagickException' with message 'Imagick::__construct(): HTTP request failed! HTTP/1.1 404 Not Found .
I cannot figure out what the differences are; on both servers allow_url_fopen is set to On.
The PHP version is different: 5.5.20 on the original server, 5.5.9 on the new server.
The versions of Imagick are the same. I am also getting some other errors using the mPDF library, but I will try to deal with those later (I'm hoping that if I can resolve the first ones, these will also get resolved).
My question is: is there possibly another setting on the server I should be looking out for that may be causing these PHP errors?
EDIT:
Just to add more information: I can get rid of some of the errors by changing the file path from https://www.example.com/myimage.jpg to /var/www/example/myimage.jpg. This solves some of the errors, but I would rather get to the root of the issue that's causing it not to work in the first place, because I feel that the same problem is causing the other errors.
The error code says it: 404, file not found. You are probably using the wrong URL.
Are you able to fetch https://www.example.com/myimage.jpg using a web browser?
On several popular Linux distros with the default configuration, /var/www/example/myimage.jpg would be served at https://www.example.com/example/myimage.jpg instead of https://www.example.com/myimage.jpg.
[edit]
It just came to my attention that the URL is HTTPS, there is a possibility that the script is rejecting the server certificate. Try with regular HTTP - no point in using SSL if the file is on the same machine.
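Following that advice, one way to avoid the HTTP round trip entirely is to map the same-host URL onto the document root before handing it to Imagick. This is a minimal sketch; the document root below is a hypothetical stand-in.

```php
<?php
// Minimal sketch: translate a same-host image URL into a filesystem
// path so Imagick reads the file directly instead of over HTTP(S).
// The document root is a hypothetical stand-in for your vhost config.
function local_image_path(string $url, string $docRoot): string
{
    return rtrim($docRoot, '/') . parse_url($url, PHP_URL_PATH);
}

$path = local_image_path('https://www.example.com/myimage.jpg', '/var/www/example');
// $path is '/var/www/example/myimage.jpg'; pass it to new Imagick($path)
```

This sidesteps both the 404 (wrong URL) and any certificate-verification problems, since no network request is made.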
I know there are countless threads on this, but nothing suggested has worked for me.
When trying to create a new SoapClient, I get the error:
Fatal error: Uncaught SoapFault exception: [WSDL] SOAP-ERROR: Parsing WSDL:
Couldn't load from 'https://m2mconnect.ee.co.uk/orange-soap/services/MessageServiceByCountry?wsdl' :
failed to load external entity "https://m2mconnect.ee.co.uk/orange-soap/services/MessageServiceByCountry?wsdl"
The WSDL file is: https://m2mconnect.ee.co.uk/orange-soap/services/MessageServiceByCountry?wsdl.
I thought it might be an HTTPS issue, but I've successfully loaded other WSDL clients. It seems to be just this EE one that won't work, and it's getting very frustrating!
I'm running my application on a Vagrant instance with PHP 5.5, on a Mac OS X host. I've tried running it directly on Mac OS X and I get the same problem.
I've also tried setting the 'ssl_method' option for SoapClient, but that has no effect.
I've tried to curl/wget the URL and it gets an SSL error, whereas the handshake completes with something like https://paypal.com.
Does anyone have any ideas what could be causing this?
Update:
I ran this on the Vagrant box: curl -v -SSLv3 https://m2mconnect.ee.co.uk/orange-soap/services/MessageServiceByCountry?wsdl and it connected successfully. However, creating a SoapClient with the ssl_method option set to SOAP_SSL_METHOD_SSLv3 doesn't work.
I ended up forking BeSimpleSoapClient, which wraps SoapClient and uses cURL to fetch the SOAP service. I modified their Curl class to allow setting the cURL SSL version: https://github.com/robcaw/BeSimpleSoapClient
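For anyone who would rather not fork a library: SoapClient also accepts a stream_context option, which lets you pin the crypto method on the underlying stream (PHP >= 5.6). Whether this helps depends on what the server negotiates, so treat it as a sketch; the choice of TLS 1.2 here is an assumption.

```php
<?php
// Minimal sketch: force a specific TLS version on the stream that
// SoapClient uses to fetch the WSDL. TLS 1.2 is an assumption; try
// other STREAM_CRYPTO_METHOD_* constants if it fails.
$context = stream_context_create([
    'ssl' => [
        'crypto_method' => STREAM_CRYPTO_METHOD_TLSv1_2_CLIENT,
    ],
]);

if (class_exists('SoapClient')) {
    try {
        $client = new SoapClient(
            'https://m2mconnect.ee.co.uk/orange-soap/services/MessageServiceByCountry?wsdl',
            ['stream_context' => $context]
        );
    } catch (SoapFault $e) {
        // Network/SSL failures surface here instead of as fatals.
        error_log('WSDL fetch failed: ' . $e->getMessage());
    }
}
```

Unlike the ssl_method option, the stream context applies to the WSDL download itself, which is where this particular error occurs.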
I was able to access the WSDL location right now with a browser.
So the only answer remaining is: if you get SSL errors when trying to download the WSDL with curl or wget, it is safe to assume PHP hits the same error. The solution is to fix those errors, and, as a first step, to mention your findings in your question.
As it looks right now, PHP is not to blame; everything you did should work. The only thing missing is the network connection to fetch the WSDL.