Setup/Environment:
In our PHP application, we sometimes need to make HTTPS requests from PHP to other servers. The setup in question is as follows:
We are using PHP stream wrappers for the HTTP requests (via Guzzle HTTP). We are doing this because stream wrappers support using the Windows Certificate Store for certificate verification (a minimal configuration sketch follows this setup description).
The server runs on Windows.
We use a proxy for the HTTPS requests.
The firewalls are configured to allow:
Access to the servers we are doing our requests to.
Access to all certificate revocation lists relevant for the certificates used.
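For reference, here is a minimal sketch of the setup described above. The proxy address and target host are placeholders, not our real configuration; the point is that the stream handler makes Guzzle use the PHP stream wrapper instead of cURL:
use GuzzleHttp\Client;
use GuzzleHttp\Handler\StreamHandler;
use GuzzleHttp\HandlerStack;
require 'vendor/autoload.php';
// Force Guzzle to use the PHP stream wrapper instead of cURL.
$handler = HandlerStack::create(new StreamHandler());
$client = new Client([
    'handler'  => $handler,
    'base_uri' => 'https://service.example.internal/',   // placeholder target
    'proxy'    => 'http://proxy.example.internal:8080',  // placeholder proxy
    'verify'   => true, // verify certificates against the system trust store
]);
$response = $client->request('GET', '/status');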
Our problem:
Sometimes, out of the blue, our HTTPS requests fail with certificate validation errors. This problem persists until someone opens a remote desktop session to the server and requests the very same URL we are trying to query in the server's Internet Explorer. After that, our PHP application can do its requests as it should.
Question:
What is the problem here? And what can we do to analyse this further?
If that were a Guzzle problem, it would happen every time.
However, do try to issue the same HTTPS call using cURL, both to verify that this is the case, and to see if by any chance the cURL request also temporarily clears the issue, just as Internet Explorer does.
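For example, a quick cross-check from PHP with cURL might look like this (the URL and proxy are placeholders for the ones that fail through Guzzle):
$ch = curl_init('https://service.example.internal/status'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_PROXY, 'http://proxy.example.internal:8080'); // same proxy as the stream requests
$body = curl_exec($ch);
if ($body === false) {
    echo 'cURL error: ' . curl_error($ch);
}
curl_close($ch);
If this call succeeds while the stream-based Guzzle call fails, that points at the stream wrapper / certificate store side rather than the network.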
But this rather looks like a caching problem - the PHP server request is not able to properly access the Certificate Store (to prime the certificates); it is only able to use its services after someone else has gained access, and only as long as the cache has not expired. To be sure this is the case, simply issue calls periodically and note the time elapsed between a user logging in and using IE, and the Guzzle calls starting to fail again. If I am right, that time will always be the same.
It could be a permission problem (I think it probably is, but what permissions to give to what, I'm at a loss to guess). Maybe calls aren't allowed unless fresh CRLs for that URL are available, and PHP doesn't get them. This situation could also either be fixed temporarily by running an IE connection attempt to the same URL from a PowerShell script launched by PHP in case of error, or (more likely, and hopefully) attempting to run said script will elicit a more informative error message.
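A rough sketch of that fallback, assuming a shell_exec-based probe is acceptable in your environment (the URL is a placeholder; Invoke-WebRequest goes through the Windows certificate store, so its error text is often more specific than PHP's):
// Hypothetical fallback: on a certificate failure, probe the same URL via PowerShell
// to possibly prime the certificate/CRL cache and to capture a more detailed error.
$url = 'https://service.example.internal/status'; // placeholder
$cmd = 'powershell -NoProfile -Command "try { Invoke-WebRequest -Uri \'' . $url . '\' -UseBasicParsing | Out-Null; \'OK\' } catch { $_.Exception.Message }"';
$output = shell_exec($cmd);
error_log('PowerShell probe result: ' . trim((string) $output));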
update
I have looked into how PHP on Windows handles TLS through Guzzle, and nothing obvious came out. But I found an interesting page about TLS/SSL quirks.
More interestingly, I also found out several references on how PHP ends up using Schannel for TLS connections, and how Windows and specifically Internet Explorer have a, let us say, cavalier attitude about interoperability. So I would suggest you try activating the Schannel log on Windows and see whether anything comes out of it.
Additionally, on the linked page there is a reference to a client cache being used, and the related page ends up here ("ClientCacheTime").
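Regarding the Schannel log suggestion above: the switch is a registry value, roughly like this (7 logs errors, warnings and informational events; a reboot is needed before Schannel events show up in the System event log):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL" /v EventLogging /t REG_DWORD /d 7 /f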
It's not an application problem.
I am 99% sure this is a routing problem and in some circumstances packets are dropped in the router. I would look at the network, change the environment or, if possible, do some network sniffing or monitoring.
If you have a decent network infrastructure, you can use SNMP traps to collect request-count and timeout data (from routers and switches) and ingest it into Elastic APM. This would give you quite a detailed time-series analysis.
You can see from https://github.com/guzzle/guzzle/issues/394 that verify is the problem, and if you set verify to false you will expose your system to security attacks.
// Use the system's CA bundle (this is the default setting)
$client->request('GET', '/', ['verify' => true]);
// Use a custom SSL certificate on disk.
$client->request('GET', '/', ['verify' => '/path/to/cert.pem']);
// Disable validation entirely (don't do this!).
$client->request('GET', '/', ['verify' => false]);
These are the request options, and they show how to do SSL certificate verification. The documentation describes the issue as follows:
Not all systems have a known CA bundle on disk. For example, Windows
and OS X do not have a single common location for CA bundles. When
setting "verify" to true, Guzzle will do its best to find the most
appropriate CA bundle on your system. When using cURL or the PHP
stream wrapper on PHP versions >= 5.6, this happens by default. When
using the PHP stream wrapper on versions < 5.6, Guzzle tries to find
your CA bundle in the following order:
Check if openssl.cafile is set in your php.ini file.
Check if curl.cainfo is set in your php.ini file.
Check if /etc/pki/tls/certs/ca-bundle.crt exists (Red Hat, CentOS, Fedora;
provided by the ca-certificates package)
Check if /etc/ssl/certs/ca-certificates.crt exists (Ubuntu, Debian; provided by
the ca-certificates package)
Check if /usr/local/share/certs/ca-root-nss.crt exists (FreeBSD; provided by
the ca_root_nss package)
Check if /usr/local/etc/openssl/cert.pem exists (OS X; provided by homebrew)
Check if C:\windows\system32\curl-ca-bundle.crt exists (Windows)
Check if C:\windows\curl-ca-bundle.crt exists (Windows)
The result of this lookup is cached in memory so that subsequent calls in the same
process will return very quickly. However, when sending only a single
request per-process in something like Apache, you should consider
setting the openssl.cafile PHP ini setting to the path on disk of
the file so that this entire process is skipped.
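If you go that route, the relevant php.ini entry would look roughly like this (the bundle path is a placeholder; point it at whatever PEM bundle your server should trust):
; php.ini - pin the CA bundle so the per-request lookup above is skipped
openssl.cafile = "C:\path\to\cacert.pem"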
See also: how to ignore invalid SSL certificate errors in Guzzle 5, and guzzle-request-fails.
Related
I have a Symfony2 application that has a long polling mechanism implemented. The user logs in to the application, and at a certain point a long polling request is started to notify the user about some changes while he still works inside the application.
The PHP session is saved in the database, so no session locking problems occur while opening other ajax requests during the long polling request.
After installing an SSL certificate the problems appeared: the long polling seems to lock other requests while it is running, behaving like the normal PHP session. Although the PHP session is still saved to and read from the database, the application behaves as if a locking mechanism were present and doesn't allow two requests at the same time.
Is this a problem with configuring the SSL module, or am I missing something about Symfony's behavior? If I disable SSL, everything works great and multiple requests at the same time are not a problem.
Late edit:
Apparently the problem was with HTTP/2. With HTTP/2 enabled, concurrent requests are queued and executed one after the other; using HTTP/1.1 everything is OK. This is really strange, because I checked the server config against the Apache documentation and this should work with my SSL module. Has anyone experienced something like this?
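For reference, turning HTTP/2 off again for the SSL vhost looks roughly like this with Apache 2.4.17+ and mod_http2 (vhost details are placeholders):
<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    # Omitting "h2" here means only HTTP/1.1 is negotiated for this vhost.
    Protocols http/1.1
    # ... SSLCertificateFile / SSLCertificateKeyFile etc. ...
</VirtualHost>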
Are you doing it with jQuery or Angular from the client? If so, check the JS console and debug the network requests. Also, can you get a hand on the SSL Apache conf of your server? Some parameters may override the default config of your server and conflict with the working non-SSL config.
I am in this very unfortunate situation:
My website is using outdated software (security patches are applied) with OpenSSL 0.9.8o (01 Jun 2010), which doesn't support TLSv1.1/1.2.
I also have a payment gateway which is PCI DSS compliant, therefore SSL and early TLS are disabled there.
My website used to exchange data with the payment gateway, but now that TLSv1.0 is dropped I can no longer use PHP's cURL library or even file_get_contents() (or wget/lynx/curl via shell).
Is there any workaround, any option to connect to a TLSv1.1+ secured server without using the built-in libraries?
I know some classes exist for PHP, like phpseclib, which is an SSH client, great for people who can't use the SSH2 module.
Does something like that exist for PHP? Is there any way I can connect to my gateway?
So far my best idea is connecting to the gateway through another server (with updated software).
Once I used a utility called stunnel for my non-TLS client; quote from its website:
Stunnel is a proxy designed to add TLS encryption functionality to existing clients and servers without any changes in the programs' code. Its architecture is optimized for security, portability, and scalability (including load-balancing), making it suitable for large deployments.
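For context, a client-mode stunnel configuration is only a few lines; a sketch like the following (addresses are placeholders) lets a legacy client talk plain HTTP to a local port while stunnel handles TLSv1.2 to the gateway:
; stunnel.conf - client-mode sketch with placeholder addresses
[gateway]
client = yes
accept = 127.0.0.1:8443
connect = gateway.example.com:443
sslVersion = TLSv1.2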
Is there any workaround, any option to connect to a TLSv1.1+ secured server without using the built-in libraries?
I can think of five work-arounds:
1) It is possible (but tricky) to have multiple versions of OpenSSL (or even cURL) installed. You can even use LD_LIBRARY_PATH (or LD_PRELOAD) to make an existing binary use a library from somewhere else. I think this is a messy way to do it.
2) This would be really simple with Docker. Unfortunately, it requires a modern kernel, so you probably can't install it on your server. But you could install a more modern OS, then install your server into a Docker container with the older OS. But this may be about as much work as moving your website to a newer OS.
3) Instead of Docker, just use chroot. On a newer box, use "ldd" to find all dependencies. Copy them (plus curl) into a chroot. Copy that directory to your server and run "chroot dir curl". The binary will see the newer libraries and work. This will only take a few minutes to set up for someone who knows what they are doing.
4) Use a statically-linked version of curl that has a newer OpenSSL compiled in.
5) Use a program that doesn't use OpenSSL. For example, some Go programs use their own TLS implementation and compile to a static binary. For example: https://github.com/astaxie/bat
The first 2 might be a bit impractical in your setup, but any of the last 3 will work.
I liked your initial idea of proxying to another server, except you are circumventing the security restrictions imposed by the gateway, and when dealing with payment info, that is probably not a good idea.
However, if you can run a Vagrant instance with updated libraries on your own server, then you can proxy the insecure request to the Vagrant instance on localhost, so it doesn't leave the box, and from the Vagrant instance, which has updated libraries, do the secure communication to your gateway.
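As a rough sketch of that idea (everything here is hypothetical: the port, the script name, and the gateway URL), the Vagrant box with a modern OpenSSL exposes a tiny forwarder bound to localhost and the legacy site posts to it:
// forward.php - runs inside the up-to-date Vagrant box, reachable only via 127.0.0.1.
// It re-sends the request body to the gateway over modern TLS and returns the response.
$payload = file_get_contents('php://input');
$ch = curl_init('https://gateway.example.com/api'); // hypothetical gateway endpoint
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $payload,
    CURLOPT_SSL_VERIFYPEER => true,
]);
echo curl_exec($ch);
curl_close($ch);
The legacy site would then call http://127.0.0.1:8080/forward.php (port is a placeholder) instead of the gateway directly.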
I was going to suggest Stunnel. BUT dafyc made a good point.
Those PCI restrictions are not implemented (only... lol) to slow people down. They exist for protection.
You will solve your problem with Stunnel.
But why not update the website server?
You have pinpointed the outdated SSL, but a server has several other potential weaknesses as well.
If someone exploits some other weakness and gets root access, they will have the stunnel password and can start exploring what's in the pipe.
So this does not seem good enough to assure the reliability that PCI wants you to have.
I already posted one answer, but then I read in the comments that you can't install any tools on the server. You can use native PHP functionality called PHP streams. This is a code sample for the old Twitter API:
$url = 'https://api.twitter.com/1/statuses/public_timeline.json';
$contextOptions = array(
    'ssl' => array(
        'verify_peer'         => true,
        'cafile'              => '/etc/ssl/certs/ca-certificates.crt',
        'verify_depth'        => 5,
        'CN_match'            => 'api.twitter.com', // pre-PHP 5.6; use 'peer_name' on 5.6+
        'disable_compression' => true,
        'SNI_enabled'         => true,
        // Cipher list in OpenSSL syntax (tokens separated by ':').
        'ciphers'             => 'ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4',
    )
);
$sslContext = stream_context_create($contextOptions);
$result = file_get_contents($url, false, $sslContext);
I found another solution.
On the secure server I set up two VirtualHosts: one on 443 for TLSv1.2, and another for my website with only TLSv1.0 support.
More info here: https://serverfault.com/a/692894/122489
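Roughly, the split looks like this in mod_ssl terms (names and ports are placeholders, and the exact SSLProtocol syntax depends on your Apache/OpenSSL version; see the linked answer for the details):
# VirtualHost used for the gateway-facing traffic: TLSv1.2 only
<VirtualHost *:443>
    ServerName pay.example.com
    SSLEngine on
    SSLProtocol -all +TLSv1.2
</VirtualHost>
# VirtualHost used by the legacy website: TLSv1.0 still allowed
<VirtualHost *:8443>
    ServerName www.example.com
    SSLEngine on
    SSLProtocol -all +TLSv1
</VirtualHost>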
Thanks for all answers.
I have a RHEL server on which PHP websites are hosted using the Apache web server. On one website I'm using the PHP cURL library to connect to some services. These services use SSL (https).
When I browse any PHP page which makes SSL calls using cURL, I get the error below:
Problem with the SSL CA cert (path? access rights?)
If I run the same PHP script from the command line on my server, it works fine. Only when I browse it do I get the above error.
I have already tried the solution given in http://snippets.webaware.com.au/howto/stop-turning-off-curlopt_ssl_verifypeer-and-fix-your-php-config/ but it does not work.
If there is anything else I need to do, please let me know.
As mentioned earlier, I tried the solution given at http://snippets.webaware.com.au/howto/stop-turning-off-curlopt_ssl_verifypeer-and-fix-your-php-config/
However, Apache was not picking up the php.ini changes even after restarting it. When I restarted the Linux machine, it worked. It looks like Apache might have been caching php.ini somewhere, and it only got cleared by restarting the machine.
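For anyone hitting the same thing: the change in question is typically a single php.ini line such as the following (the path is the usual RHEL/CentOS location mentioned earlier; adjust to your system), followed by the restart described above:
; php.ini - tell the cURL extension where the CA bundle lives
curl.cainfo = /etc/pki/tls/certs/ca-bundle.crt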
Hypothesis:
It could be that the user running Apache is so restricted that it can't get at the cURL CA store. Try logging in as the web user, e.g. www-data, and cd'ing into the directory with cURL's files. At some point you might get an access-denied error. Finally, try cat to display the file (still as the Apache user). You could get an access-denied error there too.
Wherever the error occurs, either open up the permissions to allow the Apache user to access the file, or run Apache as another user.
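A quick way to test this from the web server itself, rather than from a shell, is a throwaway script requested through the browser so it runs as the Apache user; if it prints false for a path the CLI can read, it is a permissions problem (the hard-coded path below is just the common RHEL default):
// check_ca.php - request this via the browser so it executes as the Apache user.
$candidates = [
    ini_get('curl.cainfo'),
    ini_get('openssl.cafile'),
    '/etc/pki/tls/certs/ca-bundle.crt', // common RHEL/CentOS default
];
foreach ($candidates as $path) {
    if ($path) {
        var_dump($path, is_readable($path));
    }
}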
I have dropped a PHP v5 OpenID library into a site and ran detect.php, which fails on 'HTTP Fetching' (report at the end of this message). The discovery.php also fails. The server is running on HTTPS and has all the needed libraries added, so it should just work - as it has on other servers I have implemented it on.
Any attempt to run the consumer/try_auth.php fails with an error 'not a valid OpenID' which apparently is being caused by the failure to http fetch.
Any pointers would be appreciated.
OpenID Library Support Report
This script checks your PHP installation to determine if you are set
up to use the JanRain PHP OpenID library.
Setup Incomplete
Your system needs a few changes before it will be ready to run the
OpenID library.
Math support
Your PHP installation has gmp support. Good.
Cryptographic-quality randomness source
Using (insecure) pseudorandom number source, because
Auth_OpenID_RAND_SOURCE has been defined as null.
Data storage
No SQL database support was found in this PHP installation. See the
PHP manual if you need to use an SQL database. The library supports
the MySQL, PostgreSQL, and SQLite database engines, as well as
filesystem-based storage. In addition, PEAR DB is required to use
databases.
If you are using a filesystem-based store or SQLite, be aware that
open_basedir is in effect. This means that your data will have to be
stored in one of the following locations:
'' If you are using the filesystem store, your data directory must be
readable and writable by the PHP process and not available over the
Web.
HTTP Fetching
This PHP installation has support for libcurl. Good.
Fetching URL failed!
Your PHP installation appears to support SSL, so it will be able to
process HTTPS identity URLs and server URLs.
XML Support
XML parsing support is present using the Auth_Yadis_dom interface.
Query Corruption
Your web server does not corrupt queries. Good.
This error is thrown because SSL verification fails (as seen via curl_getinfo()) in Auth_Yadis_ParanoidHTTPFetcher. To solve this issue, you can disable SSL verification by adding this option:
curl_setopt($c, CURLOPT_SSL_VERIFYPEER, false);
to Auth/Yadis/ParanoidHTTPFetcher.php at line 93 (after $c = curl_init();).
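In context, the patched section would look roughly like this (the CA bundle path in the commented alternative is a placeholder; using CURLOPT_CAINFO with a valid bundle is the safer fix if you can obtain one):
$c = curl_init();
// Workaround: skip peer verification (weakens security).
curl_setopt($c, CURLOPT_SSL_VERIFYPEER, false);
// Safer alternative, if a CA bundle is available:
// curl_setopt($c, CURLOPT_CAINFO, '/path/to/ca-bundle.crt');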
Hope this helps.
I'm trying to send a cURL request from a Windows Server 2008 machine using PHP (version 5.3.12) and keep receiving the error Could not resolve proxy: http=127.0.0.1; Host not found. As far as I can tell, I'm not using a proxy - CURLOPT_PROXY is not set, I've run netsh winhttp show proxy to make sure there's no system-wide setting in place, and I've even checked all the browsers on my machine to confirm none are configured to use a proxy (just in case this could possibly have an effect). I'm having trouble figuring out why cURL insists on telling me that 1) I'm using a proxy and 2) it can't connect to it.
I'm able to resolve the error by explicitly disabling the use of a proxy via curl_setopt($curl, CURLOPT_PROXY, '');, but this isn't the greatest solution - a lot of the places I use cURL are in libraries, and it'd be a pain (not to mention less than maintainable) to go around and hack this line into all of them. I'd rather find the root cause and fix it there.
If it helps, this has happened to me only with POST requests so far. Command-line cURL (from a Git bash prompt) works fine. These calls also work fine from our dev machine, so it seems to be something specific to my machine.
If I need to apply the above hack, I will, but I thought before I resorted to that I'd ask the good folks of SO - is there anywhere I'm missing that could be configuring the use of a proxy? Let me know if there's any additional helpful info I forgot to add.
cURL relies on environment variables for proxy settings. You can set these on Windows via "Advanced System Settings".
The variables you need to set and/or change for optimum control are "http_proxy", "HTTPS_PROXY", "FTP_PROXY", "ALL_PROXY", and "NO_PROXY".
However, if you just want to avoid using a proxy at all, you can probably get away with creating the system variable http_proxy and setting it to localhost and then additionally creating the system variable NO_PROXY and setting it to *, which will tell cURL to avoid using a proxy for any connection.
In any case, be sure to restart any command windows to force recognition of the change in system environment variables.
[source - cURL Manual]
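To see exactly which of these variables the PHP process (and therefore its cURL extension) actually picks up after a restart, a quick check could be:
// Dump the proxy-related environment variables as PHP sees them.
foreach (['http_proxy', 'HTTPS_PROXY', 'FTP_PROXY', 'ALL_PROXY', 'NO_PROXY'] as $name) {
    var_dump($name, getenv($name));
}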