PHP cURL error: Could not resolve proxy

I'm trying to send a cURL request from a Windows Server 2008 machine using PHP (version 5.3.12) and keep receiving the error Could not resolve proxy: http=127.0.0.1; Host not found. As far as I can tell, I'm not using a proxy - CURLOPT_PROXY is not set, I've run netsh winhttp show proxy to make sure there's no system-wide setting in place, and I've even checked all the browsers on my machine to confirm none are configured to use a proxy (just in case that could possibly have an effect). I'm having trouble figuring out why cURL insists on telling me that 1) I'm using a proxy and 2) it can't connect to it.
I'm able to resolve the error by explicitly disabling the use of a proxy via curl_setopt($curl, CURLOPT_PROXY, '');, but this isn't the greatest solution - a lot of the places I use cURL are in libraries, and it'd be a pain (not to mention less than maintainable) to go around and hack this line into all of them. I'd rather find the root cause and fix it there.
If it helps, this has happened to me only with POST requests so far. Command-line cURL (from a Git bash prompt) works fine. These calls also work fine from our dev machine, so it seems to be something specific to my machine.
If I need to apply the above hack, I will, but I thought before I resorted to that I'd ask the good folks of SO - is there anywhere I'm missing that could be configuring the use of a proxy? Let me know if there's any additional helpful info I forgot to add.

cURL relies on environment variables for proxy settings. You can set these on Windows via "Advanced System Settings".
The variables you need to set and/or change for optimum control are "http_proxy", "HTTPS_PROXY", "FTP_PROXY", "ALL_PROXY", and "NO_PROXY".
However, if you just want to avoid using a proxy at all, you can probably get away with creating the system variable http_proxy and setting it to localhost and then additionally creating the system variable NO_PROXY and setting it to *, which will tell cURL to avoid using a proxy for any connection.
In any case, be sure to restart any command windows to force recognition of the change in system environment variables.
[source - cURL Manual]
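If it is unclear where the proxy value is coming from, a minimal diagnostic sketch like the following can show which proxy-related environment variables the PHP process actually sees (the variable names follow the list above):

<?php
// Dump the proxy-related environment variables as seen by this PHP process.
// If any of these is set (e.g. http_proxy=http=127.0.0.1), cURL will try to use it.
$vars = ['http_proxy', 'HTTP_PROXY', 'HTTPS_PROXY', 'FTP_PROXY', 'ALL_PROXY', 'NO_PROXY'];

foreach ($vars as $name) {
    $value = getenv($name);
    printf("%-12s => %s\n", $name, $value === false ? '(not set)' : $value);
}

// As a process-local workaround, a variable can also be cleared before any
// cURL calls are made (this affects only the current PHP process):
// putenv('http_proxy=');   // set it to an empty value
// putenv('http_proxy');    // or remove it entirely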

Related

Access GCS from Cloud Run

I have a PHP project that I want to run in Google Cloud Run. Since it also works with files, I thought it made sense to set up a GCS bucket and put the files there.
Locally this has also worked so far. However, as soon as I run the project in Cloud Run, I get the error when accessing GCS:
cURL error 5: Could not resolve proxy: null (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for https://storage.googleapis.com/storage/v1/b/<redacted>/o?delimiter=%2F&includeTrailingDelimiter=true&prefix=public%2Fvar%2Ftmp%2Fthumbnails%2F_default_upload_bucket%2F&prettyPrint=false
If I understand correctly, cURL wants to go through a proxy; however, I have not configured anything like that. I only have a VPC connector on the Cloud Run service, but it is configured so that only requests to private IPs go through the connector.
The framework used is Symfony 5 and the library to connect is "flysystem" with the "google-cloud-storage" adapter.
Is there something here that I am essentially misunderstanding?
I have been able to solve the problem. However, I am still not quite sure why I had to do this.
Anyway, the following environment variables must be set for this to work:
HTTPS_PROXY = ""
HTTP_PROXY = ""

PHP stream wrappers and Windows Certificate Store with Proxy

Setup/Environment:
In our PHP application, we sometimes need to make HTTPS requests from PHP to other servers. The setup in question is as follows:
We are using PHP stream wrappers to make the HTTP requests (via Guzzle HTTP). We are doing this because stream wrappers support using the Windows Certificate Store for certificate verification.
The server runs on Windows.
We use a proxy for the HTTPS requests.
The firewalls are configured to allow:
Access to the servers we are doing our requests to.
Access to all certificate revocation lists relevant for the certificates used.
Our problem:
Sometimes, out of the blue, our HTTPS requests fail with certificate validation errors. The problem persists until someone opens a Remote Desktop session to the server and requests the very same URL we are trying to query in the server's Internet Explorer. After that, our PHP application can make its requests as it should.
Question:
What is the problem here? And what can we do to analyse this further?
If that were a Guzzle problem, it would happen every time.
However, do try to issue the same HTTPS call using cURL to both verify this is the case, and see if by any chance the cURL request also temporarily clears the issue, just as Internet Explorer does.
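A minimal sketch of such a cross-check with PHP's cURL extension (the URL is a placeholder); the verbose log makes it possible to compare the TLS handshake against the failing stream-wrapper request:

<?php
// Cross-check: request the same URL with cURL instead of the stream wrapper,
// with verbose output so the TLS handshake and certificate chain can be inspected.
$url = 'https://example.com/resource'; // placeholder for the failing URL

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_VERBOSE, true);

// Write the verbose log to a file for later comparison.
$log = fopen(__DIR__ . '/curl-verbose.log', 'w');
curl_setopt($ch, CURLOPT_STDERR, $log);

$body = curl_exec($ch);
if ($body === false) {
    echo 'cURL error: ' . curl_error($ch) . PHP_EOL;
}
curl_close($ch);
fclose($log);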
But this rather looks like a caching problem: the PHP request is not able to properly access the Certificate Store on its own (to prime the certificates); it can only use its services after someone else has gained access, and only as long as the cache has not expired. To be sure this is the case, simply issue calls periodically and note the time elapsed between a user logging in and using IE and the Guzzle calls starting to fail again. If I am right, that interval will always be the same.
It could be a permission problem (I think it probably is, but I am at a loss to guess what permissions to grant to what). Maybe the calls aren't allowed unless fresh CRLs for that URL are available, and PHP doesn't get them. This situation could either be fixed temporarily by running an IE connection attempt to the same URL from a PowerShell script launched by PHP on error, or (more likely, and hopefully) attempting to run said script will elicit a more informative error message.
Update
I have looked into how PHP on Windows handles TLS through Guzzle, and nothing obvious came out. But I found an interesting page about TLS/SSL quirks.
More interestingly, I also found out several references on how PHP ends up using Schannel for TLS connections, and how Windows and specifically Internet Explorer have a, let us say, cavalier attitude about interoperability. So I would suggest you try activating the Schannel log on Windows and see whether anything comes out of it.
Additionally, on the linked page there is a reference to a client cache being used, and the related page ends up here ("ClientCacheTime").
It's not an application problem.
I am 99% sure this is a routing problem and that in some circumstances packets are dropped at the router. I would look at the network, change the environment or, if possible, do some network sniffing or monitoring.
If you have a decent network infrastructure, you can set up SNMP traps to collect request-count and timeout data (from routers and switches) and ingest it into Elastic APM. This would give you quite a detailed time-series analysis.
You can see from https://github.com/guzzle/guzzle/issues/394 that verify is the problem. If you set verify to false, that will expose your system to security attacks.
// Use the system's CA bundle (this is the default setting)
$client->request('GET', '/', ['verify' => true]);
// Use a custom SSL certificate on disk.
$client->request('GET', '/', ['verify' => '/path/to/cert.pem']);
// Disable validation entirely (don't do this!).
$client->request('GET', '/', ['verify' => false]);
These are the request options, and they show how to do SSL certificate verification. The documentation describes the issue as follows:
Not all systems have a known CA bundle on disk. For example, Windows
and OS X do not have a single common location for CA bundles. When
setting "verify" to true, Guzzle will do its best to find the most
appropriate CA bundle on your system. When using cURL or the PHP
stream wrapper on PHP versions >= 5.6, this happens by default. When
using the PHP stream wrapper on versions < 5.6, Guzzle tries to find
your CA bundle in the following order:
Check if openssl.cafile is set in your php.ini file.
Check if curl.cainfo is set in your php.ini file.
Check if /etc/pki/tls/certs/ca-bundle.crt exists (Red Hat, CentOS, Fedora;
provided by the ca-certificates package)
Check if /etc/ssl/certs/ca-certificates.crt exists (Ubuntu, Debian; provided by
the ca-certificates package)
Check if /usr/local/share/certs/ca-root-nss.crt exists (FreeBSD; provided by
the ca_root_nss package)
Check if /usr/local/etc/openssl/cert.pem exists (OS X; provided by Homebrew)
Check if C:\windows\system32\curl-ca-bundle.crt exists (Windows)
Check if C:\windows\curl-ca-bundle.crt exists (Windows)
The result of this lookup is cached in memory so that subsequent calls in the same
process will return very quickly. However, when sending only a single
request per-process in something like Apache, you should consider
setting the openssl.cafile PHP ini setting to the path on disk of
the file so that this entire process is skipped.
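A quick way to check which of these settings your PHP process actually picks up is to dump the relevant ini values (a small sketch, not specific to Guzzle):

<?php
// Show where PHP/cURL are configured to look for a CA bundle.
// If both are empty, Guzzle falls back to the lookup order described above.
var_dump(ini_get('openssl.cafile'));
var_dump(ini_get('curl.cainfo'));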
See also: how to ignore invalid SSL certificate errors in Guzzle 5, and guzzle-request-fails.

Odd cURL timeout behaviour

I have a piece of cURL functionality that has suddenly started to time out when it tries to connect to a third-party service.
I am a bit lost as to what the issue could be, as nothing has changed (that I know of). Here is what I have tried:
Ran cURL on two other servers to rule out blacklisting. Those servers have the same settings and different IPs, and they have not made cURL calls to the service before. Still timing out.
Checked our other cURL functionality to make sure it is not the cURL library. That works OK, so happy days.
Ran the same code on my local machine to check whether the settings are the issue. That worked OK.
From the above I have concluded that it is something in those three servers' settings that is not working correctly. However, my knowledge of server-side cURL settings is a bit lacking, which is why I am here.
BTW, the third-party service does not require whitelisting, so it cannot be that either.
I have also tried raising the cURL timeout to several minutes, and that did not work either.
The third-party service provides both SSL and non-SSL endpoints, and I have tried connecting to both without success.
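For reference, a minimal sketch (the endpoint URL is a placeholder) of how the per-phase timings from curl_getinfo can show whether it is the DNS lookup, the TCP connect, or the transfer itself that stalls:

<?php
// Diagnostic sketch: capture per-phase timings for a single request.
$ch = curl_init('https://thirdparty.example.com/api'); // placeholder URL

curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);  // time allowed to establish the connection
curl_setopt($ch, CURLOPT_TIMEOUT, 120);        // total time allowed for the request

$response = curl_exec($ch);

$info = curl_getinfo($ch);
printf("DNS lookup:     %.3fs\n", $info['namelookup_time']);
printf("TCP connect:    %.3fs\n", $info['connect_time']);
printf("First byte:     %.3fs\n", $info['starttransfer_time']);
printf("Total:          %.3fs\n", $info['total_time']);

if ($response === false) {
    echo 'Error: ' . curl_error($ch) . PHP_EOL;
}
curl_close($ch);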
Any help is appreciated.

Is it possible to get FTP to work in VirtualBox using NAT networking, or to get PHP's FTP functions to work?

I'm trying to get PHP's FTP functions to work from within a VM. I can connect using ftp_connect but can't actually do anything afterwards.
HOST: Ubuntu 14.10
GUEST: Debian 7
Stack: Vagrant - VirtualBox - Debian - LAMP
I'm using Vagrant to run a VirtualBox VM that runs a LAMP stack. In PHP I'm making some function calls (ftp_pasv, ftp_nlist) that are not working.
I discovered that, because the FTP protocol uses random ports for data connections, the issue is caused by the use of NAT networking in VirtualBox. I have the perfect Vagrant/VirtualBox setup except for this one issue. Does anyone know of a way to get FTP to work on the guest OS in this scenario? I know I could try using a bridged setup, but that means a bunch more work setting it up, and the machine would be available to the public. So I would prefer to get it working behind NAT.
I have also tried using ftp_pasv to turn passive mode on, which would fix the issue, but the function returns false when I call it.
As far as I know this isn't possible. Maybe it would work if you want to hack some source code and compile a custom solution, but that's harder than just using a different setup. I've resorted to using cURL to make the FTP connections, which works for listing files and downloading them.
Anyone who comes across this question and actually finds a solution, please post it here.
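For reference, a rough sketch of that cURL-based workaround (host, credentials, and path are placeholders); cURL defaults to passive-mode FTP, which is what lets it work through the NAT:

<?php
// Workaround sketch: list an FTP directory through cURL instead of PHP's ftp_* functions.
$ch = curl_init('ftp://ftp.example.com/some/directory/'); // trailing slash => directory listing

curl_setopt($ch, CURLOPT_USERPWD, 'user:password');  // placeholder credentials
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FTPLISTONLY, true);          // names only, similar to ftp_nlist()

$listing = curl_exec($ch);
if ($listing === false) {
    echo 'cURL error: ' . curl_error($ch) . PHP_EOL;
} else {
    print_r(explode("\n", trim($listing)));
}
curl_close($ch);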
The problem is most likely related to the network configuration. The fact that, for example, creating a directory works while getting the directory listing does not indicates that there's an issue with the data (back) channel.
A potential root cause is the configuration of the network router. It seems that some routers handle packets differently depending on the MAC address they are sent from (host vs. guest system).
I had this issue, and it turned out that upgrading VirtualBox solved it. Possibly some bug in the NAT interface.

cURL proxy server

I've got two PCs, each with a different IP. The simple question is how to use one of the PCs as a proxy for the other one when using cURL, so that requests from both PCs will have the same IP.
Is there a way to turn one PC into a proxy server and then make cURL make requests using that IP?
Yes, there are lots of proxy packages that run out of the box (you could even configure Apache to do it). I wouldn't recommend rolling your own in PHP if that's what you're after. You can easily configure cURL to use a proxy; see the curl_setopt options.
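A minimal sketch of the client side, assuming a proxy is already listening on the other PC (the address and port below are placeholders):

<?php
// Route this request through the proxy running on the other PC,
// so the outgoing request appears to come from that machine's IP.
$ch = curl_init('https://example.com/');

curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_PROXY, '192.0.2.10');  // placeholder: IP of the proxy PC
curl_setopt($ch, CURLOPT_PROXYPORT, 3128);      // placeholder: Squid's default port
// curl_setopt($ch, CURLOPT_PROXYUSERPWD, 'user:pass'); // if the proxy requires auth

$response = curl_exec($ch);
curl_close($ch);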
If you are running a webserver on each machine, then you can install a PHP proxy script.
See the Google "PHP proxy" search results: at least 4 choices on the first page.
If you are not running a webserver, then I suggest you download a standalone proxy such as Squid.
This option works on Windows or Linux. You can download Squid for Windows here; just unzip and run Squid, no setup required.
