I'm using WAMP. In the past weeks I struggled a lot to make PHP and cURL work behind a corporate proxy, and finally I did it: Apache behind corporate proxy
The problem is that now I can't make them work at home (of course, they originally worked at home without a proxy). When I run a cURL request from PHP I get the following error: Curl error: Failed to connect to localhost port 3128
I removed the https_proxy and http_proxy environment variables, removed the "proxy_module" from Apache, and removed the proxy from IE. Now when I run the following command there are no results:
reg query "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings" | find /i "proxyserver"
It seems that cURL is picking up the proxy configuration from somewhere in the environment; in fact, if I add this to the application code:
curl_setopt($ch, CURLOPT_PROXY, '');
then everything works fine (but I don't want to change the application code). Where else can I look for the proxy config?
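In case it helps narrow things down, here is a minimal diagnostic I could drop into the web root (call it proxy_check.php, the name doesn't matter) and open through Apache rather than the CLI, so it reports what the web server process actually sees; the names below are just the usual proxy-related variables:
<?php
// proxy_check.php - run via the browser so it shows the environment of the
// Apache/PHP process rather than of my user shell.
$names = ['http_proxy', 'HTTP_PROXY', 'https_proxy', 'HTTPS_PROXY',
          'ALL_PROXY', 'all_proxy', 'NO_PROXY', 'no_proxy'];
foreach ($names as $name) {
    var_dump($name, getenv($name));   // false means the variable is not set
}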
Thanks very much
Related
I have two virtual servers running Debian 10 inside my company that host the same PHP application. One server is for production, one is for development. IT says both servers were configured the same when they were handed over to me.
The application runs fine on both servers, except for collecting data from other websites / REST APIs with file_get_contents() or cURL.
This only works on Server No 1; Server No 2 runs into timeouts, presumably because of problems with the company proxy/SSL/...?
I am not a pro at configuring Linux servers, and I am trying to narrow the problem down myself because IT says they cannot help with the configuration.
On Server No 1, when I run a test script with file_get_contents($url), I get data back.
When I run
curl https://company.de -I
I get information about the website.
On Server No 2, I run the same test script and get a timeout (false).
When I run the same curl command, I get a timeout.
BUT when I run the command on No 2 and add the company proxy manually:
curl --proxy http://proxy.company.de:8080 https://company.de -I
I get a result.
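For completeness, the rough PHP equivalent of that manual test would be something like the following; the proxy URL is copied from the command above, everything else is just a plain curl setup:
<?php
// Try the same request from PHP with the proxy forced, to see whether the
// application-level behaviour matches the command-line test.
$ch = curl_init('https://company.de');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_NOBODY, true);                          // like curl -I
curl_setopt($ch, CURLOPT_PROXY, 'http://proxy.company.de:8080');
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
$ok = curl_exec($ch);
var_dump($ok !== false, curl_error($ch));
curl_close($ch);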
On No 1 and No 2 when I check
env | grep proxy
it shows that there is no proxy set in the environment, and the config file in /etc/profile.d/ is also empty.
I think it might have something to do with the SSL connection, because in some context I got a warning from McAfee and a refused connection, but I cannot reproduce it.
The PHP version is the same on both servers (7.3.31). OpenSSL is OpenSSL/1.1.1d on No 1 and OpenSSL/1.1.1n on No 2.
I could try to make everything work on No 2, but I want the exact same environment for development and production. Any advice on how to narrow down the real issue is highly welcome (especially regarding the SSL side).
What can I do to find the difference between the two server configs or the environments they are in?
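One approach I could take is to run the same fingerprint script on both servers and diff the output; everything below is standard PHP (the OPENSSL_VERSION_TEXT constant assumes the openssl extension is loaded), nothing server-specific:
<?php
// Print the library versions and the settings most likely to differ between the servers.
echo 'PHP ', PHP_VERSION, PHP_EOL;
echo 'OpenSSL ', OPENSSL_VERSION_TEXT, PHP_EOL;      // needs the openssl extension
$curl = curl_version();
echo 'curl ', $curl['version'], ' / ', $curl['ssl_version'], PHP_EOL;
foreach (['allow_url_fopen', 'default_socket_timeout', 'openssl.cafile', 'curl.cainfo'] as $key) {
    echo $key, ' = ', var_export(ini_get($key), true), PHP_EOL;
}
foreach (['http_proxy', 'https_proxy', 'no_proxy'] as $name) {
    echo $name, ' = ', var_export(getenv($name), true), PHP_EOL;
}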
I've run into trouble with the error "Couldn't resolve host '...'".
I have researched many topics and couldn't find a workaround.
At first, with the same code, I could run cURL without any problem. But today it suddenly stopped working. Here were my attempts:
With the same code on my localhost, cURL worked fine.
On my CentOS 6 server (using cPanel/WHM), my directory structure looks like the following:
public_html
-YiiWebsiteFolder
-curl_test.php
I could run cURL against the same URL in curl_test.php without a problem; it worked fine. It also worked even when I put curl_test.php inside YiiWebsiteFolder, so the problem shouldn't be permissions.
But if I run the same code to call cURL through Yii (YiiWebsiteFolder), i.e. run it in a Yii controller and action, it raises the error 'Couldn't resolve host ...'.
(My URL rewriting is ordinary; my site URL looks like "mydomain.com/index.php/test/myaction".)
So I guessed the cause could be Yii, not a DNS problem like some help topics suggested:
http://forums.cpanel.net/f34/file_get_contents-couldnt-resolve-host-name-120065.html
Couldn't resolve host and DNS Resolution failed
The Yii config main.php files on my local machine and my server are the same.
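To double-check the DNS side anyway, I could run a small script like this once through the browser (via a Yii action or a plain .php file) and once from the terminal, and compare the output; the hostname is only an example:
<?php
// dns_check.php - compare name resolution between the web context and the CLI.
echo 'SAPI: ', php_sapi_name(), PHP_EOL;
var_dump(gethostbyname('www.google.com'));          // returns the name unchanged on failure
var_dump(dns_get_record('www.google.com', DNS_A));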
Edited: I have found someone who had the same problem as me:
cURL doesn't work when it's used in a PHP application and running through a web browser. However, that same PHP page with cURL, when run via the terminal, does what it's supposed to do and fetches the information that I want.
But he found out the problem was the DAEMON array, whereas I don't use an Apache DAEMON (I'm not even sure what it is).
I have tried all the possible solutions, such as restarting my network and my Apache to change the order in which they start, and modifying /etc/resolv.conf (adding or removing 12.0.0.1 and trying some public DNS servers):
service network stop
service network start
service network restart
/sbin/service httpd stop
/sbin/service httpd start
I've spent many hours troubleshooting the problem with no success at all.
Any help is really appreciated.
This may not solve your specific problem, but I thought I should post to say I had the same issue: PHP cURL could not resolve any host name when the PHP script was run from a web browser, but the same script run from the terminal command line worked fine (i.e. cURL resolved host names and returned the expected response). In my case it was resolved with the following:
/sbin/service httpd stop
/sbin/service httpd start
I use Composer on a network where the only way to access the internet is through an HTTP or SOCKS proxy. I have the http_proxy and https_proxy environment variables set. When Composer tries to access HTTPS URLs I get this:
file could not be downloaded: failed to open stream: Cannot connect to HTTPS server through proxy
As far as I know, the only way to connect to an HTTPS website through such a proxy is using the CONNECT verb. How can I use Composer behind this proxy?
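To illustrate what I mean by the CONNECT verb, here is a minimal PHP curl sketch (not Composer itself; the proxy host and port are placeholders) that tunnels an HTTPS request through a proxy:
<?php
// Force libcurl to issue a CONNECT to the proxy and tunnel the TLS session through it.
$ch = curl_init('https://getcomposer.org/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_PROXY, 'http://proxy.example.com:3128');  // placeholder proxy
curl_setopt($ch, CURLOPT_HTTPPROXYTUNNEL, true);                   // use the CONNECT verb
var_dump(curl_exec($ch) !== false, curl_error($ch));
curl_close($ch);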
If you are using Windows, you should set the same environment variables, but Windows style:
set http_proxy=<your_http_proxy:proxy_port>
set https_proxy=<your_https_proxy:proxy_port>
That will work for your current cmd.exe session. If you want to make this more permanent, I suggest you set the environment variables at the system level.
If you're on Linux or Unix (including OS X), you should put this somewhere that will affect your environment:
export HTTP_PROXY_REQUEST_FULLURI=0 # or false
export HTTPS_PROXY_REQUEST_FULLURI=0 # or false
You can put it in /etc/profile to globally affect all users on the machine, or your own ~/.bashrc or ~/.zshrc, depending on which shell you use.
If you're on Windows, open the Environment Variables control panel, and add either a system or user environment variables with both HTTP_PROXY_REQUEST_FULLURI and HTTPS_PROXY_REQUEST_FULLURI set to 0 or false.
For other people reading this (not you, since you said you have these set up), make sure HTTP_PROXY and HTTPS_PROXY are set to the correct proxy, using the same methods. If you're on Unix/Linux/OS X, setting both upper and lowercase versions of the variable name is the most complete approach, as some things use only the lowercase version, and IIRC some use the upper case. (I'm often using a sort of hybrid environment, Cygwin on Windows, and I know for me it was important to have both, but pure Unix/Linux environments might be able to get away with just lowercase.)
If you still can't get things working after you've done all this, and you're sure you have the correct proxy address set, then look into whether your company is using a Microsoft proxy server. If so, you probably need to install Cntlm as a child proxy to connect between Composer (etc.) and the Microsoft proxy server. Google CNTLM for more information and directions on how to set it up.
If you have to use credentials, try this:
export HTTP_PROXY="http://username:password@webproxy.com:port"
Try this:
export HTTPS_PROXY_REQUEST_FULLURI=false
This solved the issue for me when working behind a corporate proxy a few weeks ago.
This works; this was my case:
C:\xampp\htdocs\your_dir>SET HTTP_PROXY="http://192.168.1.103:8080"
Replace with your IP and Port
on Windows insert:
set http_proxy=<proxy>
set https_proxy=<proxy>
before
php "%~dp0composer.phar" %*
or on Linux insert:
export http_proxy=<proxy>
export https_proxy=<proxy>
before
php "${dir}/composer.phar" "$#"
iconoclast's answer did not work for me.
I upgraded my PHP from 5.3.* (XAMPP 1.7.4) to 5.5.* (XAMPP 1.8.3) and the problem was solved.
Try iconoclast's answer first; if it doesn't work, then upgrading might solve the problem.
You can use the standard HTTP_PROXY environment var. Simply set it to the URL of your proxy. Many operating systems already set this variable for you.
Just export the variable, then you don't have to type it all the time.
export HTTP_PROXY="http://johndoeproxy.cu:8080"
Then you can do composer update normally.
Operation timed out (IPv6 issues)
You may run into errors if IPv6 is not configured correctly. A common error is:
The "https://getcomposer.org/version" file could not be downloaded: failed to
open stream: Operation timed out
We recommend you fix your IPv6 setup. If that is not possible, you can try the following workarounds:
Workaround Linux:
On Linux, it seems that running this command helps give IPv4 traffic a higher priority than IPv6, which is a better alternative to disabling IPv6 entirely:
sudo sh -c "echo 'precedence ::ffff:0:0/96 100' >> /etc/gai.conf"
Workaround Windows:
On Windows, the only way is to disable IPv6 entirely, I'm afraid (either in Windows or in your home router).
Workaround Mac OS X:
Get the name of your network device:
networksetup -listallnetworkservices
Disable IPv6 on that device (in this case "Wi-Fi"):
networksetup -setv6off Wi-Fi
Run composer ...
You can enable IPv6 again with:
networksetup -setv6automatic Wi-Fi
That said, if this fixes your problem, please talk to your ISP about it to try and resolve the routing errors. That's the best way to get things resolved for everyone.
Hoping it will help you!
Based on the ideas above, I created a shell script that sets up a proxy environment for Composer.
#!/bin/bash
export HTTP_PROXY=http://127.0.0.1:8888/
export HTTPS_PROXY=http://127.0.0.1:8888/
zsh # you can also use bash or another shell
This code lives in a file named ~/bin/proxy_mode_shell and creates a new zsh shell instance when you need the proxy. After the update is finished, you can simply press Ctrl+D to quit the proxy mode.
Add export PATH=~/bin:$PATH to ~/.bashrc or ~/.zshrc if you cannot run proxy_mode_shell directly.
This one is best explained by code I think. From the web directory:
vi get.php
Add this PHP to get.php:
<?php
echo file_get_contents("http://IPOFTHESERVER/");
?>
IPOFTHESERVER is the IP of the server that nginx and PHP are running on.
php get.php
Returns the contents of the (default) website hosted at that IP. BUT
http://IPOFTHESERVER/get.php
...returns a 504 Gateway Time-out. It's the same with cURL, and the same using the PHP exec command and GET. However, from the command line directly it all works fine.
I've replicated it on two nginx servers. For some reason nginx won't allow me to make an HTTP connection to the server it's running on via PHP (unless it's via the command line).
Anyone got any ideas why?
Thanks!
Check that you're not running into worker depletion on the PHP side of things; this was the issue on my lab server setup, which was configured to save RAM.
Basically, I forgot that a single worker is used to process the main page being displayed to the end user, and the file_get_contents() call then generates a separate HTTP request to the same web server, effectively requiring two workers for a single page load.
As the first page was using the last worker, there was none available for the file_get_contents() call, so nginx eventually replied with a 504 on the first page because there was no reply to the reverse-proxied request.
Check whether allow_url_fopen is enabled in your php.ini.
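A quick way to confirm the effective setting from the same context as the failing script:
<?php
// Shows the value of allow_url_fopen as seen by this SAPI/vhost.
var_dump(ini_get('allow_url_fopen'));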
$page1 = file_get_contents('http://www.google.com');
$page2 = file_get_contents('http://localhost:8000/prueba');
When I echo the results, it works with Google but not with my site. And when I put the address in the browser, it works. And this happens with every site that I make in Django. :(
Warning: file_get_contents(http://localhost:8000/prueba) [function.file-get-contents]: failed to open stream: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. in C:\xampp\htdocs\squirrelmail\plugins\captcha\backends\b2evo\b2evo.php on line 138
Fatal error: Maximum execution time of 60 seconds exceeded in C:\xampp\htdocs\squirrelmail\plugins\captcha\backends\b2evo\b2evo.php on line 138
For anyone having this problem with the PHP built-in web server (with Laravel in my case), it is caused by the internal file_get_contents() / cURL request being blocked because the server is still busy with your original request.
The docs of the dev server say that
PHP applications will stall if a request is blocked.
Since the PHP built-in server is single-threaded, requesting another URL on your own server from within a request stalls it, and the request times out.
As a solution, you can use a proper web server (nginx, Apache, etc.).
Edit: As of now, I really suggest using Laravel Sail as a development environment for PHP projects. It saves you lots of time on the setup and configuration of different services (web server, databases, queues, etc.).
As zub0r pointed out, the built-in PHP server is single-threaded. If you do not want to install and configure a web server like nginx, and do not want to use Homestead or Valet, there is another easy solution:
Start another instance of the built-in PHP server on another port and use that one for the internal requests of your app.
php -S localhost:8000
# in another console
php -S localhost:8001
I use this in my Laravel app when I request some local dummy API via Guzzle and it works fine.
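For instance, an internal call from the app could then simply target the second port, along these lines (the /prueba path is just reused from the question above; the ports are the ones from the commands):
<?php
// The browser keeps talking to the instance on :8000, while internal
// requests go to the second instance on :8001, so nothing blocks.
$response = file_get_contents('http://localhost:8001/prueba');
var_dump($response);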
To get the output of a local PHP file, you can use:
exec('php file.php', $content);
The $content variable is an array of output lines, so just point to the correct key, like $content[3].
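If you want the whole output as a single string instead, something like this works:
<?php
// exec() fills $content with one array entry per line of output.
exec('php file.php', $content);
echo implode(PHP_EOL, $content);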
Hope this helps you.