How to identify proxy/SSL issues on a virtual server (Debian) inside a company network (PHP)

I have two virtual servers running Debian 10 inside my company that host the same PHP application. One server is for production, one is for development. IT says both servers were configured identically when they were handed over to me.
The application runs fine on both servers, except for collecting data from other websites / REST APIs with file_get_contents() or cURL.
This only works on server No. 1; server No. 2 runs into timeouts, apparently because of problems with the company proxy/SSL.
I am not a pro at configuring Linux servers and am trying to narrow the problem down, because IT says they cannot help with the configuration.
On server No. 1, when I run a test script with file_get_contents($url), I get data back.
When I run
curl https://company.de -I
I get information about the website.
On server No. 2, the same test script runs into a timeout (returns false).
When I run the same curl command there, I also get a timeout.
BUT when I run the command on No. 2 and add the company proxy manually,
curl --proxy http://proxy.company.de:8080 https://company.de -I
I get a result.
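To reproduce the same comparison from PHP rather than the shell, something like this could be used (a sketch; the proxy host and test URL are the ones from the curl commands above):
<?php
// Minimal diagnostic: compare a direct request with one routed through the company proxy.
$url   = 'https://company.de';
$proxy = 'proxy.company.de:8080'; // same proxy as in the curl example above

// 1) Direct request via file_get_contents() with a short timeout
$direct = @file_get_contents($url, false, stream_context_create(['http' => ['timeout' => 5]]));
var_dump($direct !== false); // false on server No. 2 according to the tests above

// 2) Same URL through the proxy via cURL
$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_CONNECTTIMEOUT => 5,
    CURLOPT_PROXY          => $proxy,
]);
$viaProxy = curl_exec($ch);
var_dump($viaProxy !== false, curl_error($ch));
curl_close($ch);
If No. 2 turns out to work only through the proxy, file_get_contents() would also need 'proxy' => 'tcp://proxy.company.de:8080' and 'request_fulluri' => true in its http stream context, which is worth keeping in mind while comparing the two servers.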
On both No. 1 and No. 2, when I check
env | grep proxy
it shows that no proxy is set in the environment, and the config file in /etc/profile.d/ is also empty.
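Since no proxy is configured anywhere, it may help to separate plain network reachability from the TLS layer. A small sketch (the test host is the one from the curl commands above, adjust as needed):
<?php
// 1) plain TCP to port 443: if this already times out, the problem is
//    routing/firewall/transparent proxy, not the TLS layer;
// 2) ssl://: if TCP works but this fails, the TLS handshake is the issue.
$host = 'company.de';
foreach (['tcp' => "tcp://$host:443", 'tls' => "ssl://$host:443"] as $label => $target) {
    $errno = 0; $errstr = '';
    $start = microtime(true);
    $fp = @stream_socket_client($target, $errno, $errstr, 5);
    printf("%s: %s (%.1f s) %s\n", $label, $fp ? 'ok' : 'failed', microtime(true) - $start, $fp ? '' : "[$errno] $errstr");
    if ($fp) { fclose($fp); }
}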
I think it might have something to do with the SSL connection, because in some context I got a warning from McAfee and a refused connection, but I cannot reproduce it.
The PHP version is the same on both servers (7.3.31). OpenSSL is 1.1.1d on No. 1 and 1.1.1n on No. 2.
I could try to make everything work on No. 2, but I want exactly the same environment for development and production. Any advice on how to narrow down the real issue is highly welcome (especially the SSL part).
What can I do to find the difference between the two server configurations, or the environments they are in?
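One way to see at which phase the request on No. 2 stalls (name resolution, TCP connect, TLS handshake) is to capture cURL's verbose log from PHP; a sketch, using the same test URL as above:
<?php
// Capture cURL's verbose log to see at which step the connection hangs.
$url = 'https://company.de';
$log = fopen('php://temp', 'w+');
$ch  = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_NOBODY         => true,   // HEAD request, like "curl -I"
    CURLOPT_CONNECTTIMEOUT => 10,
    CURLOPT_TIMEOUT        => 15,
    CURLOPT_VERBOSE        => true,
    CURLOPT_STDERR         => $log,
]);
curl_exec($ch);
echo 'curl error: ', curl_error($ch), PHP_EOL;
rewind($log);
echo stream_get_contents($log);       // the last line of the log shows how far the request got
fclose($log);
curl_close($ch);
Comparing the verbose output of the two servers (and of a run with CURLOPT_PROXY set) should show whether the difference is at the network level or in the certificate/TLS exchange.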

Related

file_get_contents 1 minute timeout for https?

I'm having difficulty with PHP's file_get_contents hanging for 60 s when accessing certain resources over HTTPS.
I'm not sure whether it's a client-side or server-side issue.
On the client
Working on the command line:
$ URL="https://example.com/some/path"
$ wget "$URL" -O /dev/null -q # takes a few milliseconds
$ curl "$URL" >/dev/null # takes a few milliseconds
$ php -r 'file_get_contents("'"$URL"'")' # takes 61s!
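Timing the same call from inside PHP can also show whether the delay happens before the first byte arrives or only after the body is already there (a sketch, assuming allow_url_fopen is enabled and using the placeholder URL from above):
<?php
// Measure time-to-first-byte vs. time-to-stream-close for the same https wrapper.
// If data arrives quickly but the total is ~60 s, the hang is in waiting for the
// connection to close, not in the transfer itself.
$url = 'https://example.com/some/path';
$start = microtime(true);
$fp = fopen($url, 'r');
fread($fp, 8192);
printf("first bytes after %.2f s\n", microtime(true) - $start);
stream_get_contents($fp);
fclose($fp);
printf("stream closed after %.2f s\n", microtime(true) - $start);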
On the server
A line is written to the Apache (2.4) access log for the correct SSL vhost immediately, with a 200 (success) response. This makes for a confusing timeline:
0s php's file_get_contents triggered on client
0.2s server's apache access log logs a successful (200).
???who knows what is going on here???
60.2s client receives the file.
Tested from Ubuntu 14.04 and Debian 8 clients. The resources in question are all on Debian 8 servers running Apache 2.4 with the ITK MPM and PHP 5.6. I've tried it with the firewall turned off (default ACCEPT policy), so it's not that. N.B. the servers have IPv6 disabled, which could be related, as I've noticed timeouts like this when something tries IPv6 first. But the hosts being accessed have no AAAA records, and the Apache logs show that (a) the SSL connection was established OK and (b) the request was valid and received.
One possible answer: are you sure the client only receives the file after 60.2 seconds? If I remember correctly, file_get_contents() has a nasty habit of waiting for the remote connection to close before it considers the request complete. This means that if your server is using HTTP keep-alive, which keeps the connection open for a period of time after all data transfer has completed, your application may appear to hang.
Does something like this help?
// Note: double quotes, so \r\n is sent as a real CRLF rather than literal backslash characters
$context = stream_context_create(['http' => ['header' => "Connection: close\r\n"]]);
file_get_contents("https://example.com/some/path", false, $context);
N.B. The 'http' key in that context array applies to both http:// and https:// URLs, so it should not need changing.
Try tracing the script:
php -r 'file_get_contents("'"$URL"'")' & will run the script in the background and print its PID. Then attach to it with strace -p <pid>.
Thanks to the answers for pointing me deeper!
This seems to be a fault/quirk of the ITK Apache MPM.
With that module the problem (file_get_contents not closing the connection) exhibits itself.
Without the module, the problem goes away.
It's not the first bug I've found with that module since upgrading to Debian Jessie / Apache 2.4. I'll try to report it.
Ah ha! I was right. It was a bug, and there's a fix released, currently in Debian jessie proposed updates.

PHP cURL is using an environment variable that I didn't set

I'm using WAMP. In the past weeks I struggled a lot to make PHP and cURL work behind a corporate proxy, and finally I did it: Apache behind corporate proxy.
The problem is that now I can't make them work at home (of course, they initially worked at home without a proxy). When I run a cURL request from PHP I get the following error: Curl error: Failed to connect to localhost port 3128.
I removed the https_proxy and http_proxy environment variables, removed the proxy_module in Apache, and removed the proxy in IE; now the following command returns no results:
reg query "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings" | find /i "proxyserver"
It seems that cURL is taking the proxy configuration from somewhere in the environment; in fact, if I add this to the application code:
curl_setopt($ch, CURLOPT_PROXY, '');
then everything works fine (but I don't want to change the application code). Where else can I look for the proxy config?
Thanks very much
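Since libcurl falls back to proxy-related environment variables (http_proxy, https_proxy, ALL_PROXY, ...) when no CURLOPT_PROXY is set, it is worth dumping them from a page served by WAMP's Apache rather than from a command prompt, because the Apache service can have its own environment. A sketch:
<?php
// Run this through WAMP's Apache, not the CLI: the Apache service's environment
// is where libcurl would still find a stale localhost:3128 entry.
foreach (['http_proxy', 'https_proxy', 'HTTPS_PROXY', 'all_proxy', 'ALL_PROXY', 'no_proxy', 'NO_PROXY'] as $name) {
    printf("%-12s => %s\n", $name, var_export(getenv($name), true));
}
If one of them still shows localhost:3128 here but not in a normal command prompt, the value lives in the Apache service's environment, or was set system-wide and the service has not been restarted since it was removed.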

PHP-cgi stops working randomly without error log

I'm using nginx with WT-NMP (a portable MySQL/nginx/PHP app). Everything works perfectly except php-cgi: it stops randomly and I have to start it again. In fact, I realized that if I add posts to my website quickly (in WordPress), it definitely stops. But sometimes it stops for no apparent reason; maybe there is a reason, but I can't see it because nothing shows up in the error logs.
I searched on the internet and found something about my problem, but I couldn't use those methods on my Windows server.
The solution I found: add SET PHP_FCGI_MAX_REQUESTS=0 to the script that starts php-cgi.exe:
@ECHO OFF
ECHO Starting PHP FastCGI...
REM 0 means no request limit, so php-cgi.exe does not exit after serving N requests
SET PHP_FCGI_MAX_REQUESTS=0
SET "PATH=D:\Program Files\php;%PATH%"
"C:\Program Files\php\php-cgi.exe" -b 127.0.0.1:9000 -c "C:\Program Files\php\php.ini"
Any idea about my problem, or how to use this on my Windows server? I searched for Windows and found only one solution, and it was about IIS.
The process: adding PHP_FCGI_MAX_REQUESTS=0 to the environment variables.
The following setting works fine on Windows Server 2012 R2:
http://i.stack.imgur.com/HrIaO.jpg
Control Panel -> System -> Advanced System Settings -> Advanced -> Environment Variables -> System Variables -> New -> Variable name: PHP_FCGI_MAX_REQUESTS, Variable value: 0
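To verify that the variable actually reaches the php-cgi.exe process after a restart, a one-line check served through nginx can help (just a sanity check, not part of the original answer):
<?php
// Should print "0" once the system environment variable is in place and
// php-cgi.exe has been restarted; false means the process never saw it.
var_dump(getenv('PHP_FCGI_MAX_REQUESTS'));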

Yii cURL "Couldn't resolve host" error on CentOS 6

I've run into the error "Couldn't resolve host '...'".
I have researched many topics and couldn't find a workaround.
At first, with the same code, I could use cURL without any problem, but today it suddenly stopped working. Here were my attempts:
With the same code on my localhost, cURL worked fine.
On my CentOS 6 server (using cPanel/WHM), my directory structure looks like the following:
public_html
-YiiWebsiteFolder
-curl_test.php
I could run cURL to the same URL from curl_test.php without a problem; it worked fine. It also worked when I put curl_test.php inside YiiWebsiteFolder, so the problem wouldn't be permissions.
But if I run the same cURL code through Yii (YiiWebsiteFolder), in a Yii controller and action, it raises the error "Couldn't resolve host ...".
(My URL rewriting is very normal; my site URL looks like "mydomain.com/index.php/test/myaction".)
So I guessed the cause could be Yii, not a DNS problem like some help topics said:
http://forums.cpanel.net/f34/file_get_contents-couldnt-resolve-host-name-120065.html
Couldn't resolve host and DNS Resolution failed
The Yii config main.php files on my local machine and my server are the same.
Edited: I have found this guy who had the same problem as me:
cURL doesn't work when it's used in a PHP application and running through a web browser
"cURL doesn't work when it's used in a PHP application and running through a web browser. However, that same PHP page with cURL, when run via the terminal, does what it's supposed to do and fetches the information that I want."
But he found out that the problem was the DAEMON array, and I don't use an Apache DAEMON (I'm not even sure what it is).
I have tried all possible solutions, such as restarting my network and Apache to change the order in which they start, and modifying /etc/resolv.conf (adding or removing 127.0.0.1 and trying some public DNS servers):
service network stop
service network start
service network restart
/sbin/service httpd stop
/sbin/service httpd start
I've spent many hours troubleshooting the problem with no success at all.
Any help is really appreciated.
This does not solve your specific problem, but I thought I should post to say I had the same problem: PHP cURL could not resolve any host name when the PHP script was run from a web browser, but the same script run from the terminal command line worked fine (i.e. cURL resolved host names and returned the expected response). In my case it was resolved with the following (presumably because the running httpd processes were still using the resolver configuration they had read at startup):
/sbin/service httpd stop
/sbin/service httpd start
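If the error returns, a small script served through the same web server can confirm whether name resolution really is failing inside the web SAPI (a diagnostic sketch; the hostname is a placeholder):
<?php
// Compare PHP's own resolver with cURL's, from inside the web server process.
// Run it once via the browser and once via the CLI to see where resolution differs.
$host = 'example.com';   // placeholder: use the host that fails in the Yii action
$ip = gethostbyname($host);   // returns the input unchanged when resolution fails
echo "gethostbyname: ", ($ip === $host ? 'FAILED' : $ip), PHP_EOL;
$ch = curl_init("http://$host/");
curl_setopt_array($ch, [CURLOPT_RETURNTRANSFER => true, CURLOPT_CONNECTTIMEOUT => 5]);
curl_exec($ch);
echo "curl errno: ", curl_errno($ch), ' ', curl_error($ch), PHP_EOL;   // 6 = couldn't resolve host
curl_close($ch);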

Nginx and PHP-cgi - can't file_get_contents of any website on the server

This one is best explained by code I think. From the web directory:
vi get.php
Add this PHP to get.php:
<?php
echo file_get_contents("http://IPOFTHESERVER/");
?>
IPOFTHESERVER is the IP of the server that nginx and PHP are running on.
php get.php
returns the contents of the (default) website hosted at that IP. BUT
http://IPOFTHESERVER/get.php
returns a 504 Gateway Time-out. It's the same with cURL, and the same using the PHP exec command with GET. However, from the command line directly it all works fine.
I've replicated this on 2 nginx servers. For some reason nginx won't let me make an HTTP connection, via PHP, to the server it's running on (unless it's from the command line).
Anyone got any ideas why?
Thanks!
Check that you're not running into worker depletion on the PHP side of things; this was the issue on my lab server, which was configured to save RAM.
Basically, I had forgotten that a single worker processes the main page being displayed to the end user, and the file_get_contents() call then generates a separate HTTP request to the same web server, effectively requiring 2 workers for a single page load.
As the first page was using the last worker, there was none available for the file_get_contents() call, so nginx eventually replied with a 504 on the first page because there was no reply on the reverse-proxied request.
Check whether allow_url_fopen is enabled in your php.ini.
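Both of these can be checked quickly from a small script (a sketch; it reuses the IPOFTHESERVER placeholder and gives the inner request a short timeout, so a depleted worker pool fails fast instead of waiting for the 504):
<?php
// 1) Is allow_url_fopen enabled for this SAPI?
var_dump(ini_get('allow_url_fopen'));
// 2) Same self-request as get.php, but with a short timeout so exhausted
//    workers show up as a quick failure instead of a 60 s gateway timeout.
$context = stream_context_create(['http' => ['timeout' => 3]]);
$body = @file_get_contents('http://IPOFTHESERVER/', false, $context);
var_dump($body === false ? 'request failed (check worker count / allow_url_fopen)' : 'ok');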
