I am seeing a very bizarre problem with a PHP application I am building.
I have 2 virtual hosts on my development server (windows 7 64-bit) sometestsite.com and endpoint.sometestsite.com.
In my hosts file, I configured sometestsite.com and endpoint.sometestsite.com to point to 127.0.0.1.
Everything worked when the server was running Apache 2.4.2 with PHP 5.4.9 as an FCGI module.
I then removed Apache and installed nginx-1.2.5 (Windows build). I got php-cgi.exe running as a service and everything seemed to work fine.
The problem is that a cURL call from sometestsite.com to endpoint.sometestsite.com, which previously worked, now times out.
I then moved that piece of code by itself to a small PHP file for testing:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://endpoint.sometestsite.com/test');
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, array(
    'provider' => urlencode('provider'),
    'key'      => urlencode('asdf')
));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
//Execute and get the data back
$result = curl_exec($ch);
var_dump($result);
This is what I receive in the PHP logs:
PHP Fatal error: Maximum execution time of 30 seconds exceeded in D:\www\test5.php on line 22
PHP Stack trace:
PHP 1. {main}() D:\www\test5.php:0
However, if I run the same request using CLI CURL (via Git Bash), it works fine:
$ curl -X POST 'http://endpoint.sometestsite.com/test' -d'provider=provider&key=asdf'
{"test": "OK"}
This is quite strange, as PHP is exactly the same version and has the same configuration as when Apache was used.
I am not yet sure whether this is a web server configuration issue or a problem with PHP's cURL.
Can anyone provide some insight/past experiences as to why this is happening?
Nginx does not spawn your php-cgi.exe processes for you. If you came from Apache like me and used mod_fcgid, you will be used to seeing many php-cgi.exe processes in the system.
Because Nginx does not spawn the PHP process for you, you will need to start the process yourself. In my case, I have php-cgi.exe -b 127.0.0.1:9000 running as a service automatically. Nginx then pushes all requests for PHP to the PHP handler and receives a response.
The problem: PHP-FPM does not work on Windows (as of 5.4.9). FPM is a neat little process manager that sits in the background and manages the spawning and killing of PHP processes when handling requests.
Because this is not possible on Windows, we can only serve one request at a time, similar to the problem experienced here.
In my case, the following happens: a page in my application on sometestsite.com is requested, which goes to php-cgi.exe on 127.0.0.1:9000. Inside that request, cURL calls a page on endpoint.sometestsite.com. However, no new PHP process can be spawned to serve this second request; the one php-cgi.exe is blocked serving the request that is making the cURL call. So we have a deadlock and everything times out.
The solution I used (it is pretty much a hack) is to use this Python script to spawn 10 PHP processes.
You then use an upstream block in nginx (as per the docs for the script) to tell nginx that there are 10 processes available.
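For illustration, the upstream block ends up looking something like this (a sketch, not taken from the script's docs; the pool name and ports are placeholders that must match the spawned php-cgi.exe processes):
upstream php_pool {
    # one entry per php-cgi.exe process that was spawned
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
    # ... and so on, up to the tenth process
}

# inside the relevant server { } block:
location ~ \.php$ {
    include        fastcgi_params;
    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    fastcgi_pass   php_pool;   # round-robin across the pool instead of a single blocked process
}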
Things then worked perfectly.
Having said that, please do not ever use this in production (you are probably better off running nginx and php-fpm on Linux anyway). If you have a busy site, 10 processes may not be enough. However, it can be hard to know how many processes you need.
However, if you do insist on running nginx with PHP on Windows, consider running PHP-FPM within Cygwin as per this tutorial.
Be sure that you run the script on the console as the same user that is used to run the CGI process. If they are not the same, they may have different permissions. In my case, the problem was a firewall rule that blocked outbound connections for the owner of the CGI process.
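A quick way to confirm which account the CGI process actually runs under is to have PHP report it; a minimal sketch (whoami ships with both Windows and Linux):
<?php
// Prints the account the PHP/CGI process runs as; compare it with the
// account you use on the console.
echo 'PHP is running as: ' . trim((string) shell_exec('whoami'));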
Related
I have a Windows Server 2016 VPS with Plesk and PHP 7.1x.
I am trying to execute a simple AutoHotKey script from PHP using the following command:
<?php shell_exec('start /B "C:\Program Files\AutoHotkey\AutoHotkey.exe" C:\inetpub\vhosts\mydomain.com\App_Data\myahkscript.ahk'); ?>
This is the only line on the page. I have tried different ahk scripts, the current one simply creates a MsgBox.
When I execute my PHP page, on the VPS Task Manager I see three processes created with the expected USR: cmd.exe, conhost.exe and php-cgi.exe. However, my PHP page just sits waiting and nothing actually happens on the server.
I have also tried the same line with exec instead of shell_exec; this seems to make no difference. I have tried both commands without start /B; in that case the PHP page completes, but no new processes are started.
I cannot find any errors in any logs: Mod_Security, Plesk Firewall, IIS.
Any ideas?
EDIT:
I tried my command from the VPS command prompt and was immediately slapped in the face by the obvious issue of the space in 'Program Files'. I quoted the string as shown above and the command works. This eliminated the hang when running from PHP. However, the command still does nothing when executed from the web page.
EDIT:
Based on suggestions from the referenced post 'debugging exec()':
var_dump: string(0) ""
$output: Array ( )
$return_val: 1
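For reference, output like the above can be captured via exec()'s optional parameters plus a 2>&1 redirect; a rough reconstruction of that kind of harness (the command is the one from the question):
<?php
// Capture stdout, stderr (via 2>&1) and the exit code of the command under test.
$cmd = 'start /B "C:\Program Files\AutoHotkey\AutoHotkey.exe" C:\inetpub\vhosts\mydomain.com\App_Data\myahkscript.ahk';
$output = array();
$return_val = null;
$last_line = exec($cmd . ' 2>&1', $output, $return_val);

var_dump($last_line);   // last line of output (the question reported string(0) "")
print_r($output);       // full output array (reported as empty)
var_dump($return_val);  // exit code (reported as 1, i.e. failure)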
One point was that I would probably not be able to invoke GUI applications. That puts a damper on the idea.
I'm having difficulty with PHP's file_get_contents hanging for 60s when accessing certain resources over https.
I'm not sure whether it's a client or server end issue.
On the client
Working on the command line:
$ URL="https://example.com/some/path"
$ wget "$URL" -O /dev/null -q # takes a few milliseconds
$ curl "$URL" >/dev/null # takes a few milliseconds
$ php -r 'file_get_contents("'"$URL"'")' # takes 61s!
On the server
A line is written to the Apache (2.4) access log for the correct SSL vhost immediately, with a 200 (success) response. This makes for a confusing timeline:
0s php's file_get_contents triggered on client
0.2s the server's Apache access log records a successful (200) response.
???who knows what is going on here???
60.2s client receives the file.
Tested from Ubuntu 14.04 and Debian 8 clients. The resources in question are all on Debian 8 servers running Apache 2.4 with the ITK MPM and PHP 5.6. I've tried it with the firewall turned off (default ACCEPT policy), so it's not that. N.B. the servers have IPv6 disabled, which could be related, as I've noticed timeouts like this when something tries IPv6 first. But the hosts being accessed do not have AAAA records, and the Apache logs show that (a) the SSL connection was established OK and (b) the request was valid and received.
One possible answer: are you sure the client only receives the file after 60.2 seconds? If I remember correctly, file_get_contents() has a nasty habit of waiting for the remote connection to close before it considers the request completed. This means that if your server is using HTTP keep-alive, which keeps the connection open for a period of time after all data transfer has completed, your application may appear to hang.
Does something like this help?
$context = stream_context_create(['http' => ['header' => "Connection: close\r\n"]]); // double quotes so \r\n is sent as a real CRLF
file_get_contents("https://example.com/some/path", false, $context);
NB: the 'http' key in that options array applies to both http:// and https:// streams, so it does not need changing for an https URL.
Try tracing the script.
php -r 'file_get_contents("'"$URL"'")' & will run the script in the background and print its PID. Then run strace -p <pid>.
Thanks to the answers for pointing me deeper!
This seems to be a fault/quirk of the ITK Apache MPM.
With that module loaded, the problem (the connection not being closed, so file_get_contents hangs) shows up.
Without the module, the problem goes away.
It's not the first bug I've found with that module since upgrading to Debian Jessie / Apache 2.4. I'll try to report it.
Ah ha! I was right. It was a bug and there's a fix released, currently in Debian jessie proposed updates.
I'm using WAMP. In the past weeks I struggled a lot to make PHP and cURL work behind a corporate proxy, and finally I did it: Apache behind corporate proxy.
The problem is that now I can't make them work at home! (Of course, they initially worked at home without a proxy.) When I run a cURL request from PHP I get the following error: Curl error: Failed to connect to localhost port 3128
I removed the https_proxy and http_proxy environment variables, removed the proxy_module in Apache, and removed the proxy in IE. Now when I run the following command there are no results:
reg query "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings" | find /i "proxyserver"
It seems that cURL is picking up the proxy configuration from somewhere in the environment; in fact, if I add to the application code:
curl_setopt($ch, CURLOPT_PROXY, '');
then everything works fine (but I don't want to change the application code). Where else can I look for the proxy config?
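For reference, the proxy-related variables can be dumped as the web server's PHP process itself sees them (the Apache service environment can differ from the console's); a quick sketch, using the variable names libcurl honours:
<?php
// If any of these come back non-empty, libcurl will use them whenever
// CURLOPT_PROXY is not set explicitly.
foreach (array('http_proxy', 'https_proxy', 'HTTPS_PROXY', 'all_proxy', 'ALL_PROXY', 'no_proxy', 'NO_PROXY') as $name) {
    var_dump($name, getenv($name));
}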
Thanks very much
This one is best explained by code I think. From the web directory:
vi get.php
Add this php to get.php
<?php
echo file_get_contents("http://IPOFTHESERVER/");
?>
IPOFTHESERVER is the IP of the server that nginx and PHP are running on.
php get.php
Returns the contents of the (default) website hosted at that IP. But:
http://IPOFTHESERVER/get.php
...returns a 504 Gateway Time-out. It's the same with curl, and the same using the PHP exec command and GET. However, run directly from the command line it all works fine.
I've replicated it on 2 nginx servers. For some reason nginx won't let me make an HTTP connection, via PHP, to the server it's running on (unless it's via the command line).
Anyone got any ideas why?
Thanks!
Check that you're not running into worker depletion on the PHP side of things; this was the issue on my lab server setup, which was configured to save RAM.
Basically, I forgot that a single worker is already busy processing the main page being displayed to the end user, and the file_get_contents() call then generates a separate HTTP request to the same web server, effectively requiring two workers for a single page load.
As the first page was using the last available worker, there was none left for the file_get_contents() request, so nginx eventually replied with a 504 on the first page because the proxied FastCGI request never got a reply.
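If the PHP backend is php-fpm (an assumption; the same idea applies to any FastCGI pool), the relevant knobs live in the pool configuration; a sketch with example values:
; e.g. /etc/php5/fpm/pool.d/www.conf (the path varies by distro and PHP version)
pm = dynamic
pm.max_children = 5      ; with only 1 child, the nested request described above deadlocks
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3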
Check whether allow_url_fopen is set to On in your php.ini.
I am trying to track down an issue with a cURL call in PHP. It works fine in our test environment, but not in our production environment. When I try to execute the cURL function, it just hangs and never ever responds. I have tried making a cURL connection from the command line and the same thing happens.
I'm wondering if cURL logs what is happening somewhere, because I can't figure out what is happening during the time the command is churning and churning. Does anyone know if there is a log that tracks what is happening there?
I think it is connectivity issues, but our IT guy insists I should be able to access it without a problem. Any ideas? I'm running CentOS and PHP 5.1.
Update: Using verbose mode, I've gotten an error 28, "Connect() Timed Out". I tried extending the timeout to 100 seconds and limiting the max-redirs to 5; no change. I tried pinging the box and also got a timeout. So I'm going to present this back to IT and see if they will look at it again. Thanks for all the help; hopefully I'll be back in half an hour with news that it was their problem.
Update 2: Turns out my box was resolving the server name with the external IP address. When IT gave me the internal IP address and I replaced it in the cURL call, everything worked great. Thanks for all the help everybody.
In your PHP code, you can set the CURLOPT_VERBOSE option:
curl_setopt($curl, CURLOPT_VERBOSE, TRUE);
This then logs to STDERR, or to the file specified using CURLOPT_STDERR (which takes a file pointer):
curl_setopt($curl, CURLOPT_STDERR, $fp);
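Put together, a minimal, self-contained sketch (the URL and log path are placeholders):
<?php
// Route libcurl's verbose output to a log file instead of STDERR.
$curl = curl_init('http://example.com/');
$fp   = fopen('/tmp/curl_debug.log', 'w');

curl_setopt($curl, CURLOPT_VERBOSE, true);
curl_setopt($curl, CURLOPT_STDERR, $fp);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);

curl_exec($curl);
curl_close($curl);
fclose($fp);   // close the handle so the log is flushed to disk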
From the command line, you can use the following switches:
--verbose to report more info to the command line
--trace <file> or --trace-ascii <file> to trace to a file
You can use --trace-time to prepend time stamps to verbose/file outputs
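For example (example.com standing in for the real endpoint):
$ curl --verbose http://example.com/some/path
$ curl --trace-ascii curl_trace.txt --trace-time http://example.com/some/path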
You can also use curl_getinfo() to get information about your specific transfer.
http://in.php.net/manual/en/function.curl-getinfo.php
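The timing fields it reports are handy for seeing where the time goes; a short sketch (the URL is a placeholder):
<?php
$curl = curl_init('http://example.com/');
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_exec($curl);

var_dump(curl_getinfo($curl, CURLINFO_NAMELOOKUP_TIME));    // DNS resolution
var_dump(curl_getinfo($curl, CURLINFO_CONNECT_TIME));       // TCP connect
var_dump(curl_getinfo($curl, CURLINFO_STARTTRANSFER_TIME)); // time to first byte
var_dump(curl_getinfo($curl, CURLINFO_TOTAL_TIME));         // whole transfer
var_dump(curl_errno($curl), curl_error($curl));             // error number and message, e.g. 28 for a connect timeout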
Have you tried setting CURLOPT_MAXREDIRS? I've found that sometimes there will be an 'infinite' redirect loop for some websites that a normal browser user doesn't see.
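For example (the limit of 5 mirrors what was already tried on the command line; the URL is a placeholder):
<?php
$ch = curl_init('http://example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects at all...
curl_setopt($ch, CURLOPT_MAXREDIRS, 5);          // ...but give up after 5 hops
$result = curl_exec($ch);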
If at all possible, try sudo-ing as the user PHP runs under (possibly the one Apache runs under).
The cURL problem could have various causes that require user input, for example an untrusted certificate that is stored in the root user's trusted certificate cache but not in the PHP user's. In that case, the command would be waiting for input that never arrives.
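For example (assuming the web server account is apache, which is typical on CentOS; substitute the real one):
$ sudo -u apache curl -v http://example.com/the/endpoint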
Update: This applies only if you run curl externally using exec - maybe it doesn't apply.