file_get_contents 1 minute timeout for https? - php

I'm having difficulty with PHP's file_get_contents hanging for 60s when accessing certain resources over https.
I'm not sure whether it's a client or server end issue.
On the client
Working on the command line:
$ URL="https://example.com/some/path"
$ wget "$URL" -O /dev/null -q # takes a few milliseconds
$ curl "$URL" >/dev/null # takes a few milliseconds
$ php -r 'file_get_contents("'"$URL"'")' # takes 61s!
On the server
A line is written to the Apache (2.4) access log for the correct SSL vhost immediately, with a 200 (success) response. This makes for a confusing timeline:
0s php's file_get_contents triggered on client
0.2s server's apache access log logs a successful (200).
???who knows what is going on here???
60.2s client receives the file.
Tested from Ubuntu 14.04 and Debian 8 clients. The resources in question are all on Debian 8 servers running Apache 2.4 with the ITK MPM and PHP 5.6. I've tried it with the firewall turned off (default ACCEPT policy), so it's not that. NB: the servers have IPv6 disabled, which could be related, as I've noticed timeouts like this when something tries IPv6 first. But the hosts being accessed have no AAAA records, and the Apache logs show that (a) the SSL connection was established OK and (b) the request was valid and received.

One possible answer: are you sure the client only receives the file after 60.2 seconds? If I remember correctly, file_get_contents() has a nasty habit of waiting for the remote connection to close before it considers the request complete. This means that if your server uses HTTP keep-alive, which holds the connection open for a period of time after all data transfer has completed, your application may appear to hang.
Does something like this help?
$context = stream_context_create(['http' => ['header' => "Connection: close\r\n"]]);
file_get_contents("https://example.com/some/path", false, $context);
NB: the key stays 'http' even for https:// URLs; PHP's http context options cover both transports. Also note the double quotes around the header value, so that \r\n is sent as a real line ending.
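If the Connection: close header alone doesn't change anything, an explicit stream timeout at least stops the call from hanging for a full minute while you investigate (a sketch; the 'timeout' context option is in seconds):
$context = stream_context_create([
    'http' => [
        'header'  => "Connection: close\r\n",
        'timeout' => 5, // give up after 5 seconds instead of the default socket timeout
    ],
]);
$data = file_get_contents("https://example.com/some/path", false, $context);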

Try to trace the script.
php -r 'file_get_contents("'"$URL"'");' & will run the script in the background and print its PID. Then attach to it with strace -p <pid>.
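A sketch of that (assuming strace is installed; $! expands to the PID of the background job just started, and -e trace=network limits the output to socket-related calls):
$ php -r 'file_get_contents("'"$URL"'");' &
$ strace -tt -e trace=network -p $!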

Thanks to the answers for pointing me deeper!
This seems to be a fault/quirk of the ITK Apache MPM (mpm-itk).
With that module loaded, the problem shows up: the connection is not closed after the response, so file_get_contents keeps waiting.
Without this module the problem goes away.
It's not the first bug I've found with that module since upgrading to Debian Jessie / Apache 2.4. I'll try to report it.
Ah ha! I was right. It was a bug and there's a fix released, currently in Debian jessie proposed updates.
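If you want to confirm the same thing on your own box, toggling the module is enough to compare behaviour (a sketch; the Debian module name is assumed to be mpm_itk):
# Disable mpm-itk, fall back to the default MPM, and re-run the test above
a2dismod mpm_itk
service apache2 restart
# ...then re-enable it afterwards
a2enmod mpm_itk
service apache2 restart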

Related

PHP-Swoole error accept() failed, Error: Too many open files[24]

I have installed PHP/Swoole on my server and configured it using Laravel Swoole.
Now the problem is that everything works fine until the number of requests per second rises above about 1000.
Swoole then logs an error and stops responding to users.
I have set the operating system ulimit to 50000,
but I still get the same error. I've searched all over the internet and found nothing.
OS: CentOS 7.
The server is good enough to handle more than 1k requests per second.
If you have any experience with this, please share it with me.
Note:
When swoole starts, it logs this error too:
set_max_connection: max_connection is exceed the maximum value, it's reset to 1024
OK, I have figured it out.
Let me first say how I ran the PHP-Swoole process:
I made a systemd service on CentOS so that the Swoole process is launched and kept running in any situation.
The ulimit command only sets limits for the current shell you are in, not for the systemd unit that runs the Swoole starter process.
For that you need to add a LimitNOFILE=100000 option under the [Service] section of the unit, as in the sketch below.
After a restart everything works fine.
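A minimal sketch of such a unit (the description, path, and start command are placeholders for whatever actually launches your Swoole server; the line that matters for this problem is LimitNOFILE):
[Unit]
Description=Swoole HTTP server (example)
After=network.target

[Service]
# Placeholder start command: use whatever launches your Swoole/Laravel server.
ExecStart=/usr/bin/php /path/to/artisan swoole:http start
# The actual fix: raise the open-file limit for this service,
# independently of any ulimit set in an interactive shell.
LimitNOFILE=100000
Restart=always

[Install]
WantedBy=multi-user.target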

PHP CURL is using an environment variable that I didn't set

I'm using WAMP. In the past weeks I struggled a lot to make PHP and curl work behind a corporate proxy; I finally did it: Apache behind corporate proxy
The problem is that now I can't make them work at home! (Of course, they initially worked at home without the proxy.) When I run a CURL command from PHP I get the following error: Curl error: Failed to connect to localhost port 3128
I removed the https_proxy and http_proxy environment variables, in Apache I removed the "proxy_module", in IE I removed the proxy, and now when I run the following command there are no results:
reg query "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings" | find /i "proxyserver"
It seems that CURL is picking up the proxy configuration from somewhere in the environment; in fact, if I add this to the application code:
curl_setopt($ch, CURLOPT_PROXY, '');
then everything works fine (but I don't want to change the application code). Where else can I look for the proxy config?
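As a first check, a small sketch to dump the proxy-related environment variables as the Apache-run PHP process sees them (the list below is just the usual names libcurl and other tools look at), since Apache's environment can differ from your interactive shell's:
<?php
// Run this through Apache/WAMP, not only from the CLI:
// the web server's environment is what the in-script curl actually inherits.
foreach (array('http_proxy', 'HTTP_PROXY', 'https_proxy', 'HTTPS_PROXY', 'all_proxy', 'ALL_PROXY', 'no_proxy') as $name) {
    printf("%s = %s\n", $name, var_export(getenv($name), true));
}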
Thanks very much

PHP CURL timing out but CLI CURL works

I am seeing a very bizarre problem with a PHP application I am building.
I have two virtual hosts on my development server (Windows 7 64-bit): sometestsite.com and endpoint.sometestsite.com.
In my hosts file, I configured sometestsite.com and endpoint.sometestsite.com to point to 127.0.0.1.
Everything worked when the server was running Apache 2.4.2 with PHP 5.4.9 as an fcgi module.
I then removed Apache and installed nginx-1.2.5 (Windows build). I got php-cgi.exe running as a service and everything seems to work fine.
The problem is that a CURL call from sometestsite.com to endpoint.sometestsite.com that previously worked now times out.
I then moved that piece of code by itself to a small PHP file for testing:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://endpoint.sometestsite.com/test');
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('provider' => urlencode('provider'),
'key' => urlencode('asdf')));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
//Execute and get the data back
$result = curl_exec($ch);
var_dump($result);
This is what I receive in the PHP logs:
PHP Fatal error: Maximum execution time of 30 seconds exceeded in D:\www\test5.php on line 22
PHP Stack trace:
PHP 1. {main}() D:\www\test5.php:0
However, if I run the same request using CLI CURL (via Git Bash), it works fine:
$ curl -X POST 'http://endpoint.sometestsite.com/test' -d'provider=provider&key=asdf'
{"test": "OK"}
This is quite strange as the PHP is exactly the same version and has the same configuration as when Apache was used.
I am not sure if this is a web server configuration issue or a problem with PHP's CURL yet.
Can anyone provide some insight/past experiences as to why this is happening?
Nginx does not spawn your php-cgi.exe processes for you. If you came from Apache like me and used mod_fcgid, you will find that you have many php-cgi.exe processes in the system.
Because Nginx does not spawn the PHP process for you, you will need to start the process yourself. In my case, I have php-cgi.exe -b 127.0.0.1:9000 running as a service automatically. Nginx then pushes all requests for PHP to the PHP handler and receives a response.
Problem: PHP-FPM does not work on Windows (as of 5.4.9). FPM is a neat little process manager that sits in the background and manages the spawning and killing of PHP processes when processing requests.
Because this is not possible on Windows, we can only serve one request at a time, similar to the problem experienced here.
In my case, the following happens: a call to a page in my application on sometestsite.com is handled by php-cgi.exe on 127.0.0.1:9000. Inside that request, a CURL call requests a page on endpoint.sometestsite.com. However, no new PHP process can be spawned to serve this second request; the one php-cgi.exe is blocked serving the request that is making the CURL call. So we have a deadlock, and everything times out.
The solution I used (it is pretty much a hack) is to use this Python script to spawn 10 PHP processes.
You then use an upstream block in nginx (as per the docs for the script) to tell nginx that there are 10 processes available, sketched below.
Things then worked perfectly.
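What that upstream block might look like (the upstream name and ports here are assumptions; one server entry per spawned php-cgi.exe):
upstream php_backends {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;  # ...one line per php-cgi.exe process started by the script
}

server {
    listen 80;
    server_name sometestsite.com;

    location ~ \.php$ {
        include        fastcgi_params;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass   php_backends;  # round-robins requests across the processes above
    }
}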
Having said that, please do not ever use this in production (you are probably better off running nginx and php-fpm on Linux anyway). If you have a busy site, 10 processes may not be enough, and it can be hard to know how many you actually need.
However, if you do insist on running nginx with php on windows, consider running PHP-FPM within Cygwin as per this tutorial.
Be sure that you run the script on the console as the same user that is used to run the CGI process. If they are not the same, they may have different permissions. For me the problem was a firewall rule that disallowed opening external connections for the owner of the CGI process.

PHP from the command line starts GUI programs but Apache doesn't

First, I read some threads by people with similar problems, but none of the answers went beyond export DISPLAY=:0.0 and xauth cookies. So here is my problem, and thanks in advance for your time!
I have developed a little library which renders shelves using OpenGL and GLSL.
Over the last few days I wrapped it in a PHP extension and, surprisingly easily, it now works.
But the problem is that it only works when I execute the PHP script that uses the extension from the command line:
$ php r100.php (I successfully run this as the http user). The script is in Apache's webroot, and if I request it from the browser I get ** CRITICAL **: Unable to open display in Apache's error_log.
So, to make things easier to test and to be sure that the problem is not in the library/extension, for the moment I just want to start xmms with the following PHP script.
<?php
echo shell_exec("xmms");
?>
It, too, works only from the shell.
I've played with the Apache configuration so much now that I really don't know what else to try.
I tried xhost + && export DISPLAY=:0.0
In httpd.conf I have these:
SetEnv DISPLAY :0.0
SetEnv XAUTHORITY /home/OpenGL/.Xauthority
So my problem seems to be this:
How can I make Apache execute the PHP script with all the privileges that the http user has, including the environment?
Additional information:
The http user is in the video and users groups and has a login shell (bash).
I can log in as http and execute scripts with no problem, and I can run GUI programs which show up on display 0.
It seems that apache does not provide the appropriate environment for the script.
I read about some differences between CLI and CGI, but I can't run xmms with php-cgi either...
Any ideas for additional configuration?
Regards
Sounds a bit hazardous, but basically you can add export DISPLAY=:0.0 to the Apache start-up script (e.g. /etc/init.d/httpd or /etc/init.d/apache2, depending on the distro).
And xhost + needs to be run by the account that is connected to the local X server as a user, though I do wonder how this will work, as the PHP script should only live while the Apache HTTP request is ongoing.
Edit:
Is this some kind of application launcher? You can spawn it with exec("nohup /usr/bin/php script.php &"); now Apache should be released and PHP should continue working in the background.
In your console, allow everyone to use the X server:
xhost +
In your PHP script, set the DISPLAY variable while executing the commands:
DISPLAY=:0 glxgears 2>&1
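Put together in PHP, that might look like this (xmms as in the question; stderr is redirected so any "Unable to open display" error comes back in the output):
<?php
// Launch an X client from the web server's PHP with DISPLAY set explicitly.
echo shell_exec('DISPLAY=:0 xmms 2>&1');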

Why am I getting a SegFault when I call pdftk from PHP/Apache but not PHP/CLI or directly

When I call /usr/local/bin/pdftk from PHP in Apache (via shell_exec(), exec(), system(), etc.), it returns the SYNOPSIS message as expected.
When I call /usr/local/bin/pdftk input.pdf fill_form input.fdf output output.pdf flatten via shell_exec(), nothing returns.
When I copy and paste the exact same string to the same path in the shell (as the apache user), the output.pdf file is generated as expected.
Moving the pdftk command into a PHP shell script (shebang is #!/usr/bin/php) and executing it with php script.php works perfectly.
Calling that shell script (with its stderr redirected to stdout) from PHP in Apache (via shell_exec('script.php');) results in this line:
sh: line 1: 32547 Segmentation fault /usr/local/bin/pdftk input.pdf fill_form input.fdf output output.pdf flatten 2>&1
Whenever I run the script from the command line (via PHP or directly), it works fine. Whenever I run the script through PHP via Apache, it either fails without any notification or gives the SegFault listed above.
It's PHP 4.3.9 on RHEL4. Please don't shoot me. I've set memory to 512M with ini_set() and made sure that the apache user had read/write to all paths (with fopen()) and by logging in as apache ...
Just went and checked /var/log/messages to find this:
Oct 4 21:17:58 discovery kernel: audit(1286241478.692:1764638):
avc: denied { read } for pid=32627 comm="pdftk" name="zero"
dev=tmpfs ino=2161 scontext=root:system_r:httpd_sys_script_t
tcontext=system_u:object_r:zero_device_t tclass=chr_file
NOTE: Disabling SELinux "fixed" the problem. Has this moved into a ServerFault question? Can anybody give me the 30 second SELinux access controls primer here?
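(For completeness: on a system with modular SELinux policy, the usual less drastic route is to turn the logged denial into a local policy module instead of disabling SELinux outright. Whether these tools are available on a box as old as RHEL4 is an assumption; on newer distros they ship with policycoreutils.)
# Build a local policy module from the logged AVC denial and load it
# (the module name pdftk_local is arbitrary)
grep pdftk /var/log/messages | audit2allow -M pdftk_local
semodule -i pdftk_local.pp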
php-cli & php-cgi (or the module, depends on what your server uses) are different binaries. They don't even have to share the same version to live happily side by side on your server. They also may not share the same configuration. Increasing memory usually does nothing to help Segfaults. Points to check:
Are they the same version?
Do they have the same settings (consult the *.ini locations loaded in the phpinfo(); output, and possibly the whole output itself), if not: try what happens if you alter the one for your webserver to the one for the cli as far as possible.
Segfaults occur more in extensions then in the core afaik, and sometimes seemingly unrelated. Try to disable unneeded extensions one by one to see if the problem goes away.
Still no success? You may want to run apache with gdb, but I have no experience with that, it might tell you something though.
No luck? Recompile either the module of cgi your webserver uses.
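A low-tech way to do the configuration comparison mentioned above is a one-line script run once through each SAPI:
<?php
// Save as info.php; run `php info.php` on the CLI and request it once through Apache,
// then compare "Configuration File (php.ini) Path" and the loaded-extensions sections.
phpinfo();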
It's PHP 4.3.9 on RHEL4. Please don't shoot me.
I feel more sadness for you than anger; we're beyond the 5.3 mark, come on over, it's a lot happier here.
