PHP Exec (ffmpeg) fails on IIS every other request

<?php
exec("ffmpeg.exe -i something.mp4 -ss 1 -t 1 -r 1 -s 320x240 -y something.jpg");
?>
Calling this script results in a server error 500 every other request.
PHP 7.01, IIS 10.
I have already ruled out that the problem might be related to the specific ffmpeg parameters of my call.
The execution takes less than 1 second, so it can't be a PHP or IIS script execution timeout.
No matter how much time passes between one "call" to the script and the next, the odd-numbered calls result in error 500 and the even-numbered calls are just fine.
Note that when I say "call" I actually mean requesting the script (i.e. http://server/script.php ) - whereas if I put 2, 3, or 100 calls to exec() within the same script, they will all succeed.
Edit: Quite randomly, I tried to trigger a timeout by calling the same exec("ffmpeg etc. ) line 100 times in a loop. To my surprise, the error 500 disappears. So I removed the loop and added a similar pause with a call to sleep(10): the error 500 returns, and it's instant - as if the server fails to run the script even before parsing it. Now I am totally lost.
Any hint?

Well, it seems that changing the FastCGI protocol for PHP from Named Pipe to TCP fixed the problem.
It would still be interesting, though, to understand what makes Named Pipes fail instantly every other time. Setting Named Pipe Flushing didn't help.

Related

PHP Warning: exec() unable to fork

So here is a little background info on my setup. I'm running CentOS with Apache and PHP 5.2.17. I have a website that lists products from many different retailers' websites. I have crawler scripts that run to grab products from each website. Since every website is different, each crawler script had to be customized to crawl that particular retailer's website, so basically I have one crawler per retailer. At this time I have 21 crawlers that are constantly running to gather and refresh the products from these websites. Each crawler is a PHP file; once the PHP script is done running, it checks to ensure it's the only instance of itself running, and at the very end of the script it uses exec to start itself all over again while the original instance closes. This helps protect against memory leaks, since each crawler restarts itself before it closes. However, recently I will check the crawler scripts and notice that one of them isn't running anymore, and in the error log I find the following.
PHP Warning: exec() [function.exec]: Unable to fork [nice -n 20 php -q /home/blahblah/crawler_script.php >/dev/null &]
This is what is supposed to start this particular crawler over again; however, since it was "unable to fork", it never restarted and the original instance of the crawler ended like it normally does.
Obviously it's not a permission issue, because each of these 21 crawler scripts runs this exec command every 5 or 10 minutes at the end of its run and most of the time it works as it should. This seems to happen maybe once or twice a day. It seems as though it's a limit of some sort, as I have only recently started to see this happen, ever since I added my 21st crawler. And it's not always the same crawler that gets this error; it will be any one of them, at a random time, that is unable to fork its restart exec command.
Does anyone have an idea what could be causing PHP to be unable to fork, or maybe even a better way to handle these processes so as to get around the error altogether? Is there a process limit I should look into, or something of that nature? Thanks in advance for the help!
Process limit
"Is there a process limit I should look into"
I suspect somebody (the system admin?) has set a limit on the maximum number of user processes. Could you try this?
$ ulimit -a
....
....
max user processes (-u) 16384
....
Run the preceding command from PHP. Something like:
echo system("ulimit -a");
I searched whether php.ini or httpd.conf has this limit, but I couldn't find it.
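If you want the crawler to check this for itself before it restarts, here is a minimal sketch, assuming a Linux host where ulimit -u and ps are available; the 90% threshold and the 60-second pause are arbitrary placeholders:
<?php
// Sketch: compare this user's current process count against the
// "max user processes" limit before exec'ing the next crawler.
$limit = (int) trim((string) shell_exec('ulimit -u'));
$count = (int) trim((string) shell_exec('ps -u "$(whoami)" --no-headers | wc -l'));

if ($limit > 0 && $count > 0.9 * $limit) {
    error_log("Process count $count is close to the limit $limit; delaying restart.");
    sleep(60); // arbitrary back-off before exec'ing again
}
?>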
Error Handling
"even a better way to handle these processes as to get around the error all together?"
The third parameter of exec() returns the exit code of $cmd: 0 for success, non-zero for an error code. Refer to http://php.net/function.exec .
exec($cmd, $output, $ret_val);
if ($ret_val != 0)
{
    // handle the failure here
}
else
{
    echo "success\n";
}
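If the fork failure is only a transient spike (processes or memory), another option is to retry the restart with a back-off instead of letting the crawler die. A minimal sketch, reusing the restart command from the warning above; the attempt count and sleep times are arbitrary:
<?php
// Sketch: retry the crawler restart a few times before giving up.
$cmd = 'nice -n 20 php -q /home/blahblah/crawler_script.php >/dev/null &';

for ($attempt = 1; $attempt <= 5; $attempt++) {
    $output = array();
    $ret_val = -1;
    $last = exec($cmd, $output, $ret_val);
    if ($last !== false && $ret_val === 0) {
        break;                          // restart launched successfully
    }
    error_log("Restart attempt $attempt failed (code $ret_val); retrying...");
    sleep(30 * $attempt);               // simple linear back-off
}
?>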
In my case (a large PHPUnit test suite) it would say "unable to fork" once the process hit 57% memory usage. So, one more thing to watch for: it may not be a process limit but rather memory.
I ran into the same problem and I tried this, and it worked for me:
ulimit -n 4096
The problem is often caused by the system or the process running out of available memory. Be sure that you have enough by running free -m. You will get a result like the following:
                   total   used   free  shared  buffers  cached
Mem:                7985   7722    262      19      189     803
-/+ buffers/cache:         6729   1255
Swap:                  0      0      0
The buffers/cache line is what you want to look at. Notice free memory is 1255 MB on this machine. When running your program, keep trying free -m and check the free memory to see if it falls into the low hundreds. If it does, you will need to find a way to run your program while consuming less memory.
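If you would rather have the script hold off on its own when memory gets tight, here is a minimal sketch, assuming a Linux host with /proc/meminfo; the 200 MB threshold and the 120-second pause are arbitrary placeholders:
<?php
// Sketch: read available memory from /proc/meminfo and postpone the
// next restart if it drops too low.
function availableMemoryMb() {
    $info = (string) @file_get_contents('/proc/meminfo');
    // MemAvailable exists on newer kernels; fall back to MemFree.
    if (preg_match('/^MemAvailable:\s+(\d+)\s+kB/m', $info, $m)
        || preg_match('/^MemFree:\s+(\d+)\s+kB/m', $info, $m)) {
        return (int) ($m[1] / 1024);
    }
    return -1; // unknown
}

$freeMb = availableMemoryMb();
if ($freeMb >= 0 && $freeMb < 200) {
    error_log("Only {$freeMb} MB available; postponing crawler restart.");
    sleep(120);
}
?>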
For anyone else who comes across this issue, it could be one of several problems, as outlined in this question's answers.
However, my problem was that my nginx user did not have a proper shell to execute the commands I wanted. Adding .bashrc to the nginx user's home directory fixed this.

PHP CLI script not timing out

We have a Node.js script that executes the following command:
/usr/local/bin/php -q /home/www/441.php {"id":"325241"}
This script does a lot of things; however, it does not seem to respect the time limit. The first line of this file is:
set_time_limit(1800);
Yet if we check what processes are running on the server (ps -aux | grep php), we will see a lot of these commands that have been running since last week.
Any ideas on how we can clean this up?
I found the following comment on the PHP user guide for max_execution_time
Keep in mind that for CLI SAPI max_execution_time is hardcoded to 0. So it seems to be changed by ini_set or set_time_limit but it isn't, actually. The only references I've found to this strange decision are deep in bugtracker (http://bugs.php.net/37306) and in php.ini (comments for 'max_execution_time' directive).
So it would seem that there's a bug in the CLI module that means max_execution_time is effectively ignored.
The commenter mentioned a page in the bug tracker about this at http://bugs.php.net/37306 but the tracker seems to be down.
set_time_limit only applies to the PHP part of the program. If you had a database query that takes 5 hours to finish, those 5 hours are not counted by PHP, so they fall outside the scope of the set_time_limit limitation. Having said that, it seems weird that a PHP process is still running after a week, if it is not calling another program that runs forever (which set_time_limit does not affect either).
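Since max_execution_time is 0 on the CLI anyway, and time spent in database calls or external programs is never counted, one practical option is to enforce a wall-clock limit yourself inside the script's main loop. A minimal sketch; workRemains() and doOneUnitOfWork() are hypothetical placeholders for whatever 441.php actually does, and the 1800 seconds mirror the set_time_limit value from the question:
<?php
// Sketch: self-imposed wall-clock limit for a long-running CLI script.
$deadline = time() + 1800; // 30 minutes of real time, not CPU time

while (workRemains()) {        // placeholder for the script's real loop condition
    doOneUnitOfWork();         // placeholder: one query, one batch, etc.
    if (time() > $deadline) {
        fwrite(STDERR, "Wall-clock limit reached; exiting cleanly.\n");
        exit(1);
    }
}
?>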
Also, what does the -q flag do? I can't find it in man php, php --help, or PHP's command-line options.
If you start the script from Node.js, why not kill it there too, after 1800 seconds?
var pid = startPHPProcess();
setTimeout(function() {
    killPHPProcess(pid);
}, 1800 * 1000); // setTimeout takes milliseconds, not seconds

Keep getting exit status 11 when processing my image. Please help me

I need to thumbnail images in a separate process while the server sends an HTTP response, so I'm exec'ing a PHP CLI script. When the script is run directly from the CLI, it works fine; but when I exec it, Imagick forces the exit status to 11 despite my exit(0). The latest point at which I can exit to prevent the 11 status is just before flattenImages is called.
PHP CLI source: http://codepad.org/WTHOiWw0 (designed for execution either as ordinary PHP or via CLI)
example CLI invocation: php -f lib/php/thumb_test.php -- img=om3e2a
issue history: https://stackoverflow.com/questions/5255
I tried to minimize that test case by taking out all the validation and database interaction, but the 11 exit status remained.
I finally thought to check Apache's error.log, and the 11 status was accompanied by this:
PHP Warning: Module 'imagick' already loaded in Unknown on line 0
I found the solution here:
http://www.somacon.com/p520.php
Apparently I accidentally put an extra extension="imagick.so" line in php.ini. Removing it allowed the CLI script to return status 0.
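For anyone who wants to catch this kind of misconfiguration before it bites, here is a minimal sketch that scans the loaded ini files for duplicate imagick extension lines; it is a generic check written for illustration, not something from the linked page:
<?php
// Sketch: list every ini file/line that loads imagick, so duplicates are obvious.
$files = array_filter(array_merge(
    array(php_ini_loaded_file()),
    array_map('trim', explode(',', (string) php_ini_scanned_files()))
));

$hits = array();
foreach ($files as $file) {
    if (!is_readable($file)) {
        continue;
    }
    foreach (file($file) as $n => $line) {
        if (preg_match('/^\s*extension\s*=\s*"?imagick(\.so)?"?/i', $line)) {
            $hits[] = $file . ':' . ($n + 1);
        }
    }
}

if (count($hits) > 1) {
    fwrite(STDERR, "imagick is loaded more than once:\n" . implode("\n", $hits) . "\n");
}
?>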

Shell command works on command line but not in PHP exec

I have a command that, when run directly on the command line, works as expected. It runs for over 30 seconds and does not throw any errors. When the same command is called from a PHP script via exec() (which is contained in a script called by a cron job), it throws the following error:
Maximum execution time of 30 seconds exceeded
We have a number of servers, and I have run this command on a very similar server with the exact same dataset without any issues, so I'm happy there is no script-level issue. I'm becoming more inclined to think this is related to something at the server level, either in the PHP setup or the server setup in some way, but I'm really not sure where to look. For those that are interested, both servers have a max execution time of 30 seconds.
The command itself is called like this -
from the command line as:
root#server>php -q /path/to/file.php
this works...
and via cron within a PHP file as:
exec("php -q /path/to/file.php");
This throws the max execution time error. It was always my understanding that there was no execution time limit when PHP is run from the command line.
I should point out that the script that is called calls a number of other scripts, and it is one of these scripts that is erroring. Looking at my logs, the max execution time error actually occurs before 30 seconds have even elapsed! So, less than 30 seconds after being called, a script, called by a cron script that appears to be running as CLI, is throwing a max execution error.
To check that the script is running as I expected (as CLI with no max execution time), I performed the following check:
A PHP script containing this code:
// test.php
echo exec("php test2.php");
where test2.php contains:
echo ini_get('max_execution_time');
and this script is run like this:
root#server> php test.php
// returns 0
This proves that a script called in this way is running under CLI with a max execution time of 0, which confirms my thinking; I really cannot see why this script is failing on max execution time!
It seems that your script takes too much time to execute. Try to set the time limit (http://php.net/manual/en/function.set-time-limit.php) or check this post:
Asynchronous shell exec in PHP
Does the command take over 30 seconds on the command line? Have you tried increasing the execution timeout in php.ini?
You can temporarily set the timeout by including this at the top of the script. This will not work when running in safe mode, as specified in the documentation for setting max_execution_time with ini_set().
<?php
ini_set('max_execution_time', 60); // set to be longer than 60 seconds if needed
// Rest of script...
?>
One thing of note in the docs is this:
When running PHP from the command line the default setting is 0.
What does php -v | grep cli show, when run both from the shell and from the exec command in the cron-loaded PHP file?
Does explicitly typing /usr/bin/php (modify as appropriate) make any difference?
I've actually found what the issue is (kind of). It seems that it may be a bug where PHP reports max_execution_time as exceeded when the error is actually with max_input_time, as described here.
I tried changing the exec call to php -d max_execution_time=0 -q /path/to/file.php and I got the error "Maximum execution time of 0 seconds exceeded", which makes no sense. I changed it to php -d max_input_time=0 -q /path/to/file.php and the code ran without erroring. Unfortunately, it's still running 10 minutes later. At least this proves that the issue is with max_input_time, though.
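For reference, here is a small sketch of how the child's exit code and combined output could be captured when experimenting with these -d overrides; /path/to/file.php is the placeholder path from the question, and the -d values mirror the ones tried above:
<?php
// Sketch: run the child script with ini overrides and log whatever it reports.
$cmd = 'php -d max_execution_time=0 -d max_input_time=0 -q /path/to/file.php 2>&1';
exec($cmd, $output, $ret);

if ($ret !== 0) {
    error_log("Child script exited with code $ret:\n" . implode("\n", $output));
}
?>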
I'm surprised that no one above has actually timed the complete exec call. The problem is that exec(x) takes much longer than running x on the command line. I have a very complex Perl script (with 8 levels of internal recursion) that takes about 40 seconds to execute from the command line. Using exec inside a PHP script to call the same Perl program takes about 300 seconds to execute, i.e., a factor of about 7x longer. This is such an unexpected effect that people aren't increasing their max execution time sufficiently to see their programs complete. As a result, they are mystified by the timeout. (BTW, I am running WAMP on a fast machine with nominally 8 CPUs, and the rest of my PHP program is essentially trivial, so the time difference must be entirely in the exec.)
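If you want to measure this on your own setup, here is a minimal sketch that times the exec call for comparison with running the same command in a shell; the command string is a placeholder:
<?php
// Sketch: wall-clock timing of an exec() call.
$cmd = '/path/to/your/command 2>&1';   // placeholder: the command you run by hand
$start = microtime(true);
exec($cmd, $output, $ret);
$elapsed = microtime(true) - $start;

printf("exit code %d, %.1f seconds, %d lines of output\n",
       $ret, $elapsed, count($output));
?>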
Create a wrapper.sh file as below:
export DISPLAY=:0
xhost + 2>>/var/www/err.log
/usr/bin/php "/var/www/read_sms1.php" 2>>/var/www/err.log
and put it in cron as below:
bash /var/www/wrapper.sh
My read_sms1.php contains:
$ping_ex = exec("/usr/local/bin/gnokii --getsms SM 1 end ", $exec_result, $pr);
The above solution worked fine for me on Ubuntu 12.04.

Run a PHP-script from a PHP-script without blocking

I'm building a spider which will traverse various sites and mine them for data.
Since I need to get each page separately, this could take a VERY long time (maybe 100 pages).
I've already set set_time_limit to 2 minutes per page, but it seems like Apache will kill the script after 5 minutes no matter what.
This isn't usually a problem, since this will run from cron or something similar which does not have this time limit. However, I would also like the admins to be able to start a fetch manually via an HTTP interface.
It is not important that Apache is kept alive for the full duration; I'm going to use AJAX to trigger a fetch and check back once in a while with AJAX.
My problem is how to start the fetch from within a PHP script without the fetch being terminated when the calling script dies.
Maybe I could use system('script.php &') but I'm not sure it will do the trick.
Any other ideas?
$cmd = "php myscript.php $params > /dev/null 2>/dev/null &";
# when we call this particular command, the rest of the script
# will keep executing, not waiting for a response
shell_exec($cmd);
What this does is send all the STDOUT and STDERR output to /dev/null, and your script keeps executing. Even if the 'parent' script finishes before myscript.php, myscript.php will finish executing.
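One caveat with this pattern: if $params can contain user-supplied data, escape it before interpolating it into the shell command. A minimal sketch, assuming $params is a single argument (the example value is made up):
<?php
// Sketch: escape the argument before building the background command.
$params = 'http://example.com/page?id=1';            // made-up example value
$cmd = 'php myscript.php ' . escapeshellarg($params)
     . ' > /dev/null 2>/dev/null &';
shell_exec($cmd);                                     // returns immediately
?>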
If you don't want to use exec, you can use a PHP built-in function!
ignore_user_abort(true);
This will tell the script to keep running even if the connection between the browser and the server is dropped ;)
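For the fetch script itself, ignore_user_abort() is usually paired with set_time_limit(0) so that neither a dropped connection nor PHP's timeout stops the crawl (whether Apache keeps the process alive is a separate matter). A minimal sketch; $urls and fetchAndStore() are hypothetical placeholders:
<?php
// Sketch: keep crawling even after the browser/AJAX request goes away.
ignore_user_abort(true);    // don't stop when the client disconnects
set_time_limit(0);          // lift PHP's execution time limit for this run

foreach ($urls as $url) {   // $urls: placeholder list of pages to fetch
    fetchAndStore($url);    // placeholder for the actual fetch + data mining
}
?>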
