Currently I use sleep() in my PHP script to pause execution for 15 minutes, to give other tasks some buffer to finish before the script continues.
<?php
// ... do something here ...
file_put_contents("logs/pre_sleep.log", "$id sleep in $datetime", FILE_APPEND);
sleep(900);
// ... continue doing something here ...
file_put_contents("logs/done.log", "$id done in $datetime", FILE_APPEND);
?>
When the PHP script is called from a client, the request is logged in "pre_sleep.log", and just before the script finishes it is logged in "done.log". However, based on these logs I notice that some executions do not continue even after the 15 minutes have passed: the process shows up in "pre_sleep.log" but the same process never appears in "done.log".
Is it possible that the process is being killed by something else within those 15 minutes? It seems to happen quite randomly, because most processes are logged in both files but some appear in only one.
It is possible the process gets shut down after the sleep (or even before the sleep).
My best guess is that your PHP timeout is in play. Put the following at the top of your script file:
set_time_limit(0);
// or
ini_set('max_execution_time', 0);
This will allow the script to run forever.
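As a minimal sketch (reusing the logging lines and 15-minute sleep from the question, and assuming the host allows overriding the execution time limit), the script would then start like this:
<?php
// Disable PHP's execution time limit for this request
// (assumes the host does not forbid overriding max_execution_time).
set_time_limit(0);

// ... do something here ...
file_put_contents("logs/pre_sleep.log", "$id sleep in $datetime", FILE_APPEND);
sleep(900); // 15-minute pause
// ... continue doing something here ...
file_put_contents("logs/done.log", "$id done in $datetime", FILE_APPEND);
?>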
Related
I want to call proc_open to execute a script in the background and have the background process terminate after a few seconds. Basically, the script is a C/Java/Python program that compiles and runs user-submitted code, so I want to be able to terminate the process after some time.
What I want to achieve is that when the execution time of the background script exceeds, say, 3 seconds, the process is halted and it stops writing to the file. Let's say I run a for loop to write 1 million lines of some string to a file, and at time >= 3 seconds the process stops. When I retrieve the file, I will get something like 200k lines of the string. Then I can display the file's output back to the browser.
I am currently using the function exec_timeout from https://blog.dubbelboer.com/2012/08/24/execute-with-timeout.html.
When I execute a command such as exec_timeout("exec nohup java -cp some_dir compiled_java_file &", 3), the background process is not terminated even after it exceeds the timeout value; instead it keeps writing to the file until it completes, and only then can I echo the result back to the browser. If the user submits infinitely running code, the process just hangs there until I kill it on the EC2 Linux instance.
Any idea why it is not functioning as expected? Or is there a better function available to achieve my goal? My application is developed in PHP and hosted on AWS Elastic Beanstalk.
On the proc_terminate manual page, the first user-contributed note says:
As explained in http://bugs.php.net/bug.php?id=39992, proc_terminate() leaves children of the child process running. In my application, these children often have infinite loops, so I need a sure way to kill processes created with proc_open(). When I call proc_terminate(), the /bin/sh process is killed, but the child with the infinite loop is left running.
In exec_timeout, the line:
proc_terminate($process, 9);
should be replaced by:
$status = proc_get_status($process);
if ($status['running'] == true) { // process ran too long, kill it
    // get the parent pid of the process we want to kill
    $ppid = $status['pid'];
    // use ps to get all the children of this process, and kill them
    $pids = preg_split('/\s+/', `ps -o pid --no-heading --ppid $ppid`);
    foreach ($pids as $pid) {
        if (is_numeric($pid)) {
            echo "Killing $pid\n";
            posix_kill($pid, 9); // 9 is the SIGKILL signal
        }
    }
    proc_close($process);
}
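Put together, a self-contained sketch of that approach might look like the following (run_with_timeout and the example call are illustrative only, not part of the original exec_timeout; it requires the posix extension and a ps that supports --ppid):
<?php
// Run a shell command with a hard timeout; on expiry, kill the children
// spawned under the /bin/sh wrapper, then close the wrapper itself.
function run_with_timeout($cmd, $timeout)
{
    $process = proc_open($cmd, array(), $pipes);
    $start = microtime(true);

    while (microtime(true) - $start < $timeout) {
        $status = proc_get_status($process);
        if (!$status['running']) {
            proc_close($process);
            return; // finished on its own
        }
        usleep(100000); // poll every 0.1 s
    }

    // Timed out: kill the children of the wrapper process.
    $status = proc_get_status($process);
    if ($status['running'] == true) {
        $ppid = $status['pid'];
        $pids = preg_split('/\s+/', `ps -o pid --no-heading --ppid $ppid`);
        foreach ($pids as $pid) {
            if (is_numeric($pid)) {
                posix_kill($pid, 9); // SIGKILL
            }
        }
    }
    proc_close($process);
}

// Example call with the command from the question (without nohup/&,
// so the Java process stays a direct child of the shell wrapper):
run_with_timeout("java -cp some_dir compiled_java_file", 3);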
I have a CLI script that runs for days. It processes batches, each of which takes around 7 minutes. Sometimes I need to stop the script, but only once a batch has been processed; I have put a 2-second sleep in at that point. Is there any way I can catch input at any stage of the script's execution, so that if that input = x, the script stops at the end of the next batch, and otherwise continues?
I have come across:
$handle = fopen ("php://stdin","r");
$line = fgets($handle);
but this requires input (fgets blocks until a line is entered).
I don't think you're going to get it the way you are thinking. You can catch stdout, but I don't think it will do you much good in terms of stopping the script. If I were using this on the CLI and it ran all the time, but I wanted to pause it for a certain amount of time, there are many things you could do; this is probably how I would tackle it as a "quick fix".
Restructure your PHP code a tiny bit and put the batch processing inside a function, if it isn't already. Then create an infinite loop using while, and have it check for the existence of a pause file before starting each batch. If the file exists, don't start the next batch, which effectively pauses the script. If it doesn't exist, proceed as normal.
So, for example, your PHP file could look like this:
<?php
// path to pause file
$filename = "/root/pause";

while (1) {
    if (!file_exists($filename)) {
        batch();
    }
}

function batch() {
    // batch processing
    echo "batching\n";
    // fake processing using a usleep pause
    usleep(3000000);
}
?>
Then, when you want to pause the script, just create the pause file; once the current batch completes, the script will stop starting new ones.
To create the file on Linux, cd to the directory used in the script and run the command
touch pause
or you can use the full path, like touch /path/to/pause. Just make sure it matches the path in your script. When you are done, delete the file with rm -f pause and the script will resume processing batches.
Note that while it's paused it is just looping without processing, which can cause a small jump in CPU usage; however, it should be fine.
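If that idle spinning is a concern, one small tweak (not part of the original example) is to sleep briefly whenever the pause file is present:
while (1) {
    if (!file_exists($filename)) {
        batch();
    } else {
        // Paused: wait a second before checking again instead of
        // spinning at full speed.
        sleep(1);
    }
}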
Longer term, you can look at this little example to get you going in that direction:
http://www.phpmysqlitutorials.com/2013/05/08/php-standard-input-and-loops-on-the-command-line/
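That article is about reading standard input inside a command-line loop. As a rough sketch of the idea applied to your question (reusing the batch() function from above and assuming "x" is the stop command), stream_select() lets you check stdin without blocking:
<?php
$stdin = fopen("php://stdin", "r");
stream_set_blocking($stdin, false);

while (1) {
    batch();

    // Non-blocking check: is there a line waiting on stdin?
    $read = array($stdin);
    $write = $except = array();
    if (stream_select($read, $write, $except, 0) > 0) {
        $line = trim(fgets($stdin));
        if ($line === "x") {
            // Stop cleanly right after the batch that just finished.
            break;
        }
    }
}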
I'm trying to create a browser-started self-calling/repeating PHP script on Windows with PHP (currently 5.3.24 but soon will be latest). It will act as a daemon to monitor changes in a database (every few seconds, so cron/schedule is out) and then call other PHP scripts to perform work when changes are found. For the purposes of this question please ignore the fact that I'd be better off doing this in C# or some other language :)
To keep things simple I started out by trying to use popen to run a second PHP script in the background...
// BatchMonitor.php
SaveToMonitorTable(1); // save 1st test entry to see if the script reached this point
$Command = '"" "C:\Program Files (x86)\PHP\v5.3\php.exe" C:\inetpub\wwwroot\Test.php --Instance=' . $Data->Instance;
pclose(popen("start /B $Command", "r"));
SaveToMonitorTable(2); // save 2nd test entry to see if the script reached this point
exit();
// Test.php
SaveToTestTable(1);
Sleep(10);
SaveToTestTable(2);
exit();
If I run BatchMonitor.php in the browser it works fine. As expected, it saves 1 to the monitor table and calls Test.php, which saves 1 to the test table; the original BatchMonitor.php continues without waiting for a response and saves 2 to the monitor table before exiting; 10 seconds later the test page saves 2 to the test table before exiting. The second script starts fine, the first script does not wait for a reply, and all parameters are correctly passed between scripts. With everything working as intended, I then changed the system to work as a repeating loop by calling itself (with a delay) instead of another script...
// BatchMonitor.php
SaveToMonitorTable(1); // save 1st test entry to see if the script reached this point
$Command = '"" "C:\Program Files (x86)\PHP\v5.3\php.exe" C:\inetpub\wwwroot\BatchMonitor.php --Instance=' . $Data->Instance;
pclose(popen("start /B $Command", "r"));
SaveToMonitorTable(2); // save 2nd test entry to see if the script reached this point
exit();
If I run BatchMonitor.php in the browser it runs once and that is it. It will save 1 to the database, wait 10 seconds and then save 2 to the database before exiting. The page returns successfully with no script or PHP errors but it doesn't repeat as it should.
Both BatchMonitor.php and Test.php use line-for-line identical functions to get the parameters, and both files run correctly and identically on the first iteration. If I use exec instead of popen, the page loops correctly with all logic working as expected (with the one obvious flaw of creating a never-ending chain of scripts waiting for response values that will never come).
Am I missing something obvious? Does popen have some sort of secret rule that prevents a page/process from opening duplicates of itself? Are there any alternatives to using popen or exec? I read about WScript.Shell but it might be a while before I can schedule that to get enabled so for now it's not an option and I'm hoping there is something more standard that I can use.
I don't feel like this should be your actual answer, but why do you abandon scheduled tasks/cron jobs just because you want something done every X seconds? Having the script minute.php call 5seconds.php with, of course, 5-second intervals in between would create a task repeated every 5 seconds, right?
Strangely enough, you are kind of using the same sort of mechanism from your browser already.
My only concern would be to take the processing time into account and create a safe script which ensures that no more than one 5seconds.php can run at any given time.
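For that last concern, one common guard (sketched here; the lock file path is just an example) is flock(), so a second copy of 5seconds.php exits immediately if one is already running:
<?php
// Single-instance guard: refuse to run if another copy holds the lock.
$lock = fopen("/tmp/5seconds.lock", "c");
if (!flock($lock, LOCK_EX | LOCK_NB)) {
    exit("Already running\n");
}

// ... do the 5-second work here ...

// Release the lock when done (it is also released when the script ends).
flock($lock, LOCK_UN);
fclose($lock);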
I have this PHP code:
<?php
include_once("connect_to_mysql.php");

$max = 300;
while ($max--)
{
    sleep(1);
    doMyThings();
}
?>
It is supposed to repeat a MySQL query 300 times with a gap of 1 second between each run. But the problem is that after a minute or so the browser shows this message: "No Data Received. Unable to load the webpage because the server sent no data."
The problem is the following: your code will run for at least 300 seconds (without even counting the time needed by doMyThings()). Most PHP environments set the default maximum script running time to about 60 seconds; when that is reached, the script is stopped and nothing is printed out.
The next issue is that (even if the execution time limit is raised enough to allow long-running scripts) the script has to run to completion (that is, ~300 seconds) before its data is written to the output stream. Until then, you won't see any output.
To circumvent those two problems, see this code:
<?php
// If allowed, unlimited script execution time
set_time_limit(0);
// End output buffering
ob_end_flush();

include_once("connect_to_mysql.php");

$max = 300;

// IE and Safari workaround:
// they will only display the webpage if it's completely loaded or
// at least 5000 bytes have been "printed".
for ($i = 0; $i < 5000; $i++)
{
    echo ' ';
}

while ($max > 0)
{
    sleep(1);
    doMyThings();
    $max--;

    // Manually flush the output buffers
    ob_flush();
    flush();
}
?>
Maybe this post is also of interest to you: Outputting exec() ping result progressively
The browser will not wait a whole 5 minutes for you to complete your queries.
You need to find a different solution. Consider executing the PHP script in CLI.
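One way to act on that suggestion (my interpretation, not spelled out in the original answer) is to move the 300-iteration loop into its own CLI script and have the web request just start it in the background, so the browser gets an immediate response:
<?php
// start_job.php - web-facing script (both file names here are made up).
// long_loop.php would contain the while($max--) { sleep(1); doMyThings(); } loop.
exec("nohup php /path/to/long_loop.php > /dev/null 2>&1 &");
echo "Job started";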
It seems that you hit a timeout while executing doMyThings() 300 times.
You can try set_time_limit(0):
Set the number of seconds a script is allowed to run. If this is reached, the script returns a fatal error. The default limit is 30 seconds or, if it exists, the max_execution_time value defined in the php.ini.
When you execute long-running PHP code on the server side, you need to change the max_execution_time directive in php.ini. But the browser will not wait as long as you want, so you need to use an asynchronous technique such as AJAX.
I have an HTML form that submits to a PHP page which initiates a script. The script can take anywhere from 3 seconds to 30 seconds to run - the user doesn't need to be around for this script to complete.
Is it possible to initiate a PHP script, immediately print "Thanks" to the user (or whatever) and let them go on their merry way while your script continues to work?
In my particular case, I am sending form-data to a php script that then posts the data to numerous other locations. Waiting for all of the posts to succeed is not in my interest at the moment. I would just like to let the script run, allow the user to go and do whatever else they like, and that's it.
Place your long-running work in another PHP script, for example
background.php:
sleep(10);
file_put_contents('foo.txt',mktime());
foreground.php:
$unused_but_required = array();
proc_close(proc_open("php background.php &", array(), $unused_but_required));
echo("Done");
You'll see "Done" immediately, and the file will get written 10 seconds later.
I think proc_close works because we've given proc_open no pipes and no file descriptors.
In the script you can set:
<?php
ignore_user_abort(true);
That way the script will not terminate when the user leaves the page. However, be very careful when combining this with
set_time_limit(0);
since in that case the script could execute forever.
You can use set_time_limit and ignore_user_abort, but generally speaking, I would recommend that you put the job in a queue and use an asynchronous script to process it. It's a much simpler and more durable design.
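A minimal sketch of that queue idea (the table name, columns, connection details, and worker file are all assumptions for illustration): the web request only records the job and answers immediately, and a separate CLI worker does the slow posting.
<?php
// enqueue.php - called by the form handler; returns to the user right away.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare("INSERT INTO jobs (payload, status) VALUES (?, 'pending')");
$stmt->execute(array(json_encode($_POST)));
echo "Thanks";

<?php
// worker.php - run from cron or as a long-lived CLI process.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
while (true) {
    $job = $pdo->query("SELECT * FROM jobs WHERE status = 'pending' LIMIT 1")->fetch();
    if ($job) {
        // ... post the form data to the other locations here ...
        $pdo->prepare("UPDATE jobs SET status = 'done' WHERE id = ?")
            ->execute(array($job['id']));
    } else {
        sleep(1); // nothing to do, wait a bit
    }
}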
You could try flush() and the related output-buffering functions to immediately send whatever is in the buffer to the browser.
There's an API wrapper around pcntl_fork() called php_fork.
But also, this question was on the Daily WTF... don't pound a nail with a glass bottle.
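For reference, the underlying pcntl_fork() call looks roughly like this. Note that the pcntl extension is normally only available to CLI PHP, not under a typical web server SAPI, so treat this as a sketch of the mechanism rather than a drop-in solution (do_slow_posts() is a placeholder for the actual posting logic):
<?php
$pid = pcntl_fork();
if ($pid == -1) {
    die("Could not fork");
} elseif ($pid) {
    // Parent: answer the user immediately.
    echo "Thanks";
} else {
    // Child: carry on with the slow work (3-30 seconds in the question).
    do_slow_posts(); // placeholder for the actual posting code
}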
I ended up with the following.
<?php
// Ignore User-Requests to Abort
ignore_user_abort(true);
// Maximum Execution Time In Seconds
set_time_limit(30);
header("Content-Length: 0");
flush();
/*
Loooooooong process
*/
?>