php never ending loop - php

I need a function that executes by itself in PHP, without the help of cron. I have come up with the following code that works well for me, but as it is a never-ending loop, will it cause any problem to my server or script? If so, could you give me some suggestions or alternatives, please. Thanks.
$interval = 60; // minutes
set_time_limit(0);
while (1) {
    $now = time();
    # do the routine job, trigger a php function and what not.
    sleep($interval * 60 - (time() - $now));
}

We have used an infinite loop in a live system environment, basically to wait for incoming SMS and then process them. We found that doing it this way made the server resource intensive over time, and we had to restart the server in order to free up memory.
Another issue we encountered is that when you execute a script with an infinite loop in your browser, it will continue to run even if you hit the stop button, unless you restart Apache.
while (1) { // infinite loop
    // write code to insert text to a file
    // The file size will still continue to grow
    // even when you click 'stop' in your browser.
}
The solution is to run the PHP script as a daemon from the command line. Here's how:
nohup php myscript.php &
The & puts your process in the background.
Not only did we find this method to be less memory intensive, but you can also kill it without restarting Apache by running the following command:
kill processid
Edit: As Dagon pointed out, this is not really the true way of running PHP as a 'daemon', but using the nohup command can be considered the poor man's way of running a process as a daemon.
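If you go this route, it is also worth handling SIGTERM so that kill stops the loop cleanly. A minimal sketch of what myscript.php might contain (assuming the pcntl extension is available; this is an illustration, not the original answer's code):
<?php
// myscript.php - minimal worker loop
declare(ticks=1); // let PHP deliver pending signals between statements

$running = true;
pcntl_signal(SIGTERM, function ($signo) use (&$running) {
    $running = false; // "kill <pid>" sends SIGTERM; flag the loop to stop
});

while ($running) {
    // do the routine job, trigger a php function and what not
    sleep(60); // sleep() is interrupted early when a signal arrives
}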

You can use the time_sleep_until() function. It returns TRUE or FALSE.
$interval = 60; // minutes
set_time_limit(0);
$sleep = time() + $interval * 60; // timestamp of the next run
while (1) {
    if (time() < $sleep) {
        // the loop pauses here until the timestamp in $sleep is reached,
        // then continues with the next iteration
        time_sleep_until($sleep);
    }
    # do the routine job, trigger a php function and what not.
    $sleep += $interval * 60; // schedule the following run
}

There are many ways to create a daemon in PHP, and there have been for a very long time.
Just running something in the background isn't good enough. If it tries to print something and the console has been closed, for example, the program dies.
One method I have used on Linux is pcntl_fork() in a php-cli script, which basically splits your script into two PIDs. Have the parent process kill itself, and have the child process fork itself again. Again have the parent process kill itself. The child process will now be completely divorced and can happily hang out in the background doing whatever you want it to do.
$i = 0;
do {
    $pid = pcntl_fork();
    if ($pid == -1) {
        die("Could not fork, exiting.\n");
    } else if ($pid != 0) {
        // We are the parent
        die("Level $i forking worked, exiting.\n");
    } else {
        // We are the child.
        ++$i;
    }
} while ($i < 2);
// This is the daemon child, do your thing here.
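To fully detach the child from its controlling terminal, the usual extra step (my addition, not part of the snippet above) is to call posix_setsid() in the first child, between the two forks:
posix_setsid(); // become a session leader so the final child has no controlling terminal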
Unfortunately, this model has no way to restart itself if it crashes, or if the server is rebooted. (This can be resolved through creativity, but...)
To get the robustness of respawning, try an Upstart script (if you are on Ubuntu). Here is a tutorial - but I have not yet tried this method.

while(1) means an infinite loop. If you want to break out of it, you should use break with a condition, e.g.:
while (1) { // infinite loop
    $now = time();
    # do the routine job, trigger a php function and what not.
    sleep($interval * 60 - (time() - $now));
    if ($condition) break; // it will break when the condition is true
}

Related

How to Execute a PHP task and put on hold/sleep another task that is being executed right after until the first one is completed/half through?

I have a PHP code that does some tasks.
Let's say someone executes the code by requesting https://localhost/code.php.
I have an employee who executes the script over cURL from a separate server. What is the best way to prevent him from launching the script a second time before the already-running script has actually completed/finished?
TLDR: I need a function that makes the second invocation wait until the task/code that is running now completes (sleep for a few seconds, or until the first task completes).
TLDR2: Looking for a function [the title says it]
Any ideas? Thanks.
While a session won't work with cURL, the idea is valid -- you need to set something persistent outside of your script. So, how about writing to a local file, or writing to a database?
if (file_exists('lock.txt')) die;
file_put_contents('lock.txt', 'This file prevents script execution', LOCK_EX);
// ... your script code here ...
unlink('lock.txt');
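Note that the file_exists()/unlink() approach has a small race window and can leave a stale lock file behind if the script dies mid-run. A slightly more robust variant uses flock(); a minimal sketch (the lock file path is arbitrary):
$fp = fopen('/tmp/code.php.lock', 'c'); // 'c' creates the file if it does not exist
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    die('Previous run is still in progress.');
}
// ... your script code here ...
flock($fp, LOCK_UN); // the lock is also released automatically when the script exits
fclose($fp);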
If you know that there is only one user who will hit your server you can simply use session data.
<?php
session_start();
if (($_SESSION["NOT_FINISHED"] ?? false) === true) {
    die("Previous job is not finished yet!");
} else {
    $_SESSION["NOT_FINISHED"] = true;
    // start whatever job needs to be done here
    // ...
    // when the job is done and finished, release our busy flag
    unset($_SESSION["NOT_FINISHED"]);
}

PHP proc_open() with timeout

I want to call proc_open to execute a script in the background, and the background process should terminate after a few seconds. Basically, the script is a C/Java/Python script that compiles and runs user-submitted code, so I want to be able to terminate the process after some time.
What I want to achieve is that when the execution time of the background script exceeds, say, 3 seconds, the process is halted and stops writing to the file. Let's say I run a for loop that writes 1 million lines of some string to a file; at time >= 3 seconds the process stops, and when I retrieve the file I get something like 200k lines of the string. Then I can display the output of the file back in the browser.
I am currently using the function exec_timeout from https://blog.dubbelboer.com/2012/08/24/execute-with-timeout.html.
When I execute a command like exec_timeout("exec nohup java -cp some_dir compiled_java_file &", 3), the background process is not terminated even after it exceeds the timeout value; instead it continues to write to the file until it completes, and only then can I echo the result back to the browser. If the user submits infinitely running code, the process just hangs there until I kill it on the EC2 Linux instance.
Any idea why it is not functioning as expected? Or is there a better function available to achieve my goal? My application is developed in PHP and hosted on AWS Elastic Beanstalk.
From the first user-contributed note on the proc_terminate manual page:
As explained in http://bugs.php.net/bug.php?id=39992, proc_terminate()
leaves children of the child process running. In my application, these
children often have infinite loops, so I need a sure way to kill
processes created with proc_open(). When I call proc_terminate(), the
/bin/sh process is killed, but the child with the infinite loop is
left running.
On exec_timeout:
proc_terminate($process, 9);
should be replaced by:
$status = proc_get_status($process);
if ($status['running'] == true) { // process ran too long, kill it
    // get the parent pid of the process we want to kill
    $ppid = $status['pid'];
    // use ps to get all the children of this process, and kill them
    $pids = preg_split('/\s+/', `ps -o pid --no-heading --ppid $ppid`);
    foreach ($pids as $pid) {
        if (is_numeric($pid)) {
            echo "Killing $pid\n";
            posix_kill($pid, 9); // 9 is the SIGKILL signal
        }
    }
    proc_close($process);
}
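For context, a timeout around proc_open() can be built as a simple polling loop like the following; a simplified sketch rather than the blog's exact code (the output file and the 3-second timeout are just for illustration):
// 'exec' replaces the wrapping /bin/sh, so $status['pid'] is the java process itself
$cmd = 'exec java -cp some_dir compiled_java_file';
$timeout = 3; // seconds
$descriptors = array(1 => array('file', '/tmp/output.txt', 'w'));

$process = proc_open($cmd, $descriptors, $pipes);
$start = time();
do {
    usleep(100000); // poll every 0.1 s
    $status = proc_get_status($process);
} while ($status['running'] && time() - $start < $timeout);

if ($status['running']) {
    // Ran too long: kill it (and its children, as shown in the snippet above)
    posix_kill($status['pid'], 9);
}
proc_close($process);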

php some forked processes keep running

I have a php script that divides a task into multiple parts and runs each part in a separate child process. The code looks like this:
foreach ($users as $k => $arr) {
    if (($pid = pcntl_fork()) === -1) continue;
    if ($pid) {
        pcntl_wait($status, WNOHANG);
        continue;
    }
    ob_start();
    posix_setsid();
    dbConnect();
    // do stuff to data
    exit();
}
I'm running this script using crontab on a Debian server, but the problem is some processes keep running and reserve memory. After a while the server's memory is flooded.
I need to find a way to make sure all processes finish correctly.
I think the issue is the use of WNOHANG in the pcntl_wait call. This means the pcntl_wait call returns before the child process exits - which you want, in order to be able to fork the other child processes concurrently. But it has the side effect that the main parent finishes before some of the children, which are then never reaped. This link http://www.devshed.com/c/a/PHP/Managing-Standalone-Scripts-in-PHP/2/ describes how to loop using pcntl_wait with WNOHANG until all children complete, as sketched below.
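In outline, the reaping loop described there looks something like this; a sketch of the pattern rather than the article's exact code:
// In the parent, after the foreach that forks all the children:
do {
    $pid = pcntl_waitpid(-1, $status, WNOHANG);
    if ($pid == 0) {
        // Children are still running; wait a little before polling again
        usleep(100000);
    }
} while ($pid != -1); // -1 means there are no children left to reap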
The stuff you do to the data takes too long, or forever. You need to debug the operations you execute.

When does a PHP <5.3.0 daemon script receive signals?

I've got a PHP script in the works that is a job worker; its main task is to check a database table for new jobs, and if there are any, to act on them. But jobs will be coming in in bursts, with long gaps in between, so I devised a sleep cycle like:
while (true) {
    if ($jobs = get_new_jobs()) {
        // Act upon the jobs
    } else {
        // No new jobs now
        sleep(30);
    }
}
Good, but in some cases that means there might be a 30 second lag before a new job is acted upon. Since this is a daemon script, I figured I'd try the pcntl_signal hook to catch a SIGUSR1 signal to nudge the script to wake up, like:
$_isAwake = true;

function user_sig($signo) {
    global $_isAwake;
    daemon_log("Caught SIGUSR1");
    $_isAwake = true;
}

pcntl_signal(SIGUSR1, 'user_sig');

while (true) {
    if ($jobs = get_new_jobs()) {
        // Act upon the jobs
    } else {
        // No new jobs now
        daemon_log("No new jobs, sleeping...");
        $_isAwake = false;
        $ts = time();
        while (time() < $ts + 30) {
            sleep(1);
            if ($_isAwake) break; // Did a signal happen while we were sleeping? If so, stop sleeping
        }
        $_isAwake = true;
    }
}
I broke the sleep(30) up into smaller sleep bits, in case a signal doesn't interrupt a sleep() command, thinking that this would cause at most a one-second delay. But in the log file I'm seeing that the SIGUSR1 isn't being caught until after the full 30 seconds have passed (and maybe the outer while loop resets).
I found the pcntl_signal_dispatch command, but that's only for PHP 5.3 and higher. If I were on that version, I could stick a call to it before the if ($_isAwake) check, but as it currently stands I'm on 5.2.13.
In what sort of situations is the signal queue processed in PHP versions without a means to explicitly trigger the queue parsing? Could I put some other otherwise-useless command in that sleep loop that would trigger a signal queue parse within there?
Fixed my own problem: the answer is the "ticks" declaration. I had done declare(ticks=1); as part of the daemon process startup, but it didn't seem to carry over to the main script (since it was inside a function, in an include file). Adding a declare(ticks=1) line before the while(true) loop causes signals to come through immediately (i.e. the sleep(1) command causes a tick, so after waking up from sleep, signals are processed).
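In code, the fix amounts to placing the declaration in the same file/scope as the loop; a minimal sketch of the arrangement that worked:
declare(ticks=1); // a tick fires between statements, giving PHP a chance to deliver pending signals
pcntl_signal(SIGUSR1, 'user_sig');
while (true) {
    // ... check for jobs, otherwise run the one-second sleep loop shown above ...
}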

Speeding up a PHP App

I have a list of data that needs to be processed. The way it works right now is this:
A user clicks a process button.
The PHP code takes the first item that needs to be processed, takes 15-25 secs to process it, moves on to the next item, and so on.
This takes way too long. What I'd like instead is that:
The user clicks the process button.
A PHP script takes the first item and starts to process it.
Simultaneously another instance of the script takes the next item and processes it.
And so on, so around 5-6 of the items are being processed simultaneously and we get 6 items processed in 15-25 secs instead of just one.
Is something like this possible?
I was thinking of using cron to launch an instance of the script every second. All items that need to be processed will be flagged as such in the MySQL database, so whenever an instance is launched through cron, it will simply take the next item flagged to be processed and remove the flag.
Thoughts?
Edit: To clarify, each 'item' is stored as a separate row in a MySQL database table. Whenever processing starts on an item, it is flagged as being processed in the DB, so each new instance will simply grab the next row that is not being processed and process it. Hence I don't have to supply the items as command line arguments.
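Given that clarification, the key detail for the cron-launched instances is claiming a row atomically, so that two instances started a second apart can never grab the same item. A rough sketch of one way to do that with PDO and an InnoDB table (the items table, its columns, and the DSN are hypothetical):
// Hypothetical schema: items(id INT PRIMARY KEY, payload TEXT, status VARCHAR(16))
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$pdo->beginTransaction();
// Lock one unclaimed row so a concurrently running instance cannot pick the same id
$row = $pdo->query("SELECT id, payload FROM items
                    WHERE status = 'pending'
                    ORDER BY id LIMIT 1 FOR UPDATE")->fetch(PDO::FETCH_ASSOC);
if ($row) {
    $pdo->prepare("UPDATE items SET status = 'processing' WHERE id = ?")
        ->execute(array($row['id']));
}
$pdo->commit();

if ($row) {
    // ... the 15-25 seconds of processing for this item ...
    $pdo->prepare("UPDATE items SET status = 'done' WHERE id = ?")
        ->execute(array($row['id']));
}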
Here's one solution; not the greatest, but it will work fine on Linux:
Split the processing PHP into a separate CLI script in which:
The command line inputs include `$id` and `$item`
The script writes its PID to a file in `/tmp/$id.$item.pid`
The script echos results as XML or something that can be read into PHP to stdout
When finished the script deletes the `/tmp/$id.$item.pid` file
Your master script (presumably on your webserver) would do:
`exec("nohup php myprocessing.php $id $item > /tmp/$id.$item.xml");` for each item
Poll the `/tmp/$id.$item.pid` files until all are deleted (sleep/check poll is enough)
If they are never deleted kill all the processing scripts and report failure
If successful, read from `/tmp/$id.$item.xml` to format/output to the user
Delete the XML files if you don't want to cache for later use
A backgrounded nohup started application will run independent of the script that started it.
This interested me sufficiently that I decided to write a POC.
test.php
<?php
$dir = realpath(dirname(__FILE__));
$start = time();
// Time in seconds after which we give up and kill everything
$timeout = 25;
// The unique identifier for the request
$id = uniqid();
// Our "items" which would be supplied by the user
$items = array("foo", "bar", "0xdeadbeef");

// We exec a nohup command that is backgrounded and returns immediately
foreach ($items as $item) {
    exec("nohup php proc.php $id $item > $dir/proc.$id.$item.out &");
}

echo "<pre>";
// Run until timeout or all processing has finished
while (time() - $start < $timeout) {
    echo (time() - $start), " seconds\n";
    clearstatcache(); // Required since PHP caches file_exists() results
    $running = array();
    foreach ($items as $item) {
        // If the pid file still exists the process is still running
        if (file_exists("$dir/proc.$id.$item.pid")) {
            $running[] = $item;
        }
    }
    if (empty($running)) break;
    echo implode(',', $running), " running\n";
    flush();
    sleep(1);
}

// Clean up if we timed out
if (!empty($running)) {
    clearstatcache();
    foreach ($items as $item) {
        // Kill anything still running (i.e. that has a pid file)
        if (file_exists("$dir/proc.$id.$item.pid")
            && $pid = file_get_contents("$dir/proc.$id.$item.pid")) {
            posix_kill($pid, 9);
            unlink("$dir/proc.$id.$item.pid");
            // Would want to log this in the real world
            echo "Failed to process: ", $item, " pid ", $pid, "\n";
        }
        // delete the useless data
        unlink("$dir/proc.$id.$item.out");
    }
} else {
    echo "Successfully processed all items in ", time() - $start, " seconds.\n";
    foreach ($items as $item) {
        // Grab the processed data and delete the file
        echo file_get_contents("$dir/proc.$id.$item.out");
        unlink("$dir/proc.$id.$item.out");
    }
}
echo "</pre>";
?>
proc.php
<?php
$dir = realpath(dirname(__FILE__));
$id = $argv[1];
$item = $argv[2];

// Write out our pid file
file_put_contents("$dir/proc.$id.$item.pid", posix_getpid());

for ($i = 0; $i < 80; ++$i) {
    echo $item, ':', $i, "\n";
    usleep(250000);
}

// Remove our pid file to say we're done processing
unlink("$dir/proc.$id.$item.pid");
?>
Put test.php and proc.php in the same folder on your server, load test.php, and enjoy.
You will of course need nohup (unix) and PHP cli to get this to work.
Lots of fun, I may find a use for it later.
Use an external work queue like Beanstalkd which your PHP script writes a bunch of jobs to. Have as many worker processes pulling jobs from beanstalkd and processing them as fast as possible; you can spin up as many workers as you have memory/CPU for. Your job body should contain as little information as possible, maybe just some IDs which you hit the DB with. beanstalkd has a slew of client APIs and itself has a very basic API; think memcached.
We use beanstalkd to process all of our background jobs, and I love it. It's easy to use and very fast.
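For reference, a producer/worker pair might look roughly like this with the Pheanstalk client (the answer doesn't name a specific client; Pheanstalk is one option, its constructor differs between versions, and the tube and field names here are made up):
// Producer (e.g. inside the web request): push only an ID onto the queue
$pheanstalk = new Pheanstalk\Pheanstalk('127.0.0.1');
$pheanstalk->useTube('items')->put(json_encode(array('item_id' => 42)));

// Worker (a long-running CLI process; run as many of these as you like)
$pheanstalk = new Pheanstalk\Pheanstalk('127.0.0.1');
$pheanstalk->watch('items');
while (true) {
    $job  = $pheanstalk->reserve();               // blocks until a job is available
    $data = json_decode($job->getData(), true);
    // ... load item $data['item_id'] from the DB and process it ...
    $pheanstalk->delete($job);                    // remove the job once it is done
}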
There is no multithreading in PHP, however you can use fork.
php.net:pcntl-fork
Or you could execute a system() command and start another process which is multithreaded.
Can you implement threading in JavaScript on the client side? It seems to me I've seen a JavaScript library (from Google, perhaps?) that implements it. Google it and I'm sure you'll find something. I've never done it, but I know it's possible. Anyway, your client-side JavaScript could activate (via Ajax) a PHP script once for each item, in separate threads. That might be easier than trying to do it all on the server side.
-don
If you are running a high traffic PHP server you are INSANE if you do not use Alternative PHP Cache: http://php.net/manual/en/book.apc.php . You do not have to make code modifications to run APC.
Another useful technique that can work along with APC is using the Smarty template system which allows you to cache output so that pages do not have to be rebuilt.
To solve this problem, I've used two different products: Gearman and RabbitMQ.
The benefit of putting your jobs into some sort of queuing software like Gearman or Rabbit is that if you have multiple machines, they can all participate in processing items off the queue(s).
Gearman is easier to set up, so I'd suggest poking around with it a bit first. If you find you need something more heavy-duty with queue robustness, look into RabbitMQ.
http://www.danga.com/gearman/
http://pear.php.net/package/Net_Gearman (PEAR library)
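As a point of reference, with the pecl gearman extension the client and worker sides look roughly like this (the PEAR Net_Gearman library linked above has a different wrapper API; the function name and payload here are made up):
// Client (e.g. triggered by the "process" button): queue one background job per item
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->doBackground('process_item', json_encode(array('item_id' => 42)));

// Worker (run 5-6 of these in parallel)
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('process_item', function (GearmanJob $job) {
    $data = json_decode($job->workload(), true);
    // ... the 15-25 second processing for this one item goes here ...
});
while ($worker->work());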
You can use pcntl_fork() and family to fork a process - however you may need something like IPC to communicate back to the parent process that the child process (the one you forked) is finished.
You could have them write to shared memory, like via memcache or a DB.
You could also have the child process write the completed data to a file that the parent process keeps checking - as each child process completes, its file is created/written to/updated, and the parent process can grab the files one at a time and throw them back to the callee/client.
The parent's job is to control the queue, to make sure the same data isn't processed twice, and also to sanity check the children (better to kill that runaway process and start over... etc.) - a rough sketch of the file-based variant follows.
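A minimal sketch of that file-based approach (the item list, result file paths, and the process_item() helper are all hypothetical):
// process_item() stands in for whatever per-item work you actually do
function process_item($item) {
    return "result for item $item\n";
}

$items = array(1, 2, 3, 4, 5, 6);
$children = array();

foreach ($items as $item) {
    $pid = pcntl_fork();
    if ($pid == -1) {
        die("fork failed\n");
    } elseif ($pid == 0) {
        // Child: process one item, write the result to a file the parent will read, then exit
        file_put_contents("/tmp/result.$item", process_item($item));
        exit(0);
    }
    $children[] = $pid; // Parent: remember the child's pid
}

// Parent: wait for every child, then collect the result files one at a time
foreach ($children as $pid) {
    pcntl_waitpid($pid, $status);
}
foreach ($items as $item) {
    echo file_get_contents("/tmp/result.$item");
    unlink("/tmp/result.$item");
}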
Something else to keep in mind - on Windows platforms you are going to be severely limited - I don't even think you have access to the pcntl_* functions unless you compiled PHP with support for them.
Also, can you cache the data once it has been processed, or is it unique data every time? That would surely speed things up...?
