So this is as much a theoretical question as a language-specific one, but consider this:
I need PHP to execute a rather system-intensive process in the background (using PHP's exec()), and then kill that process when the user leaves that specific page.
I quickly realized that a dead man's switch would be an easy way to implement this, since I'm not making use of any session variables or other server-side state. It could end up looking like:
if ($_SERVER['REQUEST_URI'] !== 'page_with_session.php') {
    // Instead of 'session_destroy();' this would be used to kill said process
}
In any case, the idea is a while loop in PHP that resets a timer in a Python script (or re-calls said script) every 15 seconds, so that the script never reaches the end of its countdown and kills the process. When the user leaves the page, the resets stop arriving, the countdown runs out, and the process is killed.
Are there any gaping holes in this idea? If not, how would the implementation in PHP/JS look? The order I see it working in would be:
Page is hit by user
<?php exec('killer.py') ?>
killer.py:
Listen for 20 seconds - If no response...
os.system('pkill process')
<?php while(true){sleep(15); exec('killer.py no_wait_dont');} ?>
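A minimal sketch of what killer.py's countdown logic could look like. The reset file path, the exact timeout, and the idea that the PHP loop "resets" the switch by touching a file are all assumptions for illustration; the question doesn't specify how the reset is communicated.

```python
import os
import time

RESET_FILE = "/tmp/keepalive.touch"  # hypothetical: the PHP loop would touch this every 15 s
TIMEOUT = 20  # seconds without a reset before the process is killed

def expired(last_reset: float, now: float, timeout: float = TIMEOUT) -> bool:
    """True when no reset has arrived within `timeout` seconds."""
    return now - last_reset > timeout

def watch(pid: int) -> None:
    """Poll the reset file's mtime; kill `pid` once the switch expires."""
    while True:
        try:
            last_reset = os.path.getmtime(RESET_FILE)
        except FileNotFoundError:
            last_reset = 0.0  # never reset
        if expired(last_reset, time.time()):
            os.kill(pid, 15)  # SIGTERM by pid, instead of pkill by name
            return
        time.sleep(1)
```

Killing by pid rather than pkill-by-name avoids taking down an unrelated process that happens to share the name.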
Any thoughts you guys have would be greatly appreciated!
Mason
JavaScript is a lot easier, and about as safe (that is, not much).
Just write a JavaScript ping function that, once every 10 seconds, posts something to ping.php (via AJAX). ping.php would record when the last ping was received in the user session (say, in $_SESSION['last_ping']).
You can then check for user activity from other pages by comparing $_SESSION['last_ping'] to the current time. You would have to pepper your runtime-intensive pages with this check, but it would certainly work.
Implement a heartbeat in JS. If it stops for more than a certain time then kill the subprocess:
js sends a request
php/python start the subprocess in a background and return pid to js
js pings php/python with given pid
php/python signals the subprocess corresponding to pid via IPC, e.g. by sending SIGUSR1
if the subprocess doesn't receive a signal in time, it dies, i.e. there is a constant self-destruct countdown in the subprocess
If you can't add the self-destruct mechanism to the subprocess then you need a watcher process that would receive signals and kill the subprocess. It is less reliable because you need to make sure that the watcher process is running.
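A sketch of the self-destruct countdown in Python. The 30-second timeout and the way the work loop polls the deadline are illustrative assumptions; the steps above don't fix those details.

```python
import signal
import sys
import time

class Countdown:
    """Self-destruct countdown: expires unless heartbeat() arrives in time."""
    def __init__(self, timeout: float, now: float):
        self.timeout = timeout
        self.expires_at = now + timeout

    def heartbeat(self, now: float) -> None:
        # Each heartbeat restarts the clock.
        self.expires_at = now + self.timeout

    def expired(self, now: float) -> bool:
        return now >= self.expires_at

def main() -> None:
    # Wire the countdown to SIGUSR1; php/python sends that signal to our pid.
    countdown = Countdown(timeout=30.0, now=time.time())
    signal.signal(signal.SIGUSR1,
                  lambda signum, frame: countdown.heartbeat(time.time()))
    while True:  # the subprocess's real work loop would go here
        time.sleep(1)
        if countdown.expired(time.time()):
            sys.exit(1)  # self-destruct: no heartbeat received in time
```

main() would be the subprocess's entry point; the separate Countdown class just keeps the timing logic independent of the signal plumbing.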
I have a page streaming mjpegs. I used ffmpeg to generate the mjpegs, and it uses enough CPU that I would like to only have it run when someone is actively viewing the page. My thought was to start it with exec() however, it keeps running when I leave the page, and actually starts multiple instances if I then go back to the page.
Is there a way to kill a process when someone is no longer on a page? My only thought was to use AJAX to send a keep-alive signal to another program on the server, which would kill the process if the signal isn't received for more than 10 seconds; however, it seems like there must be a less convoluted method of doing this.
There's no way to know from PHP when a user leaves the page unless you make another request from JavaScript, so your AJAX approach is a good idea. You can also use the JavaScript onbeforeunload event to make a request when the user unloads the page, and terminate the process then.
I am developing a web application using PHP and MySQL along with AJAX. One of my scripts fetches data from a MySQL table. But what if I want to cancel execution of that PHP script in the middle of its run? To be clearer: if an AJAX call takes, say, 30 minutes to complete due to a heavy loop, I want to be able to exit from that call before completion by clicking some button. Otherwise my script runs well, except that it hangs if I don't wait for the final AJAX response text and try to switch to another page of the web application.
You can create a script like this:
$someStorage->set($sessionId . '_someFuncStop', true);
Call it through AJAX when the STOP button is pressed.
In your script with the loop, check that flag from storage:
while (1) {
    if ($someStorage->get($sessionId . '_someFuncStop') === true) break;
    // ... do the next chunk of the heavy work here ...
}
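The same stop-flag pattern sketched in Python, with a plain dict standing in for $someStorage (in real code the storage must be shared between requests, e.g. APC, Redis, or a database row):

```python
def stop_key(session_id: str) -> str:
    # Mirrors $sessionId . '_someFuncStop' from the PHP snippet.
    return session_id + "_someFuncStop"

def process_rows(rows, storage, session_id):
    """Process rows one at a time, bailing out when the stop flag is set."""
    done = []
    for row in rows:
        if storage.get(stop_key(session_id)) is True:
            break  # the STOP button set this flag via a separate AJAX call
        done.append(row * 2)  # stand-in for the real per-row work
    return done
```

The key point is that the flag is checked between units of work, so the loop can only stop at those boundaries, not mid-query.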
To the best of my knowledge, PHP doesn't support event listeners that can interrupt a running script from an external cause.
There are 2 paths you might want to consider (if you don't want to write shell scripts on the server that would terminate the system processes executing the script):
The ignore_user_abort() function, though it is not 100% reliable:
http://php.net/manual/en/function.ignore-user-abort.php
Inside the loop you wish to terminate, create a database call (or read from a file), where you can set some kind of flag and if that flag is set, run a break/return/die command inside the script. The button you mentioned can then write to database/file and set the interrupt flag.
(In general this is not really useful for script interruption, since most scripts run in tens of milliseconds and the flag would not be set fast enough to terminate them. For scripts running for tens of minutes, however, this is a viable solution.)
Let's imagine a request is made that lasts for a while; while it's running, PHP is echoing content. To flush the content, I use:
echo str_repeat(' ', 99999).str_repeat(' ', 99999); ob_implicit_flush(true);
so, while the request is being processed, we can actually see output.
Now I would like to have a button like "stop it" so that from Php I could kill this process (Apache process I guess). How to?
Now I would like to have a button like "stop it" so that from Php I could kill this process
It's not clear what you're asking here. Normally PHP runs in a single thread (of execution, that is; not talking about lightweight processes here). Further, for any language running as CGI/FastCGI/mod_php there are no input events: input from the HTTP channel is only read once, at the beginning of execution.
It is possible (depending on whether the thread of execution is regularly re-entering the PHP interpreter) to ask PHP to run a function at intervals (register_tick_function()) which could poll for some event communicated via another channel (e.g. a different HTTP request setting a semaphore).
Sending a stream of undefined and potentially very large length to the browser is a really bad idea. The right solution (your example is somewhat contrived) may be to spawn a background process on the webserver and poll its output via Ajax. You would still need to implement some sort of control channel, though.
Sometimes the thread of execution goes out of PHP and stays there for a long time. In many cases, if the user terminates a PHP script that has a long-running database query, the PHP may stop running but the SQL will keep on running until completion. There are solutions, but you didn't say whether that is the problem.
I have two scripts (A & B) that a user can call.
If the user calls A, I do a bunch of database access and retrieve a result to return to the user. After I've worked out what I need to send back, I then do a bunch of extra processing and modifying of the database.
I am wondering if it's possible to return the result to the user, and then perform the rest of the processing in some sort of background task.
A further condition would be that if the user that called script A then calls script B, any processing task that user triggered by calling A must be complete, or script B must wait until it completes.
Is there a way to do this?
PHP can't perform tasks after closing a request, because the request (and the response sent to the browser) is only really closed when the PHP process finishes.
Also, PHP is good for short actions, not for long-running programs like daemons, because PHP lacks a good garbage collector (so it'll eat up all available memory before crashing).
What you are looking for is called a queue. When you need to perform some resource (or time) intensive tasks, you put a task into a queue. Then later a worker process will take one item from the queue then perform the task.
This enables you to limit resource usage by limiting the number of workers, to avoid peaks and service failures.
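A toy illustration of the queue/worker split, with collections.deque standing in for a real broker such as resque or iron.io (the function names here are made up):

```python
from collections import deque

queue = deque()  # stands in for the shared queue a real broker provides

def enqueue(task):
    # Script A: send the response to the user, then enqueue the heavy task.
    queue.append(task)

def work_once(results):
    """Worker process: take one task from the queue and perform it."""
    if not queue:
        return False
    task = queue.popleft()
    results.append(task())  # the resource-intensive part runs here
    return True
```

Running N copies of the worker loop is how you cap resource usage at N concurrent heavy tasks.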
Take a look at resque (for a self-hosted solution) or iron.io (for a cloud, setup-free solution).
If you are on a shared host (so no queue and no cron are available), then I recommend you look at iron.io push queues. That sort of queue calls your server (via HTTP) to send it tasks while the queue isn't empty. This way, all the queue polling/checking is done on the iron.io side, and you only have to set up a regular page that will perform your task.
Also, if you want script B to wait for script A to finish, you'll have to create some sort of locking system. I'd try to avoid that if I were you, because it can cause a deadlock (one thread waiting on another that will never finish, thus blocking the waiting thread forever).
You could do something like this:
a.php
<?php
echo "hi there!";
// In order to run another program in the background you have to
// redirect stdout and stderr and append &,
// otherwise PHP will wait for the output.
$cmd = "/usr/bin/php ".__DIR__."/b.php";
`$cmd > /dev/null 2> /dev/null &`;
echo "<br>";
echo "finished!";
b.php
<?php
$f = fopen(__DIR__."/c.txt", "w");
// Some long-running task
for ($i = 0; $i < 300; $i++) {
    fwrite($f, "$i\n");
    sleep(1);
}
fclose($f);
Notes:
Not the most elegant solution but it does the job.
If you want just one b.php running at a time, you can add a pid check.
The process will run as the HTTP user (apache or another); make sure it has the proper permissions.
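The pid check mentioned above could look like this in Python (the pidfile path is hypothetical; the launcher would have to write b.php's pid there when starting it):

```python
import os

PIDFILE = "/tmp/b_php.pid"  # hypothetical pidfile written when b.php starts

def pid_running(pid: int) -> bool:
    """True if a process with this pid currently exists."""
    try:
        os.kill(pid, 0)  # signal 0 checks existence without sending anything
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # the pid exists but belongs to another user
    return True

def already_running() -> bool:
    """Skip launching a second copy if the recorded pid is still alive."""
    try:
        with open(PIDFILE) as f:
            return pid_running(int(f.read().strip()))
    except (FileNotFoundError, ValueError):
        return False
```

Note that pids are reused, so a stale pidfile can occasionally match an unrelated process; for this use case that's usually an acceptable risk.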
I guess you are looking for ignore_user_abort().
This function keeps your script alive for cleanup tasks when the browser has closed the connection.
http://php.net/manual/en/function.ignore-user-abort.php
You can virtually fork off a browser request by a good combination of header(), set_time_limit(), ignore_user_abort(), connection_status().
It's quite funny in combination with Ajax.
To do something like that it's better to use Node.js; PHP is synchronous and does its job line by line. (You can also develop asynchronous code in something else, like Python with the asyncore library.)
But for your question, you could develop something like this:
set_time_limit(0);
ignore_user_abort(true);
// Work with the database
// Echo data to the user
ob_end_flush();
ob_flush();
flush();
// Second part of the database access; this can take longer
Take a look at the ob_flush() PHP function; it sends output to the browser without waiting for the process to end before rendering HTML for the user.
I have a PHP script that takes about 10 minutes to complete.
I want to give the user some feedback as to the completion percent and need some ideas on how to do so.
My idea is to call the php page with jquery and the $.post command.
Is there a way to return information from the PHP script without ending the script?
For example, as far as I know, if I return a variable, the PHP script will stop running.
My idea is to split the script into multiple PHP files and have the .post run each after a return from the previous is given.
But this still will not give an accurate assessment of time left because each script will be a different size.
Any ideas on a way to do this?
Thanks!
You can echo and flush() output, but that's a suboptimal and rather fragile solution.
For long operations it might be a good idea to launch the script in the background and store/update the script's status in a shared location.
E.g. you could launch the script using an fopen('http://… call, proc_open() with a PHP CLI process, or even just by opening the long-running script in an <iframe>.
You could store the status in the database or in shared memory (using apc_store()).
This lets the user check the status of the script at any time (by refreshing the page, or using AJAX), and the user won't lose track of the script if the browser's connection times out.
It also lets you avoid starting the same long script twice.
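A sketch of the shared-status idea, with a dict standing in for the shared location (apc_store() or a database row in the PHP version; the job ids and fields are made up):

```python
status = {}  # shared location; in PHP this could be apc_store() or a DB table

def update_progress(job_id: str, percent: int) -> None:
    # Called by the long-running background script as it works.
    status[job_id] = {"percent": percent, "done": percent >= 100}

def get_progress(job_id: str) -> dict:
    """What the AJAX poll endpoint would return for this job."""
    return status.get(job_id, {"percent": 0, "done": False})
```

Because the status lives outside the request, the browser can reconnect at any time, and checking for an unfinished job before launching a new one prevents starting the same script twice.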