I have an error reporting function in a web application, and when a user posts an error, an email sends out to the developer email. Current workflow is like user input -> db write -> send email -> redirect.
To prevent the email function from holding up the rest of the user's flow, I want to separate it, so the email sending process can do its thing while the user keeps working. So it's like
user input -> db write -> redirect
                       \-> send email
The first thing that came to mind was sending an AJAX request to start the email process, so the error reporting and the email function can run asynchronously, but I'm curious whether there's a better way to do this.
There are different ways to do this depending on how complex you want to get. PHP does have process control stuff, but especially if your site is going to be hosted on a shared hosting provider, the availability of that functionality in your hosting environment is not guaranteed.
Probably one of the more foolproof ways to do this, in terms of ensuring compatibility across a wide range of hosting environments, is to create a new "mails to send" database table and log a row to it for each mail you want to send. Then set up a cron task on your server that runs a script that pulls all rows from this table, sends each email, then removes the row from the table. With CodeIgniter, if you call the index.php script like
php -f index.php foo bar
it will call the same method that would be called if the /foo/bar path were requested via the standard routing system (e.g., the bar method of the Foo controller). Then, inside that method, you can use if (!is_cli()) { exit(); } to make sure that method isn't being called from a web request, and flock() to make sure you don't have two processes sending emails at one time (in case one hangs or takes a long time). Select everything from the "mails to send" table, iterate through it row by row, send the mail, do some logging if applicable, then delete the row. (More info on doing CLI scripts in CodeIgniter.)
Then set up a cron task to run this script every X minutes, perhaps set up some way to check the table now and then to make sure it's not growing too large (which would be a sign that the mail sending script is failing somehow), and there you go.
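For illustration, here's a minimal sketch of such a worker in plain PHP rather than a full CodeIgniter controller; the table name, columns, and connection details are all assumptions:
<?php
// mail-queue-worker.php - run from cron, e.g. */5 * * * * /usr/bin/php -f mail-queue-worker.php
// Assumed table: mails_to_send(id, recipient, subject, body)
if (php_sapi_name() !== 'cli') {
    exit('CLI only');
}
// flock() guard so overlapping cron runs never send the same mail twice
$lock = fopen('/tmp/mail-queue.lock', 'c');
if (!flock($lock, LOCK_EX | LOCK_NB)) {
    exit; // a previous run is still going
}
$db = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass'); // assumed credentials
foreach ($db->query('SELECT id, recipient, subject, body FROM mails_to_send') as $row) {
    if (mail($row['recipient'], $row['subject'], $row['body'])) {
        $db->prepare('DELETE FROM mails_to_send WHERE id = ?')->execute(array($row['id']));
    }
    // on failure the row stays in the table and is retried on the next run
}
flock($lock, LOCK_UN);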
As mentioned above, there are more clever ways to implement something like this, but this is a fairly broadly-compatible and foolproof one.
Forking off a process from a PHP script that is handling a web server request is possible, but you should be wary of a few things:
Your original PHP process and the forked-off process will probably share a lot of data objects. While some variables may be copied and can be manipulated independently in the child and parent processes, others (like db connections or file handles) may refer to the same underlying resource. You should be careful to avoid re-using resource variables: have the child reconnect to your db, reopen files, etc.
A forked-off process might have trouble connecting to a db or mail gateway, etc. If it's doing something important, you should probably make the child retry in a loop and eventually die after a certain number of failed attempts.
There are probably a dozen other warnings about multithreading, race conditions, resource usage, etc. Just remember that forking off processes means that you are consuming more resources, that those resources might be in use by other processes, and you might get weird, hard-to-troubleshoot problems.
All that said, I have had some luck using a combination of the exec command to fork off a separate process and then calling the posix_setsid function so that the forked-off child process can fully detach itself from apache and continue running even if the parent process finishes or apache restarts or whatever.
USE THIS CODE AT YOUR OWN RISK.
From your CodeIgniter script, you can fork off a child process using exec, like so:
// the PARENT process (a CodeIgniter script running in response to http request)
// put this code in a controller or something
// here's a command to run the PHP executable in the background on some file
// routing the output and error messages to another file
$cmd = "/usr/bin/php /path/to/child-script.php > /tmp/foo/out.txt 2>&1 & echo \$!";
// will contain an array of output lines from the exec command
$cmd_output = NULL;
// will contain the success/failure code returned by the OS.
// Will be zero for a valid command, non-zero otherwise
$cmd_result = NULL;
// $return will contain the last line from the result of
// the command which should be the PID of the process we have spawned
$cmd_return = exec($cmd, $cmd_output, $cmd_result);
NOTE: you could instead use pcntl_fork to fork off the process. The reason I have not is that the child process would then inherit every variable in the CodeIgniter parent script, which can lead to some confusing behavior; e.g., if one process changes the db connection, that change might suddenly appear in the other script. The use of exec() here more thoroughly separates the two scripts and keeps things simpler. If you want to change $cmd above to run a CodeIgniter controller, see CodeIgniter from the Command Line.
In the child process, you must call posix_setsid() so that the child script detaches itself from the parent. If you do not, then the child process might be killed when the parent process finishes (or is killed) or if the web server restarts or crashes or whatever.
Here is the code in the child process, which requires the POSIX extension to be installed and enabled, with its functions not blocked in php.ini by the disable_functions directive. These functions are very often disabled (and for good reason) in your web server's php.ini, but sometimes still available for CLI scripts, which is another good reason to use exec to fork off a command-line process.
// child-script.php
// CHILD process. Put your emailing code or other long-running junk in here
// because this is a brand-new, separate process, you might need to load configuration files, connect to the db, etc.
if (!extension_loaded("posix")) {
    throw new Exception("This script requires posix extension");
}
// NOTE: the output and results of this file will be completely unavailable to the parent script and you might never know what happens if you don't write a log file or something. Consider opening a log file somewhere
// process id of this process, should match contents of $cmd_return above
$mypid = posix_getpid();
// session id -- not to be confused with php session id
// apparently this call will make this process a 'session leader'
$sid = posix_setsid();
if ($sid < 0) {
throw new Exception("setsid failed, returned $sid");
}
// this output should be routed by the exec command above to
// /tmp/foo/out.txt
echo "setsid success, sid=$sid\n";
// PUT YOUR EMAILING CODE HERE
I'm trying to figure out the best way to send a response to an incoming API POST to our system, without returning.
We receive an XML post from another service and consume it, and then our response to them is in XML too. What we are currently doing is digesting the incoming post, doing some stuff on our end, then doing a PHP return with the XML.
I would like to change this so that we can respond to their call with the XML, but then do some processing after the fact, without making some type of exec/background call.
What's the best way to send a response in PHP without returning? If we do an "echo", will that close the connection and allow us to process more afterwards without the other server "waiting"?
Calling PHP's echo will not close the connection; in fact, you can call echo multiple times in your PHP script and the output will be added to the response. The connection will only close when
The end of the script is reached
exit() or the alias die() are called
A fatal/parse error or an uncaught Exception occurs or your server runs out of memory
The maximum script execution time which you can set in php.ini is exceeded
Usually, the calling client code will also have some kind of timeout, so if your 'digesting' code could take longer and you want to take care of this as well as point 4 in the list, you can store the request data for later processing, for example in a database or serialized in files. Having successfully stored the data, you then basically have 2 options:
Option 1: Spawn a background PHP process
To spawn a background PHP process that will outlive the calling script, use exec and nohup. The basic usage could look like this:
exec('RESOURCE_ID=123 nohup /path/to/your/php/executable your_script.php > /dev/null');
Within the first segment of the command, RESOURCE_ID=123, you can pass a unique identifier of the previously stored request data, maybe a database entry id or the storage filename, to the background script. Use getenv('RESOURCE_ID') in your background script to retrieve the variable.
[EDIT] The > /dev/null output redirection is crucial for running the process in the background; otherwise the parent script will wait for the output of the background process. I also suggest writing the output as well as the error output to an actual file, like &> my_script.out, which has the same effect. You could also get the process id of the background process by appending & echo $! to the command; exec() will then return it.
After starting the background script, you can send your 'OK' response and exit the parent script.
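To illustrate, here's a minimal sketch of what your_script.php might look like on the other end; the table name and connection details are assumptions:
<?php
// your_script.php - started in the background by the exec()/nohup command above
$resourceId = getenv('RESOURCE_ID');
if ($resourceId === false) {
    exit(1); // no resource id passed, nothing to do
}
// Load the previously stored request data by its id, then do the slow 'digesting' work.
$db = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass'); // assumed credentials
$stmt = $db->prepare('SELECT payload FROM stored_requests WHERE id = ?');
$stmt->execute(array($resourceId));
$payload = $stmt->fetchColumn();
// ... process $payload here ...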
Option 2: Cronjob for processing, as suggested by Jim Panse
The more complex your system grows, you probably need more control over the execution of your 'digesting' code. Perhaps you want to balance server load peaks, or restart failed tasks, or throttle malicious usage of your API. If you need this kind of control, you are better off with this option.
Since I assume you want your system-to-system communication to stay synchronous, there are multiple things you can consider. Even with time-consuming requests you usually still want a fast response, and to satisfy that, you can't process the request immediately.
So, just save the request and process it later (give the client a 202 response back). Queueing systems are a very popular way to save time-consuming jobs for later. Another time-controlled script (cronjob) can then poll the queue and process the stacked messages/data.
If you want to provide the results to the client too, return them a unique resource id on the initial REST call and implement another resource that takes exactly this parameter as input. Once your system has finished processing, the result will appear there.
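A rough sketch of that pattern, assuming a jobs table and these two endpoint names (both made up for illustration):
<?php
// submit.php - accepts the job, stores it, answers immediately with 202 Accepted
$db = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass');
$db->prepare("INSERT INTO jobs (payload, status) VALUES (?, 'queued')")
   ->execute(array(file_get_contents('php://input')));
$jobId = $db->lastInsertId();
http_response_code(202);
header('Content-Type: application/json');
echo json_encode(array('job_id' => $jobId, 'status_url' => '/status.php?id=' . $jobId));

<?php
// status.php - the client polls this until the cron worker marks the job done
$db = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass');
$stmt = $db->prepare('SELECT status, result FROM jobs WHERE id = ?');
$stmt->execute(array((int) $_GET['id']));
echo json_encode($stmt->fetch(PDO::FETCH_ASSOC) ?: array('status' => 'unknown'));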
Spawning a process from within another PHP script isn't very handy, since it's very difficult to debug and error-prone. I personally wouldn't go for this solution.
Problem:
I'm trying to see if I can have a back and forth between a program running on the server-side and JavaScript running on the client-side. All the outputs from the program are sent to JavaScript to be displayed to the user, and all the inputs from the user are sent from JavaScript to the program.
Having JavaScript receive the output and send the input is easily done with AJAX. The problem is that I do not know how to access an already running program on the server.
Attempt:
I tried to use PHP, but ran into some hurdles I couldn't leap over. Now, I can execute a program with PHP without any issue using proc_open. I can hook into the stdin and stdout streams, and I can get output from the program and send it input as well. But I can do this only once.
If the same PHP script is executed again, I end up running the program again, so all I ever get out of multiple executions is whatever the program writes to stdout first, multiple times.
Right now, I use proc_open in the script that is supposed to only take care of input and output, because I do not know how to access the stdout and stdin streams of an already-running program. The way I see it, I need to maintain the state of my program in execution across multiple executions of the same PHP script: the resource returned by proc_open and the pipes hooked into the stdin and stdout streams.
$_SESSION does NOT work. I cannot use it to maintain resources.
Is there a way to have such a back and forth with a program? Any help is really appreciated.
This sounds like a job for websockets
Try something like http://socketo.me/ or http://code.google.com/p/phpwebsocket/
I've always used Node for this type of thing, but from the above two links and a few others, it looks like there's options for PHP as well.
There may be a more efficient way to do it, but you could get the program to write its output to a text file and read the contents of that text file in with PHP. That way you'd have access to the full stream of data from the running program. There are issues with managing the size of the file and handling requests from multiple clients, but it's a simple approach that might be good enough for your needs.
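A sketch of the reading side, assuming the program appends to /tmp/program-output.txt and the client passes back the last offset it saw:
<?php
// read_output.php - called via AJAX; returns whatever the program wrote since the last poll
$path = '/tmp/program-output.txt'; // assumed location of the program's output file
$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$new = '';
if (is_readable($path) && ($fh = fopen($path, 'r'))) {
    fseek($fh, $offset);
    $new = stream_get_contents($fh); // everything appended since the client's last offset
    fclose($fh);
}
header('Content-Type: application/json');
echo json_encode(array('data' => $new, 'offset' => $offset + strlen($new)));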
You are running the same program again because that's the way PHP works: the client makes an HTTP request, which runs the script; a second request runs the script again. I'm not sure continuous interaction is possible, so I would suggest making your script able to handle discrete transactions.
In order to tie together different steps of the same "interaction", you will have to save data about previous steps in a database. Basically, you need to give every client a unique hash to identify them in your script; it will then know who makes each request and be able to tell consecutive requests from one user apart from requests by different users.
If your script is heavy and runs for a long time, consider splitting it into two scripts: one heavy and one for interaction (AJAX will query the second one). In this case, the interaction script fills data into the database and the heavy script simply fetches it from there.
I have a PHP script that takes a long time (5-30 minutes) to complete. Just in case it matters, the script is using curl to scrape data from another server. This is the reason it's taking so long; it has to wait for each page to load before processing it and moving to the next.
I want to be able to initiate the script and let it be until it's done, which will set a flag in a database table.
What I need to know is how to end the HTTP request before the script is finished running. Also, is a PHP script the best way to do this?
Certainly it can be done with PHP, however you should NOT do this as a background task - the new process has to be dissociated from the process group where it is initiated.
Since people keep giving the same wrong answer to this FAQ, I've written a fuller answer here:
http://symcbean.blogspot.com/2010/02/php-and-long-running-processes.html
From the comments:
The short version is shell_exec('echo /usr/bin/php -q longThing.php | at now'); but the reasons "why", are a bit long for inclusion here.
Update +12 years
While this is still a good way to invoke a long-running bit of code, it is good for security to limit, or even disable, the ability of PHP in the webserver to launch other executables. And since this decouples the behaviour of the long-running thing from that which started it, in many cases it may be more appropriate to use a daemon or a cron job.
The quick and dirty way would be to use the ignore_user_abort function in PHP. This basically says: don't care what the user does, run this script until it is finished. This is somewhat dangerous on a public-facing site, because you can end up with 20+ copies of the script running at the same time if it is initiated 20 times.
The "clean" way (at least IMHO) is to set a flag (in the db for example) when you want to initiate the process and run a cronjob every hour (or so) to check if that flag is set. If it IS set, the long running script starts, if it is NOT set, nothin happens.
You could use exec or system to start a background job, and then do the work in that.
Also, there are better approaches to scraping the web than the one you're using. You could use a threaded approach (multiple threads doing one page at a time), or one using an event loop (one thread doing multiple pages at a time). My personal approach using Perl would be AnyEvent::HTTP.
ETA: symcbean explained how to detach the background process properly here.
No, PHP is not the best solution.
I'm not sure about Ruby or Perl, but with Python you could rewrite your page scraper to be multi-threaded and it would probably run at least 20x faster. Writing multi-threaded apps can be somewhat of a challenge, but the very first Python app I wrote was a multi-threaded page scraper. And you could simply call the Python script from within your PHP page by using one of the shell execution functions.
Yes, you can do it in PHP. But in addition to PHP it would be wise to use a Queue Manager. Here's the strategy:
Break up your large task into smaller tasks. In your case, each task could be loading a single page.
Send each small task to the queue.
Run your queue workers somewhere.
Using this strategy has the following advantages:
For long running tasks it has the ability to recover in case a fatal problem occurs in the middle of the run -- no need to start from the beginning.
If your tasks do not have to be run sequentially, you can run multiple workers to run tasks simultaneously.
You have a variety of options (these are just a few; a minimal publisher sketch using php-amqplib follows the list):
RabbitMQ (https://www.rabbitmq.com/tutorials/tutorial-one-php.html)
ZeroMQ (http://zeromq.org/bindings:php)
If you're using the Laravel framework, queues are built-in (https://laravel.com/docs/5.4/queues), with drivers for Amazon SQS, Redis, and Beanstalkd
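For instance, here's a minimal publisher along the lines of the RabbitMQ tutorial linked above; the queue name and payload are made up, and a worker on the other end would consume one message per page:
<?php
require_once __DIR__ . '/vendor/autoload.php';
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
$channel->queue_declare('scrape_tasks', false, true, false, false); // durable queue
// one small task per page to scrape
$msg = new AMQPMessage(
    json_encode(array('url' => 'http://example.com/page/1')),
    array('delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT)
);
$channel->basic_publish($msg, '', 'scrape_tasks');
$channel->close();
$connection->close();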
PHP may or may not be the best tool, but you know how to use it, and the rest of your application is written using it. These two qualities, combined with the fact that PHP is "good enough" make a pretty strong case for using it, instead of Perl, Ruby, or Python.
If your goal is to learn another language, then pick one and use it. Any language you mentioned will do the job, no problem. I happen to like Perl, but what you like may be different.
Symcbean has some good advice about how to manage background processes at his link.
In short, write a CLI PHP script to handle the long bits. Make sure that it reports status in some way. Make a PHP page to handle status updates, either using AJAX or traditional methods. Your kickoff script will then start the process running in its own session, and return confirmation that the process is going.
Good luck.
I agree with the answers that say this should be run in a background process. But it's also important that you report on the status so the user knows that the work is being done.
When receiving the PHP request to kick off the process, you could store in a database a representation of the task with a unique identifier. Then, start the screen-scraping process, passing it the unique identifier. Report back to the iPhone app that the task has been started and that it should check a specified URL, containing the new task ID, to get the latest status. The iPhone application can now poll (or even "long poll") this URL. In the meantime, the background process would update the database representation of the task as it worked with a completion percentage, current step, or whatever other status indicators you'd like. And when it has finished, it would set a completed flag.
You can send it as an XHR (Ajax) request. Clients don't usually have any timeout for XHRs, unlike normal HTTP requests.
I realize this is quite an old question, but would like to give it a shot. This script tries both to let the initial kick-off call finish quickly and to chop the heavy load into smaller chunks. I haven't tested this solution.
<?php
/**
* crawler.php located at http://mysite.com/crawler.php
*/
// Make sure this script keeps running after we close the connection to it.
ignore_user_abort(TRUE);
function get_remote_sources_to_crawl() {
  // Do a database or a log file query here.
  $query_result = array(
    1 => 'http://example.com',
    2 => 'http://example1.com',
    3 => 'http://example2.com',
    4 => 'http://example3.com',
    // ... and so on.
  );
  // Return the first one on the list, as array($id, $url).
  foreach ($query_result as $id => $url) {
    return array($id, $url);
  }
  return FALSE;
}
function update_remote_sources_to_crawl($id) {
  // Update my database or log file list so the $id record won't show up
  // on my next call to get_remote_sources_to_crawl()
}
$crawling_source = get_remote_sources_to_crawl();
if ($crawling_source) {
  list($id, $url) = $crawling_source;
  // Run your scraping code on $url here.
  if ($your_scraping_has_finished) { // placeholder flag set by your scraping code
    // Update your database or log file.
    update_remote_sources_to_crawl($id);
$ctx = stream_context_create(array(
'http' => array(
// I am not quite sure but I reckon the timeout set here actually
// starts rolling after the connection to the remote server is made
// limiting only how long the downloading of the remote content should take.
// So as we are only interested to trigger this script again, 5 seconds
// should be plenty of time.
'timeout' => 5,
)
));
// Open a new connection to this script and close it after 5 seconds in.
file_get_contents('http://' . $_SERVER['HTTP_HOST'] . '/crawler.php', FALSE, $ctx);
print 'The cronjob kick off has been initiated.';
}
}
else {
print 'Yay! The whole thing is done.';
}
I would like to propose a solution that is a little different from symcbean's, mainly because I have an additional requirement: the long-running process needs to run as another user, not as the apache/www-data user.
First solution using cron to poll a background task table:
PHP web page inserts into a background task table, state 'SUBMITTED'
cron runs every 3 minutes, as another user, executing a PHP CLI script that checks the background task table for 'SUBMITTED' rows
the PHP CLI script updates the state column of the row to 'PROCESSING' and begins processing; after completion it is updated to 'COMPLETED'
Second solution using Linux inotify facility:
PHP web page updates a control file with the parameters set by user, and also giving a task id
a shell script (run as a non-www user) running inotifywait waits for the control file to be written (a PHP alternative using the PECL inotify extension is sketched after these steps)
after the control file is written, a close_write event is raised and the shell script continues
shell script executes PHP CLI to do the long running process
PHP CLI writes the output to a log file identified by task id, or alternatively updates progress in a status table
PHP web page could poll the log file (based on task id) to show progress of the long running process, or it could also query status table
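If you prefer to keep the waiting side in PHP rather than a shell script, a rough equivalent using the PECL inotify extension might look like this (the paths are assumptions):
<?php
// watcher.php - long-running CLI process, run as the non-www user
// requires the PECL inotify extension
$fd = inotify_init();
inotify_add_watch($fd, '/var/run/myapp/control', IN_CLOSE_WRITE);
while (true) {
    $events = inotify_read($fd); // blocks until the control file is written and closed
    foreach ($events as $event) {
        // hand off to the long-running PHP CLI worker, appending output to a log
        exec('php /path/to/long_process.php >> /var/log/myapp/task.log 2>&1');
    }
}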
Some additional info could be found in my post : http://inventorsparadox.blogspot.co.id/2016/01/long-running-process-in-linux-using-php.html
I have done similar things with Perl, using a double fork() and detaching from the parent process. All HTTP fetching work should be done in the forked process.
Use a proxy to delegate the request.
What I ALWAYS use is one of these variants (because different flavors of Linux have different rules about handling output, and some programs output differently):
Variant I
@exec('./myscript.php 1>/dev/null 2>/dev/null &');
Variant II
@exec('php -f myscript.php 1>/dev/null 2>/dev/null &');
Variant III
@exec('nohup myscript.php 1>/dev/null 2>/dev/null &');
You might have to install nohup. But, for example, when I was automating FFMPEG video conversions, the output somehow wasn't 100% handled by redirecting output streams 1 & 2, so I used nohup AND redirected the output.
If you have a long script, then divide the work into per-page tasks with the help of an input parameter for each task (each page then acts like a thread): i.e., if a page has a long process loop over 100,000 (1 lac) product_keywords, then instead of the loop, make the logic handle one keyword and pass that keyword in from cornjobpage.php (in the following example).
And for the background worker, I think you should try this technique. It will help to call as many pages as you like: all pages will run at once, independently, without waiting for each page's response, asynchronously.
cornjobpage.php //mainpage
<?php
post_async("http://localhost/projectname/testpage.php", "Keywordname=testValue");
//post_async("http://localhost/projectname/testpage.php", "Keywordname=testValue2");
//post_async("http://localhost/projectname/otherpage.php", "Keywordname=anyValue");
//call as many pages as you like; all pages will run at once, independently, without waiting for each page's response (asynchronously)
?>
<?php
/*
* Executes a PHP page asynchronously so the current page does not have to wait for it to finish running.
*
*/
function post_async($url, $params)
{
    $post_string = $params;
    $parts = parse_url($url);
    $fp = fsockopen($parts['host'],
        isset($parts['port']) ? $parts['port'] : 80,
        $errno, $errstr, 30);
    if (!$fp) {
        return; // connection failed; there is nothing to wait for anyway
    }
    // You can use POST instead of GET if you like; then move $post_string into
    // the request body. The content headers below only matter in that case.
    $out = "GET " . $parts['path'] . "?$post_string" . " HTTP/1.1\r\n";
    $out .= "Host: " . $parts['host'] . "\r\n";
    $out .= "Content-Type: application/x-www-form-urlencoded\r\n";
    $out .= "Content-Length: " . strlen($post_string) . "\r\n";
    $out .= "Connection: Close\r\n\r\n";
    fwrite($fp, $out);
    // Close immediately without reading the response; this is what makes the call asynchronous.
    fclose($fp);
}
?>
testpage.php
<?php
echo $_REQUEST["Keywordname"];//case1 Output > testValue
?>
PS: if you want to send URL parameters in a loop, then follow this answer: https://stackoverflow.com/a/41225209/6295712
Not the best approach, as many stated here, but this might help:
ignore_user_abort(1); // run script in background even if user closes browser
set_time_limit(1800); // run it for 30 minutes
// Long running script here
If the desired output of your script is some processing, not a webpage, then I believe the desired solution is to run your script from shell, simply as
php my_script.php
I have done some Google searching on this topic and couldn't find the answer to my question.
What I want to achieve is the following:
the client makes an asynchronous call to a function on the server
the server runs that function in the background (because that function is time-consuming), and the client is not left hanging in the meantime
the client repeatedly calls the server requesting the status of the background job
Can you please give me some advice on resolving my issue?
Thank you very much! ^-^
You are not specifying what language the asynchronous call is in, but I'm assuming PHP on both ends.
I think the most elegant way would be this:
HTML page loads, defines a random key for the operation (e.g. using rand() or an already available session ID [be careful though that the same user could be starting two operations])
HTML page makes Ajax call to PHP script to start_process.php
start_process.php calls exec() on /path/to/long_process.php to start the process; see the User Contributed Notes on exec() for suggestions on how to start a process in the background. Which one is right for you depends mainly on your OS.
long_process.php frequently writes its status into a status file, named after the random key that your Ajax page generated
HTML page makes frequent calls to show_status.php, which reads out the status file and returns the progress (steps 4 and 5 are sketched below).
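A bare-bones sketch of steps 4 and 5; the file locations and the key parameter are assumptions:
<?php
// long_process.php - writes its progress under the random key passed as argv[1]
$key = preg_replace('/[^a-z0-9]/i', '', $argv[1]); // sanitize: it becomes part of a filename
$statusFile = "/tmp/status-$key.txt";
for ($step = 1; $step <= 10; $step++) {
    // ... one chunk of the real work here ...
    file_put_contents($statusFile, ($step * 10) . '% complete');
}
file_put_contents($statusFile, 'done');

<?php
// show_status.php - polled by the HTML page
$key = preg_replace('/[^a-z0-9]/i', '', $_GET['key']);
$status = @file_get_contents("/tmp/status-$key.txt");
echo $status !== false ? $status : 'not started';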
Have a google for long-running PHP processes (be warned that there's a lot of bad advice out there on the topic, including the note referred to by Pekka; that approach will work on Microsoft but will fail in unpredictable ways on anything else).
You could develop a service which responds to requests over a socket (your client would use fsockopen to connect). One simple way of achieving this would be to use Aleksey Zapparov's socket server (http://www.phpclasses.org/browse/package/5758.html), which handles requests coming in via a socket; however, since this runs as a single thread, it may not be very appropriate for something which requires a lot of processing. Alternatively, if you are using a non-Microsoft system, you could hang your script off [x]inetd; however, you'll need to do some clever stuff to prevent it terminating when the client disconnects.
To keep the thing running after your client disconnects, the PHP code must be running from the standalone PHP executable (not via the webserver). Spawn a process in a new process group (see posix_setsid() and pcntl_fork()). To enable the client to come back and check on progress, the easiest way is to configure the server to write its status out to somewhere the client can read.
C.
Ajax call runs the method longRunningMethod() and gets back an identifier (e.g. an id)
Server runs the method and sets a key in, e.g., shared memory
Client calls checkTask(id)
Server looks up the key in shared memory and checks for ready status
[repeat 3 & 4 until 5 is finished]
longRunningMethod finishes and sets the state to finished in shared memory.
All Ajax calls are by definition asynchronous.
You could (although not a strictly necessary step) use AJAX to instantiate the call, and the script could then create a reference to the status of the background job in shared memory (or even a temporary entry in an SQL table, or even a temp file), in the form of a unique job id.
The script could then kick off your background process and immediately return the job ID to the client.
The client could then call the server repeatedly (via another AJAX interface, for example) to query the status of the job, e.g. "in progress", "complete".
If the background process to be executed is itself written in PHP (e.g. a command line PHP script) then you could pass the job id to it and it could provide meaningful progress updates back to the client (by writing to the same shared memory area, or database table).
If the process to be executed is not itself written in PHP, then I suggest wrapping it in a command-line PHP script so that it can monitor when the process being executed has finished running (and check the output to see if it was successful) and update the status entry for that task appropriately.
Note: Using shared memory for this is best practice, but it may not be available if you are using shared hosting, for example. Don't forget you want a means to clean up old status entries, so I would store "started_on"/"completed_on" timestamp values for each one and have it delete stale entries (e.g. those with a completed_on timestamp of more than X minutes ago; ideally, it should also check for jobs that started some time ago but were never marked as completed, and raise an alert about them).
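Here's what the bookkeeping might look like with the SQL-table variant (which also works on shared hosting); the table, columns, and connection details are assumptions:
<?php
$db = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass');
// kick-off script: create the status entry and hand the job id to the client
$jobId = uniqid('job', true);
$db->prepare("INSERT INTO job_status (job_id, status, started_on) VALUES (?, 'in progress', NOW())")
   ->execute(array($jobId));
// background process: mark completion when done (it receives $jobId as a parameter)
$db->prepare("UPDATE job_status SET status = 'complete', completed_on = NOW() WHERE job_id = ?")
   ->execute(array($jobId));
// housekeeping (e.g. from cron): drop stale entries, per the note above
$db->exec("DELETE FROM job_status WHERE completed_on < NOW() - INTERVAL 10 MINUTE");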
PHP provides a mechanism to register a shutdown function:
register_shutdown_function('shutdown_func');
The problem is that in recent versions of PHP, this function is still executed DURING the request.
I have a platform (in Zend Framework, if that matters) where any piece of code throughout the request can register an entry to be logged to the database. Rather than have tons of individual insert statements throughout the request, slowing the page down, I queue them up to be inserted at the end of the request. I would like to be able to do this after the HTTP request to the user is complete, so the time taken to log (or do any other cleanup tasks) doesn't affect the user's perceived load time of the page.
Is there a built in method in PHP to do this? Or do I need to configure some kind of shared memory space scheme with an external process and signal that process to do the logging?
If you're really concerned about the insert times of MySQL, you're probably addressing the symptoms and not the cause.
For instance, if your PHP/Apache process is executing after the user gets their HTML, your PHP/Apache process is still locked into that request. Since it's busy, if another request comes along, Apache has to fork another thread, dedicate more memory to it, open additional database connections, etc.
If you're running into performance problems, you need to remove heavy lifting from your PHP/Apache execution. If you have a lot of cleanup tasks going on, you're burning precious Apache processes.
I would consider logging your entries to a file and having a crontab job load these into your database out of band. If you have other heavy-duty tasks, use a queuing/job processing system to do the work out of band.
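A sketch of that out-of-band pattern, with the file path, table, and connection details all assumed:
<?php
// during the request: append one JSON line per entry; cheap, no extra DB work
// ($message is whatever your application wants to log)
file_put_contents('/var/log/myapp/pending.log',
    json_encode(array('msg' => $message, 'ts' => time())) . "\n",
    FILE_APPEND | LOCK_EX);

<?php
// load_logs.php - run out of band from crontab, e.g. every 5 minutes
$path = '/var/log/myapp/pending.log';
if (!file_exists($path)) {
    exit;
}
rename($path, $path . '.work'); // grab the current batch; new entries start a fresh file
$db = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass');
$stmt = $db->prepare('INSERT INTO log_entries (message, created_at) VALUES (?, FROM_UNIXTIME(?))');
foreach (file($path . '.work', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
    $entry = json_decode($line, true);
    $stmt->execute(array($entry['msg'], $entry['ts']));
}
unlink($path . '.work');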
Aside from register_shutdown_function() there aren't built-in methods for determining when a script has exited. However, the Zend Framework has several hooks for running code at specific points in the dispatch process.
For your requirements the most relevant would be the action controller's post-dispatch hook which takes place just after an action has been dispatched, and the dispatchLoopShutdown event for the controller plugin broker.
You should read the manual to determine which will be a better fit for you.
EDIT: I guess I didn't understand completely. From those hooks you could fork the current process in some form or another.
PHP has several ways to fork processes as you can read in the manual under program execution. I would suggest going over the pcntl extension - read this blog post to see an example of forking child processes to run in the background.
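For reference, the basic shape of that fork-and-detach pattern is something like this (CLI only; do_long_running_work() is a made-up stand-in for your logging/cleanup code):
<?php
// requires the pcntl and posix extensions; intended for CLI scripts
$pid = pcntl_fork();
if ($pid === -1) {
    die('fork failed');
} elseif ($pid) {
    // parent: carry on and finish the request right away
    echo "started background work in process $pid\n";
} else {
    // child: detach into its own session, then do the slow work
    posix_setsid();
    do_long_running_work(); // hypothetical; replace with the real job
    exit(0);
}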
The comments on this random guy's blog sound similar to what you want. If that header trick doesn't work, one of the comments on that blog suggests exec()ing to a separate PHP script to run in the background, if your Web host's configuration allows such a thing.
This might be a little hackish for your taste, but it would be an effective and simple workaround.
You could create a "queue" managed by your DB of choice: first store the request in your queue table in the database, then output an iframe that leads to a script that will trigger the instructions matching the queue id in your database.
ex:
<?php
mysql_query("INSERT INTO queue (instructions) VALUES ('something would go here')");
echo('<iframe src="/yourapp/execute_queue?id=' . mysql_insert_id() . '" />');
?>
and the frame would do something like
ex:
<?php
$result = mysql_query('SELECT instructions FROM queue WHERE id = ' . (int) $_GET['id']); // cast to int to avoid SQL injection
// From here, simply execute some instruction based on the "instructions" field, then delete the row from the database.
?>
Like I said, I can see how you could consider this hackish, but the frame will load independently of its parent page, so it achieves what you want without running some other app in the background.
Hope this at least points you in the right direction.
Cheers!