Spawning a PHP script from inside another, non-blocking - php

I have a PHP script that I need to execute from inside another PHP webpage. However, for the second one to run properly, the first needs to have fully completed. Essentially, I need the first page to spawn a new process/thread for the second script, which will wait 1 second before starting.
Doing an include causes blocking, which prevents it from working, and I can't get it to start using exec.
Edit:
Should have clarified: these pages have no output and are not interacted with through a web interface. All pages are called by POST requests from another server.
Edit 2:
Solution: make the server requesting the page send a request directly to the second page 1 second after the first returns.

proc_open is the correct choice, as @ChristopherMorrissey pointed out. I want to elaborate a little here, as there are some caveats to using proc_open that aren't entirely obvious.
In the first code example at http://php.net/manual/en/function.proc-open.php, the overall usage is shown, and I will reference that.
The first caveat is with the pipes. The pipes are file streams in PHP that link to the STDIN, STDOUT and STDERR of the child process. In the example, pipe index 0 represents a file stream from the parent PHP process's perspective. If the parent process writes to this stream, it will appear as STDIN input to the child process.
On POSIX-compliant OSes, STDIN to a process needs to close before the process can terminate. It's very important to call fclose on the pipe from the parent, or your child process will be stuck. That is done with this line in the example:
fclose($pipes[0]);
The other caveat is checking the exit code of the child process. Checking the exit code is the best way to determine whether the child process exited correctly or erred out. At the very least, you will need to know when the child process has completed. Both checking this and reading the exit code are done with http://www.php.net/manual/en/function.proc-get-status.php
If you want to ensure the process exits correctly, you will need to look at the exitcode field in the array returned from proc_get_status. Keep in mind this exit code will only return a valid value once; all other times it will return -1. So the one time it returns > -1, that is your actual exit code. In other words, the first time running == false, check exitcode.
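Putting those caveats together, here is a minimal sketch of the pattern; the child command string and the polling interval are illustrative placeholders, not taken from the manual's example:
$descriptors = array(
    0 => array('pipe', 'r'),  // child's STDIN; the parent writes to it
    1 => array('pipe', 'w'),  // child's STDOUT; the parent reads from it
    2 => array('pipe', 'w'),  // child's STDERR; the parent reads from it
);
$process = proc_open('php child.php', $descriptors, $pipes);
if (is_resource($process)) {
    fclose($pipes[0]); // close STDIN so the child is able to terminate
    $exitCode = -1;
    do {
        usleep(100000); // poll every 100 ms
        // For a chatty child, also read $pipes[1] here so the pipe
        // buffer doesn't fill up and block the child.
        $status = proc_get_status($process);
        if ($status['exitcode'] > -1) {
            $exitCode = $status['exitcode']; // only valid once, so capture it
        }
    } while ($status['running']);
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($process);
    // $exitCode === 0 normally means the child exited correctly
}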
I hope this helps.

One option would be to redirect the page after the first script has finished.
You could do it this way:
//first script here
sleep(1); //wait one second
echo "<meta http-equiv=\"refresh\" content=\"0;URL='yoursecondscript.php'\" />"; //redirect
or even:
//first script here
//redirect after one sec
echo "<meta http-equiv=\"refresh\" content=\"1;URL='http://thetudors.example.com/'\" />";

You might be able to send a header redirecting to the second page when you want it, then send a header back to the original page at the end of the second page. Though there are many reasons this fix wouldn't work, like the PHP code being required to be in the HTML.
Headers reference
So something like: header('Location: seconddocument.php');
Or maybe you could put the PHP to execute in a function, and then call it from the original PHP document. I can't be exactly sure of your requirements here, but those would be my two best answers.

Related

Windows PHP repeating script via popen

I'm trying to create a browser-started, self-calling/repeating PHP script on Windows with PHP (currently 5.3.24 but soon will be the latest). It will act as a daemon to monitor changes in a database (every few seconds, so cron/schedule is out) and then call other PHP scripts to perform work when changes are found. For the purposes of this question please ignore the fact that I'd be better off doing this in C# or some other language :)
To keep things simple I started out by trying to use popen to run a second PHP script in the background...
// BatchMonitor.php
SaveToMonitorTable(1); // save 1st test entry to see if the script reached this point
$Command = '"" "C:\Program Files (x86)\PHP\v5.3\php.exe" C:\inetpub\wwwroot\Test.php --Instance=' . $Data->Instance;
pclose(popen("start /B $Command", "r"));
SaveToMonitorTable(2); // save 2nd test entry to see if the script reached this point
exit();
// Test.php
SaveToTestTable(1);
sleep(10);
SaveToTestTable(2);
exit();
If I run BatchMonitor.php in the browser it works fine. As expected, it saves 1 to the monitor table and calls Test.php, which saves 1 to the test table; the original BatchMonitor.php continues without waiting for a response and saves 2 to the monitor table before exiting; then 10 seconds later the test page saves 2 to the test table before exiting. The second script starts fine, the first script does not wait for a reply, and all parameters are correctly passed between scripts. With everything working as intended, I then changed the system to work as a repeating loop by calling itself (with delay) instead of another script...
// BatchMonitor.php
SaveToMonitorTable(1); // save 1st test entry to see if the script reached this point
$Command = '"" "C:\Program Files (x86)\PHP\v5.3\php.exe" C:\inetpub\wwwroot\BatchMonitor.php --Instance=' . $Data->Instance;
pclose(popen("start /B $Command", "r"));
SaveToMonitorTable(2); // save 2nd test entry to see if the script reached this point
exit();
If I run BatchMonitor.php in the browser it runs once and that is it. It saves 1 to the database, waits 10 seconds and then saves 2 to the database before exiting. The page returns successfully with no script or PHP errors, but it doesn't repeat as it should.
Both BatchMonitor.php and Test.php use line-for-line identical functions to get the parameters, and both files run correctly and identically on the first iteration. If I use exec instead of popen then the page loops correctly with all logic working as expected (with the one obvious flaw of creating a never-ending chain of scripts waiting for response values that will never come).
Am I missing something obvious? Does popen have some sort of secret rule that prevents a page/process from opening duplicates of itself? Are there any alternatives to using popen or exec? I read about WScript.Shell, but it might be a while before I can get that enabled, so for now it's not an option and I'm hoping there is something more standard that I can use.
I don't feel like this should be your actual answer, but why do you abandon scheduled tasks/cron jobs just because you want something done every X seconds? Having the script minute.php call 5seconds.php with, of course, 5-second intervals in between would create a repeated task every 5 seconds, right?
Strangely enough, you are kind of using the same sort of mechanism from your browser already.
My only concern would be to take the processing time into account and create a safe script which ensures no more than one '5seconds.php' can run at any given time, as sketched below.
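A minimal lock-file sketch, assuming a writable temp directory (the lock file name is just an illustration):
// 5seconds.php: refuse to start if another instance holds the lock
$lock = fopen(sys_get_temp_dir() . '/5seconds.lock', 'c');
if ($lock === false || !flock($lock, LOCK_EX | LOCK_NB)) {
    exit; // another instance is already running
}
// ... do the actual work here ...
flock($lock, LOCK_UN); // release the lock
fclose($lock);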

PHP: redirect immediately and continue execution without sending data

I need to execute a bunch of functions that take nearly 50 seconds to complete. But I want to redirect to a page after the first few functions have executed, and also continue the remaining execution.
The functions below execute on form submit, as in process.php (it won't echo anything):
// all functions are in a separate process page
func1();
func2();
func3();
// after the above 3 functions I need PHP to redirect immediately to example.com
func4();
func5();
func6();
I tried using a PHP header, but it doesn't redirect immediately:
header("Location: http://www.example.com");
If I use exit(); it will redirect, but it won't process functions 4, 5 and 6.
What I need is some way to redirect immediately after the first 3 functions and continue execution.
On paper, you can achieve this by combining flush() and ignore_user_abort():
ignore_user_abort(true);
do_stuff();
send_redirect();
flush();
do_more_stuff();
Manual pages:
http://php.net/manual/en/function.flush.php
http://php.net/manual/en/function.ignore-user-abort.php
Note the known caveats: some browsers (old IEs in particular, and possibly newer ones) want a minimum number of bytes received before processing what you send them, so you might end up needing to toss in a long string in an HTML comment for it to work as expected.
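A fuller sketch of the same idea, assuming the redirect is the response you want to send and that the SAPI honors Connection: close (behavior varies between servers, so treat this as a starting point rather than a guaranteed recipe):
ignore_user_abort(true); // keep running after the client disconnects
ob_start();
do_stuff(); // the quick part
header('Location: http://www.example.com/');
header('Connection: close');
header('Content-Length: ' . ob_get_length()); // tell the client the response is complete
ob_end_flush();
flush();
do_more_stuff(); // the slow part, after the client has moved on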
In practice, the more conventional approach is to register a task in some table, and have a cron.php file take care of pending tasks in a completely separate (and independent) request.
A less conventional approach is also highlighted in the comments: issue a shell command or something of that order; be very careful to sanitize the input if you do that.
Adding this for reference (see comments below):
<!-- IE bug fix: pad the page with enough characters such that it is greater than 512 bytes, even after gzip compression abcdefghijklmnopqrstuvwxyz1234567890aabbccddeeffgghhiijjkkllmmnnooppqqrrssttuuvvwwxxyyzz11223344556677889900abacbcbdcdcededfefegfgfhghgihihjijikjkjlklkmlmlnmnmononpopoqpqprqrqsrsrtstsubcbcdcdedefefgfabcadefbghicjkldmnoepqrfstugvwxhyz1i234j567k890laabmbccnddeoeffpgghqhiirjjksklltmmnunoovppqwqrrxsstytuuzvvw0wxx1yyz2z113223434455666777889890091abc2def3ghi4jkl5mno6pqr7stu8vwx9yz11aab2bcc3dd4ee5ff6gg7hh8ii9j0jk1kl2lmm3nnoo4p5pq6qrr7ss8tt9uuvv0wwx1x2yyzz13aba4cbcb5dcdc6dedfef8egf9gfh0ghg1ihi2hji3jik4jkj5lkl6kml7mln8mnm9ono
-->
References:
http://www.clintharris.net/2009/ie-512-byte-error-pages-and-wordpress/
http://core.trac.wordpress.org/ticket/8942
http://core.trac.wordpress.org/ticket/11289
(Actually, this applies only to HTTP error pages, after re-reading it.)
Try splitting your process.php into two files: process.php and process_more.php (for example). Assuming you are on a Unix server:
Code for process.php :
// code to execute before redirect
func1();
func2();
func3();
// code to execute after redirect, in the background
// you can pass some parameters
$command = "php -f /path/to/process_more.php param1 param2";
exec($command . " > /dev/null 2>&1 &");
header('Location: http://dn.tld/');
exit();
See the $argv variable for how to read the parameters, and be careful to prevent the user from injecting another command within the parameters (a minimal escaping sketch follows the process_more.php code below).
Code for process_more.php:
func4();
func5();
func6();
Note that you won't be able to access the $_GET or $_POST variables in process_more.php. You need to pass any variable you want within the call command.
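A minimal sketch of the escaping side, using a hypothetical form field named param1:
// process.php: escape each value before building the command line
$param1 = escapeshellarg($_POST['param1']); // hypothetical form field
$command = "php -f /path/to/process_more.php $param1";
exec($command . " > /dev/null 2>&1 &");
// process_more.php: read the value back from $argv
// ($argv[0] is the script path, $argv[1] is the first parameter)
$param1 = $argv[1];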

PHP sleep() inside loop not updating DB

I have a PHP file that is fired by a cron job every minute.
When the PHP file is fired it updates the database, sleeps, etc.
It is programmed like this:
$start = microtime(true);
set_time_limit(10);
for ($i = 0; $i < 5; $i++) {
    updateDB();
    time_sleep_until($start + $i + 1);
}
If this piece of code is run, I don't see any changes happening in the database. Another thing I noticed is that when I echo something, it is printed in one piece when the loop has ended.
[edit] I tried using flush and ob_flush, but it still didn't print line for line [/edit]
What can I do to avoid these errors? The database needs to be updated.
Another thing I was wondering is what the best way is to log this kind of thing. Can I log the results to a log file?
The loop itself looks fine. If it isn't updating your database, the error must be in your updateDB() function.
As to the echo thing: the output of scripts is often buffered. To force PHP to print it right away, you can either call flush() whenever you want the output flushed, or just call ob_implicit_flush() at the top of the script and it will flush automatically every time you print something.
Also, if you are calling the script via a browser, the browser itself may further buffer the response before showing it to you.
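For illustration, a minimal sketch of the implicit-flush approach (the loop body is just an example):
ob_implicit_flush(true); // flush automatically after every echo
for ($i = 0; $i < 5; $i++) {
    echo "iteration $i\n"; // appears immediately instead of all at once
    sleep(1);
}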
And as to the logging, the simplest way is to pick a file somewhere and just use file_put_contents() to print whatever you want logged. Note the FILE_APPEND flag for the third parameter.
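Something like this minimal sketch (the log path is a placeholder):
function logLine($message) {
    // FILE_APPEND adds to the file instead of overwriting it;
    // LOCK_EX avoids interleaved writes from overlapping runs
    file_put_contents('/tmp/cron.log', date('c') . ' ' . $message . "\n", FILE_APPEND | LOCK_EX);
}
logLine('updateDB() finished');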
Looks like you are running from the command line; in this case you may want to write to STDERR so that there is no buffering:
$stderr = fopen('php://stderr', 'w');
fwrite($stderr, "something happened\n");
In the case of logging, just open a file, write to it, and close it (fopen, fwrite, fclose).

Can PHP tell when the browser goes away?

If I'm generating a stream of data to send out to a browser, and the user closes the browser, can I tell within PHP that I don't need to bother generating or sending the rest of the stream? I'd like to insert something into this loop:
while (!feof($pipes[1])) {
    echo fgets($pipes[1]);
}
My fallback plan is to have the browser use a JavaScript onunload to hit another PHP page to kill the process that's generating the data, but it would be cleaner if PHP could tell when I'm echoing to nowhere.
By default PHP will abort the script if the user navigates away. There are, however, times when you don't want this to happen, so PHP has a configuration setting for it called ignore_user_abort.
http://php.net/manual/en/misc.configuration.php
There's also a function called register_shutdown_function() that is supposedly executed when execution halts. I've never actually used it, so I won't vouch for how well it works, but I thought I'd mention it for completeness.
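For reference, a minimal sketch of registering such a handler (the logged message is just an illustration):
register_shutdown_function(function () {
    // runs when the script finishes, exits, or is aborted
    error_log('script ended at ' . date('c'));
});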
I believe the script will automatically abort when loaded normally (no Ajax). But if you want to implement some sort of long polling via PHP using XMLHttpRequest, I think you will have to do it with some sort of JavaScript, because then PHP can't detect it. I'd also like to know the precise case.
These answers pointed me towards what I was looking for. The underlying process needed special attention to kill it. I needed to jump out of the loop. Thanks again, Stack Overflow.
while (!feof($pipes[1]) && !connection_aborted()) {
    echo fgets($pipes[1]);
}
if (connection_aborted()) {
    exec('kill -4 ' . $mypid);
}

PHP: How to return information to a waiting script and continue processing

Suppose there are two scripts, Requester.php and Provider.php, and Requester requires processing from Provider and makes an HTTP request to it (Provider.php?data="data"). In this situation, Provider quickly finds the answer, but to maintain the system it must perform various updates throughout the database. Is there a way to immediately return the value to Requester, and then continue processing in Provider?
Pseudocode
Provider.php
{
    $answer = getAnswer($_GET['data']);
    echo $answer;
    // SIGNAL TO REQUESTER THAT WE ARE FINISHED
    processDBUpdates();
    return;
}
You can flush the output buffer with the flush() command.
Read the comments in the PHP manual for more info
I use this approach for running a process in the background (works on Linux).
The process runs with its output redirected to a file.
That way, if I need to display status on the process, it's just a matter of writing a small amount of code to read and display the contents of the output file.
I like this approach because it means you can completely close the browser and easily come back later to check on the status.
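A minimal sketch of that approach (the paths and the worker script name are placeholders, not the original code):
// launch the worker in the background, appending its output to a log file
exec('php /path/to/worker.php >> /tmp/worker.log 2>&1 &');
// later, to display status, just read the output file back
echo nl2br(htmlspecialchars(file_get_contents('/tmp/worker.log')));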
You basically want to signal the end of one process (return to the original Requester.php) and spawn a new process (finish Provider.php). There is probably a more elegant way to pull this off, but I've managed this a couple of different ways. All of them basically result in exec-ing a command in order to shell off the second process.
Adding > /dev/null 2>&1 & to the end of your command will allow it to run in the background without inhibiting the actual execution of your current script.
Something like the following may work for you:
exec("wget -O - \"$url\" > /dev/null 2>&1 &");
-- though you could do it as a command line PHP process as well.
You could also save the information that needs to be processed and handle the remaining processing on a cron job that re-creates the same sort of functionality without the need to exec.
I think you'll need the Provider to send the data (be sure to flush), and then on the Requester, use fopen/fread to read an expected amount of data, so you can drop the connection to the Provider and continue. If you don't specify an amount of data to expect, I would think the Requester would sit there waiting for the Provider to close the connection, which probably doesn't happen until the end of its run (i.e. all the secondary work-intensive tasks are complete). You'll need to try out a few POCs.
Good luck.
Split the Provider in two: ProviderCore and ProviderInterface. In ProviderInterface just do the "quick and easy" part, also save a flag in database that the recent request hasn't been processed yet. Run ProviderCore as a cron job that searches for that flag and completes processing. If there's nothing to do, ProviderCore will terminate and retry in (say) 2 minutes.
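A minimal sketch of the flag idea, assuming a PDO connection and a hypothetical pending_tasks table:
// ProviderInterface.php: do the quick part, then record the pending work
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO pending_tasks (data, processed) VALUES (?, 0)');
$stmt->execute(array($data));
// ProviderCore.php (run from cron): finish anything still flagged
foreach ($pdo->query('SELECT id, data FROM pending_tasks WHERE processed = 0') as $row) {
    processDBUpdates($row['data']); // the slow part from the pseudocode above
    $pdo->prepare('UPDATE pending_tasks SET processed = 1 WHERE id = ?')->execute(array($row['id']));
}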
I'm going out on a limb here, but perhaps you should try cURL or use a socket to update the requester?
You could start another PHP process in Provider.php using pcntl_fork(). (Note that the pcntl extension is generally only available in CLI builds of PHP, not under a web server SAPI.)
Provider.php
{
    // Fork the process
    $pid = pcntl_fork();
    // You are now running both a daemon (child) process and the parent
    // process through the rest of the code below
    if ($pid > 0) {
        // PARENT process
        $answer = getAnswer($_GET['data']);
        echo $answer;
        // SIGNAL TO REQUESTER THAT WE ARE FINISHED
        return;
    }
    if ($pid == 0) {
        // DAEMON (child) process
        processDBUpdates();
        return;
    }
    // If you get here, the fork failed ($pid == -1)
    handleDaemonErrorCondition();
    return;
}
