Is it necessary to fclose pipes and proc_close processes - php

I am using a script to open (proc_open) from 5 to 50 processes one after another. Each of them does a cURL request and posts the results to the DB. I do not want to wait for their execution results; I just want them to run while the main script is executing and after it finishes. I use set_time_limit(10); in each process file.
proc_close waits for the process to terminate, and returns its exit code.
On the web I found that PHP automatically closes all pipes and each process once the main file has finished executing, without calling proc_close and fclose, but I did not find any documented proof.
So the question is: Is it necessary to fclose pipes and proc_close processes?
And can it be a problem if 100-200 users run this script at the same time, each opening 5 to 50 processes, while proc_close and fclose are never called?
If you have a more elegant way of doing this task, please tell me, but first of all I need the information regarding my current approach. Thanks a lot.
The code I use to start each process is (part of a function):
$params = addcslashes(serialize($params), '"');
$command = $this->phpPath.' -q '.$filename.' --params "'.$params.'"';
++$this->lastId;
// Keep the process handle and its pipes so they can be closed later
$this->handles[$this->lastId] = proc_open($command, $this->descriptorSpec, $pipes);
$this->streams[$this->lastId] = $pipes[1]; // child's stdout
$this->pipes[$this->lastId] = $pipes;
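For reference, explicit cleanup over the maps above would look roughly like this (a hypothetical closeAll() method sketched from the properties shown, not something the manual mandates):

public function closeAll()
{
    foreach ($this->handles as $id => $handle) {
        // Close all pipes first; proc_close can block while the
        // child still has an open pipe it is trying to write to.
        foreach ($this->pipes[$id] as $pipe) {
            if (is_resource($pipe)) {
                fclose($pipe);
            }
        }
        // Waits for the process to terminate and returns its exit code
        proc_close($handle);
    }
    $this->handles = $this->streams = $this->pipes = array();
}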

Related

Improve / prevent PHP script running twice (with cron jobs)

I'd like to improve the script below, or find out if there is a better way to rewrite it for better results.
I use this in two files, cron1.php and cron2.php, executed every 5 seconds, and I need to prevent them running twice.
Script execution time depends on the file size; most of the time it takes around 2 seconds, but for huge files it can take 25-30 seconds, which is why I need to stop a second execution.
Am I on the right track? Any suggestions for improvement?
$fp = fopen("cron.lock", "a+");
if (flock($fp, LOCK_EX | LOCK_NB)) {
    echo "task started\n";
    // Here is my long script
    // Cron run every 5 seconds
    sleep(2);
    flock($fp, LOCK_UN);
} else {
    echo "task already running\n";
    exit;
}
fclose($fp);
I generally do a file operation like dumping getmypid() into the lock file, so that externally I can tell which PID holds the lock. In some debugging cases that is helpful.
Finally, unlink the lock file when you are done.
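In code, that could look like this (a sketch building on the lock snippet above; the truncate-and-write details are one way to do it, not the only one):

$fp = fopen("cron.lock", "a+");
if (flock($fp, LOCK_EX | LOCK_NB)) {
    // Record our PID so `cat cron.lock` shows who holds the lock
    ftruncate($fp, 0);
    fwrite($fp, getmypid() . "\n");
    fflush($fp);

    // ... long-running work ...

    flock($fp, LOCK_UN);
    fclose($fp);
    unlink("cron.lock"); // clean up when done
} else {
    fclose($fp);
    echo "task already running\n";
}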

Store php exec in a session variable

Is it possible to store an exec output in a session variable while it's running, to see its current progress?
example:
index.php
<?php exec("very large command to execute", $array, $_SESSION['output']); ?>
follow.php
<?php echo $_SESSION['output']; ?>
So when I run index.php, I could close the page, navigate to follow.php, and follow the output of the command live every time I refresh the page.
No, because exec waits for the spawned process to terminate before it returns. But it should be possible with proc_open, because that function exposes the output of the spawned process as streams and does not wait for it to terminate. So in broad terms you could do this (a sketch follows the steps):
Use proc_open to spawn a process and redirect its output to pipes.
Use stream_select inside some kind of loop to see if there is output to read; read it with the appropriate stream functions when there is.
Whenever output is read, call session_start, write it to a session variable and call session_write_close. This is the standard "session lock dance" that allows your script to update session data without holding a lock on them the whole time.
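Putting those steps together (the command and session key are placeholders):

$proc = proc_open('my long running command', [1 => ['pipe', 'w']], $pipes);

while (!feof($pipes[1])) {
    $ready = [$pipes[1]];
    $write = $except = null;
    // Wait up to one second for new output to become readable
    if (stream_select($ready, $write, $except, 1) > 0) {
        $chunk = fread($pipes[1], 8192);
        if ($chunk !== false && $chunk !== '') {
            // The "session lock dance": reopen, append, release
            session_start();
            $_SESSION['output'] = ($_SESSION['output'] ?? '') . $chunk;
            session_write_close();
        }
    }
}
fclose($pipes[1]);
proc_close($proc);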
No, exec will run to completion and only then store the result in the session.
You should run a child process writing directly to a file and then read that file in your browser:
$path = tempnam(sys_get_temp_dir(), 'myscript');
$_SESSION['work_path'] = $path;

// Release session lock so follow.php can read the session meanwhile
session_write_close();

$process = proc_open(
    'my shell command',
    [
        0 => ['pipe', 'r'],
        1 => ['file', $path, 'w'], // child stdout goes straight to the file
        2 => ['pipe', 'w'],
    ],
    $pipes
);
if (!is_resource($process)) {
    throw new Exception('Failed to start');
}
fclose($pipes[0]);
fclose($pipes[2]);

// Blocks until the command finishes; follow.php can watch the file meanwhile
$return_value = proc_close($process);
In your follow.php you can then just output the current output:
echo file_get_contents($_SESSION['work_path']);
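Spelled out, follow.php just needs to reopen the session to find the path (a minimal sketch):

<?php
session_start();
// Stream whatever the background command has written so far
readfile($_SESSION['work_path']);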
No, you can't implement watching this way.
I advise you to write a status to a file from index.php and read the status from that file in follow.php.
As an alternative to a file you can use Memcache.
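For instance, with the Memcached extension (the older Memcache extension is similar; the key name is hypothetical):

// index.php: publish progress while the job runs
$m = new Memcached();
$m->addServer('127.0.0.1', 11211);
$m->set('job_progress', '42% done', 3600);

// follow.php: read it back on each refresh
$m = new Memcached();
$m->addServer('127.0.0.1', 11211);
echo $m->get('job_progress');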

how to handle infinite loop process when using proc_open

I use proc_open to execute a program written in C.
I am using a file for stdout:
$descriptorspec = array(
    0 => array("pipe", "r"),
    1 => array("file", "/tmp/example.output", "w"),
    2 => array("file", "/tmp/example.error", "a")
);
Everything is fine when I execute a well-behaved program, but a problem occurs when I execute a program with an infinite loop, like the one below:
#include <stdio.h>

int main() {
    while (1) {
        printf("Example");
    }
    return 0;
}
The file example.output fills up my hard disk, so I have to delete the file and restart my computer. My question is: how do I handle something like this?
Thanks :)
The only thing you can do is slaughter the offending process without prejudice using proc_terminate (but you can be polite and allow it to run for a while first, in effect imposing a time limit for it to complete).
For example:
$proc = proc_open(...);
sleep(20); // give the process some time to run
$status = proc_get_status($proc); // see what it's doing
if ($status['running']) {
    proc_terminate($proc); // kill it forcefully
}
Don't forget to clean up any handles you still have in your hands afterwards.
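With the descriptor spec from the question that cleanup is short, since only stdin is a pipe (a sketch):

fclose($pipes[0]);   // close the child's stdin pipe
proc_close($proc);   // reap the (now terminated) process
unlink('/tmp/example.output'); // optionally discard the runaway output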

PHP Pipe into a background process

I'm trying to use popen to run a php script in the background. However, I need to pass a (fairly large) serialized object.
$cmd = "php background_test.php >log/output.log &";
$fh = popen($cmd, 'w');
fwrite($fh, $data);
fclose($fh);
//pclose($fh);
Without the ampersand this code executes fine, but the parent script will wait until the child has finished running. With the ampersand, STDIN gets no data.
Any ideas?
You can try forking: let the child process write the data while the main script continues as normal.
Something like this:
// Fork a child process
$pid = pcntl_fork();

if ($pid == -1) {
    // Unable to fork
    die('error');
} elseif ($pid) {
    // We are the parent: do nothing
} else {
    // We are the child: feed the data to the background script, then exit
    $cmd = "php background_test.php >log/output.log";
    $fh = popen($cmd, 'w');
    fwrite($fh, $data);
    pclose($fh); // popen handles are closed with pclose
    exit();
}
// parent will continue here
// child will exit above
Read more about it here: https://sites.google.com/a/van-steenbeek.net/archive/php_pcntl_fork
Also check the function pcntl_waitpid() in the PHP documentation (zombies be gone).
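A hedged sketch of the reaping part, using pcntl_async_signals (PHP 7.1+) so the parent never blocks waiting for children:

// Reap finished children so they don't linger as zombies
pcntl_async_signals(true);
pcntl_signal(SIGCHLD, function () {
    while (pcntl_waitpid(-1, $status, WNOHANG) > 0) {
        // each iteration reaps one exited child
    }
});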
As far as I know there is no way in PHP to send a process to the background and keep feeding its STDIN (but maybe I'm wrong). You have two other choices here:
Refactor your background_test.php to take its input from the command line, and turn your command line into php background_test.php arg1 arg2 ... >log/output.log &
If your input is pretty long, write it to a temporary file and then feed the background_test.php script with that file, as in the following code.
Example for point 2:
<?php
$tmp_file = tempnam(sys_get_temp_dir(), 'bg');
file_put_contents($tmp_file, $data);
$cmd = "php background_test.php < $tmp_file > log/output.log &";
exec($cmd);
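And a sketch for point 1, with the payload passed as a single shell-escaped argument (assuming it fits within the OS argument-length limit):

// caller
$cmd = 'php background_test.php ' . escapeshellarg($data) . ' > log/output.log &';
exec($cmd);

// background_test.php
$data = $argv[1];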
Make a background process listen on a socket file. Then open the socket file from PHP and send your serialized data there. When your background daemon receives a connection through the socket, have it fork, read the data, then process it.
You would need to do some reading, but I think that's the best way to achieve this. By socket I mean a Unix socket file, but you can also use this over the network.
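A minimal sketch of that approach (the socket path is hypothetical; the daemon loop is stripped to its core):

// daemon.php: long-running listener
$server = stream_socket_server('unix:///tmp/worker.sock', $errno, $errstr);
while ($conn = stream_socket_accept($server, -1)) {
    if (pcntl_fork() === 0) {
        // child: read the serialized payload and process it
        $data = stream_get_contents($conn);
        fclose($conn);
        // ... unserialize($data) and do the work ...
        exit(0);
    }
    fclose($conn); // parent keeps listening
}

// client side (your web request): hand the job off and return at once
$sock = stream_socket_client('unix:///tmp/worker.sock', $errno, $errstr);
fwrite($sock, serialize($params));
fclose($sock);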
http://gearman.org/ is also a good alternative, as mentioned by @Joshua.

PHP - Blocking File Read

I have a file that is being appended to remotely (file.txt). From SSH, I can call tail -f file.txt, which will display the updated contents of the file. I'd like to be able to make a blocking call on this file that returns the last appended line. A polling loop simply isn't an option. Here's what I'd like:
$cmd = "tail -f file.txt";
$str = exec($cmd);
The problem with this code is that tail will never return. Is there any kind of wrapper function for tail that will kill it once it has returned content? Is there a better way to do this in a low-overhead way?
The only solution I've found is somewhat dirty:
<?php
$descriptorspec = array(
    0 => array("pipe", "r"), // stdin
    1 => array("pipe", "w"), // stdout
    2 => array("pipe", "w")  // stderr
);
$process = proc_open('tail -f -n 0 /tmp/file.txt', $descriptorspec, $pipes);
fclose($pipes[0]);
stream_set_blocking($pipes[1], 1);
$read = fgets($pipes[1]); // blocks until tail emits a line
fclose($pipes[1]);
fclose($pipes[2]);
// If I try to call proc_close($process); here, it hangs until a second line
// is passed to the file. Hence an inelegant kill in the next two lines:
$status = proc_get_status($process);
exec('kill ' . $status['pid']);
proc_close($process);
echo $read;
tail -n 1 file.txt will always return you the last line in the file, but I'm almost sure what you want instead is for PHP to know when file.txt has a new line, and display it, all without polling in a loop.
You will need a long-running process anyway if it is to check for new content, be it a polling loop that checks the file modification time and compares it to the last modification time saved somewhere else, or any other way.
You can even have PHP run via cron to do the check if you don't want it running in a PHP loop (probably best), or via a shell script that does the loop and calls the PHP file if you need runs more frequent than cron's one-minute limit. A sketch of such a check follows.
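As a variation, here is a sketch that tracks the last-read offset in a small state file rather than the mtime (file names are placeholders):

$state = '/tmp/file.txt.offset';
$last  = is_file($state) ? (int) file_get_contents($state) : 0;

clearstatcache(); // make filesize() re-stat the file
$size = filesize('file.txt');
if ($size > $last) {
    // Read only the newly appended bytes
    $fp = fopen('file.txt', 'r');
    fseek($fp, $last);
    echo stream_get_contents($fp);
    fclose($fp);
    file_put_contents($state, (string) $size);
}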
Another idea, though I haven't tried it, would be to open the file in a non-blocking stream and then use the quite efficient stream_select on it to have the system poll for changes.
