PHP CLI multiple background processes limitation

Server Information:
CentOS 6.5
12GB RAM
Intel(R) Xeon(R) CPU E5-2430 (6 cores x 2.20GHz)
PHP CLI 5.5.7
I am trying to use Perl to fire off 1000 PHP CLI processes in parallel. This takes 9.9 seconds, versus 2.3 seconds for the equivalent Perl script. When I test with the Perl script /opt/test.pl, all 1000 processes are launched in parallel (verified with ps -eLf | grep -ic 'test.pl'). When I test with /opt/testphp.php and count with ps -eLf | grep -ic 'testphp.php', I see a count of 250, which rises to 580 and then drops to 0; the script is executed 1000 times, just not in parallel.
Is there a limitation preventing a high number of PHP CLI processes from being launched in parallel?
Has anyone experienced this issue?
Please let me know if I have left out anything that would help to identify the issue.
Thanks
Perl launcher script:
use Time::HiRes qw/ time sleep /;
my $command = '';
my $start = time;
my $filename = '/tmp/report.txt';
# open(my $fh, '>', $filename) or die "Could not open file '$filename' $!";
for my $i (1 .. 1000) {
    # $command = $command . "(perl /opt/test.pl &);";      # takes 2.3 seconds
    $command = $command . "(php -q /opt/testphp.php &);";  # takes 9.9 seconds
}
system($command);
my $end = time;
print 'Total time taken: ', ( $end - $start ), "\n";
PHP file (testphp.php):
<?php
sleep(5);
$time = microtime(true);
file_put_contents('/tmp/report_20140804_php.log', "This is the record: $time\n", FILE_APPEND);
Perl file (test.pl):
#!/usr/bin/perl
use Time::HiRes qw/ time sleep /;
sleep(5);
my $start = time;
my $filename = '/tmp/report_20140804.log';
open(my $fh, '>>', $filename) or die "Could not open file '$filename' $!";
print $fh "Successfully saved entry $start\n";
close $fh;

Related

How to get the status of a particular process with shell_exec()?

I want to get the status of a process by passing its name to a command executed with shell_exec().
I have tried this:
$checkProcessStatus = "ps aux | grep <ProcessName>";
$status = shell_exec($checkProcessStatus);
dd($status);
I got this result:
user 17072 0.0 0.2 166216 33332 pts/3 S+ 11:31 0:00 <ProcessName> artis
user 20397 0.0 0.0 14232 868 pts/3 S+ 11:52 0:00 grep <ProcessName>
I want only the status, like "Running" or "Sleeping".
Here is the working code:
<?php
$command = 'ps aux';
$result = shell_exec($command);

// split into rows
$processes = explode("\n", $result);
// delete the header row
array_shift($processes);

// analyze
foreach ($processes as $rawProcess) {
    // beware: the command-line column may include spaces, which is why the last group is (.+)
    preg_match('/(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(.+)/', $rawProcess, $matches);
    // preg_match didn't find anything
    if (empty($matches)) {
        continue;
    }
    // sleeping status
    if (strpos($matches[8], 'S') === 0) {
        echo $rawProcess;
        echo "\n";
        continue;
    }
    // running status
    if (strpos($matches[8], 'R') === 0) {
        echo $rawProcess;
        echo "\n";
        continue;
    }
    // neither sleeping nor running
}
You can use $matches[N] for any column.
By the way, you can use awk to filter rows by status:
ps aux | awk 'substr($8,1,1) == "S" || substr($8,1,1) == "R"'
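If you want that same filter from PHP, here is a minimal sketch (assuming ps and awk are on the PATH; the helper name is made up for illustration):
<?php
// Hypothetical helper: return ps rows whose STAT column starts with
// one of the given letters (e.g. 'S' for sleeping, 'R' for running).
function processesByStatus(array $states)
{
    $conditions = array_map(
        function ($s) { return sprintf('substr($8,1,1) == "%s"', $s); },
        $states
    );
    // Builds e.g.: ps aux | awk 'substr($8,1,1) == "S" || substr($8,1,1) == "R"'
    $cmd = "ps aux | awk '" . implode(' || ', $conditions) . "'";
    $out = shell_exec($cmd);
    return $out === null ? array() : array_filter(explode("\n", $out));
}

print_r(processesByStatus(array('S', 'R')));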
P.S.
The status codes mean:
D uninterruptible sleep (usually IO)
R running or runnable (on run queue)
S interruptible sleep (waiting for an event to complete)
T stopped by job control signal
t stopped by debugger during the tracing
W paging (not valid since the 2.6.xx kernel)
X dead (should never be seen)
Z defunct ("zombie") process, terminated but not reaped by its parent
The additional status flags mean:
< high-priority (not nice to other users)
N low-priority (nice to other users)
L has pages locked into memory (for real-time and custom IO)
s is a session leader
l is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
+ is in the foreground process group

PHP: executing a bash script with the at command

Hello, I'm trying to execute a shell command in my PHP script but it is not working.
My PHP script:
// I save the order
$holdedOrder->save();
$id = $holdedOrder->id;

$old_path = getcwd();
chdir(__DIR__ . '/../');

$scriptFile = 'anacron_job_unhold_order.sh';
$bool = file_exists($scriptFile);
// $bool is true!

// this command works in a shell but not here; I don't know why
$s = shell_exec("echo \"/usr/bin/bash $scriptFile $id\" | /usr/bin/at now +$when");
chdir($old_path);
return [$s, $bool];
$when has a valid value such as 4 hours or 4 days.
The resulting command will be:
echo bash anacron_job_unhold_order.sh 29 | at now +1 minutes
The output is null. Trying it with exec() returns exit code 127.
Edit:
I removed www-data from /etc/at.deny and still have the same problem.
The command shell_exec("echo \"/usr/bin/bash $scriptFile $id\" | /usr/bin/at now +$when"); is probably causing the issue; you should take a closer look at how at works.
at normally reads the job from STDIN and is invoked as:
at [options] timespec
To supply the job on the command line instead of via STDIN, use the shell's here-string syntax:
at now + 1 minute <<< 'touch /tmp/test'
So the PHP code should be something like:
$s = shell_exec("/usr/bin/at now + {$when} <<< '/usr/bin/bash {$scriptFile} {$id}'");
Or, with exec() so the exit code can be inspected:
exec("/usr/bin/at now + {$when} <<< '/usr/bin/bash {$scriptFile} {$id}'", $output, $return_var);
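One caveat: <<< is a bash here-string, and shell_exec()/exec() run commands through /bin/sh, which on some distributions (e.g. dash on Debian/Ubuntu) does not support it. A sketch that avoids here-strings and shell-escapes the dynamic parts (paths as in the question; assuming $when holds a trusted value like "4 hours"):
<?php
// Schedule the script via at by piping the job line through echo.
// $scriptFile and $id are escaped before being embedded in the command.
$job = sprintf('/usr/bin/bash %s %s', escapeshellarg($scriptFile), escapeshellarg($id));
$cmd = sprintf('echo %s | /usr/bin/at now + %s 2>&1', escapeshellarg($job), $when);
exec($cmd, $output, $exitCode);

if ($exitCode !== 0) {
    // at prints its diagnostics (including the job number) on stderr,
    // which 2>&1 redirects into $output for inspection.
    error_log('at failed: ' . implode("\n", $output));
}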

Run Node.js script from PHP - output is truncated to 512 characters

We run a Node.js CLI script from PHP with Symfony Process.
The script always prints the whole response as JSON on one line.
The response is somehow truncated at 512 characters.
I only found xdebug.var_display_max_data = 512 in php.ini, but I don't see how this is related.
Adapter > Symfony Process > node script.js
A) Test the Node script
From the terminal, $ node user-update.js parameters returns the full result in all cases (e.g. 629 chars).
From Symfony Process, the node script's response is truncated to 512 chars.
B) Test Symfony Process
$process = new Process($cmd);
try {
    $process->mustRun();
    $response = $process->getOutput();
} catch (ProcessFailedException $e) {
    $response = $e->getMessage();
}
echo $response;
echo PHP_EOL;
echo strlen($response);
$cmd = 'node user-update.js parameters'; - truncated to 512.
$cmd = 'php -r \'for($i=0; $i<520; $i++){ echo "."; }\''; - does not truncate.
$cmd = 'cat long_one_line.txt'; - print full file. 1650 chars in one line.
C) Try with PHP shell functions
$response = shell_exec($cmd); // response is truncated to 512
system($cmd, $returnVal); // prints directly to STDOUT, truncated to 512
What could be the cause and solution?
node v7.6.0
PHP 7.1.2
I suspect your process is ending before the buffer can be read by PHP.
As a work-around you can add something like this:
// The `| cat` at the end of this line means we wait for
// cat's process to end instead of node's process.
$process = new Process('node user-update.js parameters | cat');
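For context, a self-contained sketch of that workaround (assuming Symfony Process 3.x, where the constructor accepts a shell command string):
<?php
require __DIR__ . '/vendor/autoload.php';

use Symfony\Component\Process\Process;
use Symfony\Component\Process\Exception\ProcessFailedException;

// Piping through `cat` keeps the pipe open until all of node's output
// has been flushed, so PHP reads the complete response.
$process = new Process('node user-update.js parameters | cat');

try {
    $process->mustRun();
    $response = $process->getOutput();
} catch (ProcessFailedException $e) {
    $response = $e->getMessage();
}

echo $response, PHP_EOL, strlen($response), PHP_EOL;
In Symfony 4.2+ the string form would go through Process::fromShellCommandline() instead.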

How to set up a timeout on a PHP exec() command on Windows?

I am running some exec() commands with the PHP CLI.
Some of these exec() calls take too long.
So my idea is to set up a 60-second timeout on the exec().
I found some solutions for Linux, but I could not adapt them to Windows (pipes/processes...).
Any idea how to trigger a timeout on a Windows PHP CLI exec() command?
$intExecutionTimeout = 60;
$strCommand = 'wget http://google.com';
// Linux-only: wrap the command with GNU coreutils' timeout
$strCommand = 'timeout --signal=15 --kill-after=' . ( $intExecutionTimeout * 2 ) . 's ' . $intExecutionTimeout . 's ' . $strCommand . ' &';
exec( $strCommand, $arrstrResponse );
Try the timeout command in the CLI (note the single-quoted format string: in a double-quoted PHP string, the "\t" in "System32\timeout.exe" would become a tab character):
$time = 60;
$command = 'wget http://google.com';
exec(sprintf('C:\Windows\System32\timeout.exe /t %d %s', $time, $command), $output);
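Be aware that timeout.exe is primarily a delay command, so depending on the Windows version it may not supervise and kill another process the way GNU timeout does. A portable alternative is to enforce the limit from PHP itself with proc_open(); a minimal sketch (the helper name and the wget command are placeholders):
<?php
// Hypothetical helper: run $cmd and kill it if it exceeds $timeout seconds.
// Output goes to a temp file rather than a pipe, because non-blocking
// pipe reads are unreliable on Windows.
function execWithTimeout($cmd, $timeout)
{
    $outFile = tempnam(sys_get_temp_dir(), 'cmd');
    $spec = array(
        1 => array('file', $outFile, 'w'), // stdout -> temp file
        2 => array('file', $outFile, 'a'), // stderr -> same file
    );
    $proc = proc_open($cmd, $spec, $pipes);
    if (!is_resource($proc)) {
        return array(-1, '');
    }

    $start = time();
    do {
        usleep(100000); // poll every 100 ms
        $status = proc_get_status($proc);
        if (!$status['running']) {
            $exitCode = $status['exitcode'];
            break;
        }
        if (time() - $start >= $timeout) {
            proc_terminate($proc); // ask the OS to end the process
            $exitCode = -1;
            break;
        }
    } while (true);

    proc_close($proc);
    $output = file_get_contents($outFile);
    unlink($outFile);
    return array($exitCode, $output);
}

list($code, $out) = execWithTimeout('wget http://google.com', 60);
On Windows, proc_open() runs the command through cmd.exe, so if the command spawns children you may additionally need taskkill /T /F /PID <pid> to kill the whole process tree.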

Need a way to selectively kill sleeping PHP processes

I have a website that periodically accumulates a large number of sleeping PHP processes. My hosting service sets a limit of 20 concurrently running processes; if it goes over the limit, my site goes down with a 503 error.
It is a rare occurrence and doesn't seem to have any correlation to the number of people visiting my site.
As a safeguard I would like to have a cron job with a PHP script that kills PHP processes that have been sleeping for over 10 minutes.
I have a PHP function that kills all MySQL processes that have been sleeping for more than 10 minutes:
public function kill_sleeping_mysql_processes()
{
    $result = $this->db->query("SHOW FULL PROCESSLIST");
    foreach ($result->result_array() as $row)
    {
        if ($row['Command'] == "Sleep" && $row['Time'] > 600)
        {
            $this->db->query("KILL {$row['Id']}");
        }
    }
}
The question is: how do I do the same with PHP processes?
I can get a readout of PHP processes with this code:
exec("ps aux | less", $output);
and I can kill specific PHP processes with this code if I have the PID:
$pid = 11054;
exec("kill -9 $pid");
But how can I selectively kill PHP processes that have been sleeping for more than 10 minutes?
I cobbled something together. It is not elegant and a bit of a hack, but it seems to work, although I am going to test it further before putting it in a cron job.
public function kill_dormant_php_processes()
{
    $output_array = array();
    exec("ps aux | grep -v grep", $ps_output);
    // drop the USER PID %CPU ... header row
    array_shift($ps_output);

    if (count($ps_output) > 0)
    {
        $i = 0;
        foreach ($ps_output as $ps)
        {
            $ps = preg_split('/ +/', $ps);
            $output_array[$i] = new stdClass();
            $output_array[$i]->pid  = $ps[1];
            $output_array[$i]->stat = $ps[7];
            $output_array[$i]->time = $ps[9];
            $i++;
        }
    }

    if ( ! empty($output_array))
    {
        foreach ($output_array as $row)
        {
            if ($row->stat == 'S' && date('H:i', strtotime($row->time)) > date('H:i', strtotime('00:01')))
            {
                exec("kill -9 $row->pid");
            }
        }
    }
}
I am sure there must be a better way to do it; a sketch of an alternative follows the sample line below.
Could someone explain why 00:01 in the readout seems to translate to 6 minutes?
freedom 6933 6.0 0.1 57040 13040 ? S 16:55 0:01 /usr/local/bin/php53.cgi -c .:/home/freedom/:/etc index.php
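For what it's worth, the TIME column in ps aux is accumulated CPU time, not elapsed wall-clock time, so a process that has been alive (mostly sleeping) for about 6 minutes can still show 0:01. To select on real elapsed time, ask ps for it directly; a sketch, assuming a ps new enough to support -o etimes (elapsed seconds) and php-cgi processes as in the sample line:
<?php
// List PID, elapsed seconds, state, and command name, one process per row.
exec('ps -eo pid,etimes,stat,comm --no-headers', $lines);

foreach ($lines as $line) {
    list($pid, $etimes, $stat, $comm) = preg_split('/\s+/', trim($line), 4);
    // Interruptible sleep ('S'), alive for more than 10 minutes, PHP only.
    if (strpos($stat, 'S') === 0 && $etimes > 600 && strpos($comm, 'php') === 0) {
        exec('kill -9 ' . (int) $pid);
    }
}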
As an alternative to the PHP script shared here, you can use the killall command with an "older than" time filter (using the -o option) to kill all those processes.
For example, this command will kill all php-cgi processes that have been running for more than 30 minutes:
killall -o 30m /usr/bin/php-cgi
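If you go the killall route, the cron job's PHP script reduces to a single call; a minimal sketch (php-cgi path as in the answer above):
<?php
// Kill all php-cgi processes that have been alive for more than 10 minutes.
// killall exits non-zero when nothing matched, which is fine for a cleanup job.
exec('killall -o 10m /usr/bin/php-cgi 2>&1', $output, $exitCode);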
