I have a script, accessible by the end user, that makes the following call:
exec("php orderWatcher.php $insertedId > /dev/null &");
In orderWatcher.php I do some operations that take a long time:
if (checkSomeStuff()) {
    sleep(60);
}
makeOtherStuff();
I'm aware that I can have as many PHP scripts running as there are users requesting them, but I'm not sure whether that remains true when I make an exec() call, since (as I understand it) exec() runs a shell-like command on the system.
Furthermore, suppose I run the following tests (these have been trimmed to keep them relevant to the question; the real ones do a lot more):
class OrderPlacerResultOrders extends UnitTestCase {
    function testSimple() {
        exec("php orderWatcher.php $insertedId > /dev/null &");
        // Wait for the exec'd script to finish
        sleep(65);
        $this->assertTrue(orderWatcherWorked(1));
        // No problem here
    }

    function testComplex() {
        for ($i = 0; $i < 100; ++$i) {
            exec("php orderWatcher.php $insertedId > /dev/null &");
        }
        // Wait a really long time
        sleep(1000);
        for ($i = 0; $i < 100; ++$i) {
            $this->assertTrue(orderWatcherWorked($i));
            // Failure around the 17th case
        }
    }
}
The tests aren't the main point; they made me question the following:
How many exec() calls to a PHP script can be made and handled by the server?
If there is a limit, does it make a difference whether the exec() calls come from two instances of the script (i.e. two different web users hitting the script that makes the exec() call), or from a single script (as in the tests)?
PS: Couldn't think of any tags besides php; if you can think of one, please tag the question.
Calling a command via exec() or via a URL doesn't change the number of scripts that can run at the same time.
The number of scripts a server can run simultaneously depends on its memory, along with a number of other factors (CPU, process limits, web server and PHP configuration).
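If you need to launch many background workers, you can throttle them so only a bounded number run at once. A minimal sketch (the worker name, the limit and $insertedIds are just examples; it assumes a Unix-like system with pgrep available):

$maxWorkers = 20; // tune to the server's memory and CPU
foreach ($insertedIds as $insertedId) {
    // The [o] bracket trick stops the pattern matching the shell running pgrep itself
    while ((int) exec("pgrep -fc '[o]rderWatcher.php'") >= $maxWorkers) {
        usleep(200000); // back off for 0.2 s
    }
    exec('php orderWatcher.php ' . escapeshellarg($insertedId) . ' > /dev/null 2>&1 &');
}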
Related
First of all, I'm using Windows.
I have the following code:
index.php
<?php
error_reporting(E_ALL);
$tiempo_inicio = microtime(true);
exec('C:\wamp\bin\php\php5.5.12\php.exe -e C:\wamp\www\mail.php > /dev/null &');
$tiempo_fin = microtime(true);
echo ($tiempo_fin - $tiempo_inicio);
?>
mail.php
<?php
$tiempo_inicio = microtime(true);
$logs = fopen("test.txt", "a+");
sleep(2);
$tiempo_fin = microtime(true);
fwrite($logs, ($tiempo_fin - $tiempo_inicio) . "\n");
sleep(4);
$tiempo_fin = microtime(true);
fwrite($logs, ($tiempo_fin - $tiempo_inicio) . "\n");
sleep(6);
$tiempo_fin = microtime(true);
fwrite($logs, ($tiempo_fin - $tiempo_inicio) . "\n");
fclose($logs);
echo 'fin';
?>
But it does not work as I hoped: what I want is to run the file in the background so the user doesn't have to wait for it to complete.
What am I doing wrong?
You're talking about non-blocking execution (where one process doesn't wait on another). PHP really can't do that very well natively, because it's designed around a single thread. Without knowing what your process does I can't comment precisely, but I can make some suggestions:
Consider asynchronous execution via AJAX. Marrying your script to JavaScript lets the client make the request, so your PHP script can run freely while AJAX opens another request that doesn't block activity on the main page. Just be sure to let the user know, visually, that you're waiting on data.
pthreads (repo) - multi-threaded PHP; opens another process in another thread.
Gearman - similar to pthreads, but the work can be queued and automated as well (see the sketch after this list).
cron job - fully asynchronous; runs a process on a regular interval. Consider having it do, say, data aggregation, so your script only fetches the aggregated data.
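For the Gearman option, here is a minimal sketch, assuming the PECL gearman extension and a gearmand server on localhost (the function name and payload are just examples):

// client.php - submits a job and returns immediately
$client = new GearmanClient();
$client->addServer(); // localhost:4730 by default
$client->doBackground('send_mail', json_encode(['to' => 'user@example.com']));

// worker.php - started separately (e.g. from the CLI); does the actual work
$worker = new GearmanWorker();
$worker->addServer();
$worker->addFunction('send_mail', function (GearmanJob $job) {
    $data = json_decode($job->workload(), true);
    // ... send the mail here ...
});
while ($worker->work());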
I've had success in the past on Windows using pclose/popen and the Windows start command instead of exec. The downside to this is that it is difficult to react to any errors in the program you are calling.
I would try something like this (I'm not on a machine to test this today):
$command_string = 'C:\wamp\bin\php\php5.5.12\php.exe -f "C:\wamp\www\mail.php" -- variablestopass';
pclose(popen("start /B ".$command_string, 'r'));
I saw a couple of other questions on the issue, but no clear answer.
I have a PHP file (it must be PHP; cron and other tools are not an option) running from the CLI, where I must call the same function multiple times with different arguments:
doWork($param1);
doWork($param2);
doWork($param3);

function doWork($data)
{
    // do stuff, write result to db
}
Each call makes HTTPS requests and parses the response. An operation can take up to a minute to complete, and I must prevent the "convoy effect": each call must execute without waiting for the previous one to complete.
PECL pthreads is not an option due to server constraints.
Any ideas?
As far as I know, you cannot do exactly what you are looking for.
Instead of calling a function with its parameters, you have to call another CLI PHP script in a non-blocking manner and put your function in that script.
This is your main script:
callDoWork($param1);
callDoWork($param2);
callDoWork($param3);
function callDoWork($param){
    // If $param contains spaces or other characters special to the
    // command line, escape it (e.g. with escapeshellarg()).
    $cmd = 'start "" /b php doWork.php ' . $param;
    pclose(popen($cmd, 'r'));
}
doWork.php would look like:
if (is_array($_SERVER['argv'])) $param = $_SERVER['argv'][1];
doWork($param);

function doWork($data)
{
    // do stuff, write result to db
}
More information about $argv can be found in the PHP manual.
How about appending "> /dev/null 2>/dev/null &" to the command?
exec('php myFile.php > /dev/null 2>/dev/null &');
You can check the documentation for more details.
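Note that on Windows there is no /dev/null; the equivalent bit bucket is NUL, and backgrounding with a trailing & doesn't work under cmd.exe (use the start-based approach above instead):

exec('php myFile.php > NUL 2>&1');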
I've written a PHP script that gathers all the data from an IceCast stream and stores it in an array. I want to measure how many listeners the stream has every five minutes. Is there a way to run the script remotely so that it "refreshes" every five minutes and puts the number of listeners into a database? Thanks!
A cron job is what you are looking for. You can search on SO/Google/etc. for how to create and set up a cron job.
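For example, a crontab entry like this would run a polling script every five minutes (the paths are hypothetical):

*/5 * * * * /usr/bin/php /path/to/poll_listeners.php

And a minimal sketch of the polling script itself, assuming Icecast 2.4+ (which exposes its stats as JSON at /status-json.xsl) and an example table name:

<?php
// poll_listeners.php - hypothetical host, credentials and table name
$json  = file_get_contents('http://example.com:8000/status-json.xsl');
$stats = json_decode($json, true);

// With a single mountpoint "source" is an object; with several it's an array.
$source    = $stats['icestats']['source'];
$listeners = isset($source['listeners'])
    ? (int) $source['listeners']
    : (int) $source[0]['listeners'];

$db = new PDO('mysql:host=localhost;dbname=stats', 'user', 'pass');
$db->prepare('INSERT INTO listener_counts (listeners, polled_at) VALUES (?, NOW())')
   ->execute([$listeners]);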
I have used the following snippets to run a script periodically. The main advantage is that the next run starts $min minutes (configurable) after the current process finishes; that cannot be achieved with cron, which runs exactly every X minutes regardless. See the difference? This way you can be sure a given amount of time passes between processes.
Maybe it's not exactly what you want, but I'd like to share this useful technique.
script_at.php file:
function init_at()
{
    // my code
    runNextPlease();
}

function runNextPlease()
{
    $min = 5;
    exec("at now + $min minutes -f " . PATH_TO_SOURCE . "script_at.sh", $output, $out);
    my_logger("at return status: $out");
}
script_at.sh file:
#!/bin/bash
/usr/bin/wget -c -t0 -o /dev/null -O /dev/null http://domain/script_at.php
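Note that this relies on the at daemon (atd) running on the server, and the user PHP runs as must be allowed to use at (see /etc/at.allow and /etc/at.deny).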
I have a ton of rows in MySQL. I'm going to ping an IP in each of these rows, so I'd like to split the load. I've made a script that starts a new process for every 100 rows in the database. The problem is that the parent script seems to wait for each of the child scripts to finish before starting the next one, which defeats the entire purpose.
This is the code of the important part of the parent script
for ($i = 0; $i < $children_num; $i++)
{
    $start = $bn["dots_pr_child"] * $i;
    exec("php pingrange.php $i $start $bn[dots_pr_child]");
}
It's worth mentioning that each of these child processes runs exec("ping") once per MySQL row. I'm thinking that's part of the problem.
I'll post more information and code on request.
Is there a way to force the PHP instance to run in the background and let the foreground continue? Preferably the parent script should take about 0.0001 seconds and print "Done". Currently it runs for 120 seconds.
Thanks for any and all help
Edit: I've tried adding an & after the command. One would think that'd make the exec() call return instantly, but nope.
Edit: Tried exec("php pingrange.php $i $start $bn[dots_pr_child] 2>&1 &"); without success
Edit: Tried exec("nohup php pingrange.php $i $start $bn[dots_pr_child] &"); without success
exec("php pingrange.php $i $start $bn[dots_pr_child] > /dev/null 2>/dev/null & ");
should do the work in the background. Note that both stdout and stderr must be redirected: exec() waits until the command's output streams are closed, so without the redirects the call blocks even with the trailing &.
Try nohup and the ampersand:
exec("nohup php pingrange.php $i $start $bn[dots_pr_child] &");
The Wikipedia article on nohup gives a pretty good description of what it does.
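With exec() you will most likely still need the output redirect in addition to nohup, since PHP waits until the command's output stream is closed; something like this (combining both answers):

exec("nohup php pingrange.php $i $start $bn[dots_pr_child] > /dev/null 2>&1 &");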
While it might be overkill for this specific task, consider Gearman, a message / work queue designed for exactly what you're doing: farming out tasks to workers. It has comprehensive PHP support.
If you want to pass on Gearman for now, take a peek at proc_open instead of exec. It's a bit more complex, but it gives you a higher degree of control that might work better for you.
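A minimal proc_open() sketch (the command and log paths are just examples): launch a child with its output pointed at log files, do other work while it runs, and collect its exit code afterwards:

$descriptors = [
    0 => ['pipe', 'r'],                  // child's stdin
    1 => ['file', '/tmp/ping.log', 'a'], // child's stdout
    2 => ['file', '/tmp/ping.err', 'a'], // child's stderr
];
$process = proc_open('php pingrange.php 0 0 100', $descriptors, $pipes);

if (is_resource($process)) {
    fclose($pipes[0]); // nothing to send to the child
    do {
        // ... do other work; the child runs concurrently ...
        usleep(100000);
        $status = proc_get_status($process);
    } while ($status['running']);
    // 'exitcode' is only valid the first time proc_get_status()
    // reports the process as stopped, so read it from this $status.
    echo 'exit code: ' . $status['exitcode'];
    proc_close($process);
}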
<?php
passthru("/path/script >> /path/to/log_file.log 2>&1 &");
?>
This should work, since none of the default PHP streams are used.
Just append > /dev/null 2>&1 & to your command; it works for me.
Using PHP on Linux, I'd like to determine whether a shell command run using exec() was successfully executed. I'm using the return_var parameter to check for a successful return value of 0. This works fine until I need to do the same thing for a process that has to run in the background. For example, in the following command $result returns 0:
exec('badcommand > /dev/null 2>&1 &', $output, $result);
I have put the redirect in there on purpose; I do not want to capture any output. I just want to know whether the command executed successfully. Is that possible to do?
Thanks, Brian
My guess is that what you are trying to do is not directly possible. By backgrounding the process, you are letting your PHP script continue (and potentially exit) before a result exists.
A workaround is to have a second PHP (or Bash, etc.) script that just does the command execution and writes the result to a temp file.
The main script would be something like:
$resultFile = '/tmp/result001';
touch($resultFile);
exec('php command_runner.php ' . escapeshellarg($resultFile) . ' > /dev/null 2>&1 &');

// do other stuff...

// Sometime later, when you want to check the result...
while (!strlen(file_get_contents($resultFile))) {
    sleep(5);
}
$result = intval(file_get_contents($resultFile));
unlink($resultFile);
And the command_runner.php would look like:
$outputFile = $argv[1]; // $argv[0] is the script's own name
exec('badcommand > /dev/null 2>&1', $output, $result);
file_put_contents($outputFile, $result);
It's not pretty, and there is certainly room for adding robustness and handling concurrent executions, but the general idea should work.
Not with the exec() approach. When you send a process to the background, the shell returns 0 to the exec() call immediately and PHP continues execution; there's no way to retrieve the final result.
pcntl_fork(), however, will fork your application, so you can run exec() in the child process and have it wait until the command finishes, then exit() with the status the exec() call returned.
In the parent process you can access that return code with pcntl_waitpid().
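A sketch of that fork-and-wait approach (assumes the pcntl extension, i.e. PHP on the CLI under Linux; "badcommand" is the example from the question):

$pid = pcntl_fork();
if ($pid === -1) {
    die('fork failed');
} elseif ($pid === 0) {
    // Child: run the command synchronously, then exit with its status.
    exec('badcommand > /dev/null 2>&1', $output, $result);
    exit($result);
}
// Parent: free to do other work here, then collect the child's status.
pcntl_waitpid($pid, $status);
if (pcntl_wifexited($status)) {
    echo 'badcommand returned ' . pcntl_wexitstatus($status);
}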
Just my 2 cents: how about using the || and && bash operators?
exec('ls && touch /tmp/res_ok || touch /tmp/res_bad');
And then check for file existence.
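A sketch of the whole pattern (marker files as above, "badcommand" taken from the question), with the command backgrounded so exec() returns immediately:

exec('(badcommand && touch /tmp/res_ok || touch /tmp/res_bad) > /dev/null 2>&1 &');
// ... later, poll until one of the marker files appears ...
while (!file_exists('/tmp/res_ok') && !file_exists('/tmp/res_bad')) {
    sleep(1);
}
$succeeded = file_exists('/tmp/res_ok');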