PHP/Cron: Limit Number of Concurrently Running Scripts

I have a script written in PHP that is kicked off by a cron job. It's a long running script, and often cron kicks off another instance before the last is finished. That's fine. I allow 3 instances of this script to run at any given time.
I limit the script to 3 instances with this code at the top of the script:
exec('ps -A | grep nameofmyscript', $results);
if (count($results) > 4) {
    echo "Already Running\n";
    die(0);
}
It works, but I'm looking for a better way. This approach backfired on me a few weeks back when I renamed the script and forgot to change that line of code. It also fails when the script is named something similar to an already running process.

Using PHP, you could create a randomly named lock file in a directory specific to the cron job, then check the number of lock files in that directory before letting the PHP script continue. This may not be the best solution, though.
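A minimal sketch of that idea, assuming a dedicated lock directory and the 3-instance limit from the question (the directory path and the shutdown-based cleanup are my own choices, and note there is a small race between counting and creating):

$lockDir = '/tmp/myscript-locks'; // hypothetical directory for this cron job
@mkdir($lockDir, 0700, true);
if (count(glob($lockDir . '/*.lock')) >= 3) {
    echo "Already Running\n";
    die(0);
}
$lockFile = tempnam($lockDir, 'lock'); // creates a uniquely named file
rename($lockFile, $lockFile . '.lock'); // give it the suffix counted above
$lockFile .= '.lock';
register_shutdown_function(function () use ($lockFile) {
    @unlink($lockFile); // release our slot when the script ends
});
// ... long-running work ...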

You could automate the name placement...
exec('ps -A | grep ' . escapeshellarg(basename(__FILE__)), $results);
if (count($results) > 4) {
    echo "Already Running\n";
    die(0);
}
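If you want to avoid parsing ps output entirely, a non-blocking flock() over a fixed set of slot files is another option. This is only a sketch under my own assumptions (the slot paths and the limit of 3); its nice property is that the OS releases the lock automatically even if the script crashes:

$fp = null;
for ($slot = 0; $slot < 3; $slot++) {
    $candidate = fopen("/tmp/myscript-slot$slot.lock", 'c'); // 'c' creates the file if missing
    if ($candidate && flock($candidate, LOCK_EX | LOCK_NB)) {
        $fp = $candidate; // got a free slot; keep the handle open for the whole run
        break;
    }
    if ($candidate) {
        fclose($candidate);
    }
}
if ($fp === null) {
    echo "Already Running\n";
    die(0);
}
// ... long-running work; the lock is released when the script exits ...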

Related

Open Linux terminal command in PHP

I have a server running on Linux that executes commands on 12 nodes (12 computers, each running Linux). I recently installed PHP on the server to create web pages that can execute commands by opening a specific PHP file.
I used exec(), passthru(), shell_exec(), and system(). system() is the only one that returns part of my output. I would like PHP to act like an open terminal in Linux and I cannot figure out how to do it!
Here is an example of what is happening now (Linux directly vs PHP):
When running the command directly in a Linux terminal:
user#wizard:/home/hyperwall/Desktop> /usr/local/bin/chbg -mt
I get an output:
The following settings will be used:
option = mtsu COLOR = IMAGE = imagehereyouknow!
NODES = LOCAL
and additional code to send it to 12 nodes.
Now with PHP:
switch ($_REQUEST['do']) {
    case 'test':
        echo system('/usr/local/bin/chbg -mt');
        break;
}
Output:
The following settings will be used:
option = mtsu COLOR = IMAGE = imagehereyouknow!
NODES = LOCAL
And it stops! Does anyone have an explanation of what is happening, and how to fix it? Only system() displays part of the output; the other functions display nothing!
My first thought is that it is something about stdout and stderr. Some programs write part of their output to stdout and part to stderr. When you don't redirect stderr to stdout, most of these calls return only the stdout part. That is probably why you see the whole output in a terminal but can't from the system calls.
So try:
/usr/local/bin/chbg -mt 2>&1
Edit:
As a temporary workaround, you can also try some other things. For example, redirect the output to a file next to the script and read its contents after executing the command. This way you can use exec:
exec('/usr/local/bin/chbg -mt > chbg_out 2>&1');
// then read chbg_out and see whether it worked
Edit 2:
It also doesn't make sense that the others are not working for you.
For example, this piece of code written in C dumps one string to stderr and another to stdout:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    fputs("\nerr\nrro\nrrr\n", stderr);
    fputs("\nou\nuu\nuttt\n", stdout);
    return 0;
}
and this PHP script tries to run it via exec:
<?php
exec("/tmp/ctest", $result); // exec() fills $result by reference; the old &$result syntax is invalid in modern PHP
foreach ($result as $v) {
    echo $v;
}
// output: ouuuuttt
?>
See, it still dumps the stdout, but it did not receive the stderr.
Now consider this:
<?php
exec("/tmp/ctest 2>&1", $result);
foreach ($result as $v) {
    echo $v;
}
// output: errrrorrrouuuuttt
?>
See, this time we got the whole output.
And this time with system():
<?php
echo system("/tmp/ctest 2>&1");
// output: err rro rrr ou uu uttt uttt
?>
and so on ...
Maybe your chbg -mt writes additional output to stderr instead of stdout? Try to execute your script inside PHP like this:
/usr/local/bin/chbg -mt 2>&1
The other responses are good for generic advice. But in this specific case, it appears you are trying to change your background on your desktop. This requires many special considerations because of 'user context':
First, your web server is probably running as a different user, and therefore would not have permissions to change your desktop.
Second, the program probably requires some environmental variables from your user context. For example, X programs need a DISPLAY variable, ssh-agent needs SSH_AGENT_PID and SSH_AUTH_SOCK, etc. I don't know much about changing backgrounds, but I'm guessing it involves D-Bus, which probably requires things like DBUS_SESSION_BUS_ADDRESS, KONSOLE_DBUS_SERVICE, KONSOLE_DBUS_SESSION, and KONSOLE_DBUS_WINDOW. There may be many others. Note that some of these vars change every time you log in, so you can't hard-code them on the PHP side.
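To illustrate the environment-variable point, here's a hypothetical sketch; ':0' is only the usual first X display, and the real values would have to be read from the user's session:

putenv('DISPLAY=:0'); // placeholder; child processes spawned by PHP inherit this
echo shell_exec('/usr/local/bin/chbg -mt 2>&1');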
For testing, it might be simpler to start your own webserver right from your user session. (i.e. Don't use the system one, it has to run as you. You will need to run it on an alternate port, like 8080). The web server you start manually will have all the 'context' it needs. I'll mention websocketd because it just came out and looks neat.
For "production", you may need to run a daemon in your user context all the time, and have the web server talk to that daemon to 'get stuff done' inside your user context.
PHP's system() only returns the last line of execution:
Return Value: Returns the last line of the command output on success, and FALSE on failure.
You will most likely want to use either exec or passthru. exec has an optional parameter to put the output into an array. You could implode the output and use that to echo it.
switch ($_REQUEST['do']) {
    case 'test':
        exec('/usr/local/bin/chbg -mt', $output);
        echo implode("\n", $output); // note the double quotes; could use <br /> if HTML output is desired
        break;
}
I think the result of execution can change between users.
First, try to run your PHP script directly in your terminal: php yourScript.php
If it runs as expected, go to your Apache service and update it to run with your own credentials.
You are trying to change the backgrounds for currently logged in users, while they are using the desktop. Like, while I'm typing this message, I minimize my browser and 'ooh, my desktop background is different'. Hopefully this is for something important, like turning the background red when the reactor is overheating.
Anyway, to my answer:
Instead of trying to remotely connect and run items as the individual users, set up each user to run a bash script (in their own account, in their own shell) on a repeating timer, say every 10 minutes, and have it select the SAME file from a network location:
/somenetworkshare/backgrounds/images/current.png
Then you can update ALL nodes (1 to a million) just by changing the image itself in /somenetworkshare/backgrounds/images/current.png
I wrote something a while ago that does just this -- you can run a command interpreter (/bin/sh), send it commands, read back responses, send more commands, etc. It uses proc_open() to open a child process and talk to it.
It's at http://github.com/andrasq/quicklib, Quick/Proc/Process.php
Using it would look something like (easier if you have a flexible autoloader; I wrote one of those too in Quicklib):
include 'lib/Quick/Proc/Exception.php';
include 'lib/Quick/Proc/Exists.php';
include 'lib/Quick/Proc/Process.php';
$proc = new Quick_Proc_Process("/bin/sh");
$proc->putInput("pwd\n");
$lines = $proc->getOutputLines($nlines = 10, $timeoutSec = 0.2);
echo $lines[0];
$proc->putInput("date\n");
$lines = $proc->getOutputLines(1, 0.2);
echo $lines[0];
Outputs
/home/andras/quicklib
Sat Feb 21 01:50:39 EST 2015
The unit of communication between PHP and the process is newline-terminated lines. All commands must be newline terminated, and all responses are retrieved in units of lines. Don't forget the newlines; they're hard to identify as the problem afterward.
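If you'd rather avoid the library dependency, the same technique can be sketched with PHP's built-in proc_open(). This bare version has no timeout handling, so fgets() will block if a command produces no output:

$descriptors = array(
    0 => array('pipe', 'r'), // child's stdin
    1 => array('pipe', 'w'), // child's stdout
    2 => array('pipe', 'w'), // child's stderr
);
$proc = proc_open('/bin/sh', $descriptors, $pipes);
if (is_resource($proc)) {
    fwrite($pipes[0], "pwd\n"); // newline-terminated command
    echo fgets($pipes[1]);      // read one line of the response
    fwrite($pipes[0], "date\n");
    echo fgets($pipes[1]);
    fclose($pipes[0]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($proc);
}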
I am working on a project that uses Terminal A on machine A to output to Terminal B on machine B, both using Linux for now. I didn't see it mentioned, but perhaps you can use redirection, something like this in your webserver:
switch ($_REQUEST['do']) {
    case 'test':
        # process ID on the target (12345, 12346, etc.)
        echo system('/usr/local/bin/chbg -mt > /proc/<processID>/fd/1');
        # OR
        # device file on the target (pts/0, tty0, etc.)
        echo system('/usr/local/bin/chbg -mt > /dev/<TTY-TYPE>/<TTYNUM>');
        break;
}
Definitely the permissions need to be set correctly for this to work. The command "mesg y" in a terminal may also assist... Hope that helps.

Windows PHP repeating script via popen

I'm trying to create a browser-started, self-calling/repeating PHP script on Windows with PHP (currently 5.3.24 but soon to be the latest). It will act as a daemon to monitor changes in a database (every few seconds, so cron/schedule is out) and then call other PHP scripts to perform work when changes are found. For the purposes of this question please ignore the fact that I'd be better off doing this in C# or some other language :)
To keep things simple I started out by trying to use popen to run a second PHP script in the background...
// BatchMonitor.php
SaveToMonitorTable(1); // save 1st test entry to see if the script reached this point
$Command = '"" "C:\Program Files (x86)\PHP\v5.3\php.exe" C:\inetpub\wwwroot\Test.php --Instance=' . $Data->Instance;
pclose(popen("start /B $Command", "r"));
SaveToMonitorTable(2); // save 2nd test entry to see if the script reached this point
exit();
// Test.php
SaveToTestTable(1);
Sleep(10);
SaveToTestTable(2);
exit();
If I run BatchMonitor.php in the browser it works fine. As expected it will save 1 to the monitor table, call Test.php which saves 1 to the test table, the original BatchMonitor.php will continue without waiting for a response and save 2 to the monitor table before exiting, then 10 seconds later the test page saves 2 to the test table before exiting. The second script starts fine, the first script does not wait for a reply and all parameters are correctly passed between scripts. With everything working as intended I then changed the system to work as a repeating loop by calling itself (with delay) instead of another script...
// BatchMonitor.php
SaveToMonitorTable(1); // save 1st test entry to see if the script reached this point
$Command = '"" "C:\Program Files (x86)\PHP\v5.3\php.exe" C:\inetpub\wwwroot\BatchMonitor.php --Instance=' . $Data->Instance;
pclose(popen("start /B $Command", "r"));
SaveToMonitorTable(2); // save 2nd test entry to see if the script reached this point
exit();
If I run BatchMonitor.php in the browser it runs once and that is it. It will save 1 to the database, wait 10 seconds and then save 2 to the database before exiting. The page returns successfully with no script or PHP errors but it doesn't repeat as it should.
Both BatchMonitor.php and Test.php use line-for-line identical functions to get the parameters, and both files run correctly and identically on the first iteration. If I use exec instead of popen then the page loops correctly with all logic working as expected (with the one obvious flaw of creating a never-ending chain of scripts waiting for response values that will never come).
Am I missing something obvious? Does popen have some sort of secret rule that prevents a page/process from opening duplicates of itself? Are there any alternatives to using popen or exec? I read about WScript.Shell but it might be a while before I can schedule that to get enabled so for now it's not an option and I'm hoping there is something more standard that I can use.
I don't feel like this should be your actual answer, but why do you abandon scheduled tasks/cron jobs just because you want something done every X seconds? Having a script minute.php call 5seconds.php with, of course, 5-second intervals in between would create a repeated task every 5 seconds, right?
Strangely enough, you are already kind of using the same mechanism from your browser.
My only concern would be to take the processing time into account and create a safe script which ensures that no more than one '5seconds.php' can run at any given time.
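A rough sketch of that idea, with 5seconds.php as the hypothetical worker; the 'start /B' launch matches the question's Windows setup (on Linux you would append '&' to the command instead):

// minute.php - launched by the Task Scheduler or cron once per minute
for ($i = 0; $i < 12; $i++) {
    $start = microtime(true);
    // start the worker in the background so a slow run can't delay the next one
    pclose(popen('start /B php C:\inetpub\wwwroot\5seconds.php', 'r'));
    $elapsed = microtime(true) - $start;
    if ($elapsed < 5) {
        usleep((int) ((5 - $elapsed) * 1000000)); // keep roughly 5-second spacing
    }
}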

In PHP, exec fails silently, sometimes, when calling many exec commands, but the same command run again later will work

I have a PHP script that uses exec('command args > /log/file &'); within a loop to create multiple child scripts that run at the same time. Basically, the parent script gets user information out of a database and creates child scripts running in parallel, then the child script creates an email to send to a single user. This happens approximately 50,000 times.
To prevent the creation of 50,000 simultaneously running processes, I have a database table that keeps track of the currently running processes, and before creating a new process the parent checks the current child count and sleeps if 25 children are currently active. The child, upon completing its task, deletes its row in the table, freeing the parent to create more children.
The problem is, about 10% of the exec commands fail silently, and for seemingly no reason. I can run the parent script again (it's smart enough not to email the same user twice), and it will work, once again, 90% of the time using the same exec commands that failed last time. Running the script five or six times in a row will email everyone.
By putting a sleep immediately after the exec, I can increase my success rate to around 95%.
Why would exec be failing, if the same command will work later? I can just keep the script repeating until it completes, but I'd much rather solve the exec problem.
Some highly simplified sample code:
Parent script:
do {
    // get user, group, and supergroup information for users that haven't
    // been emailed yet
    foreach ($users as $userArray) {
        $processId = insertIntoProcessQueue($userArray);
        $cmd = 'sudo php -q ./childScript.php ' . cliArg($userArray) . ' ' .
               cliArg($groupArray) . ' ' . cliArg($supergroupArray) .
               ' ' . $processId . ' > file.log &';
        exec($cmd);
        do {
            $waiting = false;
            if (numChildren() >= 25) {
                sleep(1);
                $waiting = true;
            }
        } while ($waiting);
    }
    $incomplete = moreUsersToEmail() > 0;
} while ($incomplete);

function cliArg($array) {
    return escapeshellarg(json_encode($array));
}
Child script:
ignore_user_abort(true);
$user = json_decode($argv[1]);
$group = json_decode($argv[2]);
$supergroup = json_decode($argv[3]);
print_r($user);
$email = createEmail($user, $group, $supergroup);
$email->sendEmail();
removeFromProcessQueue($argv[4]);
flush();
exit;
The print_r will only show up in the log file when the script completes and I never get any errors, so I can't get any data about why it's failing. To add to that, it doesn't fail consistently on any individual users, and it doesn't fail running a single user at a time, so I have to run the script through everyone and try and catch the errors amidst the 45,000 that are working properly. And, since the parent and child never communicate beyond the parent starting the child, I can't detect (from the parent) when a child fails (otherwise I could immediately try and start any failed children again instead of rerunning the parent post-hoc).
Edit: So it turns out there's an included script that's dynamically generated and is destroyed and regenerated every time it's used (don't ask me why), which creates a race condition while running processes in parallel that caused the script to fail.
Thanks everyone for your unfortunately wasted time.
I just looked at the PHP docs for exec() and you can pass an array as a reference with a second parameter which will be filled with the output of exec. You can use this to determine a) why the command is failing and b) when the command fails and integrate that into your code.
So I'd change:
exec($cmd);
To something like:
function check_exec_results($results)
{
    // use this to figure out what output you're getting from the exec commands,
    // then remove it once you've worked out how to set $results_look_good below
    echo '<HR><PRE>', print_r($results, true), '</PRE><HR>';
    $results_look_good = ?; // you will need to edit this yourself to actually do some kind of check
    return $results_look_good;
}

$successful_exec = false;
do {
    $exec_results = array();
    exec($cmd, $exec_results);
    $successful_exec = check_exec_results($exec_results);
} while (!$successful_exec);
Note that this is potentially an infinite loop so I'd also go a step further and set a limit to the number of times exec() can be called for each user.
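For instance, a capped version of the loop above might look like this (the limit of 3 attempts is an arbitrary choice):

$max_attempts = 3; // assumed cap; tune to taste
$successful_exec = false;
for ($attempt = 1; $attempt <= $max_attempts && !$successful_exec; $attempt++) {
    $exec_results = array();
    exec($cmd, $exec_results);
    $successful_exec = check_exec_results($exec_results);
}
if (!$successful_exec) {
    error_log("exec still failing after $max_attempts attempts: $cmd");
}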

Don't run script if it's already running

I've been completely unsuccessful finding an answer to this question. Hopefully someone here can help.
I have a PHP script (a WordPress template, to be specific) that automatically imports and processes images when a user hits it. The problem is that the image processing takes up a lot of memory, particularly if multiple users are accessing the template at the same time and initiating the image processing. My server crashed multiple times because of this.
My solution to this was to not execute the image-processing function if it was already running. Before the function started running, I would check a database entry named image_import_running to see if it was set to false. If it was, the function then ran. The very first thing the function did was set image_import_running to true. Then, after it was all finished, I set it back to false.
It worked great -- in theory. The site hasn't crashed since, I can tell you that. But there are two major problems with it:
If the user closes the page while it's loading, the script never finishes processing the images and therefore never sets image_import_running back to false. The template will never process images again until it's manually set to false.
If the script times out while it's processing images -- and that's a strong possibility if there are many images in the queue -- you have essentially the same problem as No. 1: the script never gets to the point where it sets image_import_running back to false.
To handle No. 1 (the first one of the two problems I realized), I added ignore_user_abort(true) to the script. Did it work? I don't know, because No. 2 is still an issue. That's where I'm stumped.
If I could ask the server whether the script was running or not, I could do something like this:
if($import_running && $script_not_running) {
$import_running = false;
}
But how do I set that $script_not_running variable? Beats me.
I've shared this entire story with you just in case you have some other brilliant solution.
Try using ignore_user_abort(true); it will continue to run even if the person leaves and closes the browser.
You might also want to put a number instead of true/false in the DB record and set a maximum number of processes that can run together.
As others have suggested, it would be best to move the image processing out of the request itself.
As an interim "fix", store a timestamp alongside image_import_running when a processing job begins (e.g., image_import_commenced). This is a very crude mechanism, but if you know the maximum time that a job can run before timing out, the script can check whether that period of time has elapsed.
e.g., if image_import_running is still true but the current time is more than 10 minutes since image_import_commenced, run the processing anyway.
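A rough sketch of that stale-lock check, using WordPress's get_option()/update_option() since this is a WordPress template (the option names and the 10-minute ceiling come from the discussion above; run_image_import() is a stand-in for the actual processing function):

$running   = get_option('image_import_running');
$commenced = (int) get_option('image_import_commenced');
$max_age   = 10 * 60; // assume no job legitimately runs longer than 10 minutes

if (!$running || (time() - $commenced) > $max_age) {
    update_option('image_import_running', true);
    update_option('image_import_commenced', time());
    run_image_import(); // stand-in for the actual image processing
    update_option('image_import_running', false);
}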
What about setting a transient with an expiry time that would throttle the operation?
if (!get_transient('import_running')) {
    set_transient('import_running', true, 30); // set a 30-second transient on the import
    run_the_import_function();
}
I would rather store the job in the database, flag it as pending, and set up a cron job to execute the processing one job at a time.
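A sketch of that pattern, with an assumed import_jobs table and credentials (the claim-by-UPDATE step keeps two overlapping cron runs from grabbing the same job):

$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass'); // assumed credentials

// In the page request: just enqueue, don't process.
$pdo->prepare("INSERT INTO import_jobs (status, created_at) VALUES ('pending', NOW())")
    ->execute();

// In the cron worker (runs once a minute): process a single pending job.
$job = $pdo->query("SELECT id FROM import_jobs WHERE status = 'pending' ORDER BY id LIMIT 1")
           ->fetch(PDO::FETCH_ASSOC);
if ($job) {
    $claim = $pdo->prepare("UPDATE import_jobs SET status = 'running' WHERE id = ? AND status = 'pending'");
    $claim->execute(array($job['id']));
    if ($claim->rowCount() === 1) { // we won the claim
        run_image_import();         // stand-in for the actual processing
        $pdo->prepare("UPDATE import_jobs SET status = 'done' WHERE id = ?")
            ->execute(array($job['id']));
    }
}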
For me, I just use this simple idea with a text file, for example a run.txt file.
At the top of the script use:
if (file_get_contents('run.txt') != 'run') { // here the script will work
    $file = fopen('run.txt', 'w+');
    fwrite($file, 'run');
    fclose($file); // fclose() takes the file handle, not the file name
} else {
    exit(); // if it finds 'run' in run.txt the script will stop
}
And add this at the end of your script file:
$file = fopen('run.txt', 'w+');
fwrite($file, ''); // clears the 'run' word for the next try ;)
fclose($file);
That will check whether the script is already working by looking at the contents of run.txt; if the word 'run' exists in run.txt, the script will not run.
Running a cron would definitely be a better solution. The idea to store the URL in a table is a good one.
To answer the original question, you may run a ps auxwww command with exec (check this page: How to get list of running php scripts using PHP exec()?) and move your function into a separate PHP file.
exec("ps auxwww|grep myfunction.php|grep -v grep", $output);
Just add the following at the top of your script.
<?php
// Ensures a single instance of the script runs at a time.
$fileName = basename(__FILE__);
$output = shell_exec("ps -ef | grep -v grep | grep $fileName | wc -l");
//echo $output;
if ($output > 2) {
    echo "Already running - $fileName\n";
    exit;
}

// Your php script code.
?>

Make sure one copy of php script running in background

I'm using a cron job to run a PHP script every 1 minute.
I also need to make sure only one copy is running, so if this PHP script is still running after 2 minutes, cron should not start another instance.
Currently I have 2 options and I would like to see your feedback, and whether you have any more options.
Option 1: create a tmp file when the PHP script starts and remove it when the script finishes (and check whether the file exists) ---> the problem for me with this option is that if my PHP script crashes for any reason, it will not run again (the tmp file will not be deleted).
Option 2: run a bash script like the one below to control the PHP script execution ---> good, but I'm looking for something that can be done within PHP:
#!/bin/bash
function rerun {
    BASEDIR=$(dirname $0)
    echo $BASEDIR/$1
    if ps -ef | grep -v grep | grep $1; then
        echo "Running"
        exit 0
    else
        echo "NOT running"
        /usr/local/bin/php $BASEDIR/$1 &
        exit $?
    fi
}
rerun myphpscript.php
PS: I just saw the "Mutex class" at http://www.php.net/manual/en/class.mutex.php but I'm not sure whether it's stable and whether anyone has tried it.
You might want to use my library ninja-mutex, which provides a simple interface for handling mutexes. Currently it can use flock, memcache, redis or mysql to handle the lock.
Below is an example which uses memcache:
<?php
require 'vendor/autoload.php';

use NinjaMutex\Lock\MemcacheLock;
use NinjaMutex\Mutex;

$memcache = new Memcache();
$memcache->connect('127.0.0.1', 11211);

$lock = new MemcacheLock($memcache);
$mutex = new Mutex('very-critical-stuff', $lock);
if ($mutex->acquireLock(1000)) {
    // Do some very critical stuff
    // and release the lock after you finish
    $mutex->releaseLock();
} else {
    throw new Exception('Unable to gain lock!');
}
I often use the program flock that comes with many Linux distributions directly in my crontabs, like:
* * * * * flock -n /var/run/mylock.LCK /usr/local/bin/myprogram
Of course it is still possible to start two simultaneous instances of myprogram if you do it by hand, but crond will only make one.
flock, being a small compiled binary, is super fast to launch compared to a potentially larger chunk of PHP code. This is especially a benefit if you have many longer-running executions, though it is not perfectly clear that you actually do.
If you're not on an NFS mount, you can use flock() (http://php.net/manual/en/function.flock.php):
$fh = fopen('guestbook.txt', 'a') or die($php_errormsg);
$tries = 3;
while ($tries > 0) {
    $locked = flock($fh, LOCK_EX | LOCK_NB);
    if (!$locked) {
        sleep(5);
        $tries--;
    } else {
        // don't go through the loop again
        $tries = 0;
    }
}
if ($locked) {
    fwrite($fh, $_REQUEST['guestbook_entry']) or die($php_errormsg);
    fflush($fh) or die($php_errormsg);
    flock($fh, LOCK_UN) or die($php_errormsg);
    fclose($fh) or die($php_errormsg);
} else {
    print "Can't get lock.";
}
From: http://docstore.mik.ua/orelly/webprog/pcook/ch18_25.htm
I found the best solution for me is creating a separate database user for your script and limiting the concurrent connections to 1 for that user.
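A sketch of how that could look with MySQL; the user name, password, and setup statements are assumptions, and MySQL simply refuses the extra connection once the per-user limit is hit:

// One-time setup on the MySQL side (run as an admin; MySQL 5.7+ syntax):
//   CREATE USER 'importer'@'localhost' IDENTIFIED BY 'secret';
//   GRANT ALL ON mydb.* TO 'importer'@'localhost';
//   ALTER USER 'importer'@'localhost' WITH MAX_USER_CONNECTIONS 1;

$mysqli = @new mysqli('localhost', 'importer', 'secret', 'mydb');
if ($mysqli->connect_errno) {
    // a second instance lands here while the first holds the only connection
    echo "Already running\n";
    exit(0);
}
// ... do the actual work while keeping this single connection open ...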
