How can I detect whether a PHP file is still running on Apache? - php

I have a PHP script with an infinite while loop; it must run 24/7.
From another PHP file, how can I check whether that script is still running on the server or has stopped?
And how can I send a signal to Apache to stop and re-execute that file?

I'm going to assign numbers to the files. The file that runs 24/7 will be the first file, and the file that changes the state of the first one will be the second file.
Now, the first file can write a heartbeat to a file or a database table, say every 10 minutes. That way you know whether it's running by checking that location and the timestamp of its last write. Then create a second file or database table and write into it the state you want the first file to be in, for example: active or disabled. The first file reads that file/table periodically, and if the state is disabled, it stops executing.
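A minimal sketch of this pattern, assuming a heartbeat file at /tmp/worker.heartbeat and a state file at /tmp/worker.state (both paths are made up for illustration):

// worker.php -- the first file, running 24/7
while (true) {
    file_put_contents('/tmp/worker.heartbeat', time()); // record that we are alive
    if (trim(@file_get_contents('/tmp/worker.state')) === 'disabled') {
        break; // the second file asked us to stop
    }
    // ... the actual work goes here ...
    sleep(600); // wake up every 10 minutes
}

// monitor.php -- the second file
$lastBeat = (int) @file_get_contents('/tmp/worker.heartbeat');
echo (time() - $lastBeat > 1200) ? 'worker looks dead' : 'worker alive'; // two missed beats = dead
file_put_contents('/tmp/worker.state', 'disabled'); // ask the worker to stop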

The easiest approach might be to save a timestamped status in memcache or another shared location, and have the other PHP script check that status timestamp.
It's easy to kill an Apache process and hit the page again; that will restart the script. Or you can add a signal handler to restart on SIGHUP.
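A sketch of the memcache variant, assuming the Memcached extension is installed and a server is listening on localhost (the key name is made up):

// worker side: refresh the status timestamp on every loop iteration
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);
$mc->set('worker.heartbeat', time());

// checker side: consider the worker dead if the timestamp is too old
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);
$lastBeat = (int) $mc->get('worker.heartbeat');
echo (time() - $lastBeat > 600) ? 'stopped' : 'running';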

You cannot check whether a specific file is running. You'll have to check whether a process is still running with that file. That also means this isn't something Apache can do for you. You'll either have to:
1) send an OS-dependent kill signal to the process running the script
2) check for a kill signal from inside the script
The former requires a lot of privilege on the server, so it's probably easier to do the second.
The easiest way is for the running file to write to a database or file somewhere to signify that it's still going, and to read the same location every so often for a signal to stop. If the running process sees a stop signal, it can simply break out of whatever loop is keeping it going.
The second script can set that signal at whatever point and for whatever reason, and the other script will terminate shortly after.
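For the first option, the pcntl extension can catch a kill signal inside the loop; this only works in CLI PHP, not under typical Apache SAPIs, and the sketch below assumes the worker was started from the command line:

$keepRunning = true;
pcntl_signal(SIGTERM, function () use (&$keepRunning) {
    $keepRunning = false; // a `kill <pid>` from the other script lands here
});

while ($keepRunning) {
    // ... the 24/7 work ...
    pcntl_signal_dispatch(); // deliver any pending signals
    sleep(1);
}
// falls through here after SIGTERM, so cleanup code can run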

If the first script terminates, have it write a file to disk called error.txt, and make the second script check for this every minute or so.
Once the second script spots an error.txt file, that is its signal to restart the first script. A more real-time approach would be to use a database with the current timestamp.
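One hedged way to produce that error.txt even on an unexpected exit is a shutdown handler; the clean-finish flag below is an illustration, not part of the original answer, and a hard kill (SIGKILL) will still bypass it:

$finishedCleanly = false;
register_shutdown_function(function () use (&$finishedCleanly) {
    if (!$finishedCleanly) {
        file_put_contents('error.txt', date('c')); // flag an abnormal exit
    }
});
// ... the 24/7 loop ...
$finishedCleanly = true; // reached only on a deliberate, clean stop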

Related

How to detect if PHP script is running on a shared server

I have a PHP script that runs on a shared hosting server. This PHP script takes a long time to run: it may take 20 to 30 minutes to finish a run. It is a recurring background process; however, I do not have control over when it starts (it could be triggered every five minutes, or every three hours, no one knows).
Anyway, at the beginning of this script I would like to detect whether the previous process is still running. If the earlier run is still going and has not finished, I will not run the script again. If it is not running, then I run the new process.
In other words, here is a pseudo code. Let's call the script abc.php
1. Start script abc.php
2. Check if an older instance of abc.php is still running. If it is running, then terminate.
3. If it is not running, then continue with abc.php and do your work which might take 30 minutes or more
How can I do that? Please keep in mind this is shared hosting.
UPDATE: I was thinking of using a DB detection mechanism. When the script starts, it would set a value in a DB to 'STARTED=TRUE'; when done, it would set 'STARTED=FALSE'. However, this solution is not robust, because there is no guarantee that the script will terminate properly. It might get interrupted, and therefore might not update the STARTED value to FALSE. So the DB solution is out of the question. It has to be process detection of some sort, or maybe a different solution that I did not think of. Thanks.
If this is a CGI process, I would try using exec + ps, if the latter is available in your environment. A quick SO search turns up this answer: https://stackoverflow.com/a/7182595/177920
You'll need a script that checks whether your target script is running and is separate from it, of course; otherwise you'll always see that your target script is running, given the order of operations in your pseudocode.
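A sketch of the exec + ps idea combined with a PID file (the path is illustrative), which sidesteps the self-detection problem by checking one specific saved PID instead of searching by script name:

// abc.php
$pidFile = '/tmp/abc.pid'; // illustrative path; must be writable on the shared host
if (file_exists($pidFile)) {
    $oldPid = (int) file_get_contents($pidFile);
    exec('ps -p ' . $oldPid, $out, $exitCode); // exit code 0 means the PID is still alive
    if ($exitCode === 0) {
        exit("previous run still active\n");
    }
}
file_put_contents($pidFile, getmypid());
// ... the 30 minutes of work ...
unlink($pidFile);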
You can implement a simple locking mechanism: create a temporary lock file when the script starts, and check first whether the lock file already exists. If it does, don't run the script; if it doesn't, create the lock file and run the script. At the end of a successful run, delete the lock file so that it will run properly next time.
// The presence of write.lock means another instance is (or was) running.
if (!locked()) {
    lock();
    // your code here
    unlock();
} else {
    echo "script already running";
}

// Note: if the script dies before unlock(), the stale file blocks all future runs.
function lock()   { file_put_contents("write.lock", 'running'); }
function locked() { return file_exists("write.lock"); }
function unlock() { return unlink("write.lock"); }

Execute a batch of commands in a large script as a form of queue (cron)

I am making a remote file-uploading script which uploads one or more chosen files to multiple file hosts. I do this via a CLI script, so all my main commands are executed using the exec function. I build the commands from user input in one PHP file and save them in a .json file.
Then I have a separate file, run manually or via cron, that executes that batch of commands from the json file. However, if I input 50 files at once with 3 file hosts, there are 100-150+ commands to execute, and many times, due to nginx/php timeouts or similar reasons, the CLI script simply stops or suspends midway. I then have to restart the whole batch and re-upload all the files, rather than resume from the point where it ended or was suspended.
Is there a better way to manage this kind of long command queue and possibly resume it from where it last suspended or aborted?
One way I thought of: rather than putting all the commands in a single json file, I create one file per command and save them in a new folder created for that queue. Then the cron script picks one command file, executes it, and if it succeeds, deletes the file and selects the next one (in a loop), roughly like this:
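(The folder name and exit-code convention below are just placeholders.)

// run by cron: process queue files one at a time, resuming after any crash
foreach (glob('/var/queue/batch1/*.cmd') as $cmdFile) {
    $command = trim(file_get_contents($cmdFile));
    exec($command, $output, $exitCode);
    if ($exitCode === 0) {
        unlink($cmdFile); // done: remove it so the next run skips it
    } else {
        break;            // leave the file in place to retry on the next run
    }
}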
Is that the best option I have?
Check your php.ini to increase max_execution_time, or use the set_time_limit function.

Windows bat check if PHP script is running

I am using a .bat file to run a PHP script via the Windows Server 2003 scheduler. Is there a way to check the processes and see if the file is still running?
How about a batch file like this (it's been a while, so treat this as a sketch):
:makerandom
rem pick a random lock-file name; try again if a file with that name exists
set x=%RANDOM%.lock
if exist %x% goto makerandom
rem launch the PHP script, passing the lock-file name as an argument
start "" php script.php %x%
:check
rem poll roughly once a second until the script deletes its lock file
ping -n 2 127.0.0.1 > nul
if exist %x% goto check
:done
in the php-script:
touch($argv[1]);   // create the lock file named by the argument
// ... script here ...
unlink($argv[1]);  // delete the lock file when finished
In your scheduled task's .php file: Use getmypid() to get the PHP process' ID (PID) and save it to a file.
Next time your .php file is called, use $tasks = shell_exec('tasklist.exe'); to get a list of all active processes, then read the previously saved PID and look it up.
Honestly, I don't know if this is the best solution or not.
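A sketch of that idea; the PID file name is made up, and matching the bare PID anywhere in the tasklist output can false-positive, so this version only looks at php.exe rows:

$pidFile = __DIR__ . '/task.pid'; // made-up location for the saved PID
if (file_exists($pidFile)) {
    $oldPid = trim(file_get_contents($pidFile));
    $tasks = shell_exec('tasklist.exe');
    if ($oldPid !== '' && preg_match('/^php\.exe\s+' . $oldPid . '\s/m', $tasks)) {
        exit('previous run still active');
    }
}
file_put_contents($pidFile, getmypid());
// ... long-running work ...
unlink($pidFile);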
Try out Sysinternals Process Utilities.
http://technet.microsoft.com/en-us/sysinternals/bb896682
The pslist utility is just what you need: given a PID, it tells you whether the process is running, and your batch file can pick the result up via an environment variable.
Regards
PS: alongside pslist, I also suggest evaluating the pskill utility.

Get output from a shell_exec command as the command runs

I am coding a PHP-scripted web page that is intended to accept the filename of a JFFS2 image which was previously uploaded to the server. The script is to then re-flash a partition on the server with the image, and output the results. I had been using this:
$tmp = shell_exec("update_flash -v " . escapeshellarg($filename) . " 4 2>&1");
echo '<h3>' . $tmp . '</h3>';
echo verifyResults($tmp);
(The verifyResults function will return some HTML that indicates to the user whether the update command completed successfully. I.e., in the case that the update completes successfully, display a button to restart the device, etc.)
The problem with this is that the update command takes several minutes to complete, and the PHP script blocks until the shell command is complete before it returns any of the output. This typically means that the update command will continue running, while the user will see an HTTP 504 error (at worst) or wait for the page to load for several minutes.
I was thinking about doing something like this instead:
shell_exec("rm /tmp/output.txt");
shell_exec("update_flash -v " . $filename . " 4 2>&1 >> /tmp/output.txt &");
echo '<div id="output"></div>';
echo '<div id="results"></div>';
This would theoretically put the command in the background and append all output to /tmp/output.txt.
And then, in a Javascript function, I would periodically request getOutput.php, which would simply print the contents of /tmp/output.txt and stick it into the "output" div. Once the command is completely done, another Javascript function would process the output and display a result in the "results" div.
But the problem I see here is that getOutput.php will eventually become inaccessible during the process of updating the device's flash memory, because it's on the partition targeted for the update. So that could leave me in the same position as before, albeit without the 504 or a seemingly eternally loading page.
I could move getOutput.php to another partition on the device, but then I think I would still have to do some funky stuff with the webserver configuration to be able to access it there (a symlink to it from the webroot would, like any other file, eventually be overwritten during the re-flash).
Is there any other way of displaying the output of the command as it runs, or should I just make do with the solution I have?
Edit 1: I'm currently testing some solutions. I'll update my question with results later.
Edit 2: It seems that the filesystem does not get overwritten as I had originally thought. Instead, the system seems to mount the existing filesystem in read-only mode, so I can still access getOutput.php even after the filesystem is re-flashed.
The second solution I described in my question does seem to work when combined with popen (as mentioned in an answer below) instead of shell_exec. The page loads, and via Ajax I can display the contents of output.txt.
However, it seems that output.txt does not reflect the output from the re-flash command in real time; it appears to display nothing until the update command returns from execution. I will need to do further testing to see what's going on here.
Edit 3: Never mind, it looks like the file is current as I access it. I was just hitting a delay while the kernel did some JFFS2-related tasks triggered by my use of the partition on which the source JFFS2 image is stored. I don't know why, but this apparently causes all PHP scripts to block until it's done.
To work around that, I'm going to put the update command invocation in a separate script and request it via Ajax-- that way, the user will at least receive some prepackaged feedback while technically still waiting on the system.
Look at the popen: http://it.php.net/manual/en/function.popen.php
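A minimal sketch of the popen approach, reusing the update_flash command from the question; note that output buffering in PHP or the webserver may still delay what the browser actually sees:

$handle = popen('update_flash -v ' . escapeshellarg($filename) . ' 4 2>&1', 'r');
while (!feof($handle)) {
    echo fgets($handle); // emit each line of output as it arrives
    flush();             // push it towards the client right away
}
$exitCode = pclose($handle); // exit status of update_flash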
Interesting scenario.
My first thought was to do something regarding proc_* and $_SESSION, but I'm not sure if that will work or not. Give it a try, but if not...
If you're worried about the file being flashed during the process, you could always instantiate a mysql database in the secondary process and write to that. The database can exist on another partition, and you can address it by local ip and the system will take care of the routing.
Edit
When I mentioned proc_* with sessions, I meant something similar to this, where $descriptorspec would become:
$_SESSION = array(
    1 => array("pipe", "w"),
);
However, I kind of doubt that will work. The process would end up writing to the $_SESSION in memory, which no longer exists once the first script is killed.
Edit 2
ACTUALLY, on that note, you could install memcache and have your secondary process write directly to memory, which can then be re-read by your web-interfaced process.
If you wipe the DocRoot there is no resource/script that can respond to requests from the user during this time. Therefore you have to send updates to the user in the same request that does the wipe. This requires you to start the shell process and immediately return to PHP. This can be accomplished with pcntl_fork() and pcntl_exec(). Your PHP script should now continuously send the output of the shell script to the client. If the shell script appends to a file in /tmp, you could fpassthru() that file and clear it until the shell script ends.
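A rough sketch of that fork-and-stream idea; the paths and the WNOHANG polling loop are assumptions, and pcntl_* is typically only available in CLI/CGI SAPIs, not mod_php:

touch('/tmp/output.txt'); // make sure the log exists before we try to read it

$pid = pcntl_fork();
if ($pid === 0) {
    // child: replace this process with the flashing command
    pcntl_exec('/bin/sh', array('-c', 'update_flash -v image.jffs2 4 >> /tmp/output.txt 2>&1'));
    exit(1); // reached only if exec itself failed
}

// parent: stream the log to the client until the child exits
$log = fopen('/tmp/output.txt', 'r');
while (pcntl_waitpid($pid, $status, WNOHANG) === 0) {
    fseek($log, 0, SEEK_CUR); // clear EOF so newly appended data becomes readable
    fpassthru($log);          // send everything appended since the last pass
    flush();
    sleep(1);
}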
Regarding your "However":
My guess is you are trying to use the file as a stream. I haven't done any production tests, but I believe that the file will only be written back to disk on fclose().
If you are writing to the file continually in script #2, those writes actually go directly into memory until the file is closed.
Again, I cannot verify this, but if you want to test it, try re-opening and closing the file for every write. This will confirm or deny my theory, and you can modify your approach accordingly.
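If that theory holds, the simplest way to get one open-write-close cycle per write is file_put_contents with FILE_APPEND (how script #2 writes is a guess here); calling fflush() on a handle that stays open would be the other thing to try:

// one open/write/close per call, so each line should reach the file immediately
file_put_contents('/tmp/output.txt', $line . "\n", FILE_APPEND);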

Stopping Parallel Execution of PHP Script

I am trying to stop my cron script from running in parallel. I need it so that if there is no current execution of it, the script is allowed to run until it completes, times out, or an exception occurs.
I have been trying to use the PHP flock function to acquire a file lock, run the script, and then release the lock. However, it still looks like I am able to run the script multiple times in parallel. Am I missing something?
Btw, I am developing on Mac OS X with the Mac filesystem; maybe this is the reason the file locks are being ignored? Though the PHP documentation only seems to talk about NTFS filesystems?
// Construct cron lock file path
$cronLockFilePath = realpath(APPLICATION_PATH . '/locks/cron');

// Get cron lock file
$cronLockFile = fopen($cronLockFilePath, 'r');

// Lock cron lock file
if (flock($cronLockFile, LOCK_EX)) {
    echo 'lock';
    sleep(10);
} else {
    echo 'no lock';
}
Your idea is basically correct, but tinkering with file locks often leads to strange behaviour.
Just create a file when the script starts and delete it at the end. The presence of the file indicates whether the cron is already running. Make absolutely sure that the file is deleted at the end, even if the cron runs into an error halfway through.
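A minimal sketch of that advice, assuming a lock path of /tmp/cron.lock; register_shutdown_function makes the cleanup run even if the script errors out halfway through, although a hard kill will still leave the file behind:

$lockPath = '/tmp/cron.lock';
if (file_exists($lockPath)) {
    exit('cron already running');
}
file_put_contents($lockPath, getmypid());
register_shutdown_function(function () use ($lockPath) {
    @unlink($lockPath); // runs on normal exit and on most fatal errors
});
// ... the actual cron work ...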
From the documentation:
Warning: On some operating systems flock() is implemented at the process level. When using a multithreaded server API like ISAPI you may not be able to rely on flock() to protect files against other PHP scripts running in parallel threads of the same server instance!
You can try to create and delete a file, or write something into it.
I think what you could do is write a regular file somewhere (lock.txt or something) when the script starts executing, without any flocks, and remove it when the script stops running. Then always check upon initialization whether that file already exists - if it does, another instance is running.
