I know this is a bit generic, but I'm sure you'll understand my explanation. Here is the situation:
The following code is executed every 10 minutes. The variable "var_x" is read from and written to an external text file whenever it is referred to.
if ( var_x != 1 )
{
var_x = 1;
//
// here is where the main body of the script is.
// it can take hours to completely execute.
//
var_x = 0;
}
else
{
// exit script as it's already running.
}
The problem is: if I simulate a hardware failure (do a hard reset when the script is executing) then the main script logic will never execute again because "var_x" will always be "1". (I already have logic to work out the restore point).
Thanks.
You should lock and unlock files with flock, using LOCK_NB so the call fails immediately instead of blocking when another instance already holds the lock:
$fp = fopen($your_file, 'c');
if (flock($fp, LOCK_EX | LOCK_NB))
{
    //
    // here is where the main body of the script is.
    // it can take hours to completely execute.
    //
    flock($fp, LOCK_UN);
}
else
{
    // exit script as it's already running.
}
Edit:
As flock seems not to work correctly on Windows machines, you have to resort to other solutions. Off the top of my head, here is an idea for a possible solution:
Instead of writing 1 to var_x, write the process ID retrieved via getmypid. When a new instance of the script reads the file, it should look up a running process with this ID and check whether that process is a PHP script (a rough sketch follows). Of course, this can still go wrong: there is the possibility of another PHP script obtaining the same PID after a hardware failure, so the solution is far from optimal.
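A rough sketch of that idea on Windows, querying tasklist for the stored PID (the file name and the 'php' match are illustrative assumptions, not from the original post):
<?php
// Hypothetical PID-file variant of the var_x scheme.
$pid_file = 'var_x.txt';

if (file_exists($pid_file)) {
    $pid = (int) trim(file_get_contents($pid_file));
    // Ask Windows whether a process with this PID exists and looks like PHP.
    $tasks = shell_exec("tasklist /FI \"PID eq $pid\"");
    if ($pid > 0 && is_string($tasks) && stripos($tasks, 'php') !== false) {
        exit; // another instance appears to be running
    }
}

file_put_contents($pid_file, getmypid());

// ... main body of the script ...

unlink($pid_file); // release the "lock" on normal completion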
Don't you think this would be better solved using file locks? (When the reset occurs file locks are reset as well)
http://php.net/flock
It sounds like you're doing some kind of manual semaphore for process management.
Rather than writing to a file, perhaps you should use an environment variable instead. That way, in the event of failure, your script will not have a closed semaphore when you restore.
I have PHP code that performs some tasks.
Let's say someone executes the code by requesting https://localhost/code.php.
I have an employee who executes the script over cURL from a separate server. What is the best way to prevent him from launching the script a second time before the already-running script has actually completed/finished?
TLDR: I need a function that waits until the currently running task/code completes, and makes the second attempted launch wait (sleep for a few seconds or until the first task completes).
TLDR2: Looking for a function [the title says it].
Any ideas? Thanks.
While a session won't work with cURL, the idea is valid -- you need to set something persistent outside of your script. So, how about writing to a local file, or writing to a database?
if (file_exists('lock.txt')) die;
file_put_contents('lock.txt', 'This file prevents script execution', LOCK_EX);
// (... your script code here ...)
unlink('lock.txt');
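Note that the file_exists() check and the write that follows are not atomic, so two requests arriving at nearly the same moment could both pass the check. A minimal sketch of an atomic variant, using fopen()'s 'x' mode (which creates the file and fails if it already exists):
<?php
// 'x' mode makes the existence check and the creation a single atomic step.
$lock = @fopen('lock.txt', 'x');
if ($lock === false) {
    die('Script is already running.');
}
fclose($lock);

// (... your script code here ...)

unlink('lock.txt');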
If you know that there is only one user who will hit your server you can simply use session data.
<?php
session_start();
if ($_SESSION["NOT_FINISHED"] ?? false) {
    die("Previous job is not finished yet!");
} else {
    $_SESSION["NOT_FINISHED"] = true;
    // start whatever job needs to be done here
    // ...
    // when the job is done and finished, release our busy flag
    unset($_SESSION["NOT_FINISHED"]);
}
I have a script that runs continuously on the server, in this case a PHP script, like:
php path/to/my/index.php.
It's executed, and when it's done, it's executed again, and again, forever.
I'm looking for the best way to be notified if that script stops being executed.
There are many reasons why it might stop being called: server memory, a new deployment, human error... etc.
I just want to be notified (email, SMS, Slack...) if that script has not been executed for a certain amount of time (like 1 hour, 1 day, etc.).
My server is Ubuntu living in AWS.
An idea:
I was thinking of having a key in REDIS/MEMCACHED/ETC with a TTL. Every time the script runs, it renews the TTL on that key.
If the script stops working for longer than the TTL, the key will expire. I just need a way to trigger a notification when that expiration happens, but it looks like REDIS/MEMCACHED are not designed for that.
register_shutdown_function might help, but might not... https://www.php.net/manual/en/function.register-shutdown-function.php
I can't say I've ever seen a script that needs to run indefinitely in PHP. Perhaps there is another way to solve the problem you are after?
Update - Following your redis idea, I'd look at keyspace notifications. https://redis.io/topics/notifications
I've not tested the idea since I'm not actually a Redis user, but it may be possible to subscribe and capture the expiration event (perhaps from another server?) and generate your notification.
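A rough sketch of how that could look with the phpredis extension (the key name and database index 0 are assumptions; keyspace notifications must also be enabled on the server, e.g. notify-keyspace-events "Ex"):
<?php
// Heartbeat, run by the monitored script on every iteration:
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->setex('my_script_heartbeat', 3600, time()); // 1-hour TTL, renewed each run

// Watcher, a separate long-running process (possibly on another server):
$watcher = new Redis();
$watcher->connect('127.0.0.1', 6379);
// __keyevent@0__:expired fires whenever a key in database 0 expires.
$watcher->psubscribe(['__keyevent@0__:expired'], function ($redis, $pattern, $channel, $key) {
    if ($key === 'my_script_heartbeat') {
        // The heartbeat expired: send the email/SMS/Slack notification here.
    }
});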
There's no 'best' way to do this. Ultimately, what works best will boil down to the specific workflow you're supporting.
tl;dr version: Find what constitutes success and record the most recent time it happened. Use that for your notification trigger in another script.
Long version:
That said, persistent storage with a separate watcher is probably the most straightforward way to do this. Record the last successful run, and then check it with a cron job every so often.
For what it's worth, for scripts like this I generally monitor exit codes or logs produced by the script in question. This isolates the error notification process from the script itself so a flaw in the script (hopefully) doesn't hamper the notification.
For a barebones example, say we have a script to invoke the actual script... (This is very much untested pseudo-code)
<?php
// Run and record.
exec("php path/to/my/index.php", $output, $return_code);
// $return_code will be 255 on fatal errors. You can use other return codes
// with exit in your called script to report other fail states.
if ($return_code == 0) {
    file_put_contents('/path/to/folder/last_success.txt', time());
} else {
    file_put_contents('/path/to/folder/error_report.json', json_encode([
        'return_code' => $return_code,
        'time' => time(),
        'output' => implode("\n", $output),
        // assuming here that error output isn't silently logged somewhere already.
    ], JSON_PRETTY_PRINT));
}
And then a watcher.php that monitors these files on a cron job.
<?php
// Notify us immediately on failure maybe?
// If you have a lot of transient failures it may make more sense to
// aggregate them in a single report at a specific time instead.
if (is_file('/path/to/folder/error_report.json')) {
    // Mail details stored in JSON here.
    // Rename the file so it's recorded, but we don't receive it again.
    rename('/path/to/folder/error_report.json', '/path/to/folder/error_report.json'.'-sent-'.date('Y-m-d-H-i-s'));
} else {
    if (is_file('/path/to/folder/last_success.txt')) {
        $last_success = intval(file_get_contents('/path/to/folder/last_success.txt'));
        if (strtotime('-24 hours') > $last_success) {
            // Our script hasn't run in 24 hours, let someone know.
        }
    } else {
        // No successful run recorded. Might want to put code here if that's unexpected.
    }
}
Notes: There are some caveats to the specific approach shown above. A script can fail in a non-fatal way, and if you're not checking for that, this example could record it as a successful run. For example, permissions errors might cause warnings while the script still runs its full course and exits normally, without hitting an exit call with a specific return code. Our example invoker would log that as a successful run, even though it isn't.
Another option is to log success from your script and only check for error exits from the invoker.
To prevent multiple instances of a PHP-based daemon from ever running simultaneously, I wrote a simple function that acquires a lock with flock when the process starts, and called it at the start of my daemon. A simplified version of the code looks like this:
#!/usr/bin/php
<?php
function acquire_lock () {
$file_handle = fopen('mylock.lock', 'w');
$got_lock_successfully = flock($file_handle, LOCK_EX);
if (!$got_lock_successfully) {
throw new Exception("Unexpected failure to acquire lock.");
}
}
acquire_lock(); // Block until all other instances of the script are done...
// ... then do some stuff, for example:
for ($i=1; $i<=10; $i++) {
echo "Sleeping... $i\n";
sleep(1);
}
?>
When I execute the script above multiple times in parallel, the behaviour I expect to see - since the lock is never explicitly released throughout the duration of the script - is that the second instance of the script will wait until the first has completed before it proceeds past the acquire_lock() call. In other words, if I run this particular script in two parallel terminals, I expect to see one terminal count to 10 while the other waits, and then see the other count to 10.
This is not what happens. Instead, I see both scripts happily executing in parallel - the second script does not block and wait for the lock to be available.
As you can see, I'm checking the return value from flock and it is true, indicating that the (exclusive) lock has been acquired successfully. Yet this evidently isn't preventing another process from acquiring another 'exclusive' lock on the same file.
Why, and how can I fix this?
Simply store the file pointer resource returned from fopen in a global variable. In the example given in the question, $file_handle is automatically destroyed upon going out of scope when acquire_lock() returns, and this releases the lock taken out by flock.
For example, here is a modified version of the script from the question which exhibits the desired behaviour (note that the only change is storing the file handle returned by fopen in a global):
#!/usr/bin/php
<?php
function acquire_lock () {
global $lock_handle;
$lock_handle = fopen('mylock.lock', 'w');
$got_lock_successfully = flock($lock_handle, LOCK_EX);
if (!$got_lock_successfully) {
throw new Exception("Unexpected failure to acquire lock.");
}
}
acquire_lock(); // Block until all other instances of the script are done...
// ... then do some stuff, for example:
for ($i=1; $i<=10; $i++) {
echo "Sleeping... $i\n";
sleep(1);
}
?>
Note that this seems to be a bug in PHP. The changelog from the flock documentation states that in version 5.3.2:
The automatic unlocking when the file's resource handle is closed was removed. Unlocking now always has to be done manually.
but at least for PHP 5.5, this is false; flock locks are released both by explicit calls to fclose and by the resource handle going out of scope.
I reported this as a bug in November 2014 and may update this question and answer pair if it is ever resolved. In case I get eaten by piranhas before then, you can check the bug report yourself to see if this behaviour has been fixed: https://bugs.php.net/bug.php?id=68509
I've been completely unsuccessful finding an answer to this question. Hopefully someone here can help.
I have a PHP script (a WordPress template, to be specific) that automatically imports and processes images when a user hits it. The problem is that the image processing takes up a lot of memory, particularly if multiple users are accessing the template at the same time and initiating the image processing. My server crashed multiple times because of this.
My solution to this was to not execute the image-processing function if it was already running. Before the function started running, I would check a database entry named image_import_running to see if it was set to false. If it was, the function then ran. The very first thing the function did was set image_import_running to true. Then, after it was all finished, I set it back to false.
It worked great -- in theory. The site hasn't crashed since, I can tell you that. But there are two major problems with it:
If the user closes the page while it's loading, the script never finishes processing the images and therefore never sets image_import_running back to false. The template will never process images again until it's manually set to false.
If the script times out while it's processing images -- and that's a strong possibility if there are many images in the queue -- you have essentially the same problem as No. 1: the script never gets to the point where it sets image_import_running back to false.
To handle No. 1 (the first of the two problems), I added ignore_user_abort(true) to the script. Did it work? I don't know, because No. 2 is still an issue. That's where I'm stumped.
If I could ask the server whether the script was running or not, I could do something like this:
if($import_running && $script_not_running) {
$import_running = false;
}
But how do I set that $script_not_running variable? Beats me.
I've shared this entire story with you just in case you have some other brilliant solution.
Try using ignore_user_abort(true); it will continue to run even if the person leaves and closes the browser.
You might also want to put a number instead of true/false in the db record and set a maximum number of processes that can run together.
As others have suggested, it would be best to move the image processing out of the request itself.
As an interim "fix", store a timestamp alongside image_import_running when a processing job begins (e.g., image_import_commenced). This is a very crude mechanism, but if you know the maximum time that a job can run before timing out, the script can check whether that period of time has elapsed.
e.g., if image_import_running is still true but the current time is more than 10 minutes since image_import_commenced, run the processing anyway.
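A minimal sketch of that staleness check, assuming the flags live in WordPress options via get_option()/update_option() (the option names are just the ones used above):
<?php
$running   = get_option('image_import_running', false);
$commenced = (int) get_option('image_import_commenced', 0);

// Treat the lock as stale if it was set more than 10 minutes ago.
if (!$running || (time() - $commenced) > 10 * MINUTE_IN_SECONDS) {
    update_option('image_import_running', true);
    update_option('image_import_commenced', time());

    // ... process images here ...

    update_option('image_import_running', false);
}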
What about setting a transient with an expiry time that would throttle the operation?
if (!get_transient('import_running')) {
    set_transient('import_running', true, 30); // set a 30-second transient on the import.
    run_the_import_function();
}
I would rather store the job in a database, flag it as pending, and set up a cron job to execute the processing one job at a time.
For me, I just use this simple idea with a text document, for example a run.txt file.
At the top of the script, use:
if (file_get_contents('run.txt') != 'run') { // here the script will work
    $file = fopen('run.txt', 'w+');
    fwrite($file, 'run');
    fclose($file);
} else {
    exit(); // if it finds 'run' in run.txt the script will stop
}
And add this at the end of your script file:
$file = fopen('run.txt', 'w+');
fwrite($file, ''); // will delete the run word for the next try ;)
fclose($file);
This will check whether the script is already running by looking at the contents of run.txt:
if the word 'run' exists in run.txt, the script will not run.
Running a cron job would definitely be a better solution. The idea to store the URL in a table is a good one.
To answer the original question, you can run a ps auxwww command with exec (check this page: How to get list of running php scripts using PHP exec()? ) and move your function into a separate PHP file.
exec("ps auxwww|grep myfunction.php|grep -v grep", $output);
Just add the following at the top of your script.
<?php
// Ensures a single instance of the script runs at a time.
$fileName = basename(__FILE__);
$output = shell_exec("ps -ef | grep -v grep | grep $fileName | wc -l");
//echo $output;
if ((int) $output > 2)
{
    echo "Already running - $fileName\n";
    exit;
}
// Your php script code.
?>
Is there a way for me to check whether a file has finished copying before continuing to execute a PHP loop?
I have a for loop, and within the loop it copies a file. I want it to wait until the current file is copied before continuing the loop.
example:
for ($i = 1; $i <= 10; $i++)
{
$temp = $_FILES['tmp_name'];
$extension = '.jpg';
copy("$temp_$i_$extension", "$local_$i_$extension");
// not sure what to do here
if (FILE_DONE_COPYING())
{
CONTINUE_LOOP();
}
else
{
PAUSE_LOOP();
}
}
That's just an example. I have no clue how to do this... can anyone chime in?
That's what copy() does in PHP - it blocks until the file is copied. There's nothing you need to do, except checking the return value to see if the operation was successful.
PHP executes line by line, step by step, so it waits until copy() is completed:
for ($i = 1; $i <= 10; $i++)
{
    $temp = $_FILES['tmp_name'];
    $extension = '.jpg';
    $result = copy("{$temp}_{$i}_{$extension}", "{$local}_{$i}_{$extension}");
    if ($result) {
        // done
    } else {
        // failed
    }
}
copy returns true on success and false on failure. Check for that.
Unless you go through the trouble of using threading and have copy fired asynchronously, PHP will not move to the line after copy until after it has completed.
copy does wait for completion before continuing execution. It is a synchronous call. But it can return false if it didn't work, and your copy won't work since $temp_ and $i_ are not defined variables. So maybe you are thinking the copy isn't finishing, when it actually just isn't working at all.
You should use:
copy("{$temp}_{$i}_$extension", "{$local}_{$i}_$extension");
OR
copy($temp.'_'.$i.'_'.$extension, $local.'_'.$i.'_'.$extension);
What makes you think that copy() will return before it has finished?
You could of course compare the filesize of the original file and the copy to be sure the process is complete.
You could use a while loop with sleep calls to delay checking, and just exit the while loop once the file exists under the new name.
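A minimal sketch of that polling idea (the destination path and the 30-second timeout are illustrative):
<?php
$destination = '/path/to/local_1.jpg'; // hypothetical path
$waited = 0;
while ($waited < 30) {
    clearstatcache(true, $destination); // file_exists() results are cached by PHP
    if (file_exists($destination)) {
        break; // the copy has appeared under its new name
    }
    sleep(1);
    $waited++;
}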
I know this is an ancient question, but I feel I really need to talk about this problem. copy() is a great command - BUT - it does not work all of the time. I can honestly tell you this. Why? Why does it not always work? Simple: the operating system is at fault. Here are two examples, one using a standard disk drive and one using a RAM disk. copy() reads chunks of a file and writes those chunks out to the destination. This means it really does NOT just do a file_get_contents, but instead does fopen(IN), fopen(OUT), while (!feof(IN)) { fread(IN); fwrite(OUT); } and then fclose(IN) and fclose(OUT). It should be noted that these commands try to make sure everything goes OK, but if the disk drive buffers what it does, then the file might take a second or two to finish. This can be seen by calling file_exists() on the output file's name: it can come back FALSE (it is NOT there), because the disk drive's hardware has not caught up with the commands.
I even installed the AMD RamDisk software and ran a program using the above commands (both file_get_contents -> file_put_contents and the fopen/fread/fwrite/fclose commands). The same thing happened. Every now and then (not always), the file_exists() function returned FALSE because the test got there before the file had finished being created. Don't ask me why - it just did.
So what do I suggest? Use the sleep() command. Maybe three (3) seconds (so sleep(3);) after the copy() command. I also determined that a chmod(..., 0777); on the copied file was a good idea, with a sleep() command after it so it has time to apply the changes (which is probably closer to one second).
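A minimal sketch of that suggestion (the paths are illustrative):
<?php
if (copy('/path/to/source.jpg', '/path/to/dest.jpg')) {
    sleep(3);                          // give buffered writes time to land on disk
    chmod('/path/to/dest.jpg', 0777);
    sleep(1);                          // and a moment for the mode change to apply
}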
Now, remember: everyone's hardware is different, so some hardware might work better or faster than mine. This is one of those things: try it if you are having problems, or don't if you are not. It is that simple. This is happening to me, I'm using it, and it works now that the system gets three seconds to breathe - but it might not do anything for you, who has an atomic-powered Willy-Wonka mobile that does the impossible before breakfast.
Got it? Good. :-)