I have a problem with PHP's filemtime() function. In my web app I use the Smarty template engine with the caching option enabled. Several actions in the app can trigger the error, but let's focus on just one: when I click a link on the page, some content is updated. I can click a few times and everything is OK, but roughly one request in ten fails with the following error:
filemtime() [function.filemtime]: stat failed for
and the line that causes the problem:
return ($_template->getCachedFilepath() && file_exists($_template->getCachedFilepath())) ? filemtime($_template->getCachedFilepath()) : false ;
As you can see, the file should exist, because its existence is checked right before the call.
The problematic line is in smarty_internal_cacheresource_file.php (part of the Smarty lib, v3.0.6).
The app runs on a UNIX system, on external hosting.
Any ideas? Should I post more details?
file_exists internally uses the access system call, which checks permissions as the real user, whereas filemtime uses stat, which performs the check as the effective user. Therefore, the problem may be rooted in the assumption that effective user == real user, which does not hold here. Another explanation would be that the file gets deleted between the two calls.
Since both the result of $_template->getCachedFilepath() and the existence of the file can change between system calls, why call file_exists at all? Instead, I'd suggest just:
return @filemtime($_template->getCachedFilepath());
If $_template->getCachedFilepath() can be set to a dummy value such as false, use the following:
$path = $_template->getCachedFilepath();
if (!$path) return false;
return @filemtime($path);
Use:
Smarty::muteExpectedErrors();
See the Smarty documentation on muteExpectedErrors() and PHP's error-handling docs for details.
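For context, a minimal sketch of where that call goes; note this assumes Smarty 3.1+ (muteExpectedErrors() was added after the 3.0.x line, so the 3.0.6 install from the question would need an upgrade first):
require_once 'libs/Smarty.class.php';

$smarty = new Smarty();
// Registers an error handler that silences the warnings Smarty expects to
// trigger itself, such as filemtime() stat failures on cache files.
Smarty::muteExpectedErrors();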
I used filemtime() successfully without a file_exists check for years. The way I have always interpreted the documentation is that filemtime() should return FALSE on any error. Then a few days ago something very weird occurred: if the file did not exist, my cron job terminated with a message, not in the program output but in the cron output: "file length exceeded". I knew the cron job ended on the filemtime() statement because I sent myself an email before and after that statement, and the "after" email never arrived.
I inserted a file_exists check to fix the cron job, though that should not have been necessary. I still do not know what changed on the hosting server I use. Several other cron jobs started failing on the same day; I do not know yet whether they have anything to do with filemtime().
I have a PHP script that runs continuously on the server, like:
php path/to/my/index.php
It is executed, and when it finishes, it is executed again, and again, forever.
I'm looking for the best way to be notified if that script stops being executed.
There are many reasons why it might stop being called: server memory, a new deployment, human error, etc.
I just want to be notified (email, SMS, Slack...) if the script has not been executed for a certain amount of time (like 1 hour, 1 day, etc.).
My server is Ubuntu running on AWS.
An idea:
I was thinking of having a key in Redis/Memcached with a TTL, and refreshing that TTL every time the script runs.
If the script stops working for longer than the TTL, the key will expire. I just need a way to trigger a notification when that expiration happens, but it looks like Redis/Memcached aren't built for that out of the box.
register_shutdown_function might help, but might not... https://www.php.net/manual/en/function.register-shutdown-function.php
I can't say I've ever seen a script that needs to run indefinitely in PHP. Perhaps there is another way to solve the problem you are after?
Update - Following your redis idea, I'd look at keyspace notifications. https://redis.io/topics/notifications
I've not tested the idea since I'm not actually a Redis user, but it may be possible to subscribe and capture the expiration event (perhaps from another server?) and generate your notification.
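For illustration, a rough, untested sketch of that with the phpredis extension; it assumes notify-keyspace-events "Ex" is enabled in redis.conf, and the key name heartbeat:index and the alert address are made up:
// Worker side: refresh the TTL on every successful run.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->setEx('heartbeat:index', 3600, (string) time()); // 1-hour TTL

// Watcher side (a separate long-lived process): block on expiration events.
$watcher = new Redis();
$watcher->connect('127.0.0.1', 6379);
$watcher->subscribe(['__keyevent@0__:expired'], function ($redis, $channel, $key) {
    if ($key === 'heartbeat:index') {
        // The worker has not refreshed its TTL in time.
        mail('ops@example.com', 'Worker stalled', "Key $key expired at " . date('c'));
    }
});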
There's no 'best' way to do this. Ultimately, what works best will boil down to the specific workflow you're supporting.
tl;dr version: Find what constitutes success and record the most recent time it happened. Use that for your notification trigger in another script.
Long version:
That said, persistent storage with a separate watcher is probably the most straightforward way to do this. Record the last successful run, and then check it with a cron job every so often.
For what it's worth, for scripts like this I generally monitor exit codes or logs produced by the script in question. This isolates the error notification process from the script itself so a flaw in the script (hopefully) doesn't hamper the notification.
For a barebones example, say we have a script to invoke the actual script... (This is very much untested pseudo-code)
<?php
// Run and record.
exec("php path/to/my/index.php", $output, $return_code);

// $return_code will be 255 on fatal errors. You can use other return codes
// with exit() in your called script to report other fail states.
if ($return_code == 0) {
    file_put_contents('/path/to/folder/last_success.txt', time());
} else {
    file_put_contents('/path/to/folder/error_report.json', json_encode([
        'return_code' => $return_code,
        'time'        => time(),
        'output'      => implode("\n", $output),
        // assuming here that error output isn't silently logged somewhere already
    ], JSON_PRETTY_PRINT));
}
And then a watcher.php that monitors these files on a cron job.
<?php
// Notify us immediately on failure, maybe?
// If you have a lot of transient failures it may make more sense to
// aggregate them into a single report sent at a specific time instead.
if (is_file('/path/to/folder/error_report.json')) {
    // Mail the details stored in the JSON here.
    // Rename the file so it's recorded but won't be sent again.
    rename('/path/to/folder/error_report.json', '/path/to/folder/error_report.json'.'-sent-'.date('Y-m-d-H-i-s'));
} else {
    if (is_file('/path/to/folder/last_success.txt')) {
        $last_success = intval(file_get_contents('/path/to/folder/last_success.txt'));
        if (strtotime('-24 hours') > $last_success) {
            // Our script hasn't run in 24 hours; let someone know.
        }
    } else {
        // No successful run recorded. Might want to handle this case if that's unexpected.
    }
}
Notes: there are some caveats to the specific approach above. A script can fail in a non-fatal way, and if you're not checking for that, this example could record it as a successful run. For example, permission errors might raise warnings while the script still runs its full course and exits normally, never hitting an exit call with a specific return code. Our example invoker would log that as a successful run, even though it isn't one.
Another option is to log success from your script and only check for error exits from the invoker.
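A sketch of that variant (the path is a placeholder): the monitored script records its own success as its very last statement, so only a run that truly completed updates the file, and the invoker no longer needs to infer success from the exit code.
// At the very end of index.php, after all real work has succeeded:
file_put_contents('/path/to/folder/last_success.txt', time());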
I am trying to trigger an update of AWStats from inside a PHP script.
I currently use a cron job to trigger the update, and simply copied the command line into an exec function within the script.
if (exec("/path/to/awstats.pl -config=domain.com -update")) {
    echo 'Logs processed';
}
However, this returns a false positive. Although the "Logs processed" line is displayed, AWStats has not processed the stats information.
AWStats works perfectly when visited directly, and the update runs fine via the cron job; it just doesn't work from this PHP script. I have checked the error logs: there is no problem with my script, and AWStats isn't timing out.
Am I missing something?
For the record, this script is designed to purge the old data, update a blacklist of referrers to block spam, and then recompile the stats data from the log files. Yes, I am aware of the performance issues of using the SkipReferrerBlackList directive.
It seems from your code that you expect exec to return a boolean indicating success or failure. It doesn't; it just returns a string (the last line of output from the command), and strings (except "0" and the empty string) always evaluate to true.
To debug the problem you should print the output of the command:
exec("/path/to/awstats.pl -config=domain.com -update", $output);
echo join(PHP_EOL, $output);
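If you also want a real success/failure test, exec()'s third parameter receives the command's exit code (0 conventionally means success); a sketch reusing the command from the question, with 2>&1 folding stderr into $output so error messages aren't lost:
exec("/path/to/awstats.pl -config=domain.com -update 2>&1", $output, $exitCode);
if ($exitCode === 0) {
    echo 'Logs processed';
} else {
    echo "awstats exited with code $exitCode:\n" . join(PHP_EOL, $output);
}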
EDIT: My problem came from the "intelligent" behaviour of Firefox. If you request the same page in two different tabs, it automatically starts the second request only after the first is done. If you want parallel execution you must add a different parameter to each request.
I was trying to create a mutex using a directory. For example:
$dir = 'test';
echo is_dir($dir);
mkdir($dir);
wait(30);
rmdir($dir);
In a browser I call the script, and in another tab a few seconds later I call the same script.
is_dir returns false, and there is no error from mkdir on the second call.
On disk, the directory is created by the first script and remains until the second one ends.
If I run the two scripts one after the other on the command line, I get the expected result: is_dir returns true and mkdir fails with a "directory already exists" error.
The web server is Apache 2.
I can't explain such behavior.
When you use stat(), lstat(), or any of the other functions listed in the affected functions list (below), PHP caches the information those functions return in order to provide faster performance. However, in certain cases, you may want to clear the cached information. For instance, if the same file is being checked multiple times within a single script, and that file is in danger of being removed or changed during that script's operation, you may elect to clear the status cache. In these cases, you can use the clearstatcache() function to clear the information that PHP caches about a file.
This function caches information about specific filenames, so you only need to call clearstatcache() if you are performing multiple operations on the same filename and require the information about that particular file to not be cached.
Affected functions include stat(), lstat(), file_exists(), is_writable(), is_readable(), is_executable(), is_file(), is_dir(), is_link(), filectime(), fileatime(), filemtime(), fileinode(), filegroup(), fileowner(), filesize(), filetype(), and fileperms().
TL;DR: add clearstatcache(); before any checks.
source : http://php.net/manual/en/function.clearstatcache.php
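As an aside, a sketch of a mutex that sidesteps the stat cache entirely: mkdir() is atomic and fails when the directory already exists, so it can serve as the lock acquisition itself, with no separate is_dir() test (sleep(30) stands in for the question's wait(30)):
$dir = 'test';
clearstatcache(); // drop PHP's cached stat results for this path, for good measure
if (@mkdir($dir)) {
    sleep(30);   // critical section
    rmdir($dir); // release the lock
} else {
    echo "Lock is held by another process\n";
}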
You might want to explain a bit better and paste a better code example...
Meanwhile, here is a better way to handle your mkdir/rmdir
$myDir = 'my/dir/';
if (!is_dir($myDir)) {
    mkdir($myDir, 0755, true);
    wait(30);
    rmdir($myDir);
}
You might need to find out how to recursively delete dirs and files, it might help... ;)
Also, is wait() a PHP function you made?!
I do know sleep() but not wait()...
The code could be prettier and more realistic; I was just trying to be concise. I had also thought of an APC or XCache caching problem...
Wandering the interwebs for a hint, I read that when calling the same script in two tabs, Firefox is so "intelligent" (f... it) that it waits for the first request to be done before executing the second.
Adding a different parameter to each call (?t=1 and ?t=2), or using Chrome for one call and Firefox for the other, makes it work flawlessly... What a waste of time...
I've been completely unsuccessful finding an answer to this question. Hopefully someone here can help.
I have a PHP script (a WordPress template, to be specific) that automatically imports and processes images when a user hits it. The problem is that the image processing takes up a lot of memory, particularly if multiple users are accessing the template at the same time and initiating the image processing. My server crashed multiple times because of this.
My solution to this was to not execute the image-processing function if it was already running. Before the function started running, I would check a database entry named image_import_running to see if it was set to false. If it was, the function then ran. The very first thing the function did was set image_import_running to true. Then, after it was all finished, I set it back to false.
It worked great -- in theory. The site hasn't crashed since, I can tell you that. But there are two major problems with it:
If the user closes the page while it's loading, the script never finishes processing the images and therefore never sets image_import_running back to false. The template will never process images again until it's manually set to false.
If the script times out while it's processing images -- and that's a strong possibility if there are many images in the queue -- you have essentially the same problem as No. 1: the script never gets to the point where it sets image_import_running back to false.
To handle No. 1 (the first of the two problems I noticed), I added ignore_user_abort(true) to the script. Did it work? I don't know, because No. 2 is still an issue. That's where I'm stumped.
If I could ask the server whether the script was running or not, I could do something like this:
if($import_running && $script_not_running) {
$import_running = false;
}
But how do I set that $script_not_running variable? Beats me.
I've shared this entire story with you just in case you have some other brilliant solution.
Try using ignore_user_abort(true); the script will continue to run even if the person leaves and closes the browser.
You might also want to put a number instead of true/false in the DB record and set a maximum number of processes that can run together.
As others have suggested, it would be best to move the image processing out of the request itself.
As an interim "fix", store a timestamp alongside image_import_running when a processing job begins (e.g., image_import_commenced). This is a very crude mechanism, but if you know the maximum time that a job can run before timing out, the script can check whether that period of time has elapsed.
e.g., if image_import_running is still true but the current time is more than 10 minutes since image_import_commenced, run the processing anyway.
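A minimal sketch of that check, assuming WordPress's get_option()/update_option() (the option names mirror the question; the 10-minute cutoff is illustrative):
$running   = get_option('image_import_running', false);
$commenced = (int) get_option('image_import_commenced', 0);

// Treat the lock as stale once it's older than any run could plausibly last.
if (!$running || (time() - $commenced) > 10 * MINUTE_IN_SECONDS) {
    update_option('image_import_running', true);
    update_option('image_import_commenced', time());
    // ... process the images ...
    update_option('image_import_running', false);
}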
What about setting a transient with an expiry time that would throttle the operation?
if(!get_transient( 'import_running' )) {
set_transient( 'import_running', true, 30 ); // set a 30 second transient on the import.
run_the_import_function();
}
I would rather store the job in a database, flag it as pending, and set up a cron job to execute the processing one job at a time.
For me, I just use this simple idea with a text file, for example run.txt.
At the top of the script use:
if (file_get_contents('run.txt') != 'run') { // here the script will do its work
    $file = fopen('run.txt', 'w+');
    fwrite($file, 'run');
    fclose($file);
} else {
    exit(); // if it finds 'run' in run.txt the script will stop
}
And add this at the end of your script file:
$file = fopen('run.txt', 'w+');
fwrite($file, ''); // clears the 'run' word for the next try ;)
fclose($file);
This checks whether the script is already working by reading run.txt's contents: if the word 'run' is present in run.txt, the script will not run.
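A more robust variant of the same idea uses flock(), since the OS releases the lock automatically if the script dies, leaving no stale marker behind (a sketch; the lock path is arbitrary):
$fp = fopen('/tmp/myscript.lock', 'c'); // 'c' creates the file without truncating it
if (!$fp || !flock($fp, LOCK_EX | LOCK_NB)) {
    exit('Already running' . PHP_EOL); // another instance holds the lock
}
// ... script work here ...
flock($fp, LOCK_UN);
fclose($fp);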
Running a cron would definitely be a better solution. The idea to store the URL in a table is a good one.
To answer the original question, you may run a ps auxwww command with exec (check this page: How to get list of running php scripts using PHP exec()?) and move your function into a separate PHP file.
exec("ps auxwww|grep myfunction.php|grep -v grep", $output);
Just add the following at the top of your script.
<?php
// Ensures a single instance of the script runs at a time.
$fileName = basename(__FILE__);
$output = shell_exec("ps -ef | grep -v grep | grep $fileName | wc -l");
//echo $output;
if ($output > 2) {
    echo "Already running - $fileName\n";
    exit;
}
// Your php script code.
?>
Given simple code like:
$file = 'hugefile.jpg';
$bckp_file = 'hugeimage-backup.jpg';
copy($file, $bckp_file);
// here comes some manipulation of $bckp_file.
The assumed problem is that if the file is big or huge, let's say a JPG, one would think it will take the server some time to copy it (by time I mean even a few milliseconds), but one would also assume that execution of the next line would be much faster.
So in theory I could end up with a "no such file or directory" error when trying to manipulate a file that has not yet been created, or worse, start to manipulate a TRUNCATED file.
My question is: how can I ensure that $bckp_file was created (or in this case, copied) successfully before the NEXT line that manipulates it?
What are my options to "pause" or "delay" the next line's execution until the file creation/copy has completed?
Right now I can only think of something like:
if (!copy($file, $bckp_file)) {
echo "failed to copy $file...\n";
}
which will only alert me but will not resolve anything (same as just having the PHP error)
or
if (copy($file, $bckp_file)) {
// move the manipulation to here ..
}
But this is also not quite valid, because let's say the copy was not executed: I would just fall out of the if block without achieving my goal and without any errors.
Is that even a problem, or am I overthinking it?
Or does PHP have a built-in mechanism to ensure that?
Any recommended practices?
Any thoughts on the issue?
What are my options to "pause" or "delay" the next line's execution until the file creation/copy has completed?
copy() is a synchronous function meaning that code will not continue after the call to copy() until copy() either completely finishes or fails.
In other words, there's no magic involved.
if (copy(...)) { echo 'success!'; } else { echo 'failure!'; }
Along with synchronous IO, there is also asynchronous IO. It's a bit complicated to explain in technical detail, but the general idea is that it runs in the background and your code hooks into it. Then, whenever a significant event happens, the background execution alerts your code. For example, if you were going to async-copy a file, you would register a listener on the copy operation that would be notified when progress was made. That way, your code could do other things in the meantime, while still knowing what progress was being made on the file.
PHP handles file uploads by saving the whole file to a temporary directory on the server before executing any of the script (so you can use $_FILES from the beginning), and it's safe to assume all of these functions are synchronous: PHP will wait for each line to finish executing before moving to the next.
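As an illustration of that guarantee, a minimal upload handler (the field name 'photo' and the target path are made up):
// By the time this script runs, PHP has already written the whole upload to a
// temp file and populated $_FILES.
if (isset($_FILES['photo']) && $_FILES['photo']['error'] === UPLOAD_ERR_OK) {
    // tmp_name points at the fully written temporary file.
    move_uploaded_file($_FILES['photo']['tmp_name'], '/var/www/uploads/photo.jpg');
}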