Is it possible to lock the file img.jpg until Imagick creates it?
$image->writeImage('img.jpg')
I'm not entirely sure the problem you are describing actually exists in practice; no one else has ever reported it.
However, even if it is a problem, you don't want to use file locking here; that is for solving a separate set of problems.
Instead, what you want are atomic operations, which the computer performs 'instantaneously'.
$created = false;
for ($i = 0; $i < 5 && $created == false; $i++) {
    // Create a temp name
    $tmpName = "temp".rand(10000000, 99999999).".jpg";
    // Open it. The 'x+' mode makes fopen fail if the file already exists,
    // so we never reuse another process's file.
    $fileHandle = @fopen($tmpName, 'x+');
    if ($fileHandle === false) {
        // A file named $tmpName already exists, or we otherwise failed
        // to create the file, so loop again.
        continue;
    }
    // We don't actually want the file handle, we just wanted to make sure
    // we had a uniquely named file ending with .jpg, so close it again.
    // You could also use tempnam() if you don't care about the file extension.
    fclose($fileHandle);
    // Write the image data to the temp file name.
    $image->writeImage($tmpName);
    rename($tmpName, 'img.jpg');
    $created = true;
}
if ($created === false) {
    throw new FailedToGenerateImageException("blah blah");
}
There's no locking in there, but it is not possible for any process to read a partially written img.jpg. If any other processes have img.jpg open while the rename occurs, their file handle to the old version of the file will continue to exist, and they will continue to read the old file until they close and re-open it.
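As a follow-up to the tempnam() remark in the comments: if the temp file's extension doesn't matter, a shorter sketch of the same atomic-write idea might look like this (assuming the script can write to the directory that will contain img.jpg; the error handling is illustrative):
$targetDir = __DIR__;                         // directory that will hold img.jpg
$tmpName = tempnam($targetDir, 'img_');       // creates a unique, empty temp file
if ($tmpName === false) {
    throw new RuntimeException('Could not create a temporary file');
}
$image->setImageFormat('jpeg');               // the temp name has no .jpg extension
$image->writeImage($tmpName);                 // write the full image to the temp file
// rename() within the same filesystem is atomic, so readers of img.jpg see
// either the old file or the complete new one, never a half-written file.
if (!rename($tmpName, $targetDir . '/img.jpg')) {
    unlink($tmpName);
    throw new RuntimeException('Could not move the temporary image into place');
}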
Related
I have a text file which multiple users will be simultaneously editing (limited to an individual line per edit, per user). I have already found a solution for the "line editing" part of the required functionality right here on StackOverflow.com; specifically, the 4th solution (for large files) offered by @Gnarf in the following question:
how to replace a particular line in a text file using php?
It basically rewrites the entire file contents to a new temporary file (with the user's edit included) and then renames the temporary file to the original file once finished. It's great!
To avoid one user's edit causing a conflict with another user's edit if they are both attempting an edit at the same time, I have introduced flock() functionality, as can be seen in my variation on the code here:
$reading = fopen($file, 'r');
$writing = fopen($temp, 'w');
$replaced = false;
if ((flock($reading, LOCK_EX)) and (flock($writing, LOCK_EX))) {
    echo 'Lock acquired.<br>';
    while (!feof($reading)) {
        $line = fgets($reading);
        $values = explode("|", $line);
        if ($values[0] == $id) {
            $line = $id."|comment edited!".PHP_EOL;
            $replaced = true;
        }
        fputs($writing, $line);
    }
    flock($reading, LOCK_UN);
    flock($writing, LOCK_UN);
    fclose($reading);
    fclose($writing);
} else {
    echo 'Lock not acquired.<br>';
}
I've made sure the $temp file always has a unique filename. Full code here: https://pastebin.com/E31hR9Mz
I understand that flock() will force any other execution of the script to wait in a queue until the first execution has finished and the flock() has been released. So far so good.
However, the problem starts at the end of the script, when the time has come to rename() the temporary file to replace the original file.
if ($replaced) {
    rename($temp, $file);
} else {
    unlink($temp);
}
From what I have seen, rename() will fail if the original file still has a flock(), so I need to release the flock() before this point. However, I also need it to remain locked, or rename() will fail when another user running the same script acquires a new flock() the moment the previous one is released. When this happens, it returns:
Warning: rename(temporary.txt,original.txt): Access is denied. (code: 5)
tl;dr: I seem to be in a bit of a Catch-22. It looks like rename() won't work on a locked file, but unlocking the file will allow another user to immediately lock it again before the rename() can take place.
Any ideas?
update: After some extensive research into how flock() works (in layman's terms: the locks are advisory, so there is no guarantee that another script will respect the "lock", and it is therefore not really a "lock" in the literal sense of the word), I have opted for this solution instead, which works like a charm:
https://docstore.mik.ua/orelly/webprog/pcook/ch18_25.htm
"Good lock" on your locking adventures.
I have a cleanup script which moves XLS files from one place to another. For this file-moving process I have used the rename function, and the script works fine. But when an XLS file is open and I try to move it, I get an error which simply says "Can not rename sample.xls". I would like to add a check that the XLS file is not open before calling the rename function.
I believe the relevant function is flock, but as I understand it, that is only applicable to plain text files.
How can I check whether an XLS file is open before calling the rename function?
One simple thing you could try is to use flock to acquire an exclusive lock on the file; if it fails, you will know the file is being used:
<?php
$fp = @fopen('c:/your_file.xlsx', 'r+');
// If Excel already has the file open, the fopen() itself may fail on Windows,
// and LOCK_NB makes flock() fail immediately instead of waiting for the lock.
if ($fp === false || !flock($fp, LOCK_EX | LOCK_NB))
{
    echo 'File is being used...';
    exit(-1);
}
else
{
    flock($fp, LOCK_UN);
    fclose($fp);
    // rename(...);
}
An alternative would be to check for the existence of the lock file Excel usually creates when a file is in use:
<?php
$file = 'c:/testfile.xlsx';
$lock = 'c:/~$testfile.xlsx';
if (file_exists($lock))
{
    echo "Excel $file is locked.";
}
else
{
    echo "Excel $file is free.";
}
The hidden lock file is usually named with the prefix ~$. For old Excel files (I believe 2003 and older) the lock files are saved in the temp folder with a random name like ~DF7B32A4D388B5854C.TMP, so they would be pretty hard to find.
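For the modern ~$ naming convention, the lock-file path can be derived from the workbook path. A small sketch (the helper name is made up, and the lock file is only a heuristic: it can be left behind after a crash):
function excelLockFile($path)
{
    // Excel's "owner file" sits next to the workbook, prefixed with ~$
    return dirname($path) . DIRECTORY_SEPARATOR . '~$' . basename($path);
}

$file = 'c:/testfile.xlsx';
if (file_exists(excelLockFile($file))) {
    echo "Excel $file is locked.";
} else {
    echo "Excel $file is free.";
}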
You should use flock(). This puts an advisory flag on the file so that other scripts are informed that the file is in use. The flag is removed either explicitly with flock($fp, LOCK_UN) or fclose(), or implicitly when the script ends.
Use a file lock, like:
flock($file, LOCK_EX);
(see the flock() documentation)
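Note that flock() operates on a file handle returned by fopen(), not on a filename, so a complete minimal example might look like this (the filename is just an example):
$fp = fopen('somefile.txt', 'r+');
if ($fp === false) {
    die('Could not open file');
}
if (flock($fp, LOCK_EX)) {          // blocks until the exclusive lock is granted
    // ... read and/or write while holding the lock ...
    fflush($fp);
    flock($fp, LOCK_UN);            // release explicitly...
}
fclose($fp);                        // ...the lock is also dropped on close / script end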
I have an array of filenames, and each process needs to create and write to only a single file.
This is what I came to:
foreach ($filenames as $VMidFile) {
    if (file_exists($VMidFile)) { // A
        continue;
    }
    $fp = fopen($VMidFile, 'c'); // B
    if (!flock($fp, LOCK_EX | LOCK_NB)) { // C
        continue;
    }
    if (!filesize($VMidFile)) { // D
        // write to the file;
        flock($fp, LOCK_UN);
        fclose($fp);
        break;
    }
    flock($fp, LOCK_UN);
    fclose($fp); // E
}
But I don't like that I'm relying on the filesize.
Any proposals to do it in another (better) way?
UPD: added the labels to discuss easily
UPD 2: I'm using filesize because I don't see any other reliable way to check whether the current thread created the file (and thus that it is still empty)
UPD 3: the solution should be free of race conditions.
A possible, slightly ugly solution would be to lock on a separate lock file and then test whether the target file exists:
$lock = fopen("/tmp/".$filename."LOCK", "w"); // A
if (!flock($lock, LOCK_EX)) { // B
    continue;
}
if (!file_exists($filename)) { // C
    // File doesn't exist, so we know that this thread will create it
    // Do stuff to $filename
    flock($lock, LOCK_UN); // D
    fclose($lock);
} else {
    // File exists. This thread didn't create it (at least in this iteration).
    flock($lock, LOCK_UN);
    fclose($lock);
}
This should allow exclusive access to the file and also allows deciding whether the call to fopen($VMidFile, 'c'); will create the file.
Rather than creating a file and hoping that it's not interfered with:
create a temporary file
do all necessary file operations on it
rename it to the new location if the location doesn't exist.
Technically, since rename() will overwrite the destination, there is a chance that concurrent threads will still clash. That's very unlikely if you have:
if (!file_exists($location)) { rename(...
You could use md5_file() to verify that the file contents are correct after this block.
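A sketch of that approach (the $dataForThisFile variable is a placeholder for whatever content the thread generates; note that because rename() overwrites an existing destination, the file_exists() check only narrows the race, it does not eliminate it):
$tmp = tempnam(dirname($VMidFile), 'vmid_');    // temp file in the same directory
file_put_contents($tmp, $dataForThisFile);      // do all the work on the temp file
if (!file_exists($VMidFile)) {
    rename($tmp, $VMidFile);
    // Optional sanity check: did our content survive, or did a concurrent thread win?
    if (md5_file($VMidFile) !== md5($dataForThisFile)) {
        // another thread overwrote the file after our rename
    }
} else {
    unlink($tmp);                               // someone else created the file first
}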
You can secure exclusive access using semaphores (UNIX only, and provided the sysvsem extension is installed):
$s = sem_get(ftok($filename, 'f')); // ftok() needs the path plus a one-character project id
sem_acquire($s);
// Do some critical work...
sem_release($s);
Otherwise you can also use flock. It does not require any special extensions, but according to comments on PHP.net it is a bit slower than using semaphores:
$a = fopen($file, 'w');
flock($a, LOCK_EX);
// Critical stuff, again
flock($a, LOCK_UN);
Use mode 'x' instead of 'c' in your fopen call, and check the resulting $fp: if it's false, the file wasn't created by the current thread, and you should continue to the next filename.
Also, depending on your PHP installation's settings, you may want to put an @ in front of the fopen call to suppress any warnings if fopen($VMidFile, 'x') is unable to create the file because it already exists.
This should work even without flock.
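Applied to the loop from the question, that might look roughly like this (the fwrite() payload is a placeholder):
foreach ($filenames as $VMidFile) {
    $fp = @fopen($VMidFile, 'x');   // 'x' creates the file and fails if it already exists
    if ($fp === false) {
        continue;                   // another process already owns this file
    }
    // This thread created the file, so it is the only one writing to it.
    fwrite($fp, $content);          // $content stands in for the real data
    fclose($fp);
    break;
}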
I have a simple caching system like this:
if (file_exists($cache)) {
    echo file_get_contents($cache);
    // if we get here while $cache is being deleted, there is nothing to display
}
else {
    // PHP process
}
We regularly delete outdated cache files, e.g. deleting all caches after 1 hour. Although this process is very fast, I am thinking that a cache file could be deleted right between the if statement and the file_get_contents() call.
I mean that when the if statement checks the existence of the cache file, it exists; but when file_get_contents() tries to read it, it is no longer there (deleted by the simultaneous cache-deleting process).
file_get_contents() locks the file against the ongoing delete during the read itself, but the file can still be deleted after the if statement has sent the PHP process into the first branch (before file_get_contents() starts).
Is there any approach to avoid this? Is the cache deleting system different?
NOTE: I have not faced any practical problem, as it is not very probable to hit this window, but logically it is possible, and it should happen under heavy load.
Luckily file_get_contents() returns FALSE on error, so you could quickly handle it like:
if (FALSE !== ($buffer = file_get_contents($cache))) {
    echo $buffer;
    return;
}
// PHP process
or similar. It's a bit quick and dirty, considering you would want to add the @ operator to hide any warnings about non-existent files:
if (FALSE !== ($buffer = @file_get_contents($cache))) {
The other alternative would be to lock the file; however, that might prevent your cache-deletion process from deleting the file while you have it locked.
What is left then is to handle cache staleness yourself. That means reading the file-creation time in PHP and checking whether it is older than the deletion threshold (5 minutes, say); if so, you know the file is already stale and should be replaced with fresh content, so re-create it. Otherwise read the file in, which is probably better done with readfile instead of file_get_contents and echo.
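A rough sketch of that idea, using the modification time as a stand-in for the creation time (the 5-minute window and the regenerate() helper are placeholders):
$maxAge = 5 * 60;                                   // anything older is treated as stale
if (is_file($cache) && (time() - filemtime($cache)) < $maxAge) {
    readfile($cache);                               // fresh enough: stream it straight out
} else {
    $output = regenerate();                         // placeholder for the real PHP process
    file_put_contents($cache, $output, LOCK_EX);    // write the new cache under a lock
    echo $output;
}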
On failure, file_get_contents returns false, so what about this:
if (($output = file_get_contents($filename)) === false) {
    // Do the processing.
    $output = 'Generated content';
    // Save cache file
    file_put_contents($filename, $output);
}
echo $output;
By the way, you may want to consider using fpassthru, which is more memory-efficient, especially for larger files. Using file_get_contents on large files (> 100 MB), will probably cause problems (depending on your configuration).
<?php
$fp = @fopen($filename, 'rb');
if ($fp === false) {
    // Generate output
} else {
    fpassthru($fp);
}
I am trying to build a small daemon in PHP that analyzes the log files on a Linux system (e.g. following the syslog).
I have managed to open the file via fopen and continuously read it with stream_get_line. My problem starts when the monitored file is deleted and recreated (e.g. when rotating logs). The program then does not read anything anymore, even though the file has grown larger than before.
Is there an elegant solution for this? stream_get_meta_data does not help and using tail -f on the command line shows the same problem.
EDIT, added sample code
I tried to boil down the code to a minimum to illustrate what I am looking for
<?php
$break = FALSE;
$handle = fopen('./testlog.txt', 'r');
do {
    $line = stream_get_line($handle, 100, "\n");
    if (!empty($line)) {
        // do something
        echo $line;
    }
    while (feof($handle)) {
        sleep(5);
        $line = stream_get_line($handle, 100, "\n");
        if (!empty($line)) {
            // do something
            echo $line;
        }
        // a commenter on php.net indicated it is possible
        // with tcp streams to distinguish empty and lost;
        // does NOT work here --> need somefunction($handle)
        if ($line !== FALSE && $line === '') $break = TRUE;
    }
} while (!$break);
fclose($handle);
?>
When log files are rotated, the original file is copied, then deleted, and a new file with the same name is created. It may have the same name as the original file, but it has a different inode. Inodes (dumbed-down description follows) are like hidden incremental index numbers for your files. You can change the name of a file, or move it, but it takes the inode with it. Once that original log file is deleted, you can't re-open a file with the same name using the same file handle, because the inode has changed. Your best bet is to detect the failure and attempt to open the new file.
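One way to detect that in PHP is to compare the inode of the handle you are reading from with the inode currently behind the filename, and reopen when they differ (essentially what tail -F does). A sketch, assuming a Linux filesystem where inode numbers are meaningful:
function reopenIfRotated($handle, $path)
{
    clearstatcache(true, $path);
    $current = @stat($path);                   // false while the new log file doesn't exist yet
    $open    = fstat($handle);                 // describes the file we are actually reading
    if ($current === false || $current['ino'] !== $open['ino']) {
        $new = @fopen($path, 'r');             // may fail briefly while the new file is created
        if ($new !== false) {
            fclose($handle);
            $handle = $new;
        }
    }
    return $handle;
}

// e.g. call this inside the polling loop, after each sleep(5):
$handle = reopenIfRotated($handle, './testlog.txt');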