About PHP parallel file read/write

I have a file on a website. A PHP script modifies it like this:
$contents = file_get_contents("MyFile");
// ** Modify $contents **
// Now rewrite:
$file = fopen("MyFile","w+");
fwrite($file, $contents);
fclose($file);
The modification is pretty simple. It grabs the file's contents and adds a few lines. Then it overwrites the file.
I am aware that PHP has a function for appending contents to a file rather than overwriting it all over again. However, I want to keep using this method since I'll probably change the modification algorithm in the future (so appending may not be enough).
Anyway, I was testing this out, making like 100 requests. Each time I call the script, I add a new line to the file:
First call:
First!
Second call:
First!
Second!
Third call:
First!
Second!
Third!
Pretty cool. But then:
Fourth call:
Fourth!
Fifth call:
Fourth!
Fifth!
As you can see, the first, second and third lines simply disappeared.
I've determined that the problem isn't the contents string modification algorithm (I've tested it separately). Something is messed up either when reading or writing the file.
I think it is very likely that the issue is when the file's contents are read: if $contents, for some odd reason, is empty, then the behavior shown above makes sense.
I'm no expert with PHP, but perhaps the fact that I performed 100 calls almost simultaneously caused this issue. What if there are two processes, and one is writing the file while the other is reading it?
What is the recommended approach for this issue? How should I manage file modifications when several processes could be writing/reading the same file?

What you need to do is use flock() (file lock)
What I think is happening is that your script grabs the file while the previous request is still rewriting it. Since fopen("MyFile", "w+") truncates the file before the new contents are written, a concurrent read can catch the file in its empty state, so PHP gets an empty string, and once that later process finishes it overwrites the file with only its own lines.
The solution is to have the script usleep() for a few milliseconds when the file is locked and then try again. Just be sure to put a limit on how many times your script can try.
NOTICE:
If another PHP script or application accesses the file, it may not necessarily use/check for file locks. flock() locks are advisory: they only restrain processes that also call flock() on the same file, so any process that ignores them can still read or write the file freely.
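Here is a minimal sketch of that retry approach, using a non-blocking lock (LOCK_EX | LOCK_NB); the retry count and delay are illustrative values, not anything mandated by PHP:
$fh = fopen('MyFile', 'c+'); // 'c+' opens for read/write without truncating
$locked = false;
for ($try = 0; $try < 50; $try++) {
    if (flock($fh, LOCK_EX | LOCK_NB)) { // don't block; returns false if the lock is taken
        $locked = true;
        break;
    }
    usleep(10000); // wait 10 ms before trying again
}
if ($locked) {
    // ... read, modify and rewrite the file here ...
    flock($fh, LOCK_UN); // release the lock
}
fclose($fh);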

So the issue is parallel access to the same file: while one instance is writing to the file, another is reading it before it has been updated.
Luckily, PHP has a mechanism for locking a file so that no one can read from it until the lock is released and the file has been updated:
flock()
can be used; see the flock() documentation in the PHP manual.

You need to create a lock, so that any concurrent requests will have to wait their turn. This can be done using the flock() function. You will have to use fopen(), as opposed to file_get_contents(), but it should not be a problem:
$file = 'file.txt';
$fh = fopen($file, 'r+');
if (flock($fh, LOCK_EX)) { // Get an exclusive lock
    $size = filesize($file);
    $data = $size > 0 ? fread($fh, $size) : ''; // fread() needs a length greater than 0
    // Do something with $data here...
    ftruncate($fh, 0); // Empty the file
    rewind($fh);       // Move the pointer back to the start before writing
    fwrite($fh, $newData); // Write new data to file
    fclose($fh); // Close handle and release lock
} else {
    die('Unable to get a lock on file: '.$file);
}

Related

PHP Lock on file before unlink

I wanted to wait for all processes reading a certain file in PHP by obtaining an exclusive lock on that file, and after that delete (unlink) the file. This concerns files like profile pictures which a user can delete or change. The name of the file will be something like the user ID.
My code:
//Obtain lock
$file = fopen("path/to/file", "r"); //(I'm not sure which mode to use here btw)
flock($file, LOCK_EX);
//Delete file
unlink("path/to/file");
The flock() call on line 3 waits for all locks to be released, which is good, but the unlink() function throws an error: Warning: unlink(path/to/file): Resource temporarily unavailable in path/to/script on line xx
To prevent this I could release the lock before calling unlink, but this means another process could lock on the file again, which would cause the same error.
My questions are:
Is it possible to delete a file in PHP without releasing the lock? That is, without the risk of other processes trying to use the file at the same time.
If not:
Is this possible in Windows at all? How about Unix?
Should I involve my database for this matter and lock on rows in the database instead, or is there a better way?
Another option I can see is repeating this piece of code, including a release of the lock before calling unlink, until unlink succeeds, but this seems a bit messy, right?
Hey I'm struggling with this too, 2 years later. Kind of seems dumb you can't acquire an exclusive lock on a file when trying to rename or unlink it, or at least the documentation isn't there for doing this.
One solution is to open the file for writing, acquire an exclusive lock, clear the contents of the file using ftruncate(), close it, and then unlink it. When you're reading from the file, you can check the size to make sure the file has contents.
When deleting (untested code):
$fh = fopen('yourfile.txt', 'c'); // 'w' mode truncates file, you don't want to do that yet!
flock($fh, LOCK_EX); // blocking, but you can use LOCK_EX | LOCK_NB for nonblocking and a loop + sleep(1) for a timeout
ftruncate($fh, 0); // truncate file to 0 length
fclose($fh);
unlink('yourfile.txt');
When reading (untested code):
if (!file_exists('yourfile.txt') || filesize('yourfile.txt') <= 0) {
    print 'nah.jpg, must be dELeTeD :O';
}

Find out whether fopen(..., 'a') created a new file

In my PHP project, I use a kind of counter that appends to an existing (or new) file very often:
$f = fopen($filename, 'ab');
fwrite($f, $data); // $data stands for whatever is being appended
fclose($f);
When a new file is created, I have to edit this file's permissions, so another user may access the file as well:
$existed = file_exists($filename);
// Do the append
$f = fopen($filename, 'ab');
fwrite($f, $data);
fclose($f);
// Update permissions
if (!$existed) {
    @chmod($filename, 0666);
}
Is there any way to find out whether 'a' (append) created a new file or appended to an existing one, without using file_exists()? To my understanding, file_exists() retrieves the file stats, which causes some unnecessary overhead compared to a simple file append. As the function is used very often, I wonder if there's a way to tell whether fopen(..., 'a') created a new file without using file_exists().
Note: This is mostly a question of style and interest, not a true performance issue. But if I am mistaken and fopen() already retrieves the file stats, please let me know!
Update
Okay, it really is a rather academic question. Here are some performance tests run on a Windows system (Apache, Win 8.1, no UNIX file permissions) and a Linux machine (Nginx, Ubuntu 14.04, virtual machine).
Each test was run with 1000 repetitions, with the file deleted before the first repetition.

                                      Win     Linux
simply append one byte                1.8ms    9.4ms
append + clearstatcache()             1.8ms    9.3ms
test file_exists() + append           2.2ms   10.5ms
file_exists() + append + clear        2.2ms   11.0ms
append + chmod()                      2.7ms   12.3ms
append + file_exists() -> chmod()     3.3ms   10.6ms

Note: The last one is the only one that uses an IF within the test loop.
PHP's fopen() is just a call to the libc fopen(), which automatically creates the file for the modes w, w+, a and a+. As far as I can see, there is no way to get the stat with the permission bits from the returned file pointer.
It seems that PHP stores the stat array for each opened file, and you can access it with fstat($fp) on the opened file handle $fp. But the mode field contains "inode permission bits", and I can't immediately see how those are related to the "UNIX file mode"; the stat system call does not use this term.
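For what it's worth, a small illustration of reading that stat array from an open handle ($filename is a placeholder):
$fp = fopen($filename, 'ab');
$stat = fstat($fp); // PHP's stat array for the open handle
// The low 9 bits of 'mode' are the familiar rwxrwxrwx permission bits.
printf("mode: %o, permissions: %o\n", $stat['mode'], $stat['mode'] & 0777);
fclose($fp);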
You can try opening your file with "r+" mode and create it only if that fails. If "r+" succeeds, you know the file already existed, but you then need to seek to the end yourself to get append-like behaviour.
But finally it's best to check for existence before you open the file.
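For illustration, here is a minimal sketch of that "r+" probe ($filename and $data are placeholders); note there is a small race window between the failed open and the create:
$f = @fopen($filename, 'r+'); // returns false if the file does not exist
$created = false;
if ($f === false) {
    $f = fopen($filename, 'ab'); // create the file
    $created = true;
} else {
    fseek($f, 0, SEEK_END); // emulate append mode on the existing file
}
fwrite($f, $data);
fclose($f);
if ($created) {
    @chmod($filename, 0666);
}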
No, fopen() just returns the resource; it doesn't return or set a flag that indicates whether the file already existed - http://php.net/manual/en/function.fopen.php
EDIT: see the performance test in the edited question.
Why not call chmod() every time?
Your file_exists() is probably more expensive than a chmod() (run a little performance test to check...).
// Do the append
$f = fopen($filename, 'ab');
fwrite($f, $data);
fclose($f);
// Update permissions
@chmod($filename, 0666);

PHP write to included file

I need to include a PHP file and execute a function from it.
At the end of the script, after execution, I want to append something to that same file.
But I'm unable to open the file. Is it possible to close an included file (or anything similar) so that I can append info to the PHP file?
include 'something.php';
echo $somethingFromIncludedFile;
//Few hundred lines later
$fh = fopen('something.php', 'a') or die('Unable to open file');
$log = "\n".'$usr[\''.$key.'\'] = \''.$val.'\';';
fwrite($fh, $log);
fclose($fh);
How to achieve that?
In general you should never modify your PHP code using PHP itself. It's bad practice, first of all from a security standpoint. I am sure you can achieve what you need in another way.
As Alex says, self-modifying code is very, VERY dangerous. And NOT separating data from code is just dumb. On top of both these warnings is the fact that PHP arrays are relatively slow and do not scale well (so you could file_put_contents('data.ser', serialize($usr)) / $usr = unserialize(file_get_contents('data.ser')), but it's only going to work for small numbers of users).
Then you've got the problem of using conventional files to store data in a multi-user context - this is possible, but you need to build sophisticated locking/queue management. That usually entails using a daemon to manage the queue/mutex, and is invariably more effort than it's worth.
Use a database to store data.
As you already know, this attempt is not one of the good ones. If you REALLY want to include your file and then append something to it, you can do it the following way.
Be aware that using eval() is risky if you cannot be 100% sure that the content of the file does not contain harmful code.
// This part is a replacement for your include
$fileContent = file_get_contents("something.php");
eval('?>'.$fileContent); // the leading '?>' lets eval() handle the file's opening <?php tag
// your echo goes here
// billion lines of code ;)
// file append mechanics
$fp = fopen("something.php", "a") or die("Unexpected file open error!");
fputs($fp, "\n".'$usr[\''.$key.'\'] = \''.$val.'\';');
fclose($fp);

PHP queue file implementation

For a project I was working on I need a queue which will be too large to hold in normal memory. I had been implementing it as a simple file where it would read the whole file take the first few (~100) lines, process them, then write back the updated queue with new instructions added and the old ones removed. However, since the queue became too large to hold in memory like this I need something different. Preferably someone can tell me a way to peel off just the first few lines of a file without having to look at the rest of the data. I had thought about using a database (MySQL probably with sorted insert timestamps) but I would heavily prefer to do it without for load and bandwidth reasons (several servers would have to all be sending and receiving a lot of data from the DB). The language I'm working in is PHP but really this question is more about unix files I suppose. Any help would be appreciated.
Sucking out the first line of a file is pretty trivial (fopen() followed by an fgets()). Re-writing the file to remove completed jobs would be very painful, especially if you've got multiple concurrent servers working off the same queue file.
One alternative would be to use a separate file for each job. If you have some concurrency-safe method of generating an incrementing ID for these files, then it'd be a simple matter of picking out the file with the lowest ID for the oldest job, and generating a new ID for each new job. You'd have to figure out some file locking, though, to keep two or more servers from grabbing the same file at the same time; one sketch of that follows.
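A minimal sketch of that one-file-per-job idea, using an atomic rename() to claim a job so two workers can't both win the same file; the directory layout and .job naming are assumptions:
$queueDir = '/path/to/queue';
$jobs = glob($queueDir.'/*.job');
sort($jobs); // lowest (oldest) ID first
foreach ($jobs as $job) {
    $claimed = $job.'.'.getmypid().'.working';
    if (@rename($job, $claimed)) { // rename() is atomic, so this fails if another worker got there first
        $payload = file_get_contents($claimed);
        // ... process $payload ...
        unlink($claimed);
        break;
    }
}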
I had the same problems while working on the enqueue/fs transport. I could not find a way to modify a small portion at the beginning of the file without copying it into memory and saving it back. It is possible, however, to do that with the end of the file: you can read a portion and then truncate it. That's not really a queue but a stack, so if you rely on message ordering this would not be a solution. In my case, I lock the file while it is being read; once the message has been read, the lock is released.
This is how you could write messages to a queue file:
<?php
$rawMessage = 'this is your message to put to the queue as a string';

$queueFile = fopen('/path/to/queue/file', 'a+'); // note: 'a+', not '+a'
// Pad the message so its length is a multiple of the frame size (64 bytes);
// that makes it easier to read messages back from the file frame by frame.
$rawMessage = str_repeat(' ', 64 - (strlen($rawMessage) % 64)).$rawMessage;

flock($queueFile, LOCK_EX); // lock file
fwrite($queueFile, $rawMessage);
flock($queueFile, LOCK_UN); // release lock
fclose($queueFile);
This is how you could read messages from a queue file:
<?php
$queueFile = fopen('/path/to/queue/file', 'c+'); // note: 'c+', not '+c'

flock($queueFile, LOCK_EX); // lock file
$frame = readFrame($queueFile, 1);
ftruncate($queueFile, fstat($queueFile)['size'] - strlen($frame));
rewind($queueFile);
$rawMessage = substr(trim($frame), 1);
flock($queueFile, LOCK_UN); // release lock
fclose($queueFile);

function readFrame($file, $frameNumber)
{
    $frameSize = 64;
    $offset = $frameNumber * $frameSize;

    fseek($file, -$offset, SEEK_END);
    $frame = fread($file, $frameSize);
    if ('' == $frame) {
        return '';
    }

    if (false !== strpos($frame, '|{')) {
        return $frame;
    }

    return readFrame($file, $frameNumber + 1).$frame;
}
For the locking I'd suggest using the Symfony LockHandler, or simply using enqueue/fs.
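A hedged sketch of what that LockHandler usage might look like (LockHandler lives in the symfony/filesystem component and has since been superseded by symfony/lock; the lock name here is arbitrary):
use Symfony\Component\Filesystem\LockHandler;

$lockHandler = new LockHandler('queue-file.lock');
if (!$lockHandler->lock()) { // lock(true) would block instead of returning false
    return; // another process currently holds the lock
}
// ... safely read from / write to the queue file here ...
$lockHandler->release();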

What's the best way to read from and then overwrite file contents in php?

What's the cleanest way in PHP to open a file, read the contents, and subsequently overwrite the file's contents with some output based on the original contents? Specifically, I'm trying to open a file populated with a list of items (separated by newlines), process/add items to the list, remove the oldest N entries from the list, and finally write the list back into the file.
$handle = fopen($path, 'a+');
flock($handle, LOCK_EX);
$contents = fread($handle, filesize($path));
// process contents and remove old entries
fwrite($handle, $contents);
flock($handle, LOCK_UN);
fclose($handle);
Note that I need to lock the file with flock() in order to protect it across multiple page requests. Will the 'w+' mode when fopen()ing do the trick? The PHP manual states that it will truncate the file to zero length, so it seems that may prevent me from reading the file's current contents.
If the file isn't overly large (that is, you can be confident loading it won't blow PHP's memory limit), then the easiest way to go is to just read the entire file into a string (file_get_contents()), process the string, and write the result back to the file (file_put_contents()). This approach has two problems:
If the file is too large (say, tens or hundreds of megabytes), or the processing is memory-hungry, you're going to run out of memory (even more so when you have multiple instances of the thing running).
The operation is destructive; when the saving fails halfway through, you lose all your original data.
If either of these is a concern, plan B is to process the file and at the same time write to a temporary file; after successful completion, close both files, rename (or delete) the original file, and then rename the temporary file to the original filename.
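A minimal sketch of that plan B, assuming the file is processed line by line; process_line() is a hypothetical stand-in for your actual logic:
$src = fopen($filename, 'rb');
$tmpName = $filename.'.tmp';
$tmp = fopen($tmpName, 'wb');
while (($line = fgets($src)) !== false) {
    fwrite($tmp, process_line($line)); // process_line() is a placeholder
}
fclose($src);
fclose($tmp);
rename($tmpName, $filename); // on POSIX systems this atomically replaces the original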
Read:
$data = file_get_contents($filename);
Write:
file_put_contents($filename, $data);
One solution is to use a separate lock file to control access.
This solution assumes that only your script, or scripts you have access to, will want to write to the file. This is because the scripts will need to know to check a separate file for access.
$file_lock = obtain_file_lock();
if ($file_lock) {
    $old_information = file_get_contents('/path/to/main/file');
    $new_information = update_information_somehow($old_information);
    file_put_contents('/path/to/main/file', $new_information);
    release_file_lock($file_lock);
}

function obtain_file_lock() {
    $attempts = 10;
    // There are probably better ways of dealing with waiting for a file
    // lock but this shows the principle of dealing with the original
    // question.
    for ($ii = 0; $ii < $attempts; $ii++) {
        $lock_file = fopen('/path/to/lock/file', 'r'); // only need read access
        if (flock($lock_file, LOCK_EX)) {
            return $lock_file;
        } else {
            // give the other process time to release the lock
            usleep(100000); // 0.1 seconds
        }
    }
    // This is only reached if all attempts fail.
    // Error code here for dealing with that eventuality.
}

function release_file_lock($lock_file) {
    flock($lock_file, LOCK_UN);
    fclose($lock_file);
}
This should prevent a concurrently-running script reading old information and updating that, causing you to lose information that another script has updated after you read the file. It will allow only one instance of the script to read the file and then overwrite it with updated information.
While this hopefully answers the original question, it doesn't give a good solution to making sure all concurrent scripts have the ability to record their information eventually.
