Does PHP offer file concurrency in this scenario?

I have a cache file that is updated every hour or so. The file size ranges from 100KB to 1MB. The cache is updated with file_put_contents().
Only the server writes to the file. However, there is continuous access to the file: users hit a script that performs a one-time read with readfile() to echo the file to the user.
If the caching script is updating the file while the server runs the user-facing read script, or the other way around, would there be a problem? Or is this handled automatically by PHP?

Basically, you should lock the file while writing or reading. At the very least, it guarantees that there is no problem, and it is simply good programming practice. An example is shown below.
<?php
$fp = fopen("/tmp/lock.txt", "w+");
if (flock($fp, LOCK_EX)) { // do an exclusive lock
    fwrite($fp, "Write something here\n");
    flock($fp, LOCK_UN);   // release the lock
} else {
    echo "Couldn't lock the file!";
}
fclose($fp);
?>
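For completeness, the reading side can take a shared lock (LOCK_SH), so a read never overlaps a write: many readers can hold the lock at once, but a writer's LOCK_EX has to wait. A minimal sketch for the scenario in the question (the cache path is a placeholder):
<?php
$fp = fopen("/tmp/cache.txt", "r"); // placeholder path for the cache file
if (flock($fp, LOCK_SH)) {          // shared lock: readers don't block each other
    fpassthru($fp);                 // stream the file to the user, like readfile()
    flock($fp, LOCK_UN);            // release the lock
} else {
    echo "Couldn't lock the file!";
}
fclose($fp);
?>
Keep in mind that flock() is advisory: it only protects you if the writer and every reader take their locks on the same file.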
More information is available in the PHP manual page for flock().

Related

PHP flock does not lock second run of the script

I'm trying to implement a simpler version of the code in this answer:
How to detect whether a PHP script is already running?
The problem described here is my EXACT situation. I want to prevent cron from launching the same script if it's already running.
I started with this simple test code:
<?php
$script_name = __FILE__;
$lock_file_name = basename($script_name, ".php") . ".lock";
$fp = fopen($lock_file_name, "c");
if (flock($fp, LOCK_EX)) { // acquire an exclusive lock
    //ftruncate($fp, 0); // truncate file
    //fwrite($fp, "Write something here\n");
    //fflush($fp);       // flush output before releasing the lock
    flock($fp, LOCK_UN); // release the lock
    echo "GOT LOCK\n";
    sleep(20);
} else {
    echo "Couldn't get the lock!\n";
}
flock($fp, LOCK_UN); // release the lock
fclose($fp);
?>
As I understand it, I launch this code in one console and see GOT LOCK. If I then launch it in another console, I should get the "Couldn't get the lock!" message. However, I get GOT LOCK both times.
What am I doing wrong?
The file is only locked for the duration of the script: once PHP finishes its work, the script terminates and the OS releases the lock.
Note that your sleep(20) call comes after you have already unlocked the file, so there is no pause while the file is locked. In effect you are running the script twice in sequence rather than in parallel.
Solution:
Move the sleep(20) statement to before the lock is released (in fact you unlock twice, so simply removing the first unlock achieves this).
$fp = fopen($lock_file_name, "c");
if (flock($fp, LOCK_EX)) { // acquire an exclusive lock
    echo "GOT LOCK\n";
    /* this time window lets you run another parallel instance
       to prove the locking mechanism works */
    sleep(20);
} else {
    echo "Couldn't get the lock!\n";
}
flock($fp, LOCK_UN); // release the lock
fclose($fp);
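One caveat: flock() with a plain LOCK_EX blocks until the lock becomes available, so the second instance will wait out the sleep(20) rather than print the failure message. For a cron guard that should give up immediately, add LOCK_NB, roughly like this:
$fp = fopen($lock_file_name, "c");
if (flock($fp, LOCK_EX | LOCK_NB)) { // non-blocking: fail at once if locked
    echo "GOT LOCK\n";
    sleep(20);                       // stands in for the real work
    flock($fp, LOCK_UN);             // release the lock
} else {
    echo "Couldn't get the lock!\n";
}
fclose($fp);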

PHP script file loads indefinitely on browser when flock (LOCK_SH/LOCK_EX)

I came across this link while trying to learn how to lock files, to prevent one script from reading a file while another is writing to it, or two scripts from writing to the same file simultaneously.
I created two scripts, readandwritelock.php and readlock.php. The first retrieves the file with file_get_contents, appends to it, and then writes it back to the same file with file_put_contents($file, $data, LOCK_EX); the second just retrieves the file with file_get_contents after flock($file, LOCK_SH).
<?php
//readandwritelock.php
$myfile = fopen('15-11-2018.txt', 'r+');
if (flock($myfile, LOCK_SH)) {
    echo "Gotten lock<br>";
    $current = file_get_contents('15-11-2018.txt');
    /* I commented this out on my second test to see if file_put_contents
       would work. After uncommenting it for the third test, it does not
       work anymore.
    if (flock($myfile, LOCK_UN)) {
        echo "Unlocked<br>";
    }*/
    $current .= "appending";
    if (file_put_contents('15-11-2018.txt', $current, LOCK_EX)) {
        echo "Success";
    } else {
        echo "Failed";
        //browser loads indefinitely so this does not run
    }
    fclose($myfile);
}
?>
The problem I am facing is that on the first try I was able to file_get_contents after getting the lock, then release the lock and proceed to append and file_put_contents($file, $data, LOCK_EX). On the second try I decided to comment out the release of the LOCK_SH lock to test and see what would happen. The script file loads indefinitely (Waiting for localhost...) in my browser, so I reverted the changes for my third try, but this time the script still loads indefinitely. It's as if the LOCK_SH was never released.
I must be doing something wrong, but I do not know what exactly it is. Could someone explain?
This was tested on XAMPP and macOS High Sierra and Chrome.
<?php
//readlock.php
//works as normal
$myfile = fopen('15-11-2018.txt', 'r');
if (flock($myfile, LOCK_SH)) {
    echo "Gotten lock<br>";
    $current = file_get_contents('15-11-2018.txt');
    echo $current;
    if (flock($myfile, LOCK_UN)) {
        echo "<br>Unlocked";
    }
    fclose($myfile);
}
?>
The reason your browser seems to load indefinitely is that your PHP script never finishes.
First you take LOCK_SH (a shared, or read, lock) on your file, which is fine while you are reading the content.
The problem is that you then try to take LOCK_EX (an exclusive lock) on the same file inside file_put_contents. The file_put_contents call therefore blocks until all other locks (shared AND exclusive) are released, which can never happen because your own script still holds the shared lock: a deadlock.
For your code to work properly, you can either take an exclusive lock in the first place:
if (flock($myfile, LOCK_EX)) {
    // ...
or unlock the shared lock before you write:
flock($myfile, LOCK_UN);
if (file_put_contents('15-11-2018.txt', $current, LOCK_EX)) {
    // ...
In general it is a good idea to keep a lock's lifetime as short as possible. If you plan to do extensive manipulation of your data between reading and writing, I would recommend unlocking the file right after reading and locking it again right before writing.
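A rough sketch of that pattern, using the question's file (the "slow work" step stands in for whatever manipulation you need to do):
$fp = fopen('15-11-2018.txt', 'r+');

// short shared lock just for reading
flock($fp, LOCK_SH);
$current = stream_get_contents($fp);
flock($fp, LOCK_UN);

// slow work happens while no lock is held
$current .= "appending";

// short exclusive lock just for writing
flock($fp, LOCK_EX);
ftruncate($fp, 0); // empty the file
rewind($fp);
fwrite($fp, $current);
fflush($fp);       // flush output before releasing the lock
flock($fp, LOCK_UN);
fclose($fp);
The trade-off is that another process may change the file between the two locks, so the data you read can be stale; only do this when last-writer-wins is acceptable.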

PHP file write threading issues

In a PHP webpage, I'm opening a file in write mode, reading and then deleting the first line, and closing the file. (The file has thousands of lines.)
Now, the problem: if there are, say, 100 users connected to that page, all of them will open that file in write mode and then try to write it back after deleting the first line.
Will there be any deadlocks in this situation?
For your information, we are using a Windows server with IIS and PHP 5.
Thanks in advance for your help.
Use flock() to grant access to the file to only one user at a time.
But don't forget to release your file lock with fclose().
Update. Consider this code:
<?php
$start = time();
echo 'Started at '.$start.'<br />';
$filename = 'D:\Kindle\books\Brenson_Teryaya_nevinnost__Avtobiografiya_66542.mobi';
$fp = fopen($filename, 'w+') or die('have no access to '.$filename);
if (flock($fp, LOCK_EX)) {
    echo 'File was locked at '.time().'. Granted exclusive access to write<br />';
} else {
    echo 'File is locked by other user<br />';
}
sleep(3);
flock($fp, LOCK_UN);
echo 'File lock was released at '.time().'<br />';
fclose($fp);
$end = time();
echo 'Finished at '.$end.'<br />';
echo 'Processing time '.($end - $start).'<br />';
Run this code twice (it locks the file for 3 seconds, so starting it by hand in two consoles is enough to make the runs overlap). You will see something like this:
First instance:
File was locked at 1302788738. Granted exclusive access to write
File lock was released at 1302788741
Second:
File was locked at 1302788741. Granted exclusive access to write
File lock was released at 1302788744
Notice that the second instance waited for the first to release the file lock.
If that does not meet your requirements, well... try to invent another solution, like:
each user reads the file, then edits one line and saves it as a temporary file; another user saves his temporary file, and so on; once all users have released the file lock, you compose a new file by applying the temporary files as patches on top of each other (use the saved files' mtimes to decide which file should override the other)... Something like this... maybe... I'm not an expert in this kind of task, unfortunately - just my assumption on how you can get this done.
Use file locking, or a database that allows concurrent access. You will get in trouble otherwise.
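To sketch the database route: SQLite, for example, does its own locking internally, so concurrent PHP requests can simply insert rows without any flock() calls (this assumes the pdo_sqlite extension is available; the path and table are placeholders):
$db = new PDO('sqlite:/tmp/log.db'); // placeholder database file
$db->exec('CREATE TABLE IF NOT EXISTS log (line TEXT)');
$stmt = $db->prepare('INSERT INTO log (line) VALUES (?)');
$stmt->execute(array('one log line')); // SQLite serializes concurrent writers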

file locking in php

I had a newcomer (the next door teenager) write some php code to track some usage on my web site. I'm not familiar with php so I'm asking a bit about concurrent file access.
My native app (on Windows), occasionally logs some data to my site by hitting the URL that contains my php script. The native app does not examine the returned data.
$fh = fopen($updateFile, 'a') or die("can't open file");
fwrite($fh, $ip);
fwrite($fh, ', ');
fwrite($fh, $date);
fwrite($fh, ', ');
fwrite($fh, implode(', ', $_GET));
fwrite($fh, "\r\n");
fclose($fh);
This is a low traffic site, and the data is not critical. But what happens if two users collide and two instances of the script each try to add a line to the file? Is there any implicit file locking in php?
Is the code above at least safe from locking up and never returning control to my user? Can the file get corrupted? If I have the script above delete the file every month, what happens if another instance of the script is in the middle of writing to the file?
You should put a lock on the file:
$fp = fopen($updateFile, 'a'); // 'a' (append) keeps existing lines; 'w+' would truncate the log
if (flock($fp, LOCK_EX)) {
    fwrite($fp, 'a'); // write your line here
    flock($fp, LOCK_UN);
} else {
    echo 'can\'t lock';
}
fclose($fp);
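Since the question's script only appends, the whole open/lock/write/unlock/close dance can also be collapsed into one call: file_put_contents() accepts FILE_APPEND together with LOCK_EX, so something like this (reusing the question's variables) should behave the same:
$line = $ip . ', ' . $date . ', ' . implode(', ', $_GET) . "\r\n";
file_put_contents($updateFile, $line, FILE_APPEND | LOCK_EX);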
For the record, I worked on a library that does this:
https://github.com/EFTEC/DocumentStoreOne
It allows you to CRUD documents by locking the file. I tried 100 concurrent users (100 calls to the PHP script at the same time) and it works.
However, it doesn't use flock() but mkdir():
while (!@mkdir("file.lock")) {
    usleep(10000); // the lock directory exists: someone else holds the lock, retry
}
// use the file: fopen("file"...) etc.
rmdir("file.lock"); // release the lock
Why?
mkdir() is atomic, so the lock is atomic: in a single step, you either take the lock or you don't.
It's faster than flock(); apparently flock() requires several calls to the file system.
flock()'s behaviour depends on the operating system.
I did a stress test and it worked.
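One caveat with the mkdir() approach: if a script dies between mkdir() and rmdir(), the lock directory is left behind and everyone blocks forever. A common workaround is to treat an old lock directory as stale (the 60-second threshold below is an arbitrary choice for illustration):
$lockDir = "file.lock";
// stale-lock guard: break a lock older than 60 seconds
if (is_dir($lockDir) && time() - filemtime($lockDir) > 60) {
    @rmdir($lockDir);
}
while (!@mkdir($lockDir)) {
    usleep(10000); // someone else holds the lock; wait and retry
}
// ... critical section: use the file ...
rmdir($lockDir); // release the lock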
Since this is an append to the file, the best way would be to aggregate the data and write it to the file in one fwrite(), provided the data to be written is not bigger than the file buffer. Of course you don't always know the size of the buffer, so flock() is always a good option.
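Something like this, combining the single fwrite() with a lock (built from the question's own code):
// build the whole log line first...
$line = $ip . ', ' . $date . ', ' . implode(', ', $_GET) . "\r\n";

$fh = fopen($updateFile, 'a') or die("can't open file");
if (flock($fh, LOCK_EX)) {
    fwrite($fh, $line); // ...then write it in one call, so no reader sees a partial line
    flock($fh, LOCK_UN);
}
fclose($fh);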

php script will only run one at a time

I have a script and I want to make sure only one instance of it is running at a time. This is what I am doing:
$fp = fopen("lock.txt", "w");
if (flock($fp, LOCK_EX|LOCK_NB)) { // do an exclusive lock
    //Processing code
    flock($fp, LOCK_UN);
} else {
    echo "Could not lock file!";
}
fclose($fp);
The problem is that when I start the second script, it just sits there waiting. If I then stop the first script, the second script will then print "Could not lock file!". Why doesn't the second script just stop immediately and report that message?
If it ran, the second script would see that the file is locked and should exit. When I watch my processes I can see the second script sitting there... it's just waiting.
Thanks.
EDIT: I have just tried a quick-and-dirty database lock, i.e. setting a running field to -1 and checking for it when the script opens, but that doesn't work. I have also tried using sockets as described here. It seems like the second script won't even run... Should I even be worried?
$fp = fopen("lock.txt", "w");
$block = 0;
if (flock($fp, LOCK_EX|LOCK_NB, $block)) { // do an exclusive lock
    sleep(5);
    flock($fp, LOCK_UN);
    echo "Could lock file!";
} else {
    echo "Could not lock file!";
}
fclose($fp);
works for me. Try adding the 3rd parameter (the $wouldblock reference).
Edit:
You may have a session locking problem:
when you call session_start(), it blocks (puts further requests in a waiting state) any other requests to your script until the original script that started the session has finished. To test this, try accessing with 2 different browsers, or avoid calling session_start() in this script.
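If the script does need the session, you can also release the session lock early with session_write_close() once you are done reading or writing $_SESSION (the user_id key below is just an example):
session_start();
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;
session_write_close(); // releases the session lock so parallel requests can proceed
// long-running work here no longer blocks other requests from the same browser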
