PHP flock not having any effect in Ubuntu - php

I have one php process locking a file with flock.
I then start another process that deletes the file.
On Windows, I get the expected behaviour: the locked file cannot be deleted. However, on Ubuntu (both WSL and a full Ubuntu installation) I am always able to delete the file.
I have read other similar questions, but they all seem to have been tested incorrectly.
I am pretty sure I am testing this correctly.
UPDATED: More thorough testing.
After reading "Why is unlink successful on an open file?" and learning about the delete queue in the file system, I realized I also needed to test reading and writing.
Testing method.
Open a terminal and run my test file:
php test/flock_file_locking_with_splfileobject.php 👈 locks the file and waits for 30 seconds
Open a second terminal and run the following:
php test/flock_file_locking_with_splfileobject.php read
php test/flock_file_locking_with_splfileobject.php write
php test/flock_file_locking_with_splfileobject.php delete
Here's the contents of test/flock_file_locking_with_splfileobject.php
See comments in the code describing the output I get.
<?php
$me = array_shift($argv);
$command = array_shift($argv);
$fileName = 'cats';

switch ($command) {
    case 'delete':
        // Attempt to delete the file.
        if (!unlink($fileName)) {
            echo "Failed to delete file!";
            exit(1);
        }
        echo "File was deleted:";
        var_dump(file_exists($fileName) === false);
        // On Linux I get: File was deleted:bool(true)
        // On Windows I get: 'PHP Warning: unlink(cats): Resource temporarily unavailable'
        break;
    case 'write':
        $written = file_put_contents($fileName, 'some datah!');
        echo "Written bytes:";
        var_dump($written);
        // On Linux I get: 'Written bytes:int(11)'
        // On Windows I get: Warning: file_put_contents(): Only 0 of 11 bytes written, possibly out of free disk space
        break;
    case 'read':
        $read = file_get_contents($fileName);
        echo "Read: ";
        var_dump($read);
        // On Linux I get: 'Read: string(21) "file should be locked"'
        // OR 'Read: string(11) "some datah!"' if I run the write command first.
        // On Windows I get no error but also no data: 'Read: string(0) ""'
        break;
    default:
        $file = new \SplFileObject($fileName, 'a+'); // file gets created by a+
        if (!$file->flock(LOCK_EX)) {
            echo "Unable to lock the file.\n";
            exit(1);
        }
        $file->fwrite('file should be locked');
        echo "File should now be locked, try running the delete/write/read commands in another terminal.",
            "I'll wait 30 seconds and try to write to the file...\n";
        sleep(30);
        if (!$file->fwrite('file is now unlocked')) {
            echo "Unable to write to file.\n";
            exit(1);
        }
        // On either system this final write succeeds regardless of what's going on.
        break;
}
echo "\n";
Windows behaves as it should, but on Linux my second process can do anything it likes to the file while the first process apparently holds a successful file lock.
Obviously, I can't trust flock in production if it only works on Windows.
Is there any way to actually get an exclusive file lock?
Any ideas?
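Update: one thing I will probably end up doing is making every command acquire the lock itself before touching the file, since on Linux the lock is only advisory. A rough, untested sketch of a cooperating delete command (same cats file as above; LOCK_NB makes the check non-blocking):
<?php
$fileName = 'cats';

$fp = fopen($fileName, 'r');
if ($fp === false) {
    echo "Could not open {$fileName}\n";
    exit(1);
}

// LOCK_NB: return immediately instead of waiting for the lock.
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    echo "File is locked by another process, not deleting.\n";
    fclose($fp);
    exit(1);
}

// We hold the lock, so by convention it is now safe to delete.
flock($fp, LOCK_UN);
fclose($fp);
unlink($fileName);
echo "Deleted {$fileName}\n";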

Related

`LOCK_EX` prohibits reading, but not writing?

Why can I not read a file locked with LOCK_EX? I am still able to write to it.
I wanted to know what happens if one process locks a file (with LOCK_SH or LOCK_EX) and another process tries to read from or write to this file while ignoring the lock entirely. So I made a little script that has three functionalities:
Locking: Opens the target file, writes to it, locks the file (with specified lock), writes to it again, sleeps 10 seconds, unlocks it and closes it.
Reading: Opens the target file, reads from it and closes it.
Writing: Opens the target file, writes to it and closes it.
I tested it by having two consoles side by side and doing the following:
FIRST CONSOLE | SECOND CONSOLE
-----------------------------+-----------------------
php test lock LOCK_SH | php test read
php test lock LOCK_SH | php test write
php test lock LOCK_EX | php test read
php test lock LOCK_EX | php test write
LOCK_SH seems to have no effect at all, because the first process as well as the second process can read and write to the file. If the file is being locked with LOCK_EX by the first process, both processes can still write to it, but only the first process can read. Is there any reasoning behind this?
Here is my little test program (tested on Windows 7 Home Premium 64-bit):
<?php
// USAGE: php test [lock | read | write] [LOCK_SH | LOCK_EX]
// The first argument specifies whether this script should lock the file,
// read from it or write to it.
// The second argument is only used in lock mode and specifies whether
// LOCK_SH or LOCK_EX should be used to lock the file.

// Reads $file and logs information.
function r ($file) {
    echo "Reading file\n";
    if (($buffer = @fread($file, 64)) !== false)
        echo "Read ", strlen($buffer), " bytes: ", $buffer, "\n";
    else
        echo "Could not read file\n";
}

// Sets the cursor to 0.
function resetCursor ($file) {
    echo "Resetting cursor\n", @fseek($file, 0, SEEK_SET) === 0 ? "Reset cursor" : "Could not reset cursor", "\n";
}

// Writes $str to $file and logs information.
function w ($file, $str) {
    echo "Writing \"", $str, "\"\n";
    if (($bytes = @fwrite($file, $str)) !== false)
        echo "Wrote ", $bytes, " bytes\n";
    else
        echo "Could not write to file\n";
}

// "ENTRYPOINT"
if (($file = @fopen("check", "a+")) !== false) {
    echo "Opened file\n";
    switch ($argv[1]) {
        case "lock":
            w($file, "1");
            echo "Locking file\n";
            if (@flock($file, constant($argv[2]))) {
                echo "Locked file\n";
                w($file, "2");
                resetCursor($file);
                r($file);
                echo "Sleeping 10 seconds\n";
                sleep(10);
                echo "Woke up\n";
                echo "Unlocking file\n", @flock($file, LOCK_UN) ? "Unlocked file" : "Could not unlock file", "\n";
            } else {
                echo "Could not lock file\n";
            }
            break;
        case "read":
            resetCursor($file);
            r($file);
            break;
        case "write":
            w($file, "3");
            break;
    }
    echo "Closing file\n", @fclose($file) ? "Closed file" : "Could not close file", "\n";
} else {
    echo "Could not open file\n";
}
?>
That's a very good question, but also a complex one because it depends on a lot of conditions.
We have to start with another pair of locking types - advisory and mandatory:
Advisory locking simply gives you "status flags" by which you know whether a resource is locked or not.
Mandatory locking enforces the locks, regardless of whether you're checking these "status flags".
... and that should answer your question, but I'll continue in order to explain your particular case.
What you seem to be experiencing is the behavior of advisory locks - there's nothing preventing you from reading or writing to a file, no matter if there is a lock for it or if you even checked for one.
However, you will find a note in the PHP manual for flock(), saying the following:
flock() uses mandatory locking instead of advisory locking on Windows. Mandatory locking is also supported on Linux and System V based operating systems via the usual mechanism supported by the fcntl() system call: that is, if the file in question has the setgid permission bit set and the group execution bit cleared. On Linux, the file system will also need to be mounted with the mand option for this to work.
So, if PHP uses mandatory locking on Windows and you've tested this on Windows, either the manual is wrong/outdated/inaccurate (I'm too lazy to check for that right now) or you have to read this big red warning on the same page:
On some operating systems flock() is implemented at the process level. When using a multithreaded server API like ISAPI you may not be able to rely on flock() to protect files against other PHP scripts running in parallel threads of the same server instance!
flock() is not supported on antiquated filesystems like FAT and its derivates and will therefore always return FALSE under this environments (this is especially true for Windows 98 users).
I don't believe it is even possible for your php-cli executable to somehow spawn threads for itself, so the remaining possibility is that you're using a filesystem that simply doesn't support locking.
My guess is that the manual isn't entirely accurate and you do in fact get advisory locks on Windows, because you are also experiencing a different behavior between LOCK_EX (exclusive lock) and LOCK_SH (shared lock) - it doesn't make sense for them to differ if your filesystem just ignores the locks.
And that brings us to the difference between exclusive and shared locks or LOCK_EX and LOCK_SH respectively. The logic behind both is based around writing, but there's a slight difference ...
Exclusive locks are (generally) used when you want to write to a file, because only one process at a time may hold an exclusive lock over the same resource. That gives you safety, because no other process would read from that resource while the lock-holder writes to it.
Shared locks are used to ensure that a resource is not being written to while you read from it. Since no process is writing to the resource, it is not being modified and is therefore safe to read from for multiple processes at the same time.
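To make the advisory scheme actually do something, every process has to opt in. Here is a minimal sketch of that cooperative pattern (the function names and the data.txt path are just examples, not taken from the questions above): writers take LOCK_EX, readers take LOCK_SH, and each blocks until the other side releases its lock.
<?php
// Writer: exclusive lock; only one process at a time may hold it.
function writeData($path, $data) {
    $fp = fopen($path, 'c'); // create if missing, do not truncate yet
    if ($fp === false || !flock($fp, LOCK_EX)) {
        return false;        // could not open or lock
    }
    ftruncate($fp, 0);       // safe: we hold the exclusive lock
    fwrite($fp, $data);
    fflush($fp);
    flock($fp, LOCK_UN);
    fclose($fp);
    return true;
}

// Reader: shared lock; blocks while a writer holds LOCK_EX,
// but any number of readers may hold LOCK_SH at the same time.
function readData($path) {
    $fp = fopen($path, 'r');
    if ($fp === false || !flock($fp, LOCK_SH)) {
        return false;
    }
    $data = stream_get_contents($fp);
    flock($fp, LOCK_UN);
    fclose($fp);
    return $data;
}

// Example: writeData('data.txt', 'hello'); echo readData('data.txt');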

PHP lock files when writing

I am testing my code using a little database stored in txt files. The most important problem I have found is when users write to one file at the same time. To solve this I am using flock.
My computer runs Windows with XAMPP installed (I mention this because I understand flock works reliably on Linux, not on Windows). However, I need to run this test on a Linux server.
So far I have tested my code by loading the same script in 20 browser windows at the same time. The first results work fine, but after the test the database file appears empty.
My Code :
$file_db = file("test.db");
$fd = fopen("".$db_name."", "w");
if (flock($fd, LOCK_EX))
{
    ftruncate($fd, 0);
    for ($i = 0; $i < sizeof($file_db); $i++)
    {
        fputs($fd, "$file_db[$i]"."\n");
    }
    fflush($fd);
    flock($fd, LOCK_UN);
    fclose($fd);
}
else
{
    print "Db Busy";
}
How is it possible that the script deletes the database file's content? What is the proper way to handle this: keep flock and fix the existing code, or use some alternative technique instead of flock?
I have rewritten the script using @lolka_bolka's answer and it works. So, in answer to your question: the file $db_name could be empty if the file test.db is empty.
ftruncate() after fopen() with "w" is useless, because the "w" mode already truncates the file.
The file() function:
Returns the file in an array. Each element of the array corresponds to a line in the file, with the newline still attached. Upon failure, file() returns FALSE.
So you do not need to append an additional end-of-line character.
The flock() function:
PHP supports a portable way of locking complete files in an advisory way (which means all accessing programs have to use the same way of locking or it will not work).
This means the file() function is not affected by the lock: $file_db = file("test.db"); could read the file while another process is somewhere between ftruncate($fd, 0); and fflush($fd);. So you need to read the file's contents inside the lock.
$db_name = "file.db";
$fd = fopen($db_name, "r+"); // "w" changed to "r+": get a file handle without truncating the file
if (flock($fd, LOCK_EX))
{
    $file_db = file($db_name); // read the file contents while the lock is held
    ftruncate($fd, 0);
    for ($i = 0; $i < sizeof($file_db); $i++)
    {
        fputs($fd, "$file_db[$i]");
    }
    fflush($fd);
    flock($fd, LOCK_UN);
}
else
{
    print "Db Busy";
}
fclose($fd); // fclose() should be called in either case
P.S. You could test this script from the console:
$ for i in {1..20}; do php 'file.php' >> file.log 2>&1 & done

PHP fopen() on local file that DOES exist

Using PHP 5.2.17 on a Linux server. My office production machine is Windows 7 Professional with Service Pack 1 installed.
Trying desperately -- and so far, in vain -- to get fopen() to find and open a .csv file on my local machine in order to import records to an existing MySQL database on the server. Consistently getting a "failed to open stream" error message.
Here is part of the code, with explanatory notes / server responses, including notes on what I have tried:
ini_set('track_errors', 1); // Set just to make sure I was seeing all of the error codes
ini_set('user_agent', $_SERVER['HTTP_USER_AGENT']); // Tried after reading a note; no effect
error_reporting(E_ALL); // Again, just to make sure all error codes are visible!
echo(get_include_path()."<br />"); // Initially returns: .:/usr/local/php5/lib/php
set_include_path("C:\SWSRE\\"); // Have tried with BOTH forward and backslashes, BOTH single and doubled, in every conceivable combination!
ini_set('safe_mode_include_dir', "C:\SWSRE"); // Ditto here for slashes!
echo(get_include_path()."<br />"); // NOW echoes "C:\SWSRE\"
clearstatcache(); // Just in case this was a problem -- added after reading a note on php.net
$file = "Individuals.txt"; // This absolutely DOES exist locally in C:\SWSRE\ (29 MB)

// Inserted the following tests to see if it even SEES the file. It does NOT.
if (file_exists("Individuals.txt")) {
    echo("File EXISTS!!!<br />");
} else {
    echo("File does NOT exist!<br />"); // Echoes "File does NOT exist!"
}
if (is_readable($file)) {
    echo 'readable';
} else {
    echo 'NOT readable!<br />'; // Echoes "NOT readable!"
}
if (is_writable($file)) {
    echo 'writable';
} else {
    echo 'NOT writable!<br />'; // Echoes "NOT writable!"
}
$handle = fopen("Individuals.txt", "r+", TRUE);
Here are the final PHP error messages:
Warning: fopen(Individuals.txt) [function.fopen]: failed to open stream: No such file or directory in /home/content/b/u/r/burtsweeto/html/ADREImport.php on line 145
array(4) { ["type"]=> int(2) ["message"]=> string(118) "fopen(Individuals.txt) [function.fopen]: failed to open stream: No such file or directory" ["file"]=> string(56) "/home/content/b/u/r/burtsweeto/html/ADREImport.php" ["line"]=> int(145) }
Finally, I have tried putting the file in the directory where the PHP script is running, and it does work correctly there! I'm just trying to minimize ongoing headaches for the end user by not having to upload a huge file before doing the import.
Any suggestions that I have not tried?
Add the full path to $file, like this:
$file = "C:\\SWSRE\\Individuals.txt";
set_include_path() and the ini_set() calls do what they sound like: they adjust include paths, which is not the same as all paths. file_exists() expects either an absolute path or a path relative to the PHP file calling it; it is not affected by set_include_path() or ini_set('safe_mode_include_dir', ...).
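For completeness, only some filesystem functions honour the include path at all: fopen() does when its third argument (use_include_path) is true, while file_exists() never does. A small illustration of the difference (the Windows path only makes sense if the script actually runs on the machine where C:\SWSRE exists):
<?php
set_include_path("C:\\SWSRE");

// file_exists() ignores the include path entirely; it only checks the
// path you give it (relative to the current working directory, or absolute):
var_dump(file_exists("Individuals.txt"));              // false unless the file is in the CWD
var_dump(file_exists("C:\\SWSRE\\Individuals.txt"));   // checks the actual location

// fopen() only searches the include path when the third argument
// (use_include_path) is true:
$handle = fopen("Individuals.txt", "r", true);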

Long running php script failing as a cron

I have written a PHP script that uploads images from a directory to a remote server using ftp_put. I have scheduled it as a cron-style job using Windows Task Scheduler and wget. Initially it works great, but after a while (I do not know exactly when) the process freezes: Task Scheduler says the job is still running, but no photos are being uploaded any more.
Initially I thought the problem was due to max_execution_time, but I have set that to 24 hours using set_time_limit(3600*24); and I have set max_input_time to 600 seconds (10 minutes).
Why doesn't it complete the task?
Here is the code:
if ($conn) {
    if (is_dir($imagesPath)) {
        if ($files = opendir($imagesPath)) {
            while (($file = readdir($files)) !== false) {
                if ($file != "." && $file != ".." && preg_match("/\.jpg|\.JPG|\.gif|\.GIF|\.png|\.PNG|\.bmp|\.BMP/", $file) && date("Ymd", filemtime($imagesPath.'/'.$file)) >= date("Ymd", strtotime(date("Y-m-d")." -".$days." day"))) {
                    if (ftp_put($conn, $remotePath.$file, $imagesPath.'/'.$file, FTP_BINARY)) {
                        //echo $file;
                        $counter++;
                    } else {
                        echo '<br>'.$imagesPath.'/'.$file;
                    }
                }
            }
            closedir($files);
            echo $counter.' Files Uploaded on '.date("Y-m-d");
        } else {
            echo 'Unable to read '.$imagesPath;
        }
    } else {
        echo $imagesPath.' Does not exist';
    }
    ftp_close($conn);
} else {
    echo "Failed to connect";
}
/* End */
exit;
Added:
/* Settings */
// Set Max Execution time
set_time_limit(3600*24);
at the top of the script.
Thanks.
I am working on something similar and I found the following to help:
1) Make sure every step is conditional, so that if something fails to load (like an image, the FTP connection, etc.) the procedure keeps running (iterating).
2) Set a small sleep() at the end of the file (or between difficult steps). I am using it for images as well, and it helps when the connection to the image or the image-write time lags.
3) Set the scheduler to execute the script often (mine runs twice a day), but it could be set to hourly, as long as you check the box so that a new instance is not started while the script is already running.
4) Depending on the setup of your server, check for other tasks which may interrupt the script's run time. It is not always a PHP issue. As long as you have properly scheduled the task to re-execute the script, you should be fine.
5) For easy debugging, instead of the echo statements (which most likely show in your cmd window), use simple file logging (like $message = fopen($myLogFile, 'w');) in addition to your "Does not exist" or "Failed to connect" statements, and include more details, so that when it fails you can go to your log file and see when and why it failed; a rough sketch follows this list.
6) You may try using an infinite loop like while (true) { ... your code ... } instead of set_time_limit().
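For point 5, here is a rough sketch of what such a log helper could look like (the upload.log name and the logLine() helper are made-up examples, not part of the original script):
<?php
// Append a timestamped line to the log file.
// LOCK_EX guards against interleaved writes if two runs ever overlap.
function logLine($logFile, $message) {
    $line = date('Y-m-d H:i:s') . ' ' . $message . PHP_EOL;
    file_put_contents($logFile, $line, FILE_APPEND | LOCK_EX);
}

// Usage inside the upload loop, next to the existing echo statements:
// logLine('upload.log', "Uploaded {$file}");
// logLine('upload.log', "FTP put failed for {$imagesPath}/{$file}");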
I hope this helps :)

php script will only run one at a time

I have a script and I want to make sure only one instance of it runs at a time, so I am doing:
$fp = fopen("lock.txt", "w");
if (flock($fp, LOCK_EX|LOCK_NB)) { // do an exclusive lock
//Processing code
flock($fp, LOCK_UN);
}else{
echo "Could not lock file!";
}
fclose($fp);
The problem is that when I start the second script, it just sits there waiting. If I then stop the first script, the second script then prints "Could not lock file!". Why doesn't the second script just stop immediately and report that message?
If the second script actually ran, it would see that the file is locked and exit. When I watch my processes I can see the second script sitting there... it's just waiting.
Thanks.
EDIT: I have just tried a quick and dirty database lock, i.e. setting a "running" field to -1 and checking for it when the script starts, but that doesn't work. I have also tried using sockets as described here. It seems like the second script won't even run... Should I even be worried?
$fp = fopen("lock.txt", "w");
$block = 0;
if (flock($fp, LOCK_EX|LOCK_NB, $block)) { // do an exclusive lock
sleep(5);
flock($fp, LOCK_UN);
echo "Could lock file!";
}else{
echo "Could not lock file!";
}
fclose($fp);
This works for me. Try adding the third parameter.
Edit:
You may have some session locking problems:
When you call session_start(), it will block (put in a waiting state) any further requests to your script until the original request that started the session has finished. To test this, try either accessing the script with two different browsers, or avoid calling session_start() in this script.
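If session locking turns out to be the problem, a common workaround (sketched here with a made-up $_SESSION key, not taken from the asker's script) is to call session_write_close() as soon as you no longer need to write session data, so the session lock is released before the long-running part starts:
<?php
session_start();

// Read whatever session data you need up front ('user_id' is just an example key).
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;

// Release the session file lock so parallel requests from the same
// browser are no longer forced to wait for this one to finish.
session_write_close();

// The long-running work (e.g. the flock-protected processing) goes here.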
