Hi guys, I'm following up on my earlier question, "Acquire_lock() not working. Bot still sending requests quickly. PHP + AJAX", which I haven't been able to get an answer to.
I've simplified everything down to three files:
abc.txt
file1.php
file2.php
All are in the same directory, and the contents of both PHP files are identical:
<?php
$x = fopen("/var/www/abc.txt", "w");
if (flock($x, LOCK_EX|LOCK_NB)) {
    print "No problems, I got the lock, now I'm going to sit on it.";
    while (true)
        sleep(5);
} else {
    print "Didn't quite get the lock. Quitting now. Good night.";
}
fclose($x);
?>
Yet when I load either of them, I get the second print message: "Didn't quite get the lock. Quitting now. Good night.".
Does anyone have any idea, either for this question or the former, as to what's going on? I'm literally at my wits' end with this.
Thank you as always.
If you want the PHP script to wait rather than exit, you want a blocking lock.
Per the flock documentation, flock() blocks by default when you omit LOCK_NB; the optional third parameter is only an output argument ($would_block) that reports whether the call would have blocked. So removing LOCK_NB is what makes the call wait.
<?php
$x = fopen("/var/www/abc.txt", "w");
if (flock($x, LOCK_EX)) { // blocking lock: waits here until the lock is free
    print "No problems, I got the lock, now I'm going to sit on it.";
    // hold the lock for 5 seconds
    sleep(5);
    // Release the lock now so that the next script can run
    flock($x, LOCK_UN);
} else {
    print "Didn't quite get the lock. Quitting now. Good night.";
}
fclose($x);
?>
Here is what's going on in your script:
1) Your code goes into an infinite loop (while(true)) and never reaches the fclose() statement at the end.
2) I tested both files on my local server: file1.php kept looping while file2.php gave the "file is locked" message immediately (which means the first file locked correctly). I tried refreshing both files afterwards, and they both failed the lock test.
If you are using PHP >= 5.3.2, then you have to unlock the file manually:
The automatic unlocking when the file's resource handle is closed was
removed. Unlocking now always has to be done manually.
Source
If you are using an older version of PHP, the lock is released automatically once the script finishes executing; but since you are going into an infinite loop, the script never finishes and therefore the file is never unlocked.
Also, even if you stop the script from running in your browser window, the php-cgi.exe process associated with that script keeps running and has to be terminated manually from the task manager (I verified this myself).
Solution:
1)
To fix this issue and make the script wait for the lock, you first need to make sure that the script actually finishes gracefully by removing the infinite loop.
Here is a script that will lock the file for 30 seconds (loop removed):
<?php
$x = fopen("/var/www/abc.txt", "w");
if (flock($x, LOCK_EX|LOCK_NB)) {
    print "No problems, I got the lock, now I'm going to sit on it.";
    sleep(30);
    fclose($x); // it is good practice to always close, even on PHP < 5.3.2
} else {
    print "Didn't quite get the lock. Quitting now. Good night.";
}
?>
2) If you are on a Linux machine, you can use the LOCK_NB flag to check whether the file is locked without blocking. The usage should look like this:
while (!flock($f, LOCK_EX | LOCK_NB)) {
    sleep(1);
}
This makes the script check for the lock once per second and wait for the other script to finish.
3) Use flock($fp, LOCK_UN) to explicitly release the lock when you are done, rather than relying on fclose().
In summary, this is how your code should look:
<?php
$x = fopen("/var/www/abc.txt", "w");

// Poll once per second until the exclusive lock is acquired.
while (!flock($x, LOCK_EX | LOCK_NB)) {
    sleep(1);
}

print "No problems, I got the lock, now I'm going to sit on it.";
sleep(30);

fflush($x);         // flush output before releasing the lock
flock($x, LOCK_UN); // release the lock
fclose($x);
?>
I am having trouble figuring out why flock() is not behaving properly in the following scenario.
The following code is placed into two different PHP scripts one "test1.php" and the other "test2.php". The point of the code is to create a file which no other process (which properly uses the flock() code) should be able to write to. There will be many different PHP scripts which try to obtain an exclusive lock on this file, but only one should have access at any given time and all the rest should fail gracefully when they fail to get the lock.
The way I am testing this is very simple. Both "test1.php" and "test2.php" are placed in a web-accessible directory on my server. Then, from a browser such as Firefox, the first script is executed, and immediately after, the second script is executed from a different browser tab. This seems to work when the code is run from two different PHP scripts such as "test1.php" and "test2.php", but when the code is run twice from the same "test1.php" or "test2.php" script, the second run does not immediately return with a failure.
The only reason I can think of for this is that flock() treats all PHP processes running the same file name as the same process. If this is the case, then when "test1.php" or "test2.php" is run twice (from two different browser tabs), PHP sees them as the same process and thus does not fail the lock. But to me it does not make sense for PHP to be designed like that, so I am here to see if anyone else can solve this problem for me.
Thanks in advance!
<?php
$file = 'command.bat';
echo "Starting script...";
flush();
$handle = fopen($file, 'w+');
echo "Let's try locking...";
flush();
if (is_resource($handle)) {
    echo "good resource...";
    flush();
    if (flock($handle, LOCK_EX | LOCK_NB) === TRUE) {
        echo "Got lock!";
        flush();
        sleep(100);
        flock($handle, LOCK_UN);
    } else {
        echo "Failed to get lock!";
        flush();
    }
} else {
    echo "bad resource...";
    flush();
}
exit;
Any help with the above is greatly appreciated!
Thank you,
Daniel
I had the same situation and found the problem to be with the browser.
When making multiple requests to the same URL, even if doing so across tabs or windows, the browser is "smart" enough to wait until the first request completes, and then the browser attempts to run the subsequent request(s).
So, while it may look like the lock is not working, what is actually happening is that the browser (both Chrome and Firefox) is waiting for the first request to complete before running the second request.
You can verify that this is the case by opening the same URL once in Chrome and once in Firefox. By doing so, as I did, you would probably see that the lock is indeed working as expected.
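If you want to take the browser out of the equation entirely, you can also fire the two requests in parallel from the command line. Here is a rough sketch using PHP's curl_multi functions; the URL is just an example and assumes test1.php is reachable at that address:

<?php
// Fire two parallel requests at the same script so browser request queuing
// cannot serialize them. One of the responses should report the failed lock.
$url = 'http://localhost/test1.php'; // illustrative URL
$mh  = curl_multi_init();
$handles = [];

for ($i = 0; $i < 2; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); // wait for activity instead of busy-looping
    }
} while ($running && $status == CURLM_OK);

foreach ($handles as $ch) {
    echo curl_multi_getcontent($ch), "\n";
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);

With both requests genuinely concurrent, one of them should come back with the "Failed to get lock!" message while the other holds the lock.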
flock() has a number of documented restrictions: it cannot be relied upon with multithreaded server APIs, it does not work reliably on NFS volumes, and so on.
The accepted solution is apparently to attempt to create a link instead.
Lots of discussion on this topic: http://www.php.net/manual/en/function.flock.php
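For reference, here is a rough sketch of that link()-based idea. It leans on the fact that link() fails atomically if the target already exists; all file names here are purely illustrative:

<?php
// Sketch only: an atomic "lock" built on link(), as suggested in the flock() docs discussion.
$lockFile = '/tmp/myscript.lock';           // illustrative lock path
$tmpFile  = '/tmp/myscript.' . getmypid();  // unique per-process file

touch($tmpFile);
if (@link($tmpFile, $lockFile)) {
    // We created the link, so we hold the lock.
    // ... critical section goes here ...
    unlink($lockFile); // release the lock
} else {
    echo "Another instance already holds the lock.\n";
}
unlink($tmpFile);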
To prevent multiple instances of a PHP-based daemon I wrote from ever running simultaneously, I wrote a simple function to acquire a lock with flock when the process starts, and called it at the start of my daemon. A simplified version of the code looks like this:
#!/usr/bin/php
<?php
function acquire_lock () {
    $file_handle = fopen('mylock.lock', 'w');
    $got_lock_successfully = flock($file_handle, LOCK_EX);
    if (!$got_lock_successfully) {
        throw new Exception("Unexpected failure to acquire lock.");
    }
}

acquire_lock(); // Block until all other instances of the script are done...

// ... then do some stuff, for example:
for ($i = 1; $i <= 10; $i++) {
    echo "Sleeping... $i\n";
    sleep(1);
}
?>
When I execute the script above multiple times in parallel, the behaviour I expect to see - since the lock is never explicitly released throughout the duration of the script - is that the second instance of the script will wait until the first has completed before it proceeds past the acquire_lock() call. In other words, if I run this particular script in two parallel terminals, I expect to see one terminal count to 10 while the other waits, and then see the other count to 10.
This is not what happens. Instead, I see both scripts happily executing in parallel - the second script does not block and wait for the lock to be available.
As you can see, I'm checking the return value from flock and it is true, indicating that the (exclusive) lock has been acquired successfully. Yet this evidently isn't preventing another process from acquiring another 'exclusive' lock on the same file.
Why, and how can I fix this?
Simply store the file pointer resource returned from fopen in a global variable. In the example given in the question, $file_handle is automatically destroyed upon going out of scope when acquire_lock() returns, and this releases the lock taken out by flock.
For example, here is a modified version of the script from the question which exhibits the desired behaviour (note that the only change is storing the file handle returned by fopen in a global):
#!/usr/bin/php
<?php
function acquire_lock () {
    global $lock_handle;
    $lock_handle = fopen('mylock.lock', 'w');
    $got_lock_successfully = flock($lock_handle, LOCK_EX);
    if (!$got_lock_successfully) {
        throw new Exception("Unexpected failure to acquire lock.");
    }
}

acquire_lock(); // Block until all other instances of the script are done...

// ... then do some stuff, for example:
for ($i = 1; $i <= 10; $i++) {
    echo "Sleeping... $i\n";
    sleep(1);
}
?>
Note that this seems to be a bug in PHP. The changelog from the flock documentation states that in version 5.3.2:
The automatic unlocking when the file's resource handle is closed was removed. Unlocking now always has to be done manually.
but at least for PHP 5.5, this is false; flock locks are released both by explicit calls to fclose and by the resource handle going out of scope.
I reported this as a bug in November 2014 and may update this question and answer pair if it is ever resolved. In case I get eaten by piranhas before then, you can check the bug report yourself to see if this behaviour has been fixed: https://bugs.php.net/bug.php?id=68509
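If you'd rather not use a global, an equivalent sketch is to return the handle and keep it alive in the caller; the lock then lives exactly as long as the variable holding the handle:

<?php
// Alternative sketch: keep the handle alive by returning it instead of using a global.
function acquire_lock() {
    $handle = fopen('mylock.lock', 'w');
    if (!flock($handle, LOCK_EX)) {
        throw new Exception("Unexpected failure to acquire lock.");
    }
    return $handle; // caller must keep this in scope for as long as the lock is needed
}

$lock = acquire_lock(); // the lock is held as long as $lock stays in scope

for ($i = 1; $i <= 10; $i++) {
    echo "Sleeping... $i\n";
    sleep(1);
}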
I'm trying to test a race condition in PHP. I'd like to have N PHP processes get ready to do something, then block. When I say "go", they should all execute the action at the same time. Hopefully this will demonstrate the race.
In Java, I would use Object.wait() and Object.notifyAll(). What can I use in PHP?
(Either Windows or linux native answers are acceptable)
1) Create a file "wait.txt"
2) Start N processes, each with the code shown below
3) Delete the "wait.txt" file.
...
<?php
while (file_exists('wait.txt')) {}
runRaceTest();
With PHP, a file-based lock is the usual approach: you create a RUN_LOCK (or similarly named) file and check for it with file_exists("RUN_LOCK"). The same technique is also used to guard against potential endless loops in recursive scripts.
Here I decided to require the file's existence for execution. The other approach would be for the existence of the file to trigger the blocking algorithm instead; which one you pick depends on your situation. The safer state should always be the one that is easier to reach.
Wait code:
/* prepare the program */
/* ... */

/* Block until it's time to go */
define("LOCK_FILE", "RUN_UNLOCK"); // I'd define this in some config.php
while (!file_exists(LOCK_FILE)) {
    usleep(1); // No sleep at all would eat lots of CPU
}

/* Execute the main code */
/* ... */

/* Delete the "run" file, so that no further executions are allowed */
usleep(1); // Just to be sure - we want the other processes to reach their execution phase too
if (file_exists(LOCK_FILE))
    unlink(LOCK_FILE);
I guess it would be nice to have a blocking function for that, like this one:
function wait_for_file($filename, $timeout = -1) {
    if ($timeout >= 0) {
        $start = microtime(true) * 1000; // Remember the start time
    }
    while (!file_exists($filename)) {       // Check the file existence
        if ($timeout >= 0) {                // Only calculate when a timeout is set
            if ((microtime(true) * 1000 - $start) > $timeout) // Compare elapsed time with the timeout
                return false;               // Return failure
        }
        usleep(1); // Save some CPU
    }
    return true; // Return success
}
It implements a timeout. You may not need one, but maybe someone else will.
Usage:
header("Content-Type: text/plain; charset=utf-8");
ob_implicit_flush(true);while (#ob_end_clean()); //Flush buffers so the output will be live stream
define("LOCK_FILE","RUN_FOREST_RUN"); //Define lock file name again
echo "Starting the blocking algorithm. Waiting for file: ".LOCK_FILE."\n";
if(wait_for_file(LOCK_FILE, 10000)) { //Wait for 10 secconds
echo "File found and deleted!\n";
if(file_exists(LOCK_FILE)) //May have been deleted by other proceses
unlink(LOCK_FILE);
}
else {
echo "Wait failed!\n";
}
This will output:
Starting the blocking algorithm. Waiting for file: RUN_FOREST_RUN
Wait failed!
~or~
Starting the blocking algorithm. Waiting for file: RUN_FOREST_RUN
File found and deleted!
PHP doesn't have multithreading, and it's not planned to be implemented either.
You can try hacks with sockets, though, or use 0MQ to communicate between multiple processes.
See Why does PHP not support multithreading?
Php multithread
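To illustrate the socket idea with plain PHP stream sockets (not 0MQ), a small coordinator process can hold N workers at a barrier and release them all at once. The port and worker count below are arbitrary assumptions for the sketch:

<?php
// coordinator.php (sketch; run this first):
// accepts N worker connections, then sends "go" to all of them at the same moment.
$server  = stream_socket_server('tcp://127.0.0.1:9999', $errno, $errstr); // port is arbitrary
$workers = [];
while (count($workers) < 3) {                 // assume 3 racing processes
    $workers[] = stream_socket_accept($server, 60);
}
foreach ($workers as $w) {
    fwrite($w, "go\n");                       // releases every worker at once
}

Each racing process then replaces the file-polling loop with something like:

<?php
// worker.php (sketch): block until the coordinator says "go", then race.
$conn = stream_socket_client('tcp://127.0.0.1:9999', $errno, $errstr, 60);
fgets($conn);      // blocks here until the coordinator writes "go"
runRaceTest();     // your race-test routine, as in the file-based answer above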
I have the following code:
ignore_user_abort(true);
while(!connection_aborted()) {
// do stuff
}
and according to the PHP documentation, this should run until the connection is closed. But for some reason it doesn't; instead, it keeps running until the script times out. I've looked around online, and some people recommended adding
echo chr(0);
flush();
into the loop, but that doesn't seem to do anything either. Even worse, if I just leave it as
while(true) {
// do stuff
}
PHP still continues to run the script after the client disconnects. Does anyone know how to get this working? Is there a php.ini setting that I'm missing somewhere?
If it matters, I'm running PHP 5.3.5. Thanks in advance!
I'm a bit late to this party, but I just had this problem and got to the bottom of it. There are multiple things going on here, and a few of them are mentioned in this related question:
PHP doesn't detect connection abort at all
The gist of it: In order for connection_aborted() to work, PHP needs to attempt to send data to the client.
Output Buffers
As noted, PHP will not detect the connection is dead until it tries to actually send data to the client. This is not as simple as doing an echo, because echo sends the data to any output buffers that may exist, and PHP will not attempt a real send until those buffers are full enough. I will not go into the details of output buffering, but it's worth mentioning that there can be multiple nested buffers.
At any rate, if you'd like to test connection_aborted(), you must first end all buffers:
while (ob_get_level()){ ob_end_clean(); }
Now, anytime you want to test if the connection is aborted, you must attempt to send data to the client:
echo "Something.";
flush();
// returns expected value...
// ... but only if ignore_user_abort is false!
connection_aborted();
Ignore User Abort
This is a very important setting that determines what PHP will do when the above flush() is called, and the user has aborted the connection (eg: hit the STOP button in their browser).
If true, the script will run along merrily. flush() will do essentially nothing.
If false, as is the default setting, execution will immediately stop in the following manner:
If PHP is not already shutting down, it will begin its shutdown process.
If PHP is already shutting down, it will exit whatever shutdown function it is in and move on to the next one.
Destructors
If you'd like to do stuff when the user aborts the connection, you need to do the following:
Detect that the user aborted the connection. This means you have to attempt to flush to the user periodically, as described further above: clear all output buffers, echo, flush.
a. If ignore_user_abort is true, you need to manually test connection_aborted() after each flush.
b. If ignore_user_abort is false, a call to flush() will cause the shutdown process to begin. You must then be especially careful not to call flush() from within your shutdown functions, or else PHP will immediately cease execution of that function and move on to the next shutdown function.
Putting it all together
Putting this all together, let's make an example that detects the user hitting "STOP" and does stuff.
class DestructTester {
    private $fileHandle;

    public function __construct($fileHandle) {
        // fileHandle that we log to
        $this->fileHandle = $fileHandle;
        // call $this->onShutdown() when PHP is shutting down.
        register_shutdown_function(array($this, "onShutdown"));
    }

    public function onShutdown() {
        $isAborted = connection_aborted();
        fwrite($this->fileHandle, "PHP is shutting down. isAborted: $isAborted\n");
        // NOTE
        // If connection_aborted() AND ignore_user_abort = false, PHP will immediately terminate
        // this function when it encounters flush. This means your shutdown functions can end
        // prematurely if: connection is aborted, ignore_user_abort=false, and you try to flush().
        echo "Test.";
        flush();
        fwrite($this->fileHandle, "This was written after a flush.\n");
    }

    public function __destruct() {
        $isAborted = connection_aborted();
        fwrite($this->fileHandle, "DestructTester is getting destructed. isAborted: $isAborted\n");
    }
}
// Create a DestructTester
// It'll log to our file on PHP shutdown and __destruct().
$fileHandle = fopen("/path/to/destruct-tester-log.txt", "a+");
fwrite($fileHandle, "---BEGINNING TEST---\n");
$dt = new DestructTester($fileHandle);
// Set this value to see how the logs end up changing
// ignore_user_abort(true);
// Remove any buffers so that PHP attempts to send data on flush();
while (ob_get_level()) {
    ob_get_contents();
    ob_end_clean();
}
// Let's loop for 10 seconds
// If ignore_user_abort=true:
// This will continue to run regardless.
// If ignore_user_abort=false:
// This will immediate terminate when the user disconnects and PHP tries to flush();
// PHP will begin its shutdown process.
// In either case, connection_aborted() should subsequently return "true" after the user
// has disconnected (hit STOP button in browser), AND after PHP has attempted to flush().
$numSleeps = 0;
while ($numSleeps++ < 10) {
    $connAbortedStr = connection_aborted() ? "YES" : "NO";
    $str = "Slept $numSleeps times. Connection aborted: $connAbortedStr";
    echo "$str<br>";
    // If ignore_user_abort = false, the script will terminate right here
    // and the shutdown functions will begin.
    // Otherwise, the script will continue for all 10 loops and then shut down.
    flush();
    $connAbortedStr = connection_aborted() ? "YES" : "NO";
    fwrite($fileHandle, "flush()'d $numSleeps times. Connection aborted is now: $connAbortedStr\n");
    sleep(1);
}
echo "DONE SLEEPING!<br>";
die;
The comments explain everything. You can fiddle with ignore_user_abort and look at the logs to see how this changes things.
I hope this helps anyone having trouble with connection_aborted, register_shutdown_function, and __destruct.
Try using ob_flush() just before flush(); also, some browsers simply won't update the page until some data has been added.
Try doing something like
<?php
// preceding scripts
ignore_user_abort(true);
$i = 0;
while (!connection_aborted()) {
    $i++;
    echo $i;
    echo str_pad('', 4096); // yes, I know this increases the overhead, but that can be reduced afterwards
    ob_flush();
    flush();
    usleep(30000); // on my WAMP setup this runs perfectly
}
// Ending scripts
?>
Google Chrome has issues with this code, actually; it doesn't support streaming very nicely.
Try:
ignore_user_abort(true);

echo "Testing connection handling";
while (1) {
    if (connection_status() != CONNECTION_NORMAL)
        break;
    sleep(1);
    echo "test";
    flush();
}
Buffering seems to cause issues depending on your server settings.
I tried disabling the buffer with ob_end_clean, but that wasn't enough; I had to send some data to cause the buffer to fully flush out. Here is the final code that ended up working for me.
set_time_limit(0);        // run the delay as long as the user stays connected
ignore_user_abort(false);
ob_end_clean();

$delay = 30; // example value; in the full script $delay is assumed to be set earlier
echo "\n";
while ($delay-- > 0 && !connection_aborted()) {
    echo str_repeat("\r", 1000) . "<!--sleep-->\n";
    flush();
    sleep(1);
}
ob_start();
I know this is a bit generic, but I'm sure you'll understand my explanation. Here is the situation:
The following code is executed every 10 minutes. The variable "var_x" is always read from/written to an external text file whenever it's referred to.
if (var_x != 1)
{
    var_x = 1;
    //
    // here is where the main body of the script is.
    // it can take hours to completely execute.
    //
    var_x = 0;
}
else
{
    // exit script as it's already running.
}
The problem is: if I simulate a hardware failure (do a hard reset when the script is executing) then the main script logic will never execute again because "var_x" will always be "1". (I already have logic to work out the restore point).
Thanks.
You should lock and unlock files with flock:
$fp = fopen($your_file, 'c'); // 'c' creates the file if needed without truncating it
if (flock($fp, LOCK_EX | LOCK_NB))
{
    //
    // here is where the main body of the script is.
    // it can take hours to completely execute.
    //
    flock($fp, LOCK_UN);
}
else
{
    // exit script as it's already running.
}
Edit:
As flock seems not to work correctly on Windows machines, you have to resort to other solutions. Off the top of my head, here is an idea for a possible solution:
Instead of writing 1 to var_x, write the process ID retrieved via getmypid(). When a new instance of the script reads the file, it should then look for a running process with this ID and check whether that process is a PHP script. Of course, this can still go wrong, as another PHP script might obtain the same PID after a hardware failure, so the solution is far from optimal.
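A rough sketch of that PID-file idea, assuming a Linux box where /proc/<pid> can be inspected (paths and names are illustrative):

<?php
$pidFile = '/var/run/myjob.pid'; // illustrative path

if (file_exists($pidFile)) {
    $oldPid = (int) file_get_contents($pidFile);
    // Only bail out if that PID is still alive and looks like a PHP process.
    if ($oldPid > 0 && file_exists("/proc/$oldPid")) {
        $cmdline = @file_get_contents("/proc/$oldPid/cmdline");
        if ($cmdline !== false && strpos($cmdline, 'php') !== false) {
            exit("Already running as PID $oldPid\n");
        }
    }
    // Otherwise the PID file is stale (e.g. left behind by a hard reset): take over.
}

file_put_contents($pidFile, getmypid());
//
// here is where the main body of the script is.
//
unlink($pidFile);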
Don't you think this would be better solved using file locks? (When the reset occurs, file locks are released as well.)
http://php.net/flock
It sounds like you're doing some kind of manual semaphore for process management.
Rather than writing to a file, perhaps you should use an environment variable instead. That way, in the event of failure, your script will not have a closed semaphore when you restore.