I have a process that writes a file using file_put_contents():
file_put_contents($file, $data, LOCK_EX);
I have added the LOCK_EX parameter to prevent concurrent processes from writing to the same file, and prevent trying to read it when it's still being written to.
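For reference, LOCK_EX only coordinates with readers that also use flock(); a minimal sketch of a cooperating reader (using the same $file, taking a shared lock with LOCK_SH) would be:
$fp = fopen($file, 'r');
if ($fp !== false) {
    if (flock($fp, LOCK_SH)) {  // shared lock: waits while a writer holds LOCK_EX
        $contents = stream_get_contents($fp);
        flock($fp, LOCK_UN);
    }
    fclose($fp);
}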
I'm having difficulties testing this properly due to the concurrent nature, and I'm not sure how to approach this. I've got this so far:
if (file_exists($file)) {
$fp = fopen($file, 'r+');
if (!flock($fp, LOCK_EX|LOCK_NB, $wouldblock)) {
if ($wouldblock) {
// how can I wait until the file is unlocked?
} else {
// what other reasons could there be for not being able to lock?
}
}
// does calling fclose automatically release all locks even if a flock was not obtained above?
fclose($fp);
}
Questions being:
Is there a way to wait until the file is not locked anymore, while keeping the option to give this a time limit?
Does fclose() automatically unlock all locks when there would be another process that had locked the file?
I wrote a small test that uses sleep() so that I could simulate concurrent read/write processes with a simple AJAX call. It seems this answers both questions:
when the file is locked, a sleep that approximates the estimated write duration, followed by another lock check, allows for waiting. This could even be put in a while loop with an interval.
fclose() does indeed not remove the lock held by the process that's already running, as confirmed in some of the answers.
According to the docs, PHP 5.5 and lower on Windows does not support the $wouldblock parameter.
I was able to test this on Windows + PHP 5.3 and concluded that the file_is_locked() function from my test still worked in this scenario:
flock() would still return false, just without setting the $wouldblock parameter, so it would still be caught in my else check.
if (isset($_POST['action'])) {
$file = 'file.txt';
$fp = fopen($file, 'r+');
if ($wouldblock = file_is_locked($fp)) {
// wait and then try again;
sleep(5);
$wouldblock = file_is_locked($fp);
}
switch ($_POST['action']) {
case 'write':
if ($wouldblock) {
echo 'already writing';
} else {
flock($fp, LOCK_EX);
fwrite($fp, 'yadayada');
sleep(5);
echo 'done writing';
}
break;
case 'read':
if ($wouldblock) {
echo 'cant read, already writing';
} else {
echo fread($fp, filesize($file));
}
break;
}
fclose($fp);
die();
}
function file_is_locked( $fp ) {
if (!flock($fp, LOCK_EX|LOCK_NB, $wouldblock)) {
if ($wouldblock) {
return 'locked'; // file is locked
} else {
return 'no idea'; // can't lock for whatever reason (for example being locked in Windows + PHP5.3)
}
} else {
return false;
}
}
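To answer question 1 with a time limit, the file_is_locked() check above can be wrapped in a bounded retry loop. A rough sketch (the one-second interval and ten-second default are arbitrary; note that when file_is_locked() returns false it has already acquired LOCK_EX on the handle):
function wait_for_unlock($fp, $timeout = 10, $interval = 1) {
    $waited = 0;
    while (file_is_locked($fp)) {
        if ($waited >= $timeout) {
            return false; // still locked (or unlockable) after $timeout seconds
        }
        sleep($interval);
        $waited += $interval;
    }
    return true; // flock(LOCK_EX|LOCK_NB) succeeded inside file_is_locked()
}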
I often use a small class that is secure and fast; basically, you write only once you have obtained an exclusive lock on the file, otherwise you wait until it is released...
lock_file.php
<?php
/*
Reference Material
http://en.wikipedia.org/wiki/ACID
*/
class Exclusive_Lock {
/* Private variables */
public $filename; // The file to be locked
public $timeout = 30; // The timeout value of the lock
public $permission = 0755; // The permission value of the locked file
/* Constructor */
public function __construct($filename, $timeout = 1, $permission = null, $override = false) {
// Append '.lck' extension to filename for the locking mechanism
$this->filename = $filename . '.lck';
// Timeout should be some factor greater than the maximum script execution time
$temp = @get_cfg_var('max_execution_time');
if ($temp === false || $override === true) {
if ($timeout >= 1) $this->timeout = $timeout;
set_time_limit($this->timeout);
} else {
if ($timeout < 1) $this->timeout = $temp;
else $this->timeout = $timeout * $temp;
}
// Should some other permission value be necessary
if (isset($permission)) $this->permission = $permission;
}
/* Methods */
public function acquireLock() {
// Create the locked file, the 'x' parameter is used to detect a preexisting lock
$fp = @fopen($this->filename, 'x');
// If an error occurs fail lock
if ($fp === false) return false;
// If the permission set is unsuccessful fail lock
if (!@chmod($this->filename, $this->permission)) return false;
// If unable to write the timeout value fail lock
if (false === @fwrite($fp, time() + intval($this->timeout))) return false;
// If lock is successfully closed validate lock
return fclose($fp);
}
public function releaseLock() {
// Delete the file with the extension '.lck'
return @unlink($this->filename);
}
public function timeLock() {
// Retrieve the contents of the lock file
$timeout = @file_get_contents($this->filename);
// If no contents retrieved return error
if ($timeout === false) return false;
// Return the timeout value
return intval($timeout);
}
}
?>
Simple use as follows:
include("lock_file.php");
$file = new Exclusive_Lock("my_file.dat", 2);
if ($file->acquireLock()) {
$data = fopen("my_file.dat", "w+");
$read = "READ: YES";
fwrite($data, $read);
fclose($data);
$file->releaseLock();
chmod("my_file.dat", 0755);
unset($data);
unset($read);
}
If you want to add a more complex level you can use another trick: a while (1) infinite loop that breaks only when the exclusive lock is acquired. This is not suggested, since it can block your server for an undefined time...
include("lock_file.php");
$file = new Exclusive_Lock("my_file.dat", 2);
while (1) {
if ($file->acquireLock()) {
$data = fopen("my_file.dat", "w+");
$read = "READ: YES";
fwrite($data, $read);
fclose($data);
$file->releaseLock();
chmod("my_file.dat", 0755);
unset($data);
unset($read);
break;
}
}
file_put_contents() is very fast and writes directly into the file, but as you say it has a limit: a race condition exists and may happen even if you use LOCK_EX. I think that a PHP class is more flexible and usable...
See this thread, which treats a similar question: php flock behaviour when file is locked by one process
The first question is answered here: How to detect the finish with file_put_contents() in php? Because PHP is single-threaded, the only solution is to use an extension of core PHP, pthreads; one good, simple article about it is https://www.mullie.eu/parallel-processing-multi-tasking-php/
The second question is answered here: Will flock'ed file be unlocked when the process die unexpectedly?
fclose() will only unlock a valid handle that was opened using fopen() or fsockopen(), so if the handle is still valid then yes, it will close the file and release the lock.
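In other words, the lock is tied to the handle that acquired it. A minimal sketch to convince yourself (the file name is arbitrary):
$a = fopen('test.lock', 'c');
flock($a, LOCK_EX);                      // handle A holds the exclusive lock
fclose($a);                              // closing A releases A's lock without an explicit LOCK_UN

$b = fopen('test.lock', 'c');
var_dump(flock($b, LOCK_EX | LOCK_NB));  // bool(true): B can now lock it
fclose($b);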
Here is a fix for @Alessandro's answer so that it works correctly and does not lock the file forever.
lock_file.php
<?php
/*
Reference Material
http://en.wikipedia.org/wiki/ACID
*/
class Exclusive_Lock {
/* Private variables */
public $filename; // The file to be locked
public $timeout = 30; // The timeout value of the lock
public $permission = 0755; // The permission value of the locked file
/* Constructor */
public function __construct($filename, $timeout = 1, $permission = null, $override = false) {
// Append '.lck' extension to filename for the locking mechanism
$this->filename = $filename . '.lck';
// Timeout should be some factor greater than the maximum script execution time
$temp = @get_cfg_var('max_execution_time');
if ($temp === false || $override === true) {
if ($timeout >= 1) $this->timeout = $timeout;
set_time_limit($this->timeout);
} else {
if ($timeout < 1) $this->timeout = $temp;
else $this->timeout = $timeout;
}
// Should some other permission value be necessary
if (isset($permission)) $this->permission = $permission;
if($this->timeLock()){
$this->releaseLock();
}
}
/* Methods */
public function acquireLock() {
// Create the locked file, the 'x' parameter is used to detect a preexisting lock
$fp = @fopen($this->filename, 'x');
// If an error occurs fail lock
if ($fp === false) return false;
// If the permission set is unsuccessful fail lock
if (!@chmod($this->filename, $this->permission)) return false;
// If unable to write the timeout value fail lock
if (false === @fwrite($fp, time() + intval($this->timeout))) return false;
// If lock is successfully closed validate lock
return fclose($fp);
}
public function releaseLock() {
// Delete the file with the extension '.lck'
return @unlink($this->filename);
}
private function timeLock() {
// Retrieve the contents of the lock file
$timeout = @file_get_contents($this->filename);
// If no contents retrieved return true
if ($timeout === false) return true;
// Return the timeout value
return (intval($timeout) < time());
}
}
Use as follows:
include("lock_file.php");
$file = new Exclusive_Lock("my_file.dat", 2);
if ($file->acquireLock()) {
$data = fopen("my_file.dat", "w+");
$read = "READ: YES";
fwrite($data, $read);
fclose($data);
$file->releaseLock();
chmod("my_file.dat", 0755);
unset($data);
unset($read);
}
Hope that saves someone else some time.
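For completeness, here is a rough sketch (the bounded wait loop and the $maxWait value are my own additions, not part of the original answer) showing how to retry with the fixed class instead of looping forever:
include("lock_file.php");
$file = new Exclusive_Lock("my_file.dat", 2);
$maxWait = 10; // seconds; pick something sensible for your workload
$start = time();
while (!$file->acquireLock()) {
    if (time() - $start > $maxWait) {
        die('Could not acquire lock within ' . $maxWait . ' seconds');
    }
    usleep(200000); // wait 200 ms before retrying
}
$data = fopen("my_file.dat", "w+");
fwrite($data, "READ: YES");
fclose($data);
$file->releaseLock();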
Related
I have one file: configuration.txt.
This file gets read by PHP, then written by the same PHP script, while a C++ program reads the content of the same file at a regular interval.
PHP:
$closeFlag = false;
$arrayInputs = new SplFixedArray(3);
$arrayInputs[0] = "URL not entered";
$arrayInputs[1] = "3";
$arrayInputs[2] = "50";
$configFilePath = "/var/www/html/configuration.txt";
$currentSettingsFile = fopen($configFilePath, "r");
if(flock($currentSettingsFile, LOCK_SH)) {
$arrayInputs = explode(PHP_EOL, fread($currentSettingsFile, filesize($configFilePath)));
flock($currentSettingsFile, LOCK_UN);
$closeFlag = fclose($currentSettingsFile);
}
if(isset( $_POST['save_values'])) {
if(!empty($_POST['getURL'])) {
$arrayInputs[0] = $_POST['getURL'];
}
if(!empty($_POST['getURR'])) {
$arrayInputs[1] = $_POST['getURR'];
}
if(!empty($_POST['getBrightness'])) {
$arrayInputs[2] = $_POST['getBrightness'];
}
}
if(!$closeFlag) fclose($currentSettingsFile);
$currentSettingsFile = fopen($configFilePath, "w");
if(flock($currentSettingsFile, LOCK_SH)) {
foreach ($arrayInputs as $key => $value) {
if($value != '')
fwrite($currentSettingsFile,$value.PHP_EOL);
}
flock($currentSettingsFile, LOCK_UN);
fclose($currentSettingsFile);
}
?>
C++
char configFilePath[]="/var/www/html/configuration.txt";
std::fstream configFile;
configFile.open(configFilePath, std::fstream::in);
if(configFile.is_open()){
// do stuff
} else {
std::cout<<"Error ! Could not open Configuration file to read"<<std::endl;
}
The C++ program has returned no error so far; it can open the file. And PHP will return Warning: fread(): Length parameter must be greater than 0 because the file is empty.
I believe that PHP is deleting the file's content.
When locking a file in PHP, you lock a LOCK file, not the main file. Example:
$myfile = 'myfile.txt';
$lockfile = 'myfile.lock';
$lock = fopen($lockfile,'a');
if(flock($lock, LOCK_EX)) // The lock file is locked in exclusive mode - so I can write to it.
{
$fp = fopen($myfile,'w');
fputs($fp, "I am writing safely!");
fclose($fp);
flock($lock, LOCK_UN); // Always unlock it!
}
fclose($lock);
You work similarly in C++ because PHP is not locking the actual file. It is locking a lock file. The exact syntax depends heavily on your version of C/C++ and the operating system. So, I will use minimal syntax.
FILE *lock = fopen(lockfile, "r+");
if (flock(fileno(lock), LOCK_EX) == 0)   /* flock() returns 0 on success */
{
    /* Locked. You can open a stream to ANOTHER file and play with it. */
    flock(fileno(lock), LOCK_UN);
}
fclose(lock);
Here is the question: I need to execute a task that pushes a lot of data to another MySQL database once per minute; if the first task hasn't finished, the second one starts anyway, so there is a concurrency problem. How can I resolve it?
I have some ideas: first, make the task's execution time shorter than the start time of the next task; second, let the task support multiple processes; but I don't know how to write the code.
public function execute(Input $input, Output $output)
{
$tele_data = Telesales::field('*')->where([['create_time','<',time()-48*3600],['customer_label','in',[2,6,7]],['virtual_sale','=','0']])->whereRaw('phone is not null')->select()->toArray();
foreach($tele_data as $key=>$value) {
static::pushTeleToIdc($value);
}
}
private static function pushTeleToIdc($data = []) {
$res = Telesales::where('id',$data['id'])->update(['virtual_sale'=>'1']);
if(!$res) {
return;
}
$url = config('idc.tele_url');
$key = config('idc.tele_key');
$channel = config('idc.tele_channel');
$time = time();
$sign = md5($key.$channel.$time);
$urls = $url."?channel=".$channel."&sign=".$sign."&time=".$time;
$require_params = config('idc.require_params');
foreach($require_params as $key=>$value) {
if(array_key_exists($key,$data) && !empty($data[$key])) {
$d[$key] = $data[$key];
}else{
$d[$key] = empty($value)?'':$value[array_rand($value,1)];
}
}
$d['register_time'] = $d['create_time'];
$res = post_url($urls,$d);
$result = json_decode($res,true);
if (isset($result['code']) && $result['code'] != 0){
Log::init(['single'=>'tpushidc'])->error($res);
}
}
Could you help me resolve the problem?
The easiest thing to do is to set up a flag telling that the process is already in progress, and check at the start of the function whether that's the case. I don't know how you want to set up the visibility of your code, so I leave it to you to extract $myFile to the file/class scope (same goes for the file path; you probably want to use some /var or /log folder for such stuff).
So the gist is: we create a file; if it doesn't exist or there is a 0 in it, it means we can start working. On the other hand, if the content of the file is 1, the process will die, and it will keep doing so every time you run it until the first one finishes and rewrites the content of the file to 0 (which means the process is not in progress anymore).
public function execute(Input $input, Output $output)
{
if ($this->isProcessInProgress()) {
die('Process is in progress');
}
$this->startProcess();
$tele_data = [...];
foreach($tele_data as $key=>$value) {
static::pushTeleToIdc($value);
}
$this->finishProcess();
}
private function isProcessInProgress() {
$myFile = 'tele_to_idc_process.txt';
$handle = fopen($myFile, 'r');
if (!$handle)
return false;
$status = fread($handle, 1);
fclose($handle);
return (bool) $status;
}
private function startProcess() {
$myFile = 'tele_to_idc_process.txt';
$handle = fopen($myFile, 'w');
if (!$handle)
return;
$status = fwrite($handle, '1');
fclose($handle);
}
private function finishProcess() {
$myFile = 'tele_to_idc_process.txt';
$handle = fopen($myFile, 'w');
if (!$handle)
return;
$status = fwrite($handle, '0');
fclose($handle);
}
You might get a warning if the file doesn't exist; you can suppress it with @fopen instead of fopen.
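Alternatively (just a sketch of the same isProcessInProgress() idea, not a drop-in from the code above), you can test for the flag file first instead of suppressing the warning:
private function isProcessInProgress() {
    $myFile = 'tele_to_idc_process.txt';
    if (!file_exists($myFile)) {
        return false; // no flag file yet, so nothing is in progress
    }
    // the file holds a single character: '1' = in progress, '0' = idle
    return trim((string) file_get_contents($myFile)) === '1';
}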
Here is my code, with the filenames below.
It works without problems if, let's say, I just use
update.php?pokemon=pikachu
it updates the pikachu value in my found.txt by +0.0001.
But now my problem: when I have multiple threads running and randomly
two of the requests are
update.php?pokemon=pikachu
and
update.php?pokemon=zaptos
I see that the found.txt file
is then empty!!
So nothing is written in it anymore.
So I guess it's a bug when the PHP file is open and another request is posted to the server.
How can I solve this problem? It occurs often.
found.txt
pikachu:2.2122
arktos:0
zaptos:0
lavados:9.2814
blabla:0
update.php
<?php
$file = "found.txt";
$fh = fopen($file,'r+');
$gotPokemon = $_GET['pokemon'];
$users = '';
while(!feof($fh)) {
$user = explode(':',fgets($fh));
$pokename = trim($user[0]);
$infound = trim($user[1]);
// check for empty indexes
if (!empty($pokename)) {
if ($pokename == $gotPokemon) {
if ($gotPokemon == "Pikachu"){
$infound+=0.0001;
}
if ($gotPokemon == "Arktos"){
$infound+=0.0001;
}
if ($gotPokemon == "Zaptos"){
$infound+=0.0001;
}
if ($gotPokemon == "Lavados"){
$infound+=0.0001;
}
}
$users .= $pokename . ':' . $infound;
$users .= "\r\n";
}
}
file_put_contents('found.txt', $users);
fclose($fh);
?>
I would create an exclusive lock after opening the file and then release the lock before closing the file.
To create an exclusive lock on the file:
flock($fh, LOCK_EX);
To release it:
flock($fh, LOCK_UN);
Anyway, you will need to check whether another thread already has the lock, so the first idea that comes up is to make a few attempts to get the lock and, if it's ultimately not possible, inform the user, throw an exception, or take whatever other action avoids an infinite loop:
$fh = fopen("found.txt", "w+");
$attempts = 0;
do {
$attempts++;
if ($attempts > 5) {
// throw exception or return response with http status code = 500
}
if ($attempts != 1) {
sleep(1);
}
} while (!flock($fh, LOCK_EX));
// rest of your code
file_put_contents('found.txt', $users);
flock($fh, LOCK_UN); // release the lock
fclose($fh);
Update
Probably the issue still remains because of the reading part, so let's also create a shared lock before starting to read, and simplify the code:
$file = "found.txt";
$fh = fopen($file,'r+');
$gotPokemon = $_GET['pokemon'];
$users = '';
// we add a shared lock for reading; this is a blocking call, so it waits while another process holds an exclusive lock
// (the third flock() argument is an output parameter that is only meaningful with LOCK_NB)
$locked = flock($fh, LOCK_SH, $wouldblock);
while(!feof($fh)) {
// your code inside while loop
}
// we add an exclusive lock for writing
flock($fh, LOCK_EX, $wouldblock);
file_put_contents('found.txt', $users);
flock($fh, LOCK_UN); // release the locks
fclose($fh);
Let's see if it works
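A further variation (my own sketch, not part of the answer above) is to write back through the already-locked handle instead of reopening the file with file_put_contents(), so the exclusive lock covers the write itself:
flock($fh, LOCK_EX);   // upgrade to an exclusive lock for writing
rewind($fh);           // back to the start of the file
ftruncate($fh, 0);     // drop the old contents
fwrite($fh, $users);   // write the rebuilt contents
fflush($fh);           // make sure it reaches the file before unlocking
flock($fh, LOCK_UN);
fclose($fh);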
Let's consider a sample php script which deletes a line by user input:
$DELETE_LINE = $_GET['line'];
$out = array();
$data = @file("foo.txt");
if($data)
{
foreach($data as $line)
if(trim($line) != $DELETE_LINE)
$out[] = $line;
}
$fp = fopen("foo.txt", "w+");
flock($fp, LOCK_EX);
foreach($out as $line)
fwrite($fp, $line);
flock($fp, LOCK_UN);
fclose($fp);
I want to know: if some user is currently executing this script and the file "foo.txt" is locked, and at the same time or before completion of its execution some other user calls this script, then what will happen?
Will the second user's process wait for the first user to unlock the file, or will the line deletion requested by the second user fail?
If you try to acquire an exclusive lock while another process has the file locked, your attempt will wait until the file is unlocked. This is the whole point of locking.
See the Linux documentation of flock(), which describes how it works in general across operating systems. PHP uses fcntl() under the hood so NFS shares are generally supported.
There's no timeout. If you want to implement a timeout yourself, you can do something like this:
$count = 0;
$timeout_secs = 10; //number of seconds of timeout
$got_lock = true;
while (!flock($fp, LOCK_EX | LOCK_NB, $wouldblock)) {
if ($wouldblock && $count++ < $timeout_secs) {
sleep(1);
} else {
$got_lock = false;
break;
}
}
if ($got_lock) {
// Do stuff with file
}
I am trying to stream/pipe a file to the user's browser through HTTP from FTP. That is, I am trying to print the contents of a file on an FTP server.
This is what I have so far:
public function echo_contents() {
$file = fopen('php://output', 'w+');
if(!$file) {
throw new Exception('Unable to open output');
}
try {
$this->ftp->get($this->path, $file);
} catch(Exception $e) {
fclose($file); // wtb finally
throw $e;
}
fclose($file);
}
$this->ftp->get looks like this:
public function get($path, $stream) {
ftp_fget($this->ftp, $stream, $path, FTP_BINARY); // Line 200
}
With this approach, I am only able to send small files to the user's browser. For larger files, nothing gets printed and I get a fatal error (readable from Apache logs):
PHP Fatal error: Allowed memory size of 16777216 bytes exhausted (tried to allocate 15994881 bytes) in /xxx/ftpconnection.php on line 200
I tried replacing php://output with php://stdout without success (nothing seems to be sent to the browser).
How can I efficiently download from FTP while sending that data to the browser at the same time?
Note: I would not like to use file_get_contents('ftp://user:pass@host:port/path/to/file'); or similar.
Found a solution!
Create a socket pair (anonymous pipe?). Use the non-blocking ftp_nb_fget function to write to one end of the pipe, and echo the other end of the pipe.
Tested to be fast (easily 10MB/s on a 100Mbps connection) so there's not much I/O overhead.
Be sure to clear any output buffers. Frameworks commonly buffer your output.
public function echo_contents() {
/* FTP writes to [0]. Data passed through from [1]. */
$sockets = stream_socket_pair(STREAM_PF_UNIX, STREAM_SOCK_STREAM, STREAM_IPPROTO_IP);
if($sockets === FALSE) {
throw new Exception('Unable to create socket pair');
}
stream_set_write_buffer($sockets[0], 0);
stream_set_timeout($sockets[1], 0);
try {
// $this->ftp is an FtpConnection
$get = $this->ftp->get_non_blocking($this->path, $sockets[0]);
while(!$get->is_finished()) {
$contents = stream_get_contents($sockets[1]);
if($contents !== false) {
echo $contents;
flush();
}
$get->resume();
}
$contents = stream_get_contents($sockets[1]);
if($contents !== false) {
echo $contents;
flush();
}
} catch(Exception $e) {
fclose($sockets[0]); // wtb finally
fclose($sockets[1]);
throw $e;
}
fclose($sockets[0]);
fclose($sockets[1]);
}
// class FtpConnection
public function get_non_blocking($path, $stream) {
// $this->ftp is the FTP resource returned by ftp_connect
return new FtpNonBlockingRequest($this->ftp, $path, $stream);
}
/* TODO Error handling. */
class FtpNonBlockingRequest {
protected $ftp = NULL;
protected $status = NULL;
public function __construct($ftp, $path, $stream) {
$this->ftp = $ftp;
$this->status = ftp_nb_fget($this->ftp, $stream, $path, FTP_BINARY);
}
public function is_finished() {
return $this->status !== FTP_MOREDATA;
}
public function resume() {
if($this->is_finished()) {
throw new BadMethodCallException('Cannot continue download; already finished');
}
$this->status = ftp_nb_continue($this->ftp);
}
}
Try:
@readfile('ftp://username:password@host/path/file');
I find with a lot of file operations it's worthwhile letting the underlying OS functionality take care of it for you.
Sounds like you need to turn off output buffering for that page, otherwise PHP will try to fit it all in memory.
An easy way to do this is something like:
while (ob_end_clean()) {
; # do nothing
}
Put that ahead of your call to ->get(), and I think that will resolve your issue.
I know this is old, but some may still think it's useful.
I've tried your solution on a Windows environment, and it worked almost perfectly:
$conn_id = ftp_connect($host);
ftp_login($conn_id, $user, $pass) or die();
$sockets = stream_socket_pair(STREAM_PF_INET, STREAM_SOCK_STREAM,
STREAM_IPPROTO_IP) or die();
stream_set_write_buffer($sockets[0], 0);
stream_set_timeout($sockets[1], 0);
set_time_limit(0);
$status = ftp_nb_fget($conn_id, $sockets[0], $filename, FTP_BINARY);
while ($status === FTP_MOREDATA) {
echo stream_get_contents($sockets[1]);
flush();
$status = ftp_nb_continue($conn_id);
}
echo stream_get_contents($sockets[1]);
flush();
fclose($sockets[0]);
fclose($sockets[1]);
I used STREAM_PF_INET instead of STREAM_PF_UNIX because of Windows, and it worked flawlessly... until the last chunk, which was false for no apparent reason, and I couldn't understand why. So the output was missing the last part.
So I decided to use another approach:
$ctx = stream_context_create();
stream_context_set_params($ctx, array('notification' =>
function($code, $sev, $message, $msgcode, $bytes, $length) {
switch ($code) {
case STREAM_NOTIFY_CONNECT:
// Connection estabilished
break;
case STREAM_NOTIFY_FILE_SIZE_IS:
// Getting file size
break;
case STREAM_NOTIFY_PROGRESS:
// Some bytes were transferred
break;
default: break;
}
}));
#readfile("ftp://$user:$pass#$host/$filename", false, $ctx);
This worked like a charm with PHP 5.4.5. The bad part is that you can't catch the transferred data, only the chunk size.
A quick search brought up PHP's flush().
This article might also be of interest: http://www.net2ftp.org/forums/viewtopic.php?id=3774
(I've never run into this problem myself, so this is just a wild guess; but maybe...)
Maybe changing the size of the output buffer for the "file" you are writing to could help?
For that, see stream_set_write_buffer.
For instance:
$fp = fopen('php://output', 'w+');
stream_set_write_buffer($fp, 0);
With this, your code should use a non-buffered stream -- this might help...