I'm using the following code to create and append data to a tar archive in PHP. The problem is that Phar does not take an exclusive lock on the tar file, which causes corruption when concurrent processes write to the same file.
function phar_put_contents($fname, $archive, $data) {
    $i = 0;
    do {
        $fp = @fopen($archive.'.lock', 'w');
        if (!$fp) {
            usleep(25);
            continue;
        }
        if (flock($fp, LOCK_EX)) {
            try {
                $myPhar = new PharData($archive.'.tar', 0);
                $myPhar[$fname] = $data;
                $myPhar->stopBuffering();
                flock($fp, LOCK_UN) && @fclose($fp);
                @unlink($archive.'.lock');
                return true;
            } catch (Exception $e) {
                error_log($e->getMessage()." in ".$e->getFile().":".$e->getLine(), 0);
                unset($e);
                @flock($fp, LOCK_UN) && @fclose($fp);
            }
        }
    } while ($i++ < 8);
    return false;
}
Using a lock file seems like a "good" solution, but it's not optimal, since my archives still get corrupted quite frequently.
OK, it seems the Phar and PharData classes in PHP are somewhat unfinished: they have neither lock() nor close(), which makes my approach of external locking unworkable.
The following code is what I ended up with for a function that appends data to a tar archive.
function phar_put_contents($fname, $archive, $data) {
    $i = 0;
    do {
        $fp = @fopen($archive.'.lock', 'w');
        if (!$fp) {
            usleep(25);
            continue;
        }
        if (flock($fp, LOCK_EX)) {
            try {
                file_put_contents('/tmp/'.$fname, $data);
                // Append to the archive if it exists, otherwise create it.
                $tarCmd = "tar ".(file_exists($archive.".tar") ? "-rf " : "-cf ").$archive.".tar -C /tmp ".$fname;
                exec($tarCmd, $result, $status);
                if ($status != 0)
                    throw new Exception($tarCmd.implode("\n", $result));
                @unlink('/tmp/'.$fname);
                flock($fp, LOCK_UN) && @fclose($fp);
                @unlink($archive.'.lock');
                return true;
            } catch (Exception $e) {
                error_log($e->getMessage()." in ".$e->getFile().":".$e->getLine(), 0);
                unset($e);
                @flock($fp, LOCK_UN) && @fclose($fp);
            }
        }
    } while ($i++ < 8);
    return false;
}
Note that I'm using exec() to call the external tar binary. This was a necessity, since Phar flushes to the archive unreliably, leaving the tar file broken when two instances of the code modify it at the same time.
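For reference, a minimal usage sketch (the entry name, archive base path, and payload here are made up for illustration):

// Hypothetical values; adjust the paths to your environment.
$ok = phar_put_contents('entry-'.time().'.json', '/var/data/myarchive', json_encode(array('foo' => 'bar')));
if (!$ok) {
    error_log('Giving up: could not append to archive after 8 attempts');
}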
I have one file: configuration.txt.
This file gets read by PHP, then written by the same PHP script, while a C++ program reads the contents of the same file at regular intervals.
PHP:
$closeFlag = false;
$arrayInputs = new SplFixedArray(3);
$arrayInputs[0] = "URL not entered";
$arrayInputs[1] = "3";
$arrayInputs[2] = "50";
$configFilePath = "/var/www/html/configuration.txt";

$currentSettingsFile = fopen($configFilePath, "r");
if (flock($currentSettingsFile, LOCK_SH)) {
    $arrayInputs = explode(PHP_EOL, fread($currentSettingsFile, filesize($configFilePath)));
    flock($currentSettingsFile, LOCK_UN);
    $closeFlag = fclose($currentSettingsFile);
}

if (isset($_POST['save_values'])) {
    if (!empty($_POST['getURL'])) {
        $arrayInputs[0] = $_POST['getURL'];
    }
    if (!empty($_POST['getURR'])) {
        $arrayInputs[1] = $_POST['getURR'];
    }
    if (!empty($_POST['getBrightness'])) {
        $arrayInputs[2] = $_POST['getBrightness'];
    }
}

if (!$closeFlag) fclose($currentSettingsFile);

$currentSettingsFile = fopen($configFilePath, "w");
if (flock($currentSettingsFile, LOCK_SH)) {
    foreach ($arrayInputs as $key => $value) {
        if ($value != '')
            fwrite($currentSettingsFile, $value.PHP_EOL);
    }
    flock($currentSettingsFile, LOCK_UN);
    fclose($currentSettingsFile);
}
?>
C++:
char configFilePath[] = "/var/www/html/configuration.txt";
std::fstream configFile;
configFile.open(configFilePath, std::fstream::in);
if (configFile.is_open()) {
    // do stuff
} else {
    std::cout << "Error! Could not open configuration file to read" << std::endl;
}
The C++ program has returned no errors so far; it can open the file. But PHP returns Warning: fread(): Length parameter must be greater than 0 because the file is empty.
I believe that PHP is deleting the file's content.
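One likely cause, for what it's worth: fopen($configFilePath, "w") truncates the file the instant it is opened, before flock() is ever acquired, so the C++ program can catch the file while it is empty. A minimal sketch of deferring the truncation until the lock is held ("c" mode opens for writing without truncating):

$currentSettingsFile = fopen($configFilePath, "c");
if (flock($currentSettingsFile, LOCK_EX)) {
    ftruncate($currentSettingsFile, 0); // safe now: the exclusive lock is held
    foreach ($arrayInputs as $value) {
        if ($value != '')
            fwrite($currentSettingsFile, $value.PHP_EOL);
    }
    fflush($currentSettingsFile);
    flock($currentSettingsFile, LOCK_UN);
}
fclose($currentSettingsFile);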
When locking a file in PHP, you lock a LOCK file, not the main file. Example:
$myfile = 'myfile.txt';
$lockfile = 'myfile.lock';
$lock = fopen($lockfile, 'a');
if (flock($lock, LOCK_EX)) // The lock file is locked in exclusive mode - so I can write to it.
{
    $fp = fopen($myfile, 'w');
    fputs($fp, "I am writing safely!");
    fclose($fp);
    flock($lock, LOCK_UN); // Always unlock it!
}
fclose($lock);
You work similarly in C++, because PHP is not locking the actual file; it is locking a lock file. The exact syntax depends heavily on your version of C/C++ and the operating system, so I will use minimal syntax.
FILE *lock = fopen(lockfile, "r+");
if (flock(fileno(lock), LOCK_EX) == 0)
{
    // Locked. You can open a stream to ANOTHER file and play with it.
    flock(fileno(lock), LOCK_UN);
}
fclose(lock);
I have a process that writes a file using file_put_contents():
file_put_contents($file, $data, LOCK_EX);
I have added the LOCK_EX parameter to prevent concurrent processes from writing to the same file, and prevent trying to read it when it's still being written to.
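Note that for the reading side there is no equivalent flag: file_get_contents() takes no lock, so a shared lock has to be taken manually. A minimal sketch of a locked read (assuming the writer keeps using LOCK_EX as above):

$fp = fopen($file, 'r');
if ($fp !== false) {
    if (flock($fp, LOCK_SH)) { // blocks while a writer holds LOCK_EX
        $data = stream_get_contents($fp);
        flock($fp, LOCK_UN);
    }
    fclose($fp);
}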
I'm having difficulties testing this properly due to the concurrent nature, and I'm not sure how to approach this. I've got this so far:
if (file_exists($file)) {
    $fp = fopen($file, 'r+');
    if (!flock($fp, LOCK_EX|LOCK_NB, $wouldblock)) {
        if ($wouldblock) {
            // how can I wait until the file is unlocked?
        } else {
            // what other reasons could there be for not being able to lock?
        }
    }
    // does calling fclose automatically release the lock even if a flock was not obtained above?
    fclose($fp);
}
Questions being:
Is there a way to wait until the file is not locked anymore, while keeping the option to give this a time limit?
Does fclose() automatically release all locks, even when another process had locked the file?
I wrote a small test that uses sleep() so that I could simulate concurrent read/write processes with a simple AJAX call. It seems this answers both questions:
When the file is locked, a sleep() that approximates the expected write duration, followed by another lock check, allows for waiting. This could even be put in a while loop with an interval.
fclose() does indeed not remove a lock held by another process that's already running, as confirmed in some of the answers.
According to the docs, PHP 5.5 and lower on Windows do not support the $wouldblock parameter.
I was able to test this on Windows with PHP 5.3 and concluded that the file_is_locked() from my test still works in this scenario:
flock() still returns false, just without setting $wouldblock, so it is still caught by my else branch.
if (isset($_POST['action'])) {
    $file = 'file.txt';
    $fp = fopen($file, 'r+');

    if ($wouldblock = file_is_locked($fp)) {
        // wait and then try again
        sleep(5);
        $wouldblock = file_is_locked($fp);
    }

    switch ($_POST['action']) {
        case 'write':
            if ($wouldblock) {
                echo 'already writing';
            } else {
                flock($fp, LOCK_EX);
                fwrite($fp, 'yadayada');
                sleep(5);
                echo 'done writing';
            }
            break;
        case 'read':
            if ($wouldblock) {
                echo 'cant read, already writing';
            } else {
                echo fread($fp, filesize($file));
            }
            break;
    }

    fclose($fp);
    die();
}

function file_is_locked($fp) {
    if (!flock($fp, LOCK_EX|LOCK_NB, $wouldblock)) {
        if ($wouldblock) {
            return 'locked'; // file is locked
        } else {
            return 'no idea'; // can't lock for whatever reason (for example being locked on Windows + PHP 5.3)
        }
    } else {
        return false;
    }
}
I often use a small class... that is secure and fast. Basically, you may write only when you have obtained an exclusive lock on the file; otherwise you should wait until the lock is released...
lock_file.php
<?php
/*
Reference Material
http://en.wikipedia.org/wiki/ACID
*/
class Exclusive_Lock {
    /* Public variables */
    public $filename;          // The file to be locked
    public $timeout = 30;      // The timeout value of the lock
    public $permission = 0755; // The permission value of the locked file

    /* Constructor */
    public function __construct($filename, $timeout = 1, $permission = null, $override = false) {
        // Append '.lck' extension to filename for the locking mechanism
        $this->filename = $filename . '.lck';
        // Timeout should be some factor greater than the maximum script execution time
        $temp = @get_cfg_var('max_execution_time');
        if ($temp === false || $override === true) {
            if ($timeout >= 1) $this->timeout = $timeout;
            set_time_limit($this->timeout);
        } else {
            if ($timeout < 1) $this->timeout = $temp;
            else $this->timeout = $timeout * $temp;
        }
        // Should some other permission value be necessary
        if (isset($permission)) $this->permission = $permission;
    }

    /* Methods */
    public function acquireLock() {
        // Create the lock file; the 'x' mode is used to detect a preexisting lock
        $fp = @fopen($this->filename, 'x');
        // If an error occurs, fail the lock
        if ($fp === false) return false;
        // If setting the permission is unsuccessful, fail the lock
        if (!@chmod($this->filename, $this->permission)) return false;
        // If unable to write the timeout value, fail the lock
        if (false === @fwrite($fp, time() + intval($this->timeout))) return false;
        // If the lock file is successfully closed, the lock is valid
        return fclose($fp);
    }

    public function releaseLock() {
        // Delete the file with the extension '.lck'
        return @unlink($this->filename);
    }

    public function timeLock() {
        // Retrieve the contents of the lock file
        $timeout = @file_get_contents($this->filename);
        // If no contents were retrieved, return an error
        if ($timeout === false) return false;
        // Return the timeout value
        return intval($timeout);
    }
}
?>
Simple use as follows:
include("lock_file.php");
$file = new Exclusive_Lock("my_file.dat", 2);
if ($file->acquireLock()) {
$data = fopen("my_file.dat", "w+");
$read = "READ: YES";
fwrite($data, $read);
fclose($data);
$file->releaseLock();
chmod("my_file.dat", 0755);
unset($data);
unset($read);
}
If you want to add a more complex layer you can use another trick: use while (1) to start an infinite loop that breaks only when the exclusive lock is acquired. This is not suggested, since it can block your server for an undefined time...
include("lock_file.php");
$file = new Exclusive_Lock("my_file.dat", 2);
while (1) {
if ($file->acquireLock()) {
$data = fopen("my_file.dat", "w+");
$read = "READ: YES";
fwrite($data, $read);
fclose($data);
$file->releaseLock();
chmod("my_file.dat", 0755);
unset($data);
unset($read);
break;
}
}
file_put_contents() is very fast and writes directly to the file, but as you say it has a limit: a race condition exists and may happen even if you use LOCK_EX. I think a PHP class is more flexible and usable...
See this thread, which covers a similar question: php flock behaviour when file is locked by one process
The first question is answered here: How to detect the finish with file_put_contents() in php? Because PHP is single-threaded, the only solution is to extend core PHP with pthreads; one good, simple article about it is https://www.mullie.eu/parallel-processing-multi-tasking-php/
The second question is answered here: Will flock'ed file be unlocked when the process die unexpectedly?
fclose() will only unlock a valid handle that was opened using fopen() or fsockopen(), so if the handle is still valid then yes, it will close the file and release the lock.
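To make the first question concrete, here is a minimal sketch of waiting for a lock with a time limit, using the non-blocking flag plus a retry interval (the 5-second limit and 100 ms interval are arbitrary choices):

function lock_with_timeout($fp, $timeoutSeconds = 5) {
    $deadline = microtime(true) + $timeoutSeconds;
    while (!flock($fp, LOCK_EX | LOCK_NB, $wouldblock)) {
        if (!$wouldblock || microtime(true) >= $deadline) {
            return false; // hard failure, or we ran out of time
        }
        usleep(100000); // retry every 100 ms
    }
    return true; // lock acquired
}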
Here is a fix for @Alessandro's answer, so that it works correctly and does not lock the file forever.
lock_file.php
<?php
/*
Reference Material
http://en.wikipedia.org/wiki/ACID
*/
class Exclusive_Lock {
    /* Public variables */
    public $filename;          // The file to be locked
    public $timeout = 30;      // The timeout value of the lock
    public $permission = 0755; // The permission value of the locked file

    /* Constructor */
    public function __construct($filename, $timeout = 1, $permission = null, $override = false) {
        // Append '.lck' extension to filename for the locking mechanism
        $this->filename = $filename . '.lck';
        // Timeout should be some factor greater than the maximum script execution time
        $temp = @get_cfg_var('max_execution_time');
        if ($temp === false || $override === true) {
            if ($timeout >= 1) $this->timeout = $timeout;
            set_time_limit($this->timeout);
        } else {
            if ($timeout < 1) $this->timeout = $temp;
            else $this->timeout = $timeout;
        }
        // Should some other permission value be necessary
        if (isset($permission)) $this->permission = $permission;
        // Release any stale lock whose timeout has already expired
        if ($this->timeLock()) {
            $this->releaseLock();
        }
    }

    /* Methods */
    public function acquireLock() {
        // Create the lock file; the 'x' mode is used to detect a preexisting lock
        $fp = @fopen($this->filename, 'x');
        // If an error occurs, fail the lock
        if ($fp === false) return false;
        // If setting the permission is unsuccessful, fail the lock
        if (!@chmod($this->filename, $this->permission)) return false;
        // If unable to write the timeout value, fail the lock
        if (false === @fwrite($fp, time() + intval($this->timeout))) return false;
        // If the lock file is successfully closed, the lock is valid
        return fclose($fp);
    }

    public function releaseLock() {
        // Delete the file with the extension '.lck'
        return @unlink($this->filename);
    }

    private function timeLock() {
        // Retrieve the contents of the lock file
        $timeout = @file_get_contents($this->filename);
        // If no contents were retrieved, treat the lock as expired
        if ($timeout === false) return true;
        // The lock is stale if its stored expiry time has passed
        return (intval($timeout) < time());
    }
}
Use as follows:
include("lock_file.php");
$file = new Exclusive_Lock("my_file.dat", 2);
if ($file->acquireLock()) {
$data = fopen("my_file.dat", "w+");
$read = "READ: YES";
fwrite($data, $read);
fclose($data);
$file->releaseLock();
chmod("my_file.dat", 0755);
unset($data);
unset($read);
}
Hope that saves someone else some time.
I know that yield can be used to create a data iterator, e.g. to read data from a CSV file.
function csv_generator($file) {
    $handle = fopen($file, "r");
    while (!feof($handle)) {
        yield fgetcsv($handle);
    }
    fclose($handle);
}
But the Generator::send() method suggests that I can do the same for sequential writing, instead of reading.
E.g. I want to use the thing like this:
function csv_output_generator($file) {
    $handle = fopen($file, 'w');
    while (null !== $row = yield) {
        fputcsv($handle, $row);
    }
    fclose($handle);
}

$output_generator = csv_output_generator($file);
$output_generator->send($rows[0]);
$output_generator->send($rows[1]);
$output_generator->send($rows[2]);
// Close the output generator.
$output_generator->send(null);
The above will work, I think.
But $output_generator->send(null); for closing seems wrong, or not ideal. It means that I can never send a literal null. Which is ok for csv writing, but maybe there is a use case for sending null.
Is there any "best practice" for using php generators for sequential writing?
Not saying this is a marvelous idea but if you're talking semantics, this 'feels' great.
Check against a class: pass in an object of a particular class to terminate the generator. Like:
// should probably use namespacing here.
class GeneratorUtilClose {}

class GeneratorUtil {
    public static function close() {
        return new GeneratorUtilClose;
    }
}

function csv_output_generator($file) {
    $handle = fopen($file, 'w');
    while (!(($row = yield) instanceof GeneratorUtilClose)) {
        fputcsv($handle, $row);
    }
    fclose($handle);
}

$output_generator = csv_output_generator($file);
$output_generator->send($rows[0]);
$output_generator->send(GeneratorUtil::close());
Added a little factory in here for extra semantic sugar.
Not ideal either, but it works without creating any other class:
function csv_output_generator($file) {
    $handle = fopen($file, 'w');
    try {
        while ($row = yield) {
            fputcsv($handle, $row);
        }
    } catch (ClosedGeneratorException $e) {
        // closing generator
    }
    fclose($handle);
}

$output_generator = csv_output_generator($file);
$output_generator->send($rows[0]);
$output_generator->send($rows[1]);
$output_generator->send($rows[2]);
// Close the output generator.
$output_generator->throw(new ClosedGeneratorException());
I have an issue I can't seem to find the solution for. I am trying to write to a flat text file. I have echoed all variables out on the screen, verified permissions for the user (www-data), and, just for grins, set everything in the whole folder to 777 - all to no avail. The worst part is that I can call the same function from another file and it writes. I can't seem to find the common thread here...
function ReplaceAreaInFile($AreaStart, $AreaEnd, $File, $ReplaceWith) {
    $FileContents = GetFileAsString($File);
    $Section = GetAreaFromFile($AreaStart, $AreaEnd, $FileContents, TRUE);
    if (isset($Section)) {
        $SectionTop = $AreaStart."\n";
        $SectionTop .= $ReplaceWith;
        $NewContents = str_replace($Section, $SectionTop, $FileContents);
        if (!$Handle = fopen($File, 'w')) {
            return "Cannot open file ($File)";
        }
        /*
        if (!flock($Handle, LOCK_EX | LOCK_NB)) {
            echo 'Unable to obtain file lock';
            exit(-1);
        }
        */
        if (fwrite($Handle, $NewContents) === FALSE) {
            return "Cannot write to file ($File)";
        } else {
            return $NewContents;
        }
    } else {
        return "<p align=\"center\">There was an issue saving your settings. Please try again. If the issue persists contact your provider.</p>";
    }
}
Try with...
$Handle = fopen($File, 'w');
if ($Handle === false) {
    die("Cannot open file ($File)");
}

$written = fwrite($Handle, $NewContents);
if ($written === false) {
    die("Invalid arguments - could not write to file ($File)");
}

if ((strlen($NewContents) > 0) && ($written < strlen($NewContents))) {
    die("There was a problem writing to $File - $written chars written");
}

fclose($Handle);
echo "Wrote $written bytes to $File\n"; // or log to a file
return $NewContents;
and also check for any problems in the error log. There should be something, assuming you've enabled error logging.
You need to check the number of characters written, since PHP's fwrite() behaves like this:
After having problems with fwrite() returning 0 in cases where one would fully expect a return value of false, I took a look at the source code for php's fwrite() itself. The function will only return false if you pass in invalid arguments. Any other error, such as a broken pipe or closed connection, will result in a return value of less than strlen($string), in most cases 0.
Also note that you might in fact be writing to a file, just not the file you expect. Absolute paths might help with tracking this down.
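A quick diagnostic sketch for that (this just logs to whatever your error log is configured to be):

// Log the resolved absolute path and writability before writing.
$resolved = realpath($File);
error_log("About to write to: ".($resolved !== false ? $resolved : "unresolved path: $File"));
error_log("Writable: ".(is_writable($File) ? 'yes' : 'no'));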
The final solution I ended up using for this:
function ReplaceAreaInFile($AreaStart, $AreaEnd, $File, $ReplaceWith) {
    $FileContents = GetFileAsString($File);
    $Section = GetAreaFromFile($AreaStart, $AreaEnd, $FileContents, TRUE);
    if (isset($Section)) {
        $SectionTop = $AreaStart."\n";
        $SectionTop .= $ReplaceWith;
        $NewContents = str_replace($Section, $SectionTop, $FileContents);
        return $NewContents;
    } else {
        return "<p align=\"center\">There was an issue saving your settings.</p>";
    }
}

function WriteNewConfigToFile($File2WriteName, $ContentsForFile) {
    file_put_contents($File2WriteName, $ContentsForFile, LOCK_EX);
}
I did end up using absolute file paths and had to check the permissions on the files. I had to make sure the www-data user in Apache was able to write to the files and was also the user running the script.
I am trying to stream/pipe a file to the user's browser through HTTP from FTP. That is, I am trying to print the contents of a file on an FTP server.
This is what I have so far:
public function echo_contents() {
    $file = fopen('php://output', 'w+');
    if (!$file) {
        throw new Exception('Unable to open output');
    }
    try {
        $this->ftp->get($this->path, $file);
    } catch (Exception $e) {
        fclose($file); // wtb finally
        throw $e;
    }
    fclose($file);
}
$this->ftp->get looks like this:
public function get($path, $stream) {
    ftp_fget($this->ftp, $stream, $path, FTP_BINARY); // Line 200
}
With this approach, I am only able to send small files to the user's browser. For larger files, nothing gets printed and I get a fatal error (readable from Apache logs):
PHP Fatal error: Allowed memory size of 16777216 bytes exhausted (tried to allocate 15994881 bytes) in /xxx/ftpconnection.php on line 200
I tried replacing php://output with php://stdout without success (nothing seems to be sent to the browser).
How can I efficiently download from FTP while sending that data to the browser at the same time?
Note: I would not like to use file_get_contents('ftp://user:pass@host:port/path/to/file'); or similar.
Found a solution!
Create a socket pair (anonymous pipe?). Use the non-blocking ftp_nb_fget function to write to one end of the pipe, and echo the other end of the pipe.
Tested to be fast (easily 10MB/s on a 100Mbps connection) so there's not much I/O overhead.
Be sure to clear any output buffers. Frameworks commonly buffer your output.
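For instance, a minimal way to drain any nested output buffers before streaming (assuming none of the buffered content needs to be kept):

// Close every nested output buffer so echoed chunks reach the client immediately.
while (ob_get_level() > 0) {
    ob_end_clean();
}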
public function echo_contents() {
    /* FTP writes to [0]. Data passed through from [1]. */
    $sockets = stream_socket_pair(STREAM_PF_UNIX, STREAM_SOCK_STREAM, STREAM_IPPROTO_IP);
    if ($sockets === FALSE) {
        throw new Exception('Unable to create socket pair');
    }
    stream_set_write_buffer($sockets[0], 0);
    stream_set_timeout($sockets[1], 0);
    try {
        // $this->ftp is an FtpConnection
        $get = $this->ftp->get_non_blocking($this->path, $sockets[0]);
        while (!$get->is_finished()) {
            $contents = stream_get_contents($sockets[1]);
            if ($contents !== false) {
                echo $contents;
                flush();
            }
            $get->resume();
        }
        $contents = stream_get_contents($sockets[1]);
        if ($contents !== false) {
            echo $contents;
            flush();
        }
    } catch (Exception $e) {
        fclose($sockets[0]); // wtb finally
        fclose($sockets[1]);
        throw $e;
    }
    fclose($sockets[0]);
    fclose($sockets[1]);
}

// class FtpConnection
public function get_non_blocking($path, $stream) {
    // $this->ftp is the FTP resource returned by ftp_connect
    return new FtpNonBlockingRequest($this->ftp, $path, $stream);
}

/* TODO Error handling. */
class FtpNonBlockingRequest {
    protected $ftp = NULL;
    protected $status = NULL;

    public function __construct($ftp, $path, $stream) {
        $this->ftp = $ftp;
        $this->status = ftp_nb_fget($this->ftp, $stream, $path, FTP_BINARY);
    }

    public function is_finished() {
        return $this->status !== FTP_MOREDATA;
    }

    public function resume() {
        if ($this->is_finished()) {
            throw new BadMethodCallException('Cannot continue download; already finished');
        }
        $this->status = ftp_nb_continue($this->ftp);
    }
}
Try:
@readfile('ftp://username:password@host/path/file');
I find with a lot of file operations it's worthwhile letting the underlying OS functionality take care of it for you.
Sounds like you need to turn off output buffering for that page; otherwise PHP will try to fit it all in memory.
An easy way to do this is something like:
while (ob_end_clean()) {
    ; # do nothing
}
Put that ahead of your call to ->get(), and I think that will resolve your issue.
I know this is old, but some may still think it's useful.
I've tried your solution on a Windows environment, and it worked almost perfectly:
$conn_id = ftp_connect($host);
ftp_login($conn_id, $user, $pass) or die();

$sockets = stream_socket_pair(STREAM_PF_INET, STREAM_SOCK_STREAM, STREAM_IPPROTO_IP) or die();
stream_set_write_buffer($sockets[0], 0);
stream_set_timeout($sockets[1], 0);
set_time_limit(0);

$status = ftp_nb_fget($conn_id, $sockets[0], $filename, FTP_BINARY);
while ($status === FTP_MOREDATA) {
    echo stream_get_contents($sockets[1]);
    flush();
    $status = ftp_nb_continue($conn_id);
}
echo stream_get_contents($sockets[1]);
flush();

fclose($sockets[0]);
fclose($sockets[1]);
I used STREAM_PF_INET instead of STREAM_PF_UNIX because of Windows, and it worked flawlessly... until the last chunk, which was false for no apparent reason, and I couldn't understand why. So the output was missing the last part.
So I decided to use another approach:
$ctx = stream_context_create();
stream_context_set_params($ctx, array('notification' =>
    function($code, $sev, $message, $msgcode, $bytes, $length) {
        switch ($code) {
            case STREAM_NOTIFY_CONNECT:
                // Connection established
                break;
            case STREAM_NOTIFY_FILE_SIZE_IS:
                // Getting file size
                break;
            case STREAM_NOTIFY_PROGRESS:
                // Some bytes were transferred
                break;
            default:
                break;
        }
    }
));
@readfile("ftp://$user:$pass@$host/$filename", false, $ctx);
This worked like a charm with PHP 5.4.5. The bad part is that you can't catch the transferred data, only the chunk size.
A quick search brought up PHP's flush().
This article might also be of interest: http://www.net2ftp.org/forums/viewtopic.php?id=3774
(I've never run into this problem myself, so this is just a wild guess, but maybe...)
Maybe changing the size of the output buffer for the "file" you are writing to could help?
For that, see stream_set_write_buffer.
For instance:
$fp = fopen('php://output', 'w+');
stream_set_write_buffer($fp, 0);
With this, your code should use a non-buffered stream -- this might help...