I am trying to stream/pipe a file to the user's browser through HTTP from FTP. That is, I am trying to print the contents of a file on an FTP server.
This is what I have so far:
public function echo_contents() {
    $file = fopen('php://output', 'w+');
    if (!$file) {
        throw new Exception('Unable to open output');
    }

    try {
        $this->ftp->get($this->path, $file);
    } catch (Exception $e) {
        fclose($file); // wtb finally
        throw $e;
    }

    fclose($file);
}
$this->ftp->get looks like this:
public function get($path, $stream) {
    ftp_fget($this->ftp, $stream, $path, FTP_BINARY); // Line 200
}
With this approach, I am only able to send small files to the user's browser. For larger files, nothing gets printed and I get a fatal error (readable from Apache logs):
PHP Fatal error: Allowed memory size of 16777216 bytes exhausted (tried to allocate 15994881 bytes) in /xxx/ftpconnection.php on line 200
I tried replacing php://output with php://stdout without success (nothing seems to be sent to the browser).
How can I efficiently download from FTP while sending that data to the browser at the same time?
Note: I would not like to use file_get_contents('ftp://user:pass@host:port/path/to/file'); or similar.
Found a solution!
Create a socket pair (anonymous pipe?). Use the non-blocking ftp_nb_fget function to write to one end of the pipe, and echo the other end of the pipe.
Tested to be fast (easily 10MB/s on a 100Mbps connection) so there's not much I/O overhead.
Be sure to clear any output buffers. Frameworks commonly buffer your output.
public function echo_contents() {
    /* FTP writes to [0]. Data passed through from [1]. */
    $sockets = stream_socket_pair(STREAM_PF_UNIX, STREAM_SOCK_STREAM, STREAM_IPPROTO_IP);
    if ($sockets === FALSE) {
        throw new Exception('Unable to create socket pair');
    }

    stream_set_write_buffer($sockets[0], 0);
    stream_set_timeout($sockets[1], 0);

    try {
        // $this->ftp is an FtpConnection
        $get = $this->ftp->get_non_blocking($this->path, $sockets[0]);

        while (!$get->is_finished()) {
            $contents = stream_get_contents($sockets[1]);
            if ($contents !== false) {
                echo $contents;
                flush();
            }

            $get->resume();
        }

        $contents = stream_get_contents($sockets[1]);
        if ($contents !== false) {
            echo $contents;
            flush();
        }
    } catch (Exception $e) {
        fclose($sockets[0]); // wtb finally
        fclose($sockets[1]);
        throw $e;
    }

    fclose($sockets[0]);
    fclose($sockets[1]);
}
// class FtpConnection
public function get_non_blocking($path, $stream) {
    // $this->ftp is the FTP resource returned by ftp_connect
    return new FtpNonBlockingRequest($this->ftp, $path, $stream);
}
/* TODO Error handling. */
class FtpNonBlockingRequest {
    protected $ftp = NULL;
    protected $status = NULL;

    public function __construct($ftp, $path, $stream) {
        $this->ftp = $ftp;
        $this->status = ftp_nb_fget($this->ftp, $stream, $path, FTP_BINARY);
    }

    public function is_finished() {
        return $this->status !== FTP_MOREDATA;
    }

    public function resume() {
        if ($this->is_finished()) {
            throw new BadMethodCallException('Cannot continue download; already finished');
        }

        $this->status = ftp_nb_continue($this->ftp);
    }
}
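To connect this with the note about output buffers above: a minimal, hypothetical calling sketch ($connection and the header values are assumptions, not part of the original code):

while (ob_get_level() > 0) {
    ob_end_clean(); // drop any framework output buffers first
}
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="download.bin"'); // assumed name
$connection->echo_contents(); // streams the FTP file straight to the browser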
Try:
@readfile('ftp://username:password@host/path/file');
I find with a lot of file operations it's worthwhile letting the underlying OS functionality take care of it for you.
Sounds like you need to turn off output buffering for that page; otherwise PHP will try to fit it all in memory.
An easy way to do this is something like:

while (ob_get_level() > 0) {
    ob_end_clean(); // discard each nested output buffer (avoids a notice when none is active)
}

Put that ahead of your call to ->get(), and I think that will resolve your issue.
I know this is old, but some may still think it's useful.
I've tried your solution on a Windows environment, and it worked almost perfectly:
$conn_id = ftp_connect($host);
ftp_login($conn_id, $user, $pass) or die();

$sockets = stream_socket_pair(STREAM_PF_INET, STREAM_SOCK_STREAM, STREAM_IPPROTO_IP) or die();
stream_set_write_buffer($sockets[0], 0);
stream_set_timeout($sockets[1], 0);
set_time_limit(0);

$status = ftp_nb_fget($conn_id, $sockets[0], $filename, FTP_BINARY);
while ($status === FTP_MOREDATA) {
    echo stream_get_contents($sockets[1]);
    flush();
    $status = ftp_nb_continue($conn_id);
}
echo stream_get_contents($sockets[1]);
flush();

fclose($sockets[0]);
fclose($sockets[1]);
I used STREAM_PF_INET instead of STREAM_PF_UNIX because of Windows, and it worked flawlessly... until the last chunk, which was false for no apparent reason, and I couldn't understand why. So the output was missing the last part.
So I decided to use another approach:
$ctx = stream_context_create();
stream_context_set_params($ctx, array('notification' =>
    function($code, $sev, $message, $msgcode, $bytes, $length) {
        switch ($code) {
            case STREAM_NOTIFY_CONNECT:
                // Connection established
                break;
            case STREAM_NOTIFY_FILE_SIZE_IS:
                // Getting file size
                break;
            case STREAM_NOTIFY_PROGRESS:
                // Some bytes were transferred
                break;
            default:
                break;
        }
    }));

@readfile("ftp://$user:$pass@$host/$filename", false, $ctx);
This worked like a charm with PHP 5.4.5. The bad part is that you can't get at the transferred data itself, only the byte counts; see the sketch below.
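For illustration, a minimal sketch of what the notification callback does expose (logging via error_log is my assumption; the parameter names match the snippet above):

$ctx = stream_context_create();
stream_context_set_params($ctx, array('notification' =>
    function($code, $sev, $message, $msgcode, $bytes, $length) {
        if ($code === STREAM_NOTIFY_FILE_SIZE_IS) {
            error_log("file size: $length");   // the wrapper reports the size here
        } elseif ($code === STREAM_NOTIFY_PROGRESS) {
            error_log("bytes so far: $bytes"); // only a counter; the data itself is not exposed
        }
    }));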
A quick search brought up PHP's flush().
this article might also be of interest: http://www.net2ftp.org/forums/viewtopic.php?id=3774
(I've never run into this problem myself, so this is just a wild guess; but, maybe...)
Maybe changing the size of the output buffer for the "file" you are writing to could help?
For that, see stream_set_write_buffer.
For instance:

$fp = fopen('php://output', 'w+');
stream_set_write_buffer($fp, 0);

With this, your code should use a non-buffered stream -- this might help...
Related
I have a process that writes a file using file_put_contents():
file_put_contents ( $file, $data, LOCK_EX );
I have added the LOCK_EX parameter to prevent concurrent processes from writing to the same file, and prevent trying to read it when it's still being written to.
I'm having difficulties testing this properly due to the concurrent nature, and I'm not sure how to approach this. I've got this so far:
if (file_exists($file)) {
    $fp = fopen($file, 'r+');
    if (!flock($fp, LOCK_EX|LOCK_NB, $wouldblock)) {
        if ($wouldblock) {
            // how can I wait until the file is unlocked?
        } else {
            // what other reasons could there be for not being able to lock?
        }
    }
    // does calling fclose automatically release all locks even if a flock was not obtained above?
    fclose($fp);
}
Questions being:
Is there a way to wait until the file is not locked anymore, while keeping the option to give this a time limit?
Does fclose() automatically release all locks, even a lock placed by another process on the same file?
I wrote a small test that uses sleep() so that I could simulate concurrent read/write processes with a simple AJAX call. It seems this answers both questions:
when the file is locked, a sleep that approximates the estimated write duration, followed by another lock check, allows for waiting. This could even be put in a while loop with an interval; see the sketch below.
fclose() does indeed not remove the lock placed by the process that's already running, as confirmed in some of the answers.
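A minimal sketch of such a wait loop with a time limit (the helper name and the 10-second budget are assumptions, not from the original post):

// Hypothetical helper: retry a non-blocking exclusive lock until $timeout seconds pass.
function flock_with_timeout($fp, $timeout = 10) {
    $deadline = microtime(true) + $timeout;
    while (!flock($fp, LOCK_EX | LOCK_NB, $wouldblock)) {
        if (!$wouldblock || microtime(true) >= $deadline) {
            return false; // hard failure, or we ran out of time
        }
        usleep(100000); // wait 100 ms before trying again
    }
    return true; // lock acquired
}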
PHP 5.5 and lower on Windows does not support the $wouldblock parameter, according to the docs.
I was able to test this on Windows + PHP 5.3 and concluded that the file_is_locked() from my test still worked in this scenario:
flock() would still return false, just without setting the $wouldblock parameter, and it would still be caught by my else check.
if (isset($_POST['action'])) {
    $file = 'file.txt';
    $fp = fopen($file, 'r+');

    if ($wouldblock = file_is_locked($fp)) {
        // wait and then try again;
        sleep(5);
        $wouldblock = file_is_locked($fp);
    }

    switch ($_POST['action']) {
        case 'write':
            if ($wouldblock) {
                echo 'already writing';
            } else {
                flock($fp, LOCK_EX);
                fwrite($fp, 'yadayada');
                sleep(5);
                echo 'done writing';
            }
            break;
        case 'read':
            if ($wouldblock) {
                echo 'cant read, already writing';
            } else {
                echo fread($fp, filesize($file));
            }
            break;
    }

    fclose($fp);
    die();
}

function file_is_locked($fp) {
    if (!flock($fp, LOCK_EX|LOCK_NB, $wouldblock)) {
        if ($wouldblock) {
            return 'locked'; // file is locked
        } else {
            return 'no idea'; // can't lock for whatever reason (for example being locked on Windows + PHP 5.3)
        }
    } else {
        return false;
    }
}
I often use a small class for this... it is secure and fast: basically, you write only once you have obtained an exclusive lock on the file; otherwise you should wait until it is released...
lock_file.php
<?php
/*
Reference Material
http://en.wikipedia.org/wiki/ACID
*/

class Exclusive_Lock {
    /* Properties */
    public $filename;          // The file to be locked
    public $timeout = 30;      // The timeout value of the lock
    public $permission = 0755; // The permission value of the locked file

    /* Constructor */
    public function __construct($filename, $timeout = 1, $permission = null, $override = false) {
        // Append '.lck' extension to filename for the locking mechanism
        $this->filename = $filename . '.lck';

        // Timeout should be some factor greater than the maximum script execution time
        $temp = @get_cfg_var('max_execution_time');
        if ($temp === false || $override === true) {
            if ($timeout >= 1) $this->timeout = $timeout;
            set_time_limit($this->timeout);
        } else {
            if ($timeout < 1) $this->timeout = $temp;
            else $this->timeout = $timeout * $temp;
        }

        // Should some other permission value be necessary
        if (isset($permission)) $this->permission = $permission;
    }

    /* Methods */
    public function acquireLock() {
        // Create the locked file; the 'x' mode is used to detect a preexisting lock
        $fp = @fopen($this->filename, 'x');
        // If an error occurs, fail the lock
        if ($fp === false) return false;
        // If the permission set is unsuccessful, fail the lock
        if (!@chmod($this->filename, $this->permission)) return false;
        // If unable to write the timeout value, fail the lock
        if (false === @fwrite($fp, time() + intval($this->timeout))) return false;
        // If the lock file is successfully closed, validate the lock
        return fclose($fp);
    }

    public function releaseLock() {
        // Delete the file with the extension '.lck'
        return @unlink($this->filename);
    }

    public function timeLock() {
        // Retrieve the contents of the lock file
        $timeout = @file_get_contents($this->filename);
        // If no contents retrieved, return error
        if ($timeout === false) return false;
        // Return the timeout value
        return intval($timeout);
    }
}
?>
Simple use, as follows:
include("lock_file.php");
$file = new Exclusive_Lock("my_file.dat", 2);
if ($file->acquireLock()) {
$data = fopen("my_file.dat", "w+");
$read = "READ: YES";
fwrite($data, $read);
fclose($data);
$file->releaseLock();
chmod("my_file.dat", 0755);
unset($data);
unset($read);
}
If you want to add a more complex layer you can use another trick: use while (1) to start an infinite loop that breaks only when the exclusive lock is acquired. This is not recommended, since it can block your server for an indefinite time...
include("lock_file.php");
$file = new Exclusive_Lock("my_file.dat", 2);
while (1) {
if ($file->acquireLock()) {
$data = fopen("my_file.dat", "w+");
$read = "READ: YES";
fwrite($data, $read);
fclose($data);
$file->releaseLock();
chmod("my_file.dat", 0755);
unset($data);
unset($read);
break;
}
}
file_put_contents() is very fast and writes directly to the file, but as you say it has a limit: a race condition exists and may occur even if you try to use LOCK_EX. I think a PHP class is more flexible and usable...
See this thread, which treats a similar question: php flock behaviour when file is locked by one process
The first question is answered here: How to detect the finish with file_put_contents() in php?. Because PHP is single-threaded, the only solution is to use the pthreads extension of core PHP; one good, simple article about it is https://www.mullie.eu/parallel-processing-multi-tasking-php/
The second question is answered here Will flock'ed file be unlocked when the process die unexpectedly?
fclose() will only unlock a valid handle that was opened using fopen() or fsockopen(), so if the handle is still valid then yes, it will close the file and release the lock.
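A tiny sketch of that behaviour (the file name is hypothetical):

$fp = fopen('/tmp/demo.txt', 'c'); // hypothetical file; 'c' creates it if missing
flock($fp, LOCK_EX);               // this handle now holds the exclusive lock
// ... critical section ...
fclose($fp);                       // closing the handle releases its lock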
Here is a fix for @Alessandro's answer so it works correctly and does not lock the file forever.
lock_file.php
<?php
/*
Reference Material
http://en.wikipedia.org/wiki/ACID
*/

class Exclusive_Lock {
    /* Properties */
    public $filename;          // The file to be locked
    public $timeout = 30;      // The timeout value of the lock
    public $permission = 0755; // The permission value of the locked file

    /* Constructor */
    public function __construct($filename, $timeout = 1, $permission = null, $override = false) {
        // Append '.lck' extension to filename for the locking mechanism
        $this->filename = $filename . '.lck';

        // Timeout should be some factor greater than the maximum script execution time
        $temp = @get_cfg_var('max_execution_time');
        if ($temp === false || $override === true) {
            if ($timeout >= 1) $this->timeout = $timeout;
            set_time_limit($this->timeout);
        } else {
            if ($timeout < 1) $this->timeout = $temp;
            else $this->timeout = $timeout;
        }

        // Should some other permission value be necessary
        if (isset($permission)) $this->permission = $permission;

        // Release any stale lock whose timeout has already expired
        if ($this->timeLock()) {
            $this->releaseLock();
        }
    }

    /* Methods */
    public function acquireLock() {
        // Create the locked file; the 'x' mode is used to detect a preexisting lock
        $fp = @fopen($this->filename, 'x');
        // If an error occurs, fail the lock
        if ($fp === false) return false;
        // If the permission set is unsuccessful, fail the lock
        if (!@chmod($this->filename, $this->permission)) return false;
        // If unable to write the timeout value, fail the lock
        if (false === @fwrite($fp, time() + intval($this->timeout))) return false;
        // If the lock file is successfully closed, validate the lock
        return fclose($fp);
    }

    public function releaseLock() {
        // Delete the file with the extension '.lck'
        return @unlink($this->filename);
    }

    private function timeLock() {
        // Retrieve the contents of the lock file
        $timeout = @file_get_contents($this->filename);
        // If no contents retrieved, treat the lock as expired
        if ($timeout === false) return true;
        // Return whether the stored timeout has passed
        return (intval($timeout) < time());
    }
}
?>
Use as follows:
include("lock_file.php");
$file = new Exclusive_Lock("my_file.dat", 2);
if ($file->acquireLock()) {
$data = fopen("my_file.dat", "w+");
$read = "READ: YES";
fwrite($data, $read);
fclose($data);
$file->releaseLock();
chmod("my_file.dat", 0755);
unset($data);
unset($read);
}
Hope that saves someone else some time.
I'm using the following code to create and append data to a tar archive in PHP. The problem is that Phar does not take an exclusive lock on the tar file, causing problems when concurrent processes write to the same archive.
function phar_put_contents($fname, $archive, $data) {
    $i = 0;
    do {
        $fp = @fopen($archive.'.lock', 'w');
        if (!$fp) {
            usleep(25);
            continue;
        }
        if (flock($fp, LOCK_EX)) {
            try {
                $myPhar = new PharData($archive.'.tar', 0);
                $myPhar[$fname] = $data;
                $myPhar->stopBuffering();
                flock($fp, LOCK_UN) && @fclose($fp);
                @unlink($archive.'.lock');
                return true;
            } catch (Exception $e) {
                error_log($e->getMessage()." in ".$e->getFile().":".$e->getLine(), 0);
                unset($e);
                @flock($fp, LOCK_UN) && @fclose($fp);
            }
        }
    } while ($i++ < 8);
    return false;
}
Using a lock file seems to be a "good" solution, but it's not optimal, since my archives get corrupted quite frequently.
Ok, it seems the Phar and PharData classes in PHP are somewhat unfinished; they have neither lock() nor close(), which makes my approach of external locking unworkable.
The following code is what I used to try to have a function that appends data to a tar archive.
function phar_put_contents($fname, $archive, $data) {
    $i = 0;
    do {
        $fp = @fopen($archive.'.lock', 'w');
        if (!$fp) {
            usleep(25);
            continue;
        }
        if (flock($fp, LOCK_EX)) {
            try {
                file_put_contents('/tmp/'.$fname, $data);
                $tarCmd = "tar ".(file_exists($archive.".tar") ? "-rf " : "-cf ").$archive.".tar -C /tmp ".$fname;
                exec($tarCmd, $result, $status);
                if ($status != 0)
                    throw new Exception($tarCmd . implode("\n", $result));
                @unlink('/tmp/'.$fname);
                flock($fp, LOCK_UN) && @fclose($fp);
                @unlink($archive.'.lock');
                return true;
            } catch (Exception $e) {
                error_log($e->getMessage()." in ".$e->getFile().":".$e->getLine(), 0);
                unset($e);
                @flock($fp, LOCK_UN) && @fclose($fp);
            }
        }
    } while ($i++ < 8);
    return false;
}
Note that I'm using exec() and calling the external version of tar. This was a necessity, since Phar does very unreliable flushes to the archive, leaving the tar file broken when two instances of the code modify the same file at the same time.
I have an issue I can't seem to find the solution for. I am trying to write to a flat text file. I have echoed all variables out on the screen, verified permissions for the user (www-data), and, just for grins, set everything in the whole folder to 777 -- all to no avail. The worst part is that I can call the same function from another file and it writes. I can't seem to find the common thread here.....
function ReplaceAreaInFile($AreaStart, $AreaEnd, $File, $ReplaceWith) {
    $FileContents = GetFileAsString($File);
    $Section = GetAreaFromFile($AreaStart, $AreaEnd, $FileContents, TRUE);
    if (isset($Section)) {
        $SectionTop = $AreaStart."\n";
        $SectionTop .= $ReplaceWith;
        $NewContents = str_replace($Section, $SectionTop, $FileContents);

        if (!$Handle = fopen($File, 'w')) {
            return "Cannot open file ($File)";
            exit;
        }
        /*
        if (!flock($Handle, LOCK_EX | LOCK_NB)) {
            echo 'Unable to obtain file lock';
            exit(-1);
        }
        */
        if (fwrite($Handle, $NewContents) === FALSE) {
            return "Cannot write to file ($File)";
            exit;
        } else {
            return $NewContents;
        }
    } else {
        return "<p align=\"center\">There was an issue saving your settings. Please try again. If the issue persists contact your provider.</p>";
    }
}
Try with...
$Handle = fopen($File, 'w');
if ($Handle === false) {
    die("Cannot open file ($File)");
}

$written = fwrite($Handle, $NewContents);
if ($written === false) {
    die("Invalid arguments - could not write to file ($File)");
}

if ((strlen($NewContents) > 0) && ($written < strlen($NewContents))) {
    die("There was a problem writing to $File - $written chars written");
}

fclose($Handle);
echo "Wrote $written bytes to $File\n"; // or log to a file
return $NewContents;
and also check for any problems in the error log. There should be something, assuming you've enabled error logging.
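If error logging isn't already enabled, a minimal way to switch it on while debugging (the log path is an assumption) is:

ini_set('log_errors', '1');
ini_set('error_log', '/tmp/php_errors.log'); // hypothetical location
error_reporting(E_ALL);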
You need to check the number of characters written, since in PHP fwrite behaves like this:
After having problems with fwrite() returning 0 in cases where one
would fully expect a return value of false, I took a look at the
source code for php's fwrite() itself. The function will only return
false if you pass in invalid arguments. Any other error, just as a
broken pipe or closed connection, will result in a return value of
less than strlen($string), in most cases 0.
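Given that behaviour, a minimal retry-loop sketch (fwrite_all is a hypothetical helper, not a built-in):

// Hypothetical helper: keep writing until all of $data is out, or give up.
function fwrite_all($handle, $data) {
    $total = 0;
    $length = strlen($data);
    while ($total < $length) {
        $written = fwrite($handle, substr($data, $total));
        if ($written === false || $written === 0) {
            return false; // invalid arguments, or the stream stopped accepting data
        }
        $total += $written;
    }
    return true;
}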
Also, note that you might be writing to a file, just a different file than the one you expect. Absolute paths might help with tracking this down.
The final solution I ended up using for this:
function ReplaceAreaInFile($AreaStart, $AreaEnd, $File, $ReplaceWith) {
    $FileContents = GetFileAsString($File);
    $Section = GetAreaFromFile($AreaStart, $AreaEnd, $FileContents, TRUE);
    if (isset($Section)) {
        $SectionTop = $AreaStart."\n";
        $SectionTop .= $ReplaceWith;
        $NewContents = str_replace($Section, $SectionTop, $FileContents);
        return $NewContents;
    } else {
        return "<p align=\"center\">There was an issue saving your settings.</p>";
    }
}

function WriteNewConfigToFile($File2WriteName, $ContentsForFile) {
    file_put_contents($File2WriteName, $ContentsForFile, LOCK_EX);
}
I did end up using absolute file paths and had to check the permissions on the files. I had to make sure the www-data user in Apache was able to write to the files and was also the user running the script.
I've got a bit of code which, simplified, looks something like:
$fout = fsockopen($host, 80);
stream_set_timeout($fout, 10*3600); // 10 hours
$fin = fopen($file, 'rb'); // not really a file stream, but good enough proxy here
$readbytes = stream_copy_to_stream($fin, $fout);
if(!$readbytes) die('copy failed');
However, I'm sometimes getting the following type of error:
Notice: stream_copy_to_stream(): send of 952 bytes failed with errno=104 Connection reset by peer in ...
Notice: stream_copy_to_stream(): send of 952 bytes failed with errno=32 Broken pipe in ...
And the check on $readbytes there won't pick up the error.
I'm aware that it may be possible to compare the total length of the file with the number of bytes copied, but that only works if the total length of the stream can be determined in advance.
As this happens randomly, I presume that the connection is just being dropped for some weird reason (but if anyone has any suggestions to reduce the likeliness of this happening, I'm all ears). But it'd be nice to be able to know whether the transfer fully succeeded or not.
Is there any way to detect a problem without:
having to know the length of the stream in advance
hooking into the PHP error handler to pick up the error
...or is a buffered fread/fwrite loop that checks the number of bytes written perhaps the best solution here?
Thanks.
Okay, so here's my stream copy function, which tries to detect errors, but it still seems to fail (and I thought fwrite was meant to return the correct number of bytes):
function stream_copy($in, $out, $limit = null, $offset = null) {
    $bufsize = 32*1024;
    if (isset($offset)) fseek($in, $offset, SEEK_CUR);
    while (!feof($in)) {
        if (isset($limit)) {
            if (!$limit) break;
            $data = fread($in, min($limit, $bufsize));
        } else {
            $data = fread($in, $bufsize);
        }
        $datalen = strlen($data);
        $written = fwrite($out, $data);
        if ($written != $datalen) {
            return false; // write failed
        }
        if (isset($limit)) $limit -= $datalen;
    }
    return true;
}
In the above function, I'm still getting 'true' returned even when an error is displayed.
So I'm just going to try hooking into PHP's error handler. I haven't tested the following, but my guess is that it should work:
function stream_copy_to_stream_testerr($source, $dest) {
    $args = func_get_args();
    global $__stream_copy_to_stream_fine;
    $__stream_copy_to_stream_fine = true;

    set_error_handler('__stream_copy_to_stream_testerr');
    call_user_func_array('stream_copy_to_stream', $args);
    restore_error_handler();

    return $__stream_copy_to_stream_fine;
}

function __stream_copy_to_stream_testerr() {
    $GLOBALS['__stream_copy_to_stream_fine'] = false;
    return true;
}
Ugly, but the only solution I can see.
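Usage would then look something like this, with the streams from the original snippet:

$ok = stream_copy_to_stream_testerr($fin, $fout);
if (!$ok) {
    die('copy failed'); // a notice (broken pipe, connection reset) fired during the copy
}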
I'm using a simple unzip function (as seen below) for my files so I don't have to unzip files manually before they are processed further.
function uncompress($srcName, $dstName) {
    $string = implode("", gzfile($srcName));
    $fp = fopen($dstName, "w");
    fwrite($fp, $string, strlen($string));
    fclose($fp);
}
The problem is that if the gzip file is large (e.g. 50 MB), unzipping it takes a large amount of RAM to process.
The question: can I parse a gzipped file in chunks and still get the correct result? Or is there another, better way to handle extracting large gzip files (even if it takes a few seconds more)?
gzfile() is a convenience function that wraps gzopen, gzread, and gzclose.
So, yes, you can manually gzopen the file and gzread it in chunks.
This will uncompress the file in 4kB chunks:
function uncompress($srcName, $dstName) {
    $sfp = gzopen($srcName, "rb");
    $fp = fopen($dstName, "w");

    while (!gzeof($sfp)) {
        $string = gzread($sfp, 4096);
        fwrite($fp, $string, strlen($string));
    }

    gzclose($sfp);
    fclose($fp);
}
Try with:

function uncompress($srcName, $dstName) {
    $fp = fopen($dstName, "w");
    fwrite($fp, implode("", gzfile($srcName)));
    fclose($fp);
}

The $length parameter of fwrite() is optional.
If you are on a Linux host, have the required privileges to run commands, and the gzip command is installed, you could try calling it with something like shell_exec.
Something a bit like this, I guess, would do:

shell_exec('gzip -d your_file.gz');

This way, the file wouldn't be unzipped by PHP.
As a sidenote:
Take care of where the command is run from (or use a switch to tell it "decompress to that directory").
You might want to take a look at escapeshellarg too ;-) (see the sketch below)
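For example, a minimal sketch with a hypothetical path, quoting the argument before it reaches the shell:

$path = '/data/archives/your file.gz'; // hypothetical; the space makes quoting essential
shell_exec('gzip -d ' . escapeshellarg($path));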
As maliayas mentioned, it may lead to a bug. I experienced an unexpected drop out of the while loop, although the gz file had been decompressed successfully. The whole code looks like this and works better for me:
function gzDecompressFile($srcName, $dstName) {
    $error = false;

    if ($file = gzopen($srcName, 'rb')) { // open gz file
        $out_file = fopen($dstName, 'wb'); // open destination file

        // Read 4 KB at a time; stop on EOF (empty string) or a read error (false)
        while (($string = gzread($file, 4096)) !== false && $string !== '') {
            if (!fwrite($out_file, $string)) { // check if writing was successful
                $error = true;
            }
        }

        // close files
        fclose($out_file);
        gzclose($file);
    } else {
        $error = true;
    }

    return !$error;
}
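Usage might look like this (the file names are hypothetical):

if (!gzDecompressFile('backup.sql.gz', 'backup.sql')) {
    die('Decompression failed');
}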