Having problems reading/writing the php://temp stream

I'm having trouble with reading and writing the php://temp stream in PHP 5.3.2
I basically have:
file_put_contents('php://temp/test', 'test');
var_dump(file_get_contents('php://temp/test'));
The only output I get is string(0) ""
Shouldn't I get my 'test' back?

php://temp is not a file path; it's a pseudo-protocol that always creates a new random temp stream when used. The /test part is being ignored entirely. The only extra "argument" the php://temp wrapper accepts is /maxmemory:n. You need to keep a file handle around to the opened temp stream, or it will be discarded:
$tmp = fopen('php://temp', 'r+');
fwrite($tmp, 'test');
rewind($tmp);
fpassthru($tmp);
fclose($tmp);
See http://php.net/manual/en/wrappers.php.php#refsect1-wrappers.php-examples

Each time you use fopen() to get a handle, the content of php://temp will be flushed. Use rewind() and stream_get_contents() to get the content. Or use a normal cache, like APC or memcache :)
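A minimal sketch of that approach: keep one handle open, rewind, then read it back with stream_get_contents():

```php
// Open php://temp once and keep the handle; a second fopen() would
// give a fresh, empty stream.
$tmp = fopen('php://temp', 'r+');
fwrite($tmp, 'test');

// The pointer is now at the end, so rewind before reading.
rewind($tmp);
echo stream_get_contents($tmp); // prints "test"
fclose($tmp);
```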

Finally found a small documented note that explains why.
Example 5 in the PHP manual uses almost exactly the same code sample and says:
php://memory and php://temp are not reusable, i.e. after the streams
have been closed there is no way to refer to them again.
file_put_contents('php://memory', 'PHP');
echo file_get_contents('php://memory'); // prints nothing
I guess this means that file_put_contents() closes the stream internally, which makes file_get_contents() unable to recover the data from the stream again.

I know this is late, but in addition to @OZ_'s answer, I just discovered that fread works too, after you rewind. (Note that fread() requires a length argument.)
$handle = fopen('php://temp', 'w+');
fwrite($handle, 'I am freaking awesome');
fread($handle, 1024); // returns '' because the pointer is at the end
rewind($handle); // resets the pointer to the start
fread($handle, fstat($handle)['size']); // 'I am freaking awesome'

Related

Is there a guaranteed file resource on PHP?

Is there some url/stream that fopen will successfully open on most PHP installations? /dev/null is not available or openable on some systems. Something like php://temp should be a fairly safe bet, right?
The use case is code that guarantees a file resource, instead of the bool|resource union you get from fopen():
/**
 * @return resource
 */
function openFileWithResourceGuarantee() {
    $fh = @fopen('/write/protected/location.txt', 'w');
    if ($fh === false) {
        error_log('Could not open /write/protected/location.txt');
        $fh = fopen('php://temp', 'w'); // fopen() requires a mode argument
    }
    return $fh;
}
In PHP 7 with strict types, the above function should guarantee a resource and avoid bools. I know that resources are not official types, but I still want to be as type-safe as possible.
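The fallback idea can be sanity-checked in isolation (same hypothetical path as above; on a system where that path is not writable, the php://temp branch is taken):

```php
// Try the real file first; fall back to php://temp so the caller
// always receives a usable stream resource, never false.
$fh = @fopen('/write/protected/location.txt', 'w');
if ($fh === false) {
    $fh = fopen('php://temp', 'w');
}
var_dump(is_resource($fh)); // bool(true) either way
fwrite($fh, "log entry\n");
fclose($fh);
```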
php://memory should be universally available.
If you need a stream for writing errors to, why are you not writing to php://stderr?
Example from the docs:
When logging to Apache on Windows, both error_log and trigger_error result in an Apache status of "error" at the front of the message. This is bad if all you want to do is log information. However, you can simply log to stderr, though you will have to do all message assembly yourself:
function LogToApache($Message) {
    $stderr = fopen('php://stderr', 'w');
    fwrite($stderr, $Message);
    fclose($stderr);
}
Note: php://stderr is sometimes the same as php://stdout, but not always.
For streams see: http://php.net/manual/en/wrappers.php.php
Something like php://temp should be a fairly safe bet, right?
As @weirdan already pointed out, php://memory is probably safer, as it does not even need to create a file. Memory access MUST be possible. From the docs:
php://memory and php://temp are read-write streams that allow
temporary data to be stored in a file-like wrapper. The only
difference between the two is that php://memory will always store its
data in memory, whereas php://temp will use a temporary file once the
amount of data stored hits a predefined limit (the default is 2 MB).
The location of this temporary file is determined in the same way as
the sys_get_temp_dir() function.
Not sure if this answers your question completely, but does it lead you in the right direction?
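The 2 MB threshold quoted above can also be tuned per stream with the /maxmemory:n suffix (n is in bytes); a small sketch:

```php
// Keep up to 1 MiB in memory before spilling to a temp file on disk.
$stream = fopen('php://temp/maxmemory:' . (1024 * 1024), 'r+');
fwrite($stream, str_repeat('x', 100));
rewind($stream);
var_dump(strlen(stream_get_contents($stream))); // int(100)
fclose($stream);
```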

PHP flock(): can I use it around file-get- and file-put-contents (read-modify-write)

I want to do a read-modify-write to a file while I hold an exclusive lock. Clearly I can use flock() and do the sequence fopen, flock, fread, fwrite, flock, fclose, but my code would look neater if I could use file_get_contents and file_put_contents. However, I need to lock both processes, and I was wondering if it was possible to do that using flock "somehow". The danger is, of course, that I'll write something that seems to work but doesn't actually lock anything :-)
As @jonathan said, you can use the LOCK_EX flag with file_put_contents(), but for file_get_contents() you can build your own custom function and then use it in your code.
The code below can be a starting point for you:
function fget_contents($path, $operation = LOCK_EX | LOCK_NB, &$wouldblock = 0, $use_include_path = false, $context = null, $offset = 0, $maxlen = null) {
    if (!file_exists($path) || !is_readable($path)) {
        trigger_error("fget_contents($path): failed to open stream: No such file or directory", E_USER_WARNING);
        return false;
    }
    if ($maxlen < 0) {
        trigger_error("fget_contents(): length must be greater than or equal to zero", E_USER_WARNING);
        return false;
    }
    $maxlen = is_null($maxlen) ? filesize($path) : $maxlen;
    $context = is_resource($context) ? $context : null;
    $use_include_path = ($use_include_path !== false && $use_include_path !== FILE_USE_INCLUDE_PATH) ? false : $use_include_path;
    if (is_resource($context)) {
        $resource = fopen($path, 'r', (bool) $use_include_path, $context);
    } else {
        $resource = fopen($path, 'r', (bool) $use_include_path);
    }
    // !== binds tighter than |, so comparing $operation !== LOCK_EX|LOCK_NB
    // directly would be wrong; check against a whitelist instead.
    $validOperations = array(LOCK_EX | LOCK_NB, LOCK_SH | LOCK_NB, LOCK_EX, LOCK_SH);
    if (!in_array($operation, $validOperations, true)) {
        $operation = LOCK_EX | LOCK_NB;
    }
    if (!flock($resource, $operation, $wouldblock)) {
        trigger_error("fget_contents(): the file can't be locked", E_USER_WARNING);
        fclose($resource);
        return false;
    }
    if (-1 === fseek($resource, $offset)) {
        trigger_error("fget_contents(): can't move to offset $offset; the stream doesn't support fseek", E_USER_WARNING);
        flock($resource, LOCK_UN);
        fclose($resource);
        return false;
    }
    $contents = fread($resource, $maxlen);
    flock($resource, LOCK_UN);
    fclose($resource);
    return $contents;
}
For a little explanation: I just combined the parameters of flock() with those of file_get_contents(), so you only need to read a little about those two functions to understand the code. If you don't need the advanced usage, you can simply do:
$variable = fget_contents($yourpathhere);
I think this line is the same as:
$variable = file_get_contents($yourpathhere);
except that fget_contents() will actually lock the file according to your flag.

Load a 'php://temp' or 'php://memory' file within a Symfony File object

I have a blob resource from my DB. I want to temporarily wrap this file in a Symfony File object because I want to use specific methods like the extension guesser, and apply Symfony file validators.
I want to store this temporary file in memory, because the blobs are small files and I don't want to create a file on disk on every request.
I tried to do this in that way:
$file = new File ('php://temp');
but Symfony throws an error that says 'The file "php://temp" does not exist'. Looking at the File source, the error is caused by an is_file($path) check made in the constructor, which I can bypass by passing false as the second argument. But if I do:
$file = new File ('php://temp', false);
the File is created, but then the error comes back later, e.g. when I use the guesser:
$file->guessExtension($file)
because in Symfony/Component/HttpFoundation/File/MimeType/MimeTypeGuesser.php:
public function guess($path)
{
if (!is_file($path)) {
throw new FileNotFoundException($path);
}
(...)
Ok. Then my question is: is there a way to load a 'php://temp' or 'php://memory' stream within a File object?
Pretty sure php://temp writes to memory until it is full and then spills to a file, whereas php://memory stays in memory only, with no fallback.
This likely happens because php://temp and php://memory are non-reusable, so once you've written to it the content may not still be there when you next want it. From the PHP manual:
php://memory and php://temp are not reusable, i.e. after the streams have been closed there is no way to refer to them again.
file_put_contents('php://memory', 'PHP');
echo file_get_contents('php://memory'); // prints nothing
How are you writing to php://temp to begin with? That will be the more important issue, rather than Symfony's File class. I suspect that by the time you are creating a File instance, the php://temp content has already gone.
It's worth noting that using php://temp can create a file on disk in the temporary location, so you might as well write to a tempnam() file anyway. At least then you will have a reference to a physical (but temporary) file.
I suggested to allow passing the file contents (instead of the path) to Symfony's MIME type guesser, to enable guessing "on-the-fly": https://github.com/symfony/symfony/issues/40916
Here's how I do it right now:
use Symfony\Component\Mime\MimeTypes;
$tmpFilename = tempnam(sys_get_temp_dir(), 'guessMimeType_');
file_put_contents($tmpFilename, $content);
$mimeTypes = new MimeTypes();
$guessedMimeType = $mimeTypes->guessMimeType($tmpFilename);
unlink($tmpFilename);
The first line is taken from https://www.php.net/manual/en/function.tempnam.php#93256

About PHP parallel file read/write

Have a file in a website. A PHP script modifies it like this:
$contents = file_get_contents("MyFile");
// ** Modify $contents **
// Now rewrite:
$file = fopen("MyFile","w+");
fwrite($file, $contents);
fclose($file);
The modification is pretty simple. It grabs the file's contents and adds a few lines. Then it overwrites the file.
I am aware that PHP has a function for appending contents to a file rather than overwriting it all over again. However, I want to keep using this method since I'll probably change the modification algorithm in the future (so appending may not be enough).
Anyway, I was testing this out, making like 100 requests. Each time I call the script, I add a new line to the file:
First call:
First!
Second call:
First!
Second!
Third call:
First!
Second!
Third!
Pretty cool. But then:
Fourth call:
Fourth!
Fifth call:
Fourth!
Fifth!
As you can see, the first, second and third lines simply disappeared.
I've determined that the problem isn't the contents string modification algorithm (I've tested it separately). Something is messed up either when reading or writing the file.
I think it is very likely that the issue is when the file's contents are read: if $contents, for some odd reason, is empty, then the behavior shown above makes sense.
I'm no expert with PHP, but perhaps the fact that I performed 100 calls almost simultaneously caused this issue. What if there are two processes, and one is writing the file while the other is reading it?
What is the recommended approach for this issue? How should I manage file modifications when several processes could be writing/reading the same file?
What you need to do is use flock() (file lock)
What I think is happening is that your script grabs the file while the previous script is still writing to it. Since the file is still being written to, PHP gets an empty string, and once the later process is done, it overwrites the previous file.
The solution is to have the script usleep() for a few milliseconds when the file is locked and then try again. Just be sure to put a limit on how many times your script can try.
NOTICE:
If another PHP script or application accesses the file, it may not necessarily use/check for file locks. This is because file locks are often seen as an optional extra, since in most cases they aren't needed.
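A sketch of that retry approach, using a non-blocking flock() and a capped attempt count (the limit of 50 attempts and the 10 ms sleep are arbitrary choices):

```php
$fh = fopen('MyFile', 'c+'); // 'c+' opens for read/write without truncating
$attempts = 0;

// Try to acquire an exclusive lock without blocking; back off briefly
// between attempts, and give up after a fixed number of tries.
while (!flock($fh, LOCK_EX | LOCK_NB)) {
    if (++$attempts >= 50) {
        fclose($fh);
        die('Could not lock MyFile after 50 attempts');
    }
    usleep(10000); // wait 10 ms before retrying
}

$contents = stream_get_contents($fh);
// ** Modify $contents here **
ftruncate($fh, 0);
rewind($fh);
fwrite($fh, $contents);

flock($fh, LOCK_UN);
fclose($fh);
```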
So the issue is parallel access to the same file: while one instance is writing to the file, another is reading before the file has been updated.
Luckily, PHP has a mechanism for locking a file so no one can read from it until the lock is released and the file has been updated:
flock()
can be used, and the documentation is at http://php.net/manual/en/function.flock.php
You need to create a lock, so that any concurrent requests will have to wait their turn. This can be done using the flock() function. You will have to use fopen(), as opposed to file_get_contents(), but it should not be a problem:
$file = 'file.txt';
$fh = fopen($file, 'r+');
if (flock($fh, LOCK_EX)) { // Get an exclusive lock
$data = fread($fh, filesize($file)); // Get the contents of file
// Do something with data here...
ftruncate($fh, 0); // Empty the file
rewind($fh); // Move the pointer back to the start before writing
fwrite($fh, $newData); // Write new data to file
fclose($fh); // Close handle and release lock
} else {
die('Unable to get a lock on file: '.$file);
}

PHP fopen() memory efficiency and usage

I am building a system to create files that will range from a few Kb to say, around 50Mb, and this question is more out of curiosity than anything else. I couldn't find any answers online.
If I use
$handle=fopen($file,'w');
where is the $handle stored before I call
fclose($handle);
? Is it stored in the system's memory, or in a temp file somewhere?
Secondly, I am building the file using a loop that takes 1024 bytes of data at a time, and each time writes data as:
fwrite($handle, $content);
It then calls
fclose($handle);
when the loop is complete and all data is written. However, would it be more efficient or memory friendly to use a loop that did
$handle = fopen($file, 'a');
fwrite($handle, $content);
fclose($handle);
?
In PHP terminology, fopen() (as well as many other functions) returns a resource. So $handle is a resource that references the file handle that is associated with your $file.
Resources are in-memory objects, they are not persisted to the file system by PHP.
Your current methodology is the more efficient of the two options. Opening, writing to, and then closing the same file over and over again is less efficient than just opening it once, writing to it many times, and then closing it. Opening and closing the file requires setting up input and output buffers and allocating other internal resources, which are comparatively expensive operations.
Your file handle is just another in-memory reference, stored like any other program variable or resource. Also, in terms of file I/O: open and close once, and write as many times as you need; that is the most efficient way.
$handle = fopen($file, 'a'); //open once
while(condition){
fwrite($handle, $content); //write many
}
fclose($handle); //close once
According to the PHP docs, fopen() creates a stream, which is a file handle associated with a file in the filesystem.
Creating a new file handle every time you need to write another 1024 bytes would be terribly slow.
