Some PHP functions, like fopen(), have a return value of type "resource".
However, most of these functions require some actual outside resource, such as a file or a database, or they require an additional PHP extension to be installed, such as curl_init().
I sometimes want to experiment with different value types on https://3v4l.org, where I cannot rely on external resources.
Another scenario where this might be relevant is unit tests, where we generally want as few side effects as possible.
So, what is the simplest way to get a value of type resource, without external side effects, 3rd party extensions, or external dependencies?
I use fopen('php://memory', 'w'); or fopen('php://temp', 'w'); when I just need a file stream resource to play with.
php://temp is better if the buffer might exceed 2 MB, because it transparently switches to a temporary file once the data grows past that default limit.
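For example, a quick sketch (the test string is arbitrary) showing that the handle is a genuine resource and behaves like a file:
$resource = fopen('php://temp', 'w+');   // or 'php://memory'
var_dump(gettype($resource));            // string(8) "resource"
fwrite($resource, 'some test data');
rewind($resource);
var_dump(fread($resource, 1024));        // string(14) "some test data"
fclose($resource);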
You can use php://memory or php://temp as a resource. The first one doesn't even need access to the system's /tmp folder.
Example:
$resource = fopen('php://temp', 'w+');
The best I've come up with so far is tmpfile().
It does work on https://3v4l.org/00VlY; they have probably set up some kind of sandboxed filesystem.
$resource = tmpfile();          // creates a real temporary file, removed when the handle is closed
var_dump(gettype($resource));   // string(8) "resource"
var_dump($resource);            // resource(...) of type (stream)
var_dump(intval($resource));    // the numeric resource id
I would say it is still not completely free of side effects, because it creates a temporary file somewhere on disk. Better ideas are welcome!
Related
Let's say I'd like to download some information from a file on the internet within PHP, but I do not need the entire file. Therefore, loading the full file through
$my_file = file_get_contents("https://www.webpage.com/".$filename);
would use up more memory and resources than necessary.
Is there a way to download only e.g. the first 5kb of a file as plain text with PHP?
EDIT:
In the comments it was suggested to use e.g. the maxlen argument of file_get_contents() or similar. But I noticed that the execution time of the call does not vary appreciably for different maxlen values, which suggests that the function loads the full file and then just returns a substring to the variable.
Is there a way to make PHP download just the required amount of bytes and no more, to speed things up?
<?php
// Open a read-only stream to the remote file (requires allow_url_fopen).
$fp = fopen("https://www.webpage.com/".$filename, "r");
// Read at most 5 KB; note that fread() may return fewer bytes than requested on network streams.
$content = fread($fp, 5*1024);
// Closing the handle stops any further data from being transferred.
fclose($fp);
?>
Note: Make sure allow_url_fopen is enabled.
PHP Doc: fopen, fread
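If you want to fail fast when that setting is off, a minimal (hypothetical) guard before opening the stream could look like this:
if (!ini_get('allow_url_fopen')) {
    // URL wrappers are disabled; fall back to cURL or fsockopen() instead.
    die('allow_url_fopen is disabled in php.ini');
}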
This question is closely related to my new findings regarding this question.
Is there any way to preserve the in-stream data of php://memory or php://temp between handles? I read (somewhere I can't source off hand) that subsequent openings of the aforementioned streams clear existing data.
$mem1 = fopen('php://memory', 'r+');
fwrite($mem1, 'hello world');
rewind($mem1);
fpassthru($mem1); // "hello world"
$mem2 = fopen('php://memory', 'r+');
rewind($mem2);
fpassthru($mem2); // empty
So again my question is: is there any way to force existing data to persist in the stream when creating a new handle to it?
(The latter call to fpassthru() would of course dump hello world given this is possible)
Opening one of the pseudo-streams php://temp or php://memory always opens a new stream, which means that every stream you open this way is unique. So you can't use one handle to read the content you have previously written through another.
If you need an in-memory virtual stream that persists data, you can use https://github.com/mikey179/vfsStream - although it's mainly used for testing I/O operations, it should fulfill your requirements - it stores data within internal objects identified by virtual URLs, so you can access the same data in memory by accessing the same URL.
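A rough sketch of how that could look (assuming vfsStream is installed via Composer; the file name root/data.txt is made up for the example):
<?php
require 'vendor/autoload.php';

use org\bovigo\vfs\vfsStream;

vfsStream::setup('root');                   // registers the vfs:// wrapper with a root directory
$url = vfsStream::url('root/data.txt');     // virtual URL, backed by an in-memory object

file_put_contents($url, 'hello world');     // write through one access
echo file_get_contents($url);               // read the same data back later: "hello world"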
The handles are unique, so you'll have to pass the handle around, or (god forbid) keep it global
$GLOBALS['my_global_memory_stream']=fopen('php://memory','r+');
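For example (just a sketch; readBack() is a made-up helper), passing the same handle around keeps the data available:
function readBack($handle) {
    rewind($handle);                   // same handle, so the buffer is still there
    return stream_get_contents($handle);
}

$mem = fopen('php://memory', 'r+');
fwrite($mem, 'hello world');
echo readBack($mem);                   // "hello world"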
Question about this helper http://codeigniter.com/user_guide/helpers/download_helper.html
If, for example, program.exe weighs 4 GB, will it take a lot of PHP memory to read and deliver that file?
$data = file_get_contents("/path/to/program.exe"); // Read the file's contents
$name = 'software.exe';
force_download($name, $data);
The force_download() function just sets the proper HTTP headers to make the client's browser download the file and then outputs the data you pass to it; it doesn't open the file itself.
Check the helper source code, if you need: https://bitbucket.org/ellislab/codeigniter-reactor/src/31b5c1dcf2ed/system/helpers/download_helper.php
Edit: I'd suggest creating your own version of the helper and, instead of using strlen to get the file size, use the PHP function filesize(), which takes only the file name as an argument and returns the size in bytes.
More info, at http://www.php.net/manual/en/function.filesize.php
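A minimal sketch of what such a custom helper could look like (the function name and header set are assumptions, not the actual CodeIgniter code), streaming the file instead of reading it into a string first:
function my_force_download($path, $name)
{
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename="'.$name.'"');
    header('Content-Length: '.filesize($path));   // size taken from disk, no need to load the file
    readfile($path);                              // streams the file to the client in chunks
    exit;
}

my_force_download('/path/to/program.exe', 'software.exe');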
Yea... that could get... bad...
file_get_contents() reads the entire contents of a file into a string. For large files, that can get, well, bad. I would look into readfile(). Please remember too: since CI automatically buffers output when you load a view, there will be no discernible benefit to readfile() if it is used in a CI view. It would almost be better to handle this with an external script, or by outputting directly from the controller and not calling the view at all.
Is file_get_contents() enough for downloading remote movie files located on a server?
I just think that perhaps storing large movie files in a string is harmful, according to the PHP docs.
Or do I need to use cURL? I don't know cURL.
UPDATE: these are big movie files, around 200 MB each.
file_get_contents() is a problem because it's going to load the entire file into memory in one go. If you have enough memory to support the operation (taking into account that if this is a web server, you may have multiple hits that generate this behavior simultaneously, and therefore each need that much memory), then file_get_contents() should be fine. However, it's not the right way to do it - you should use a library specifically intended for these sort of operations. As mentioned by others, cURL will do the trick, or wget. You might also have good luck using fopen('http://someurl', 'r') and reading blocks from the file and then dumping them straight to a local file that's been opened for write privileges.
As @mopoke suggested, it could depend on the size of the file. For a small movie it may suffice. In general I think cURL would be a better fit though. You have much more flexibility with it than with file_get_contents().
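For example, a sketch (URL and file names are placeholders) that lets cURL write straight to a local file so the whole movie never sits in memory:
$src = 'http://somewhere/test.avi';             // placeholder URL
$dst = fopen('test.avi', 'wb');

$ch = curl_init($src);
curl_setopt($ch, CURLOPT_FILE, $dst);           // write the response body directly to the file handle
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects, if any
curl_exec($ch);
curl_close($ch);
fclose($dst);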
For the best performance you may find it makes sense to just use a standard Unix utility like wget. You should be able to call it with system("wget ...") or exec().
http://www.php.net/manual/en/function.system.php
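A rough sketch of that approach (paths and URL are placeholders; remember to escape shell arguments):
$src = 'http://somewhere/test.avi';
$dst = '/tmp/test.avi';

// -O writes to the given output file; exec() puts the command's exit status in $status.
exec('wget -q -O ' . escapeshellarg($dst) . ' ' . escapeshellarg($src), $output, $status);

if ($status !== 0) {
    echo "wget failed with exit code $status";
}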
You can read a few bytes at a time using fread():
$src="http://somewhere/test.avi";
$dst="test.avi";
$f = fopen($src, 'rb');
$o = fopen($dst, 'wb');
while (!feof($f)) {
if (fwrite($o, fread($f, 2048)) === FALSE) {
return 1;
}
}
fclose($f);
fclose($o);
I would like to know the best way to save an image from a URL in PHP.
At the moment I am using
file_put_contents($pk, file_get_contents($PIC_URL));
which is not ideal. I am unable to use curl. Is there a method specifically for this?
Using file_get_contents is fine, unless the file is very large. In that case, you don't really need to be holding the entire thing in memory.
For a large retrieval, you could fopen the remote file, fread it, say, 32KB at a time, and fwrite it locally in a loop until all the file has been read.
For example:
$fout = fopen('/tmp/verylarge.jpeg', 'w');
$fin = fopen("http://www.example.com/verylarge.jpeg", "rb");
while (!feof($fin)) {
    $buffer = fread($fin, 32*1024);
    fwrite($fout, $buffer);
}
fclose($fin);
fclose($fout);
(Devoid of error checking for simplicity!)
Alternatively, you could forego using the url wrappers and use a class like PEAR's HTTP_Request, or roll your own HTTP client code using fsockopen etc. This would enable you to do efficient things like send If-Modified-Since headers if you are maintaining a cache of remote files.
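As a sketch of that last idea (sticking with the URL wrappers rather than a full HTTP client; the URL and date are just examples), a stream context can carry the If-Modified-Since header:
$context = stream_context_create([
    'http' => [
        'method'        => 'GET',
        'header'        => "If-Modified-Since: Sat, 29 Oct 2022 19:43:31 GMT\r\n",
        'ignore_errors' => true,   // return the response even for 304/4xx so we can inspect the status line
    ],
]);

$data = file_get_contents('http://www.example.com/verylarge.jpeg', false, $context);

// $http_response_header is populated by the http wrapper; a 304 means the cached copy is still current.
if (isset($http_response_header[0]) && strpos($http_response_header[0], ' 304 ') !== false) {
    echo "Not modified - keep the cached copy\n";
}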
I'd recommend using Paul Dixon's strategy, but replacing fopen with fsockopen(). The reason is that some server configurations disallow URL access for fopen() and file_get_contents(). The setting may be found in php.ini and is called allow_url_fopen.
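A bare-bones sketch of the fsockopen() variant (no HTTPS, redirects, or chunked encoding handled; host, path, and output file are placeholders):
$host = 'www.example.com';
$path = '/verylarge.jpeg';

$fp = fsockopen($host, 80, $errno, $errstr, 30);
fwrite($fp, "GET $path HTTP/1.0\r\nHost: $host\r\nConnection: close\r\n\r\n");

// Skip the response headers (everything up to the first blank line).
while (($line = fgets($fp)) !== false && trim($line) !== '') {
    // headers could be inspected here (status code, Content-Length, ...)
}

// Stream the body to a local file in small chunks.
$out = fopen('/tmp/verylarge.jpeg', 'wb');
while (!feof($fp)) {
    fwrite($out, fread($fp, 32 * 1024));
}
fclose($out);
fclose($fp);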