Download the first 5kb of a file with PHP as plain text?

Let's say I'd like to download some information from a file on the internet within PHP, but I do not need the entire file. Therefore, loading the full file through
$my_file = file_get_contents("https://www.webpage.com/".$filename);
would use up more memory and resources than necessary.
Is there a way to download only e.g. the first 5kb of a file as plain text with PHP?
EDIT:
In the comments it was suggested to use e.g. the maxlen argument of file_get_contents or something similar. But I noticed that the execution time of the call does not vary appreciably for different values of maxlen, which suggests that the function loads the full file and then just returns a substring to the variable.
Is there a way to make PHP download just the required number of bytes and no more, to speed things up?

<?php
// Open a read stream to the remote file (requires allow_url_fopen)
$fp = fopen("https://www.webpage.com/".$filename, "r");
// Read at most 5 KB; on network streams fread() may return fewer bytes per call
$content = fread($fp, 5*1024);
fclose($fp);
?>
Note: Make sure allow_url_fopen is enabled.
PHP Doc: fopen, fread
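If you also want to make sure that only about 5 KB ever crosses the wire, one alternative is a cURL request with a byte range. This is only a sketch: it assumes the remote server honours Range requests (otherwise it simply sends the full body), and $filename is the same variable as above.
<?php
// Sketch: ask the server for only the first 5 KB via an HTTP Range request.
// Works only when the server supports ranges (it then answers 206 Partial Content);
// servers that ignore the header will send the whole file instead.
$ch = curl_init("https://www.webpage.com/".$filename);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_RANGE, "0-5119"); // bytes 0..5119 = first 5 KB
$content = curl_exec($ch);
curl_close($ch);
?>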

Related

PHP: GET # of characters from a URL and then stop/exit?

For parsing large files on the internet, or just getting the OpenGraph tags of a website, is there a way to GET a webpage's first 1000 characters and then stop downloading anything else from the page?
When a file is several megabytes, it can take the server a while to parse it. This is especially the case when operating on many of these files. Even more troublesome than bandwidth are the CPU/RAM constraints, as files that are too large are difficult to work with in PHP and the server can run out of memory.
Here are some PHP commands that can open a webpage:
fopen
file_get_contents
include
fread
url_get_contents
curl_init
curl_setopt
parse_url
Can any of these be set to download a specific number of characters and then exit?
Something like this?
<?php
if ($handle = fopen("http://www.example.com/", "rb")) {
    // Read and echo at most the first 8 KB, then close the connection
    echo fread($handle, 8192);
    fclose($handle);
}
Taken from the examples in the official php.net function docs...
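If the transfer must actually stop after a certain point even when the server ignores Range headers, cURL can abort it from a write callback once enough bytes have arrived. A sketch using the 1000-character figure from the question (the URL is a placeholder; the closure needs PHP 5.3+):
<?php
// Sketch: stop a cURL transfer after roughly the first 1000 bytes.
$limit = 1000;
$data  = '';
$ch = curl_init("http://www.example.com/");
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $chunk) use (&$data, $limit) {
    $data .= $chunk;
    // Returning a length other than strlen($chunk) makes cURL abort the
    // transfer (curl_exec() then returns false, but $data keeps what arrived).
    return strlen($data) < $limit ? strlen($chunk) : 0;
});
curl_exec($ch);
curl_close($ch);
echo substr($data, 0, $limit);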

Gzcompressed or plain string

I have a file plain.cache which is a little over 10MB, and I made a gzcompressed file gz.cache out of the original plain.cache file. Then I made two separate files which load each of the mentioned cache files, and I was somewhat surprised that the page load speed was almost the same for both. So, my question is: am I right to conclude that the gzcompressed file does not in any way benefit the load speed of the page? I would now conclude that the gzuncompress that I use in the gz.php file "makes" exactly the same string as when I read it from the plain file. Given all these statements, the general question is how I can (if it is done this way at all) increase the load speed by compressing the file with gzcompress.
The image of the files is below, and the code of files is as follows:
_makeCache.php, in which I make the gzcompressed version of the plain.cache file:
$str = file_get_contents("plain.cache");
$strCompressed = gzcompress($str, 9);
$file = "gz.cache";
$fp = fopen($file, "w");
fwrite($fp, $strCompressed);
fclose($fp);
plain.php:
echo file_get_contents("plain.cache");
gz.php:
echo gzuncompress(file_get_contents("gz.cache"));
Your http server is compressing the plain.cache automatically on the fly, using gzip as well, and the client decompresses it. So you should see almost no difference.
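A pre-compressed cache mainly saves CPU, and only if you send it as-is and tell the client it is compressed, so the web server does not have to gzip it again on every request. A sketch, with two caveats: the cache would have to be written with gzencode() (gzip format) rather than gzcompress() (zlib format), since that is what browsers expect behind Content-Encoding: gzip, and the server's own mod_deflate/gzip must not re-compress this response:
<?php
// Sketch: send the already-compressed cache straight to the client.
// Assumes gz.cache was written with gzencode() instead of gzcompress().
if (isset($_SERVER['HTTP_ACCEPT_ENCODING'])
    && strpos($_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip') !== false) {
    header('Content-Encoding: gzip');
    readfile('gz.cache');      // raw gzip bytes, no re-compression needed
} else {
    readfile('plain.cache');   // fallback for clients that do not accept gzip
}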

Issue to determine a currently downloading file size?

I have an interesting problem. I need to build a progress bar for a file that a PHP script is downloading asynchronously. I thought the best way to do it is, before the download starts, to have the script write a txt file which includes the file name and the original file size.
Now we have an AJAX function calling a PHP script that is intended to check the local file size. I have 2 main problems:
files are bigger than 2GB, so the filesize() function is out of business
I tried to find a different way to determine the local file size, like this:
function getSize($filename) {
    // Seek to the end of the file and use the offset as the size
    $a = fopen($filename, 'r');
    fseek($a, 0, SEEK_END);
    $filesize = ftell($a);
    fclose($a);
    return $filesize;
}
Unfortunately the second way gives me tons of errors, presumably because I cannot open a file which is currently downloading.
Is there any way I can check the size of a file which is currently downloading, when the file size will be bigger than 2 GB?
Any help is greatly appreciated.
I found the solution by using the exec() function:
exec("ls -s -k /path/to/your/file/".$file_name,$out);
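To turn that output into a number the script can work with, something along these lines should do. Note that ls -s -k reports the allocated size in 1 KB blocks, so the result is an approximation (good enough for a progress bar), and the path is a placeholder:
// Sketch: extract an approximate size in bytes from the `ls -s -k` output.
exec("ls -s -k ".escapeshellarg("/path/to/your/file/".$file_name), $out);
$fields    = preg_split('/\s+/', trim($out[0]));  // e.g. "2097152 filename"
$kilobytes = (int) $fields[0];                    // allocated size in 1 KB blocks
$bytes     = $kilobytes * 1024;                   // overflows to float past 2 GB on 32-bit PHP, which is fine here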
Just change your OS and PHP to support 64-bit computing, and you can still use filesize().
From filesize() manual:
Return Values
Returns the size of the file in bytes, or FALSE (and generates an error of level E_WARNING) in case of an error.
Note: Because PHP's integer type is signed and many platforms use 32bit integers, some filesystem functions may return unexpected results for files which are larger than 2GB.

CodeIgniter. Download Helper. Memory usage question

Question about this helper http://codeigniter.com/user_guide/helpers/download_helper.html
If, for example, program.exe weighs 4 GB, will it take a lot of PHP memory for reading and delivering that file?
$data = file_get_contents("/path/to/program.exe"); // Read the file's contents
$name = 'software.exe';
force_download($name, $data);
The force_download function just sets the proper HTTP headers to make the client's browser download the file. So it won't open the file, just pass its URL to the client.
Check the helper source code, if you need: https://bitbucket.org/ellislab/codeigniter-reactor/src/31b5c1dcf2ed/system/helpers/download_helper.php
Edit: I'd suggest creating your own version of the helper and, instead of using strlen to get the file size, using the PHP function filesize, which takes only the file name as argument and returns the size in bytes.
More info, at http://www.php.net/manual/en/function.filesize.php
Yea... that could get... bad...
file_get_contents reads the entire contents of a file into a string. For large files, that can get, well, bad. I would look into readfile. Please remember too that since CI automatically caches when you are loading a view, there will be no discernible benefit to readfile if it is used in a CI view. It would almost be better to handle this with an external script, or by outputting directly from the controller and not calling the view at all.
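For what it's worth, a rough sketch of streaming the file yourself with readfile and plain header() calls instead of the helper (the path, file name, and content type are placeholders, not CodeIgniter specifics):
// Sketch: stream a large file to the browser without loading it into memory.
$path = '/path/to/program.exe';        // placeholder path
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="software.exe"');
header('Content-Length: '.filesize($path));   // size from the filesystem, not strlen() on a string
readfile($path);                       // outputs the file in chunks, memory stays flat
exit;                                  // make sure no view gets rendered afterwards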

Best way to store an image from a url in php?

I would like to know the best way to save an image from a URL in php.
At the moment I am using
file_put_contents($pk, file_get_contents($PIC_URL));
which is not ideal. I am unable to use curl. Is there a method specifically for this?
Using file_get_contents is fine, unless the file is very large. In that case, you don't really need to be holding the entire thing in memory.
For a large retrieval, you could fopen the remote file, fread it, say, 32KB at a time, and fwrite it locally in a loop until all the file has been read.
For example:
$fout = fopen('/tmp/verylarge.jpeg', 'w');
$fin  = fopen("http://www.example.com/verylarge.jpeg", "rb");
while (!feof($fin)) {
    // Copy the remote file in 32 KB chunks so memory use stays low
    $buffer = fread($fin, 32*1024);
    fwrite($fout, $buffer);
}
fclose($fin);
fclose($fout);
(Devoid of error checking for simplicity!)
Alternatively, you could forego using the url wrappers and use a class like PEAR's HTTP_Request, or roll your own HTTP client code using fsockopen etc. This would enable you to do efficient things like send If-Modified-Since headers if you are maintaining a cache of remote files.
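For what it's worth, the If-Modified-Since part does not strictly require dropping the URL wrappers; a stream context can attach that header too. A sketch, assuming the local copy written by the loop above already exists:
// Sketch: conditional GET through the http stream wrapper.
$local  = '/tmp/verylarge.jpeg';
$remote = 'http://www.example.com/verylarge.jpeg';
$context = stream_context_create(array(
    'http' => array(
        'header'        => 'If-Modified-Since: '.gmdate('D, d M Y H:i:s', filemtime($local)).' GMT',
        'ignore_errors' => true,   // do not treat the 304 reply as a failure
    ),
));
$data = file_get_contents($remote, false, $context);
// $http_response_header is filled in by the wrapper after the call
if ($data !== false && strpos($http_response_header[0], '304') === false) {
    file_put_contents($local, $data);   // only rewrite the file when it changed
}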
I'd recommend using Paul Dixon's strategy, but replacing fopen with fsockopen(). The reason is that some server configurations disallow URL access for fopen() and file_get_contents(). The setting may be found in php.ini and is called allow_url_fopen.
