PHP: Stop Reading a Remote File Once It Is Fully Downloaded - php

I'm getting a 30-second timeout error because the code keeps checking whether the file is over 5 MB even when it's under. The code is designed to reject files over 5 MB, but I need it to also stop executing once a file under 5 MB has finished downloading. Is there a way to check whether the transferred chunk is empty? I'm currently using this example by DaveRandom:
PHP Stop Remote File Download if it Exceeds 5mb
Code by DaveRandom:
$url = 'http://www.spacetelescope.org/static/archives/images/large/heic0601a.jpg';
$file = '../temp/test.jpg';
$limit = 5 * 1024 * 1024; // 5MB

if (!$rfp = fopen($url, 'r')) {
    // error, could not open remote file
}
if (!$lfp = fopen($file, 'w')) {
    // error, could not open local file
}

// Check the content-length for exceeding the limit
foreach ($http_response_header as $header) {
    if (preg_match('/^\s*content-length\s*:\s*(\d+)\s*$/i', $header, $matches)) { // case-insensitive header match
        if ($matches[1] > $limit) {
            // error, file too large
        }
    }
}

$downloaded = 0;
while ($downloaded < $limit) {
    $chunk = fread($rfp, 8192);
    fwrite($lfp, $chunk);
    $downloaded += strlen($chunk);
}

if ($downloaded > $limit) {
    // error, file too large
    unlink($file); // delete local data
} else {
    // success
}

You should check if you have reached the end of the file:
while (!feof($rfp) && $downloaded < $limit) {
    $chunk = fread($rfp, 8192);
    fwrite($lfp, $chunk);
    $downloaded += strlen($chunk);
}
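Putting that together with the size check, a minimal sketch of the whole loop (same variables as above, and assuming both fopen() calls succeeded) could look like this:

$downloaded = 0;
$tooLarge   = false;

while (!feof($rfp)) {
    $chunk = fread($rfp, 8192);
    if ($chunk === false || $chunk === '') {
        break; // nothing left to read
    }
    fwrite($lfp, $chunk);
    $downloaded += strlen($chunk);

    if ($downloaded > $limit) {
        $tooLarge = true; // stop as soon as the limit is exceeded
        break;
    }
}

fclose($rfp);
fclose($lfp);

if ($tooLarge) {
    // error, file too large
    unlink($file); // delete local data
} else {
    // success, the whole file was under the limit
}

This stops as soon as either the remote file has been fully read or the limit is exceeded, so the script no longer spins until the 30-second timeout.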

Related

Serve huge file via php, not located in public_html

I want to serve huge files from a folder above the public_html.
Currently I do:
<?php
// Authenticate
if ($_GET['key'] !== "MY-API-KEY") {
    header('HTTP/1.0 403 Forbidden');
    echo "You are not authorized.";
    return;
}

define('CHUNK_SIZE', 1024*1024);
$PATH_ROOT_AUTOPILOT_ACTIVITY_STREAMS = "../../../data/csv/";

// Read a file and display its content chunk by chunk
function readfile_chunked($filename, $retbytes = TRUE) {
    $buffer = '';
    $cnt = 0;
    $handle = fopen($filename, 'rb');
    if ($handle === false) {
        return false;
    }
    while (!feof($handle)) {
        $buffer = fread($handle, CHUNK_SIZE);
        echo $buffer;
        ob_flush();
        flush();
        if ($retbytes) {
            $cnt += strlen($buffer);
        }
    }
    $status = fclose($handle);
    if ($retbytes && $status) {
        return $cnt; // return num. bytes delivered like readfile() does.
    }
    return $status;
}

// Get the file parameter
$file = basename(urldecode($_GET['file']));
$fileDir = $PATH_ROOT_AUTOPILOT_ACTIVITY_STREAMS;
$filePath = $fileDir . $file;

if (file_exists($filePath)) {
    // Get the file's mime type to send the correct content type header
    $finfo = finfo_open(FILEINFO_MIME_TYPE);
    $mime_type = finfo_file($finfo, $filePath);

    // Send the headers
    header("Content-Disposition: attachment; filename=$file.csv;");
    header("Content-Type: $mime_type");
    header('Content-Length: ' . filesize($filePath));

    // Stream the file
    readfile_chunked($filePath);
    exit;
}
?>
This currently fails for some reason I don't understand. curl outputs:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 457M 0 457M 0 0 1228k 0 --:--:-- 0:06:21 --:--:-- 1762k
curl: (92) HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)
Is there a better way to serve big files programmatically, via PHP?
Currently about 2/3 of the file is served. The response is not complete because it crashes. There are no logs.
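One thing I still want to rule out is whether PHP's own execution limit or lingering output buffers are cutting the stream off. A minimal sketch of what I would try before sending the headers (the set_time_limit() call and the buffer clearing are just a guess on my part, not a confirmed diagnosis):

// Before streaming: lift the execution time limit and drop any output buffers
// so a large file is not cut off or held in memory mid-transfer.
set_time_limit(0);
while (ob_get_level() > 0) {
    ob_end_clean();
}

header("Content-Disposition: attachment; filename=$file.csv;");
header("Content-Type: $mime_type");
header('Content-Length: ' . filesize($filePath));

readfile_chunked($filePath);
exit;

With buffering disabled, the ob_flush() call inside readfile_chunked() can be dropped; flush() alone is enough.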

Download 40M zip file using PHP

I have a 40M zip file that is at a web address of say http://info/data/bigfile.zip that I would like to download to my local server. What is the best way currently to download a zip file of that size using PHP or header requests such that it won't time out at 8M or give me a 500 error? Right now, I keep getting timed out.
In order to download a large file via PHP, try something like this (source: http://teddy.fr/2007/11/28/how-serve-big-files-through-php/):
<?php
define('CHUNK_SIZE', 1024*1024); // Size (in bytes) of tiles chunk

// Read a file and display its content chunk by chunk
function readfile_chunked($filename, $retbytes = TRUE) {
    $buffer = '';
    $cnt = 0;
    $handle = fopen($filename, 'rb');
    if ($handle === false) {
        return false;
    }
    while (!feof($handle)) {
        $buffer = fread($handle, CHUNK_SIZE);
        echo $buffer;
        ob_flush();
        flush();
        if ($retbytes) {
            $cnt += strlen($buffer);
        }
    }
    $status = fclose($handle);
    if ($retbytes && $status) {
        return $cnt; // return num. bytes delivered like readfile() does.
    }
    return $status;
}

// Here goes your code for checking that the user is logged in
// ...
// ...

$filename = 'path/to/your/file'; // url of your file
$mimetype = 'mime/type';

header('Content-Type: '.$mimetype);
readfile_chunked($filename);
?>
Second solution
Copy the file one small chunk at a time
/**
 * Copy remote file over HTTP one small chunk at a time.
 *
 * @param $infile  The full URL to the remote file
 * @param $outfile The path where to save the file
 */
function copyfile_chunked($infile, $outfile) {
    $chunksize = 10 * (1024 * 1024); // 10 Megs

    /**
     * parse_url breaks apart a URL into its parts, i.e. host, path,
     * query string, etc.
     */
    $parts = parse_url($infile);
    $i_handle = fsockopen($parts['host'], 80, $errno, $errstr, 5);
    $o_handle = fopen($outfile, 'wb');

    if ($i_handle == false || $o_handle == false) {
        return false;
    }

    if (!empty($parts['query'])) {
        $parts['path'] .= '?' . $parts['query'];
    }

    /**
     * Send the request to the server for the file
     */
    $request  = "GET {$parts['path']} HTTP/1.1\r\n";
    $request .= "Host: {$parts['host']}\r\n";
    $request .= "User-Agent: Mozilla/5.0\r\n";
    $request .= "Keep-Alive: 115\r\n";
    $request .= "Connection: keep-alive\r\n\r\n";
    fwrite($i_handle, $request);

    /**
     * Now read the headers from the remote server. We'll need
     * to get the content length.
     */
    $headers = array();
    while (!feof($i_handle)) {
        $line = fgets($i_handle);
        if ($line == "\r\n") break;
        $headers[] = $line;
    }

    /**
     * Look for the Content-Length header, and get the size
     * of the remote file.
     */
    $length = 0;
    foreach ($headers as $header) {
        if (stripos($header, 'Content-Length:') === 0) {
            $length = (int)str_replace('Content-Length: ', '', $header);
            break;
        }
    }

    /**
     * Start reading in the remote file, and writing it to the
     * local file one chunk at a time.
     */
    $cnt = 0;
    while (!feof($i_handle)) {
        $buf = fread($i_handle, $chunksize);
        $bytes = fwrite($o_handle, $buf);
        if ($bytes == false) {
            return false;
        }
        $cnt += $bytes;

        /**
         * We're done reading when we've reached the content length
         */
        if ($cnt >= $length) break;
    }

    fclose($i_handle);
    fclose($o_handle);

    return $cnt;
}
Adjust the $chunksize variable to your needs. This has only been mildly tested. It could easily break for a number of reasons.
Usage:
copyfile_chunked('http://somesite.com/somefile.jpg', '/local/path/somefile.jpg');
Not a lot of detail is given, but it sounds like php.ini defaults are restricting the server's ability to transfer large files via the PHP web interface in a timely manner.
Namely these settings: post_max_size, upload_max_filesize, or max_execution_time.
You might jump into your php.ini file, raise those limits, restart Apache, and retry the file transfer.
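For example, in php.ini something along these lines (the values are only illustrative; pick limits that suit your files):

; php.ini -- illustrative values only
post_max_size = 64M
upload_max_filesize = 64M
max_execution_time = 300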
HTH

PHP Script using CronJob

I have been working on a small personal project that involves a PHP script that uses FTP to connect to a game server and then reads log files to record kills and deaths into a database, which will then be shown on my website.
At the moment, my plan is to have the cronjob run each day at midnight (a new log file is created at midnight), and the script will then loop as long as the current day matches the date when the script was started. It opens the FTP file stream, reads lines, and records the line number when it reaches the end of the file. Then it sleeps for one minute and checks whether the file has become longer than its previously recorded length. If it has, it reads the new lines.
I have set_time_limit(0) as well.
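The crontab entry for that would look something like this (the PHP binary and script paths below are just placeholders):

# run the parser at midnight every day via the PHP CLI
0 0 * * * /usr/bin/php /path/to/logparser.php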
Here is a snippet of my code that I think is causing my problem
$lastpos = 1;
$length = 0;
$filename = "ftp........./logs_03_27_16.txt";

while ($daycount == (date("d") + 0)) {
    sleep(60);
    clearstatcache(false, $filename);
    $length = filesize($filename);

    if ($length < $lastpos) {
        $lastpos = $length;
    } elseif ($length > $lastpos) {
        $file = fopen($filename, "rb");
        while ($file === false) {
            sleep(60);
            $file = fopen($filename, "rb");
        }
        fseek($file, $lastpos);
        while (!feof($file)) {
            // Do operations on the file
        }
        $lastpos = ftell($file);
        fclose($file);
        flush();
    }
}
fclose($file);
Correction: now I get a 504 Gateway Time-out when attempting to run it in my browser.

Split a large zip file in small chunks using php script

I am using the script below for splitting a large zip file into small chunks.
$filename = "pro.zip";
$targetfolder = '/tmp';
// File size in Mb per piece/split.
// For a 200Mb file if piecesize=10 it will create twenty 10Mb files
$piecesize = 10; // splitted file size in MB
$buffer = 1024;
$piece = 1048576*$piecesize;
$current = 0;
$splitnum = 1;
if(!file_exists($targetfolder)) {
if(mkdir($targetfolder)) {
echo "Created target folder $targetfolder".br();
}
}
if(!$handle = fopen($filename, "rb")) {
die("Unable to open $filename for read! Make sure you edited filesplit.php correctly!".br());
}
$base_filename = basename($filename);
$piece_name = $targetfolder.'/'.$base_filename.'.'.str_pad($splitnum, 3, "0", STR_PAD_LEFT);
if(!$fw = fopen($piece_name,"w")) {
die("Unable to open $piece_name for write. Make sure target folder is writeable.".br());
}
echo "Splitting $base_filename into $piecesize Mb files ".br()."(last piece may be smaller in size)".br();
echo "Writing $piece_name...".br();
while (!feof($handle) and $splitnum < 999) {
if($current < $piece) {
if($content = fread($handle, $buffer)) {
if(fwrite($fw, $content)) {
$current += $buffer;
} else {
die("filesplit.php is unable to write to target folder");
}
}
} else {
fclose($fw);
$current = 0;
$splitnum++;
$piece_name = $targetfolder.'/'.$base_filename.'.'.str_pad($splitnum, 3, "0", STR_PAD_LEFT);
echo "Writing $piece_name...".br();
$fw = fopen($piece_name,"w");
}
}
fclose($fw);
fclose($handle);
echo "Done! ".br();
exit;
function br() {
return (!empty($_SERVER['SERVER_SOFTWARE']))?'<br>':"\n";
}
?>
But this script is not creating the small files in the target temp folder after the split. The script runs successfully without any error.
Please help me find out what the issue is here. Or, if you have any other working script for similar functionality, please provide it.
As indicated in the comments above, you can use split to split a file into smaller pieces, and can then use cat to join them back together.
split -b50m filename x
and to put them back
cat xaa xab xac > filename
If you are looking to split the zip file into a spanning-type archive, so that you do not need to rejoin the pieces yourself, take a look at zipsplit:
zipsplit -n (size) filename
so you can just call zipsplit from your exec script, and most standard unzip utilities should be able to put it back together. See man zipsplit for more options, including setting the output path, etc.
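If you want to drive that from PHP, a rough sketch using exec() could look like this (it assumes zipsplit is installed and on the PATH; the file name, target folder, and piece size are placeholders):

$filename     = 'pro.zip';
$targetfolder = '/tmp';
$piecesize    = 10 * 1024 * 1024; // maximum piece size in bytes

// zipsplit -n takes the maximum piece size in bytes, -b the output directory.
$cmd = sprintf(
    'zipsplit -n %d -b %s %s',
    $piecesize,
    escapeshellarg($targetfolder),
    escapeshellarg($filename)
);

exec($cmd . ' 2>&1', $output, $status);

if ($status !== 0) {
    die("zipsplit failed: " . implode("\n", $output));
}

echo "Done! Pieces written to $targetfolder\n";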

php how to get web image size in kb?

php how to get web image size in kb?
getimagesize only gets the width and height,
and filesize causes a warning:
$imgsize=filesize("http://static.adzerk.net/Advertisers/2564.jpg");
echo $imgsize;
Warning: filesize() [function.filesize]: stat failed for http://static.adzerk.net/Advertisers/2564.jpg
Is there any other way to get a web image size in kb?
Short of doing a complete HTTP request, there is no easy way:
$img = get_headers("http://static.adzerk.net/Advertisers/2564.jpg", 1);
print $img["Content-Length"];
You can likely utilize cURL however to send a lighter HEAD request instead.
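For example, a cURL HEAD request along these lines (minimal error handling; the URL is the one from the question):

$ch = curl_init("http://static.adzerk.net/Advertisers/2564.jpg");
curl_setopt($ch, CURLOPT_NOBODY, true);          // HEAD request, no body is transferred
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // don't echo the response
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects if any
curl_exec($ch);

$bytes = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
curl_close($ch);

if ($bytes > 0) {
    echo round($bytes / 1024, 2) . " KB";
}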
<?php
$file_size = filesize($_SERVER['DOCUMENT_ROOT']."/Advertisers/2564.jpg"); // Get file size in bytes
$file_size = $file_size / 1024; // Get file size in KB
echo $file_size; // Echo file size
?>
Not sure about using filesize() for remote files, but there are good snippets on php.net about using cURL.
http://www.php.net/manual/en/function.filesize.php#92462
That sounds like a permissions issue because filesize() should work just fine.
Here is an example:
php > echo filesize("./9832712.jpg");
1433719
Make sure the permissions are set correctly on the image and that the path is also correct. You will need to apply some math to convert from bytes to KB but after doing that you should be in good shape!
Here is a good link regarding filesize()
You cannot use filesize() to retrieve remote file information. It must first be downloaded, or its size determined by another method.
Using Curl here is a good method:
Tutorial
You can also use this function:
<?php
$filesize = file_get_size($dir.'/'.$ff);
$filesize = $filesize/1024; // to convert in KB
echo $filesize;

function file_get_size($file) {
    //open file
    $fh = fopen($file, "r");
    //declare some variables
    $size = "0";
    $char = "";
    //set file pointer to 0; I'm a little bit paranoid, you can remove this
    fseek($fh, 0, SEEK_SET);
    //set multiplicator to zero
    $count = 0;

    while (true) {
        //jump 1 MB forward in file
        fseek($fh, 1048576, SEEK_CUR);
        //check if we actually left the file
        if (($char = fgetc($fh)) !== false) {
            //if not, go on
            $count++;
        } else {
            //else jump back where we were before leaving and exit loop
            fseek($fh, -1048576, SEEK_CUR);
            break;
        }
    }

    //we could make $count jumps, so the file is at least $count * 1.000001 MB large
    //1048577 because we jump 1 MB and fgetc goes 1 B forward too
    $size = bcmul("1048577", $count);

    //now count the last few bytes; they're always less than 1048576 so it's quite fast
    $fine = 0;
    while (false !== ($char = fgetc($fh))) {
        $fine++;
    }

    //and add them
    $size = bcadd($size, $fine);

    fclose($fh);
    return $size;
}
?>
You can get the file size by using the get_headers() function. Use the code below:
$image = get_headers($url, 1);
$bytes = $image["Content-Length"];
$mb = $bytes/(1024 * 1024);
echo number_format($mb,2) . " MB";
