Increase Efficiency in Zip Reader in PHP

I have some PHP code that reads the first file of a remote zip file containing MP3s. I know that in the zip file format, each file's local header includes its compressed size. However, when I fread that many bytes after skipping past the header, it only returns about 5454 bytes. I have made a fix, but it is very slow.
$comp = "";
while(strlen($comp) <= $cpsz){
/*$cpsz is the compressed size*/
$comp .= fread($fh, 1);
}
header("Accept-Ranges:bytes");
header("Connection: Keep-Alive");
header("Content-Type:audio/mpeg");
header("Content-Length:" . $ucps);
header("Content-Range:bytes 0-" . ($ucps-1) . "/" . $ucps);
header("ETag: xyz");
http_response_code (206);
$file = gzinflate($comp);
echo $file;
Is there any way to make this more efficient than reading one byte at a time?
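A minimal sketch of one way to avoid the byte-at-a-time loop, assuming $fh and $cpsz are set up as in the question: ask fread for the remaining byte count on each pass and accept whatever comes back, since fread on a remote stream may return fewer bytes than requested.
$comp = "";
$remaining = $cpsz;
while ($remaining > 0) {
    // fread may return less than requested on a network stream,
    // so keep asking for whatever is still missing
    $chunk = fread($fh, $remaining);
    if ($chunk === false || $chunk === "") {
        break; // read error or premature end of stream
    }
    $comp .= $chunk;
    $remaining -= strlen($chunk);
}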

Related

My downloaded file is changing its name to 'download' although I changed it in the headers

I'm trying to offer two naming options for downloading a file: Dictionary and Random.
When the user selects the Dictionary option, the file always downloads with the name 'download' instead of a random word from the dictionary; the Random option works fine.
I tried calling the load function and printing what it returns, and it does print the randomly picked word from the dictionary, but that word doesn't appear in the download. I'm guessing it's something with the headers, but it might not be.
This is the load function, which picks a random word from a dictionary file on the server:
function load($file) {
    $file_arr = file($file);
    $num_lines = count($file_arr);
    $last_arr_index = $num_lines - 1;
    $rand_index = rand(0, $last_arr_index);
    $rand_text = $file_arr[$rand_index];
    return $rand_text;
}
This is the download function, which sends the file:
function download($file, $filename) {
    header("Content-Type: application/jpeg");
    header("Content-Length: " . filesize($file));
    header('Content-Disposition: attachment; filename="' . $filename . '"');
    readfile($file);
    exit();
}
That's how I call the functions:
$filename = load("includes/dictionary_words.txt");
download($file, $filename);
Lastly the $file variable:
$file = "main/app.exe";
Instead of downloading the file under a random dictionary word, it downloads it with the name 'download'. I don't get any errors or notices.
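One plausible culprit, offered only as a guess rather than a confirmed diagnosis: file() keeps the trailing newline on each line, so the word passed into the Content-Disposition header contains a line break, which can break the header and make the browser fall back to a default name. A minimal sketch of a load() variant that strips that whitespace, assuming this is indeed the cause:
function load($file) {
    // FILE_IGNORE_NEW_LINES drops the trailing newline from each entry,
    // FILE_SKIP_EMPTY_LINES avoids picking a blank line
    $words = file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    return trim($words[array_rand($words)]);
}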

Check file size before creation

I'm creating a system to generate sitemaps for an application I'm working on, and one of the requirements is that each sitemap shouldn't have a file size greater than 10 MB (10,485,760 bytes), as can be seen here.
This is my code to create the sitemap:
$fp = fopen($this->getSitemapPath() . $filename, 'w');
fwrite($fp, $siteMap->__toString());
fclose($fp);
The string returned by $siteMap->__toString() holds at most 50,000 links.
Is there a way to check the resulting file size before calling the function fwrite?
Sure, you can use mb_strlen with the '8bit' encoding to get the byte length of your string before you write it out to a file.
$contents = $siteMap->__toString();
if (mb_strlen($contents, '8bit') >= 10485760) {
    echo "Oops, this is too big";
} else {
    fwrite($fp, $contents);
}

Download issue in php

I have a PHP script for downloading MP4 video files. It works properly for small files (tested up to 60 MB). When I try a big one (300 MB), it displays an error related to the memory limit, so I edited the ini file to increase the memory limit to 400 MB (memory_limit = 400M). When I run the script again, the download box shows a content size of 189 bytes. I tried it with another (small) MP4 file and it works well. I don't understand the reason for this, so please point me in the right direction. My script is as follows:
$file = $_SERVER['DOCUMENT_ROOT'] . "/folder_name/filename.mp4";
$filesize = filesize($file);
$fileName = $file;
$offset = 0;
$length = $filesize;
if (isset($_SERVER['HTTP_RANGE'])) {
    $partialContent = true;
    preg_match('/bytes=(\d+)-?/', $_SERVER['HTTP_RANGE'], $matches);
    $offset = intval($matches[1]);
    $length = $filesize - $offset;
} else {
    $partialContent = false;
}
$file = fopen($file, 'r');
fseek($file, $offset);
$data = fread($file, $length);
fclose($file);
if ($partialContent) {
    header('HTTP/1.1 206 Partial Content');
    header('Content-Range: bytes ' . $offset . '-' . ($offset + $length) . '/' . $filesize);
}
header("Content-type: video/mp4");
header('Content-Length: ' . $filesize);
header('Content-Disposition: attachment; filename="' . $fileName . '"');
header('Accept-Ranges: bytes');
print($data);
Please check with your server; sometimes the server does not allow your script to allocate that much memory.
Please add the following lines at the top of the script:
ini_set('memory_limit', '-1');
ini_set('max_execution_time', '0');
Then let me know if that doesn't work.
It isn't good to change all the directives without a good reason, so the first step is to identify the specific problem you have.
When you run a script for downloading files, like these MP4 video files, and the download box displays a content size much smaller than the actual file, it's easy to get a clue about what went wrong:
just download that file and, even though it looks like a video, open it with a text editor.
Right click > Open with... and then select any text editor.
You will find that the actual content of the file is the information about the error that occurred on the server. With this you will have better information about what happened, so you can figure out, for example, whether it is related to memory_limit, to max_execution_time, or to some other cause.
Now you can decide to change one or more of the directives, or perhaps change your script to get better performance.
Note: if the file you downloaded is empty, or doesn't contain any error information, you probably have error reporting disabled. To enable it you can add these two lines at the beginning of the script:
ini_set("display_errors","1");
error_reporting(E_ALL);
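If the error does turn out to be memory_limit, one common alternative (not part of the original answer, just a sketch) is to stream the file in chunks instead of fread-ing the whole thing into a variable, so memory use stays constant:
$fp = fopen($path, 'rb');   // $path is the MP4 file on disk (assumed)
fseek($fp, $offset);        // honour a parsed Range request, if any
$remaining = $length;
while ($remaining > 0 && !feof($fp)) {
    // 8 KB at a time keeps memory usage flat regardless of file size
    echo fread($fp, min(8192, $remaining));
    $remaining -= 8192;
    flush();
}
fclose($fp);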

How to upload a file byte by byte in php

I have a form input type="file" element that accepts a file. I upload it and pass it on to the server-side PHP script.
How do I write the temporary file stored in $_FILES["file"]["tmp_name"] byte by byte?
The code below does not work; it only seems to write at the end of the request. If, for example, the connection is lost partway through, I would like to see that 40% was completed so that I can resume it.
Any pointers?
$target_path = "uploads/";
$target_path = $target_path . basename($name);
if (isset($_FILES['file']['tmp_name']) && is_uploaded_file($_FILES['file']['tmp_name'])) {
// Open temp file
$out = fopen($target_path, "wb");
if ($out) {
// Read binary input stream and append it to temp file
$in = fopen($_FILES['file']['tmp_name'], "rb");
if ($in) {
while ($buff = fread($in, 4096))
fwrite($out, $buff);
}
fclose($in);
fclose($out);
}
}
PHP does not hand over control to the file upload target script until AFTER the upload is complete (or has failed). You don't have to do anything to 'accept' the file - PHP and Apache will take care of writing it to the filename specified in the ['tmp_name'] parameter of the $_FILES array.
If you're trying to resume failed uploads, you'll need a much more complicated script.
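Not from the answer above, but as an illustration of what that more complicated approach usually looks like: the client splits the file into fixed-size chunks and uploads them one request at a time, and the server appends each chunk to a partial file, so a broken transfer can resume from the last chunk that arrived. A rough server-side sketch (the name, chunk and size parameters are hypothetical, agreed between client and server):
// receives one chunk per request
$target     = "uploads/" . basename($_POST['name']) . ".part";
$chunkIndex = (int) $_POST['chunk'];   // hypothetical parameter
$chunkSize  = 1024 * 1024;             // 1 MB chunks

// only append if this chunk continues the bytes already stored
$stored = file_exists($target) ? filesize($target) : 0;
if ($stored === $chunkIndex * $chunkSize) {
    file_put_contents($target, file_get_contents($_FILES['file']['tmp_name']), FILE_APPEND);
}

// tell the client how many bytes are stored so it knows where to resume
echo file_exists($target) ? filesize($target) : 0;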

Fastest way possible to read contents of a file

Ok, I'm looking for the fastest possible way to read all of the contents of a file via PHP, given a file path on the server. These files can also be huge, so it's very important that it does a read-only pass as quickly as possible.
Is reading it line by line faster than reading the entire contents at once? I remember reading that loading the entire contents can produce errors for huge files. Is this true?
If you want to load the full content of a file into a PHP variable, the easiest (and probably fastest) way is file_get_contents.
But if you are working with big files, loading the whole file into memory might not be such a good idea: you'll probably end up with a memory_limit error, as PHP will not allow your script to use more than (usually) a couple of megabytes of memory.
So, even if it's not the fastest solution, reading the file line by line (fopen + fgets + fclose), and working with those lines on the fly without loading the whole file into memory, might be necessary.
file_get_contents() is the most optimized way to read files in PHP; however, since you're reading the file into memory, you're always limited by the amount of memory available.
You can issue ini_set('memory_limit', -1) if you have the right permissions, but you'll still be limited by the amount of memory available on your system; this is common to all programming languages.
The only solution is to read the file in chunks. For that, you can use file_get_contents() with the fourth and fifth arguments ($offset and $maxlen, specified in bytes):
string file_get_contents(string $filename[, bool $use_include_path = false[, resource $context[, int $offset = -1[, int $maxlen = -1]]]])
Here is an example where I use this technique to serve large download files:
public function Download($path, $speed = null)
{
    if (is_file($path) === true)
    {
        set_time_limit(0);

        while (ob_get_level() > 0)
        {
            ob_end_clean();
        }

        $size = sprintf('%u', filesize($path));
        $speed = (is_int($speed) === true) ? $size : intval($speed) * 1024;

        header('Expires: 0');
        header('Pragma: public');
        header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
        header('Content-Type: application/octet-stream');
        header('Content-Length: ' . $size);
        header('Content-Disposition: attachment; filename="' . basename($path) . '"');
        header('Content-Transfer-Encoding: binary');

        for ($i = 0; $i <= $size; $i = $i + $speed)
        {
            ph()->HTTP->Flush(file_get_contents($path, false, null, $i, $speed));
            ph()->HTTP->Sleep(1);
        }

        exit();
    }

    return false;
}
Another option is to use the less optimized fopen(), feof(), fgets() and fclose() functions, especially if you care about getting whole lines at once. Here is another example I provided in another Stack Overflow question, for importing large SQL queries into the database:
function SplitSQL($file, $delimiter = ';')
{
    set_time_limit(0);

    if (is_file($file) === true)
    {
        $file = fopen($file, 'r');

        if (is_resource($file) === true)
        {
            $query = array();

            while (feof($file) === false)
            {
                $query[] = fgets($file);

                if (preg_match('~' . preg_quote($delimiter, '~') . '\s*$~iS', end($query)) === 1)
                {
                    $query = trim(implode('', $query));

                    if (mysql_query($query) === false)
                    {
                        echo '<h3>ERROR: ' . $query . '</h3>' . "\n";
                    }
                    else
                    {
                        echo '<h3>SUCCESS: ' . $query . '</h3>' . "\n";
                    }

                    while (ob_get_level() > 0)
                    {
                        ob_end_flush();
                    }

                    flush();
                }

                if (is_string($query) === true)
                {
                    $query = array();
                }
            }

            return fclose($file);
        }
    }

    return false;
}
Which technique you use will really depend on what you're trying to do (as you can see with the SQL import function and the download function), but you'll always have to read the data in chunks.
$file_handle = fopen("myfile", "r");
while (!feof($file_handle)) {
    $line = fgets($file_handle);
    echo $line;
}
fclose($file_handle);
Open the file and store the handle in $file_handle as a reference to the file itself.
Check whether you are already at the end of the file.
Keep reading the file until you are at the end, printing each line as you read it.
Close the file.
You could use file_get_contents
Example:
$homepage = file_get_contents('http://www.example.com/');
echo $homepage;
Use fpassthru or readfile.
Both use constant memory with increasing file size.
http://raditha.com/wiki/Readfile_vs_include
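A minimal sketch of what that looks like in practice (the path and headers are placeholders):
$path = '/var/data/huge_file.bin';   // placeholder path

header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));
header('Content-Disposition: attachment; filename="' . basename($path) . '"');

// readfile() streams the file straight to the output buffer,
// so memory usage stays constant no matter how large the file is
readfile($path);
exit;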
foreach (new SplFileObject($filepath) as $lineNumber => $lineContent) {
echo $lineNumber."==>".$lineContent;
//process your operations here
}
Reading the whole file in one go is faster.
But huge files may eat up all your memory and cause problems. In that case your safest bet is to read line by line.
If you're not worried about memory or file size:
$lines = file($path);
$lines is then an array of the file's lines.
You could try cURL (http://php.net/manual/en/book.curl.php).
Although you might want to check, it has its limits as well.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://example.com/");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec($ch); // Whole page as a string
curl_close($ch);
