Implementing resumable uploads in Yii - PHP

I have a task to implement resumable uploads in Yii. I have implemented a simple upload control before, but never a resumable one.
public function actionUpload()
{
    $model = new User;
    if (isset($_POST['User'])) {
        $model->attributes = $_POST['User'];
        $model->image = CUploadedFile::getInstance($model, 'image');
        if ($model->save()) {
            $model->image->saveAs('upload/' . $model->image->name);
            $this->redirect(array('view', 'id' => $model->uUserID));
        }
    }
    $this->render('upload', array('model' => $model));
}
The task is to split the file into small chunks. For example, a single file can be 1 GB, and I am trying to send that file through a REST service.

See the sample server implementation in PHP. I have copy-pasted the essential part of the code provided on that page:
/**
 * Check if all the parts exist, and
 * gather all the parts of the file together
 *
 * @param string $temp_dir  the temporary directory holding all the parts of the file
 * @param string $fileName  the original file name
 * @param string $chunkSize each chunk size (in bytes)
 * @param string $totalSize original file size (in bytes)
 */
function createFileFromChunks($temp_dir, $fileName, $chunkSize, $totalSize) {
    // count all the parts of this file
    $total_files = 0;
    foreach (scandir($temp_dir) as $file) {
        if (stripos($file, $fileName) !== false) {
            $total_files++;
        }
    }

    // check that all the parts are present:
    // the size of the last part is between $chunkSize and 2 * $chunkSize
    if ($total_files * $chunkSize >= ($totalSize - $chunkSize + 1)) {
        // create the final destination file
        if (($fp = fopen('temp/' . $fileName, 'w')) !== false) {
            for ($i = 1; $i <= $total_files; $i++) {
                fwrite($fp, file_get_contents($temp_dir . '/' . $fileName . '.part' . $i));
                _log('writing chunk ' . $i);
            }
            fclose($fp);
        } else {
            _log('cannot create the destination file');
            return false;
        }

        // rename the temporary directory (to avoid access from other
        // concurrent chunk uploads) and then delete it
        if (rename($temp_dir, $temp_dir . '_UNUSED')) {
            rrmdir($temp_dir . '_UNUSED');
        } else {
            rrmdir($temp_dir);
        }
    }
}
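To wire this into Yii, here is a minimal sketch of a controller action that saves each incoming chunk and then tries to assemble the file with createFileFromChunks() above. The resumable* POST parameter names and the 'file' field are Resumable.js defaults; the action name, the temp/ paths, and the absence of validation are assumptions for illustration only:
public function actionUploadChunk()
{
    // Resumable.js sends these with every chunk (its default parameter names)
    $identifier  = $_POST['resumableIdentifier'];
    $fileName    = $_POST['resumableFilename'];
    $chunkNumber = (int) $_POST['resumableChunkNumber'];
    $chunkSize   = (int) $_POST['resumableChunkSize'];
    $totalSize   = (int) $_POST['resumableTotalSize'];

    // one temporary directory per upload
    $tempDir = 'temp/' . $identifier;
    if (!is_dir($tempDir)) {
        mkdir($tempDir, 0755, true);
    }

    // store this chunk as fileName.partN
    $chunk = CUploadedFile::getInstanceByName('file');
    $chunk->saveAs($tempDir . '/' . $fileName . '.part' . $chunkNumber);

    // assembles the final file once all parts are present, otherwise does nothing
    createFileFromChunks($tempDir, $fileName, $chunkSize, $totalSize);
}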


Best way to only search for Excel files

So I've made this upload function on my PHP page. What it does is upload the file into a folder with the same name as the file. The user can choose between a lot of different projects from a dropdown. When the user picks a project with an existing file, another function scans all the information from the Excel file.
My question is:
How can I search for existing Excel files only, and not every single file in the directory?
Code:
/**
 * Checks if there is an existing uploaded RVTM file. File was uploaded through 'upload.php'.
 *
 * @access public
 * @param string $fileName       Name of the spreadsheet file
 * @param array  $OverviewResult Array of information about verdicts, TestSuiteCollectionIds, and TestJobIds
 * @author Mads Sander Hoegstrup
 */
function Create_Table_RVTM($fileName, $OverviewResult) {
    # Scan the directory for any files and return them (there should only be a single file);
    # when the file is deleted the directory is wiped clean: Delete_RVTM_file(...)
    $dir = "./DashboardFiles/files/" . $fileName;
    $FindFiles = scandir($dir);
    $files = array_values(array_diff($FindFiles, array('.', '..')));
    if ($files) {
        $path = "./DashboardFiles/files/" . $fileName . "/" . $files[0];
        echo "<br>The file $fileName exists";
        RVTM_Excel($fileName, $OverviewResult, $path);
    } else {
        echo "<br>The file $fileName does not exist";
    }
}
You can use PHP's built-in glob with a wildcard, something like:
<?php
foreach (glob("/path/to/folder/*.xl*") as $filename) {
    echo "$filename size " . filesize($filename) . "\n";
}
?>
You can do it by looking at the file extension, for example .xls.
Here is the full list:
https://support.microsoft.com/en-us/office/file-formats-that-are-supported-in-excel-0943ff2c-6014-4e8d-aaea-b83d51d46247
.xlsx
.xlsm
.xlsb
.xltx
.xltm
.xls
.xlt
.xlam
.xla
.xlw
.xlr
and here is how to get a file extension:
$ext = pathinfo($path, PATHINFO_EXTENSION);
and you can read the directory using SPL:
https://www.the-art-of-web.com/php/directory-list-spl/
use \DirectoryIterator;

function getFileList($dir)
{
    // array to hold return value
    $retval = [];

    // add trailing slash if missing
    if (substr($dir, -1) != "/") $dir .= "/";

    // open directory for reading
    $d = new DirectoryIterator($dir) or die("getFileList: Failed opening directory $dir for reading");
    foreach ($d as $fileinfo) {
        // skip the . and .. entries
        if ($fileinfo->isDot()) continue;
        $retval[] = [
            'name'    => "{$dir}{$fileinfo}",
            'type'    => ($fileinfo->getType() == "dir") ? "dir" : mime_content_type($fileinfo->getRealPath()),
            'size'    => $fileinfo->getSize(),
            'lastmod' => $fileinfo->getMTime()
        ];
    }
    return $retval;
}
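Combining the two ideas, here is a minimal sketch that keeps only files with one of the Excel extensions listed above (the helper name findExcelFiles is made up for illustration):
function findExcelFiles($dir) {
    // extensions from the Microsoft list above, lower-cased and without dots
    $excelExtensions = ['xlsx', 'xlsm', 'xlsb', 'xltx', 'xltm',
                        'xls', 'xlt', 'xlam', 'xla', 'xlw', 'xlr'];
    $matches = [];
    foreach (new DirectoryIterator($dir) as $fileinfo) {
        // skip dot entries and subdirectories
        if ($fileinfo->isDot() || !$fileinfo->isFile()) continue;
        if (in_array(strtolower($fileinfo->getExtension()), $excelExtensions, true)) {
            $matches[] = $fileinfo->getPathname();
        }
    }
    return $matches;
}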

How to identify a zip error

When I try to extract a downloaded zip file, it doesn't work. How can I identify the error?
The response is a failure.
Thank you.
File location
//home/www/boutique/includes/ClicShopping/Work/IceCat/daily.index.xml.gz
public function ExtractZip() {
    if (is_file($this->selectFile())) {
        $zip = new \ZipArchive;
        if ($zip->open($this->selectFile()) === true) {
            $zip->extractTo($this->IceCatDirectory);
            $zip->close();
            echo 'file downloaded and unzipped';
        }
    } else {
        echo 'error no file found in ' . $this->selectFile();
    }
}
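To actually identify the error: ZipArchive::open() returns an integer error code (not false) on failure, and the code above silently ignores it. A minimal sketch that surfaces the code, reusing $this->selectFile() from the question:
$zip = new \ZipArchive;
$result = $zip->open($this->selectFile());
if ($result === true) {
    $zip->extractTo($this->IceCatDirectory);
    $zip->close();
} else {
    // $result is one of the ZipArchive::ER_* constants;
    // ER_NOZIP (19) means "Not a zip archive", which applies here
    // because daily.index.xml.gz is gzip-compressed, not a zip file
    echo 'ZipArchive::open() failed with error code ' . $result;
}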
Following the comments, here is the correct function. The file is actually gzip-compressed (.gz), not a zip archive, so it must be opened with the zlib functions:
public function ExtractGzip() {
    // Raising this value may increase performance
    $buffer_size = 4096; // read 4 KB at a time
    $out_file_name = str_replace('.gz', '', $this->selectFile());

    // Open our files (in binary mode)
    $file = gzopen($this->selectFile(), 'rb');
    $out_file = fopen($out_file_name, 'wb');

    // Keep repeating until the end of the input file
    while (!gzeof($file)) {
        // Read buffer-size bytes
        // Both fwrite and gzread are binary-safe
        fwrite($out_file, gzread($file, $buffer_size));
    }

    // Files are done, close files
    fclose($out_file);
    gzclose($file);
}
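If the archive is small, a shorter sketch does the same in one call; note that unlike the chunked loop above, this buffers the whole decompressed file in memory:
$in = $this->selectFile();
$out = str_replace('.gz', '', $in);
file_put_contents($out, gzdecode(file_get_contents($in)));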

Download 40M zip file using PHP

I have a 40 MB zip file at a web address, say http://info/data/bigfile.zip, that I would like to download to my local server. What is the best way to download a zip file of that size using PHP or header requests so that it won't time out at 8 MB or give me a 500 error? Right now, it keeps timing out.
In order to download a large file via PHP, try something like this (source: http://teddy.fr/2007/11/28/how-serve-big-files-through-php/):
<?php
define('CHUNK_SIZE', 1024*1024); // Size (in bytes) of each chunk

// Read a file and display its content chunk by chunk
function readfile_chunked($filename, $retbytes = TRUE) {
    $buffer = '';
    $cnt = 0;
    $handle = fopen($filename, 'rb');
    if ($handle === false) {
        return false;
    }
    while (!feof($handle)) {
        $buffer = fread($handle, CHUNK_SIZE);
        echo $buffer;
        ob_flush();
        flush();
        if ($retbytes) {
            $cnt += strlen($buffer);
        }
    }
    $status = fclose($handle);
    if ($retbytes && $status) {
        return $cnt; // return num. bytes delivered like readfile() does.
    }
    return $status;
}

// Here goes your code for checking that the user is logged in
// ...
// ...
$filename = 'path/to/your/file'; // url of your file
$mimetype = 'mime/type';
header('Content-Type: ' . $mimetype);
readfile_chunked($filename);
?>
Second solution
Copy the file one small chunk at a time.
/**
 * Copy remote file over HTTP one small chunk at a time.
 *
 * @param string $infile  The full URL to the remote file
 * @param string $outfile The path where to save the file
 */
function copyfile_chunked($infile, $outfile) {
    $chunksize = 10 * (1024 * 1024); // 10 Megs

    /**
     * parse_url breaks apart a URL into its parts, i.e. host, path,
     * query string, etc.
     */
    $parts = parse_url($infile);
    // note: fsockopen takes the error *number* before the error *string*
    $i_handle = fsockopen($parts['host'], 80, $errno, $errstr, 5);
    $o_handle = fopen($outfile, 'wb');
    if ($i_handle === false || $o_handle === false) {
        return false;
    }
    if (!empty($parts['query'])) {
        $parts['path'] .= '?' . $parts['query'];
    }

    /**
     * Send the request to the server for the file
     */
    $request  = "GET {$parts['path']} HTTP/1.1\r\n";
    $request .= "Host: {$parts['host']}\r\n";
    $request .= "User-Agent: Mozilla/5.0\r\n";
    $request .= "Keep-Alive: 115\r\n";
    $request .= "Connection: keep-alive\r\n\r\n";
    fwrite($i_handle, $request);

    /**
     * Now read the headers from the remote server. We'll need
     * to get the content length.
     */
    $headers = array();
    while (!feof($i_handle)) {
        $line = fgets($i_handle);
        if ($line == "\r\n") break;
        $headers[] = $line;
    }

    /**
     * Look for the Content-Length header, and get the size
     * of the remote file.
     */
    $length = 0;
    foreach ($headers as $header) {
        if (stripos($header, 'Content-Length:') === 0) {
            $length = (int)str_replace('Content-Length: ', '', $header);
            break;
        }
    }

    /**
     * Start reading in the remote file, and writing it to the
     * local file one chunk at a time.
     */
    $cnt = 0;
    while (!feof($i_handle)) {
        $buf = fread($i_handle, $chunksize);
        $bytes = fwrite($o_handle, $buf);
        if ($bytes === false) {
            return false;
        }
        $cnt += $bytes;

        /**
         * We're done reading when we've reached the content length
         */
        if ($cnt >= $length) break;
    }

    fclose($i_handle);
    fclose($o_handle);
    return $cnt;
}
Adjust the $chunksize variable to your needs. This has only been mildly tested. It could easily break for a number of reasons.
Usage:
copyfile_chunked('http://somesite.com/somefile.jpg', '/local/path/somefile.jpg');
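As an alternative sketch, assuming allow_url_fopen is enabled, PHP's HTTP stream wrapper together with stream_copy_to_stream() avoids the hand-rolled socket code; the copy is streamed, so memory use stays bounded (the function name copyfile_stream is hypothetical):
function copyfile_stream($url, $outfile) {
    $in = fopen($url, 'rb');   // uses the http:// stream wrapper
    $out = fopen($outfile, 'wb');
    if ($in === false || $out === false) {
        return false;
    }
    $bytes = stream_copy_to_stream($in, $out); // copies in internal chunks
    fclose($in);
    fclose($out);
    return $bytes;
}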
Not a lot of detail is given, but it sounds like the php.ini defaults are restricting the server's ability to transfer large files through the PHP web interface in a timely manner, namely the post_max_size, upload_max_filesize, and max_execution_time settings.
You might jump into your php.ini file, increase those values, restart Apache, and retry the file transfer.
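For example, the relevant php.ini directives might look like this (the values are illustrative assumptions, not recommendations):
post_max_size = 64M
upload_max_filesize = 64M
max_execution_time = 300 ; seconds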
HTH

Replace CSV row in PHP, memory error

I have a CSV file that is 2 MB in size and pipe-delimited. I would like to take the first row, replace its data, and then resave the file. Here is what I did:
// Creating a new first row with the modified data.
$file = fopen($path, "r"); // $path is where the file is located: outputs/my_file.csv
$size = filesize($path);
$firstLine = fgetcsv(fopen($path, "r")); // $firstLine has all the data of the first row as an array
fclose($file);
$firstLine = explode("|", $firstLine[0]); // To get each column of the row
$newHeader = array();
for ($i = 0; $i < sizeof($firstLine); $i++) {
    if ($i == 4) {
        array_push($newHeader, "modified column in row 1 "); // Only column 4 in row 1 is modified
    } else {
        array_push($newHeader, $firstLine[$i]);
    }
}
$Header = implode("|", $newHeader);

// Creating the new csv file
$row = 0;
while (($data = fgetcsv(fopen($path, "r"), "|")) !== false) {
    if ($row == 0) {
        $data[0] = $Header;
    }
    $newCsvData[] = $data;
}
return $newCsvData; // I wanted to display the new content of the csv before saving it
This code should print the new content of the CSV file before I store it, but I get an error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 332 bytes). How can I do this in a fast way? The file has about 19,122 rows.
Thanks
If it's only 2 MB, maybe read the entire file into memory and then write out a new file (overwriting the previous one). Here are some helper functions to help you read and write the file, and I'm certain you're proficient in editing the resulting array:
/**
 * Reads a file into an array
 *
 * @param string $FILE the file to open
 *
 * @return array The lines of the file as an array
 */
public static function readFile($FILE) {
    $lines = array(); // the array to store each line of the file in
    $handle = fopen($FILE, "r");
    if ($handle) {
        // $FILE successfully opened for reading,
        while (($line = fgets($handle)) !== false) {
            $lines[] = $line; // add each line of the file to $lines
        }
    } else {
        throw new Exception("error opening the file...");
    }
    fclose($handle); // close the file
    return $lines; // return the lines of the file as an array
}
/**
 * Writes the $lines of a file into $FILE
 *
 * @param string $FILE  The file to write
 * @param array  $lines An array containing the lines of the file
 *
 * @return int|null The number of bytes written, or null on failure. See: php.net/fwrite#refsect1-function.fwrite-returnvalues
 */
public static function writeFile($FILE, $lines) {
    // Add a newline at the end of each line of the array;
    // output is now a single string which we will write in one pass
    // (instead of line-by-line)
    $output = implode("\n", $lines);
    $handle = fopen($FILE, "w+");
    if ($handle) {
        // $FILE successfully opened for writing, write to the file
        $result = fwrite($handle, $output);
    } else {
        throw new Exception("error opening the file...");
    }
    fclose($handle); // close the file
    return $result; // The number of bytes written to the file, or NULL on failure
}
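As a hypothetical usage sketch for the original task (assuming the two helpers live in a class named CsvHelper, which is not shown here):
$lines = CsvHelper::readFile($path);
// fgets() keeps the trailing newlines, so strip them; writeFile() re-adds them via implode("\n", ...)
$lines = array_map(function ($l) { return rtrim($l, "\r\n"); }, $lines);
$header = explode("|", $lines[0]);
$header[4] = "modified column in row 1"; // replace column 5 (index 4), as in the question
$lines[0] = implode("|", $header);
CsvHelper::writeFile($path, $lines);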
<?php
$source = fopen('filename','r');
$destination = fopen('newfilename','w');
//write new header to new file
fwrite($destination,"your|replacement|header\n");
//set the pointer in the old file to the second row
fgets($source);
//copy the entire stream
stream_copy_to_stream($source,$destination);
//rename the newfilename to the old filename
rename('newfilename','filename');
//check what memory we used:
var_dump(memory_get_peak_usage());
That resulted in 142,260 bytes of memory used at its peak for a 2 MB file. BTW: the memory usage for a 2 GB file is exactly the same when I test it here.

Can I detect animated gifs using php and gd?

I'm currently running into some issues resizing images using GD.
Everything works fine until I want to resize an animated GIF, which delivers only the first frame on a black background.
I've tried using getimagesize, but that only gives me dimensions and nothing to distinguish an ordinary GIF from an animated one.
Actual resizing is not required for animated GIFs; just being able to skip them would be enough for our purposes.
Any clues?
PS. I don't have access to imagemagick.
Kind regards,
Kris
While searching for a solution to the same problem, I noticed that the php.net site has a follow-up to the code Davide and Kris are referring to which is, according to the author, less memory-intensive and possibly less disk-intensive.
I'll replicate it here because it may be of interest.
source: http://www.php.net/manual/en/function.imagecreatefromgif.php#88005
function is_ani($filename) {
    if (!($fh = @fopen($filename, 'rb')))
        return false;
    $count = 0;
    // an animated gif contains multiple "frames", with each frame having a
    // header made up of:
    //  * a static 4-byte sequence (\x00\x21\xF9\x04)
    //  * 4 variable bytes
    //  * a static 2-byte sequence (\x00\x2C)
    // We read through the file til we reach the end of the file, or we've found
    // at least 2 frame headers
    while (!feof($fh) && $count < 2) {
        $chunk = fread($fh, 1024 * 100); // read 100 KB at a time
        $count += preg_match_all('#\x00\x21\xF9\x04.{4}\x00[\x2C\x21]#s', $chunk, $matches);
    }
    fclose($fh);
    return $count > 1;
}
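Usage is then a simple boolean check, for example to skip animated GIFs before resizing (the path is a placeholder):
if (is_ani('/path/to/image.gif')) {
    // animated: skip it, GD would only keep the first frame
} else {
    // static image: safe to resize with GD
}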
There is a brief snippet of code on the PHP manual page for the imagecreatefromgif() function that should be what you need:
imagecreatefromgif comment #59787 by ZeBadger
Here's the working function:
/**
 * Thanks to ZeBadger for original example, and Davide Gualano for pointing me to it
 * Original at http://it.php.net/manual/en/function.imagecreatefromgif.php#59787
 */
function is_animated_gif($filename)
{
    $raw = file_get_contents($filename);

    $offset = 0;
    $frames = 0;
    while ($frames < 2) {
        $where1 = strpos($raw, "\x00\x21\xF9\x04", $offset);
        if ($where1 === false) {
            break;
        } else {
            $offset = $where1 + 1;
            $where2 = strpos($raw, "\x00\x2C", $offset);
            if ($where2 === false) {
                break;
            } else {
                if ($where1 + 8 == $where2) {
                    $frames++;
                }
                $offset = $where2 + 1;
            }
        }
    }

    return $frames > 1;
}
This is an improvement on the current top-voted answer, but I don't have enough reputation to comment yet. The problem with that answer is that it reads the file in 100 KB chunks, so the end-of-frame marker might be split between two chunks. A fix for that is to prepend the last 20 bytes of the previous chunk to the next one:
<?php
function is_ani($filename) {
    if (!($fh = @fopen($filename, 'rb')))
        return false;
    $count = 0;
    // an animated gif contains multiple "frames", with each frame having a
    // header made up of:
    //  * a static 4-byte sequence (\x00\x21\xF9\x04)
    //  * 4 variable bytes
    //  * a static 2-byte sequence (\x00\x2C) (some variants may use \x00\x21 ?)
    // We read through the file til we reach the end of the file, or we've found
    // at least 2 frame headers
    $chunk = false;
    while (!feof($fh) && $count < 2) {
        // prepend the last 20 bytes of the previous chunk, to make sure the searched pattern is not split
        $chunk = ($chunk ? substr($chunk, -20) : "") . fread($fh, 1024 * 100); // read 100 KB at a time
        $count += preg_match_all('#\x00\x21\xF9\x04.{4}\x00(\x2C|\x21)#s', $chunk, $matches);
    }
    fclose($fh);
    return $count > 1;
}
Reading the whole file with file_get_contents may take too much memory if the given file is large. I've refactored the function given previously so that it reads just enough bytes to check the frames and returns as soon as it finds at least 2 frames.
<?php
/**
 * Detects an animated GIF from the given file pointer resource or filename.
 *
 * @param resource|string $file File pointer resource or filename
 * @return bool
 */
function is_animated_gif($file)
{
    $fp = null;

    if (is_string($file)) {
        $fp = fopen($file, "rb");
    } else {
        $fp = $file;

        /* Make sure that we are at the beginning of the file */
        fseek($fp, 0);
    }

    if (fread($fp, 3) !== "GIF") {
        fclose($fp);
        return false;
    }

    $frames = 0;
    while (!feof($fp) && $frames < 2) {
        if (fread($fp, 1) === "\x00") {
            /* Some animated GIFs do not contain a graphic control extension (starts with 21 f9),
               so count an image descriptor (2c) as well; read the byte after \x00 once, then
               look one byte further for the \xf9 of a graphic control extension */
            $next = fread($fp, 1);
            if ($next === "\x2c" || ($next === "\x21" && fread($fp, 1) === "\xf9")) {
                $frames++;
            }
        }
    }

    fclose($fp);

    return $frames > 1;
}
An animated GIF typically contains the following string:
"\x21\xFF\x0B\x4E\x45\x54\x53\x43\x41\x50\x45\x32\x2E\x30"
(the trailing bytes decode to "NETSCAPE2.0"). I've tested on a few animated GIFs and it seems the string is at position 781 of the file (found with file_get_contents and strpos).
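Based on that observation, here is a detection sketch (the function name has_netscape_loop is made up; note that the NETSCAPE2.0 extension controls looping and is not strictly required for animation, so this can miss some animated GIFs):
function has_netscape_loop($filename) {
    // \x21\xFF\x0B introduces an application extension of length 11,
    // followed by the application identifier "NETSCAPE2.0"
    $needle = "\x21\xFF\x0B" . "NETSCAPE2.0";
    return strpos(file_get_contents($filename), $needle) !== false;
}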
