Split a large zip file into small chunks using a PHP script

I am using the script below to split a large zip file into small chunks.
$filename = "pro.zip";
$targetfolder = '/tmp';
// File size in Mb per piece/split.
// For a 200Mb file if piecesize=10 it will create twenty 10Mb files
$piecesize = 10; // splitted file size in MB
$buffer = 1024;
$piece = 1048576*$piecesize;
$current = 0;
$splitnum = 1;
if(!file_exists($targetfolder)) {
if(mkdir($targetfolder)) {
echo "Created target folder $targetfolder".br();
}
}
if(!$handle = fopen($filename, "rb")) {
die("Unable to open $filename for read! Make sure you edited filesplit.php correctly!".br());
}
$base_filename = basename($filename);
$piece_name = $targetfolder.'/'.$base_filename.'.'.str_pad($splitnum, 3, "0", STR_PAD_LEFT);
if(!$fw = fopen($piece_name,"w")) {
die("Unable to open $piece_name for write. Make sure target folder is writeable.".br());
}
echo "Splitting $base_filename into $piecesize Mb files ".br()."(last piece may be smaller in size)".br();
echo "Writing $piece_name...".br();
while (!feof($handle) and $splitnum < 999) {
if($current < $piece) {
if($content = fread($handle, $buffer)) {
if(fwrite($fw, $content)) {
$current += $buffer;
} else {
die("filesplit.php is unable to write to target folder");
}
}
} else {
fclose($fw);
$current = 0;
$splitnum++;
$piece_name = $targetfolder.'/'.$base_filename.'.'.str_pad($splitnum, 3, "0", STR_PAD_LEFT);
echo "Writing $piece_name...".br();
$fw = fopen($piece_name,"w");
}
}
fclose($fw);
fclose($handle);
echo "Done! ".br();
exit;
function br() {
return (!empty($_SERVER['SERVER_SOFTWARE']))?'<br>':"\n";
}
?>
But this script is not creating the small files in the target temp folder after splitting. The script runs successfully without any error.
Please help me find out what the issue is here, or if you have any other working script with similar functionality, please share it.

As indicated in the comments above, you can use split to split a file into smaller pieces, and can then use cat to join them back together.
split -b50m filename x
and to put them back
cat xaa xab xac > filename
If you are looking to split the zipfile into a spanning-type archive, so that you do not need to rejoin the pieces yourself, take a look at zipsplit
zipsplit -n (size) filename
so you can just call zipsplit from your exec script, and then most standard unzip utilities should be able to put it back together. See man zipsplit for more options, including setting the output path, etc.
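For example, a rough sketch of calling zipsplit from PHP via exec might look like this (untested; it assumes zipsplit is installed and on the PATH, and the file name, folder, and piece size are placeholders):
<?php
// Split pro.zip into pieces of at most ~10 MB each using zipsplit.
// -n is the maximum piece size in bytes, -b sets the output folder.
$source = 'pro.zip';       // placeholder input archive
$targetfolder = '/tmp';    // placeholder output folder
$cmd = sprintf(
    'zipsplit -n %d -b %s %s 2>&1',
    10 * 1024 * 1024,
    escapeshellarg($targetfolder),
    escapeshellarg($source)
);
exec($cmd, $output, $status);
if ($status !== 0) {
    die("zipsplit failed:\n" . implode("\n", $output));
}
echo "Done:\n" . implode("\n", $output);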

Related

PHP to Increase counter by 1 and save in a txt file

I am trying to get my PHP script to read the current count from the "counter.txt" file, add 1, and save it back to the counter text file.
$filename = 'counter.txt';
// create the file if it doesn't exist
if (!file_exists($filename)) {
    $counter_file = fopen($filename, "w");
    fwrite($counter_file, "0");
    $counter = 0;
} else {
    $counter_file = fopen($filename, "r");
    // will read the first line
    $counter = fgets($counter_file);
}
// increase $counter
$counter++;
// echo counter
echo $counter;
// save the increased counter
fwrite($counter_file, "0");
// close the file
fclose($counter_file);
The script reads and echoes the number fine, but it doesn't save the file with the increased number.
Please help.
This may not be helpful if this is just a learning exercise to familiarize yourself with the various file functions, but this could be a lot simpler using file_get_contents and file_put_contents.
$file = 'counter.txt';
// default the counter value to 1
$counter = 1;
// add the previous counter value if the file exists
if (file_exists($file)) {
$counter += file_get_contents($file);
}
// write the new counter value to the file
file_put_contents($file, $counter);
Of course, whether this will work (either this way or the way you're trying to do it) depends entirely on whether the file is writable, so you'll also need to be sure its permissions are set properly.
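For instance, a quick writability check (just a sketch, reusing the counter.txt filename from above) might look like this:
<?php
// Check that we can actually write before attempting the update.
$file = 'counter.txt';
// If the file exists it must be writable; otherwise its directory must be.
$target = file_exists($file) ? $file : dirname($file);
if (!is_writable($target)) {
    die("Cannot write to $target - check its permissions");
}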
Works for me:
$counter = 1;
$file_name = 'test.' . $counter . '.ext';
if (file_exists($file_name)) {
    $counter++;
    fopen('test' . $counter . '.ext', "w");
} else {
    fopen('test' . $counter . '.ext', "w");
}
<?php
$filename = 'counter.txt';
// create the file if it doesn't exist
if (!file_exists($filename)) {
    $counter_file = fopen($filename, "w");
    fwrite($counter_file, "1");
    fclose($counter_file);
} else {
    // "r+" opens for reading and writing without truncating the file
    $counter_file = fopen($filename, "r+");
    $fsize = filesize($filename);
    // will read the whole file
    $counter = (int) fread($counter_file, $fsize);
    $counter++;
    // rewind so the new value overwrites the old one
    rewind($counter_file);
    fwrite($counter_file, $counter);
    fclose($counter_file);
}

How to identify a zip error

When I try to extract a downloaded zip file, it doesn't work. How do I identify the error?
The response is simply a failure.
Thank you.
File location
//home/www/boutique/includes/ClicShopping/Work/IceCat/daily.index.xml.gz
public function ExtractZip() {
    if (is_file($this->selectFile())) {
        $zip = new \ZipArchive;
        if ($zip->open($this->selectFile()) === true) {
            $zip->extractTo($this->IceCatDirectory);
            $zip->close();
            echo 'file downloaded an unzipped';
        }
    } else {
        echo 'error no file found in ' . $this->selectFile();
    }
}
Following the comments, here is the correct function:
public function ExtractGzip() {
    // Raising this value may increase performance
    $buffer_size = 4096; // read 4kb at a time
    $out_file_name = str_replace('.gz', '', $this->selectFile());
    // Open our files (in binary mode)
    $file = gzopen($this->selectFile(), 'rb');
    $out_file = fopen($out_file_name, 'wb');
    // Keep repeating until the end of the input file
    while (!gzeof($file)) {
        // Read buffer-size bytes
        // Both fwrite and gzread are binary-safe
        fwrite($out_file, gzread($file, $buffer_size));
    }
    // Files are done, close files
    fclose($out_file);
    gzclose($file);
}
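If the downloaded file really is a .zip rather than a .gz, one way to identify what went wrong is to check the value returned by ZipArchive::open(), which is an error code rather than true on failure. A minimal sketch (paths are placeholders):
<?php
$zip = new \ZipArchive;
$result = $zip->open('/path/to/archive.zip');   // placeholder path
if ($result === true) {
    $zip->extractTo('/path/to/extract/dir');    // placeholder path
    $zip->close();
    echo 'file downloaded and unzipped';
} else {
    // Map a few common ZipArchive error codes to readable messages
    $errors = array(
        \ZipArchive::ER_NOZIP  => 'not a zip archive',
        \ZipArchive::ER_NOENT  => 'no such file',
        \ZipArchive::ER_OPEN   => 'cannot open file',
        \ZipArchive::ER_READ   => 'read error',
        \ZipArchive::ER_INCONS => 'archive is inconsistent',
    );
    echo 'extraction failed: ' . (isset($errors[$result]) ? $errors[$result] : "error code $result");
}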

getimagesize() limiting file size for remote URL

I could use getimagesize() to validate an image, but the problem is that if a mischievous user puts in a link to a 10GB random file, it would whack my production server's bandwidth. How do I limit the file size getimagesize() downloads? (e.g. 5MB max image size)
PS: I did research before asking.
You can download the file separately, imposing a maximum size you wish to download:
function mygetimagesize($url, $max_size = -1)
{
    // create temporary file to store data from $url
    if (false === ($tmpfname = tempnam(sys_get_temp_dir(), uniqid('mgis')))) {
        return false;
    }
    // open input and output
    if (false === ($in = fopen($url, 'rb')) || false === ($out = fopen($tmpfname, 'wb'))) {
        unlink($tmpfname);
        return false;
    }
    // copy at most $max_size bytes
    stream_copy_to_stream($in, $out, $max_size);
    // close input and output file
    fclose($in); fclose($out);
    // retrieve image information
    $info = getimagesize($tmpfname);
    // get rid of temporary file
    unlink($tmpfname);
    return $info;
}
You don't want to do something like getimagesize('http://example.com') to begin with, since this will download the image once, check the size, then discard the downloaded image data. That's a real waste of bandwidth.
So, separate the download process from the checking of the image size. For example, use fopen to open the image URL, read little by little and write it to a temporary file, keeping count of how much you have read. Once you cross 5MB and are still not finished reading, you stop and reject the image.
You could try to read the HTTP Content-Length header before starting the actual download to weed out obviously large files, but you cannot rely on it, since it can be spoofed or omitted.
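For example, a rough sketch of such a pre-check with get_headers() (the URL is a placeholder, and the header is only a first filter since it can be spoofed or missing):
<?php
$url = 'http://example.com/image.jpg';  // placeholder URL
$max = 5 * 1024 * 1024;                 // 5 MB limit
$headers = get_headers($url, 1);
if (isset($headers['Content-Length']) && (int) $headers['Content-Length'] > $max) {
    die('Image rejected: reported size exceeds 5 MB');
}
// ...otherwise continue with a size-limited download such as the
// stream_copy_to_stream() approach shown above.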
Here is an example; you will need to make some changes to fit your requirements.
function getimagesize_limit($url, $limit)
{
    global $phpbb_root_path;
    $tmpfilename = tempnam($phpbb_root_path . 'store/', unique_id() . '-');
    $fp = fopen($url, 'r');
    if (!$fp) return false;
    $tmpfile = fopen($tmpfilename, 'w');
    $size = 0;
    while (!feof($fp) && $size < $limit)
    {
        $content = fread($fp, 8192);
        $size += 8192;
        fwrite($tmpfile, $content);
    }
    fclose($fp);
    fclose($tmpfile);
    $is = getimagesize($tmpfilename);
    unlink($tmpfilename);
    return $is;
}

Read only pdf files and get the filename of that pdf files in a directory

I need to read only the PDF files in a directory and get the filename of each one; I will then use those filenames to rename some txt files. I have tried using the eregi function, but it doesn't seem to pick up everything I need. How do I read them properly?
Here's my code:
$savePath = 'D:/dir/';
$dir = opendir($savePath);
$filename = array();
while ($filename = readdir($dir)) {
    if (eregi("\.pdf", $filename)) {
        $read = strtok($filename, "."); // get the filename without extension
        // to rename some txt files using the filenames that I get before
        // $testfile is a text file that I've read before
        $testfile = "$read.txt";
        $file = fopen($testfile, "r") or die('cannot open file');
        if (filesize($testfile) == 0) {
        } else {
            $text = fread($file, 55024);
            fclose($file);
            echo "</br>"; echo "</br>";
        }
    }
}
More elegant:
foreach (glob("D:/dir/*.pdf") as $filename) {
    // do something with $filename
}
To get the filename only:
foreach (glob("D:/dir/*.pdf") as $filename) {
    $filename = basename($filename);
    // do something with $filename
}
You can do this by filtering on file type; the following is sample code.
<?php
// directory path can be either absolute or relative
$dirPath = '.';
// open the specified directory and check if it's opened successfully
if ($handle = opendir($dirPath)) {
    // keep reading the directory entries 'til the end
    $i = 0;
    while (false !== ($file = readdir($handle))) {
        $i++;
        // only handle entries with the extensions we're interested in
        if (eregi("\.jpg", $file) || eregi("\.gif", $file) || eregi("\.png", $file)) {
            if (is_dir("$dirPath/$file")) {
                // found a directory, do something with it?
                echo " [$file]<br>";
            } else {
                // found an ordinary file
                echo $i . "- $file<br>";
            }
        }
    }
    // ALWAYS remember to close what you opened
    closedir($handle);
}
?>
The above demonstrates filtering for image file types; you can do the same for .pdf files.
Better explained here
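For .pdf files specifically, the same readdir() filter might look like the sketch below (using preg_match instead of eregi, since eregi has been removed from current PHP versions):
<?php
$dirPath = 'D:/dir/';   // placeholder directory
if ($handle = opendir($dirPath)) {
    while (false !== ($file = readdir($handle))) {
        if (preg_match('/\.pdf$/i', $file)) {
            // filename without the .pdf extension
            echo pathinfo($file, PATHINFO_FILENAME) . "<br>";
        }
    }
    closedir($handle);
}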

php how to get web image size in kb?

How can I get the size of a web image in KB with PHP?
getimagesize() only returns the width and height, and filesize() causes a warning:
$imgsize=filesize("http://static.adzerk.net/Advertisers/2564.jpg");
echo $imgsize;
Warning: filesize() [function.filesize]: stat failed for http://static.adzerk.net/Advertisers/2564.jpg
Is there any other way to get a web image size in kb?
Short of doing a complete HTTP request, there is no easy way:
$img = get_headers("http://static.adzerk.net/Advertisers/2564.jpg", 1);
print $img["Content-Length"];
You can likely utilize cURL however to send a lighter HEAD request instead.
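For example, a minimal HEAD-request sketch with cURL (untested; the server may still omit or misreport the length):
<?php
$url = "http://static.adzerk.net/Advertisers/2564.jpg";
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, true);         // HEAD request, no body
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // don't echo the response
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
curl_exec($ch);
$bytes = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
curl_close($ch);
if ($bytes > 0) {
    echo round($bytes / 1024, 2) . ' KB';
}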
<?php
$file_size = filesize($_SERVER['DOCUMENT_ROOT']."/Advertisers/2564.jpg"); // Get file size in bytes
$file_size = $file_size / 1024; // Get file size in KB
echo $file_size; // Echo file size
?>
Not sure about using filesize() for remote files, but there are good snippets on php.net though about using cURL.
http://www.php.net/manual/en/function.filesize.php#92462
That sounds like a permissions issue because filesize() should work just fine.
Here is an example:
php > echo filesize("./9832712.jpg");
1433719
Make sure the permissions are set correctly on the image and that the path is also correct. You will need to apply some math to convert from bytes to KB but after doing that you should be in good shape!
Here is a good link regarding filesize()
You cannot use filesize() to retrieve remote file information; the file must first be downloaded, or its size determined by another method.
Using cURL is a good method:
Tutorial
You can also use this function:
<?php
$filesize = file_get_size($dir . '/' . $ff);
$filesize = $filesize / 1024; // to convert in KB
echo $filesize;

function file_get_size($file) {
    //open file
    $fh = fopen($file, "r");
    //declare some variables
    $size = "0";
    $char = "";
    //set file pointer to 0; I'm a little bit paranoid, you can remove this
    fseek($fh, 0, SEEK_SET);
    //set multiplicator to zero
    $count = 0;
    while (true) {
        //jump 1 MB forward in file
        fseek($fh, 1048576, SEEK_CUR);
        //check if we actually left the file
        if (($char = fgetc($fh)) !== false) {
            //if not, go on
            $count++;
        } else {
            //else jump back where we were before leaving and exit loop
            fseek($fh, -1048576, SEEK_CUR);
            break;
        }
    }
    //we could make $count jumps, so the file is at least $count * 1.000001 MB large
    //1048577 because we jump 1 MB and fgetc goes 1 B forward too
    $size = bcmul("1048577", $count);
    //now count the last few bytes; they're always less than 1048576 so it's quite fast
    $fine = 0;
    while (false !== ($char = fgetc($fh))) {
        $fine++;
    }
    //and add them
    $size = bcadd($size, $fine);
    fclose($fh);
    return $size;
}
?>
You can get the file size by using the get_headers() function. Use the code below:
$image = get_headers($url, 1);
$bytes = $image["Content-Length"];
$mb = $bytes/(1024 * 1024);
echo number_format($mb,2) . " MB";
