Codeigniter ZIP file download corrupted - php

I am using the standard CI Zip library to read a directory and create a zip file for download. However, when I test this in Windows I get an error, and in OS X the zip unpacks to a .cpgz file which then unpacks to another zip file, infinitely.
My CI function:
public function downloadPackage($unique) {
    $this->load->library('zip');
    $text = $this->syndication_m->getTextForContent($unique);
    $path = '/var/www/html/uploads/'.$unique.'/';
    if (file_exists($path."copy-".$unique.".txt")) {
        unlink($path."copy-".$unique.".txt");
        $fp = fopen($path."copy-".$unique.".txt", "wb");
        fwrite($fp, $text);
        fclose($fp);
    }
    $this->zip->read_dir($path);
    $this->zip->download('dl-'.$unique.'.zip');
}
Can anyone help me with a fix or suggest what to do here? Thanks
EDIT
public function downloadPackage($unique) {
    $this->load->library('zip');
    $path = '/var/www/html/uploads/'.$unique.'/';
    $text = $this->syndication_m->getTextForContent($unique);
    $this->zip->read_dir($path, TRUE);
    $this->zip->add_data('copy-'.$unique.'.txt', $text->synd_text);
    $this->zip->download('dl-'.$unique.'.zip');
}

I have just come across a very similar issue where the downloaded zip file got corrupted and this is what solved my problem:
I call the ob_end_clean() function just before the $this->zip->download() line.
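Applied to the question's edited controller, that looks like this (a sketch reusing the question's own calls):

public function downloadPackage($unique) {
    $this->load->library('zip');
    $path = '/var/www/html/uploads/'.$unique.'/';
    $text = $this->syndication_m->getTextForContent($unique);
    $this->zip->read_dir($path, TRUE);
    $this->zip->add_data('copy-'.$unique.'.txt', $text->synd_text);
    // Discard anything already in the output buffer (stray whitespace,
    // a BOM, PHP notices) so the response contains only the archive bytes.
    ob_end_clean();
    $this->zip->download('dl-'.$unique.'.zip');
}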

From what I know and what CodeIgniter's documentation says:
$path = '/var/www/html/uploads/'.$unique.'/';
$this->zip->read_dir($path)
Permits you to compress a folder (and its contents) that already exists somewhere on your server. Supply a file path to the directory and the zip class will recursively read it and recreate it as a Zip archive. All files contained within the supplied path will be encoded, as will any sub-folders contained within it.
Basically you are creating a zip with everything starting from /var and ending with the files in the $unique folder, so something is bound to go wrong ...
I recommend setting the second parameter to false, as the documentation specifies:
If you want the tree preceding the target folder to be ignored you can pass FALSE (boolean) in the second parameter.
$this->zip->read_dir($path, FALSE)
This will create a ZIP with the folder "$unique" inside, then all sub-folders stored correctly inside that, but will not include the folders /var/www/html/uploads.
I also recommend, for better control of the data and, I believe, fewer resources, using:
$this->zip->add_data() instead of fopen() and $this->zip->read_dir()
Example:
public function downloadPackage($unique) {
    $this->load->library('zip');
    $text = $this->syndication_m->getTextForContent($unique);
    $this->zip->add_data('copy-'.$unique.'.txt', $text);
    $this->zip->download('dl-'.$unique.'.zip');
}
NOTE: Do not display any data in the controller in which you call this function since it sends various server headers that cause the download to happen and the file to be treated as binary.
--------------- EDIT ---------------
Unless you want to add to the zip whatever files are in the $unique directory, there is no use for $this->zip->read_dir().
<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

class Testing extends CI_Controller {

    public function downloadPackage($unique = '') {
        $this->load->library('zip');
        $unique = '4ts5yegq;';
        $text = 'I am zorro ...';
        $this->zip->add_data('copy-'.$unique.'.txt', $text);
        $this->zip->download('dl-'.$unique.'.zip');
    }
}

Instead of writing the file with fopen(), try using $this->zip->add_data('copy-'.$unique.'.txt', $text); and make sure the directory (and any sub-directories) that $path points to does not contain soft links (shortcuts).
Apart from this, try not to include empty folders in the zip file; sometimes they cause these kinds of issues.

I had a similar issue: whenever I created a new zip file and then unzipped it, the .zip extracted to a .cpgz. This was happening because the zip file was not being created properly, i.e. it was corrupt.
Meanwhile I was getting a Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 262144 bytes). Once I solved this fatal error, my zip and unzip issue was resolved.
I resolved the error below:
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 262144 bytes)
by adding this to index.php (note that '-1' removes the memory limit entirely; a bounded value such as '256M' is safer in production):
ini_set('memory_limit', '-1');

ob_end_clean() deletes the topmost output buffer and discards all of its contents. Add ob_end_clean(); just before the $this->zip->download() call.

Related

Laravel Filesystem - Move file from one disk to another

I am trying to move files from my FTP server to a local directory. First I need to find the correct file on the FTP server, as there can be a few hundred:
//Search for the file.
$fileName = array_filter(Storage::disk('ftp')->files(), function ($file) {
    return preg_match('/('.date("Y-m-d", time()).').*.XLSX/', $file);
});
Above finds the correct file. If I dd($fileName), I get this:
My File Name.XLSX
I then try to move that file to my public disk:
$ftp_file = Storage::disk('ftp')->get($fileName);
$local_file = Storage::disk('public')->move($ftp_file, "moved_file.xlsx");
However, the above code doesn't work. I get the error below:
preg_match() expects parameter 2 to be string, array given
I have identified it as coming from this line:
$ftp_file = Storage::disk('ftp')->get($fileName);
What am I doing wrong? How can I move the file, which I am able to find on my FTP server, to my local disk?
Thank you in advance.
As @Loek pointed out, $fileName was an array, and therefore to access it I needed to use:
$ftp_file = Storage::disk('ftp')->get($fileName[0]);
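For completeness, a sketch of the whole move using only calls that exist on Laravel's Storage facade (get/put); array_values() guards against array_filter() preserving non-zero keys:

$matches = array_values($fileName);
// Read the contents from the FTP disk...
$ftp_file = Storage::disk('ftp')->get($matches[0]);
// ...and write them to the public disk under a new name.
Storage::disk('public')->put('moved_file.xlsx', $ftp_file);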

Laravel maximum execution time error when trying to open .txt file

I have a .txt file with countries and their codes, and I want to get its contents and insert them into the database.
But when I try to open the file using the PHP function fopen(), it throws a maximum execution time error.
Here's the code:
web.php:
Route::get('/countries', 'PageController@insertCountries');
PageController:
public function insertCountries()
{
    $file = fopen(asset('databases/countries.txt'), 'r');
    return 'ok';
}
The size of the file is 6 KB. I am using Laravel 5.4.
EDIT: the file is in my public folder, inside a databases folder.
If you want to open a local file, use the File facade to work directly with the filesystem. You also shouldn't use the asset() helper here: it generates a URL, so fopen() ends up making an HTTP request back to your own server, which can hang until the execution time limit is reached. Do something like this instead:
$file = File::get('/full/path/to/the/file/countries.txt');
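Since the goal is inserting the countries into the database, a fuller sketch (the databases folder location is from the question's edit; the one-pair-per-line CSV format and the countries table are assumptions for illustration):

public function insertCountries()
{
    // public_path() points at the public folder, where the question's
    // databases/countries.txt lives.
    $lines = file(public_path('databases/countries.txt'),
        FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

    foreach ($lines as $line) {
        // Assumed line format: "CODE,Country Name".
        list($code, $name) = explode(',', $line, 2);
        DB::table('countries')->insert(['code' => $code, 'name' => $name]);
    }

    return 'ok';
}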

php - unlink throws error: Resource temporarily unavailable

Here is the piece of code:
public function uploadPhoto() {
    $filename = '../storage/temp/image.jpg';
    file_put_contents($filename, file_get_contents('http://example.com/image.jpg'));
    $photoService->uploadPhoto($filename);
    echo("If file exists: ".file_exists($filename));
    unlink($filename);
}
I am trying to do the following things:
1. Get a photo from a URL and save it in a temp folder on my server. This works fine: the image file is created, and echo("If file exists: ".file_exists('../storage/temp/image.jpg')); prints If file exists: 1.
2. Pass that file to another function that handles uploading it to an Amazon S3 bucket. The file gets stored in my S3 bucket.
3. Delete the photo stored in the temp folder. This doesn't work! I get an error saying:
unlink(../storage/temp/image.jpg): Resource temporarily unavailable
If I use rename($filename,'../storage/temp/renimage.jpg'); instead of unlink($filename); i get an error:
rename(../storage/temp/image.jpg,../storage/temp/renimage.jpg): The process cannot access the file because it is being used by another process. (code: 32)
If I remove the function call $photoService->uploadPhoto($filename);, everything works perfectly fine.
If the file is being used by another process, how do I unlink it after the process has been completed and the file is no longer being used by any process? I do not want to use timers.
Please help! Thanks in advance.
Simplest solution:
gc_collect_cycles();
unlink($file);
Does it for me!
Straight after uploading a file to Amazon S3, it allows me to delete the file on my server.
See here: https://github.com/aws/aws-sdk-php/issues/841
The GuzzleHttp\Stream object holds onto a resource handle until its __destruct method is called. Normally, this means that resources are freed as soon as a stream falls out of scope, but sometimes, depending on the PHP version and whether a script has yet filled the garbage collector's buffer, garbage collection can be deferred. gc_collect_cycles will force the collector to run and call __destruct on all unreachable stream objects.
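In the context of the question's code, the fix is just (a sketch):

$photoService->uploadPhoto($filename);
// Force collection of unreachable objects so any stream still holding
// a handle on the temp file is destructed before the delete.
gc_collect_cycles();
unlink($filename);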
:)
Just had to deal with a similar error.
It seems your $photoService is holding on to the image for some reason...
Since you didn't share the code of $photoService, my suggestion would be to do something like this (assuming you don't need $photoService anymore):
    [...]
    echo("If file exists: ".file_exists($filename));
    unset($photoService);
    unlink($filename);
}
The unset() call will destroy the given variable/object, so it can't "use" (or whatever it does) any files.
I sat over this problem for an hour or two, and finally realized that "temporarily unavailable" really means "temporarily".
In my case, concurrent PHP scripts access the file, either writing or reading. And when the unlink() had poor timing, the whole thing failed.
The solution was quite simple: use the (generally not very advisable) @ operator to prevent the error being shown to the user (sure, one could also stop errors from being printed), and then have another try:
$gone = false;
for ($trial = 0; $trial < 10; $trial++) {
    if ($gone = @unlink($filename)) {
        break;
    }
    // Wait a short time
    usleep(250000);
    // Maybe a concurrent script has deleted the file in the meantime
    clearstatcache();
    if (!file_exists($filename)) {
        $gone = true;
        break;
    }
}
if (!$gone) {
    trigger_error('Warning: Could not delete file '.htmlspecialchars($filename), E_USER_WARNING);
}
After solving this issue and pushing my luck further, I could also trigger the "Resource temporarily unavailable" issue with file_put_contents(). Same solution, now everything works fine.
If I'm wise enough and/or unlinking fails in the future, I'll replace the @ with ob_start(), so the error message can tell me the exact error.
I had the same problem. The S3 client doesn't seem to release the file before unlink() is executed. Extract the contents into a variable and set it as the 'Body' in the putObject array instead of using 'SourceFile':
$fileContent = file_get_contents($filepath);
$result = $s3->putObject(array(
    'Bucket'      => $bucket,
    'Key'         => $folderPath,
    'Body'        => $fileContent,
    //'SourceFile' => $filepath,
    'ContentType' => 'text/csv',
    'ACL'         => 'public-read'
));
See this answer: How to unlock the file after AWS S3 Helper uploading file?
unlink() returns a bool, so you can build a loop with a short wait and a retry limit to give the other process time to finish.
Additionally, put "@" on the unlink to suppress the access error.
Throw another error/exception if the retry limit is reached.

PHP handles a zip file as if it's empty

Here's a very stripped down version of the code I'm using.
$url = "http://server.com/getDaFile";

//Get the file from the server
$zip_file_contents = file_get_contents($url);

//Write file to disk
file_put_contents("file.zip", $zip_file_contents);

//Open zip file
$zip = zip_open("file.zip");
if (is_resource($zip))
{
    while ($zip_entry = zip_read($zip))
    {
        if (zip_entry_open($zip, $zip_entry, 'r'))
        {
            //Read the whole file
            $buf = zip_entry_read($zip_entry, zip_entry_filesize($zip_entry));
            /*
            Do stuff with $buf!!!
            */
            zip_entry_close($zip_entry);
        }
    }
    zip_close($zip);
}
else
{
    echo "Not a resource. Oh noes!\n";
}
So : get the file, save it to disk, unzip it to extract files it contains, do stuff with files. The problem here is that, for some reason I cannot figure out, zip_read returns FALSE, as if it couldn't read files inside the ZIP archive. $zip does contain a resource, I've checked with var_dump.
What makes this even stranger is that I downloaded the ZIP file on my PC using the URL on top, manually uploaded it to the PHP server, and commented out the calls to file_get_contents and file_put_contents so PHP uses the local version. When I do this, zip_read correctly finds the right amount of files inside the ZIP and processing proceeds as it should.
I also tried doing this : $zip = zip_open($url) but $zip fails the is_resource($zip) check.
Something is obviously wrong with my code since the URL works and returns a valid ZIP archive. What is it?
So I finally found out the problem. Following @diolemo's suggestion, I opened my server's ZIP archive in a hex editor. Here's what I found at the top, followed by the usual ZIP binary data: http://pastebin.com/vQEXJtTN
It turns out there was a PHP error mixed in with the actual content of the ZIP file. Unsure of how to fix this (but knowing it certainly had to do with HTTP headers), I tried this guy's code and, what do you know, my code works perfectly now!
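The linked code isn't reproduced in the thread, but one client-side defence against that kind of prepended garbage is easy to sketch: find the first ZIP local-file-header signature (PK\x03\x04, from the ZIP specification) in the response and discard everything before it:

$zip_file_contents = file_get_contents($url);
// Anything before the first local file header (an echoed PHP warning,
// stray whitespace) is not part of the archive, so strip it.
$start = strpos($zip_file_contents, "PK\x03\x04");
if ($start === false) {
    die("Response contains no ZIP data");
}
file_put_contents("file.zip", substr($zip_file_contents, $start));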
Lessons learned? Never trust your data, even if it seems all right (both 7-Zip and Winrar managed to open the file without problem).

Extract a file from a ZIP string

I have a BASE64 string of a zip file that contains one single XML file.
Any ideas on how I could get the contents of the XML file without having to deal with files on the disk?
I would like very much to keep the whole process in the memory as the XML only has 1-5k.
It would be annoying to have to write the zip, extract the XML and then load it up and delete everything.
I had a similar problem; I ended up doing it manually.
https://www.pkware.com/documents/casestudies/APPNOTE.TXT
This extracts a single file (just the first one), no error/crc checks, assumes deflate was used.
// zip in a string
$data = file_get_contents('test.zip');
// magic
$head = unpack("Vsig/vver/vflag/vmeth/vmodt/vmodd/Vcrc/Vcsize/Vsize/vnamelen/vexlen", substr($data,0,30));
$filename = substr($data,30,$head['namelen']);
$raw = gzinflate(substr($data,30+$head['namelen']+$head['exlen'],$head['csize']));
// first file uncompressed and ready to use
file_put_contents($filename,$raw);
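Since the question starts from a base64 string and wants to stay in memory, the same header parsing works on the decoded string directly and the disk write can be dropped (a sketch reusing the answer's unpack format):

// zip in a string, no file involved
$data = base64_decode($base64_string);
$head = unpack("Vsig/vver/vflag/vmeth/vmodt/vmodd/Vcrc/Vcsize/Vsize/vnamelen/vexlen", substr($data, 0, 30));
// inflate the first (and only) entry and parse it as XML, all in memory
$raw = gzinflate(substr($data, 30 + $head['namelen'] + $head['exlen'], $head['csize']));
$xml = new SimpleXMLElement($raw);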
After some hours of research I think it is, surprisingly, not possible to handle a zip without a temporary file:
The first try with php://memory will not work, because it's a stream that cannot be read by functions like file_get_contents() or ZipArchive::open(). In the comments is a link to the PHP bug tracker about the lack of documentation of this problem.
ZipArchive has stream support via ::getStream(), but as stated in the manual it only supports reading operations on an opened file. So you cannot build an archive on the fly with it.
The zip:// wrapper is also read-only: Create ZIP file with fopen() wrapper
I also made some attempts with the other PHP wrappers/protocols, like
file_get_contents("zip://data://text/plain;base64,{$base64_string}#test.txt")
$zip->open("php://filter/read=convert.base64-decode/resource={$base64_string}")
$zip->open("php://filter/read=/resource=php://memory")
but for me they don't work at all, even though there are examples like that in the manual. So you have to swallow the pill and create a temporary file.
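A minimal sketch of that temporary-file route (tempnam() and sys_get_temp_dir() are standard PHP; the single-XML-entry layout is taken from the question):

$tmp = tempnam(sys_get_temp_dir(), 'zip');
file_put_contents($tmp, base64_decode($base64_string));

$zip = new ZipArchive;
if ($zip->open($tmp) === true) {
    // The archive holds a single XML file, so entry 0 is the one we want.
    $xmlString = $zip->getFromIndex(0);
    $zip->close();
}
unlink($tmp); // clean up the temporary file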
Original Answer:
This is just the way of temporary storing. I hope you manage the zip handling and parsing of xml on your own.
Use the php://memory (doc) wrapper. Be aware that this is only useful for small files, because it's stored in memory, obviously. Otherwise use php://temp instead.
<?php
// the decoded content of your zip file
$text = 'base64 _decoded_ zip content';

// this will empty the memory and append your zip content
$written = file_put_contents('php://memory', $text);

// bytes written to memory
var_dump($written);

// new instance of the ZipArchive
$zip = new ZipArchive;

// success of the archive reading
var_dump(true === $zip->open('php://memory'));
toster-cx had it right, you should award him the points. This is an example where the zip comes from a SOAP response as a byte array (binary) whose content is an XML file:
$objResponse = $objClient->__soapCall("sendBill", array(parameters));
$fileData = unzipByteArray($objResponse->applicationResponse);
header("Content-type: text/xml");
echo $fileData;

function unzipByteArray($data) {
    /* the first entry is a directory */
    $head = unpack("Vsig/vver/vflag/vmeth/vmodt/vmodd/Vcrc/Vcsize/Vsize/vnamelen/vexlen", substr($data, 0, 30));
    $filename = substr($data, 30, $head['namelen']);
    $if = 30 + $head['namelen'] + $head['exlen'] + $head['csize'];
    /* the second entry is the actual file */
    $head = unpack("Vsig/vver/vflag/vmeth/vmodt/vmodd/Vcrc/Vcsize/Vsize/vnamelen/vexlen", substr($data, $if, 30));
    $raw = gzinflate(substr($data, $if + $head['namelen'] + $head['exlen'] + 30, $head['csize']));
    /* you could loop here and keep decompressing if there were more files */
    return $raw;
}
If you know the file name inside the .zip, just do this:
<?php
$xml = file_get_contents('zip://./your-zip.zip#your-file.xml');
If you have a plain string, just do this:
<?php
$xml = file_get_contents('compress.zlib://data://text/plain;base64,'.$base64_encoded_string);
[edit] Documentation is there: http://www.php.net/manual/en/wrappers.php
From the comments: if you don't have a base64 encoded string, you need to urlencode() it before using the data:// wrapper.
<?php
$xml = file_get_contents('compress.zlib://data://text/plain,'.urlencode($text));
[edit 2] Even if you already found a solution with a file, there's a solution (to test) I didn't see in your answer:
<?php
$zip = new ZipArchive;
$zip->open('data://text/plain,'.urlencode($base64_decoded_string));
$zip2 = new ZipArchive;
$zip2->open('data://text/plain;base64,'.urlencode($base64_string));
If you are running on Linux and administer the system, you could mount a small ramdisk using tmpfs; the standard file_get/put and ZipArchive functions will then work, except they do not write to disk, they write to memory.
To have it permanently ready, the fstab is something like:
tmpfs /media/ramdisk tmpfs nodev,nosuid,noexec,nodiratime,size=2M 0 0
Set your size and location accordingly so it suits you.
Using php to mount a ramdisk and remove it after using it (if it even has the privileges) is probably less efficient than just writing to disk, unless you have a massive number of files to process in one go.
This is not a pure PHP solution, though, nor is it portable.
You will still need to remove the "files" after use, or have the OS clean up old files.
They will of course not persist over reboots or remounts of the ramdisk.
If you want to read the content of a file inside a zip, such as an XML, you should look at this. I use it to count the words in a docx (which is a zip):
if (!function_exists('docx_word_count')) {
    function docx_word_count($filename)
    {
        $zip = new ZipArchive();
        if ($zip->open($filename) === true) {
            if (($index = $zip->locateName('docProps/app.xml')) !== false) {
                $data = $zip->getFromIndex($index);
                $zip->close();
                $xml = new SimpleXMLElement($data);
                return $xml->Words;
            }
            $zip->close();
        }
        return 0;
    }
}
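Usage is then just (the path is a hypothetical example):

echo docx_word_count('/path/to/report.docx'); // prints the Words value from docProps/app.xml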
The idea from toster-cx is also pretty useful for approaching malformed zip files!
I had one with missing data in the header, so I had to extract the central directory file header using his method:
$CDFHoffset = strpos( $zipFile, "\x50\x4b\x01\x02" );
$CDFH = unpack( "Vsig/vverby/vverex/vflag/vmeth/vmodt/vmodd/Vcrc/Vcsize/Vsize/vnamelen/vexlen", substr( $zipFile, $CDFHoffset, 46 ) );
