gzcompress won't produce a valid zipped file? - php

Consider this:
$text = "hello";
$text_compressed = gzcompress($text, 6);
$success = file_put_contents('file.gz', $text_compressed);
When I try to open file.gz, I get errors. How can I open file.gz in a terminal without calling PHP? (Using gzuncompress() works just fine!)
I can't recode every file I created, since I now have almost a billion files encoded this way! So if there is a solution... :)

You need to use gzencode() instead.
Luckily for you, the fix is easy: just write a script that opens each of your files one by one, uses gzuncompress() to uncompress that file, and then writes that file back out with gzencode() instead of gzcompress(), repeating the process for all of the files.
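A minimal sketch of such a conversion script, assuming the files can be matched with a glob pattern and safely overwritten in place (test on copies first):
<?php
// Hedged sketch: re-save each zlib-compressed (gzcompress) file as real gzip.
// The directory path, glob pattern and in-place overwrite are assumptions.
foreach (glob('/path/to/files/*.gz') as $file) {
    $data = gzuncompress(file_get_contents($file)); // old gzcompress() payload
    if ($data === false) {
        continue; // skip anything that is not gzcompress() output
    }
    file_put_contents($file, gzencode($data, 6));   // rewrite in gzip format
}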
Alternatively (since you said you "didn't want to recode your files"), you could use uncompress to open the existing files from the command line (instead of gunzip/zcat).
As noted on the gzcompress() manual page:
gzcompress
This is not the same as gzip compression, which includes some header data. See gzencode() for gzip compression.

As said, you don't really have gzipped files. To open your files from a terminal you need the uncompress utility.

Related

file_put_contents from fopen creates different file than a browser

This is driving me crazy. I can download a file from the internet through the browser and it works fine. However, if I write a script that does this for me using PHP, the file ends up being completely different (as per online diff tools).
$link = "https://torcache.net/torrent/9AE1726935FF9C08DF422CCE3C4445FC9484478B.torrent?title=[kat.cr]the.big.bang.theory.s08e24.720p.hdtv.x264.dimension.rartv";
file_put_contents($file_name, fopen( $link, 'r'));
First I tried to play around with encoding, but that should not matter as the file is binary, right? I also tried file_get_contents() instead of fopen() first; same problem.
My PHP app is running on UTF-8 w/o BOM files. Can someone help? What am I doing wrong?
I think I got what you mean. The responses from torcache.net are all compressed with gzip. If you download the file from your browser, it is automatically decoded, but if you do the same with PHP you get the same file, still encoded. You can use gzdecode() to decode it.
file_put_contents($file_name, gzdecode(file_get_contents($link)));
You have some errors in your code.
First, you assign a string to a variable without a terminating semicolon:
$link = "https://torcache.net/torrent/9AE1726935FF9C08DF422CCE3C4445FC9484478B.torrent?title=[kat.cr]the.big.bang.theory.s08e24.720p.hdtv.x264.dimension.rartv"
Second, you try to download that file over HTTPS. Check whether OpenSSL is enabled and whether allow_url_fopen is set correctly in your php.ini:
https://secure.php.net/manual/en/features.remote-files.php
There are 3 ways to solve your problem:
Enable OpenSSL for HTTPS calls
Use cURL and disable SSL peer verification (see the sketch after this list)
Replace the https with http to make a plain HTTP call.
Option 3 is the easiest one.
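A rough sketch of option 2, assuming a cURL build with SSL support; note that disabling peer verification is insecure and only papers over the certificate problem:
<?php
// Hypothetical sketch of option 2: fetch the file with cURL and skip SSL
// verification (insecure; fixing the OpenSSL/CA setup is the better fix).
$link = "https://example.com/file.torrent"; // placeholder URL
$ch = curl_init($link);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // "set the ssl verifier to 0"
curl_setopt($ch, CURLOPT_ENCODING, '');          // let cURL decode gzip responses
$data = curl_exec($ch);
curl_close($ch);
file_put_contents('file.torrent', $data);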

performance of passthru("cat file")

I'm using passthru("cat filepath") in my download script. My concern is that it might use a lot of server resources.
What is the difference between directly link a file in a public directory and download a file using passthru("cat filepath") in php?
What is the difference between directly link a file in a public directory and download a file using passthru("cat filepath") in php?
The difference is that linking directly to a file does not invoke PHP, while running a PHP script which in turn runs cat causes, well, both PHP and cat to be invoked. This will take up a moderate amount of extra memory, but won't cause server load under most circumstances.
I was using readfile(), but this function can't be used for files larger than 2 GB
You might want to find a better solution than passing all of the file contents through PHP, in that case. Look into X-Sendfile support in your web server software of choice.
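As a hedged sketch of the X-Sendfile approach (assuming Apache with mod_xsendfile installed; nginx uses an X-Accel-Redirect header instead), PHP only emits headers and the web server streams the file itself:
<?php
// Sketch only: hand the transfer off to the web server via X-Sendfile.
$filepath = '/data/downloads/big.iso'; // assumed path
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($filepath) . '"');
header('X-Sendfile: ' . $filepath);    // Apache + mod_xsendfile serves the file
exit;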
Don't use passthru() for that: you're opening yourself up to command-line injection, and the performance is terrible. readfile() exists for exactly this.
readfile($filepath);
There is a small overhead when passing through PHP compared to a direct link but we are usually talking of milliseconds. However, the browser will not be able to request a 206 Partial when using readfile() unless you code support for it or use something like PEAR::HTTP_Download.
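A minimal sketch of single-range 206 support (roughly what PEAR::HTTP_Download automates for you); the file path and chunk size are assumptions, and multi-range requests are ignored:
<?php
// Hedged sketch: honour a simple "Range: bytes=start-end" request.
$filepath = '/data/downloads/big.iso'; // assumed path
$size  = filesize($filepath);
$start = 0;
$end   = $size - 1;

if (isset($_SERVER['HTTP_RANGE']) &&
    preg_match('/bytes=(\d*)-(\d*)/', $_SERVER['HTTP_RANGE'], $m)) {
    if ($m[1] !== '') { $start = (int)$m[1]; }
    if ($m[2] !== '') { $end   = (int)$m[2]; }
    header('HTTP/1.1 206 Partial Content');
    header("Content-Range: bytes $start-$end/$size");
}

header('Accept-Ranges: bytes');
header('Content-Type: application/octet-stream');
header('Content-Length: ' . ($end - $start + 1));

$fp = fopen($filepath, 'rb');
fseek($fp, $start);
$remaining = $end - $start + 1;
while ($remaining > 0 && !feof($fp)) {
    $chunk = fread($fp, min(8192, $remaining));
    echo $chunk;
    $remaining -= strlen($chunk);
}
fclose($fp);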
EDIT: Seems you are using passthru() because apparently readfile() doesn't handle >2GB files properly (I never had that problem with readfile(), in fact I just tested it with a 7.2 GB file and it worked fine). In which case, at least escape your parameters.
function readfile_ext($filepath) {
    if (!file_exists($filepath))
        return false;
    passthru('cat ' . escapeshellarg($filepath));
    return true;
}
Instead of passthru('cat filepath'), use the PHP native readfile('filepath'), which has better performance.
Both methods will be slower than simply directly linking to the file though, since PHP has a certain overhead.

LAMP: How to create .Zip of large files for the user on the fly, without disk/CPU thrashing

Often a web service needs to zip up several large files for download by the client. The most obvious way to do this is to create a temporary zip file, then either echo it to the user or save it to disk and redirect (deleting it some time in the future).
However, doing things that way has drawbacks:
an initial phase of intensive CPU and disk thrashing, resulting in...
a considerable initial delay to the user while the archive is prepared
very high memory footprint per request
use of substantial temporary disk space
if the user cancels the download half way through, all resources used in the initial phase (CPU, memory, disk) will have been wasted
Solutions like ZipStream-PHP improve on this by shovelling the data into Apache file by file. However, the result is still high memory usage (files are loaded entirely into memory), and large, thrashy spikes in disk and CPU usage.
In contrast, consider the following bash snippet:
ls -1 | zip -@ - | cat > file.zip
# Note: -@ (read file names from stdin) is not supported on MacOS
Here, zip operates in streaming mode, resulting in a low memory footprint. A pipe has an integral buffer: when the buffer is full, the OS suspends the writing program (the program on the left of the pipe). This ensures that zip works only as fast as cat can write its output.
The optimal way, then, would be to do the same: replace cat with a web server process, streaming the zip file to the user with it created on the fly. This would create little overhead compared to just streaming the files, and would have an unproblematic, non-spiky resource profile.
How can you achieve this on a LAMP stack?
You can use popen() (docs) or proc_open() (docs) to execute a unix command (eg. zip or gzip), and get back stdout as a php stream. flush() (docs) will do its very best to push the contents of php's output buffer to the browser.
Combining all of this will give you what you want (provided that nothing else gets in the way -- see esp. the caveats on the docs page for flush()).
(Note: don't use flush(). See the update below for details.)
Something like the following can do the trick:
<?php
// make sure to send all headers first
// Content-Type is the most important one (probably)
//
header('Content-Type: application/x-gzip');
// use popen to execute a unix command pipeline
// and grab the stdout as a php stream
// (you can use proc_open instead if you need to
// control the input of the pipeline too)
//
$fp = popen('tar cf - file1 file2 file3 | gzip -c', 'r');
// pick a bufsize that makes you happy (64k may be a bit too big).
$bufsize = 65535;
$buff = '';
while (!feof($fp)) {
    $buff = fread($fp, $bufsize);
    echo $buff;
}
pclose($fp);
You asked about "other technologies": to which I'll say, "anything that supports non-blocking i/o for the entire lifecycle of the request". You could build such a component as a stand-alone server in Java or C/C++ (or any of many other available languages), if you were willing to get into the "down and dirty" of non-blocking file access and whatnot.
If you want a non-blocking implementation, but you would rather avoid the "down and dirty", the easiest path (IMHO) would be to use nodeJS. There is plenty of support for all the features you need in the existing release of nodejs: use the http module (of course) for the http server; and use child_process module to spawn the tar/zip/whatever pipeline.
Finally, if (and only if) you're running a multi-processor (or multi-core) server, and you want the most from nodejs, you can use Spark2 to run multiple instances on the same port. Don't run more than one nodejs instance per-processor-core.
Update (from Benji's excellent feedback in the comments section on this answer)
1. The docs for fread() indicate that the function will read only up to 8192 bytes of data at a time from anything that is not a regular file. Therefore, 8192 may be a good choice of buffer size.
[editorial note] 8192 is almost certainly a platform dependent value -- on most platforms, fread() will read data until the operating system's internal buffer is empty, at which point it will return, allowing the os to fill the buffer again asynchronously. 8192 is the size of the default buffer on many popular operating systems.
There are other circumstances that can cause fread to return even less than 8192 bytes -- for example, the "remote" client (or process) is slow to fill the buffer - in most cases, fread() will return the contents of the input buffer as-is without waiting for it to get full. This could mean anywhere from 0..os_buffer_size bytes get returned.
The moral is: the value you pass to fread() as buffsize should be considered a "maximum" size -- never assume that you've received the number of bytes you asked for (or any other number for that matter).
2. According to comments on fread docs, a few caveats: magic quotes may interfere and must be turned off.
3. Setting mb_http_output('pass') (docs) may be a good idea. Though 'pass' is already the default setting, you may need to specify it explicitly if your code or config has previously changed it to something else.
4. If you're creating a zip (as opposed to gzip), you'd want to use the content type header:
Content-type: application/zip
or... 'application/octet-stream' can be used instead. (it's a generic content type used for binary downloads of all different kinds):
Content-type: application/octet-stream
and if you want the user to be prompted to download and save the file to disk (rather than potentially having the browser try to display the file as text), then you'll need the content-disposition header. (where filename indicates the name that should be suggested in the save dialog):
Content-disposition: attachment; filename="file.zip"
One should also send the Content-length header, but this is hard with this technique as you don’t know the zip’s exact size in advance. Is there a header that can be set to indicate that the content is "streaming" or is of unknown length? Does anybody know?
Finally, here's a revised example that uses all of #Benji's suggestions (and that creates a ZIP file instead of a TAR.GZIP file):
<?php
// make sure to send all headers first
// Content-Type is the most important one (probably)
//
header('Content-Type: application/octet-stream');
header('Content-disposition: attachment; filename="file.zip"');
// use popen to execute a unix command pipeline
// and grab the stdout as a php stream
// (you can use proc_open instead if you need to
// control the input of the pipeline too)
//
$fp = popen('zip -r - file1 file2 file3', 'r');
// pick a bufsize that makes you happy (8192 has been suggested).
$bufsize = 8192;
$buff = '';
while (!feof($fp)) {
    $buff = fread($fp, $bufsize);
    echo $buff;
}
pclose($fp);
Update (2012-11-23): I have discovered that calling flush() within the read/echo loop can cause problems when working with very large files and/or very slow networks. At least, this is true when running PHP as CGI/FastCGI behind Apache, and it seems likely that the same problem would occur in other configurations too. The problem appears to arise when PHP flushes output to Apache faster than Apache can actually send it over the socket. For very large files (or slow connections), this eventually causes an overrun of Apache's internal output buffer. This causes Apache to kill the PHP process, which of course causes the download to hang or complete prematurely, with only a partial transfer having taken place.
The solution is not to call flush() at all. I have updated the code examples above to reflect this, and I placed a note in the text at the top of the answer.
Another solution is my mod_zip module for Nginx, written specifically for this purpose:
https://github.com/evanmiller/mod_zip
It is extremely lightweight and does not invoke a separate "zip" process or communicate via pipes. You simply point to a script that lists the locations of files to be included, and mod_zip does the rest.
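For illustration only, and from memory of mod_zip's README (the exact header name and list format should be checked against the current documentation): the backend sends an X-Archive-Files: zip header plus a plain-text list of files, and nginx assembles the ZIP itself.
<?php
// Hypothetical sketch of a mod_zip file list; format assumed from the README:
// "<crc-32 or -> <size> <nginx location> <name inside the zip>" per line.
header('X-Archive-Files: zip');
header('Content-Disposition: attachment; filename="download.zip"');
echo "- 428 /protected/file1.txt file1.txt\n";
echo "- 100339 /protected/file2.jpg file2.jpg\n";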
Trying to implement a dynamically generated download of many files of different sizes, I came across this solution but ran into various memory errors like "Allowed memory size of 134217728 bytes exhausted at ...".
After adding ob_flush(); right before the flush();, the memory errors disappeared.
Together with sending the headers, my final solution looks like this (just storing the files inside the zip without a directory structure):
<?php
// Sending headers
header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename="download.zip"');
header('Content-Transfer-Encoding: binary');
ob_clean();
flush();
// On the fly zip creation
$fp = popen('zip -0 -j -q -r - file1 file2 file3', 'r');
while (!feof($fp)) {
    echo fread($fp, 8192);
    ob_flush();
    flush();
}
pclose($fp);
I wrote this S3 streaming file zipper microservice last weekend - might be useful: http://engineroom.teamwork.com/how-to-securely-provide-a-zip-download-of-a-s3-file-bundle/
According to the PHP manual, the ZIP extension provides a zip: wrapper.
I have never used it and I don't know its internals, but logically it should be able to do what you're looking for, assuming that ZIP archives can be streamed, which I'm not entirely sure of.
As for your question about the "LAMP stack", it shouldn't be a problem as long as PHP is not configured to buffer output.
Edit: I'm trying to put a proof-of-concept together, but it seems not-trivial. If you're not experienced with PHP's streams, it might prove too complicated, if it's even possible.
Edit(2): rereading your question after taking a look at ZipStream, I found what's going to be your main problem here when you say (emphasis added)
the operative Zipping should operate in streaming mode, ie processing files and providing data at the rate of the download.
That part will be extremely hard to implement because I don't think PHP provides a way to determine how full Apache's buffer is. So, the answer to your question is no, you probably won't be able to do that in PHP.
It seems you can eliminate any output-buffer-related problems by using fpassthru(). I also use -0 to save CPU time, since my data is already compact. I use this code to serve a whole folder, zipped on the fly:
chdir($folder);
$fp = popen('zip -0 -r - .', 'r');
header('Content-Type: application/octet-stream');
header('Content-disposition: attachment; filename="'.basename($folder).'.zip"');
fpassthru($fp);
I just released a ZipStreamWriter class written in pure PHP userland here:
https://github.com/cubiclesoft/php-zipstreamwriter
Instead of using external applications (e.g. zip) or extensions like ZipArchive, it supports streaming data into and out of the class by implementing a full-blown ZIP writer.
How the streaming aspect works is by using the ZIP file format's "Data Descriptors" as described by section 4.3.5 of the PKWARE ZIP file specification:
4.3.5 File data MAY be followed by a "data descriptor" for the file.
Data descriptors are used to facilitate ZIP file streaming.
There are some possible limitations to be aware of, though. Not every tool can read streaming ZIP files. Zip64 streaming ZIP files may have even less support, but that's only a concern for files over 2GB with this class. However, both 7-Zip and the Windows 10 built-in ZIP file reader seem to be fine handling all of the crazy files that the ZipStreamWriter class threw at them. The hex editor I use got a good workout too.
When using the ZipStreamWriter class, I recommend allowing a buffer to build up to at least 4KB but no more than 65KB at a time before sending it on to the web server. Otherwise, for lots of really tiny files, you'll be flushing out tiny bits of piecemeal data and waste a bunch of extra CPU cycles on the Apache callback end of things.
When something doesn't exist or I don't like the existing options, I find both official and unofficial specifications, some examples to work with, and then I build it from scratch. It's a fairly solid approach to problem solving, if just a tad overkill.

Hacked, what does this piece of code do?

WARNING: This is a possible exploit. Do not run directly on your server if you're not sure what to do with this.
http://pastehtml.com/view/1b1m2r6.txt
I believe this was uploaded via an insecure upload script. How do I decode and uncompress this code? Running it in the browser might execute it as a shell script, open up a port or something.
I can do a base64 decode online, but I couldn't really decompress it.
So there's a string. It's gzipped and base64-encoded, and the code decodes the base64 and then uncompresses it.
When that's done, I end up with this:
<? eval(base64_decode('...')); ?>
Another layer of base64, which is 720440 bytes long.
Now, base64 decoding that, we have 506961 bytes of exploit code.
I'm still examining the code, and will update this answer when I have more understanding. The code is huge.
Still reading through the code; the (very well-done) exploit exposes these tools to the hacker:
TCP backdoor setup
unauthorised shell access
reading of all htpasswd, htaccess, password and configuration files
log wiping
MySQL access (read, write)
append code to all files matching a name pattern (mass exploit)
RFI/LFI scanner
UDP flooding
kernel information
This is probably a professional PHP-based server-wide exploit toolkit, and seeing as it's got a nice HTML interface and the whole lot, it could be easily used by a pro hacker, or even a script kiddie.
This exploit is called c99shell (thanks Yi Jiang) and it turns out to have been quite popular, being talked about and running for a few years already. There are many results on Google for this exploit.
Looking at Delan's decoded source, it appears to be a full-fledged backdoor providing a web interface that can be used to control the server in various ways. Telling fragments from the source:
echo '<center>Are you sure you want to install an IP:Port proxy on this
website/server?<br />
or
<b>Mass Code Injection:</b><br><br>
Use this to add PHP to the end of every .php page in the directory specified.
or
echo "<br><b>UDP Flood</b><br>Completed with $pakits (" .
round(($pakits*65)/1024, 2) . " MB) packets averaging ".
round($pakits/$exec_time, 2) . " packets per second \n";
or
if (!$fp) {echo "Can't get /etc/passwd for password-list.";}
I'd advise you to scrub that server and reinstall everything from scratch.
I know Delan Azabani has done this, but just so you actually know how he got the data out:
Just in case you're wondering how to decompress this, use base64 -d filename > output to decode base64 strings and gunzip file.name.gz to decompress gzipped data.
The trick is in recognising that what you've got is base64 or gunzip and decompressing the right bits.
This way it goes absolutely nowhere near a JS parser or PHP parser.
First, replace the eval with an echo to see what code it would execute if you'd let it.
Send the output of that script to another file, say, test2.php.
In that file, do the same trick again. Run it, and it will output the complete malicious program (it's quite a beast), ~4k lines of hacker's delight.
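A hedged sketch of that eval-to-echo trick, assuming the payload has been pulled out into its own file; nothing here executes the malicious code, it only prints the next layer:
<?php
// Sketch only: decode one obfuscation layer without running it.
// suspicious.txt is assumed to contain the base64 string from the eval() call.
$payload = trim(file_get_contents('suspicious.txt'));
echo gzuncompress(base64_decode($payload)); // redirect stdout to test2.php, repeat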
This is code for a PHP shell.
To decode it, replace eval("?>". with print( and run:
php5 file.php > file2.php
Then replace the eval in the output with print again and open it in your browser: http://localhost/file2.php

PHP gzcompress vs gzopen/gzwrite

I'm writing a PHP script that generates gzipped files. The approach I've been using is to build up a string in PHP and gzcompress() the string before writing it out to a file at the end of the script.
Now I'm testing my script with larger files and running into memory allocation errors. It seems that the result string is becoming too large to hold in memory at one time.
To solve this I've tried to use gzopen() and gzwrite() to avoid allocating a large string in PHP. However, the gzipped file generated with gzwrite() is very different from when I use gzcompress(). I've experimented with different zip levels but it doesn't help. I've also tried using gzdeflate() and end up with the same results as gzwrite(), but still not similar to gzcompress(). It's not just the first two bytes (zlib header) that are different, it's the entire file.
What does gzcompress() do differently from these other gzip functions in PHP? Is there a way I can emulate the results of gzcompress() while incrementally producing the result?
The primary difference is that the gzwrite function initiates zlib with the SYNC_FLUSH option, which will pad the output to a 4 byte boundary (or is it 2), and then a little extra (0x00 0x00 0xff 0xff 0x03).
If you are using these to create Zip files, beware that the default Mac Archive utility does NOT accept this format.
From what I can tell, SYNC_FLUSH is a gzip option, and is not allowed in the PKZip/Info-ZIP format that all .zip files and their derivatives come from.
If you deflate a small file/text, resulting in a single deflate block, and compare it to the same text written with gzwrite(), you'll see two differences: one of the bytes in the header of the deflate block differs by 1, and the end is padded with the bytes above. If the result is larger than one deflate block, the differences start piling up. It is hard to fix this, as the deflate stream's block headers aren't even byte-aligned. There is a reason everybody uses zlib. Few people are brave enough to even attempt to rewrite that format!
I am not 100% certain, but my guess is that gzcompress() uses the ZLIB format, while gzopen()/gzwrite() produce GZIP. Honestly, I can't tell you what the difference between the two is, but I do know that GZIP uses ZLIB for the actual compression.
It is possible that none of that will matter though. Try creating a gzip file with gzopen/gzwrite and then decompress it using the command-line gzip program. If it works, then using gzopen/gzwrite will work for you.
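A minimal sketch of the incremental gzopen()/gzwrite() approach; the file name and compression level are assumptions, and the result can be checked with gzip -t file.gz (or gunzip) on the command line:
<?php
// Sketch: write the output incrementally instead of building one huge string.
$gz = gzopen('file.gz', 'wb6');        // 'wb6' = write, binary, level 6 (assumed)
foreach (['chunk one ', 'chunk two ', 'chunk three'] as $chunk) {
    gzwrite($gz, $chunk);              // only the current chunk is held in memory
}
gzclose($gz);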
I ran into a similar problem once - basically there wasn't enough RAM allocated to PHP to do the job.
I ended up saving the string as a text file, then using exec() to gzip the file via the filesystem. It's not an ideal solution, but it worked for my situation.
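Something like the following rough sketch (the temporary path is an assumption, and escapeshellarg() guards the shell call):
<?php
// Sketch of the save-then-shell-out workaround described above.
$text = 'the large generated string'; // placeholder for the built-up output
$tmp  = '/tmp/output.txt';            // assumed temporary path
file_put_contents($tmp, $text);       // dump the string to disk
exec('gzip ' . escapeshellarg($tmp)); // leaves /tmp/output.txt.gz behind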
Try increasing the memory_limit parameter in your php.ini file.
Both gzcompress() and gzopen() use the DEFLATE method for compressing blocks.
But they have different header/trailer.
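To make the header difference concrete, here is an illustrative sketch comparing the first two bytes of each format (zlib data typically starts with 0x78, gzip with the magic bytes 0x1f 0x8b):
<?php
// Illustration: same data, same DEFLATE payload, different containers.
$data = str_repeat('hello ', 10);
echo bin2hex(substr(gzcompress($data), 0, 2)) . "\n"; // e.g. "789c" - zlib header (RFC 1950)
echo bin2hex(substr(gzencode($data), 0, 2)) . "\n";   // "1f8b" - gzip magic bytes (RFC 1952)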
