I am trying to decompress blocks of data which were compressed with zlib, and the author remarked that for decompression I must use inflateInit() and then inflate() with Z_SYNC_FLUSH. I am sure this must work, because it works in PHP this way:
$temp = substr($temp, 2, -4);
$temp[0] = chr(ord($temp[0]) | 1);
$temp = gzinflate($temp);
but I have checked many methods of decompressing this in C++, and every time it fails.
Here is one of them:
char compressedblockbuffer[3371];
char uncompressedblockbuffer[8192];
is.read(compressedblockbuffer, 3371);
z_stream strm;
strm.zalloc = Z_NULL;
strm.zfree = Z_NULL;
strm.opaque = Z_NULL;
strm.avail_in = 3371;
strm.next_in = (Bytef *)compressedblockbuffer;
strm.avail_out = 8192;
strm.next_out = (Bytef *)uncompressedblockbuffer;
inflateInit(&strm);
inflate(&strm, Z_SYNC_FLUSH);
inflateEnd(&strm);
It's not the full code, just an example to show the problem, and that's why I specified the already-known sizes.
I use the latest zlib release, so maybe something has changed in zlib's inflate since 2003-2004?
The result: uncompressedblockbuffer contains '\0' at indexes 2, 3 and 4 (and many others), so if I print it to the console I only see the first two elements.
UPD:
If gzinflate() in PHP works on the data, then your code won't. gzinflate() expects raw deflate data. Your code is looking for zlib-wrapped deflate data. If you want to decode raw deflate data, you need to use inflateInit2(&strm, -15) instead.
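As a quick PHP illustration of that difference (standard zlib functions; 'hello' is just sample data, and gzinflate() will typically reject the zlib-wrapped string with a data error):
$raw  = gzdeflate('hello');      // raw deflate: no header, no checksum
$zlib = gzcompress('hello');     // zlib-wrapped: 2-byte header + Adler-32 trailer
var_dump(gzinflate($raw));       // string(5) "hello"
var_dump(gzuncompress($zlib));   // string(5) "hello"
var_dump(@gzinflate($zlib));     // bool(false): the raw-deflate decoder rejects the zlib wrapper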
Your call to inflate() is likely returning an error that you are not checking for. You need to always check the return codes of the zlib routines, or for that matter any function that has the potential to return an error.
What kind of data are you decompressing? Many binary formats are perfectly accepting of NUL bytes in their data, since it just reads as a value of 0. For example, inside of image data in many formats, it'd just represent a value of 0 in either that channel or pixel (depending on data size). Not to mention, binary formats don't necessarily read as bytes. A NUL byte may actually be a part of a 2- or 4-byte value.
This is the problem with trying to read binary data as a character string. Binary data needn't follow the rules of text. This is why usually the data boundary is a separate size value, because it can't terminate on NUL values like text.
If you have the original uncompressed data for comparison, either load that data into memory and compare the data, or save the decompressed data to a file and use a diff tool to do a binary comparison of the files.
Related
I need to split a big DBF file using PHP functions. This means that if I have, for example, 1000 records, I have to create 2 files with 500 records each.
I do not have any dbase extension available, nor can I install one, so I have to work with basic PHP functions. Using the basic fread() function I'm able to correctly read and parse the file, but when I try to write a new DBF I have some problems.
As I understand it, the DBF file is structured as a 2-line file: the first line contains the file and header info and is binary; the second line contains the data and is plain text. So I thought I could simply write a new binary file replicating the first line, then manually add the first half of the records to the first file and the remaining records to the second file.
This is the code I use to parse the file, and it works nicely:
$fdbf = fopen($_FILES['userfile']['tmp_name'],'r');
$fields = array();
$buf = fread($fdbf,32);
$header=unpack( "VRecordCount/vFirstRecord/vRecordLength", substr($buf,4,8));
$goon = true;
$unpackString='';
while ($goon && !feof($fdbf)) { // read fields:
$buf = fread($fdbf,32);
if (substr($buf,0,1)==chr(13)) {$goon=false;} // end of field list
else {
$field=unpack( "a11fieldname/A1fieldtype/Voffset/Cfieldlen/Cfielddec", substr($buf,0,18));
$unpackString.="A$field[fieldlen]$field[fieldname]/";
array_push($fields, $field);
}
}
fseek($fdbf, 0);
$first_line = fread($fdbf, $header['FirstRecord']+1);
fseek($fdbf, $header['FirstRecord']+1); // move back to the start of the first record (after the field definitions)
$first_line is the variable that contains the header data, but when I try to write it to a new file something goes wrong and the row isn't written exactly as it was read. This is the code I use for writing:
$handle_log = fopen($new_filename, "wb");
fwrite($handle_log, $first_line, strlen($first_line) );
fwrite($handle_log, $string );
fclose($handle_log);
I've tried adding the b flag to the fopen() mode parameter, as suggested, to open the file in binary mode, and I've also followed the suggestion to pass the exact length of the string to avoid some characters being stripped, but without success: none of the files written are in correct DBF format. What can I do to achieve my goal?
As I understand it, the DBF file is structured as a 2-line file: the first line contains the file and header info and is binary; the second line contains the data and is plain text.
Well, it's a bit more complicated than that.
See here for a full description of the dbf file format.
So it would be best if you could use a library to read and write the dbf files.
If you really need to do this yourself, here are the most important parts:
DBF is a binary file format, so you have to read and write it as binary. For example, the number of records is stored in a 32-bit integer, which can contain zero bytes.
Be careful with string functions on that binary data. Anything that interprets the bytes as text can report the wrong length or mangle the data; for example, mb_strlen() counts characters rather than bytes, and the header legitimately contains null bytes (inside that 32-bit integer, for instance) that text-oriented code will mishandle.
If you split the file (the records), you'll have to adjust the record count in the header (see the sketch below).
When splitting the records, keep in mind that each record is preceded by an extra byte: a space (0x20) if the record is not deleted, an asterisk (0x2A) if it is deleted. For example, if you have 4 fields of 10 bytes each, each record will be 41 bytes long. That value is also available in the header: bytes 10-11 hold a 16-bit number (least significant byte first) giving the number of bytes per record.
The file could end with the end-of-file marker 0x1A, so you'll have to check for that as well.
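As a minimal sketch of the record-count adjustment, writing one of the split files could look like this (it assumes $first_line holds the header block read earlier and $records_to_write is an array of raw record strings destined for this file; the record count sits at bytes 4-7 as a 32-bit little-endian integer, the same field the question reads with unpack("VRecordCount", ...)):
$new_count = count($records_to_write);
// Overwrite bytes 4-7 of the header with the new 32-bit little-endian record count.
$header = substr_replace($first_line, pack('V', $new_count), 4, 4);
$handle = fopen($new_filename, 'wb');
fwrite($handle, $header);
foreach ($records_to_write as $record) {
    fwrite($handle, $record);   // each record already starts with its 0x20/0x2A deletion-flag byte
}
fwrite($handle, chr(0x1A));     // optional end-of-file marker mentioned above
fclose($handle);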
I wrote a small WebSocket library a while back, and found adding gzip support surprisingly easy. I didn't fully realize at the time that the deflate_init() / deflate_add() / inflate_init() / inflate_add() functions were actually PHP 7-only, and now I'd like to be able to run my WebSocket server under PHP 5 environments.
My problem is, deflate_add() produces output that differs slightly from gzdeflate() - by one character in the testcase below.
The deflate_add()/inflate_add()-based approach works perfectly in-browser, so the output of gzdeflate() is the incorrect one. I'm guessing gzdeflate()/gzinflate() are using zlib with different underlying options - something related to stream state, maybe? - and that's causing everything to fall apart.
Ultimately I want to know if I can convince PHP 5-era zlib functions to output "correct" deflated data.
First of all, the deflate_init()/deflate_add()-based approach I used on PHP 7:
$data = "ABC";
$ctx = deflate_init(ZLIB_ENCODING_RAW);
// unfortunately I can't find the gigantic blog post with example code
// that I learned from :(, but it contained the Ruby equivalent of the
// the substr() below. I blinked at it a bit but apparently this is how
// it's done.
$deflated = substr(deflate_add($ctx, $data, ZLIB_SYNC_FLUSH), 0, -4);
// $deflated is now "rtr\6\0"
$ictx = inflate_init(ZLIB_ENCODING_RAW);
$data2 = inflate_add($ictx, $deflated, ZLIB_NO_FLUSH);
// $data2 is now "ABC"
Here's what happens if I use gzdeflate()/gzinflate():
$data = "ABC";
$deflated = gzdeflate($data, 9, ZLIB_ENCODING_RAW);
// $deflated is now "str\6\0"
$output = gzinflate($deflated);
// $output is now "ABC"
Trying to gzinflate() the output of inflate_add() produces a data error. As a TL;DR:
print gzinflate("rtr\6\0")."\n"; // will bomb out
print gzinflate("str\6\0")."\n"; // prints "ABC"
What you are calling correct is incorrect, and what you are calling incorrect is correct.
With deflate_add you are deliberately creating an unterminated, i.e. invalid, deflate stream. Why, I have no idea. (Nor, apparently, do you, since this came from a "gigantic blog post" that you cannot find.) This is done with ZLIB_SYNC_FLUSH, which completes the current deflate block and appends an empty stored block. The substr(..., 0, -4) then removes most of that empty stored block at the end, leaving you with an incomplete, invalid deflate stream that ends prematurely in the middle of a stored block.
gzdeflate on the other hand is creating a properly terminated deflate stream, with a single deflate block marked as the last block. The only difference between the two streams is the first (least significant) bit, which is a 1 to mark the last block.
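You can see that single-bit difference directly, since the two byte strings quoted above differ only in their first byte, 'r' versus 's':
printf("%08b vs %08b\n", ord('r'), ord('s'));   // 01110010 vs 01110011 -- only the low (BFINAL) bit differs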
You do not say how the properly terminated deflate stream is "causing everything to fall apart". In any case, you can make a properly terminated deflate stream with deflate_add by using ZLIB_FINISH instead of ZLIB_SYNC_FLUSH, and forgoing the substr.
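A minimal sketch of that change, reusing $data = "ABC" from above:
$ctx = deflate_init(ZLIB_ENCODING_RAW);
$deflated = deflate_add($ctx, $data, ZLIB_FINISH);   // properly terminated raw deflate stream, no substr() needed
var_dump(gzinflate($deflated));                      // string(3) "ABC"
// inflate_add() with an inflate_init(ZLIB_ENCODING_RAW) context decodes it just as happily.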
There is no way to make an invalid deflate stream with gzdeflate, if that's what you're asking. You can't just change the first bit, since for a larger string, the last block may not be the first block.
Creating bzip2-archived data in PHP is very easy thanks to its implementation in bzcompress. In my present application I cannot reasonably read the whole input file into a string and then call bzcompress or bzwrite. The PHP documentation does not make it clear whether successive calls to bzwrite with relatively small amounts of data will yield the same result as compressing the whole file in one go. I mean something along the lines of
$data = file_get_contents('/path/to/bigfile');
$cdata = bzcompress($data);
I tried out piecemeal bz compression using the routines shown below:
function makeBZFile($infile,$outfile)
{
$fp = fopen($infile,'r');
$bz = bzopen($outfile,'w');
while (!feof($fp))
{
$bytes = fread($fp,10240);
bzwrite($bz,$bytes);
}
bzclose($bz);
fclose($fp);
}
function unmakeBZFile($infile,$outfile)
{
$bz = bzopen($infile,'r');
while (!feof($bz))
{
$str = bzread($bz,10240);
file_put_contents($outfile,$str,FILE_APPEND);
}
bzclose($bz);
}
set_time_limit(1200);
makeBZFile('/tmp/test.rnd','/tmp/test.bz');
unmakeBZFile('/tmp/test.bz','/tmp/btest.rnd');
To test this code I did two things
I used makeBZFile and unmakeBZFile to compress and then decompress a SQLite database - which is what I need to do eventually.
I created a 50 MB file filled with random data: dd if=/dev/urandom of=/tmp/test.rnd bs=50M count=1
In both cases I performed a diff original.file decompressed.file and found that the two were identical.
All very nice, but it is not clear to me why this works. The PHP docs state that bzread(bzpointer, length) reads a maximum of length bytes of UNCOMPRESSED data. If my code above is working, it is because I am forcing the bzwrite and bzread size to 10240 bytes.
What I cannot see is just how bzread knows how to fetch length bytes of UNCOMPRESSED data. I checked out the format of a bzip2 file, and I cannot see that there is anything there which helps easily establish the uncompressed data length for a chunk of the .bz file.
I suspect there is a gap in my understanding of how this works - or else the fact that my code above appears to perform a correct piecemeal compression is purely accidental.
I'd much appreciate a few explanations here.
To understand how the decompression knows the length in bytes, you first have to understand the compression. It helps to know a little about how the compression algorithm works.
BZIP2
The crucial algorithm in BZIP2 is the Burrows-Wheeler transform (BWT), which converts the original data into a form suitable for the subsequent coding; the current version applies Huffman coding. The compression algorithm processes the data in blocks that are totally independent of each other. Block sizes can be set in a range from 1 to 9 (100,000 to 900,000 bytes).
BZIP2 Data Structure
The compressed stream starts with the two characters 'BZ', followed by 1 byte identifying the algorithm used. Immediately after that comes the block-size identifier, valid for the entire file (h1, h2, h3 ... h9); the parameter indicates the block size in units from 1 to 9 (100,000 to 900,000 bytes).
The actual original data is stored in blocks of the selected size, and each block is individually protected with a CRC32 checksum. Additionally, a 48-bit identifier introduces each block. This block structure allows partial reconstruction of damaged files.
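For what it's worth, PHP's bzcompress() exposes that same 1-9 setting as its second (block size) parameter, for example:
$small_blocks = bzcompress($data, 1);   // 100,000-byte blocks
$large_blocks = bzcompress($data, 9);   // 900,000-byte blocks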
GZIP/BZIP2
Gzip and bzip2 are functionally equivalent. One advantage of GZIP is that it can compress a stream, a sequence where you can't look behind; this makes it the official compressor of HTTP streams. The GZIP DEFLATE format (RFC 1951, Compressed Data Format Specification) and the GZIP file format (RFC 1952, File Format Specification) are published documents.
GZIP explained
I'm trying to use PHP to parse a custom gzip archive file format that was created in Delphi (not my code!). The format is basically:
4-byte integer: count of files in archive
for each compressed file:
4-byte integer: filename length [n]
[n] bytes: filename
4-byte integer: uncompressed file length [m]
[????] bytes: gzipped content
I can read the file and actually decode the first compressed file correctly by using zlib_decode() with a max uncompressed length of [m] bytes on the remainder of the file after I know the length ([m]), but then I'm stuck because I don't know how far into the substring I should go to find the next filename -- zlib_decode() doesn't return the number of compressed bytes that it processed before stopping. Since this is a custom format, it doesn't seem like I can use the normal gzopen()/gzread() functions because the entire file isn't compressed (I tried, it doesn't work).
This code works in Delphi because apparently you can pass a file handle back and forth between normal file reading functions and the System.ZLib decoding functions -- you can read [m] uncompressed bytes and the pointer will remain at the last compressed byte -- but PHP doesn't seem to support switching between read-as-normal and read-as-gzip on the fly that way.
Am I missing an obvious way in PHP to deal with a mixed-content file format like this, where metadata and compressed data are stacked together this way? Or am I out of luck without knowing the compressed data length?
A dirty workaround is to recompress the content of each file as I am able to parse it, use that to calculate the compressed length, and adjust the file pointer in the original file manually as follows:
$current_pos = ftell($handle);
$skip_length = strlen(gzencode($uncompressed_text,9,FORCE_DEFLATE));
fseek($handle, $skip_length+$current_pos);
This works, but feels very hack-ish. I'd still be open to any better approaches.
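Put together, the whole parsing loop looks roughly like this. It's a sketch, not production code: it assumes the 4-byte integers are little-endian (hence unpack('V', ...)), $archive_path is a placeholder name, and it inherits the same caveat as above, since the recompressed length is only an estimate:
$handle = fopen($archive_path, 'rb');               // $archive_path is hypothetical
$fileCount = unpack('V', fread($handle, 4))[1];     // 4-byte integer: count of files
for ($i = 0; $i < $fileCount; $i++) {
    $nameLen = unpack('V', fread($handle, 4))[1];   // 4-byte integer: filename length
    $name    = fread($handle, $nameLen);            // filename
    $rawLen  = unpack('V', fread($handle, 4))[1];   // 4-byte integer: uncompressed length
    $current_pos  = ftell($handle);
    $rest         = stream_get_contents($handle);   // everything from here to EOF
    $uncompressed = zlib_decode($rest, $rawLen);    // stops after $rawLen output bytes
    // zlib_decode() doesn't report how much input it consumed, so recompress
    // to estimate the compressed length and skip past it.
    $skip_length = strlen(gzencode($uncompressed, 9, FORCE_DEFLATE));
    fseek($handle, $current_pos + $skip_length);
    // ... do something with $name and $uncompressed ...
}
fclose($handle);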
EDIT:
Just a note that this eventually failed. However, I was fortunate enough to know in advance the list of expected filenames and I was able to do the following (more reliable since zlib_decode() will decode as much as it can and discard the rest anyway):
foreach ($filenames as $thisFilename) {
$thisPos = strpos($rawData, $thisFilename);
$gzresult = zlib_decode(substr($rawData, $thisPos + strlen($thisFilename) + 8)); // skip 8 bytes for the filename size and uncompressed data size, which are useless info.
}
I am developing a PHP application where large amounts of text need to be stored in a MySQL database. I have come across PHP's gzcompress and MySQL's COMPRESS functions as possible ways of reducing the stored data size.
What is the difference, if any, between these two functions?
(My current thoughts are that gzcompress seems more flexible in that it allows the compression level to be specified, whereas COMPRESS may be a bit simpler to implement and gives better decoupling? Performance is also a big consideration.)
The two methods are more or less the same thing; in fact, you can mix them: compress in PHP and uncompress in MySQL, and vice versa.
To compress in MySQL:
INSERT INTO table (data) VALUE(COMPRESS(data));
To compress in PHP:
$compressed_data = "\x1f\x8b\x08\x00".gzcompress($uncompressed_data);
To uncompress in MySQL:
SELECT UNCOMPRESS(data) FROM table;
To uncompress in PHP:
$uncompressed_data = gzuncompress(substr($compressed_data, 4));
Another option is to use MySQL table compression.
It only requires configuration and is then transparent.
This may be an old question, but it's important as a Google search destination. The results of MySQL's COMPRESS() vs PHP's gzcompress() are the same EXCEPT for MySQL puts a 4-byte header on the data, which indicates the uncompressed data length. You can easily ignore the first 4 bytes from MySQL's COMPRESS() and feed it to gzuncompress() and it will work, but you cannot take the results of PHP's gzcompress() and use MySQL's UNCOMPRESS() on it, unless you take specific care to add in that 4-byte length header, which of course requires having the uncompressed data already...
The accepted answer does not use the right 4-byte header.
The first 4 bytes are the LENGTH, not a static header.
I have no idea about the implications of using a wrong length, but it cannot be good and has the potential to corrupt the database or the table contents in the future (if not now).
The correct answer with POC example:
Output of mysql:
mysql : "select hex(compress('1234512345'))"
0A000000789C3334323631350411000AEB01FF
The php equivalent:
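A minimal sketch producing the same byte layout (a 4-byte little-endian uncompressed length followed by an ordinary zlib stream; the deflate bytes themselves may differ slightly with zlib version or compression level):
$data = '1234512345';
$mysql_equivalent = pack('V', strlen($data)) . gzcompress($data);
echo strtoupper(bin2hex($mysql_equivalent)), "\n";
// 0A000000 followed by 789C... -- the same layout as the COMPRESS() output above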
They both use zlib, so the compression will likely be about the same. Test it and see.
Adding this answer for reference, as I needed to use uncompress() to decompress data where the decompressed size was stored in a separate column to the blob.
As per the previous answers, uncompress() expects the first 4 bytes of the compressed data to be the length, stored in little-endian format. This can be prepended using CONCAT(), e.g.:
select uncompress(
concat(
char(size & 0x000000ff),
char((size & 0x0000ff00) >> 8),
char((size & 0x00ff0000) >> 16),
char((size & 0xff000000) >> 24),
compressed_data)) as decompressed
from my_blobs;
John's answer is almost correct. The length must be computed using strlen() instead of mb_strlen(), as the latter counts a multibyte character as "1 character" even though it spans multiple bytes. Take the following example with a "▄" character, which consists of 3 bytes:
$string="▄";
$compressed = gzcompress($string, 6);
echo "with strlen\n";
$len = strlen($string);
$head = pack('V', $len);
$base64 = base64_encode($head.$compressed);
echo "Length of string: $len\n";
echo $base64."\n";
echo `mysql -e "SELECT UNCOMPRESS(FROM_BASE64('$base64'))" -u root -proot -h mysql`;
echo "\n\nwith mb_strlen\n";
$len = mb_strlen($string);
$head = pack('V', $len);
$base64 = base64_encode($head.$compressed);
echo "Length of string: $len\n";
echo $base64."\n";
echo `mysql -e "SELECT UNCOMPRESS(FROM_BASE64('$base64'))" -u root -proot -h mysql`;
Output:
with strlen
Length of string: 3
AwAAAHicezStBQAEWQH9
UNCOMPRESS(FROM_BASE64('AwAAAHicezStBQAEWQH9'))
▄
with mb_strlen
Length of string: 1
AQAAAHicezStBQAEWQH9
UNCOMPRESS(FROM_BASE64('AQAAAHicezStBQAEWQH9'))
NULL