Is there any way to detect if a gz file is corrupt in PHP?
I'm currently using http://www.php.net/manual/de/function.gzread.php#110078 to determine the uncompressed size and read the whole file via
$zd = gzopen($file, "r");
$contents = gzread($zd, $fzip_size);
gzclose($zd);
Unfortunately some gz files are corrupted and the last 4 bytes do not represent the real uncompressed size. As long as the number is negative I'm able to tell that something is wrong, but sometimes it's positive (and very large), which leads to an out-of-memory error. How can I check in advance whether the file is corrupted?
I'm reading the whole file because I found no working way to read the file line-by-line without knowing the size of the longest line - which led (in some case) to lines that were not complete.
If you can use the Linux gzip command, it is very simple to find out whether the file is broken or not: gzip -t prints nothing if the file is valid.
if (shell_exec('gzip -t ' . escapeshellarg($file) . ' 2>&1')) {
    echo "An error occurred";
}
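If shell access isn't available, a pure-PHP alternative is to stream-decompress the file in fixed-size chunks with inflate_init()/inflate_add(). That way corruption is detected without ever trusting the 4-byte length trailer and without holding the decompressed output in memory. A sketch (function name is mine; assumes PHP 7.2+ for inflate_get_status()):

```php
<?php
// Sketch: validate a gzip file by streaming it through zlib in chunks.
function gzIsValid(string $file, int $chunkSize = 65536): bool {
    $ctx = inflate_init(ZLIB_ENCODING_GZIP);
    $fh = fopen($file, 'rb');
    if ($fh === false) {
        return false;
    }
    while (!feof($fh)) {
        $chunk = fread($fh, $chunkSize);
        // inflate_add() returns false (with a warning) on corrupt data,
        // including a CRC mismatch in the gzip trailer.
        if ($chunk === false || @inflate_add($ctx, $chunk) === false) {
            fclose($fh);
            return false;
        }
    }
    fclose($fh);
    // A truncated file decompresses without error but never reaches
    // the end-of-stream state.
    return inflate_get_status($ctx) === ZLIB_STREAM_END;
}
```

Memory use stays bounded by the chunk size regardless of how large the decompressed data would be, so the huge-positive-length case no longer matters.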
Related
I'm attempting to read a zip file in PHP that I know has a CRC error. Unfortunately, it looks like I can only get 31 bytes of the file to read. Using just $zip->getFromName() gives no errors but just reads 31 bytes
$zip = new ZipArchive;
$zip->open("path/to/corrupted.zip");
// $contents will become just 31 bytes
$contents = $zip->getFromName("file/in/zip.txt");
Trying to read from the stream $zip->getStream() will give a CRC error, and again will only read 31 bytes
$fp = $zip->getStream("file/in/zip.txt");
$contents = "";
while (!feof($fp)) {
    // Gives the error "fread(): Zip stream error: CRC error in ..."
    // Using 1 instead of 2 still only reads 31 bytes but gives no error
    $contents .= fread($fp, 2);
}
fclose($fp);
So, is there any way I could ignore this CRC error, and read the file anyways?
A little background: my website pre-processes .jar files users upload before they're downloaded by others. Users will occasionally upload a valid .jar file with CRC errors in an attempt to deter people from decompiling it. However, I still want to be able to pre-process these files for download.
I have a different pre-processor written in Python, and I was able to pretty easily disable the CRC check by modifying the zipfile library and commenting out the line that does the crc check. Is there any easy way I could do a similar thing in PHP without needing to make my own ZipFile library? I suppose worst case that's what I'll need to do.
You can switch to a pure-PHP zip library; then you can remove the CRC check without having to rewrite the whole library yourself, and you don't have to compile any native code. Here is an example:
https://www.phpclasses.org/package/3864-PHP-Create-and-extract-ZIP-archives-in-purely-in-PHP.html
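If switching libraries is overkill, the raw bytes of an entry can also be pulled out by parsing the zip's local file header directly, so ZipArchive's CRC verification never runs. This is only a sketch under narrow assumptions: the entry you want is the first one, it is stored or deflated, and its sizes are recorded in the local header (general-purpose flag bit 3 clear, i.e. no data descriptor):

```php
<?php
// Sketch: bypass CRC checking by inflating the entry's raw deflate
// stream ourselves. Assumes the first entry starts at offset 0.
function readFirstZipEntry(string $zipPath): string {
    $raw = file_get_contents($zipPath);
    if (substr($raw, 0, 4) !== "PK\x03\x04") {
        throw new RuntimeException('No local file header at offset 0');
    }
    $h = unpack('vflags/vmethod', substr($raw, 6, 4));
    if ($h['flags'] & 0x08) {
        throw new RuntimeException('Sizes live in a data descriptor; not handled here');
    }
    $csize = unpack('V', substr($raw, 18, 4))[1];            // compressed size
    $lens  = unpack('vname/vextra', substr($raw, 26, 4));    // name/extra lengths
    $data  = substr($raw, 30 + $lens['name'] + $lens['extra'], $csize);
    if ($h['method'] === 0) {          // stored: no compression at all
        return $data;
    }
    if ($h['method'] === 8) {          // deflated: raw inflate, no CRC step
        return inflate_add(inflate_init(ZLIB_ENCODING_RAW), $data, ZLIB_FINISH);
    }
    throw new RuntimeException("Unsupported compression method {$h['method']}");
}
```

For a multi-entry jar you would walk the central directory to find each entry's offset instead, but the per-entry decode step is the same.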
I have a folder that will contain hundreds of PNG files and I want to write a script to make them all interlaced. Now, images will be added to that folder over time, and re-processing every image in the folder (whether it's already interlaced or not) seems kind of silly.
So I was wondering: is there any way to use PHP to detect whether an image is interlaced, so I can decide whether to process it?
Thanks heaps!
You can also take the low-level approach - no need to load the full image or to use extra tools or libraries. If we look at the spec, the interlace flag is simply byte 13 of the IHDR chunk's data, so we have to skip the 8-byte PNG signature, plus 8 bytes of IHDR chunk length + type, plus the first 12 data bytes (width, height, bit depth, color type, compression, filter). That gives 28 bytes to be skipped, and if the next byte is 0 then the image is not interlaced.
The implementation takes just 4 lines of code:
function isInterlaced($filename) {
    $handle = fopen($filename, "rb"); // "b": binary-safe, also on Windows
    $contents = fread($handle, 32);
    fclose($handle);
    return ord($contents[28]) != 0;
}
BTW, are you sure you want to use interlaced PNG? Interlaced files are usually larger, and few viewers actually render them progressively.
I think ImageMagick could solve your problem.
http://php.net/manual/en/imagick.identifyimage.php
I don't know if all the attributes are returned, but if you look at the ImageMagick tool documentation you can see that identify can report whether an image is interlaced or not.
http://www.imagemagick.org/script/identify.php
At worst you can run the command for ImageMagick via PHP if the ImageMagick extension is not installed and parse the output for the "Interlace" parameter.
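If the GD extension is available, the whole detect-then-convert job can also be sketched in a few lines of plain PHP; the 29-byte header read is the same low-level check described in the other answer:

```php
<?php
// Sketch (assumes the GD extension): re-save every PNG in $dir that is
// not already interlaced, turning on Adam7 interlacing via imageinterlace().
function interlacePngs(string $dir): void {
    foreach (glob($dir . '/*.png') as $path) {
        // Byte 28 of the file is the IHDR interlace flag (0 = none).
        $header = file_get_contents($path, false, null, 0, 29);
        if (strlen($header) < 29 || ord($header[28]) !== 0) {
            continue; // already interlaced, or too short to be a PNG
        }
        $im = imagecreatefrompng($path);
        if ($im === false) {
            continue; // unreadable image, skip it
        }
        imageinterlace($im, true);
        imagepng($im, $path); // rewrite in place as interlaced
        imagedestroy($im);
    }
}
```

Run on a cron, this only touches files that actually need converting, which was the point of the question.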
I'm looking for the most efficient way to write the contents of the PHP input stream to disk, without using much of the memory that is granted to the PHP script. For example, if the max file size that can be uploaded is 1 GB but PHP only has 32 MB of memory.
define('MAX_FILE_LEN', 1073741824); // 1 GB in bytes
$hSource = fopen('php://input', 'r');
$hDest = fopen(UPLOADS_DIR.'/'.$MyTempName.'.tmp', 'w');
fwrite($hDest, fread($hSource, MAX_FILE_LEN));
fclose($hDest);
fclose($hSource);
Does fread inside an fwrite like the above code shows mean that the entire file will be loaded into memory?
For doing the opposite (writing a file to the output stream), PHP offers a function called fpassthru which I believe does not hold the contents of the file in the PHP script's memory.
I'm looking for something similar but in reverse (writing from input stream to file). Thank you for any assistance you can give.
Yep - fread used in that way would read up to 1 GB into a string first, and then write that back out via fwrite. PHP just isn't smart enough to create a memory-efficient pipe for you.
I would try something akin to the following:
$hSource = fopen('php://input', 'r');
$hDest = fopen(UPLOADS_DIR . '/' . $MyTempName . '.tmp', 'w');
while (!feof($hSource)) {
    /*
     * I'm going to read in 1K chunks. You could make this
     * larger, but as a rule of thumb I'd keep it to 1/4 of
     * your php memory_limit.
     */
    $chunk = fread($hSource, 1024);
    fwrite($hDest, $chunk);
}
fclose($hSource);
fclose($hDest);
If you wanted to be really picky, you could also unset($chunk); within the loop after fwrite to absolutely ensure that PHP frees up the memory - but that shouldn't be necessary, as the next loop will overwrite whatever memory is being used by $chunk at that time.
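For what it's worth, PHP can also do the chunked copy internally: stream_copy_to_stream() pipes one stream into another without ever building the payload as a single string, and it accepts a maximum length. A sketch reusing the same assumed names (UPLOADS_DIR, $MyTempName) as above:

```php
<?php
define('MAX_FILE_LEN', 1073741824);        // 1 GB in bytes, as above
define('UPLOADS_DIR', sys_get_temp_dir()); // assumed upload directory
$MyTempName = uniqid('upload', true);      // assumed temp file name

$hSource = fopen('php://input', 'rb');
$hDest = fopen(UPLOADS_DIR . '/' . $MyTempName . '.tmp', 'wb');

// stream_copy_to_stream() copies in internal chunks; the upload is
// never materialized in PHP memory as one big string.
stream_copy_to_stream($hSource, $hDest, MAX_FILE_LEN);

fclose($hSource);
fclose($hDest);
```

This is essentially the manual while/fread loop with the chunking pushed down into C.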
I want to make a .php file downloadable by my users.
Every file is different from one user to another:
on line 20 I define a variable equal to the user ID.
To do so I tried this: copy the original file, read it up to line 19 (fgets), then fputs a PHP line, and then offer the file for download.
Problem is, the line is not inserted after line 19 but at the end of the .php file. Here is the code:
if (is_writable($filename)) {
    if (!$handle = fopen($filename, 'a+')) {
        echo "Cannot open file ($filename)";
        exit;
    }
    for ($i = 1; $i <= 19; $i++) {
        $offset = fgets($handle);
    }
    if (fwrite($handle, $somecontent) === FALSE) {
        exit;
    }
    fclose($handle);
}
What would you do ?
Append mode ('a+') in fopen() places the handle's pointer at the end of the file. Your fgets() loop will fail because there's nothing left to read at the end of the file - you're basically doing 19 no-ops. Your fwrite() will then output your new value at the end of the file, as expected.
To do your insert, you'd need to rewind() the handle to the beginning, then do your fgets() loop.
However, if you're just wanting people to get this modified file, why bother doing the "open file, scan through, write change, serve up file"? This'd leave a multitude of near-duplicates on your system. A better method would be to split your file into two parts, and then you could do a simple:
readfile('first_part.txt');
echo "The value you want to insert";
readfile('last_part.txt');
which saves you having to save the 'new' file each time. This would also allow arbitrary length inserts. Your fwrite method could potentially trash later parts of the file. e.g. You scan to offset "10" and write out 4 bytes, which replaces the original 4 bytes at that location in the original file. At some point, maybe it turns into 5 bytes of output, and now you've trashed a byte in the original and maybe have a corrupted file.
The a+ mode means:
'a+' Open for reading and writing; place the file pointer at the end of the file. If the file does not exist, attempt to create it.
You probably want r+
'r+' Open for reading and writing; place the file pointer at the beginning of the file.
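Putting the r+ advice together with the buffering caveat from the other answers: to insert without clobbering what follows, read up to the insert point, buffer the rest of the file, then write the new line followed by the buffered tail. A minimal sketch (the function name is mine, not from the question):

```php
<?php
// Sketch: insert $newLine (which should end in "\n") after line
// $afterLine of $filename, preserving everything that follows.
function insertAfterLine(string $filename, int $afterLine, string $newLine): void {
    $handle = fopen($filename, 'r+');
    for ($i = 0; $i < $afterLine; $i++) {
        fgets($handle); // advance past the lines we keep as-is
    }
    $pos = ftell($handle);                 // start of the insert point
    $rest = stream_get_contents($handle);  // buffer everything after it
    fseek($handle, $pos);
    fwrite($handle, $newLine . $rest);     // new line, then the old tail
    fclose($handle);
}
```

For the question's case that would be insertAfterLine($copy, 19, "\$userId = $id;\n") on the copied file.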
Put your desired code in one string variable, with %s at the point where you want to customize it. After that just respond with a PHP MIME type.
e.g.:
$phpCode = sprintf("if (foo == blah) { lala lala + 4; %s = 5; }", $user_specific_variable);
header('Content-type: text/php');
echo $phpCode;
Voila.
NB: Maybe mime type is not correct, I am talking out of my ass here.
I think instead of opening the file in "a+" mode, you should open it in "r+" mode, because "a" always appends to the file. But note that the write will still overwrite whatever currently follows the file pointer. So the idea is that you'll need to buffer the file from the point where you intend to write through to EOF, then write your line followed by what you buffered.
Another approach might be to keep some pattern in your PHP file, like ######. You can then:
1. copy the original PHP script
2. read the complete PHP script into a single variable, say $fileContent, using file_get_contents()
3. use str_replace() function to replace ###### in $fileContent with desired User ID
4. write $fileContent back to the copied PHP script, e.g. with file_put_contents() (or fopen() in "w" mode, which truncates first - "a" mode would append a second copy).
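The four steps above collapse into a couple of lines. A sketch, where 'template.php' and '######' are assumed names for the master script and the marker:

```php
<?php
// Assumed setup: $templatePath points at the master script, which
// contains the marker '######' where the user ID belongs.
function buildUserScript(string $templatePath, string $userId): string {
    $template = file_get_contents($templatePath);
    return str_replace('######', $userId, $template);
}

// To serve the result as a download instead of saving a copy per user:
// header('Content-Disposition: attachment; filename="custom.php"');
// echo buildUserScript('template.php', $currentUserId);
```

This sidesteps the seek/overwrite pitfalls entirely and never leaves per-user files on disk.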
I have a process that uploads files via PHP but the resulting files end up being 2 bytes larger than the source file. I'm not sure where these 2 bytes are coming from. (the actual process is a chunked upload where I slice up a file and upload the slices, each slice winds up arriving 2 bytes longer than it started, but I've tested with a single small file and it too arrives 2 bytes larger than the source).
I'm attaching my PHP... Is this a normal feature of PHP? I'm imagining some sort of null terminator or something (there does appear to be a \n at the end of each file that wasn't there initially). Do I need to read the file into a buffer and get rid of the last two bytes before reassembling my original? I have to imagine I'm doing something wrong, but I'm confounded as to what it would be.
If I do need to manually strip off those last two bytes what's the correct way to do that (it's a binary file) and then append the rest to the overall file I'm rebuilding?
EDIT
Each uploaded file is getting a 0x0D 0x0A (CRLF) pair added to the end as PHP saves it to the server. So... I guess the question is how to prevent this from happening.
<?PHP
$target_path = $_REQUEST['path'];
$originalFileName = $_REQUEST['original_file_name'];
$target_path = $target_path . basename($_FILES['Filedata']['name']);
if (move_uploaded_file($_FILES['Filedata']['tmp_name'], $target_path)) {
    $newFilePath = $originalFileName; // this is the overall file being re-assembled
    $fh = fopen($newFilePath, 'ab') or die("can't open file");
    $nextSlice = file_get_contents($target_path); // this is the slice that's 2 bytes too big each time
    fputs($fh, $nextSlice);
    fclose($fh);
    // unlink($target_path); // normally I'd delete the slice at this point, but I'm
    // hanging on to it while I figure out where the heck the 2 extra bytes are coming from.
    echo "SUCCESS";
} else {
    echo "FAIL:There was an error uploading the file, please try again!";
}
?>
Is the file binary? I'm thinking that file_get_contents is causing problems because it's treating it like a string. Maybe you should try fread instead?
The solution turns out to be this:
fwrite($fh, $GLOBALS["HTTP_RAW_POST_DATA"]);
I may be doing something wrong in my request, since with the method I described in the question the file gets written with the extra 0D0A at the end; but with the above method for extracting the data it arrives intact and at exactly the right length.
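For anyone finding this later: $HTTP_RAW_POST_DATA was deprecated in PHP 5.6 and removed in PHP 7, so on current PHP the equivalent of the line above is reading the php://input stream (the destination path here is a hypothetical stand-in for $newFilePath):

```php
<?php
// php://input exposes the raw request body byte-for-byte, with no
// form decoding in between.
$raw = file_get_contents('php://input');

// Hypothetical destination; in the code above this would be $newFilePath.
$fh = fopen(sys_get_temp_dir() . '/slice.tmp', 'ab');
fwrite($fh, $raw);
fclose($fh);
```

Note that php://input carries nothing for multipart/form-data requests, so this applies when the slice is sent as the raw request body rather than as a form upload.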