I am trying to open a ZIP file using PHP 5.6.24 on Windows Server 2008. As it is running XAMPP, Apache and PHP are 32-bit.
The file in question is 2.5 GB and cannot be opened: ZipArchive::open() returns error code 19 ("Not a zip archive"). However, the file itself checks out fine, and using the same method on 64-bit Linux with the same PHP version, the same file opens without error.
I have removed entries from the Zip file to reduce the file size to 1.9 GB, and PHP has no issues processing that file.
The code is simple:
$za = new ZipArchive();
$err = $za->open('file.zip');
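For reference, the return value can be checked against the ZipArchive error constants (a minimal sketch; ER_NOZIP is the constant behind code 19):

$za = new ZipArchive();
$err = $za->open('file.zip');
if ($err !== true) {
    // open() returns true on success or an integer error code on failure;
    // 19 corresponds to ZipArchive::ER_NOZIP ("Not a zip archive").
    echo "open() failed with code $err";
    if ($err === ZipArchive::ER_NOZIP) {
        echo " (ER_NOZIP: not recognised as a zip archive)";
    }
    echo "\n";
}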
The only thing I can imagine is that, for some reason, 32-bit PHP cannot cope with archives larger than 2 GB, but I have not found anything documenting this. I found mentions of a 4 GB limit and of a 65,535-file limit, but neither applies in this case.
Is there anything I can do, maybe a configuration option, to be able to process this file?
I have an old code base that makes extensive use of GD (no, switching to ImageMagick is not an option). It's been running for several years, through several PHP versions. However, when I run it in my current development environment, I'm hitting a mysterious "gd-png error: cannot allocate image data" error when calling imagecreatefrompng(). The PNG file is the same one I've always used, so I know it's good.
Current setup:
Ansible-provisioned vagrant box
Ubuntu 14.04
PHP 5.5.9
GD 2.1.1
libPNG 1.2.50
PHP's memory limit at the time the script runs is 650M, though it's ultimately the kernel itself that ends up killing the script, and changing PHP's memory limit doesn't seem to have any effect.
The image is 7200x6600 pixels and about 500 KiB on disk. This hasn't been a problem in my other environments and has only started happening in my development environment. Unfortunately, I no longer have access to the other environments for comparison, though the setup was similar in the last working one -- Ubuntu 14.04, PHP 5.5, sufficient memory allocations.
What could be happening in this setup that wasn't happening in my previous setups? How can I fix this?
I was browsing a bit through the PHP 5.5.9 source to try and find your specific error string "cannot allocate image data", but the closest I could find is "gd-png error: cannot allocate gdImage struct". The bundled version of GD for PHP 5.5.9 (all the way up to 7.0.0) is version 2.0.35, so it seems you're looking at a custom build(?). Bugging the PHP guys might not help here.
Looking at the GD source (2.1.2, can't seem to find 2.1.1), the only place this error occurs is:
https://github.com/libgd/libgd/blob/master/src/gd_png.c#L435
image_data = (png_bytep) gdMalloc (rowbytes * height);
if (!image_data) {
    gd_error("gd-png error: cannot allocate image data\n");
    png_destroy_read_struct (&png_ptr, &info_ptr, NULL);
    if (im) {
        gdImageDestroy(im);
    }
    if (palette_allocated) {
        gdFree (palette);
    }
    return NULL;
}
Where gdMalloc is:
https://github.com/libgd/libgd/blob/GD-2.1/src/gdhelpers.c#L73
void *
gdMalloc (size_t size)
{
    return malloc (size);
}
I'm afraid this is as far as my detective work goes. As far as I can tell, the bundled 2.0.35 GD versions in PHP also just use malloc, so at first glance there's no real difference there. I also tried to find the equivalent piece of code in the bundled versions; the closest match seems to be here:
https://github.com/php/php-src/blob/PHP-5.5.9/ext/gd/libgd/gd_png.c#L323
image_data = (png_bytep) safe_emalloc(rowbytes, height, 0);
I can't seem to find where safe_emalloc is defined, but it does seem the old PHP-bundled versions used a different memory allocation here than the version your environment uses. Perhaps your best bet is to check with the GD devs? To avoid a wild goose chase, though, I think trying another environment is the way to go -- after confirming that it is indeed using a non-standard GD version.
After some more PHP source safari, it seems safe_emalloc is used throughout all extensions, so I guess (guess, mind you) that this is the preferred way to allocate memory in PHP extensions. If your environment is in fact using an unaltered GD 2.1.1, it is likely ignoring any PHP memory settings/limits. You might be able to find some other way of specifying a ('stand-alone') GD memory limit?
Edit: Looking at the official libgd FAQ, near the bottom it states that the PHP extension should indeed respect the memory limit, and it specifies that an 8000x8000 pixel image would take about 256MB. Your 650MB limit should therefore really be enough (unless you're working with multiple copies in the same run?), and the fact that it's not working corroborates that something fishy is going on with the memory allocation.
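As a rough sanity check of those numbers (a sketch only, assuming roughly 4 bytes per pixel for a truecolor image; the exact per-pixel overhead depends on the GD build):

<?php
// Back-of-the-envelope estimate: raw pixel data only, ~4 bytes per pixel.
$width  = 7200;
$height = 6600;
$bytes  = $width * $height * 4;                // 190,080,000 bytes
printf("~%.0f MiB\n", $bytes / (1024 * 1024)); // ~181 MiB -- well under a 650M limit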
If I were you, I would do the following:
Send messages to the php-devel list and see if it is a known issue. You'll also want to search their bug tracker. http://php.net/mailing-lists.php and https://bugs.php.net/
Use the GD command-line utilities to see if you can process the same image with the same GD version outside of PHP. This is to determine whether the problem lies with GD and the image themselves, with the PHP GD extension, or with some combination of the two.
1.a Create a PHP CLI program to resize the image and see if it works (a minimal sketch follows at the end of this answer).
Figure out exactly what happens when you call the PHP function in the underlying (probably C) code. This will involve downloading the source of the php-gd module and looking through it.
Code a minimal C application that does the same thing as the underlying PHP GD library and see if you can reproduce the error.
Something in there should tell you exactly what is going on, though it will take a bit of work. You should be able to get all the tools/sources for your environment. The beauty of open source, right?
The other option would be to try different versions of these applications in your environment and see if you find one that works. This is the "shotgun" approach and not the best, IMO.
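For step 1.a, a minimal CLI sketch along these lines should be enough to reproduce the load outside the web stack (the file names here are placeholders):

<?php
// Minimal CLI reproduction: load the PNG and resample it to half size.
// Run with: php test-gd.php input.png
$src = imagecreatefrompng($argv[1]);   // the call that reportedly fails
if ($src === false) {
    fwrite(STDERR, "imagecreatefrompng failed\n");
    exit(1);
}
$w = imagesx($src);
$h = imagesy($src);
$dst = imagecreatetruecolor((int)($w / 2), (int)($h / 2));
imagecopyresampled($dst, $src, 0, 0, 0, 0, (int)($w / 2), (int)($h / 2), $w, $h);
imagepng($dst, 'output.png');
echo "peak memory: " . memory_get_peak_usage(true) . " bytes\n";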
From my experience:
Try a file with a different row size. I had issues with another image library that would not read images wider than 3000px per row; it was otherwise not restricted in absolute size.
You have quite an image there: held in memory as RGBA it comes to roughly 180M of uncompressed image data, about 140M as RGB, and still about 45M as 8-bit. Are you sure your process limits will not be exceeded when this is loaded?
Since you have used the same image with the same libraries before, libgd and libpng evidently have no restriction at this size (or at least none that you are hitting). The problem may be PHP itself; you say the memory limit is 650M, but if your PHP installation uses Suhosin, you may also want to check suhosin.memory_limit.
You say the kernel is killing the script (though I can't see what makes you say so -- is there anything about it in dmesg?), but the kernel only kills a process when the system is out of memory.
One problem could be memory fragmentation / the need for a contiguous allocation. Nowadays this is more of a kernel-space issue than a user-space one, but the amount your script is allocating is not an everyday size.
Fragmentation could be a problem if your script runs on a server with a long uptime and lots of processes allocating and freeing memory. If it is a virtual machine that you just started up and the script fails on the first launch, then the problem isn't fragmentation.
You can do a simple test with the command below. It uses the gcc compiler to create (in the directory where you run it) an a.out executable which allocates and zeroes a memory chunk of 191 x 1024 x 1024 bytes (about ~190M). printf is used so the \n escapes become real newlines, and the malloc result is checked before memset so a failed allocation simply returns 1 instead of crashing:
printf '#include <stdlib.h>\n#include <string.h>\n#define SIZ 191*1024*1024\nint main() {void *p=malloc(SIZ);if(p==NULL)return 1;memset(p,0,SIZ);return 0;}\n' \
| gcc -O2 -xc -
run the executable with:
./a.out
If the problem is system memory, the kernel should kill the executable (I'm not 100% sure of this; the kernel has a policy for choosing which process to kill when memory runs out, so it's worth checking).
You should see the kill message, though, with dmesg perhaps.
(I actually think this shouldn't happen; the malloc should simply fail instead.)
If the malloc succeeds, the executable can allocate enough memory and should return 0; if it fails, it should return 1.
To verify the return code, just use:
./a.out
echo $?
(don't execute any other command in between)
If the malloc fails, you probably just need to add physical or virtual memory to the system.
Keep in mind that ~190M is just the memory for one uncompressed image; depending on how php, libgd and libpng work together, the memory needed by the whole process may be twice that (or more).
I did a simple test and profiled the memory usage: the peak with a 7600x2200 image (1.5M on disk) on my system is about ~342M. Here's the test:
<?php
$im = imagecreatefrompng("input.png");
$string = "Sinker sucker socks pants";
$orange = imagecolorallocate($im, 220, 210, 60);
$px = (imagesx($im) - 7.5 * strlen($string)) / 2;
imagestring($im, 3, $px, 9, $string, $orange);
imagepng($im);
imagedestroy($im);
I tried xhprof at first, but it returned values that were too low, so I used memusg, a simple script:
memusg /usr/bin/php -f test.php >output.png
(the test works, by the way)
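Inside the script itself you could also log PHP's own view of peak memory; that figure only covers allocations tracked by PHP's memory manager, so anything an external GD build grabs through plain malloc() won't show up there, which is one way to tell which allocator is in play. A minimal sketch:

<?php
// Peak memory as seen by PHP's own memory manager. Allocations made by a
// non-bundled GD through plain malloc() are not tracked here.
$im = imagecreatefrompng("input.png");
printf("peak (PHP MM): %.1f MiB\n", memory_get_peak_usage(true) / 1048576);
imagedestroy($im);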
I met this problem yesterday and fixed it today.
Yesterday's env:
php-7.0.12
libpng-1.6.26
libgd-2.1.1
This combination crashed whenever I resized a PNG image.
After checking memory usage, I thought it might be a bug in the latest PHP or libpng.
So I changed the env to this:
php-5.6.27
libpng-1.2.56
libgd-2.1.1
I switched PHP and libpng to mature versions that have been in wide use for a long time.
After recompiling and reinstalling, everything works well for both PNG and JPEG.
I have a background script which generates HTML files (each 100-500KB in size) as a by-product, and when it has accumulated 500 of them, it packs them into a .tar.gz and archives them. It had been running non-stop for several weeks and had generated 131 .tar.gz files, until this morning when it threw the following exception:
Uncaught exception 'PharException' with message 'tar-based phar
"E:/xampp/.../archive/1394109645.tar" cannot be created, contents of file
"58836.html" could not be written' in E:/xampp/.../background.php:68
The code responsible for archiving:
$name = $path_archive . $set . '.tar';
$archive = new PharData($name);
$archive->buildFromDirectory($path_input); // <--- line 68
$archive->compress(Phar::GZ);
unset($archive);
unlink($name);
array_map('unlink', glob($path_input . '*'));
What I've checked and made sure of so far:
I couldn't find anything irregular in the HTML file itself,
nothing else was touching this file during the process,
the script's timeout and memory were unlimited,
and there was enough spare memory and disk space.
What could be causing the exception and/or is there a way to get a more detailed message back from PharData::buildFromDirectory?
Env: Virtual XP (in VirtualBox) running portable XAMPP (1.8.2, PHP 5.4.25) in a shared folder of a Win7 host
I solved a similar problem after hours of bug-hunting today. It was caused by too little space on one partition of the disk. I had enough space on the partition where the tar.gz archive was created, but after removing some log files from another partition everything worked again.
I think it's possible that the PharData object stores some temporary data elsewhere, and that's why this happens even when there is enough space on the disk where you create the tar.gz archive.
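If you want to check this, here is a minimal sketch comparing the free space on the archive partition with the system temp directory (the archive path below is a placeholder; substitute your own $path_archive):

<?php
// Compare free space where the archive is written with the system temp dir,
// since temporary data may be spooled to the latter.
$archiveDir = 'E:/path/to/archive';   // placeholder: use your $path_archive
foreach (array('archive dir' => $archiveDir,
               'temp dir'    => sys_get_temp_dir()) as $label => $path) {
    printf("%s (%s): %.1f MB free\n", $label, $path, disk_free_space($path) / 1e6);
}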
I want to zip a large folder of 50K files on Windows Server. I'm currently using this code:
include_once("CreateZipFile.inc.php");
$createZipFile=new CreateZipFile;
$directoryToZip="repository";
$outputDir=".";
$zipName="CreateZipFileWithPHP.zip";
define("ZIP_DIR",1); //
if(ZIP_DIR)
{
//Code toZip a directory and all its files/subdirectories
$createZipFile->zipDirectory($directoryToZip,$outputDir);
}else
{
//?
}
$fd=fopen($zipName, "wb");
$out=fwrite($fd,$createZipFile->getZippedfile());
fclose($fd);
$createZipFile->forceDownload($zipName);
#unlink($zipName);
Everything works fine up to around 2K image files, but that is not enough for my purposes: I need to zip at least 50K images. At that point my script hits this error:
Fatal error: Maximum execution time of 360 seconds exceeded in C:\xampp\htdocs\filemanager\CreateZipFile.inc.php on line 92
$newOffset = strlen(implode("", $this->compressedData));
I'm looking for any solution that can handle such a huge number of files. I currently use XAMPP on Windows Server 2008 Standard. Is it possible to build the zip in smaller parts, or to use a system command and perhaps an external tool to pack the files and then send the result to the browser for download?
http://pastebin.com/iHfT6x69 for CreateZipFile.inc.php
Try this to increase the execution time:
ini_set('max_execution_time', 500);
500 is the number of seconds; change it to whatever you like.
Do you need a smaller file, or a quickly served file?
For fast serving without compression (and without running out of memory) you could try using a system command with an external archiver and turning compression off.
The files would probably get huge, but they would be served fast as a single file.
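For example, a minimal sketch using exec() with Info-ZIP's zip binary (an assumption: it must be installed and on the PATH; -r recurses into the directory and -0 stores the files without compression):

<?php
// Sketch: let an external archiver do the packing (store-only), then stream it.
$directoryToZip = "repository";
$zipName = "CreateZipFileWithPHP.zip";

exec("zip -r -0 " . escapeshellarg($zipName) . " " . escapeshellarg($directoryToZip),
     $output, $status);

if ($status === 0) {
    header('Content-Type: application/zip');
    header('Content-Disposition: attachment; filename="' . basename($zipName) . '"');
    header('Content-Length: ' . filesize($zipName));
    readfile($zipName);
} else {
    echo "zip failed with status $status";
}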
I have some code that copies a file to a temporary location where it is later included in a zip file.
I already have the source files stored in a local cache directory, and have also stored the SHA1 hash of the original files. The files in question are .png images, ranging from a few kb to around 500kb.
My problem is that at high server loads, the copy intermittently fails. Upon examining my logs, I see that even though a healthy file exists in the source location, the destination contains a file with zero bytes.
So, to try and figure out what was going on and to increase reliability, I implemented a SHA1 check of the destination file, so that if it fails, I can retry the copy using the shell.
99.9% of the time, the files copy with no issue. Occasionally, the first copy fails but the second attempt succeeds. In a small number of cases (around 1 in 2,500, and always at high server load), both copies fail. In nearly all of these cases the SHA1 of the destination file is da39a3ee5e6b4b0d3255bfef95601890afd80709, which is the hash of an empty file.
On every such occasion the script continues, and the created zip includes an empty image file. There is nothing in the Nginx, PHP or PHP-FPM error logs that indicates any problem, and the script will copy the same file successfully when retried.
My stack is Debian Squeeze with the .deb PHP 5.4/PHP 5.4 FPM packages and Nginx 1.2.6 on an Amazon EBS backed AMI. The file system is XFS and I am not using APC or other caching. The problem is consistent and replicable at server loads >500 hits per second.
I cannot find any documentation of known issues that would explain this behaviour. Can anyone provide any insight into what may be causing this issue, or provide suggestions on how I can more reliably copy an image file to a temporary location for zipping?
For reference, here is an extract of the code used to copy / recopy the files.
$copy = copy($cacheFile, $outputFile);
if ($copy && file_exists($outputFile) && sha1_file($outputFile) !== $storedHash) {
    // Custom function to log debug messages
    dbug(array($cacheFile, sha1_file($cacheFile),
               $storedHash, $outputFile,
               file_exists($outputFile), filesize($outputFile)),
         'Corrupt Image File Copy from Cache 1 (native)');

    // Try with exec
    exec("cp " . $cacheFile . " " . $outputFile);

    if (file_exists($outputFile) && sha1_file($outputFile) !== $storedHash) {
        dbug(array($cacheFile, sha1_file($cacheFile),
                   $storedHash, $outputFile,
                   file_exists($outputFile), filesize($outputFile)),
             'Corrupt Image File Copy from Cache 2 (shell exec)');
    }
}