I have a PHP script where users can upload images.
I want the script to lower the image quality (JPEG) if the file size is larger than X kilobytes.
Something like this:
if( $_FILES['uploaded_img']['size'] > $file_size_limit ){
    // code that lowers the quality of the uploaded image but keeps the image width and height
}
What is the best approach for this?
PS: I don't want to change the image width and height.
Sure you can. Do something like this.
$upload = $_FILES['uploaded_img'];
$uploadPath = 'new/path/for/upload/';
$uploadName = pathinfo($upload['name'], PATHINFO_FILENAME);
$restrainedQuality = 75; // 0 = lowest, 100 = highest. ~75 = default
$sizeLimit = 2000; // in bytes, same unit as $_FILES['...']['size']

if ($upload['size'] > $sizeLimit) {
    // open a stream for the uploaded image
    $streamHandle = @fopen($upload['tmp_name'], 'r');
    // create an image resource from the contents of the uploaded image
    $resource = imagecreatefromstring(stream_get_contents($streamHandle));
    if (!$resource)
        die('Something wrong with the upload!');
    // close our file stream
    @fclose($streamHandle);
    // save the uploaded file with a lower quality
    imagejpeg($resource, $uploadPath . $uploadName . '.jpg', $restrainedQuality);
    // delete the temporary upload
    @unlink($upload['tmp_name']);
} else {
    // the file size is below the limit; just move the temp file into its appropriate directory
    move_uploaded_file($upload['tmp_name'], $uploadPath . $upload['name']);
}
This will accept any image format supported by PHP GD (assuming it's installed on your server, which it most likely is). If the image is smaller than the limit, it will just move the original image to the path you specify.
Your basic approach (which is implemented in Austin's answer) will work some of the time, but it's important to keep in mind that quality != file size. While the two are generally correlated, it is perfectly possible (even common) that reducing the quality of a JPEG file actually results in a LARGER file. This is because any JPEG uploaded to your system has already been run through the JPEG compression formula (often with a quality of 79 or 80). Depending on the original image, that process creates artifacts and alters the resulting image. When you run this already-optimized image through the JPEG compression algorithm a second time, it doesn't "know" what the original image looked like, so it treats the incoming JPEG as if it were a brand-new lossless file and tries to copy it as closely as possible, including any artifacts created in the first pass. Couple this with the fact that the first compression already took advantage of most of the "easy" compression tricks, and it becomes quite likely that compressing a second time yields a worse-looking image (the copy-of-a-copy problem) but not a smaller file.
I did a few tests to see where the cutoff was. Unsurprisingly, if the original image had a low compression ratio (q=99), a lot of space was saved by re-compressing to q=75. If the original was compressed at q=75 (pretty common for graphics program defaults), the second q=75 compression looked worse but produced virtually the same file size as the original. If the original had a lower quality (q=50), the second q=75 compression resulted in a significantly larger file. (For these tests I used three complex photos; obviously images with specific palettes/compositions will behave differently under these compressions.) Note: I'm using Fireworks CS4 for this test. I realize these quality indicators have no standardization between platforms.
As noted in the comments below, converting from file formats like PNG to JPEG will usually produce a significantly smaller file (though without any transparency), but JPEG -> JPEG (or GIF -> JPEG, especially for simple or small-palette images) will often not help.
Regardless, you can still try the compression method described by Austin, but make sure you compare the file sizes of the two images when you're done (see the sketch below). If there is only a small gain, or the new file is larger, fall back to the original image.
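A minimal sketch of that fallback comparison, assuming the re-compressed candidate has already been written to a temporary path; the paths and the 10% threshold are illustrative assumptions:

$originalPath  = $upload['tmp_name'];     // hypothetical: the uploaded original
$candidatePath = '/tmp/recompressed.jpg'; // hypothetical: the re-encoded copy

// Keep the re-compressed copy only if it is meaningfully smaller;
// otherwise fall back to the untouched original.
if (filesize($candidatePath) < 0.9 * filesize($originalPath)) {
    rename($candidatePath, $uploadPath . $uploadName . '.jpg');
} else {
    move_uploaded_file($originalPath, $uploadPath . $uploadName . '.jpg');
    unlink($candidatePath);
}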
Related
I have an upload form which allows multiple files to be selected. The original files are quite heavy - approx. 10MB each.
Each uploaded file goes through the same resizing and saving process:
<?php
$file = $request->file('qqfile'); // Huge JPG file in the request
$image = Image::make($file); // Building an Intervention Image object
$sizes = array(2560, 1980, 1366, 1024, 768, 480, 320);
foreach ($sizes as $size) {
    $image = $image->widen($size); // $size is an int, so widen($size), not $size->width
    Storage::put('public/image_'.$size.'.jpg', $image->encode('jpg', 90));
}
Now, my understanding is that in the foreach loop it's the 10MB original image object that's being reused. So, during each iteration, the 10MB image object gets resized to a smaller size, e.g. from 4200x2460 to 320x187 in the last iteration, and then saved to disk.
If I'm correct (am I?), then it's not very efficient, as the GD library operations are quite expensive and the server CPU can get hit harder than it should.
Is there any way I can optimize the resizing process, e.g. for each iteration use the already-resized image object instead of the huge original one?
I would avoid this, though: resize - save - read from disk - resize again, as the disk I/O can get hit as well. What I have in mind is this: resize the original, resize that to the next size, ..., resize to the last size, then save each (see the sketch below). This means I need a way to keep a resized Intervention object in a variable without saving it yet (as everything will be saved in the last step).
Does this make sense? Any ideas?
Thank you.
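A minimal sketch of that chained approach, assuming Intervention Image v2, where widen() resizes the instance in place and encode() produces the encoded payload without touching disk (casting it to string yields the raw bytes):

$image = Image::make($request->file('qqfile'));
$sizes = array(2560, 1980, 1366, 1024, 768, 480, 320);
$encoded = array();

// Each widen() shrinks the result of the previous iteration, so the
// full-resolution original is only processed once, on the first pass.
foreach ($sizes as $size) {
    $image->widen($size);
    $encoded[$size] = (string) $image->encode('jpg', 90); // in-memory JPEG bytes
}

// Save everything in one pass at the end.
foreach ($encoded as $size => $data) {
    Storage::put('public/image_'.$size.'.jpg', $data);
}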
I'm working on an application that allows users to upload images. We are setting a maximum file size of 1MB. I know there are a few options out there for compressing JPEGs (and we are dealing with JPEGs here).
From what I've seen, all of these functions just let you define a compression ratio and scale the quality down. I'm wondering if there is a function that lets you define a maximum file size and calculates the compression ratio necessary to meet that size.
If not, I was thinking my best approach would be a while loop that checks the file size and keeps re-encoding the image with imagejpeg() in 10% quality decrements until the file is below the predefined maximum size.
Am I on the right track here?
It depends on the data, but with images you can take small samples. Downsampling would change the result. Here is an example: PHP - Compress Image to Meet File Size Limit.
I completed this task using the following code:
$quality = 90;
// 1048576 bytes = 1MB; stop before quality drops below 20
while (filesize($full_path) > 1048576 && $quality > 20) {
    $img = imagecreatefromjpeg($full_path); // re-read the file written on the previous pass
    imagejpeg($img, $full_path, $quality);  // overwrite at the current quality
    imagedestroy($img);                     // free the image resource
    $quality = $quality - 10;
    clearstatcache(); // so the next filesize() call sees the new size
}
if (filesize($full_path) > 1048576) {
    echo "<p>File too large</p>";
    unlink($full_path);
    exit;
}
The $quality > 20 part of the condition keeps the script from reducing the quality below a point I would consider reasonable. I'm sure there is more to be done here; I could add a resolution-resize step as well, but this works for my current needs. This is with a maximum file size of 1MB. If the file is still too large after the quality has been scaled all the way down, it returns a "file too large" error and deletes the image from the server.
Take note that clearstatcache() is very important here. Without it, PHP caches the file size and will not notice the change after each rewrite.
This script only applies to JPEGs, but there are other PHP functions for GIFs, PNGs, etc. A variation that sidesteps the stat-cache issue entirely is sketched below.
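One possible variation on the same idea: decode the JPEG once, then re-encode it into an in-memory buffer at decreasing quality, writing to disk only at the end. This is just a sketch (same hypothetical $full_path and 1MB limit as above); it also avoids re-reading the already-degraded file on every pass:

$img = imagecreatefromjpeg($full_path); // decode once, from the pristine original
$quality = 90;
do {
    ob_start(); // capture imagejpeg()'s output in memory instead of writing to disk
    imagejpeg($img, null, $quality);
    $data = ob_get_clean();
    $quality -= 10;
} while (strlen($data) > 1048576 && $quality > 20);
imagedestroy($img);
if (strlen($data) > 1048576) {
    echo "<p>File too large</p>";
    unlink($full_path);
} else {
    file_put_contents($full_path, $data); // one write at the end, no clearstatcache() needed
}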
I'm using Imagick and would like to scale an image down to a maximum file size of 2.5MB.
I had a look at this SO question: ImageMagick: scale JPEG image with a maximum file-size, which is exactly what I want to do, but the extent() method from Imagick does not have the size parameter: http://www.php.net/manual/en/imagick.extentimage.php
Does anyone know how I could do it? At the moment I'm trying to calculate a coefficient between the original file size and the target file size to derive a new resolution, but I found out that the resolution is not proportional to the file size.
Update: the output format is always JPEG, so if there is a way to calculate the size before saving, that would be great.
Thanks in advance, Maxime
The extent function you're calling just sets the canvas size of an image.
The function to set the jpeg extent option is:
$imagick->setOption('jpeg:extent', '2500kb');
Interestingly, the function $imagick->getImageBlob() seems to crash after setting this option. You are forced to write the file to disk rather than being able to get its bytes directly; a usage sketch follows.
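A minimal usage sketch under that constraint (the file names are illustrative):

$imagick = new Imagick('input.png');
$imagick->setImageFormat('jpeg');
// ask the JPEG encoder to search for a quality that keeps the output under ~2.5MB
$imagick->setOption('jpeg:extent', '2500kb');
// write to disk; getImageBlob() may crash with this option set (see above)
$imagick->writeImage('output.jpg');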
The output format is always JPEG so if there is a way to calculate the size before to save it that would be great
There isn't. The amount of detail in each image determines what size it will be after compression at a given image quality. So it's not possible to calculate in advance the quality level that will produce a particular final size.
The C code from the underlying Image Magick library that limits the file size is:
maximum=101;
for (minimum=2; minimum < maximum; )
{
jpeg_image->quality=minimum+(maximum-minimum+1)/2;
status=WriteJPEGImage(jpeg_info,jpeg_image);
if (GetBlobSize(jpeg_image) <= extent)
minimum=jpeg_image->quality+1;
else
maximum=jpeg_image->quality-1;
}
}
I.e. it just recompresses the file at different quality levels, binary-searching until it finds the highest quality that keeps the file size under the given limit. A rough PHP analogue follows below.
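For illustration, a rough PHP analogue of that binary search using GD; this is not ImageMagick's actual code path, and the function name and in-memory buffering are my own assumptions:

// Returns the highest-quality JPEG encoding of $img that fits in
// $maxBytes, or '' if even the lowest quality is too large.
function jpegUnderLimit($img, int $maxBytes): string {
    $best = '';
    $min = 2;
    $max = 100;
    while ($min <= $max) {
        $quality = $min + intdiv($max - $min + 1, 2);
        ob_start();
        imagejpeg($img, null, $quality);
        $data = ob_get_clean();
        if (strlen($data) <= $maxBytes) {
            $best = $data;        // fits: remember it and try a higher quality
            $min = $quality + 1;
        } else {
            $max = $quality - 1;  // too big: try a lower quality
        }
    }
    return $best;
}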
I was using the Firebug Page Speed utility, and one of the suggestions given was to compress the images. So I wrote the following code to compress the image:
$filename="http://localhost.com/snapshots/picture.png";
$img = imagecreatefrompng($filename);
$this->_response->setHeader('Content-Type', 'image/png');
imagepng($img,null,9);
imagedestroy($img);
Now the actual image size is 154K
So I experimented by giving different quality levels and here is what I found
imagepng($img,null,0); --> Size = 225K
imagepng($img,null,1); --> Size = 85.9K
imagepng($img,null,2); --> Size = 83.7K
imagepng($img,null,3); --> Size = 80.9K
imagepng($img,null,4); --> Size = 74.6K
imagepng($img,null,5); --> Size = 73.8K
imagepng($img,null,6); --> Size = 73K
imagepng($img,null,7); --> Size = 72.4K
imagepng($img,null,8); --> Size = 71K
imagepng($img,null,9); --> Size = 70.6K
Do these results look accurate? I'm not sure why, at compression level 0, the output is larger than the original file.
Secondly, is this the best way in PHP to compress images before rendering them in the browser, to improve performance?
Based on the suggestions that it's better to compress the image once at the time of saving, I dug up the code that is called by the Flash program to generate the snapshot:
$video = $this->_getParam('video');
$imgContent = base64_decode($this->_getParam('snapshot'));
file_put_contents("snapshots/" . $video . ".png", $imgContent);
EDITED
Based on Alvaro's suggestion, I have made the following modification to the code, which generates a much smaller JPG file
$video = $this->_getParam('video');
$imgContent = base64_decode($this->_getParam('snapshot'));
file_put_contents("snapshots/" . $video . ".png", $imgContent);
$filename="snapshots/".$video.".png";
$img = imagecreatefrompng($filename);
imagejpeg($img,'test.jpg',75);
So now this is a 3-step process:
Create the initial image using file_put_contents
Use imagecreatefrompng and imagejpeg to compress the file and generate a smaller image
Delete the original image
Is this the optimal way to go about it?
Since PNG uses lossless data compression, the only way to achieve a decent compression in a PNG image (edge cases apart) is to save it as palette (rather than true colour) and reduce the number of colours (see the sketch below). You appear to be processing some sort of screenshots. You may obtain smaller file sizes if you use a lossy compression, i.e., save as JPEG. In either case, you reduce both file size and picture quality. You could also try the GIF format, which tends to be smaller for small graphics.
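A minimal sketch of the palette route with GD; the 128-colour count and the file names are arbitrary assumptions:

$img = imagecreatefrompng('screenshot.png');
// convert from true colour to a palette image; fewer colours = smaller file
imagetruecolortopalette($img, true, 128); // true enables dithering
imagepng($img, 'screenshot-palette.png', 9); // 9 = maximum compression level
imagedestroy($img);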
Last but not least, you should compress images once (typically when they get uploaded), not every time they're served. I suppose your code is just a quick test, but I mention it just in case.
Answer to updated question:
I'm not familiar with PHP image functions but you should probably use a combination of imagecreatefrompng() and imagejpeg(). Also, consider whether you need to keep the original PNG for future reference or you can discard it.
You have not understood the last parameter: it's not the quality, it's the compression level, so increasing it will decrease the image size. Anyway, I've used that method before to compress PNG images and it works well, so I think you should continue to use it.
1- The result seems accurate, since 0 means no compression. From the PHP manual:
quality: Compression level, from 0 (no compression) to 9.
It's normal for the level-0 file to be larger than the original (which may have been slightly compressed to begin with). To see why, you need to understand file compression and how the PHP GD image constructor works.
2- IMHO, the wisest choice would be to compress your PNG files before uploading them to your server (of course, this only applies if you have the choice: static content, few files).
Help for that :
http://www.webreference.com/dev/graphics/compress.html
http://www.punypng.com/
http://omaralzabir.com/reduce_website_download_time_by_heavily_compressing_png_and_jpeg/
If it needs to be dynamic, PHP is the way to go.
I have a site where users can upload images. I process these images directly and resize them into 5 additional formats using the CodeIgniter Image Manipulation class. I do this quite efficiently as follow:
I always resize from the previous format, instead of from the original
I resize using an image quality of 90%, which about halves the file size of JPEGs
The above way of doing things I implemented after advice I got from another question I asked. My test case is a 1.6MB JPEG in RGB mode with a high resolution of 3872 x 2592. For that image, which is kind of a borderline case, the resize process in total takes about 2 seconds, which is acceptable to me.
Now, only one challenge remains. I want the original file to be compressed using that 90% quality, but without resizing it. The idea is that this file too will take about half the file size. I figured I could simply resize it to its current dimensions, but that doesn't seem to do anything to the file or its size. Here's my code, somewhat simplified:
$sourceimage = "test.jpg";
$resize_settings['image_library'] = 'gd2';
$resize_settings['source_image'] = $sourceimage;
$resize_settings['maintain_ratio'] = false;
$resize_settings['quality'] = '90%';
$this->load->library('image_lib', $resize_settings);
$resize_settings['width'] = $imagefile['width'];
$resize_settings['height'] = $imagefile['height'];
$resize_settings['new_image'] = $filename;
$this->image_lib->initialize($resize_settings);
$this->image_lib->resize();
The above code works fine for all formats except the original. I tried debugging into the CI class to see why nothing happens, and I noticed that the script detects that the dimensions did not change; it then simply makes a copy of the file without processing it at all. I commented out that piece of code to force it to resize, but still nothing happens.
Does anybody know how to compress an image (any image, not just jpegs) to 90% using the CI class without changing the dimensions?
I guess you could do something like this:
$original_size = getimagesize('/path/to/original.jpg');
And then set the following options like this:
$resize_settings['width'] = $original_size[0];
$resize_settings['height'] = $original_size[1];
OK, so that doesn't work because CI tries to be smart. The way I see it, you have three possible options:
Rotate the Image by 360°
Watermark the Image (with a 1x1 Transparent Image)
Do It Yourself
The DIY approach is really simple. I know you don't want to use "custom" functions, but take a look:
ImageJPEG(ImageCreateFromString(file_get_contents('/path/to/original.jpg')), '/where/to/save/optimized.jpg', 90);
As you can see, it's even simpler than using CI.
PS: The snippet above can open any type of image (GIF, PNG and JPEG) and it always saves the image as a JPEG at 90% quality, which I believe is what you're trying to achieve.