I'm trying to find the fastest way to convert a multi-page PDF into JPEG images without quality loss. I have the following code:
$this->imagick = new \Imagick();
$this->imagick->setResolution(300, 300);
$this->imagick->readImage($this->uploadPath . '/original/test.pdf');
$num_pages = $this->imagick->getNumberImages();
for ($i = 0; $i < $num_pages; $i++) {
    $this->imagick->setIteratorIndex($i);
    $this->imagick->setImageFormat('jpeg');
    // I need the width, height and filename of each image to add
    // to the DB
    $origWidth = $this->imagick->getImageWidth();
    $origHeight = $this->imagick->getImageHeight();
    $filename = $this->imagick->getImageFilename();
    $this->imagick->writeImage($this->uploadPath . '/large/preview-' . $this->imagick->getIteratorIndex() . '.jpg');
    $this->imagick->scaleImage($origWidth / 2, $origHeight / 2);
    $this->imagick->writeImage($this->uploadPath . '/normal/preview-' . $this->imagick->getIteratorIndex() . '.jpg');
    // add asset to the DB with the given width, height and filename ...
}
This is very slow though, partially because the resolution is so high; but if I don't set it, the text on the images is of very poor quality. Also, the fact that I'm saving the full-size image first and then saving a smaller version of the same file is probably not very efficient.
So does anyone have a better method of doing this? Maybe with only Ghostscript?
The minimum requirements are that I need two versions of the converted image: a full-size version and a half-size version. And I need the width, height and filename of each to add to the database.
You can use Ghostscript: if you set "-sDEVICE=jpeg -sOutputFile=out%d.jpg", each page will be written to a separate file.
Note that it's not really possible to render a PDF to JPEG 'without quality loss', since JPEG is a lossy compression method. Have you considered using a lossless format instead, such as PNG or TIFF?
To get the same images at different sizes you will need to render the file twice at different resolutions; you set the resolution in Ghostscript with '-r'. The width and height can be read easily enough from the image file, or you can use pdf_info.ps to get each page size. Note that a PDF file can contain multiple pages of different sizes; I hope you aren't expecting them all to be the same.
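Putting that together, a minimal sketch of the two rendering passes driven from PHP (this assumes the gs binary is on the PATH; -r300 and -r150 are example resolutions, and the output paths mirror the ones in your code):

$pdf = escapeshellarg($this->uploadPath . '/original/test.pdf');
// Full-size pass
exec('gs -dBATCH -dNOPAUSE -dSAFER -sDEVICE=jpeg -r300 -sOutputFile=' .
    escapeshellarg($this->uploadPath . '/large/preview-%d.jpg') . ' ' . $pdf);
// Half-size pass at half the resolution
exec('gs -dBATCH -dNOPAUSE -dSAFER -sDEVICE=jpeg -r150 -sOutputFile=' .
    escapeshellarg($this->uploadPath . '/normal/preview-%d.jpg') . ' ' . $pdf);
// The width and height for the database can then be read per file with getimagesize().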
We have a server that accepts PSD files. It loads each one into a new Imagick object and creates 4 JPG thumbs for it.
Oddly, the first thumb (the largest one) looks great. Each thumb after that experiences some image distortion where a layer was using a stroke effect.
Code:
$image = new Imagick($fileName);
// mergeImageLayers() returns the flattened image rather than modifying in place
$image = $image->mergeImageLayers(Imagick::LAYERMETHOD_FLATTEN);
foreach ($thumbSizes as $key => $size) { // largest to smallest
    if ($size > $longestSide) {
        $size = $longestSide;
    }
    $image->thumbnailImage($size, $size, true);
    $image->writeImage($nameBase . '-' . $key . ".$extension");
}
$image->destroy();
I'm not sure how Imagick works internally, but my intuition tells me that if the largest thumb is accurate then each one after it should be too.
NOTE: I expect some image distortion when resizing an image. But if you look at the example I posted, it's different from normal resizing artifacts: it's changing the color of some text. I assume it's a problem with resizing the stroke effect, but I would have thought that since I flattened the image first, the stroke effect wouldn't exist anymore. I can pass in a JPEG representation of the same file and it resizes them all perfectly.
I think the PSD conversion is just borked by the library/special FX on the PSD. In the first image output by similar code to yours, the color of the number comes out transparent rather than black: different, but similarly incorrect. If the issue is being caused by the FX on that layer and you can't find a version of the library that processes the PSD correctly, I would suggest not using Imagick/ImageMagick for the processing, but instead installing a copy of Photoshop on the server and using Photoshop's own CLI batch-processing capabilities: https://helpx.adobe.com/photoshop/using/processing-batch-files.html
By the way, an observation: you are resizing the same image repeatedly, which is less than optimal. You can avoid this by resizing a clone of the original image each time:
foreach ($thumbSizes as $key => $size) { // largest to smallest
    $temp = clone $image;
    if ($size > $longestSide) {
        $size = $longestSide;
    }
    $temp->thumbnailImage($size, $size, true);
    $temp->writeImage($nameBase . '-' . $key . ".$extension");
}
This should give a slightly higher output image quality.
I am storing all the images in a folder. These images are uploaded by users through their accounts, so the images are of different sizes.
I want to display the images in 2 div elements with a fixed width and height (but different from each other). The sizes may be of the order of 40x40 pixels and 200x200 pixels. What would be the better way to do it:
1) Storing images of the different sizes in the folder at upload time, or
2) Using the 'height' and 'width' attributes of the img tag in HTML to display the image at the correct size?
Or is there some other way?
Thanks in advance.
Of course it's better to store images already resized and cropped if you want multiple images with different dimensions. It will send less data across.
The better way is to store all sizes of all images. In that case it works much faster:
original/test.jpg
40x40/test.jpg
200x200/test.jpg
Also, please read this article:
http://selbie.wordpress.com/2011/01/23/scale-crop-and-center-an-image-with-correct-aspect-ratio-in-html-and-javascript/
The best way is to store the original image and request the correct sized image from the client. At that time if the correct size image is not found, create it and add it to a cache folder. Then, deliver the correct size image from the cache.
So if image.png of size 1200x800 is uploaded, store that in original_images folder.
Then construct a PHP script, sized_image.php, and use it in your HTML like this:
<img src="sized_image.php?img=image.png&height=200&width=200" />
In your sized_image.php script you would do the following:
$fileName = "cached_image/{$_GET['width']}x{$_GET['height']}_{$_GET['img']}";
if (!file_exists($fileName))
{
//resize and store in $fileName
}
$type = 'image/png'; //Or whatever type the image is
header('Content-Type:' . $type);
header('Content-Length: ' . filesize($fileName));
readfile($fileName);
exit();
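The resize placeholder above could be filled in with GD along these lines (a hypothetical sketch; it assumes PNG originals stored under original_images/, as in the example):

$src = imagecreatefrompng('original_images/' . basename($_GET['img']));
$dst = imagecreatetruecolor((int)$_GET['width'], (int)$_GET['height']);
// resample the original down to the requested size
imagecopyresampled($dst, $src, 0, 0, 0, 0,
    imagesx($dst), imagesy($dst), imagesx($src), imagesy($src));
// store the result in the cache folder for subsequent requests
imagepng($dst, $fileName);
imagedestroy($src);
imagedestroy($dst);

basename() is used here so the lookup stays inside the originals folder.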
I have a PHP script where the user can upload images.
I want the script to lower the image quality (JPEG) if the file size is bigger than 'X' kbytes.
Something like this:
if ($_FILES['uploaded_img']['size'] > $file_size_limit) {
    // code that lowers the quality of the uploaded image but keeps the image width and height
}
What is the best approach for this?
PS: I don't want to change the image width and height.
Sure you can. Do something like this:
$upload = $_FILES['uploaded_img'];
$uploadPath = 'new/path/for/upload/';
$uploadName = pathinfo($upload['name'], PATHINFO_FILENAME);
$restrainedQuality = 75; // 0 = lowest, 100 = highest. ~75 = default
$sizeLimit = 2000;
if ($upload['size'] > $sizeLimit) {
    // open a stream for the uploaded image
    $streamHandle = @fopen($upload['tmp_name'], 'r');
    // create an image resource from the contents of the uploaded image
    $resource = imagecreatefromstring(stream_get_contents($streamHandle));
    if (!$resource)
        die('Something wrong with the upload!');
    // close our file stream
    @fclose($streamHandle);
    // save the uploaded file at a lower quality
    imagejpeg($resource, $uploadPath . $uploadName . '.jpg', $restrainedQuality);
    // delete the temporary upload
    @unlink($upload['tmp_name']);
} else {
    // the file size is less than the limit, just move the temp file into its appropriate directory
    move_uploaded_file($upload['tmp_name'], $uploadPath . $upload['name']);
}
This will accept any image format supported by PHP GD (assuming it's installed on your server, which it most likely is). If the image is below the size limit, it will just upload the original image to the path you specify.
Your basic approach (which is implemented in Austin's answer) will work some of the time, but it's important to keep in mind that quality != file size. While they are generally correlated, it is perfectly possible (even common) that reducing the quality of a JPEG file will actually result in a LARGER file. This is because any JPEG uploaded to your system has already been run through the JPEG compression formula (often with a quality of 79 or 80). Depending on the original image, this process will create artifacts and alter the resulting image. When you run this already-optimized image through the JPEG compression algorithm a second time, it doesn't "know" what the original image looked like, so it treats the incoming JPEG as if it were a brand new lossless file and tries to copy it as closely as possible, including any artifacts created in the first pass. Couple this with the fact that the original JPEG compression already took advantage of most of the "easy" compression tricks, and it ends up being quite likely that compressing a second time produces a worse-looking image (the copy-of-a-copy problem) but not a smaller file.
I did a few tests to see where the cutoff was, and unsurprisingly, if the original image had a low compression ratio (q=99), a lot of space was saved by re-compressing to q=75. If the original was compressed at q=75 (pretty common for graphics program defaults), then the secondary q=75 compression looked worse but resulted in virtually the same file size as the original. If the original had a lower quality level (q=50), then the secondary q=75 compression resulted in a significantly larger file. (For these tests I used three complex photos; obviously images with specific palettes/compositions will behave differently under these compressions.) Note: I'm using Fireworks CS4 for this test; I realize these quality indicators have no standardization between platforms.
As noted in the comments below, moving from file formats like PNG to JPEG will usually end up significantly smaller (though without any transparency), but JPEG -> JPEG (or GIF -> JPEG, especially for simple or small-palette images) will often not help.
Regardless, you can still try using the compression method described by Austin, but make sure you compare the file-sizes of the two images when you're done. If there is only a small incremental gain or the new file is larger, then default back to the original image.
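A minimal sketch of that final comparison (the names here are illustrative: $originalPath is the uploaded temp file, $recompressedPath the re-compressed copy, and $finalPath wherever the kept file should live):

clearstatcache();
if (filesize($recompressedPath) < filesize($originalPath)) {
    // the re-compression actually helped: keep it
    unlink($originalPath);
} else {
    // negligible gain or a larger file: fall back to the original upload
    unlink($recompressedPath);
    move_uploaded_file($originalPath, $finalPath);
}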
I have the following code. It's used to combine various image attachments (and PDFs) into one PDF. For some reason, when I take even a single PDF and put it through the code, the end result comes out looking very bad compared to the original PDF. In addition, I can select text in the source PDF, but in the generated one I cannot.
Any help would be greatly appreciated.
// PDF object
$pdf = new Imagick();
$max_resolution = array('x' => 100, 'y' => 100);

foreach ($attachment_ids as $attachment_id) {
    $attachment = DAO_Attachment::get($attachment_id);
    $file = Storage_Attachments::get($attachment);

    // Temporarily store our attachment
    $im = new Imagick();
    $im->readImageBlob($file);

    // We need to reset the iterator otherwise only one page will be rotated
    $im->resetIterator();

    // Get the resolution
    $resolution = $im->getImageResolution();
    if ($resolution['x'] > $max_resolution['x']) {
        $max_resolution['x'] = $resolution['x'];
    }
    if ($resolution['y'] > $max_resolution['y']) {
        $max_resolution['y'] = $resolution['y'];
    }

    $num_pages = $im->getNumberImages();
    $rotation = array_shift($rotations);
    $degrees = $rotation > 0 ? 360 - $rotation : 0;

    if ($degrees > 0) {
        // Rotate each page
        for ($i = 1; $i <= $num_pages; $i++) {
            $im->nextImage();
            $im->rotateImage(new ImagickPixel(), $degrees);
        }
    }

    // We need to reset the iterator again so all of our pages will be added to the pdf
    $im->resetIterator();

    // If the image format isn't a pdf, convert it to a png
    if ($im->getImageFormat() !== 'pdf') {
        $im->setImageFormat('png');
        // Opacity
        if (method_exists($im, 'setImageOpacity'))
            $im->setImageOpacity(1.0);
    }

    $im->setImageCompression(Imagick::COMPRESSION_LOSSLESSJPEG);
    $im->setImageCompressionQuality(100);
    $im->stripImage();

    // Add the rotated attachment to the PDF
    $pdf->addImage($im);

    // Free
    $im->destroy();
}

// Create a composite
$pdf->setImageFormat('pdf');

// Compress output
$pdf->setImageCompression(Imagick::COMPRESSION_LOSSLESSJPEG);
$pdf->setImageCompressionQuality(100);
$pdf->stripImage();

// Set resolution
$pdf->setImageResolution($max_resolution['x'], $max_resolution['y']);
This may be obvious to you already, but a low-quality image will not result in a high-quality PDF. I don't know how good Imagick's PDF generation capabilities are, but it seems from your code that you are converting images? You could compare by doing the same thing with TcPDF, though if the image is low quality I doubt you will get better results.
Also, if you have access to higher-DPI images than the usual web-optimised versions, I recommend you use those to build your PDF instead. The quality will be a lot better.
ImageMagick uses Ghostscript to convert PDFs to various raster image formats. Ghostscript is quite good at this, but you're hand-cuffing it by scaling the page down to a max of 100x100.
An 8.5x11 inch page at 72 dpi is 612x792 pixels.
Perhaps you meant to restrict DPI rather than resolution? The output still won't scale all that well (vector formats vs pixel formats), but I suspect it would be a big improvement.
It turns out the answer to this is to set the DPI using setResolution(). We do this before using readImageBlob() to read the file containing our image, as it will set the DPI of the image based on the current resolution (so setting it afterwards won't work).
You could also use some math and use resampleImage() to do it after the fact, but setResolution() seems to be working perfectly for us.
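In code, that ordering looks roughly like this (a sketch; 300x300 is an example resolution):

$im = new Imagick();
// the resolution must be set before the blob is read;
// calling setResolution() after readImageBlob() has no effect on rasterisation
$im->setResolution(300, 300);
$im->readImageBlob($file);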
I was using the Firebug Page Speed utility, and one of the suggestions given was to compress the images. So I wrote the following code to compress the image:
$filename="http://localhost.com/snapshots/picture.png";
$img = imagecreatefrompng($filename);
$this->_response->setHeader('Content-Type', 'image/png');
imagepng($img,null,9);
imagedestroy($img);
Now, the actual image size is 154K.
So I experimented by passing different quality levels, and here is what I found:
imagepng($img,null,0); --> Size = 225K
imagepng($img,null,1); --> Size = 85.9K
imagepng($img,null,2); --> Size = 83.7K
imagepng($img,null,3); --> Size = 80.9K
imagepng($img,null,4); --> Size = 74.6K
imagepng($img,null,5); --> Size = 73.8K
imagepng($img,null,6); --> Size = 73K
imagepng($img,null,7); --> Size = 72.4K
imagepng($img,null,8); --> Size = 71K
imagepng($img,null,9); --> Size = 70.6K
Do these results look accurate? I'm not sure why, with quality 0, the image size is larger than the actual size.
Secondly, is this the best way in PHP to compress images before rendering them in the browser to improve performance?
Based on the suggestions that it's better to compress the image once at the time of saving, I dug up the code that is called by the Flash program to generate the snapshot:
$video = $this->_getParam('video');
$imgContent = base64_decode($this->_getParam('snapshot'));
file_put_contents("snapshots/" . $video . ".png", $imgContent);
EDITED
Based on Alvaro's suggestion, I have made the following modification to the code, which generates a much smaller JPG file:
$video = $this->_getParam('video');
$imgContent = base64_decode($this->_getParam('snapshot'));
file_put_contents("snapshots/" . $video . ".png", $imgContent);
$filename="snapshots/".$video.".png";
$img = imagecreatefrompng($filename);
imagejpeg($img,'test.jpg',75);
So now this is a 3-step process:
Create the initial image using file_put_contents
Use imagecreatefrompng and imagejpeg to compress the file and generate a smaller image
Delete the original image
Is this the optimal way to go about it?
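Putting those three steps together looks roughly like this (a sketch based on the snippets above):

$video = $this->_getParam('video');
$imgContent = base64_decode($this->_getParam('snapshot'));
$pngPath = "snapshots/{$video}.png";
file_put_contents($pngPath, $imgContent);        // 1. create the initial PNG
$img = imagecreatefrompng($pngPath);
imagejpeg($img, "snapshots/{$video}.jpg", 75);   // 2. re-save as a smaller JPEG
imagedestroy($img);
unlink($pngPath);                                // 3. delete the original PNG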
Since PNG uses lossless data compression, the only way to achieve a decent compression in a PNG image (edge cases apart) is to save it as palette (rather than true colour) and reduce the number of colours. You appear to be processing some sort of screenshots. You may obtain smaller file sizes if you use a lossy compression, i.e., save as JPEG. In either case, you reduce both file size and picture quality. You could also try the GIF format, which tends to be smaller for small graphics.
Last but not least, you should compress images once (typically when they get uploaded), not every time they're served. I suppose your code is just a quick test, but I mention it just in case.
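For the palette route, GD can do the colour reduction (a sketch; 64 colours is an arbitrary example value):

$img = imagecreatefrompng('snapshots/picture.png');
// convert the true-colour image to a palette image with at most 64 colours, no dithering
imagetruecolortopalette($img, false, 64);
imagepng($img, 'snapshots/picture-palette.png', 9);
imagedestroy($img);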
Answer to updated question:
I'm not familiar with PHP's image functions, but you should probably use a combination of imagecreatefrompng() and imagejpeg(). Also, consider whether you need to keep the original PNG for future reference or whether you can discard it.
You have not understood the last parameter: that's not the quality, that's the compression level, so increasing it will decrease the image size. Anyway, I've used that method before to compress PNG images and it works well, so I think you should continue to use it.
1- The result seems accurate, since 0 means no compression. From the manual: "quality: Compression level: from 0 (no compression) to 9." It's normal for the level-0 file to be larger than the original (which may be slightly compressed to begin with). You need to understand file compression and how the PHP GD image constructor works.
2- IMHO, the wisest choice would be to compress your PNG files before uploading them to your server (of course, that only applies if you have the choice: static content, few files).
Some help for that:
http://www.webreference.com/dev/graphics/compress.html
http://www.punypng.com/
http://omaralzabir.com/reduce_website_download_time_by_heavily_compressing_png_and_jpeg/
If it needs to be dynamic, PHP is the choice.