I am currently working on a PHP application which is run from the command line to optimize a folder of images.
The PHP application is more of a wrapper for other image optimizers: it iterates over the directory, grabs all the images, and then runs each image through the appropriate program to get the best result.
Below are the programs that I will be using and what each will be used for...
imagemagick to determine file type and convert non-animated GIFs to PNG
gifsicle to optimize animated GIF images
jpegtran to optimize JPG images
pngcrush to optimize PNG images
pngquant to optimize PNG images to PNG8 format
pngout to optimize PNG images to PNG8 format
My problem: with a handful of images everything runs smoothly and fairly fast; however, once I run it on a larger folder, it becomes really slow. I do not see a way around the work itself, but one thing that would help is to avoid re-processing images that have already been optimized. Say I have a folder with 100 images, I optimize that folder, and then add 5 new images and re-run the optimizer. It then has to optimize 105 images; my goal is to have it only optimize the 5 newer ones, since the previous 100 have already been optimized. That alone would greatly improve performance when new images are added to the image folder.
I realize the simple solution would be to copy or move the images to a new folder after processing them. My problem with that simple solution is that these images are used on websites, so they are generally hard-linked in a website's source code, and changing the path to the images would complicate that and possibly break things.
Some ideas I have had: write some kind of text-file database in the image folders listing all the images that have already been processed, so that when the application is run, it only touches images that are not in that file. Another idea was to change the file name to include some kind of marker showing that it has been optimized; a third idea is to move each optimized file to a final destination folder once it is optimized. Ideas 2 and 3 are no good, though, because they would break all the image path links in the websites' source code.
So please, if you can think of a decent/good solution to this problem, share it!
Metadata
You could put a flag in the meta info of each image after it is optimized. First check for that flag and only proceed if it's not there. You can use exif_read_data() to read the data; writing it is also possible.
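A minimal sketch of the check, assuming the flag is written into the JPEG comment (COM) marker (for example with ImageMagick's -set comment); the "optimized" flag value and the function name are illustrative:

    <?php
    // Sketch: look for an "optimized" flag in the JPEG comment (COM) marker.
    // exif_read_data() exposes COM segments under the COMMENT key for JPEGs.
    function isFlaggedOptimized($path)
    {
        $exif = @exif_read_data($path, 'COMMENT');
        if ($exif === false || empty($exif['COMMENT'])) {
            return false;
        }
        return in_array('optimized', (array) $exif['COMMENT'], true);
    }

    // Only proceed if the flag is absent; after optimizing, write it back, e.g.:
    // exec('convert ' . escapeshellarg($path) . ' -set comment "optimized" ' . escapeshellarg($path));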
The above is for JPGs. Metadata for PNGs is also possible; take a look at this question, and this one.
I'm not sure about GIFs, but you could definitely convert them to PNGs and then add metadata... although I'm pretty sure they have their own meta info, since metadata extraction tools accept GIFs.
Database Support
Another solution would be to store information about the images in a MySQL database. This way, as you tweak your optimizations you could keep track of when and which optimization was tried on which image. You could pick which images to optimize according to any parameters of your choosing. You could build an admin panel for this. This method would allow easy experimentation.
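A rough sketch of what that could look like, using PDO and a hypothetical optimized_images table; the schema and all the names here are just illustrative:

    <?php
    // Hypothetical tracking table: which optimizer ran on which file, and when.
    $pdo = new PDO('mysql:host=localhost;dbname=imagetool', 'user', 'pass');
    $pdo->exec('CREATE TABLE IF NOT EXISTS optimized_images (
        path         VARCHAR(255) PRIMARY KEY,
        optimizer    VARCHAR(32),
        optimized_at INT
    )');

    function needsOptimizing(PDO $pdo, $path)
    {
        $stmt = $pdo->prepare('SELECT optimized_at FROM optimized_images WHERE path = ?');
        $stmt->execute(array($path));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        // Optimize if the file was never seen, or changed since it was processed.
        return $row === false || filemtime($path) > (int) $row['optimized_at'];
    }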
You could also combine the above two methods.
Maximum File Size
Since this is for saving space, you could have the program only work on images that are larger than a certain file size. Ideally, after running the compressor once, all the images would be below this size, and after that only newly added images that are too big would be touched. I don't know how practical this is to implement, since it would require that the compressor get every image below some arbitrary file size. You could make the maximum file size depend on image dimensions.
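As a rough sketch, one way to make the threshold depend on image dimensions is a bytes-per-pixel ratio; the 0.25 cut-off here is an arbitrary assumption to tune:

    <?php
    // Only queue images whose size exceeds a bytes-per-pixel threshold.
    function isSuspiciouslyLarge($path, $maxBytesPerPixel = 0.25)
    {
        list($width, $height) = getimagesize($path);
        return filesize($path) > $width * $height * $maxBytesPerPixel;
    }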
The easiest way would most likely be to look at the time of the last change for each image. If an image was changed after the last run of your script, you have to run it on this particular image.
The timestamp of when the script was last run could easily be saved in a short text file.
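A minimal sketch of that approach; the ".last_run" file name and image path are arbitrary:

    <?php
    // Compare each image's mtime against the timestamp of the previous run.
    $stampFile = __DIR__ . '/.last_run';
    $lastRun   = is_file($stampFile) ? (int) file_get_contents($stampFile) : 0;

    foreach (glob('/path/to/images/*.{jpg,png,gif}', GLOB_BRACE) as $image) {
        if (filemtime($image) > $lastRun) {
            // optimize($image);  // only files added or changed since the last run
        }
    }

    file_put_contents($stampFile, (string) time());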
A thought that comes to mind is to mix the simple solution with a more complicated one. When you optimize an image, move it to a separate folder. When a request comes in for the original image folder, have your .htaccess file capture those requests and route them to a script which checks whether that same image exists in the optimized folder; if not, optimize it, move it, then proceed.
I know I said simple solution, and this is a slightly complicated one, but the nice part is that it provides a scalable approach to your issue.
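A hedged sketch of that routing idea; the rewrite rule, folder names, and the optimizeAndMove() helper are all hypothetical:

    <?php
    // Assumes an .htaccess rule along the lines of:
    //   RewriteRule ^images/(.+)$ serve.php?img=$1 [L]
    $name      = basename(isset($_GET['img']) ? $_GET['img'] : ''); // crude traversal guard
    $original  = __DIR__ . '/images/' . $name;
    $optimized = __DIR__ . '/optimized/' . $name;

    if (!is_file($optimized) && is_file($original)) {
        optimizeAndMove($original, $optimized); // hypothetical: optimize, then move
    }

    header('Content-Type: ' . mime_content_type($optimized));
    readfile($optimized);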
Edit: One more thing
I like the idea of a MySQL database because you can add a level of security (not all images can be viewed by everyone), if that's a need of course. It also makes your hard-coded links problem less of a problem, since all links point to a single file which retrieves the images from the DB, and the only things that change are the generated GET variables. This way your project becomes significantly more scalable, and a design change becomes easier.
Sorry this is late, but there is a way to address this issue without creating any files, storing any data of any kind, or keeping track of anything, so I thought I'd share how I address things like this.
Goal
Set up an idempotent solution that efficiently optimizes images without dependencies that require keeping track of its current status.
Why
This allows for a truly portable solution that can work in a new environment, an environment that somehow lost its tracker, or an environment that is sensitive about which files you can actually save there.
Diagnose
Although metadata might be the first place you'd think to check for this information, in some cases it will not be available, and metadata is by nature arbitrary: like comments, it can come and go without affecting the image in any way. We want something more concrete, a definite descriptor of the asset at hand. Ideally you want to identify whether an image has been optimized by examining its actual characteristics.
Strategy
When you optimize an image, you provide options of all sorts in order to reach the final state of optimization. These are the very traits you can later check to conclude whether or not it has in fact been optimized.
Example
Let's say we have a function in our script called optimize($path = ''), and let's assume that part of our optimization does the following:
$ convert /path/to/image.jpg -depth 8 -quality 87 -colors 255 -colorspace sRGB ...
Note that these options are ones that you choose to specify, they will be applied to the image and are properties that can be reviewed later...
$ identify -verbose /path/to/image.jpg
Image: /path/to/image.jpg
  Format: JPEG (Joint Photographic Experts Group JFIF format)
  Mime type: image/jpeg
  Geometry: 1250x703+0+0
  Colorspace: sRGB            <<<<<<
  Depth: 8-bit                <<<<<<
  Channel depth:
    Red: 8-bit
    Green: 8-bit
    Blue: 8-bit
  Channel statistics:
    Pixels: 878750
    Red:
      ...
    Green:
      ...
    Blue:
      ...
  Image statistics:
    Overall:
      ...
  Rendering intent: Perceptual
  Gamma: 0.454545
  Transparent color: none
  Interlace: JPEG
  Compose: Over
  Page geometry: 1250x703+0+0
  Dispose: Undefined
  Iterations: 0
  Compression: JPEG
  Quality: 87                 <<<<<<
  Properties:
    ...
  Artifacts:
    ...
  Number pixels: 878750
As you can see here, the output has quite literally everything I would want to know to determine whether or not I should optimize this image, and gathering it is cheap compared to re-running the optimizers.
Conclusion
When you iterate through a list of files in a folder, you can do so as many times as you like without worrying about over-optimizing the images or keeping track of anything. You simply filter the list down to the extensions you do want to optimize (e.g. .jpg, .png), then check each file's stats to see if it already has the attributes your function would apply in the first place. If it has the same values, skip it; if not, optimize.
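A minimal sketch of that check, assuming the optimize() pass applies -depth 8, -quality 87 and -colorspace sRGB as above; %q, %Q and %[colorspace] are identify's format specifiers for depth, quality and colorspace:

    <?php
    // Skip images that already carry the traits our optimizer would apply.
    function alreadyOptimized($path)
    {
        $out = shell_exec('identify -format "%q %Q %[colorspace]" ' . escapeshellarg($path));
        if (!$out) {
            return false; // could not inspect, so err on the side of optimizing
        }
        list($depth, $quality, $colorspace) = explode(' ', trim($out));
        return (int) $depth === 8 && (int) $quality === 87 && $colorspace === 'sRGB';
    }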
Advanced
If you want to get extremely efficient, you could check each attribute of the image that you plan on optimizing, and in your optimization run include only the options that have not already been applied.
Note
This technique is obviously meant to show an example of how you can accurately determine whether or not an image needs to be optimized. The options listed above are not the complete set of elements that can be chosen; there are a variety of available options, and you can apply and check for as many as you want.
Related
I'm currently trying to speed up the websites we develop. The part I'm working on now is to optimise the images so that they are as small (filesize, not dimensions) as possible without losing quality.
Our customers can upload their own images to the website through our custom CMS, but images aren't being compressed or optimised at all. My superior explained this is because the customers can upload their own images, and these images could be optimised beforehand through Photoshop or tools like it. If you optimise already optimised images, the quality would get worse. ...right?
We're trying to find a solution that won't require us to install a module or anything. We already use imagejpeg(), imagepng() and imagegif(), but we don't use the $quality parameter for the reasons previously explained. I've seen some snippets, but they all use imagejpeg() and the like.
That being said, is there a sure-fire way of optimising images without the risk of optimising previously optimised images? Or would it be no problem at all to use imagejpeg(), imagepng() and imagegif(), even if it would mean optimising already optimised images?
Thank you!
"If you optimise already optimised images, the quality would get worse. "
No if you use a method without loose.
I don't know for method directly in php but if you are on linux server you can use jpegtran or jpegoptim ( with --strip-all) for jpeg and OptiPNG or PNGOUT for png.
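A short sketch of shelling out to those tools from PHP (assuming they are installed and on the PATH; the path is a placeholder):

    <?php
    // Route each file to the matching lossless optimizer.
    $path = '/path/to/image.jpg';
    $file = escapeshellarg($path);

    switch (strtolower(pathinfo($path, PATHINFO_EXTENSION))) {
        case 'jpg':
        case 'jpeg':
            exec("jpegoptim --strip-all $file"); // lossless unless a max quality is set
            break;
        case 'png':
            exec("optipng -o2 $file");           // recompresses; pixel data unchanged
            break;
    }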
Going from your title, I am going to assume you mean compression.
So, let's say a normal JPG of 800x600 is uploaded by your customers.
The customer's JPG is 272 kB because it has full details and everything.
You need to set thresholds for what file size is acceptable at a given set of dimensions.
Like:
list($w, $h) = getimagesize($image);
if ($w == 800 && $h == 600 && pathinfo($image, PATHINFO_EXTENSION) == 'jpg' && filesize($image) > 68 * 1024) {
    schedule_for_compression($image); // hypothetical queue function
}
and that way you set up parameters for what is acceptable as an upper limit of file size. If the dimensions match and the file size is bigger, then it's not optimised.
But without knowing in more detail what exactly you mean by optimising, this is the only thing I can think of.
If you only have a low number of images to compress, you might find an external service such as https://tinypng.com/developers of assistance.
I've used their online tools for manually reducing the file size of both JPG and PNG files, but they do appear to offer a free API service for the first 500 images per month.
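If you go that route, their PHP client (the tinify Composer package) keeps the code short; a sketch, with a placeholder API key and file names:

    <?php
    // Compress a PNG through the TinyPNG API using the official client.
    require 'vendor/autoload.php';

    \Tinify\setKey('YOUR_API_KEY');
    \Tinify\fromFile('unoptimized.png')->toFile('optimized.png');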
Apologies if this would be better as a comment than an answer; I'm fairly new to Stack Overflow and haven't got enough points yet, but I felt this may be a handy alternative solution.
Saving a JPEG with the same or higher quality setting will not result in a noticeable loss in quality. Just re-save with your desired quality setting. If the file ends up larger, just discard it and keep the original. Remove metadata using jpegtran or jpegoptim before you optimize so it doesn't affect the file size when you compare to the original.
PNG and GIF won't lose any quality unless you reduce the number of colors. Just use one of the optimizers Gyncoca mentioned.
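A small sketch of the "re-save and keep whichever is smaller" idea using GD, assuming JPEG input; the quality value of 85 is an arbitrary choice:

    <?php
    // Re-save the JPEG and keep the new file only if it is actually smaller.
    $path = '/path/to/image.jpg';
    $tmp  = $path . '.tmp';

    imagejpeg(imagecreatefromjpeg($path), $tmp, 85);

    if (filesize($tmp) < filesize($path)) {
        rename($tmp, $path);  // smaller: replace the original
    } else {
        unlink($tmp);         // larger: discard and keep the original
    }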
I've been doing some speed optimization on my site using Page Speed and it gives recommendations like so:
Optimizing the following images could reduce their size by 35.3KiB (21% reduction).
Losslessly compressing http://example.com/.../some_image.jpg could save 25.3KiB (55% reduction).
How are the calculations done to get the file size reduction numbers? How can I determine if an image is optimized in PHP?
As I understand it, they seem to base this on the image quality (so saving at 60% in Photoshop or so is considered optimized?).
What I would like to do is once an image is uploaded, check if the image is fully optimized, if not, then optimize it using a PHP image library such as GD or ImageMagick. If I'm right about the number being based on quality, then I will just reduce the quality as needed.
How can I determine if an image is fully optimized in the standards that Page Speed uses?
Chances are they are simply using a standard compression level or working from some very simple rules to estimate image compression/quality. It isn't exactly what you were after, but what I often use on uploaded images and other dynamic content is a class called SimpleImage:
http://www.white-hat-web-design.co.uk/blog/resizing-images-with-php/
This will give you options to resize and adjust compression, and I think even change the image type (by which I mean .jpg to .png or .gif, anything you like). I worked in SEO, and page optimization was a huge part of my job; I generally tried to make the images exactly the size they needed to be, no smaller or bigger. Compress JS & CSS as well, and that's really all most people need to worry about.
It looks like you could use the PageSpeed Insights API to get the compressions scores: https://developers.google.com/speed/docs/insights/v1/reference, though I'm guessing you'd want to run a quality check/compression locally, rather than sending everything through this API.
It also looks like they've got a standalone image optimizer available at http://code.google.com/p/page-speed/wiki/DownloadPageSpeed?tm=2, though this appears to be a Windows binary. They've got an SDK there as well, though I haven't looked into what it entails, or how easy it would be to integrate into an existing site.
What is the best way in PHP to determine if a PDF is filled out correctly? The source PDF is a faxed form that contains handwritten data. Is an image comparison an option? If the form is filled out on a computer, I know I can use pdftotext to verify that the fields are completed or not. I just don't know how to verify handwritten data.
For hand-written data an image comparison may definitely be an option. See for example the following answer for a basic idea how to start tackling this task:
Imagemagick : “Diff” an Image
However, the job may be much more difficult when faxed images come into play. We all know how bad the quality of a fax can be. Also, faxes frequently are skewed by a small degree, and they may be slightly scaled compared to the original. Not to forget that their resolution is 204x196dpi, which adds a bit of distortion. And lastly: how do you get the faxed form back into PHP? This might involve another step of scanning in the paper, which again will not necessarily add quality to the result.
Still, ImageMagick may be able to handle all this: it can -deskew images, it can reduce or completely remove -noise, and it can -distort, -scale and -repage images, and much more...
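For instance, a deskew-then-diff pass could look like this from PHP; the AE metric counts differing pixels, and the -fuzz percentage (a guess here) absorbs some fax noise:

    <?php
    // Straighten the fax, then count pixels that differ from the blank form.
    exec('convert faxed_form.png -deskew 40% deskewed.png');
    // compare prints the metric on stderr, hence the 2>&1 redirect.
    exec('compare -metric AE -fuzz 15% blank_form.png deskewed.png diff.png 2>&1', $out);
    $differingPixels = (int) trim(implode(' ', $out));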
Let me explain what I am trying to do. I am building a command line tool with PHP that uses these programs to optimize images for the web...
imagemagick to determine file type and convert non-animated GIFs to PNG
gifsicle to optimize animated GIF images
jpegtran to optimize JPG images
pngcrush to optimize PNG images
pngquant to optimize PNG images to PNG8 format
pngout to optimize PNG images to PNG8 format
This is a pretty heavy process to be running. Luckily it is done very infrequently, but I would still like to optimize it as much as I can.
Right now, processing around 12 images takes roughly 76 seconds. So you can see it is a slow process; imagine 100 images.
I would really like to somehow mark an image as optimized. When I load a batch of images, the first thing the tool does is run each one through ImageMagick to get the exact file type. It would be nice if I could somehow embed a message saying "this image is already optimized as much as it can be"; then, when reading images in, if the tool detects that message it knows not to waste valuable time running that particular image through all the other programs. If this is possible, it could greatly increase speed.
Please help me; I am not used to working with images like this. Is this even possible, and if it is, what is it called and how could I achieve it?
Thanks for any help
If you were to include a flag in the image itself, then that flag would be served to clients. It would add to the file size of your images, thus negating some of your optimisation.
Suggestions
Keep a reference of the status
Keep a catalog in a file in the same directory - much like the Windows Thumbs.db file.
Another option would be to keep the record in a database or datastore such as Redis or Memcached.
Move after processing
You could move the files to a different directory once they are processed (as #Jordan mentions).
Change the filename to indicate it is processed
Another option would be to append an extra "extension" onto the file name for example:
my_image.processed.jpg
Embedding data in images
Steganography
Usually this is used to covertly hide data in an image, and it is called steganography. It is not really suited to this use case, however.
EXIF data
You could write it into the EXIF data of an image, but this would be JPEG and TIFF only as far as I am aware. There is a PHP library available called PEL for writing and reading EXIF data.
You could use the Comment field to tag your image as already optimised, like this:
convert x.jpg -set comment "Optimised" x.jpg
Then, when you are processing, you can extract it like this:
identify -format "%c" x.jpg
Optimised
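Wiring that into the PHP tool could look like this (a sketch; "Optimised" is just the flag string used above, and the file path is a placeholder):

    <?php
    // Check the comment flag before spending time on the heavy optimizers.
    $file    = '/path/to/image.jpg';
    $comment = trim((string) shell_exec('identify -format "%c" ' . escapeshellarg($file)));

    if ($comment !== 'Optimised') {
        // ... run the optimizer chain, then set the flag:
        exec('convert ' . escapeshellarg($file) . ' -set comment "Optimised" ' . escapeshellarg($file));
    }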
I have one basic question. I have a project where I need multiple sizes of one picture.
Yes... during uploading you make thumbnails... and so on... I know this story... performance vs. storage trade-offs.
So I save the original image and make 2 thumbnail copies, for example with max widths of 100px and 200px, respecting the aspect ratio.
Now I need to show the image at 150px max width, so I take the saved 200px image and...
I use getimagesize() to calculate the display width and height with respect to the ratio,
or I set max-width and max-height and leave it to the browser (the browser scales it for me),
or I set the width and keep height: auto (but I also want to limit the max height).
So actually I use PHP and getimagesize(), but this function hits the file every time, and that worries me. When you process 1 image it is OK, but what about 20 or 100?
And... another idea: while uploading, I could also save the size information to the DB, but then I would have to save data for 3 images (now only the original one), which complicates everything.
So... any ideas? What is your practice? Thanks.
Two images at a maximum, a thumbnail and the original, are sufficient. Make sure that your upload page is well secured, because I've seen a website taken down through DoS (abuse of an unprotected image-resizing page). Also limit the maximum upload size to prevent abuse.
You can use the max-width and max-height CSS properties to limit the size of your images.
My approach
I wrote a pretty simple gallery application in PHP a while ago, and this is how it works:
The images are stored in a folder with subfolders representing albums (and subalbums). They are uploaded via FTP and the webserver only has read-permissions on them.
For each image there are three versions:
a full one (the original)
a "mid" one (1024x768px max)
a "thumb" one (250x250px max)
All requests for images by the browser are served by php, and not-yet-existing versions are generated on the fly. The actual data is served through X-Sendfile, but that's an implementation detail.
I store the smaller versions in separate directories. When given a path to an original image, it is trivial to find the corresponding downscaled files (and check for existence and modification times).
Thoughts on your problem
Scaling images using HTML / CSS is considered bad practice for two simple reasons: if you scale up, you get a blurred image; if you scale down, you waste bandwidth and make your page slower for no good reason. So don't do it.
It should be possible to determine a pretty small set of required versions of each file (for example those used in a layout, as in my case). Depending on the size and requirements of your project, there are a few possibilities for creating those versions (a sketch of the first option follows the list):
on the fly: generate / update them, when they are requested
during upload: have the routine that is called during the upload-process do the work
in the background: have the upload-routine add a job to a queue that is worked on in the background (probably most scalable but also fairly complex to implement and deploy)
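A sketch of the "on the fly" option with GD, assuming JPEG sources and a writable cache/ directory (both assumptions):

    <?php
    // Return a cached, downscaled copy, generating it on first request.
    function thumbnail($src, $maxW, $maxH)
    {
        $cached = 'cache/' . $maxW . 'x' . $maxH . '_' . basename($src);

        if (!is_file($cached) || filemtime($cached) < filemtime($src)) {
            list($w, $h) = getimagesize($src);
            $scale = min($maxW / $w, $maxH / $h, 1); // never upscale
            $nw = (int) round($w * $scale);
            $nh = (int) round($h * $scale);

            $dst = imagecreatetruecolor($nw, $nh);
            imagecopyresampled($dst, imagecreatefromjpeg($src),
                0, 0, 0, 0, $nw, $nh, $w, $h);
            imagejpeg($dst, $cached, 85);
        }
        return $cached;
    }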
Scaling down large images is a pretty slow operation (usually taking a few seconds). You might want to throttle it somehow to prevent abuse / DoS. Also limit dimensions and file size: a 100 MP (or even bigger) plain white (or any color) JPG might be very small when compressed, but will use an awful lot of RAM during scaling. Big PNGs also take really long to decompress (and even longer to compress).
For a small website it doesn't matter which approach you choose; something that works (even if it doesn't scale) will do. If you expect a good amount of traffic and a steady stream of uploads, then choose wisely and benchmark carefully.