How to speed up complex image processing? - php

Every user will be able to upload 100 TIFF (black and white) images.
The process requires:
1. Convert TIFF to JPEG.
2. Resize the image to xx.
3. Crop the image to 200px.
4. Add a text watermark.
Here is my PHP code:
move_uploaded_file($image_temp,$destination_folder.$image_name);
$image_name_only = strtolower($image_info["filename"]);
$name=$destination_folder.$image_name_only.".jpg";
$thumb=$destination_folder."thumb_".$image_name_only.".jpg";
$exec = '"C:\Program Files\ImageMagick-6.9.0-Q16\convert.exe" '.$destination_folder.$image_name. ' '.$name.' 2>&1';
exec($exec, $exec_output, $exec_retval);
$exec = '"C:\Program Files\ImageMagick-6.9.0-Q16\convert.exe" '.$name. ' -resize 1024x '.$name;
exec($exec, $exec_output, $exec_retval);
$exec = '"C:\Program Files\ImageMagick-6.9.0-Q16\convert.exe" '.$name. ' -thumbnail 200x200! '.$thumb;
exec($exec, $exec_output, $exec_retval);
$exec = '"C:\Program Files\ImageMagick-6.9.0-Q16\convert.exe" '.$name. " -background White label:ش.پ12355 -append ".$name;
exec($exec, $exec_output, $exec_retval);
This code works. But the average processing time for every image is 1 second.
So for 100 images it will probably take around 100 seconds.
How can I speed up this whole process (convert, resize, crop, watermark)?
EDIT
I have a G8 server: RAM: 32 GB, CPU: Intel Xeon E5-2650 (4 processors)
Version: ImageMagick 6.9.0-3 Q16 x64
FEATURES: OpenMP
convert logo: -resize 500% -bench 10 1.png
Performance[1]: 10i 0.770ips 1.000e 28.735u 0:12.992
Performance[2]: 10i 0.893ips 0.537e 26.848u 0:11.198
Performance[3]: 10i 0.851ips 0.525e 27.285u 0:11.756
Performance[4]: 10i 0.914ips 0.543e 26.489u 0:10.941
Performance[5]: 10i 0.967ips 0.557e 25.803u 0:10.341
Performance[6]: 10i 0.797ips 0.509e 27.737u 0:12.554
Performance[7]: 10i 0.963ips 0.556e 25.912u 0:10.389
Performance[8]: 10i 0.863ips 0.529e 26.707u 0:11.586
Resource limits:
Width: 100MP; Height: 100MP; Area: 17.16GP; Memory: 7.9908GiB; Map: 15.982GiB; Disk: unlimited; File: 1536; Thread: 8; Throttle: 0; Time: unlimited

0. Two approaches
Basically, this challenge can be tackled in two different ways, or a combination of the two:
Construct your commands as cleverly as possible.
Trade speed-up gains for quality losses.
The next few sections discuss both approaches.
1. Check which ImageMagick you've got: 'Q8', 'Q16', 'Q32' or 'Q64'?
First, check for your exact ImageMagick version and run:
convert -version
In case your ImageMagick has a Q16 (or even Q32 or Q64, which is possible, but overkill!) in its version string:
This means, all of ImageMagick's internal functions treat all images as having 16 bit (or 32 or 64 bit) channel depths.
This gives you a better quality in image processing.
But it also requires twice the memory compared to Q8.
So at the same time it means a performance degradation.
Hence: you could test what performance benefits you'll achieve by switching to a Q8-build.
(The Q stands for the 'quantum depth' supported by an ImageMagick build.)
You'll pay your possible Q8-performance gains with quality loss, though.
Just check what speed up you achieve with Q8 over Q16, and what quality losses you suffer.
Then decide whether you can live with the drawbacks or not...
In any case Q16 will use twice as much RAM per image to process, and Q32 will again use twice the amount of Q16.
This is independent from the actual bits-per-pixels seen in the input files.
16-bit image files, when saved, will also consume more disk space than 8-bit ones.
With Q16 or Q32 requiring more memory, you always have to ensure that you have enough of it.
Because exceeding your physical memory would be very bad news.
If a larger Q makes a process swap to disk, performance will plummet.
A 1024 x 768 pixel image (width x height) will require the following amounts of virtual memory, depending on the quantum depth:
Quantum    Virtual Memory
Depth      (consumed by one 1024x768 image)
-------    --------------------------------
   8         3,840 kiB  (=~  3.75 MiB)
  16         7,680 kiB  (=~  7.50 MiB)
  32        15,360 kiB  (=~ 15.00 MiB)
Also keep in mind, that some 'optimized' processing pipelines (see below) will need to keep several copies of an image in virtual memory!
Once virtual memory cannot be satisfied by available RAM, the system will start to swap and claim "memory" from the disk.
In that case, all clever command pipeline optimization is of course gone, and the effect turns into the very reverse.
ImageMagick's birthday was in the era when CPUs could handle only 8 bits at a time.
That was decades ago.
Since then CPU architecture has changed a lot.
16-bit operations used to take twice as long as 8-bit operations, or even longer.
Then 16-bit processors arrived.
16-bit ops became standard.
CPUs were optimised for 16-bit:
Suddenly some 8-bit operations could take even longer than 16-bit equivalents.
Nowadays, 64-bit CPUs are common.
So the Q8 vs. Q16 vs. Q32 argument in real terms may even be void.
Who knows?
I'm not aware of any serious benchmarking about this.
It would be interesting if someone (with really deep know-how about CPUs and about benchmarking real-world programs) were to run such a project one day.
Yes, I see you are using Q16 on Windows.
But I still wanted to mention it, for completeness' sake...
In the future there will be other users reading this question and the answers given.
Very likely, since your input TIFFs are black+white only, the image quality output of a Q8 build will be good enough for your workflow.
(I just don't know if it would also be significantly faster:
this largely also depends on the hardware resources you are running this on...)
In addition, if your installation sports HDRI support (high dynamic range imaging), this may also cause some speed penalty.
Who knows?
So building IM with configure options --disable-hdri --with-quantum-depth=8 may or may not lead to speed improvements.
Nobody has ever tested this in a serious way...
The only thing we know about this:
these options will decrease image quality.
However most people will not even notice this, unless they take really close looks and make direct image-by-image comparisons...
2. Check your ImageMagick's capabilities
Next, check if your ImageMagick installation comes with OpenCL and/or OpenMP support:
convert -list configure | grep FEATURES
If it does (like mine), you should see something like this:
FEATURES DPC HDRI OpenCL OpenMP Modules
OpenCL (short for Open Computing Language) utilizes ImageMagick's parallel computing features (if compiled in).
This will make use of your computer's GPU in addition to the CPU for image processing operations.
OpenMP (short for Open Multi-Processing) does something similar:
it allows ImageMagick to execute in parallel on all the cores of your system.
So if you have a quad-core system, and resize an image, the resizing happens on 4 cores (or even 8 if you have hyperthreading).
The command
convert -version
prints some basic info about supported features.
If OpenCL/OpenMP are available, you will see one of them (or both) in the output.
If none of the two show up:
look into getting the most recent version of ImageMagick that has OpenCL and/or OpenMP support compiled in.
If you build the package yourself from the sources, make sure OpenCL/OpenMP are used.
Do this by including the appropriate parameters into your 'configure' step:
./configure [...other options...] --enable-openmp --enable-opencl
ImageMagick's documentation about OpenMP and OpenCL is here:
Parallel Execution With OpenMP.
Read it carefully.
Because OpenMP is not a silver bullet, and it does not work under all circumstances...
Parallel Execution With OpenCL.
The same as above applies here.
Additionally, not all ImageMagick operations are OpenCL-enabled.
The link here has a list of those which are.
-resize is one of them.
Hints and instructions to build ImageMagick from sources and configure the build, explaining various options, are here:
ImageMagick Advanced Unix Installation
This page also includes a short discussion of the --with-quantum-depth configure option.
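If you are scripting these checks from PHP anyway, the FEATURES line can be fished out like this (a sketch, assuming convert is on the PATH):
exec('convert -list configure 2>&1', $lines);
foreach ($lines as $line) {
    if (strpos($line, 'FEATURES') === 0) {
        echo $line, PHP_EOL;    // e.g. "FEATURES      DPC HDRI OpenCL OpenMP Modules"
    }
}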
3. Benchmark your ImageMagick
You can now also use the builtin -bench option to make ImageMagick run a benchmark for your command.
For example:
convert logo: -resize 500% -bench 10 logo.png
[....]
Performance[4]: 10i 1.489ips 1.000e 6.420u 0:06.510
Above command with -resize 500% tells ImageMagick to run the convert command and scale the built-in IM logo: image by 500% in each direction.
The -bench 10 part tells it to run that same command 10 times in a loop and then print the performance results:
Since I have OpenMP enabled, I have 4 threads (Performance[4]:).
It reports that it ran 10 iterations (10i).
The speed was nearly 1.5 iterations per second (1.489ips).
Total user-allotted time was 6.420 seconds.
If your result includes Performance[1]:, and only one line, then your system does not have OpenMP enabled.
(You may be able to switch it on, if your build does support it: run convert -limit thread 2.)
4. Tweak your ImageMagick's resource limits
Find out how your system's ImageMagick is set up regarding resource limits.
Use this command:
identify -list resource
File Area Memory Map Disk Thread Time
--------------------------------------------------------------------
384 8.590GB 4GiB 8GiB unlimited 4 unlimited
Above shows my current system's settings (not the defaults -- I did tweak them in the past).
The numbers are the maximum amount of each resource ImageMagick will use.
You can use each of the keywords in the column headers to pimp your system.
For this, use convert -limit <resource> <number> to set it to a new limit.
Maybe your result looks more like this:
identify -list resource
File Area Memory Map Disk Thread Time
--------------------------------------------------------------------
192 4.295GB 2GiB 4GiB unlimited 1 unlimited
The file limit defines the maximum number of concurrently opened files which ImageMagick can use.
The memory, map, area and disk resource limits are defined in bytes.
For setting them to different values you can use SI prefixes, e.g. 500MB.
When you do have OpenMP for ImageMagick on your system, you can run:
convert -limit thread 2
This enables 2 parallel threads as a first step.
Then re-run the benchmark and see if it really makes a difference, and if so how much.
After that you could set the limit to 4 or even 8 and repeat the exercise....
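If you want to automate that exercise from PHP, here is a rough sketch (logo: is ImageMagick's built-in test image; -bench reports to stderr, hence the 2>&1):
foreach ([1, 2, 4, 8] as $threads) {
    $out = [];
    exec(sprintf('convert -limit thread %d logo: -resize 500%% -bench 10 null: 2>&1', $threads), $out);
    echo "threads=$threads -> " . end($out) . PHP_EOL;   // last line is the Performance summary
}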
5. Use Magick Pixel Cache (MPC) and/or Magick Persistent Registry (MPR)
Finally, you can experiment with a special internal format of ImageMagick's pixel cache.
This format is called MPC (Magick Pixel Cache).
It only exists in memory.
When MPC is created, the processed input image is kept in RAM as an uncompressed raster format.
So basically, MPC is the native in-memory uncompressed file format of ImageMagick.
It is simply a direct memory dump to disk.
A read is a fast memory map from disk to memory as needed (similar to memory page swapping).
But no image decoding is needed.
(More technical details: MPC as a format is not portable.
It also isn't suitable as a long-term archive format.
Its only suitability is as an intermediate format for high-performance image processing.
It requires two files to support one image.)
If you still want to save this format to disk, be aware of this:
Image attributes are written to a file with the extension .mpc.
Image pixels are written to a file with the extension .cache.
Its main advantage is experienced when...
...processing very large images, or when
...applying several operations on one and the same image in "operation pipelines".
MPC was designed especially for workflow patterns which match the criteria "read many times, write once".
Some people say that for such operations the performance improves here, but I have no personal experience with it.
Convert your base picture to MPC first:
convert input.jpeg input.mpc
and only then run:
convert input.mpc [...your long-long-long list of crops and operations...]
Then see if this saves you significantly on time.
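From PHP, that two-step approach could look like this (a sketch; file names are placeholders, and note that input.mpc comes with a companion input.cache file on disk):
exec('convert input.jpeg input.mpc 2>&1');                          // write the MPC once
exec('convert input.mpc -resize 1024x image-1024.jpg 2>&1');        // all further ops read the MPC
exec('convert input.mpc -thumbnail 200x200 image-thumb.jpg 2>&1');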
Most likely you can use this MPC format even "inline" (using the special mpc: notation, see below).
The MPR format (Magick Persistent Registry) does something similar.
It reads the image into a named memory register.
Your process pipeline can also read the image again from that register, should it need to access it multiple times.
The image persists in the register until the current command pipeline exits.
But I've never applied this technique to a real world problem, so I can't say how it works out in real life.
6. Construct a suitable IM processing pipeline to do all tasks in one go
As you describe your process, it is composed of 4 distinct steps:
Convert a TIFF to a JPEG.
Resize the JPEG image to xx (?? what value ??)
Crop the JPEG to 200px.
Add a text watermark.
Please tell me if I understand your intentions correctly from reading your code snippets:
You have 1 input file, a TIFF.
You want 2 final output files:
1 thumbnail JPEG, sized 200x200 pixels;
1 labelled JPEG, with a width of 1024 pixels (height keeping aspect ratio of input TIFF);
1 (unlabelled) JPEG is only an intermediate file which you do not really want to keep.
Basically, each step uses its own command -- 4 different commands in total.
This can be sped up considerably by using a single command pipeline which performs all the steps on its own.
Moreover, you do not seem to really need the unlabelled JPEG as an end result -- yet your commands generate it as an intermediate temporary file and save it to disk. We can try to skip this step altogether and achieve the final results without this extra write to disk.
There are different approaches possible to this change.
I'll show you (and other readers) only one for now -- and only for the CLI, not for PHP.
I'm not a PHP guy -- it's your own job to 'translate' my CLI method into appropriate PHP calls.
(But by all means: please test with my commands first, really using the CLI, to see if the effort is worthwhile before translating the approach to PHP!)
But please first make sure that you really understand the architecture and structure of more complex ImageMagick command lines!
For this goal, please refer to this other answer of mine:
ImageMagick Command-Line Option Order (and Categories of Command-Line Parameters)
Your 4 steps translate into the following individual ImageMagick commands:
convert image.tiff image.jpg
convert image.jpg -resize 1024x image-1024.jpg
convert image-1024.jpg -thumbnail 200x200 image-thumb.jpg
convert -background white image-1024.jpg label:12345 -append image-labelled.jpg
Now to transform this workflow into one single pipeline command...
The following command does this.
It should execute faster (regardless of what your results are when following my above steps 0.--4.):
convert image.tiff \
-respect-parentheses \
+write mpr:XY \
\( mpr:XY -resize 1024x +write image-1024.jpg \) \
\( mpr:XY -thumbnail 200x200 +write image-thumb.jpg \) \
\( mpr:XY -resize 1024x -background white label:12345 -append +write image-labelled.jpg \) \
null:
Explanations:
-respect-parentheses :
required to make the sub-commands executed inside the \( .... \) parentheses really independent from each other.
+write mpr:XY :
used to write the input file to an MPR memory register.
XY is just a label (you can use anything), needed to later re-call the same image.
+write image-1024.jpg :
writes result of subcommand executed inside the first parentheses pair to disk.
+write image-thumb.jpg :
writes result of subcommand executed inside the second parentheses pair to disk.
+write image-labelled.jpg :
writes result of subcommand executed inside the third parentheses pair to disk.
null: :
terminates the command pipeline.
Required because otherwise the pipeline would end with the last subcommand's closing parenthesis.
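To spare you a little of the 'translation' work mentioned above, here is a minimal, untested PHP sketch of that very pipeline, reusing the variable names from your question ($labelled is a new, hypothetical path for the labelled output, and the label text 12345 is a placeholder for your real one):
$src      = escapeshellarg($destination_folder . $image_name);   // input TIFF
$large    = escapeshellarg($name);                               // 1024px-wide JPEG
$thumbjpg = escapeshellarg($thumb);                              // 200x200 thumbnail
$labelled = escapeshellarg($destination_folder . $image_name_only . "_labelled.jpg");
// Note: the backslash-escaped \( ... \) parentheses below are for a Unix shell;
// on Windows (cmd.exe) use plain ( ... ) and the full path to convert.exe instead.
$cmd = "convert $src -respect-parentheses +write mpr:XY "
     . "\\( mpr:XY -resize 1024x +write $large \\) "
     . "\\( mpr:XY -thumbnail 200x200 +write $thumbjpg \\) "
     . "\\( mpr:XY -resize 1024x -background white label:12345 -append +write $labelled \\) "
     . "null: 2>&1";
exec($cmd, $exec_output, $exec_retval);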
7. Benchmarking 4 individual commands vs. the single pipeline
In order to get a rough feeling about my suggestion, I did run the commands below.
The first one runs the sequence of the 4 individual commands 100 times (and saves all resulting images under different file names).
time for i in $(seq -w 1 100); do
convert image.tiff \
image-indiv-run-${i}.jpg
convert image-indiv-run-${i}.jpg -resize 1024x \
image-1024-indiv-run-${i}.jpg
convert image-1024-indiv-run-${i}.jpg -thumbnail 200x200 \
image-thumb-indiv-run-${i}.jpg
convert -background white image-1024-indiv-run-${i}.jpg label:12345 -append \
image-labelled-indiv-run-${i}.jpg
echo "DONE: run indiv $i ..."
done
My result for 4 individual commands (repeated 100 times!) is this:
real 0m49.165s
user 0m39.004s
sys 0m6.661s
The second command times the single pipeline:
time for i in $(seq -w 1 100); do
convert image.tiff \
-respect-parentheses \
+write mpr:XY \
\( mpr:XY -resize 1024x \
+write image-1024-pipel-run-${i}.jpg \) \
\( mpr:XY -thumbnail 200x200 \
+write image-thumb-pipel-run-${i}.jpg \) \
\( mpr:XY -resize 1024x \
-background white label:12345 -append \
+write image-labelled-pipel-run-${i}.jpg \) \
null:
echo "DONE: run pipeline $i ..."
done
The result for single pipeline (repeated 100 times!) is this:
real 0m29.128s
user 0m28.450s
sys 0m2.897s
As you can see, the single pipeline is about 40% faster than the 4 individual commands!
Now you can also invest in multi-CPU, much RAM, fast SSD hardware to speed things up even more :-)
But first translate this CLI approach into PHP code...
There are a few more things to be said about this topic.
But my time runs out for now.
I'll probably return to this answer in a few days and update it some more...
Update: I had to update this answer with new numbers for the benchmarking:
initially I had forgotten to include the -resize 1024x operation (stupid me!) into the pipelined version.
Having included it, the performance gain is still there, but not as big any more.
8. Use -clone 0 to copy image within memory
Here is another alternative to try instead of the mpr: approach with a named memory register as suggested above.
It uses (again within 'side processing inside parentheses') the -clone 0 operation.
The way this works is this:
convert reads the input TIFF from disk once and loads it into memory.
Each -clone 0 operator makes a copy of the first loaded image (because it has index 0 in the current image stack).
Each "within-parenthesis" sub-pipeline of the total command pipeline performs some operation on the clone.
Each +write operation saves the respective result to disk.
So here is the command to benchmark this:
time for i in $(seq -w 1 100); do
convert image.tiff \
-respect-parentheses \
\( -clone 0 -thumbnail 200x200 \
+write image-thumb-pipel-run-${i}.jpg \) \
\( -clone 0 -resize 1024x \
-background white label:12345 -append \
+write image-labelled-pipel-run-${i}.jpg \) \
null:
echo "DONE: run pipeline $i ..."
done
My result:
real 0m19.432s
user 0m18.214s
sys 0m1.897s
To my surprise, this is faster than the version which used mpr: !
9. Use -scale or -sample instead of -resize
This alternative will most likely speed up your resizing sub-operation.
But it will likely lead to somewhat worse image quality (you'll have to verify whether the difference is noticeable).
For some background info about the difference between -resize, -sample and -scale see the following answer:
What is the difference between sample/resample/scale/resize/adaptive-resize/thumbnail in ImageMagick convert?
I tried it too:
time for i in $(seq -w 1 100); do
convert image.tiff \
-respect-parentheses \
\( -clone 0 -thumbnail 200x200 \
+write image-thumb-pipel-run-${i}.jpg \) \
\( -clone 0 -scale 1024x \
-background white label:12345 -append \
+write image-labelled-pipel-run-${i}.jpg \) \
null:
echo "DONE: run pipeline $i ..."
done
My result:
real 0m16.551s
user 0m16.124s
sys 0m1.567s
This is the fastest result so far (I combined it with the -clone variant).
Of course, this modification can also be applied to your initial method running 4 different commands.
10. Emulate the Q8 build by adding -depth 8 to the commands
I did not actually run and measure this, but the complete command would be:
time for i in $(seq -w 1 100); do
convert image.tiff \
-respect-parentheses \
\( -clone 0 -thumbnail 200x200 -depth 8 \
+write d08-image-thumb-pipel-run-${i}.jpg \) \
\( -clone 0 -scale 1024x -depth 8 \
-background white label:12345 -append \
+write d08-image-labelled-pipel-run-${i}.jpg \) \
null:
echo "DONE: run pipeline $i ..."
done
This modification is also applicable to your initial "I run 4 different commands"-method.
11. Combine it with GNU parallel, as suggested by Mark Setchell
This of course is only applicable and reasonable for you, if your overall work process allows for such parallelization.
For my little benchmark testing it is applicable.
For your web service, it may be that you know of only one job at a time...
time for i in $(seq -w 1 100); do \
cat <<EOF
convert image.tiff \
\( -clone 0 -scale 1024x -depth 8 \
-background white label:12345 -append \
+write d08-image-labelled-pipel-run-${i}.jpg \) \
\( -clone 0 -thumbnail 200x200 -depth 8 \
+write d08-image-thumb-pipel-run-${i}.jpg \) \
null:
echo "DONE: run pipeline $i ..."
EOF
done | parallel --will-cite
Results:
real 0m6.806s
user 0m37.582s
sys 0m6.642s
The apparent contradiction between user and real time can be explained:
the user time represents the sum of all time ticks which were clocked on 8 different CPU cores.
From the point of view of the user looking at his watch, it was much faster: less than 10 seconds.
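If your workflow is driven from PHP rather than from a shell, you can approximate what GNU parallel does by keeping several convert processes running at once with proc_open(). A rough, untested sketch (folder names, the label text and the degree of parallelism are assumptions; Unix-style shell quoting again):
$jobs = glob('uploads/*.tiff');        // hypothetical input folder
$maxParallel = 4;                      // roughly: the number of CPU cores
$running = [];
while ($jobs || $running) {
    while ($jobs && count($running) < $maxParallel) {   // launch up to the limit
        $tiff = array_shift($jobs);
        $base = pathinfo($tiff, PATHINFO_FILENAME);
        $cmd  = 'convert ' . escapeshellarg($tiff) . ' -respect-parentheses'
              . ' \\( -clone 0 -thumbnail 200x200 -depth 8 +write '
              . escapeshellarg("out/thumb_$base.jpg") . ' \\)'
              . ' \\( -clone 0 -scale 1024x -depth 8 -background white label:12345 -append +write '
              . escapeshellarg("out/labelled_$base.jpg") . ' \\) null:';
        $running[] = proc_open($cmd, [], $pipes);
    }
    foreach ($running as $i => $proc) {                 // reap finished processes
        if (!proc_get_status($proc)['running']) {
            proc_close($proc);
            unset($running[$i]);
        }
    }
    usleep(10000);                                      // don't busy-wait
}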
12. Summary
Pick your own preferences -- combine different methods:
Some speedup can be gained (with identical image quality as currently) by constructing a more clever command pipeline.
Avoid running various commands (where each convert leads to a new process, and has to read its input from disk).
Pack all image manipulations into one single process.
Make use of the "parenthesized side processing".
Make use of -clone or mpr: or mpc: or even combine each of these.
Some additional speedup can be gained by trading image quality for performance:
Some of your choices are:
-depth 8 (has to be declared on the OP's system) vs. -depth 16 (the default on the OP's system)
-resize 1024x vs. -sample 1024x vs. -scale 1024x
Make use of GNU parallel if your workflow permits this.

As always, @KurtPfeifle has provided an excellently reasoned and explained answer, and everything he says is solid advice which you would do well to listen to and follow carefully.
There is a bit more that can be done, though, but it is more than I can add as a comment, so I am putting it as another answer, though it is only an enhancement on Kurt's...
I do not know what size of input image Kurt used, so I made one of 3000x2000 and compared my run times with his to see if they were comparable, since we have different hardware. The individual commands ran in 42 seconds on my machine and the pipelined ones ran in 36 seconds, so I guess my image size and hardware are broadly similar.
I then used GNU Parallel to run the jobs in parallel - I think you will get a lot of benefit from that on a Xeon. Here is what I did...
time for i in $(seq -w 1 100); do
cat <<EOF
convert image.tiff \
-respect-parentheses \
+write mpr:XY \
\( mpr:XY -resize 1024x \
+write image-1024-pipel-run-${i}.jpg \) \
\( mpr:XY -thumbnail 200x200 \
+write image-thumb-pipel-run-${i}.jpg \) \
\( mpr:XY -background white label:12345 -append \
+write image-labelled-pipel-run-${i}.jpg \) \
null:
echo "DONE: run pipeline $i ..."
EOF
done | parallel
As you can see, all I did was echo the commands that need running onto stdout and pipe them into GNU Parallel. Run that way, it takes just 10 seconds on my machine.
I also had a try at imitating the functionality using ffmpeg, and came up with this, which seems pretty similar on my test images - your mileage may vary.
#!/bin/bash
for i in $(seq -w 1 100); do
echo ffmpeg -y -loglevel panic -i image.tif ff-$i.jpg
echo ffmpeg -y -loglevel panic -i image.tif -vf scale=1024:682 ff-$i-1024.jpg
echo ffmpeg -y -loglevel panic -i image.tif -vf scale=200:200 ff-$i-200.jpg
done | parallel
That runs in 7 seconds on my iMac with a 3000x2000 image.tif input file.
I failed miserably to get libjpeg-turbo installed with ImageMagick under homebrew.

I keep hearing from some people that GraphicsMagick (a fork that branched off from ImageMagick quite a few years ago) is significantly faster than ImageMagick.
So I took this opportunity to give it a spin. Hence my second answer.
I ran the following loop of 4 separate gm commands. This makes the results comparable to the 4 separate convert commands documented in my other answer, run on the same machine:
time for i in $(seq -w 1 100); do
gm convert image.tiff gm-${i}-image.jpg
gm convert gm-${i}-image.jpg -resize 1024x gm-${i}-image-1024.jpg
gm convert gm-${i}-image-1024.jpg -thumbnail 200x200 gm-${i}-image-thumb.jpg
gm convert -background white \
gm-${i}-image-1024.jpg label:12345 -append gm-${i}-image-labelled.jpg
echo "GraphicsMagick run no. $i ..."
done
Resulting times:
real 1m4.225s
user 0m51.577s
sys 0m8.247s
This means: for this particular job, and on this machine, my Q8 GraphicsMagick (version 1.3.20 2014-08-16 Q8) is slower, needing 64 seconds, than my Q16 ImageMagick (version 6.9.0-0 Q16 x86_64 2014-12-06), which needed 50 seconds for 100 runs each.
Of course this short test and its results are by no means to be taken as a bullet-proof statement.
You may ask: What else were this machine and its OS doing while conducting each test? Which other apps were loaded into memory at the same time? And so on -- and right you are. But you are now free to run your own tests. One thing you can do to provide almost identical conditions for both tests: run them at the same time in 2 different terminal windows!

I couldn't resist trying this benchmark with libvips. I used these two scripts:
#!/bin/bash
convert image.tiff \
\( -clone 0 -scale 1024x -depth 8 label:12345 -append \
+write d08-image-labelled-pipel-run-$1.jpg \) \
\( -clone 0 -thumbnail 200x200 -depth 8 \
+write d08-image-thumb-pipel-run-$1.jpg \) \
null:
and libvips using the Python interface:
#!/usr/bin/python3
import sys
import pyvips
image = pyvips.Image.thumbnail("image.tiff", 1024)
txt = pyvips.Image.text("12345")
image = image.join(txt, "vertical")
image.write_to_file(f"{sys.argv[1]}-labelled.jpg")
image = pyvips.Image.thumbnail("image.tiff", 200)
image.write_to_file(f"{sys.argv[1]}-thumb.jpg")
Then on a 3000 x 2000 RGB tiff image with IM 6.9.10-86 and vips 8.11, both with libjpeg-turbo, I see:
$ /usr/bin/time -f %M:%e parallel ../im-bench.sh ::: {1..100}
78288:1.83
$ VIPS_CONCURRENCY=1 /usr/bin/time -f %M:%e parallel ../vips-bench.py ::: {1..100}
59512:0.84
So libvips is about twice the speed and uses less memory.
(I originally posted this in 2015. I've updated it in 2021 with code for current libvips and rerun on a modern PC, hence the huge improvement in processing speed compared to the originals above)

Related

How to stylize text with PHP using GD/Imagick?

I use the imagettftext() function to add a caption to the image:
(example image)
Background and captions are separate.
How could I apply a style similar to this one? Is it possible with Imagick?
I need to create a 3D effect for the letters.
(image showing the required style)
I tried using and combining images of individual letters, but I don't think that's a good solution.
Thanks!
If you are willing to use PHP exec(), you can call my Unix Bash Imagemagick script, texteffect2 at my web site http://www.fmwconcepts.com/imagemagick/index.html. It will do bevel and other effects. Here is the script command for doing a bevel:
texteffect2 -t "bevel" -e bevel -s plain -f Ubuntu-Bold -p 200 -c red -bg none result.png
-t is the text you want to use
-e is the effect you want
-s is whether to make it plain or add an outline
-f is the font name or font file
-p is the point size for the font
-c is the text color
-bg is the background color (none means transparent)
Here is the basic Imagemagick code to do the bevel effect:
convert -background none -font Ubuntu-Bold \
-pointsize 200 -fill "red" -gravity west label:"BEVEL" \
\( +clone -alpha Extract -write mpr:alpha -blur 0x8 -shade 135x30 \
-auto-level -function polynomial 3.5,-5.05,2.05,0.25 \
mpr:alpha -compose copy_opacity -composite \) \
-compose Hardlight -composite result.png
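If you go the exec() route, the same convert command can be fired from PHP more or less verbatim (a sketch; single-quoted PHP strings keep the inner double quotes intact):
$cmd = 'convert -background none -font Ubuntu-Bold'
     . ' -pointsize 200 -fill "red" -gravity west label:"BEVEL"'
     . ' \\( +clone -alpha Extract -write mpr:alpha -blur 0x8 -shade 135x30'
     . ' -auto-level -function polynomial 3.5,-5.05,2.05,0.25'
     . ' mpr:alpha -compose copy_opacity -composite \\)'
     . ' -compose Hardlight -composite result.png';
exec($cmd . ' 2>&1', $out, $ret);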
I do not know Imagick that well. So you might look here http://us3.php.net/manual/en/book.imagick.php for the Imagick equivalents or perhaps someone else who uses Imagick can convert it for you.
The method is called Imagick::embossImage; the text should be a separate layer, before merging the layers for output. Just try to manually produce the desired effect with GIMP 2; then you know the steps required to get there. The GIMP ImageMagick plugin also appears useful for testing.

ImageMagick single convert command performance

I have a few thousand images to be processed so each millisecond counts. Each image is ~2-3Mb in size.
Source file fed to the converter:
image.jpg
Files to be generated out of the source:
orig_image.jpg // original image
1024x768_image.jpg // large image
250x250_image.jpg // thumbnail 1
174x174_image.jpg // thumbnail 2
While browsing different topics on ImageMagick convert performance, I got the feeling that a single command should be way faster than an individual convert for each image size. Memory utilization was also mentioned as a performance boost. (ImageMagick batch resizing performance)
Multiple command conversion (each command run via php's exec() in a loop):
convert "image.jpg" \
-coalesce -resize "1024x768>" +repage "1024x768_image.jpg"
convert "1024x768_image.jpg" \
-coalesce \
-resize "250x250>" \
+repage \
-gravity center \
-extent "250x250" "250x250_image.jpg"
convert "1024x768_image.jpg" \
-coalesce \
-resize "174x174>" \
+repage \
-gravity center \
-extent "174x174" "174x174_image.jpg"
mv image.jpg orig_image.jpg
Single command conversion incorporating ImageMagicks mpr:
convert "image.jpg" -quality 85 -colorspace rgb -coalesce \
-resize "1024x768>" \'
-write "1024x768_image.jpg" \
-write mpr:myoriginal +delete \
mpr:myoriginal -coalesce \
-resize "250x250>" \
-gravity center \
-extent "250x250" \
-write "250x250_image.jpg" +delete \
mpr:myoriginal -coalesce \
-resize "174x174>" \
-gravity center \
-extent "174x174" \
-write "174x174_image.jpg"
After performance testing, the results are somewhat unexpected: the single-command convert in a loop finishes in 62 seconds, while the multiple-command conversion executes in just 16 seconds?!
# convert -version
Version: ImageMagick 7.0.2-1 Q8 i686 2017-02-03 http://www.imagemagick.org
Copyright: Copyright (C) 1999-2016 ImageMagick Studio LLC
License: http://www.imagemagick.org/script/license.php
Features: Cipher DPC HDRI OpenMP
Delegates (built-in): bzlib freetype jng jpeg lzma png tiff wmf xml zlib
Also installed the libjpeg-turbo JPEG processing library, but I cannot tell (don't know how to check) if ImageMagick is using it or the old libjpeg.
Any ideas how to speed up image converting process?
Edit:
Don't know how to format it properly here on Stack Overflow, but I just noticed that the single-line command had a "-colorspace rgb" argument and the multi-line commands did not, which actually explains the strange result where the multiple commands are processed faster.
After removing the "-colorspace rgb" argument, the MPR convert version works best and gave an additional boost in performance.
To sum it all up I ended up using this command:
// MPR
convert "orig_image.jpg" -quality 80 -coalesce \
-resize "1024x768>" \
-write 1024x768_image.jpg \
-write mpr:myoriginal +delete \
mpr:myoriginal -resize "250x250>" \
+repage -gravity center -extent "250x250" \
-write "250x250_image.jpg" \
-write mpr:myoriginal +delete \
mpr:myoriginal -coalesce -resize "174x174>" \
+repage -gravity center -extent "174x174" \
-write "174x174_image.jpg"
You're not using jpeg shrink-on-load, that'll give an easy speedup.
The jpeg library has a neat feature where it'll let you decompress at full resolution, at 1/2, 1/4 or 1/8th. 1/8th resolution is especially quick because of the way jpg works internally.
To exploit this in convert you need to hint to the jpeg loader that you need an image of a particular size. To avoid aliasing you should ask for an image at least 200% larger than your target size.
On this machine, I see:
$ vipsheader image.jpg
image.jpg: 5112x3470 uchar, 3 bands, srgb, jpegload
$ time convert image.jpg -resize 1024x768 1024x768_image.jpg
real 0m0.405s
user 0m1.896s
sys 0m0.068s
If I set the shrink-on-load hint, it's about 2x faster:
$ time convert -define jpeg:size=2048x1536 image.jpg -resize 1024x768 1024x768_image.jpg
real 0m0.195s
user 0m0.604s
sys 0m0.016s
You'll see a dramatic speedup for very large jpg files.
You could also consider another thumbnailer. vipsthumbnail, for example, is quite a bit faster again:
$ time vipsthumbnail image.jpg -s 1024x768 -o 1024x768_image.jpg
real 0m0.111s
user 0m0.132s
sys 0m0.024s
Although real time is down by only a factor of 2, user time is down by a factor of 5 or so. This makes it useful to run with gnu parallel. For example:
parallel vipsthumbnail image.jpg -s {} -o {}_image.jpg ::: \
1024x768 250x250 174x174
Eric's and John's suggestions share much wisdom, and can be mixed in with my suggestion - which is to use GNU Parallel. It will REALLY count when you have thousands of images.
I created 100 images (actually using GNU Parallel, but that's not the point) called image-0.jpg through image-99.jpg. I then did a simple resize operation just to show how to do it without getting too hung up on the ImageMagick aspect. First I did it sequentially and it took 48 seconds to resize 100 images, then I did the exact same thing with GNU Parallel and came in under 10 seconds - so there are massive time savings to be made.
#!/bin/bash
# Create a function used by both sequential and parallel versions - it's only fair
doit(){
echo Converting $1 to $2
convert -define jpeg:size=2048x1536 "$1" -resize 1024x768 "$2"
}
export -f doit
# First do them all sequentially - 48 seconds on iMac
time for img in image*.jpg; do
doit $img "seq-$img"
done
# Now do them in parallel - 10 seconds on iMac
time parallel doit {} "par-{}" ::: image*.jpg
Just for kicks - watch the CPU meter (at top right corner of the movie) and the rate the files pop out of GNU Parallel in the last 1/6th of the movie.
It is funny, as I did some conversions a while ago and found mpr: was slower as well.
Anyway try this:
$cmd = " convert "image.jpg" -colorspace rgb -coalesce \( -clone 0 -resize 1024x768> -write 1024x768_image.jpg +delete \)".
" \( -clone 0 -resize 250x250> -gravity center -extent 250x250 -write 250_wide.jpg +delete \) ".
" -resize 174x174> -gravity center -extent 174x174 null: ";
exec("convert $cmd 174x174_image.jpg ");
I notice you do not have a background colour for your extent.
You can also add a -define to your loop method; check the available options in this list: https://www.imagemagick.org/script/command-line-options.php#define
jpeg:size=geometry only reads the amount of data needed to create the image, without reading the whole image. You could probably also add it to your first line. -quality applies to the output and will have no effect where you put it.
$cmd = " -define jpeg:size=1024x768 \"image.jpg\" -colorspace rgb -coalesce \( -clone 0 -resize \"1024x768>\" -write 1024x768_image.jpg +delete \)" ...
(As noted further down, the -define has to come immediately after convert, i.e. before the file name.)
Try Magick Persistent Cache image file format (.mpc) over Magick Persistent Registry (.mpr).
convert "image.jpg" -quality 85 -colorspace rgb myoriginal.mpc
convert myoriginal.mpc \
-resize "1024x768>" \
-write "1024x768_image.jpg" \
-resize "250x250>" \
-gravity center \
-extent "250x250" \
-write "250x250_image.jpg" \
-resize "174x174>" \
-gravity center \
-extent "174x174" \
"174x174_image.jpg"
Which results in the following times when tested with a 1.8 MB JPEG.
real 0m0.051s
user 0m0.133s
sys 0m0.013s
It's true that this takes two commands (although it could be simplified to one with -write ... +delete), but there is very little I/O cost once the .mpc is loaded back into the image stack.
Or
You can probably skip .mpc altogether with ...
convert "image.jpg" -quality 85 -colorspace rgb \
-resize "1024x768>" \
-write "1024x768_image.jpg" \
-resize "250x250>" \
-gravity center \
-extent "250x250" \
-write "250x250_image.jpg" \
-resize "174x174>" \
-gravity center \
-extent "174x174" \
"174x174_image.jpg"
With results...
real 0m0.061s
user 0m0.163s
sys 0m0.012s
ImageMagick has a special resize operator variation named -thumbnail <geometry> for converting very large images to small thumbnails. Internally it uses
-sample to shrink the image down to 5 times the final height which is much faster than -resize if the thumbnail is much smaller than the original image. Because this operator uses a reduced filter set, the -filter operator is ignored!
-strip to remove all profiles from the image which are usually not required for thumbnails. This also further reduces the size of the resulting image file.
-resize to finally create the requested size and ratio
When it comes to creating thumbnails from JPEG images, the special JPEG shrink-on-load option -define jpeg:size=<size> can be used as well, as pointed out by user894763. Be aware that this option has to be specified immediately after convert, e.g.:
convert -define jpeg:size=<size> input-image.jpg ...
Anyway, the -thumbnail operator can additionally be specified then, because it removes all profiles from the thumbnail image and thus reduces the file size.
Detailed information can be found in the ImageMagick usage documentation.
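Putting the two together in PHP (a sketch; file names are placeholders, and 400x400 is simply about twice the 200x200 target, following the 200% rule of thumb mentioned further up):
// -define jpeg:size has to come immediately after convert, before the input file
exec('convert -define jpeg:size=400x400 input.jpg -thumbnail 200x200 thumb.jpg 2>&1', $out, $ret);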
I get about half the time using one command line in ImageMagick 6.9.10.0 Q16 Mac OSX starting with a 3 MB input JPG image.
list="image.jpg"
time for img in $list; do
convert "image.jpg" \
-coalesce -resize "1024x768>" +repage "1024x768_image.jpg"
convert "1024x768_image.jpg" \
-coalesce \
-resize "250x250>" \
+repage \
-gravity center \
-extent "250x250" "250x250_image.jpg"
convert "1024x768_image.jpg" \
-coalesce \
-resize "174x174>" \
+repage \
-gravity center \
-extent "174x174" "174x174_image.jpg"
done
Time: 0m0.952s
time convert "image.jpg" \
-resize "1024x768>" \
+write "1024x768_image.jpg" \
-resize "250x250>" \
-gravity center \
-extent "250x250" \
+write "250x250_image.jpg" \
-resize "174x174>" \
-gravity center \
-extent "174x174" \
"174x174_image.jpg"
Time: 0m0.478s
The -coalesce in your multiple commands is not needed, since JPG does not support a virtual canvas. Removing it reduces the multiple-command time to 0m0.738s.
Multiple commands should take longer, since you have to write and read intermediate images. And since your intermediate images are JPG, you will lose more visual quality each time you write and read them. So the quality from one long command line should be better, too.

Generating a waveform using ffmpeg

I am trying to generate a waveform image using ffmpeg.
I have successfully made a waveform image, however it doesn't look very nice...
I have been looking around to try and style the image to make it look nicer, however I have been unable to find any information on this or any tutorials on this.
I am using PHP and shell_exec to create the waveform.
I am aware that there are php library that can do this but due to file format this is a lengthy process.
The code I am using is as follows:
$command = 'convertvid\bin\ffmpeg -i Temp\\'.$file.' -y -lavfi showwavespic=split_channels=0:s='.$width.'x50 Temp\\'.$PNGFileName;
shell_exec($command);
Basically I would like to add a line through the middle as there are blank spots at the moment and would like to be able to set the background and wave colour.
Default waveform
ffmpeg -i input.wav -filter_complex showwavespic -frames:v 1 output.png
Notes
Notice the segment of silent audio in the middle (see "Fancy waveform" below if you want to see how to add a line).
The background is transparent.
Default colors are red (left channel) and green (right channel) for a stereo input. The color is mixed where the channels overlap.
You can change the channel colors with the colors option, such as "showwavespic=colors=blue|yellow". See a list of valid color names or use hexadecimal notation, such as #ffcc99.
See the showwavespic filter documentation for additional options.
If you want a video instead of an image use the showwaves filter.
Fancy waveform
ffmpeg -i input.mp4 -filter_complex \
"[0:a]aformat=channel_layouts=mono, \
compand=gain=-6, \
showwavespic=s=600x120:colors=#9cf42f[fg]; \
color=s=600x120:color=#44582c, \
drawgrid=width=iw/10:height=ih/5:color=#9cf42f@0.1[bg]; \
[bg][fg]overlay=format=auto,drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=#9cf42f" \
-frames:v 1 output.png
Explanation of options
aformat downsamples the audio to mono. Otherwise, by default, a stereo input would result in a waveform with a different color for each channel (see Default waveform example above).
compand modifies the dynamic range of the audio to make the waveform look less flat. It makes a less accurate representation of the actual audio, but can be more visually appealing for some inputs.
showwavespic makes the actual waveform.
color source filter is used to make a colored background that is the same size as the waveform.
drawgrid adds a grid over the background. The grid does not represent anything, but is just for looks. The grid color is the same as the waveform color (#9cf42f), but opacity is set to 10% (@0.1).
overlay will place [bg] (what I named the filtergraph for the background) behind [fg] (the waveform).
Finally, drawbox will make the horizontal line so any silent areas are not blank.
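Since the question drives ffmpeg from PHP via shell_exec(), the fancy waveform command can be wrapped roughly like this (a sketch; the input and output file names are placeholders):
$filter = '[0:a]aformat=channel_layouts=mono,'
        . 'compand=gain=-6,'
        . 'showwavespic=s=600x120:colors=#9cf42f[fg];'
        . 'color=s=600x120:color=#44582c,'
        . 'drawgrid=width=iw/10:height=ih/5:color=#9cf42f@0.1[bg];'
        . '[bg][fg]overlay=format=auto,'
        . 'drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=#9cf42f';
echo shell_exec('ffmpeg -y -i input.mp4 -filter_complex ' . escapeshellarg($filter)
              . ' -frames:v 1 output.png 2>&1');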
Gradient example
Using gradients filter:
ffmpeg -i input.mp3 -filter_complex "gradients=s=1920x1080:c0=000000:c1=434343:x0=0:x1=0:y0=0:y1=1080,drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=#0000ff[bg];[0:a]aformat=channel_layouts=mono,showwavespic=s=1920x1080:colors=#0068ff[fg];[bg][fg]overlay=format=auto" -frames:v 1 output.png
Color background
ffmpeg -i input.opus -filter_complex "color=c=blue[color];aformat=channel_layouts=mono,showwavespic=s=1280x720:colors=white[wave];[color][wave]scale2ref[bg][fg];[bg][fg]overlay=format=auto" -frames:v 1 output.png
The scale2ref filter automatically makes the background the same size as the waveform.
Image background
Of course you can use an image or video instead for the background:
ffmpeg -i audio.flac -i background.jpg -filter_complex \
"[1:v]scale=600:-1,crop=iw:120[bg]; \
[0:a]showwavespic=s=600x120:colors=cyan|aqua[fg]; \
[bg][fg]overlay=format=auto" \
-q:v 3 showwavespic_bg.jpg
Getting waveform stats and data
Use the astats filter. Many stats are available: RMS, peak, min, max, difference, etc.
RMS level per audio frame
Example to get standard RMS level measured in dBFS per audio frame:
ffprobe -v error -f lavfi -i "amovie=input.wav,astats=metadata=1:reset=1" -show_entries frame_tags=lavfi.astats.Overall.RMS_level -of csv=p=0 > rms.log
Peak level per second
Add the asetnsamples filter.
ffprobe -v error -f lavfi -i "amovie=input.wav,asetnsamples=44100,astats=metadata=1:reset=1" -show_entries frame_tags=lavfi.astats.Overall.Peak_level -of csv=p=0
Same as above but with timestamps
ffprobe -v error -f lavfi -i "amovie=input.wav,asetnsamples=44100,astats=metadata=1:reset=1" -show_entries frame=pkt_pts_time:frame_tags=lavfi.astats.Overall.Peak_level -of csv=p=0
Output to file
Just append > output.log to the end of your command:
ffprobe -v error -f lavfi -i "amovie=input.wav,asetnsamples=44100,astats=metadata=1:reset=1" -show_entries frame_tags=lavfi.astats.Overall.RMS_level -of csv=p=0 > output.log
JSON
ffprobe -v error -f lavfi -i "amovie=input.wav,asetnsamples=44100,astats=metadata=1:reset=1" -show_entries frame_tags=lavfi.astats.Overall.RMS_level -of json > output.json

Conversion PDF to PNG or JPEG is very very slow using ImageMagick

I have a working PDF to PNG conversion script using PHP and ImageMagick but I am having a problem with the speed of the conversion.
I know it works because with a very small PDF the time taken to convert is not that great, but with a 250kb file (still not that large really) it takes in excess of 20 minutes to convert.
Here's the PHP:
//***** GET PATH TO IMAGEMAGICK *****
$path_to_imagemagick = trim(`which convert`);
//***** PATH TO PDF TO CONVERT *****
$path_to_pdf = getcwd() . "/pdf/myfile.pdf[0]";
//***** PATH TO OUTPUT TO *****
$output_path = getcwd() . "/pdfimage/test_converted.png";
exec($path_to_imagemagick . " -density 72 -quality 60 -resize 150x " . $path_to_pdf . " " . $output_path);
Are there any settings I can change to make this quicker?
If it helps, the image does not need to be a PNG. If JPEG is going to be quicker I'm happy to go with that.
ImageMagick cannot convert PDF to raster images by itself at all.
ImageMagick uses a delegate for this job: that delegate is Ghostscript. If you hadn't installed Ghostscript on the same system as ImageMagick, the PDF conversion by convert wouldn't work.
To gain speed, don't use ImageMagick for PDF -> raster image conversion. Instead, use Ghostscript directly (also possible via PHP).
Command line for JPEG output:
gs \
-o ./pdfimage/test_converted.jpg \
-sDEVICE=jpeg \
-dJPEGQ=60 \
-r72 \
-dLastPage=1 \
pdf/myfile.pdf
Command line for PNG output:
gs \
-o ./pdfimage/test_converted.png \
-sDEVICE=pngalpha \
-dLastPage=1 \
-r72 \
pdf/myfile.pdf
Both of these commands will give you unscaled output.
To scale the output down, you may use something like
gs \
-o ./pdfimage/test_converted.png \
-sDEVICE=pngalpha \
-dLastPage=1 \
-r72 \
-dDEVICEWIDTHPOINTS=150 \
-dDEVICEHEIGHTPOINTS=150 \
-dPDFFitPage \
pdf/myfile.pdf
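Called from PHP, following the same pattern as the question's own script, that could look like this (a sketch; it assumes gs is on the PATH):
$path_to_gs  = trim(`which gs`);
$path_to_pdf = getcwd() . "/pdf/myfile.pdf";
$output_path = getcwd() . "/pdfimage/test_converted.png";
exec($path_to_gs . " -o " . escapeshellarg($output_path)
    . " -sDEVICE=pngalpha -dLastPage=1 -r72 "
    . escapeshellarg($path_to_pdf), $out, $ret);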
Also please note: you used a -quality 60 setting for your PNG-outputting command. But -quality for JPEG output and -quality for PNG output have completely different meanings in ImageMagick (and you may not be aware of it). See also this answer for some details about this.

How to create videos from images with php?

Let's say I have 10 images and I want to combine those images in a video like a slideshow.
For example I want to show each image for 5 seconds and then continue with next image for another 5 seconds.
If it's possible, it will be perfect to include music and some descriptive text too.
Is there sample code for this, maybe with the ffmpeg library?
My first thought was to shell out to the ffmpeg command with something like this.
Creating a Video from Images
ffmpeg can be used to stitch several images together into a video.
There are many options, but the following example should be enough to
get started. It takes all images that have filenames of
XXXXX.morph.jpg, where X is numerical, and creates a video called
"output.mp4". The qscale option specifies the picture quality (1 is
the highest, and 32 is the lowest), and the "-r" option is used to
specify the number of frames per second.
ffmpeg -r 25 -qscale 2 -i %05d.morph.jpg output.mp4
(The website that this blurb was taken from is gone. Link
has been removed.)
Where 25 means 25 images per second. You could set this to 1 for a 1-second delay per image, or use decimals, e.g. 0.5 for a 2-second delay.
You can then combine a video and audio stream with something like this.
ffmpeg -i video.mp4 -i audio.mp3 -c:v copy -c:a aac -b:a 128k final.mp4
Of course choose your appropriate codecs. If you want an mp4 use libx264 for video and aac (built into ffmpeg and no longer "experimental") for audio.
Just remember that if you choose to use a method like this that ffmpeg output goes, by default, to stderr for when you try to read it. It can be redirected to stdout if you prefer.
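Wired up from PHP, the two steps above could look roughly like this (a sketch; file names are placeholders, -framerate 1/5 shows each image for 5 seconds as asked, and stderr is redirected so the output is readable from PHP):
// 10 numbered images (img001.jpg ... img010.jpg) plus a music track
$cmd = "ffmpeg -y -framerate 1/5 -i img%03d.jpg -i music.mp3"
     . " -c:v libx264 -r 25 -pix_fmt yuv420p"
     . " -c:a aac -b:a 128k -shortest slideshow.mp4 2>&1";
exec($cmd, $output, $retval);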
The first thing that came to mind for me was imagemagick. I've used it with PHP for a lot of image manipulation and I know it supports reading a decent amount of video formats and according to that link it supports writing to some too.
Yes, ffmpeg is the right solution for you. I recently made something similar: a video site with animated thumbnails. I used ffmpeg to put together images into an animated GIF; however, the output can be whatever you need. Unfortunately, in my searches on this topic I have not found any sample code that combines all the points you are after, so I suppose you will have to experiment with ffmpeg manually. In my project I used PHP Video Toolkit (http://sourceforge.net/projects/phpvideotoolkit/) in some parts to make it a bit easier.
You can use blend effect with ffmpeg:
ffmpeg -framerate 20 \
-loop 1 -t 0.5 -i 1.jpg \
-loop 1 -t 0.5 -i 2.jpg \
-loop 1 -t 0.5 -i 3.jpg \
-loop 1 -t 0.5 -i 4.jpg \
-c:v libx264 \
-filter_complex " \
[1:v][0:v]blend=all_expr='A*(if(gte(T,0.5),1,T/0.5))+B*(1-(if(gte(T,0.5),1,T/0.5)))'[b1v]; \
[2:v][1:v]blend=all_expr='A*(if(gte(T,0.5),1,T/0.5))+B*(1-(if(gte(T,0.5),1,T/0.5)))'[b2v]; \
[3:v][2:v]blend=all_expr='A*(if(gte(T,0.5),1,T/0.5))+B*(1-(if(gte(T,0.5),1,T/0.5)))'[b3v]; \
[0:v][b1v][1:v][b2v][2:v][b3v][3:v]concat=n=7:v=1:a=0,format=yuv420p[v]" -map "[v]" out.mp4
You should check the link below for more ffmpeg effects :D
https://github.com/letungit90/ffmpeg_memo
