I am creating an online business card maker. So far everything was easy until I came to card rendering. In my HTML document I have:
width = 517px, which equals 90mm
height = 287px, which equals 50mm
According to Photoshop, 90mm -> 255px, so I need to somehow convert 517px to 255px.
After some googling, the best solution I've come up with so far is (517 / 5.7) * 3, which gives me 272, and that's far from the correct answer.
Any suggestions would help a lot :)
P.S. I am using PHP GD.
The correlation between pixels and physical measurements depends entirely on the display and how many physical pixels there are per (square) millimeter. 72ppi (pixels per inch) used to be a typical resolution, but with pixel density increasing across many devices, that's not a given anymore.
There's simply no standard formula.
The resulting pixel number depends on the DPI (resolution) of the output media.
Starting with 90mm equaling ~3.54in you get:
72dpi: 255px
96dpi: 340px
120dpi: 425px
146dpi: 517px
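If you want to do the conversion in code, here is a minimal sketch in plain PHP (the function name mmToPx is just for illustration):

<?php
// Convert millimetres to pixels for a given output resolution (DPI).
// 1 inch = 25.4 mm, so px = mm / 25.4 * dpi.
function mmToPx(float $mm, float $dpi): int
{
    return (int) round($mm / 25.4 * $dpi);
}

echo mmToPx(90, 72);  // 255
echo "\n";
echo mmToPx(90, 300); // 1063 - a typical print resolution

So the 517px value corresponds to roughly 146dpi, and converting the same 90mm to a 72dpi target gives the 255px that Photoshop reports.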
If you don't want to do all the calculation yourself, there is a PHP class which can do this for you.
Here is some information about what the library can do:
When laying things out one often has to compute various positions, zoom factors, scalings and the like. When the specific sizes are of a dynamic nature the layout code quickly becomes a confusing, endless series of mathematical equations. You name it: spaghetti code. This library tries to offer two approaches to solve this problem:
- by taking the handling of units from explicit recomputation to implicit consideration, so all you have to do is name what unit a specific value is to be interpreted as
- by offering well-known arithmetical concepts as types, which allows you to think about layout in a more high-level way instead of in simple numeric values.
https://github.com/arkascha/php-urithmetic
I think px divided by 2.02 works perfectly in my case at any page size. In PHP code I use:
<?php $mm = $px / 2.02; ?>
I'm trying to work on an algorithm that will morph one "shape" into another "shape". Both shapes are arbitrary, and may even have smaller, disjointed shapes too.
The basic idea I have so far is as follows: locate the edges of the shape, place points all along those edges, then do the same with the target image, then move the points to their targets.
Here's an illustration:
I just don't know where to start. The image above is a simplification, actual use case has more complex shapes/outlines. My main problem is: How do I handle disjoint shapes? The best I can come up with is to figure out the closest point between the two pieces, and join them together as part of the path. But how would I implement this?
I don't have any code yet, I'm still at the planning phase for this. I guess what I'm asking for is if anyone can link me to any resources that may help, or give any pointers. Searching Google has yielded some interesting morph algorithms, but they all deal with full images and involve breaking the image into pieces to reshape them, which is not what I'm looking for.
Note that this will be used in JavaScript, but could be precomputed in PHP instead if it's easier.
It's best to break the problem into multiple smaller problems which can be solved independently. That way you also end up with independent pieces of functionality after solving this problem, which can be added to some global module collection.
First we need to figure out which pixel in the from_shape goes to which pixel in the to_shape. We can figure that out with the following method:
Place to_shape over from_shape.
For every pixel in from_shape, find its closest to_shape pixel.
Every pixel in a shape must have a unique id; that id can be, for instance, its xy location.
Now you can record each unique pixel in from_shape, and which unique pixel it goes to in to_shape.
Discard the overlaid copies and go back to the original shapes;
now each pixel in from_shape knows its destination in to_shape.
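As a rough sketch of that mapping step in PHP GD (assuming both shapes are dark pixels on a light background and the two images are already aligned; the brute-force nearest-pixel search and both function names are only illustrative):

<?php
// Collect the (x, y) coordinates of all "shape" (dark) pixels in a GD image.
function shapePixels($img): array
{
    $pixels = [];
    for ($y = 0; $y < imagesy($img); $y++) {
        for ($x = 0; $x < imagesx($img); $x++) {
            $rgb = imagecolorat($img, $x, $y);
            $r = ($rgb >> 16) & 0xFF;
            $g = ($rgb >> 8) & 0xFF;
            $b = $rgb & 0xFF;
            if ($r + $g + $b < 128 * 3) {   // treat dark pixels as part of the shape
                $pixels[] = [$x, $y];
            }
        }
    }
    return $pixels;
}

// For every from-pixel, find the closest to-pixel (brute force, O(n*m)).
// Returns a map: "x,y" of the from-pixel => [x, y] of its destination.
function mapDestinations(array $fromPixels, array $toPixels): array
{
    $map = [];
    foreach ($fromPixels as [$fx, $fy]) {
        $best = null;
        $bestDist = PHP_FLOAT_MAX;
        foreach ($toPixels as [$tx, $ty]) {
            $d = ($fx - $tx) ** 2 + ($fy - $ty) ** 2;
            if ($d < $bestDist) {
                $bestDist = $d;
                $best = [$tx, $ty];
            }
        }
        $map["$fx,$fy"] = $best;
    }
    return $map;
}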
We also need to know which 'siblings' each pixel has.
A sibling is a pixel that lies right next to another pixel.
To find them, go to a given pixel and collect all pixels within a radius of one; those that are black are the from-pixel's siblings. This information is necessary to keep the shape as a single unit while the pixels travel to their destinations. Skipping the siblings would substantially speed up and simplify the morph, but without them the shape might become fragmented during the morph. Might want to begin with a siblingless version and see how that goes.
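A small sketch of the sibling lookup, reusing the coordinate list from the mapping sketch above (the "radius one" neighbourhood is taken to be the eight surrounding cells):

<?php
// Given the list of shape pixel coordinates, find each pixel's siblings:
// the shape pixels that lie within a radius of one (the 8 surrounding cells).
function findSiblings(array $pixels): array
{
    $lookup = [];
    foreach ($pixels as [$x, $y]) {
        $lookup["$x,$y"] = true;
    }

    $siblings = [];
    foreach ($pixels as [$x, $y]) {
        $siblings["$x,$y"] = [];
        for ($dy = -1; $dy <= 1; $dy++) {
            for ($dx = -1; $dx <= 1; $dx++) {
                if ($dx === 0 && $dy === 0) {
                    continue;
                }
                $key = ($x + $dx) . ',' . ($y + $dy);
                if (isset($lookup[$key])) {
                    $siblings["$x,$y"][] = [$x + $dx, $y + $dy];
                }
            }
        }
    }
    return $siblings;
}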
And finally we implement the morph:
There is morph_time_duration.
For each pixel in from_shape, find the distance to its destination in to_shape.
That distance, divided by morph_time_duration, is the speed of the pixel during the morph.
Also, the angle towards destination is the angle to travel in.
So now you have speed and angle.
So at each frame of the morphing procedure, a given from-pixel knows which direction to travel in, its speed, and its siblings. In each frame, just draw the pixel in its new location, after it has travelled at its speed in its direction, and then draw a line to each of that pixel's siblings.
And that will display your morph.
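Here is a hedged sketch of the per-frame position update, precomputed in PHP since the question mentions that option. The frame count, the JSON hand-off and the linear interpolation (which is equivalent to moving each pixel at distance/morph_time_duration along its angle) are assumptions:

<?php
// Precompute the pixel positions for every frame of the morph.
// $destinations is the "x,y" => [tx, ty] map from the mapping sketch above.
// Moving each pixel at (distance / frames) per frame along the angle to its
// destination is the same as linearly interpolating between start and end.
function morphFrames(array $destinations, int $frames): array
{
    $allFrames = [];
    for ($f = 0; $f <= $frames; $f++) {
        $t = $f / $frames;                  // 0.0 at the start, 1.0 at the end
        $positions = [];
        foreach ($destinations as $fromKey => [$tx, $ty]) {
            [$fx, $fy] = array_map('intval', explode(',', $fromKey));
            $positions[$fromKey] = [
                $fx + ($tx - $fx) * $t,     // travel along the angle to the target
                $fy + ($ty - $fy) * $t,     // at distance/frames speed per frame
            ];
        }
        $allFrames[] = $positions;
    }
    return $allFrames;
}

// Example: precompute 30 frames and hand them to the JavaScript front end as JSON.
// echo json_encode(morphFrames($destinations, 30));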
I've found a demonstration (using Raphael.js) of outline morphing and motion tweening in JavaScript, showing how Raphael.js can be used to morph one curve into another curve.
Also, this related question (about shape tweening in JavaScript) may contain some answers that are relevant to this question.
The MorpherJS library may also be suitable for this purpose. Some demonstrations of outline morphing with MorpherJS can be found here.
Doing that won't be very easy, but I can give you a couple of starting points. If you want a plain JavaScript implementation, a great starting point would be:
http://raphaeljs.com/animation.html
which does exactly what you want. You can check which methods are invoked and browse the library source for those methods to see the implementation.
If you instead need to morph two images in PHP, I would suggest using some sort of extension rather than doing it in plain PHP. Here is an example using ImageMagick to do it:
http://www.fmwconcepts.com/imagemagick/shapemorph2/index.php
If you want to know more about the internals of it:
http://web.mit.edu/manoli/www/ecimorph/ecimorph.html#algo
Hope one of those helps.
The short answer: if you're trying to roll your own, it's not a straightforward task. There's plenty of math out there on these topics that performs these very transformations (the most common you'll find deal with the most common shapes, obviously), but that may or may not be accessible to you, and it won't be as easy to figure out how to do the non-standard transformations.
If you're just looking for a logical approach, here's where I'd start (not having done the math in years, and not having studied the inner workings of the graphics libraries linked):
Choose a distance, measured in whatever units make sense, pixels perhaps.
Identify each continuous edge in each shape. Pick an arbitrary point on one edge for each shape (say, on a plane where (0,0) represents the upper left corner, the edge point on each shape closest to (0,0)), and align your separate shapes on that point. For the purposes of your transformation, that point will remain static and all other points will conform to it.
If your shape has two or more distinct edges, order them by perimeter length. Consider the shorter lengths to be subordinate to the longer lengths. Use a similar process as in step 2 to pick an arbitrary point to connect these two edges together.
At each of your chosen points, count off points around your edges at the interval of the distance you selected in step 1 (a rough sketch of this sampling step appears below).
(left as an exercise for the reader) conform your points on your disparate edges together and into the target shape, aligning, reducing or adding points on the edges as necessary.
Alternatively, you could select an arbitrary number of points instead of an arbitrary distance, and just spread them appropriately along the edges at whatever distance they will fit, and then conform those points together.
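For the sampling step, here is a rough sketch assuming an edge has already been traced into an ordered list of [x, y] points (the tracing itself is not shown; the function name is illustrative):

<?php
// Resample an ordered outline (list of [x, y] points) at a fixed spacing,
// measured along the path. Returns a new, evenly spaced list of points.
function resampleOutline(array $outline, float $spacing): array
{
    $points = [$outline[0]];
    $carried = 0.0;                       // distance walked since the last sample

    for ($i = 1; $i < count($outline); $i++) {
        [$x0, $y0] = $outline[$i - 1];
        [$x1, $y1] = $outline[$i];
        $segment = hypot($x1 - $x0, $y1 - $y0);
        $walked = 0.0;

        // Drop samples while there is still room on this segment.
        while ($carried + ($segment - $walked) >= $spacing) {
            $step = $spacing - $carried;
            $walked += $step;
            $t = $walked / $segment;
            $points[] = [$x0 + ($x1 - $x0) * $t, $y0 + ($y1 - $y0) * $t];
            $carried = 0.0;
        }
        $carried += $segment - $walked;
    }
    return $points;
}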
Just throwing some ideas out there, I don't honestly know how deep the problem goes.
I run a website with thousands of user-contributed photos on it. What I'd like is a script to help me weed out poor photos from good photos. Obviously this isn't 100% possible, but it should be possible to determine if an image has no discernible focussed area? I think?
I did a bit of googling and couldn't find much on the subject.
I've written a very simple script that iterates over the pixels, and sums the difference in brightness between neighbouring pixels. This gives a high value for sharp contrasty images, and a low value for blurred/out of focus images. It's far from ideal though, as if there's a perfectly focussed small subject in the frame, and a nice bokeh background, it'll give a low value.
So I think what I want is a script that can determine if a part of an image is well-focussed, and if none is then to alert me?
Any bright ideas? Am I wasting my time?
I'd be interested in any code that can determine other sorts of "bad" photos too - too dark, too light, too flat, that sort of thing.
Too dark and too light are easy - calculate a colour average as you iterate through every pixel.
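For example, a minimal GD sketch of that brightness check (the luma weights, the dark/light cut-offs and the file name are only illustrative):

<?php
// Average perceived brightness of an image, 0 (black) to 255 (white).
function averageBrightness($img): float
{
    $w = imagesx($img);
    $h = imagesy($img);
    $sum = 0.0;
    for ($y = 0; $y < $h; $y++) {
        for ($x = 0; $x < $w; $x++) {
            $rgb = imagecolorat($img, $x, $y);
            $r = ($rgb >> 16) & 0xFF;
            $g = ($rgb >> 8) & 0xFF;
            $b = $rgb & 0xFF;
            $sum += 0.3 * $r + 0.59 * $g + 0.11 * $b;   // luma approximation
        }
    }
    return $sum / ($w * $h);
}

$img = imagecreatefromjpeg('photo.jpg');   // hypothetical test file
$avg = averageBrightness($img);
if ($avg < 40)  echo "probably too dark\n";
if ($avg > 215) echo "probably too light\n";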
For your focus issue, I think you're going to run into a lot of problems with this one. I would strongly recommend looking up kernel convolution, as I have a sinking feeling that you'll need it. This allows you to perform more complex operations on pixels based on neighbors - and is how most Photoshop filters are done!
Once you've got the maths background to do it, what I would do is convert your image to an array of single values (as opposed to RGB triplets) representing brightness. From there, use an edge-finding kernel (the Sobel operator should do the trick) and find the edges. Once that is done, iterate over the image again, map the areas with no edges, and calculate the largest square area without an edge. It is probably the least CPU-intensive solution, though not the most esoteric.
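Here is a rough sketch of that idea with GD and a hand-rolled Sobel pass. Instead of finding the largest edge-free square it simply counts edge pixels per 64x64 block, a simpler proxy for "is anything sharp anywhere"; the block size and the edge threshold of 100 are assumptions, not tuned values:

<?php
// Build a 2-D brightness map from a GD image.
function brightnessMap($img): array
{
    $w = imagesx($img);
    $h = imagesy($img);
    $map = [];
    for ($y = 0; $y < $h; $y++) {
        for ($x = 0; $x < $w; $x++) {
            $rgb = imagecolorat($img, $x, $y);
            $map[$y][$x] = ((($rgb >> 16) & 0xFF) + (($rgb >> 8) & 0xFF) + ($rgb & 0xFF)) / 3;
        }
    }
    return $map;
}

// Count how many pixels in each 64x64 block exceed a Sobel edge threshold.
function edgyBlocks(array $map, int $block = 64, float $threshold = 100.0): array
{
    $h = count($map);
    $w = count($map[0]);
    $counts = [];
    for ($y = 1; $y < $h - 1; $y++) {
        for ($x = 1; $x < $w - 1; $x++) {
            // Sobel kernels for horizontal and vertical gradients.
            $gx = -$map[$y-1][$x-1] + $map[$y-1][$x+1]
                  - 2 * $map[$y][$x-1] + 2 * $map[$y][$x+1]
                  - $map[$y+1][$x-1] + $map[$y+1][$x+1];
            $gy = -$map[$y-1][$x-1] - 2 * $map[$y-1][$x] - $map[$y-1][$x+1]
                  + $map[$y+1][$x-1] + 2 * $map[$y+1][$x] + $map[$y+1][$x+1];
            if (hypot($gx, $gy) > $threshold) {
                $bx = intdiv($x, $block);
                $by = intdiv($y, $block);
                $counts["$bx,$by"] = ($counts["$bx,$by"] ?? 0) + 1;
            }
        }
    }
    return $counts;
}

If max($counts) stays below some cut-off for the whole photo, nothing in it is sharp and it can be flagged for review.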
My users are uploading images to my website and I would like to first offer them already-uploaded images. My idea is to:
1. create some kind of image "hash" for every existing image
2. create a hash of the newly uploaded image and compare it with the others in the database
I have found some interesting solutions like http://www.pureftpd.org/project/libpuzzle or http://phash.org/ etc., but they have one or more problems:
they need some nonstandard extension to PHP (or are not in PHP at all) - that would be OK for me, but I would like to create this as a plugin for my popular CMS, which is used on many hosting environments outside my control.
they compare two images, but I need to compare one to many (e.g. thousands), and doing it one by one would be very ineffective / slow ...
...
I would be OK with finding only VERY similar images (so e.g. different size, resaved JPG or a different JPG compression factor).
The only idea I've got is to resize the image to e.g. 5px*5px, 256 colors, create a string representation of it and then look for the same string. But I guess that may create tiny differences in colors even between two identical images at different sizes, so finding only 100% matches would be useless.
So I would need some good format for that string representation of the image which could then be used with some SQL function to find similar ones, or some other nice way. E.g. pHash creates perceptual hashes, so when two numbers are close, the images should be close as well, so I would just need to find the closest distances. But again, it is an external library.
Is there any easy way?
I've had this exact same issue before.
Feel free to copy what I did, and hopefully it will help you / solve your problem.
How I solved it
My first idea, which failed (similar to what you may be thinking), was making strings for every single image (no matter what size). But I quickly worked out that this fills your database super fast and wasn't effective.
The next option (which works) was a smaller image (like your 5px idea), and I did exactly that, but with 10px*10px images. The way I created the 'hash' for each image was with the imagecolorat() function.
See php.net here.
When receiving the rgb colours for the image, I rounded them to the nearest 50, so that the colours were less specific. That number (50) is what you want to change depending on how specific you want your searches to be.
for example:
// Pixel RGB
rgb(105, 126, 225) // Original
rgb(100, 150, 250) // After rounding numbers to nearest 50
After doing this to every pixel (10px*10px will give you 100 rgb()'s back), I then turned them into an array and stored them in the database with base64_encode() and serialize().
When searching for similar images, I did the exact same process to the image they wanted to upload, then extracted the image 'hashes' from the database to compare them all and see which had matching rounded RGBs.
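A minimal sketch of that scheme with GD (the 10x10 resize, the rounding to the nearest 50 and the match count follow the description above; the function names are just illustrative):

<?php
// Build a "hash" for an image: resize to 10x10, read every pixel with
// imagecolorat(), and round each RGB channel to the nearest 50.
function roughHash($src, int $size = 10, int $step = 50): array
{
    $small = imagecreatetruecolor($size, $size);
    imagecopyresampled($small, $src, 0, 0, 0, 0, $size, $size, imagesx($src), imagesy($src));

    $hash = [];
    for ($y = 0; $y < $size; $y++) {
        for ($x = 0; $x < $size; $x++) {
            $rgb = imagecolorat($small, $x, $y);
            $hash[] = [
                (int) (round((($rgb >> 16) & 0xFF) / $step) * $step),
                (int) (round((($rgb >> 8) & 0xFF) / $step) * $step),
                (int) (round(($rgb & 0xFF) / $step) * $step),
            ];
        }
    }
    return $hash;   // store e.g. base64_encode(serialize($hash)) in the database
}

// Count how many of the 100 rounded pixels match between two hashes.
function matchingPixels(array $a, array $b): int
{
    $matches = 0;
    foreach ($a as $i => $pixel) {
        if ($pixel === $b[$i]) {
            $matches++;
        }
    }
    return $matches;
}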
Tips
The bigger that 50 in the RGB rounding is, the less specific your search will be (and vice versa).
If you want your SQL to be more specific, it may be better to store extra/specific info about the image in the database, so that you can limit the searches you get in the database. eg. if the aspect ratio is 4:3, only pull images around 4:3 from the database. (etc)
It can be difficult to get this perfectly 5px*5px, so a suggestion is phpthumb. I used it with the syntax:
phpthumb.php?src=IMAGE_NAME_HERE.png&w=10&h=10&zc=1
// &w= width of your image
// &h= height of your image
// &zc= zoom control. 0:Keep aspect ratio, 1:Change to suit your width+height
Good luck mate, hope I could help.
For an easy php implementation check out: https://github.com/kennethrapp/phasher
However, I wonder if there is a native MySQL function for "compare" (see the PHP class above).
I scale the image down to 8x8, then I convert RGB to 1-byte HSV, so the resulting hash is a 172-byte string.
HSVHSVHSVHSVHSVHSVHSVHSV... (from 8x8 block, 172 bytes long)
0fff0f3ffff4373f346fff00...
It's not 100% accurate (some duplicates aren't found), but it works nicely, and it looks like there are no false positive results.
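The exact quantisation used above isn't spelled out, so the following sketch, which packs one hex digit per H, S and V channel of an 8x8 downscale, is only an assumption to show the shape of the idea (it produces 192 hex characters rather than the 172 bytes quoted above):

<?php
// Convert an RGB triplet (0-255) to coarse HSV values, each quantised to 0-15.
function coarseHsv(int $r, int $g, int $b): array
{
    $r /= 255; $g /= 255; $b /= 255;
    $max = max($r, $g, $b);
    $min = min($r, $g, $b);
    $d = $max - $min;

    if ($d == 0)        $h = 0;
    elseif ($max == $r) $h = fmod(($g - $b) / $d, 6) / 6;
    elseif ($max == $g) $h = (($b - $r) / $d + 2) / 6;
    else                $h = (($r - $g) / $d + 4) / 6;
    if ($h < 0) $h += 1;

    $s = $max == 0 ? 0 : $d / $max;
    $v = $max;

    return [(int) round($h * 15), (int) round($s * 15), (int) round($v * 15)];
}

// Scale the image down to 8x8 and build a hex string of quantised H, S and V.
function hsvHash($src): string
{
    $small = imagecreatetruecolor(8, 8);
    imagecopyresampled($small, $src, 0, 0, 0, 0, 8, 8, imagesx($src), imagesy($src));

    $hash = '';
    for ($y = 0; $y < 8; $y++) {
        for ($x = 0; $x < 8; $x++) {
            $rgb = imagecolorat($small, $x, $y);
            [$h, $s, $v] = coarseHsv(($rgb >> 16) & 0xFF, ($rgb >> 8) & 0xFF, $rgb & 0xFF);
            $hash .= dechex($h) . dechex($s) . dechex($v);
        }
    }
    return $hash;   // 8 * 8 * 3 = 192 hex characters with this packing
}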
To put it in an academic way, what you are looking for is a similarity function which takes two images and returns an indicator of how far apart/similar the two images are. This indicator could easily be a decimal number ranging from -1 to 1 (far apart to very close). Once you have this function you can set an image as a reference and compare all the images against it. Then finding images similar to a given one is as simple as finding the closest similarity factor to it, which is done with a simple search over a double field within an RDBMS like MySQL.
Now all that remains is how to define the similarity function. To be honest, this is problem-specific. It depends on what you call similar. But covariance is usually a good starting point; it just needs your two images to be of the same size, which I think is no big deal. You can find lots of other ideas by searching for 'similarity measures between two images'.
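As an illustration, a minimal covariance sketch over two same-size images, using pixel brightness as the variable (purely a starting point, as noted above):

<?php
// Covariance of pixel brightness between two images of identical dimensions.
// Large positive values mean the images brighten and darken in the same places.
function brightnessCovariance($a, $b): float
{
    $w = imagesx($a);
    $h = imagesy($a);
    $valsA = [];
    $valsB = [];
    for ($y = 0; $y < $h; $y++) {
        for ($x = 0; $x < $w; $x++) {
            $pa = imagecolorat($a, $x, $y);
            $pb = imagecolorat($b, $x, $y);
            $valsA[] = ((($pa >> 16) & 0xFF) + (($pa >> 8) & 0xFF) + ($pa & 0xFF)) / 3;
            $valsB[] = ((($pb >> 16) & 0xFF) + (($pb >> 8) & 0xFF) + ($pb & 0xFF)) / 3;
        }
    }
    $n = count($valsA);
    $meanA = array_sum($valsA) / $n;
    $meanB = array_sum($valsB) / $n;

    $cov = 0.0;
    for ($i = 0; $i < $n; $i++) {
        $cov += ($valsA[$i] - $meanA) * ($valsB[$i] - $meanB);
    }
    return $cov / $n;
}

Dividing the result by the product of the two standard deviations turns it into the correlation coefficient, which gives the -1 to 1 indicator mentioned above.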
I have a whole bunch of images of illustrations that I would like to crop to a smaller preview size.
The problem is that I want to crop them to show an "interesting" part of the illustration (ie avoid areas of whitespace).
The images typically have a flat color or a subtle gradient for the background. They are mostly vector style artwork with fairly distinct shapes.
Here are some examples: link ;-)
I've been thinking about using some sort of image feature detection algorithm with a sliding window to find the area with the greatest number of features.
I'm implementing this in PHP, but I don't mind implementing it myself if there isn't a library or extension available.
Ideas?
ImageMagick has a trim operation. It's available as a library but I don't know how hard it is to use from PHP. There are some PHP interfaces.
OK, so here's what I would've done, after looking at the examples:
Sum all rows and all columns of each image. You'll get two arrays, both looking like this:
/-----\ /--\
_/ -- |
___- \_________
By looking at these arrays for a few images, find a suitable threshold (probably something just above zero). Then the leftmost and the rightmost crossing of this threshold is where you have to crop. I hope I've managed to make it clear enough, if not -- ask!
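A minimal GD sketch of that idea, assuming darker pixels are "content" on a light background; the threshold is exactly the value you would tune by looking at a few images, as described above:

<?php
// Crop an image by summing the "darkness" of every row and column and
// trimming everything outside the first/last row and column that cross
// a threshold.
function contentCrop($img, float $threshold = 500.0)
{
    $w = imagesx($img);
    $h = imagesy($img);
    $cols = array_fill(0, $w, 0.0);
    $rows = array_fill(0, $h, 0.0);

    for ($y = 0; $y < $h; $y++) {
        for ($x = 0; $x < $w; $x++) {
            $rgb = imagecolorat($img, $x, $y);
            $brightness = ((($rgb >> 16) & 0xFF) + (($rgb >> 8) & 0xFF) + ($rgb & 0xFF)) / 3;
            $darkness = 255 - $brightness;       // background contributes ~0
            $cols[$x] += $darkness;
            $rows[$y] += $darkness;
        }
    }

    // Leftmost/rightmost and topmost/bottommost crossings of the threshold.
    $x0 = 0;
    while ($x0 < $w - 1 && $cols[$x0] < $threshold) { $x0++; }
    $x1 = $w - 1;
    while ($x1 > $x0 && $cols[$x1] < $threshold) { $x1--; }
    $y0 = 0;
    while ($y0 < $h - 1 && $rows[$y0] < $threshold) { $y0++; }
    $y1 = $h - 1;
    while ($y1 > $y0 && $rows[$y1] < $threshold) { $y1--; }

    $crop = imagecreatetruecolor($x1 - $x0 + 1, $y1 - $y0 + 1);
    imagecopy($crop, $img, 0, 0, $x0, $y0, $x1 - $x0 + 1, $y1 - $y0 + 1);
    return $crop;
}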
Here's a fairly simple approach using an edge-detection filter, and then cropping around the center-of-edginess of the image to generate a thumbnail. It works pretty well on most images, but not if there is more than one subject. I'm open to suggestions on other ways of identifying the "interesting" points in a source image.
Well, you might want to consider just using an edge detection algorithm. Pick the area with the largest number of edges. Give higher weight to edges that are not blurry (as they may be from the background).
ImageMagick for PHP has automated generation of thumbnails. This SO question has a link to an ImageMagick auto-crop operator, and I'm not sure, but I think this is the PHP interface to it.
From the link:
bool Imagick::trimImage ( float $fuzz )
Remove edges that are the background color from the image.
For more general "interestingness", maybe try an inverse of seam carving (to find the highest energy, rather than lowest energy areas).
A CLI program using http://pecl.php.net/package/imagick:
<?php
dl('imagick.so');
$img = new Imagick();
$img->readImage($argv[1]);
# (* 0.0: exact match; * 1.0: crop entire image)
$fuzz = current($img->getQuantumRange()) * 0.25;
$img->trimImage($fuzz);
$img->writeImage($argv[2]);
?>
It should work well enough, as long as the image doesn't have a frame around its border.
Drupal has a project called smartcrop, which has PHP code to find highest entropy and "interesting" areas in images. See the output examples.
You should be able to use the functions in the module and the library in non-Drupal projects too.
What's the best approach to comparing two images with php and the Graphic Draw (GD) Library?
This is the scenario:
I have an image, and I want to find which image of a given set is the most similar to it.
The most similar image is in fact the same image, not pixel perfect match but the same image.
I've dramatised the difference between the two images with the number one on the example just to ease the understanding of what I meant.
Even though it brought no consistent results, my approach was to reduce the images to 1px using the imagecopyresampled function and see how close the RGB values were between images.
Summing the differences between each red, green and blue decimal value of the original and the red, green and blue decimal values of the possible match gave me a dissimilarity index. Even though it didn't work as expected, since the most RGB-similar image wasn't always the target image, I could use it to select an image from the available targets.
Here's a sample of the output when comparing 4 images against a target image, in this case the apple logo, that matches one of them but is not exactly the same:
Original image:
Red:222 Green:226 Blue:232
Compared against:
http://a1.twimg.com/profile_images/571171388/logo-twitter_normal.png
Red:183 Green:212 Blue:212 and an index of dissimilarity 56
Red:117 Green:028 Blue:028 and an index of dissimilarity 530
Red:218 Green:221 Blue:221 and an index of dissimilarity 13 Matched Correctly.
Red:061 Green:063 Blue:063 and an index of dissimilarity 491
This may not even be doable with better results than what I'm already getting, and I may be wasting my time here, but since there seem to be a lot of experienced PHP programmers around, I guess you can point me in the right direction on how to improve this.
I'm open to other image libraries such as Imagick, Gmagick or Cairo for PHP, but I'd prefer to avoid using languages other than PHP.
Thanks in advance.
I'd have thought your approach seems reasonable, but reducing an entire image to 1x1 pixel in size is probably a step too far.
However, if you converted each image to the same size and then computed the average colour in each 16x16 (or 32x32, 64x64, etc. depending on how much processing time/power you wish to use) cell you should be able to form some kind of sensible(-ish) comparison.
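A sketch of that idea with GD; resampling down to a 16x16 image is effectively the same as averaging 16x16 cells of a normalised image, and the distance measure below is just the sum of per-channel differences from the question (the function names are illustrative):

<?php
// Reduce an image to a 16x16 grid of average colours by resampling.
function gridColours($img, int $cells = 16): array
{
    $small = imagecreatetruecolor($cells, $cells);
    imagecopyresampled($small, $img, 0, 0, 0, 0, $cells, $cells, imagesx($img), imagesy($img));

    $colours = [];
    for ($y = 0; $y < $cells; $y++) {
        for ($x = 0; $x < $cells; $x++) {
            $rgb = imagecolorat($small, $x, $y);
            $colours[] = [($rgb >> 16) & 0xFF, ($rgb >> 8) & 0xFF, $rgb & 0xFF];
        }
    }
    return $colours;
}

// Dissimilarity index: sum of absolute per-channel differences over all cells.
// The candidate with the lowest index is the best match.
function gridDistance(array $a, array $b): int
{
    $total = 0;
    foreach ($a as $i => [$r, $g, $bl]) {
        $total += abs($r - $b[$i][0]) + abs($g - $b[$i][1]) + abs($bl - $b[$i][2]);
    }
    return $total;
}

Comparing the target against each candidate and keeping the lowest gridDistance() gives the "most similar image" the question asks for.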
I would suggest, like middaparka, that you do not downsample to a 1-pixel-only image, because you lose all the spatial information. Downsampling to 16x16 (or 32x32, etc.) would certainly provide better results.
Then it also depends on whether colour information is important to you or not. From what I understand, you could actually do without it: compute a grey-level image from your colour image (e.g. luma) and compute the cross-correlation. If, as you said, there is a couple of images that match exactly (except for colour information), this should give you pretty good reliability.
I used the ideas of scaling, downsampling and grey-level conversion mentioned in the question and answers to apply a Mean Squared Error between the pixel channel values of two images, using the GD library.
The code is in this answer, including a test with those ideas.
Also, I did some benchmarking, and I think the downsampling may not be needed for such small images, because the method is fast (even being PHP), just a fraction of a second.
Using middparka's methods, you can transform each image into a sequence of numeric values and then use the Levenshtein algorithm to find the closest match.