Detecting islands in an image with GD in PHP

I am working on a PHP image processing library that works with rendered images from a game, processed on a PHP webhost. In order to process the image, I need to find islands of red (where the hue is within x of 0 or 360) and get a point within each of them (not necessarily the center, but preferably near it). There are about 100 such islands, of all different sizes. They are trapezoids, or nearly so. Since the image is a PNG, and is uncompressed without antialiasing, the edges are crisp to the pixel, but some areas may be darker than others.
Currently, what I have tried is using imagecolorat() in GD and testing the HSV value after conversion, then probing points around it until I reach a non-red pixel, but that process takes some time to complete and hits PHP timeouts (the 5-second limit) when I process a large image. Is there a more efficient way to detect said islands and get a point not necessarily at the center, but preferably near it?
I've also tried, since I know the size of the trapezoids, assuming that none come within a certain distance of each other and skipping that distance to save time.
I don't necessarily need code, just a pointer in the right direction.
My current code:
function RGBToHSL($RGB) {
    $r = 0xFF & ($RGB >> 0x10);
    $g = 0xFF & ($RGB >> 0x8);
    //SNIP
}
$image = imagecreatefrompng($filename);
$redislands = [];
for ($xpos = 0; $xpos < imagesx($image); $xpos++) {
    for ($ypos = 0; $ypos < imagesy($image); $ypos++) {
        // assumes RGBToHSL() returns an object with hue/saturation/lightness
        $hsl = RGBToHSL(imagecolorat($image, $xpos, $ypos));
        if ($hsl->hue <= 20 || $hsl->hue >= 350) { // red: hue near 0 or 360
            $redislands[] = [$xpos, $ypos];
        }
    }
}

I haven't really ever used GD, but I just looked briefly and I see imagefilter() with the IMG_FILTER_EDGEDETECT flag.
It seems like after that, you could identify all the objects with a single simple loop (if objects have a minimum size, you can make the loop use this stripe size and still be guaranteed to encounter each object). Just look for a pixel with the edge color, and when you find one, recursively explore neighbor pixels with the edge color. For example, if an object is guaranteed to be at least 7 pixels high, then after edges are highlighted you only need to loop over the pixels in rows 0, 7, 14, 21, etc.
Once you've extracted a component by using the edge around it, I think you could identify its shape, color, and center fairly easily.
You might need to play with thresholding the image (GD can do this) if edges don't get detected robustly.
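For illustration, here is a minimal sketch of that stripe-scan idea with GD. For simplicity it flood-fills the red regions directly rather than tracing edge pixels, but the scanning pattern is the same; isRed() and its thresholds are placeholder assumptions, and the fill is iterative (a stack) to sidestep PHP recursion-depth limits:

function isRed($image, $x, $y) {
    // stand-in for whatever per-pixel test you settle on (e.g. hue near 0/360)
    $rgb = imagecolorat($image, $x, $y);
    $r = ($rgb >> 16) & 0xFF; $g = ($rgb >> 8) & 0xFF; $b = $rgb & 0xFF;
    return $r > 100 && $r > 2 * max($g, $b);
}

$minHeight = 7; // objects are guaranteed to be at least this tall
$w = imagesx($image); $h = imagesy($image);
$visited = [];
$islands = [];
for ($y = 0; $y < $h; $y += $minHeight) { // only scan every Nth row
    for ($x = 0; $x < $w; $x++) {
        if (isset($visited["$x,$y"]) || !isRed($image, $x, $y)) continue;
        $stack = [[$x, $y]];
        $sumX = $sumY = $count = 0;
        while ($stack) {
            [$px, $py] = array_pop($stack);
            if ($px < 0 || $py < 0 || $px >= $w || $py >= $h) continue;
            if (isset($visited["$px,$py"]) || !isRed($image, $px, $py)) continue;
            $visited["$px,$py"] = true;
            $sumX += $px; $sumY += $py; $count++;
            array_push($stack, [$px + 1, $py], [$px - 1, $py], [$px, $py + 1], [$px, $py - 1]);
        }
        // for convex-ish shapes like trapezoids the centroid falls inside the region
        $islands[] = [intdiv($sumX, $count), intdiv($sumY, $count)];
    }
}

Each pixel is visited at most once across all fills, so this is linear in the image size rather than repeatedly probing the same areas.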
Another option to seriously consider is to not use PHP for this, but instead execute an external program using PHP's exec() function. This opens up the possibility of using all kinds of libraries (like OpenCV) and programs meant for this stuff. It also lets you write your code in something like C... you'd be amazed at how fast you can loop over a C array that holds a few million ints representing pixel colors. But I have a feeling this isn't a viable option for you.
Something else you might investigate is getting access to a PHP array (or a string) of pixels. Function call overhead is large, and calling a function like imagecolorat() millions of times does add up, compared to native PHP array access like $pixels[$i]. I don't know much about image formats, but maybe you could save as a bitmap, load it into a string, and then just use string offsets. I bet that would fly if a bitmap has a suitable array-like binary representation.
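As a rough sketch of that string-offset idea, assuming imagebmp() (PHP 7.2+) is asked for an uncompressed file and produces the common 24-bit, bottom-up BMP layout:

// dump the image to an uncompressed BMP, then address pixels by string offset
imagebmp($image, '/tmp/frame.bmp', false); // false = no RLE compression
$bmp = file_get_contents('/tmp/frame.bmp');
$offset = unpack('V', substr($bmp, 10, 4))[1]; // start of the pixel array
$width  = unpack('V', substr($bmp, 18, 4))[1];
$height = unpack('V', substr($bmp, 22, 4))[1]; // assumes positive (bottom-up rows)
$stride = intdiv($width * 3 + 3, 4) * 4;       // rows are padded to 4-byte multiples

function pixelAt($bmp, $offset, $stride, $height, $x, $y) {
    // rows are stored bottom-up and pixels as B,G,R bytes
    $p = $offset + ($height - 1 - $y) * $stride + $x * 3;
    return [ord($bmp[$p + 2]), ord($bmp[$p + 1]), ord($bmp[$p])]; // [R, G, B]
}

[$r, $g, $b] = pixelAt($bmp, $offset, $stride, $height, 100, 50);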

Related

What is the effect of 'posterizing' an image?

I am currently working on an issue where we are seeing CPU usage problems on a particular host when converting images using Imagick. The issue is described almost perfectly here:
https://github.com/ResponsiveImagesCG/wp-tevko-responsive-images/issues/150 (I don't use that particular library, but I DO use the same responsive images classes they do, and I am timing out on that particular line, only for some images).
They seem to suggest that removing the call to ->posterizeImage() will fix their issue, and in my tests it does; I can't even tell any difference in the converted images. But this worries me, because I wonder if there is a difference that I am not seeing, or one that only comes up in certain scenarios (I mean, if posterizing an image didn't do anything, there wouldn't be a method for it, right?). I see online that it 'reduces the image to a limited number of color levels' (136 levels in the case causing the issue for me, for what it's worth). I'm having some difficulty parsing that, though, which I think is related to a poor grasp of the way various image formats store data (really, my understanding doesn't go past the idea that an image is broken up into pixels, which are broken up into proportions of red, green, and blue).
What actual visual differences could I expect to see if we stop posterizing images? Is it something I would only expect in certain types of image (like, would it be more visible in transparent versus non-transparent, or in warmer-coloured images)? Or something that would be more evident in certain display styles (like print, or the warmer colour temperature of iPhone displays)?
Basically, I am looking for the information to make an informed choice on whether it's safe to comment out. I'm not worried if it means some images might be x Kb larger, but if it will make them look poor quality, or distort them in some way (even in corner cases), then I need to consider other options.
From the ImageMagick command line documentation:
-posterize levels
reduce the image to a limited number of color levels per channel.
Very low values of levels, e.g., 2, 3, 4, have the most visible effect.
There is a bit more info in the Color Quantization examples, which also include some example images:
The operator's original purpose (using an argument of '2') is to re-color images using just 8 basic colors, as if the image was generated using a simple and cheap poster printing method using just the basic colors. Thus the operator gets its name.
...
An argument of '3' will map image colors based on a colormap of 27 colors, including mid-tone colors. While an argument of '4' will generate a 64 color colortable, and '5' generates a 125 color colormap.
Essentially it reduces the number of colors used in the image, and by extension the size. Using a level of 136 would not have much visible effect, as this translates to a colortable of 2,515,456 colors (136^3).
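If you want to see the effect for yourself, a quick sketch (filenames are placeholders):

// low level counts show the effect clearly; high ones are near-invisible
$img = new Imagick('photo.jpg');
$img->posterizeImage(4, false);   // 4 levels per channel => 4^3 = 64 colors
$img->writeImage('posterized-4.jpg');

$img = new Imagick('photo.jpg');
$img->posterizeImage(136, false); // 136 levels per channel, as in your case
$img->writeImage('posterized-136.jpg');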
It is also worth noting, from the commit for the issue you linked, that this isn't even always an effective way of reducing file size:
... it turns out that posterization only improves file sizes
for PNGs and can actually lead to slightly larger file sizes for
JPG images.
Posterisation is a reduction in the amount of colour information stored in an image; as such, it is really a decrease in quality. It's hard to imagine how stopping it could be detrimental. And if it turns out later that there was a legitimate reason for doing it, you can always reinstate it, because if you stop now you will still have all the original information.
If it was the other way around, and you started to introduce posterisation and later found out it was undesirable for some reason, you would no longer be able to get the original information back.
So, I would see no harm in stopping posterising. And the fact that I have written that, kind of challenges anyone who knows better to speak up and tell me I am wrong :-)

Find similar images in (pure) PHP / MySQL

My users upload images to my website, and I would like to first offer them images that have already been uploaded. My idea is to:
1. create some kind of image "hash" for every existing image
2. create a hash of the newly uploaded image and compare it with the others in the database
I have found some interesting solutions like http://www.pureftpd.org/project/libpuzzle or http://phash.org/ etc., but they have one or more problems:
they need some nonstandard PHP extension (or are not in PHP at all); that would be OK for me, but I would like to create this as a plugin for my popular CMS, which is used on many hosting environments outside my control.
they compare two images, but I need to compare one to many (e.g. thousands), and doing it one by one would be very ineffective / slow ...
...
I would be OK with finding only VERY similar images (so e.g. a different size, a resaved JPG, or a different JPEG compression factor).
The only idea I have is to resize the image to e.g. 5px*5px and 256 colors, create a string representation of it, and then look for identical ones. But I guess resizing may create tiny color differences even between two copies of the same image at different sizes, so finding only 100% identical strings would be useless.
So I would need some good format for that string representation of the image which could then be used with some SQL function to find similar ones, or some other nice way. E.g. pHash creates perceptual hashes, so when two numbers are close the images should be close as well, and I would just need to find the closest distances. But that is again an external library.
Is there any easy way?
I've had this exact same issue before.
Feel free to copy what I did, and hopefully it will help you / solve your problem.
How I solved it
My first idea, which failed and is similar to what you may be thinking, was to make strings for every single image (no matter what size). But I quickly worked out that this fills your database super fast and wasn't effective.
The next option (which works) was a smaller image (like your 5px idea), and I did exactly that, but with 10px*10px images. The way I created the 'hash' for each image was with the imagecolorat() function.
See the imagecolorat() documentation on php.net.
When receiving the RGB colours for the image, I rounded them to the nearest 50 so that the colours were less specific. That number (50) is what you want to change depending on how specific you want your searches to be.
For example:
// Pixel RGB
rgb(105, 126, 225) // Original
rgb(100, 150, 250) // After rounding numbers to nearest 50
After doing this to every pixel (10px*10px will give you back 100 rgb() values), I then turned them into an array and stored it in the database using serialize() and base64_encode().
When searching for similar images, I ran the exact same process on the image being uploaded, then pulled the image 'hashes' from the database and compared them all to see which had matching rounded RGBs.
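A minimal sketch of that scheme, assuming the image has already been resized to 10x10 (e.g. with imagecopyresampled()):

function roundedRgbHash($thumb, $step = 50) {
    $hash = [];
    for ($y = 0; $y < 10; $y++) {
        for ($x = 0; $x < 10; $x++) {
            $rgb = imagecolorat($thumb, $x, $y);
            $hash[] = [
                (int)(round((($rgb >> 16) & 0xFF) / $step) * $step), // R
                (int)(round((($rgb >> 8) & 0xFF) / $step) * $step),  // G
                (int)(round(($rgb & 0xFF) / $step) * $step),         // B
            ];
        }
    }
    return base64_encode(serialize($hash)); // store this string in the DB
}

Searching is then a matter of unserializing two stored hashes and counting how many of the 100 cells match.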
Tips
The bigger that 50 in the RGB rounding is, the less specific your search will be (and vice versa).
If you want your SQL to be more specific, it may be better to store extra/specific info about the image in the database, so that you can limit the searches you run against the database, e.g. if the aspect ratio is 4:3, only pull images around 4:3 from the database.
It can be difficult to get this to exactly 5px*5px, so a suggestion is phpThumb. I used it with this syntax:
phpthumb.php?src=IMAGE_NAME_HERE.png&w=10&h=10&zc=1
// &w= width of your image
// &h= height of your image
// &zc= zoom control. 0:Keep aspect ratio, 1:Change to suit your width+height
Good luck mate, hope I could help.
For an easy PHP implementation, check out: https://github.com/kennethrapp/phasher
However, I wonder if there is a native MySQL function for "compare" (see the PHP class above).
I scale the image down to 8x8, then convert RGB to 1-byte HSV, so the resulting hash is a 172-byte string.
HSVHSVHSVHSVHSVHSVHSVHSV... (from 8x8 block, 172 bytes long)
0fff0f3ffff4373f346fff00...
It's not 100% accurate (some duplicates aren't found), but it works nicely, and it looks like there are no false positives.
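One way to read that scheme in code (the quantization details here are my own assumptions, not necessarily the author's exact ones):

function hsvHash($src) {
    $thumb = imagecreatetruecolor(8, 8);
    imagecopyresampled($thumb, $src, 0, 0, 0, 0, 8, 8, imagesx($src), imagesy($src));
    $hash = '';
    for ($y = 0; $y < 8; $y++) {
        for ($x = 0; $x < 8; $x++) {
            $rgb = imagecolorat($thumb, $x, $y);
            $r = (($rgb >> 16) & 0xFF) / 255;
            $g = (($rgb >> 8) & 0xFF) / 255;
            $b = ($rgb & 0xFF) / 255;
            $max = max($r, $g, $b); $min = min($r, $g, $b); $d = $max - $min;
            if ($d == 0)        $h = 0;
            elseif ($max == $r) $h = fmod(($g - $b) / $d + 6, 6) / 6;
            elseif ($max == $g) $h = (($b - $r) / $d + 2) / 6;
            else                $h = (($r - $g) / $d + 4) / 6;
            $s = $max == 0 ? 0 : $d / $max;
            // one hex digit (0-15) per channel: H, S, V
            $hash .= dechex((int)($h * 15)) . dechex((int)($s * 15)) . dechex((int)($max * 15));
        }
    }
    return $hash;
}

Two hashes can then be compared by counting differing positions (a Hamming-style distance) rather than requiring exact equality.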
To put it academically, what you are looking for is a similarity function that takes in two images and returns an indicator of how far apart/similar the two images are. This indicator could easily be a decimal number ranging from -1 to 1 (far apart to very close). Once you have this function, you can set an image as a reference and compare all the images against it. Then finding the images similar to a given one is as simple as finding the closest similarity factor to it, which is done with a simple search over a double field within an RDBMS like MySQL.
Now all that remains is defining the similarity function. To be honest, this is problem-specific; it depends on what you call similar. But covariance is usually a good starting point, and it only needs your two images to be the same size, which I think is no big deal. You can find lots of other ideas by searching for 'similarity measures between two images'.
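A sketch of covariance as such a similarity function, over two equal-sized images reduced to grey levels (the helper names are mine); dividing the result by the two standard deviations would normalize it to the -1..1 range mentioned above:

function luminance($img, $x, $y) {
    $rgb = imagecolorat($img, $x, $y);
    return 0.299 * (($rgb >> 16) & 0xFF)
         + 0.587 * (($rgb >> 8) & 0xFF)
         + 0.114 * ($rgb & 0xFF);
}

function covariance($a, $b) {
    $w = imagesx($a); $h = imagesy($a); $n = $w * $h;
    $meanA = $meanB = 0;
    for ($y = 0; $y < $h; $y++) {
        for ($x = 0; $x < $w; $x++) {
            $meanA += luminance($a, $x, $y);
            $meanB += luminance($b, $x, $y);
        }
    }
    $meanA /= $n; $meanB /= $n;
    $cov = 0;
    for ($y = 0; $y < $h; $y++) {
        for ($x = 0; $x < $w; $x++) {
            $cov += (luminance($a, $x, $y) - $meanA) * (luminance($b, $x, $y) - $meanB);
        }
    }
    return $cov / $n; // higher = more alike
}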

More Efficient Method for PHP Imagick Color Replacement In PNG-24

I'm creating a little web app where people can choose colors for different parts of a race car. I'm using ImageMagick to take separate images of each part of the car, recolor them (with the clutImage function), and layer them on top of each other to create the final image.
Each image is PNG-24 with alpha transparency. Each image is exactly the same size to make it easier to combine them.
There are about ten layers, six of which are being recolored. The problem is that performance is a bit poor; it takes about ten seconds to render the image after each color change. Here's my code:
<?php
if (isset($piece['color'])) {
    // make a 1-pixel image to replace the color with
    $clut = new Imagick();
    $clut->newImage(1, 1, new ImagickPixel("#" . $piece["color"]));
    // change the color of the part
    $car_part->clutImage($clut);
}
// now we need to add the part onto the car
$car->compositeImage($car_part, Imagick::COMPOSITE_DEFAULT, 0, 0);
clutImage seems like it might be overkill for this task; I'm not doing a gradient map, I'm simply replacing all colored pixels with a single solid color. Yet no matter what function I use, it will probably still have to iterate over several million pixels in total.
Is there a more efficient way to accomplish this, or is this just the nature of what I'm trying to do?
AFAIK, that is the nature of what you are doing. When it really comes down to it, you are just changing the values of a bunch of indices in a multidimensional array. At some point or another, there will be a loop.
There is also colorFloodfillImage, but I have never actually used that method. It may be worth a shot though.
I would guess clutImage is better than iterating through the pixels yourself, because it runs at C speed, whereas iterating through the pixels using a PHP construct (for/while/foreach) would likely be slower.
One way you may be able to really increase performance is to cache parts, so that a given part x with a given color y is only generated once. The next time someone wants to paint part x with color y, all you have to do is pull the rendered image from the cache. You could either store the image as a file, or store the Imagick object in a memory object cache (APC/Memcached/etc.) right after the clutImage call, so you can composite it with other objects.
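A sketch of the file-cache variant; the paths and naming scheme are placeholders, and the cache directory is assumed to exist:

function coloredPart($partName, $hexColor) {
    $cacheFile = "/tmp/parts/{$partName}-{$hexColor}.png";
    if (file_exists($cacheFile)) {
        return new Imagick($cacheFile); // cache hit: skip clutImage entirely
    }
    $part = new Imagick("parts/{$partName}.png");
    $clut = new Imagick();
    $clut->newImage(1, 1, new ImagickPixel("#" . $hexColor));
    $part->clutImage($clut);
    $part->writeImage($cacheFile); // cache the rendered part for next time
    return $part;
}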
hth.

Image comparison with php + gd

What's the best approach to comparing two images with php and the Graphic Draw (GD) Library?
This is the scenario:
I have an image, and I want to find which image of a given set is the most similar to it.
The most similar image is in fact the same image: not a pixel-perfect match, but the same image.
I've dramatised the difference between the two images with the number one in the example just to make it easier to understand what I mean.
Even though it brought no consistent results, my approach was to reduce the images to 1px using the imagecopyresampled function and see how close the RGB values were between images.
Deducting each red, green, and blue decimal value of the possible match from the corresponding value of the original, and summing the results, gave me a dissimilarity index. Even though it didn't work as expected, since the most RGB-similar image was not always the target image, I could use it to select an image from the available targets.
Here's a sample of the output when comparing four images against a target image (in this case the Apple logo) that matches one of them but is not exactly the same:
Original image:
Red:222 Green:226 Blue:232
Compared against:
http://a1.twimg.com/profile_images/571171388/logo-twitter_normal.png
Red:183 Green:212 Blue:212, with a dissimilarity index of 56
Red:117 Green:028 Blue:028, with a dissimilarity index of 530
Red:218 Green:221 Blue:221, with a dissimilarity index of 13 (matched correctly)
Red:061 Green:063 Blue:063, with a dissimilarity index of 491
This may not even be doable with better results than what I'm already getting, and I may be wasting my time here, but since there seem to be a lot of experienced PHP programmers around, I guess you can point me in the right direction on how to improve this.
I'm open to other image libraries such as Imagick, Gmagick, or Cairo for PHP, but I'd prefer to avoid using languages other than PHP.
Thanks in advance.
I'd have thought your approach seems reasonable, but reducing an entire image to 1x1 pixels is probably a step too far.
However, if you converted each image to the same size and then computed the average colour in each 16x16 (or 32x32, 64x64, etc., depending on how much processing time/power you wish to use) cell, you should be able to form some kind of sensible(-ish) comparison.
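A sketch of that cell-averaging comparison in GD; the 256px normalized size and 16px cells are arbitrary choices to tune:

function cellSignature($img, $size = 256, $cell = 16) {
    $norm = imagecreatetruecolor($size, $size);
    imagecopyresampled($norm, $img, 0, 0, 0, 0, $size, $size, imagesx($img), imagesy($img));
    $sig = [];
    for ($cy = 0; $cy < $size; $cy += $cell) {
        for ($cx = 0; $cx < $size; $cx += $cell) {
            $r = $g = $b = 0;
            for ($y = $cy; $y < $cy + $cell; $y++) {
                for ($x = $cx; $x < $cx + $cell; $x++) {
                    $rgb = imagecolorat($norm, $x, $y);
                    $r += ($rgb >> 16) & 0xFF;
                    $g += ($rgb >> 8) & 0xFF;
                    $b += $rgb & 0xFF;
                }
            }
            $n = $cell * $cell;
            $sig[] = [$r / $n, $g / $n, $b / $n]; // average colour of the cell
        }
    }
    return $sig;
}

function signatureDistance($a, $b) {
    $dist = 0;
    foreach ($a as $i => $cell) {
        $dist += abs($cell[0] - $b[$i][0]) + abs($cell[1] - $b[$i][1]) + abs($cell[2] - $b[$i][2]);
    }
    return $dist; // lower = more similar
}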
I would suggest, like middaparka, that you do not downsample to a 1-pixel image, because you lose all the spatial information. Downsampling to 16x16 (or 32x32, etc.) would certainly give better results.
Then it also depends on whether colour information is important to you. From what I understand, you could actually do without it: compute a grey-level image from your colour image (e.g. the luma) and compute the cross-correlation. If, as you said, there are a couple of images that match exactly (except for colour information), this should give you pretty good reliability.
I used the ideas of scaling, downsampling, and grey levels mentioned in the question and answers to apply a mean squared error between the pixel channel values of two images, using the GD library.
The code is in this answer, including a test with those ideas.
I also did some benchmarking, and I think the downsampling may not be needed for such small images, because the method is fast (for PHP): just a fraction of a second.
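For reference, a minimal GD sketch of that per-channel mean squared error, assuming the two images are already the same size:

function mse($a, $b) {
    $w = imagesx($a); $h = imagesy($a);
    $sum = 0;
    for ($y = 0; $y < $h; $y++) {
        for ($x = 0; $x < $w; $x++) {
            $pa = imagecolorat($a, $x, $y);
            $pb = imagecolorat($b, $x, $y);
            foreach ([16, 8, 0] as $shift) { // R, G, B channels
                $d = (($pa >> $shift) & 0xFF) - (($pb >> $shift) & 0xFF);
                $sum += $d * $d;
            }
        }
    }
    return $sum / ($w * $h * 3); // 0 = identical; larger = more different
}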
Using middparka's methods, you can transform each image into a sequence of numeric values and then use the Levenshtein algorithm to find the closest match.

Image Classification - Detecting Floor Plans

I am working on a real estate website and I would like to write a program that can figure out (classify) whether an image is a floor plan or a company logo.
Since I am writing in PHP I would prefer a PHP solution, but any C++ or OpenCV solution would be fine as well.
Floor Plan Samples:
http://www.rentingtime.com/uploads/listing/l0050/0000050930/68614.jpg
http://www.rentingtime.com/uploads/listing/l0031/0000031701/44199.jpg
Logo Sample:
http://www.rentingtime.com/uploads/listing/l0091/0000091285/95205.jpg
As always, there is a built-in PHP function for this. Just joking. =)
All the floor plans I've seen are pretty monochromatic, so I think you can play with the number of colors and the color saturation to make a pretty good guess whether the image is a logo or a floor plan.
E.g.: if the image has fewer than 2 or 3 colors, it is a floor plan.
E.g.: if the sum / average of the saturation is less than X, it's a floor plan.
Black, white, and the other similar colors used in floor plans have a saturation that is zero, or very close to zero, while logos tend to be more visually attractive and hence use more saturated colors.
Here is a simple function to compute the saturation of a Hex RGB color:
function Saturation($color)
{
$color = array_map('hexdec', str_split($color, 2));
if (max($color) > 0)
{
return (max($color) - min($color)) / max($color);
}
return 0;
}
var_dump(Saturation('000000')); // black 0.0000000000000000
var_dump(Saturation('FFFFFF')); // white 0.0000000000000000
var_dump(Saturation('818185')); // grey 0.0300751879699249
var_dump(Saturation('5B9058')); // green 0.3888888888888889
var_dump(Saturation('DE1C5F')); // pink 0.8738738738738738
var_dump(Saturation('FE7A15')); // orange 0.9173228346456692
var_dump(Saturation('FF0000')); // red 1.0000000000000000
var_dump(Saturation('80FF80')); // --- 0.4980392156862745
var_dump(Saturation('000080')); // --- 1.0000000000000000
Using imagecolorat() and imagecolorsforindex(), you can implement a simple function that loops through all the pixels of the image and sums / averages the saturation. If the image has a saturation level above a custom threshold you define, you can assume the image is a logo.
One thing you shouldn't forget is that higher-resolution images will normally have larger saturation sums (more pixels to add up), so for the sake of this algorithm, and also for your server's performance, it would be wise to resize all the images to a common resolution (say 100x100 or 50x50) to classify them, and once they are classified you can use the original (non-resized) images.
I made a simple test with the images you provided; here is the code I used:
$images = array('./44199.jpg', './68614.jpg', './95205.jpg', './logo.png', './logo.gif');
foreach ($images as $image)
{
$sat = 0;
$image = ImageCreateFromString(file_get_contents($image));
for ($x = 0; $x < ImageSX($image); $x++)
{
for ($y = 0; $y < ImageSY($image); $y++)
{
$color = ImageColorsForIndex($image, ImageColorAt($image, $x, $y));
if (is_array($color) === true)
{
// sprintf pads each channel to two hex digits; dechex() would drop leading zeros
$sat += Saturation(sprintf('%02x%02x%02x', $color['red'], $color['green'], $color['blue']));
}
}
}
echo ($sat / (ImageSX($image) * ImageSY($image)));
echo '<hr />';
}
And here are the results:
green floor plan: 0.0151028053
black floor plan: 0.0000278867
black and white logo: 0.1245559912
stackoverflow logo: 0.0399864136
google logo: 0.1259357324
Using only these examples, I would say the image is a floor plan if the average saturation is less than 0.03 or 0.035; you can tweak it a little further by adding extra examples.
It may be easiest to outsource this to humans.
If you have a budget, consider Amazon's Mechanical Turk. See Wikipedia for a general description.
Alternatively, you could do the outsourcing yourself. Write a PHP script to display one of your images and prompt the user to sort it as either a "logo" or a "floorplan". Once you have this running on a webserver, email your entire office and ask everyone to sort 20 images as a personal favor.
Better yet, make it a contest: the person who sorts the most images wins an iPod!
Perhaps most simply, invite everyone you know over for pizza and beer, set up a bunch of laptops, and get everyone to spend a few minutes sorting.
There are software ways to accomplish your task, but if it is a one-off event with fewer than a few thousand images and a budget of at least a few hundred dollars, then I think your life may be easier using humans.
One of the first things that comes to mind is the fact that floor plans tend to have considerably more lines oriented at 90 degrees than any normal logo would.
A fast first pass would be to run Canny edge detection on the image and vote on angles using a Hough transform with the (rho, theta) definition of a line. If you see a very strong correspondence for theta = (0, 90, 180, 270) summed over rho, you can classify the image as a floor plan.
Another option would be to walk the edge image after the Canny step and only count votes from long, continuous line segments, removing noise.
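A rough GD sketch of the angle-voting idea. GD has no Canny, so a crude gradient-orientation histogram on the grayscale image stands in for a real edge-detect-plus-Hough pipeline; the gradient threshold and the 0.8 cutoff are guesses to tune:

function axisAlignedScore($file) {
    $img = imagecreatefromstring(file_get_contents($file));
    imagefilter($img, IMG_FILTER_GRAYSCALE);
    $w = imagesx($img); $h = imagesy($img);
    $votes = [0 => 0, 45 => 0, 90 => 0, 135 => 0];
    for ($y = 1; $y < $h - 1; $y++) {
        for ($x = 1; $x < $w - 1; $x++) {
            // central differences give a crude intensity gradient
            $gx = (imagecolorat($img, $x + 1, $y) & 0xFF) - (imagecolorat($img, $x - 1, $y) & 0xFF);
            $gy = (imagecolorat($img, $x, $y + 1) & 0xFF) - (imagecolorat($img, $x, $y - 1) & 0xFF);
            if (abs($gx) + abs($gy) < 60) continue; // not a strong edge
            $theta = fmod(rad2deg(atan2($gy, $gx)) + 180.0, 180.0); // fold to [0,180)
            $votes[(45 * (int)round($theta / 45)) % 180]++;         // bin to nearest 45 degrees
        }
    }
    $total = array_sum($votes) ?: 1;
    return ($votes[0] + $votes[90]) / $total; // near 1.0 => mostly axis-aligned lines
}

// e.g. classify as a floor plan when the score exceeds ~0.8
$isFloorPlan = axisAlignedScore('68614.jpg') > 0.8;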
I highly doubt any such tool already exists, and creating anything accurate would be non-trivial. If your need is to sort out a set of existing images (e.g., you have an unsorted directory), then you might be able to write a "good enough" tool and manually handle the failures. If you need to do this dynamically with new imagery, it's probably the wrong approach.
Were I to attempt this for the former case, I would probably look for something trivially different that I could use as a proxy. Are floor plans typically a lot larger than logos (in either file size or image dimensions)? Do floor plans have fewer colors than a logo? If I can get 75% accuracy using something trivial, it's probably the way to go.
Stuff like this (recognition of patterns in images) tends to be horribly expensive in terms of time, horribly unreliable, and in constant need of updating and patching to match new cases.
May I ask why you need to do this? Is there not a point in your website's workflow where it could be determined manually whether an image is a logo or a floor plan? Wouldn't it be easier to write an application that lets users determine which is which at the time of upload? Why is there a mixed set of data in the first place?
Despite thinking this is something that requires manual intervention, one thing you could do is check the size of the image.
A small (both in terms of MB and dimensions) image is likely to be a logo.
A large (both in terms of MB and dimensions) image is likely to be a floorplan.
However, this would only be a probability measurement and by no means foolproof.
The type of image is also an indicator, but a weaker one. Logos are more likely to be JPG, PNG, or GIF; floor plans are possibly going to be TIFF or another lossless format, but that's no guarantee.
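A sketch of that probability-style check; the thresholds are made-up examples, not measured values:

function guessIsFloorPlan($file) {
    [$width, $height] = getimagesize($file);
    $bytes = filesize($file);
    $ext   = strtolower(pathinfo($file, PATHINFO_EXTENSION));
    $score = 0;
    if ($width * $height > 500 * 500)    $score++; // large dimensions
    if ($bytes > 200 * 1024)             $score++; // large file size
    if (in_array($ext, ['tif', 'tiff'])) $score++; // lossless format
    return $score >= 2; // a probability signal only, by no means foolproof
}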
A simple, no-brainer first attempt would be to use an SVM to learn from SIFT keypoints obtained from the samples. But before you can do that, you need to label a small subset of the images, marking each as either -1 (a floor plan) or 1 (a logo). If an image has more keypoints classified as floor plan, it must be a floor plan; if it has more keypoints classified as logo, it must be a logo. In computer vision this is known as the bag-of-features approach, and it is also one of the simplest methods around. More complicated methods will likely yield better results, but this is a good start.
As others have said, such image recognition is usually horribly complex. Forget PHP.
However, looking over your samples, I see a criterion that MIGHT work pretty well and would be pretty easy to implement if it did:
Run the image through good OCR and see what strings pop out. If you find a bunch of words that describe rooms or similar features...
I'd rotate the image 90 degrees and try again to catch vertical labels.
Edit:
Since you say you tried it and it doesn't work, maybe you need to clean out the clutter first. Slice the image up based on whitespace, then run the OCR against each sub-image in case the line parsing is what's getting messed up. You could test this manually by using an image editor to slice it up.
Use both color saturation and image size (both suggested separately in previous answers). Take a large sample of human-classified images and see how they plot in the 2-D space (size x saturation), then decide where to put the boundary. The boundary need not be a straight line, but don't make too many twists trying to make all the dots fit, or you'll be "memorizing" the sample at the expense of new data. It is better to find a relatively simple boundary that fits most of the samples; it should then fit most of the data.
You have to tolerate a certain error; a foolproof solution to this is impossible. What if I choose a floor plan as my company's logo? (This is not a joke; it just happens to be funny.)
