Using imagecolorat() for motion detection - php

I've got a security camera set up, and I'm comparing one-minute batches of images to check whether there's been any motion. I have 10 coordinates that I check in each image. If any of those pixels don't match the previous image, it triggers a warning message.
Problem is, it works too well.
The logic is basically: if the value from imagecolorat() differs by more than 10% from the previous image's value at the same coordinate, it triggers. So if a cloud comes over the house, it triggers. Basically, any change in light triggers it. I've moved the threshold from 10% to 30% and it triggers less often, but I'm worried that if I raise it any further, real motion won't be detected.
Note: I'm using the raw output of imagecolorat(), not the RGB values. I'm not sure if this would have an impact.
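It does have an impact: on a truecolor image, imagecolorat() returns a single packed integer with red in the high bits, so a percentage test on the raw value is dominated by the red channel and can miss changes in green or blue. A per-channel comparison is safer. A minimal sketch (the helper name and the 10% threshold are placeholders):

// Unpack the raw imagecolorat() value into channels (truecolor images)
// and test each channel separately against the threshold.
function channelDiffExceeds($imgA, $imgB, $x, $y, $threshold = 0.10)
{
    $a = imagecolorat($imgA, $x, $y);
    $b = imagecolorat($imgB, $x, $y);

    foreach ([16, 8, 0] as $shift) {        // red, green, blue
        $ca = ($a >> $shift) & 0xFF;
        $cb = ($b >> $shift) & 0xFF;
        if (abs($ca - $cb) / 255 > $threshold) {
            return true;                     // this channel moved enough
        }
    }
    return false;
}

Call this once per frame for each of the 10 coordinates against the previous frame.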

You are looking for larger discontinuities - things like noise and slow variation should be discounted (unless this motion detection is for very slow moving small things, like garden gnomes).
Therefore, do a histogram equalization or something similar before image subtraction to account for global shifts in light, and do some filtering / edge enhancement before differencing to enhance the changes. Using more of the image would, I think, be better than just 10 points.
Histogram equalization entails looping through the image and counting bins for each brightness value, so you end up with a data set that says how many pixels fall into each of a set of tonal ranges. In other words, say you divide the range into 16 bins: pixels with a greyscale value (or, alternately, the Brightness in an HSB model) of 0..15 (assuming an 8-bit channel) all land in bin 1. Then you compute a series of linear stretches to apply to each bin so that it occupies an output range in proportion to its population. For example, if ALL of your pixels are in the 0..15 bin, you would just multiply everything by 16 to stretch them out. The goal is a flat histogram for the equalized image: equal numbers of pixels in every bin.
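For concreteness, here is a minimal GD sketch of the full 256-level version (build a greyscale histogram, turn its cumulative distribution into a lookup table, remap). The function name, and the choice to write the result back as grey, are mine:

// Minimal grayscale histogram equalization with GD (sketch).
// Assumes a truecolor image; discards color in the output.
function equalize($img)
{
    $w = imagesx($img);
    $h = imagesy($img);

    // 1. Build a 256-bin histogram of grey levels.
    $hist = array_fill(0, 256, 0);
    $grey = [];
    for ($y = 0; $y < $h; $y++) {
        for ($x = 0; $x < $w; $x++) {
            $rgb = imagecolorat($img, $x, $y);
            $g = (int)round(0.299 * (($rgb >> 16) & 0xFF)
                          + 0.587 * (($rgb >> 8) & 0xFF)
                          + 0.114 * ($rgb & 0xFF));
            $grey[$y][$x] = $g;
            $hist[$g]++;
        }
    }

    // 2. Cumulative distribution -> lookup table from old to new level.
    $cdf = [];
    $sum = 0;
    $total = $w * $h;
    for ($i = 0; $i < 256; $i++) {
        $sum += $hist[$i];
        $cdf[$i] = (int)round(255 * $sum / $total);
    }

    // 3. Remap every pixel through the lookup table.
    for ($y = 0; $y < $h; $y++) {
        for ($x = 0; $x < $w; $x++) {
            $v = $cdf[$grey[$y][$x]];
            imagesetpixel($img, $x, $y, ($v << 16) | ($v << 8) | $v);
        }
    }
}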
Edge enhancement can be done simply by applying a Sobel filter.
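GD has no built-in Sobel, so here is a hand-rolled sketch that returns a gradient-magnitude image; run it on both frames before differencing. Helper names are mine:

// Sobel gradient magnitude with GD (sketch). Returns a new image whose
// pixels encode edge strength.
function sobel($img)
{
    $w = imagesx($img);
    $h = imagesy($img);
    $out = imagecreatetruecolor($w, $h);

    // Grey level of one pixel, averaged over the three channels.
    $grey = function ($x, $y) use ($img) {
        $rgb = imagecolorat($img, $x, $y);
        return ((($rgb >> 16) & 0xFF) + (($rgb >> 8) & 0xFF) + ($rgb & 0xFF)) / 3;
    };

    for ($y = 1; $y < $h - 1; $y++) {
        for ($x = 1; $x < $w - 1; $x++) {
            // Horizontal and vertical Sobel kernels.
            $gx = -$grey($x-1,$y-1) + $grey($x+1,$y-1)
                - 2*$grey($x-1,$y) + 2*$grey($x+1,$y)
                - $grey($x-1,$y+1) + $grey($x+1,$y+1);
            $gy = -$grey($x-1,$y-1) - 2*$grey($x,$y-1) - $grey($x+1,$y-1)
                + $grey($x-1,$y+1) + 2*$grey($x,$y+1) + $grey($x+1,$y+1);
            $m = min(255, (int)sqrt($gx*$gx + $gy*$gy));
            imagesetpixel($out, $x, $y, ($m << 16) | ($m << 8) | $m);
        }
    }
    return $out;
}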

Related

Image comparison in php

My scenario is as follows:
I have to save 1000s of images in a database, and then I have to compare each new image with the database images for matches (a match should be 70% or more) to get the best-matching image from the database in PHP.
Is there an algorithm or method for fast comparison with good results?
Thanks in advance :)
I would suggest you use a Perceptual Hash or similar - mainly for reasons of performance. In essence, you create a single number, or hash, for each image ONCE in your database at the point where you insert it, and retain that hash in the database. Then when you get a new image to insert, you calculate its hash and compare it to the PRE-CALCULATED hash of all the other images so that you don't have to drag all the megabytes of pixels of your existing images from disk to compare them.
The best pHashes are scale-invariant and image-format-invariant. Here is an article by Dr Neal Krawetz on Perceptual Hashing.
ImageMagick can also do Perceptual Hashing and is callable from PHP - see here.
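As an illustration of the idea (not Krawetz's exact algorithm; his DCT-based pHash is more robust), a minimal average hash, the simplest member of the perceptual-hash family, can be built with plain GD. Function names are mine:

// Average hash (aHash) sketch: shrink to 8x8, greyscale, threshold each
// pixel against the mean. Returns a 64-bit hash as a 16-char hex string
// you can store in an indexed database column.
function averageHash($path)
{
    $src = imagecreatefromstring(file_get_contents($path));
    $tiny = imagecreatetruecolor(8, 8);
    imagecopyresampled($tiny, $src, 0, 0, 0, 0, 8, 8, imagesx($src), imagesy($src));

    // Greyscale values of the 64 pixels and their mean.
    $g = [];
    for ($y = 0; $y < 8; $y++) {
        for ($x = 0; $x < 8; $x++) {
            $rgb = imagecolorat($tiny, $x, $y);
            $g[] = ((($rgb >> 16) & 0xFF) + (($rgb >> 8) & 0xFF) + ($rgb & 0xFF)) / 3;
        }
    }
    $mean = array_sum($g) / 64;

    // One bit per pixel: above or below the mean.
    $bits = '';
    foreach ($g as $v) {
        $bits .= ($v > $mean) ? '1' : '0';
    }
    // Pack to hex, one nibble at a time.
    $hex = '';
    foreach (str_split($bits, 4) as $nibble) {
        $hex .= dechex(bindec($nibble));
    }
    return $hex;
}

// Similarity = Hamming distance between two 64-bit hashes.
function hammingDistance($hexA, $hexB)
{
    $dist = 0;
    for ($i = 0; $i < 16; $i++) {
        $dist += substr_count(
            sprintf('%04b', hexdec($hexA[$i]) ^ hexdec($hexB[$i])), '1');
    }
    return $dist; // 0 = identical; small distances suggest variations
}

At insert time, compute and store the hash once; at query time, hash the new image and rank existing rows by Hamming distance.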
Try this class. It supports generating a hash string from an image to store in the database and comparing it with new images later:
https://github.com/nvthaovn/CompareImage
It is very fast and accurate, although the code is not optimal. I have 20,000 pictures in my database.
This depends entirely on how smart you want the algorithm to be.
For instance, here are some issues:
cropped images vs. an uncropped image
images with text added vs. others without
mirrored images
The easiest and simplest algorithm I've seen for this is just to do the following steps to each image:
scale to something small, like 64x64 or 32x32, disregard aspect ratio, use a combining scaling algorithm instead of nearest pixel
scale the color ranges so that the darkest is black and lightest is white
rotate and flip the image so that the lightest color is top-left, then top-right is next darker and bottom-left next darker (as far as possible, of course)
Edit: A combining scaling algorithm is one that, when scaling 10 pixels down to one, does so with a function that takes the color of all 10 pixels and combines them into one. This can be done with algorithms like averaging (taking the mean value), or more complex ones like bicubic splines.
Then calculate the mean distance pixel-by-pixel between the two images.
To look up a possible match in a database, store the pixel colors as individual columns in the database, index a bunch of them (but not all, unless you use a very small image), and do a query that uses a range for each pixel value, i.e. select every image where each indexed pixel is within ±5 of the corresponding pixel of the image you want to look up.
This is easy to implement, and fairly fast to run, but of course won't handle most advanced differences. For that you need much more advanced algorithms.
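A sketch of the scale-and-compare steps above (omitting the rotate/flip normalisation and the database indexing); the sizes and helper names are my choices:

// Shrink to 32x32 with a combining scaler, stretch the grey range, then
// take the mean per-pixel distance between two fingerprints.
function fingerprint($path, $n = 32)
{
    $src = imagecreatefromstring(file_get_contents($path));
    $tiny = imagecreatetruecolor($n, $n);
    // imagecopyresampled averages source pixels - a "combining" scaler.
    imagecopyresampled($tiny, $src, 0, 0, 0, 0, $n, $n, imagesx($src), imagesy($src));

    $g = [];
    for ($y = 0; $y < $n; $y++) {
        for ($x = 0; $x < $n; $x++) {
            $rgb = imagecolorat($tiny, $x, $y);
            $g[] = ((($rgb >> 16) & 0xFF) + (($rgb >> 8) & 0xFF) + ($rgb & 0xFF)) / 3;
        }
    }

    // Stretch so the darkest pixel becomes 0 and the lightest 255.
    $min = min($g);
    $range = max(1, max($g) - $min);
    return array_map(fn($v) => 255 * ($v - $min) / $range, $g);
}

function meanDistance(array $a, array $b)
{
    $sum = 0;
    foreach ($a as $i => $v) {
        $sum += abs($v - $b[$i]);
    }
    return $sum / count($a); // 0 = identical, 255 = opposite
}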

How to check whether an image is part of a larger image

I am looking for an algorithm to compare two images: one is given statically in the highest quality, and the other is taken periodically, individually, and maybe in not-so-good quality.
The static one is way smaller and should be IN the second image at different positions.
Is there an algorithm to compare whether an image is part of another image as I described, with the result perhaps given as the odds of it being in there?
If the size of the small image does not change within the bigger images (only the position changes), then all you need is a fuzzy comparison function between two small squares of pixels that compares the positions of colors and gives a match score (you could even just use sum-of-squared distance in RGB space). If the small image is higher resolution than the larger images, you'll have to scale the pixel widths of the squares you are comparing accordingly, possibly using fractional pixels, to make sure the actual sizes of the square patches you are comparing are equal.
Anyway, once you get a strong enough match for a small square then you can continue to compare the rest of the squares in the images to verify that you have a match. Just keep going as long as the match score between squares is high enough, and as soon as it isn't, then move on to the next possible position for the small image inside the larger image. For images with a lot of entropy everywhere (not a lot of plain black spots, for example) this should work very fast.
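A brute-force sketch of that search with GD, using sum-of-squared distance in RGB space and bailing out early once an offset is already worse than the best so far. It is O(width x height x template area), so downscale both images first for anything big; helper names are mine:

function findTemplate($big, $small)
{
    $bw = imagesx($big);   $bh = imagesy($big);
    $sw = imagesx($small); $sh = imagesy($small);
    $best = ['x' => -1, 'y' => -1, 'ssd' => PHP_FLOAT_MAX];

    for ($oy = 0; $oy <= $bh - $sh; $oy++) {
        for ($ox = 0; $ox <= $bw - $sw; $ox++) {
            $ssd = 0;
            // Stop scoring this offset as soon as it cannot win.
            for ($y = 0; $y < $sh && $ssd < $best['ssd']; $y++) {
                for ($x = 0; $x < $sw; $x++) {
                    $p = imagecolorat($big, $ox + $x, $oy + $y);
                    $q = imagecolorat($small, $x, $y);
                    foreach ([16, 8, 0] as $s) {
                        $d = (($p >> $s) & 0xFF) - (($q >> $s) & 0xFF);
                        $ssd += $d * $d;
                    }
                }
            }
            if ($ssd < $best['ssd']) {
                $best = ['x' => $ox, 'y' => $oy, 'ssd' => $ssd];
            }
        }
    }
    return $best; // lower ssd = stronger match
}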

Image comparison with php + gd

What's the best approach to comparing two images with PHP and the GD graphics library?
This is the scenario:
I have an image, and I want to find which image of a given set is the most similar to it.
The most similar image is in fact the same image, not pixel perfect match but the same image.
I've dramatised the difference between the two images with the number one in the example, just to make it easier to understand what I mean.
Even though it brought no consistent results, my approach was to reduce the images to 1px using the imagecopyresampled function and see how close the RGB values were between images.
Summing the differences between each red, green and blue value of the original and the corresponding values of the possible match gave me a dissimilarity index. Even though it didn't work as expected, since the most RGB-similar image was not always the target image, I could use it to select an image from the available targets.
Here's a sample of the output when comparing 4 images against a target image (in this case the Apple logo) that matches one of them but is not exactly the same:
Original image:
Red:222 Green:226 Blue:232
Compared against:
http://a1.twimg.com/profile_images/571171388/logo-twitter_normal.png
Red:183 Green:212 Blue:212 and an index of dissimilarity of 56
Red:117 Green:028 Blue:028 and an index of dissimilarity of 530
Red:218 Green:221 Blue:221 and an index of dissimilarity of 13 (matched correctly)
Red:061 Green:063 Blue:063 and an index of dissimilarity of 491
It may not even be possible to do better than what I'm already getting, and I may be wasting my time here, but since there seem to be a lot of experienced PHP programmers around, I hope you can point me in the right direction on how to improve this.
I'm open to other image libraries such as Imagick, Gmagick or Cairo for PHP, but I'd prefer to avoid using languages other than PHP.
Thanks in advance.
I'd have thought your approach was reasonable, but reducing an entire image to a single pixel is probably a step too far.
However, if you converted each image to the same size and then computed the average colour in each 16x16 (or 32x32, 64x64, etc. depending on how much processing time/power you wish to use) cell you should be able to form some kind of sensible(-ish) comparison.
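A sketch of that cell averaging with GD, using the trick that a second resample down to the grid size computes the per-cell averages for you; the sizes and the helper name are my choices:

// Average colour per 16x16 cell after normalising both images to 256x256.
function cellAverages($img, $size = 256, $cell = 16)
{
    $norm = imagecreatetruecolor($size, $size);
    imagecopyresampled($norm, $img, 0, 0, 0, 0, $size, $size,
                       imagesx($img), imagesy($img));

    // Resampling 256 -> 16 averages each 16x16 block into one pixel.
    $n = (int)($size / $cell);
    $grid = imagecreatetruecolor($n, $n);
    imagecopyresampled($grid, $norm, 0, 0, 0, 0, $n, $n, $size, $size);

    $cells = [];
    for ($y = 0; $y < $n; $y++) {
        for ($x = 0; $x < $n; $x++) {
            $cells[] = imagecolorat($grid, $x, $y); // packed average RGB
        }
    }
    return $cells; // compare cell by cell between two images
}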
I would suggest, like middaparka, that you not downsample to a 1-pixel image, because you lose all the spatial information. Downsampling to 16x16 (or 32x32, etc.) would certainly provide better results.
Then it also depends on whether color information is important to you. From what I understand, you could actually do without it: compute a grey-level image from your color image (e.g. the luma) and compute the cross-correlation. If, as you said, there are a couple of images that match exactly (except for color information), this should give you pretty good reliability.
I used the ideas of scaling, downsampling and grey-level conversion mentioned in the question and answers to apply a mean squared error between the pixel channel values of two images, using the GD library.
The code is in this answer, including a test with those ideas.
I also did some benchmarking, and I think the downsampling may not be needed for such small images, since the method is fast (even in PHP): just a fraction of a second.
Using middaparka's method, you can transform each image into a sequence of numeric values and then use the Levenshtein algorithm to find the closest match.
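A sketch of that combination: quantise a small greyscale version of each image into a string of letters, then let PHP's built-in levenshtein() measure closeness (it caps its inputs at 255 characters, so keep the sequence short). The helper name and bucket count are mine:

function imageSignature($img)
{
    $tiny = imagecreatetruecolor(8, 8);
    imagecopyresampled($tiny, $img, 0, 0, 0, 0, 8, 8, imagesx($img), imagesy($img));

    $sig = '';
    for ($y = 0; $y < 8; $y++) {
        for ($x = 0; $x < 8; $x++) {
            $rgb = imagecolorat($tiny, $x, $y);
            $g = ((($rgb >> 16) & 0xFF) + (($rgb >> 8) & 0xFF) + ($rgb & 0xFF)) / 3;
            $sig .= chr(ord('a') + (int)($g / 16)); // 16 tonal buckets, 'a'..'p'
        }
    }
    return $sig; // 64 chars; lower levenshtein($a, $b) = closer match
}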

Image Classification - Detecting Floor Plans

I am working on a real estate website and I would like to write a program that can figure out (classify) whether an image is a floor plan or a company logo.
Since I am writing in PHP I would prefer a PHP solution, but any C++ or OpenCV solution would be fine as well.
Floor Plan Sample:
http://www.rentingtime.com/uploads/listing/l0050/0000050930/68614.jpg
http://www.rentingtime.com/uploads/listing/l0031/0000031701/44199.jpg
Logo Sample:
http://www.rentingtime.com/uploads/listing/l0091/0000091285/95205.jpg
As always, there is a built-in PHP function for this. Just joking. =)
All the floor plans I've seen are pretty monochromatic. I think you can play with the number of colors and the color saturation to make a pretty good guess at whether the image is a logo or a floor plan.
E.g.: if the image has fewer than 2 or 3 colors, it's a floor plan.
E.g.: if the sum / average of the saturation is less than X it's a floor plan.
Black and white (and other similar colors that are used in floor plans) have a saturation that is zero, or very close to zero, while logos tend to be more visually attractive, hence use more saturated colors.
Here is a simple function to compute the saturation of a Hex RGB color:
// Returns the HSV-style saturation (0..1) of a 6-digit hex RGB colour.
function Saturation($color)
{
    // Split "RRGGBB" into [R, G, B] decimal values.
    $color = array_map('hexdec', str_split($color, 2));
    if (max($color) > 0)
    {
        return (max($color) - min($color)) / max($color);
    }
    return 0;
}
var_dump(Saturation('000000')); // black 0.0000000000000000
var_dump(Saturation('FFFFFF')); // white 0.0000000000000000
var_dump(Saturation('818185')); // grey 0.0300751879699249
var_dump(Saturation('5B9058')); // green 0.3888888888888889
var_dump(Saturation('DE1C5F')); // pink 0.8738738738738738
var_dump(Saturation('FE7A15')); // orange 0.9173228346456692
var_dump(Saturation('FF0000')); // red 1.0000000000000000
var_dump(Saturation('80FF80')); // --- 0.4980392156862745
var_dump(Saturation('000080')); // --- 1.0000000000000000
Using imagecolorat() and imagecolorsforindex() you can implement a simple function that loops through all the pixels of the image and sums / averages the saturation. If the image has a saturation level above a custom threshold you define, you can assume the image is a logo.
One thing you shouldn't forget is that higher-resolution images will normally have a larger saturation sum (more pixels to add up), so for the sake of this algorithm, and also for the sake of your server's performance, it would be wise to resize all images to a common resolution (say 100x100 or 50x50) for classification, and keep using the original (non-resized) images afterwards.
I made a simple test with the images you provided, here is the code I used:
$images = array('./44199.jpg', './68614.jpg', './95205.jpg', './logo.png', './logo.gif');
foreach ($images as $path)
{
    $sat = 0;
    $image = ImageCreateFromString(file_get_contents($path));
    for ($x = 0; $x < ImageSX($image); $x++)
    {
        for ($y = 0; $y < ImageSY($image); $y++)
        {
            $color = ImageColorsForIndex($image, ImageColorAt($image, $x, $y));
            if (is_array($color) === true)
            {
                // sprintf() zero-pads each channel; plain dechex() would
                // produce a malformed hex string for values below 16.
                $sat += Saturation(sprintf('%02X%02X%02X',
                    $color['red'], $color['green'], $color['blue']));
            }
        }
    }
    // Average saturation over all pixels.
    echo ($sat / (ImageSX($image) * ImageSY($image)));
    echo '<hr />';
}
And here are the results:
green floor plan: 0.0151028053
black floor plan: 0.0000278867
black and white logo: 0.1245559912
stackoverflow logo: 0.0399864136
google logo: 0.1259357324
Using only these examples, I would say the image is a floor plan if the average saturation is less than 0.03 or 0.035; you can tweak the threshold further by adding extra examples.
It may be easiest to outsource this to humans.
If you have a budget, consider Amazon's Mechanical Turk. See Wikipedia for a general description.
Alternatively, you could do the outsourcing yourself. Write a PHP script that displays one of your images and prompts the user to sort it as either a "logo" or a "floor plan." Once you have this running on a web server, email your entire office and ask everyone to sort 20 images as a personal favor.
Better yet, make it a contest: the person who sorts the most images wins an iPod!
Perhaps most simply, invite everyone you know over for pizza and beers, set up a bunch of laptops, and get everyone to spend a few minutes sorting.
There are software ways to accomplish your task, but if it is a one-off event with fewer than a few thousand images and a budget of at least a few hundred dollars, then I think your life may be easier using humans.
One of the first things that comes to mind is the fact that floor plans tend to have considerably more lines oriented at 90 degrees than any normal logo would.
A fast first pass would be to run Canny edge detection on the image and vote on angles using a Hough transform with the rho-theta definition of a line. If you see a very strong correspondence for theta = (0, 90, 180, 270), summed over rho, you can classify the image as a floor plan.
Another option would be to walk the edge image after the Canny step to only count votes from long, continuous line segments, removing noise.
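GD provides neither Canny nor Hough, so here is a sketch of just the voting-and-scoring step, assuming you already have a binary edge map as a 2-D array of 0/1 from an external Canny pass. The orientation-scoring heuristic is my own:

function houghOrientationScore(array $edges)
{
    $h = count($edges);
    $w = count($edges[0]);
    $maxRho = (int)ceil(sqrt($w * $w + $h * $h));
    // Classic (theta, rho) accumulator, one bin per degree.
    $acc = array_fill(0, 180, array_fill(0, 2 * $maxRho + 1, 0));

    for ($y = 0; $y < $h; $y++) {
        for ($x = 0; $x < $w; $x++) {
            if (!$edges[$y][$x]) {
                continue;
            }
            // Each edge pixel votes for every line orientation through it
            // (precompute the trig tables if speed matters).
            for ($t = 0; $t < 180; $t++) {
                $rad = deg2rad($t);
                $rho = (int)round($x * cos($rad) + $y * sin($rad));
                $acc[$t][$rho + $maxRho]++;
            }
        }
    }

    // Strong straight lines concentrate votes in a few rho bins, so take
    // the peak per theta rather than the (uniform) sum over rho.
    $peak = fn($t) => max($acc[$t]);
    $axis = $peak(0) + $peak(90);
    $oblique = $peak(45) + $peak(135);
    return $axis / max(1, $oblique); // well above 1 suggests a floor plan
}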
I highly doubt any such tool already exists, and creating anything accurate would be non-trivial. If your need is to sort out a set of existing images (e.g., you have an unsorted directory), then you might be able to write a "good enough" tool and manually handle the failures. If you need to do this dynamically with new imagery, it's probably the wrong approach.
Were I to attempt this for the former case, I would probably look for something trivially different I could use as a proxy. Are floor plans typically a lot larger than logos (in either file size or image dimensions)? Do floor plans have fewer colors than a logo? If I can get 75% accuracy using something trivial, it's probably the way to go.
Stuff like this (recognition of patterns in images) tends to be horribly expensive in terms of time, horribly unreliable, and in constant need of updating and patching to match new cases.
May I ask why you need to do this? Is there not a point in your website's workflow where it could be determined manually whether an image is a logo or a floor plan? Wouldn't it be easier to write an application that lets users determine which is which at the time of upload? Why is there a mixed set of data in the first place?
Despite thinking this is something that requires manual intervention, one thing you could do is check the size of the image.
A small (both in terms of MB and dimensions) image is likely to be a logo.
A large (both in terms of MB and dimensions) image is likely to be a floorplan.
However, this would only be a probability measurement and by no means foolproof.
The type of image is also an indicator, but less of one. Logos are more likely to be JPG, PNG or GIF, floorplans are possibly going to be TIFF or other lossless format - but that's no guarantee.
A simple, no-brainer first attempt would be to use an SVM to learn the SIFT keypoints obtained from the samples. Before you can do that, you need to label a small subset of the images, marking each as either -1 (a floor plan) or 1 (a logo). If an image has more keypoints classified as floor plan, it must be a floor plan; if it has more keypoints classified as logo, it must be a logo. In computer vision this is known as the bag-of-features approach, and it is one of the simplest methods around. More complicated methods will likely yield better results, but this is a good start.
As others have said, such image recognition is usually horribly complex. Forget PHP.
However, looking over your samples I see a criterion that MIGHT work pretty well and would be pretty easy to implement if it did:
Run the image through good OCR, see what strings pop out. If you find a bunch of words that describe rooms or such features...
I'd rotate the image 90 degrees and try again to catch vertical labels.
Edit:
Since you say you tried it and it doesn't work, maybe you need to clean out the clutter first. Slice the image up based on whitespace and run the OCR against each sub-image, in case the OCR gets confused trying to parse the lines. You could test this manually, using an image editor to do the slicing.
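A sketch of the OCR route, assuming the Tesseract CLI is installed and skipping the whitespace slicing; the word list and the two-hit threshold are placeholders to tune on your own data:

// OCR the image as-is and rotated 90 degrees (to catch vertical labels),
// then count hits against room-related vocabulary.
function looksLikeFloorPlan($path)
{
    $rooms = ['bedroom', 'kitchen', 'bath', 'living', 'dining', 'closet', 'garage'];

    $text = shell_exec('tesseract ' . escapeshellarg($path) . ' stdout 2>/dev/null');

    $img = imagecreatefromstring(file_get_contents($path));
    $rotated = imagerotate($img, 90, 0);
    $tmp = tempnam(sys_get_temp_dir(), 'ocr') . '.png';
    imagepng($rotated, $tmp);
    $text .= shell_exec('tesseract ' . escapeshellarg($tmp) . ' stdout 2>/dev/null');
    unlink($tmp);

    $hits = 0;
    foreach ($rooms as $word) {
        $hits += substr_count(strtolower((string)$text), $word);
    }
    return $hits >= 2; // two or more room words: probably a floor plan
}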
Use both color saturation and image size (both suggested separately in previous answers). Take a large sample of human-classified figures, see how they plot in the 2-D space (size x saturation), then decide where to put the boundary. The boundary need not be a straight line, but don't add too many twists trying to make every dot fit, or you'll be "memorizing" the sample at the expense of new data. It is better to find a relatively simple boundary that fits most of the samples; it should then fit most of the data too.
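A sketch of such a boundary once fitted; the coefficients below are invented placeholders, to be replaced by values fitted to your own labelled samples:

// Hypothetical linear boundary in (saturation, size) space:
// low saturation and many pixels lean toward "floor plan".
function classify(float $avgSaturation, int $pixelCount): string
{
    $score = 25.0 * $avgSaturation - 0.000001 * $pixelCount; // made-up fit
    return ($score < 1.0) ? 'floor plan' : 'logo';
}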
You have to tolerate a certain error. A foolproof solution to this is impossible. What if I choose a floorplan as my company's logo? (this is not a joke, it just happens to be funny)

Detect predominant shade of image with PHP

I've used GD before, but only ever for resizing/generating images on the fly - though I'm pretty positive it has the capabilities to do what I'm after.
As simply as possible, I need to check an image to find out whether it has a light background or a dark background. I.e. if the background is predominantly 'light', it returns a value of '1', and if it is predominantly 'dark', it returns '0'.
There are only going to be 5 images iterated through in this process at a time, but I'm very conscious of processing time here, as the page is going to be called often.
Can anyone point me in the right direction on where to go with this?
First see if there are any patterns you can take advantage of - for instance, is the top-left or top-right corner (for example) always going to be of the background colour? If so, just look at the colour of that pixel.
Maybe you can get a "good enough" idea by looking at some key pixels and averaging them.
Failing something simple like that, the work you need to do starts to rise by orders of magnitude.
One nice idea I had would be to take the strip of pixels going diagonally across from the top-left corner to the bottom-right corner (maybe have a look at Bresenham's line algorithm). Look for runs of dark and light colour, and probably take the longest run; if that doesn't work, maybe you should "score" runs based on how light and dark they are.
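A sketch of the diagonal-run idea, stepping proportionally along the diagonal instead of using full Bresenham; the 128 light/dark cut-off is arbitrary:

// Sample the main diagonal, bucket each pixel as light or dark, and
// return the shade of the longest run.
function dominantDiagonalShade($img)
{
    $w = imagesx($img);
    $h = imagesy($img);
    $n = max(2, max($w, $h));

    $best = 0; $bestShade = null; $run = 0; $prev = null;
    for ($i = 0; $i < $n; $i++) {
        $x = (int)($i * ($w - 1) / ($n - 1));
        $y = (int)($i * ($h - 1) / ($n - 1));
        $rgb = imagecolorat($img, $x, $y);
        $lum = ((($rgb >> 16) & 0xFF) + (($rgb >> 8) & 0xFF) + ($rgb & 0xFF)) / 3;
        $shade = ($lum >= 128) ? 'light' : 'dark';

        $run = ($shade === $prev) ? $run + 1 : 1;
        if ($run > $best) {
            $best = $run;
            $bestShade = $shade;
        }
        $prev = $shade;
    }
    return $bestShade; // 'light' -> 1, 'dark' -> 0 in the asker's terms
}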
If your image is unnecessarily large (say 1000x1000 or more) then use imagecopyresized to cheaply scale it down to something reasonable (say 80x80).
Something that will work if MOST of the image is background-colour is to resample the image to 1 pixel and check the colour of that pixel (or maybe something small, 4x4 or so, after which you count up pixels to see if the image is predominantly light or dark).
Note that imagecopyresampled is considerably more expensive than imagecopyresized, since imagecopyresized just takes individual pixels from the original, whereas imagecopyresampled actually blends pixels together.
If you want a measure of "lightness" you could simply add the R, G and B values together. Or you could go for the formula for luma used in YCbCr:
Y' = 0.299 * R + 0.587 * G + 0.114 * B
This gives a more "human-centric" measure of lightness.
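Putting the two suggestions together, a minimal sketch: resample down to a single pixel (GD averages the whole image in the process) and threshold its luma. The function name and the 128 threshold are mine:

function hasLightBackground($img): int
{
    $one = imagecreatetruecolor(1, 1);
    imagecopyresampled($one, $img, 0, 0, 0, 0, 1, 1, imagesx($img), imagesy($img));

    // Rec. 601 luma of the single averaged pixel.
    $rgb = imagecolorat($one, 0, 0);
    $luma = 0.299 * (($rgb >> 16) & 0xFF)
          + 0.587 * (($rgb >> 8) & 0xFF)
          + 0.114 * ($rgb & 0xFF);

    return ($luma >= 128) ? 1 : 0; // 1 = light background, 0 = dark
}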
