I'm trying to work on an algorithm that will morph one "shape" into another "shape". Both shapes are arbitrary, and may even contain smaller, disjoint sub-shapes.
The basic idea I have so far is as follows: locate the edges of the shape, place points all along those edges, then do the same with the target image, then move the points to their targets.
Here's an illustration:
I just don't know where to start. The image above is a simplification; the actual use case has more complex shapes/outlines. My main problem is: how do I handle disjoint shapes? The best I can come up with is to find the closest points between the two pieces and join them together as part of the path. But how would I implement this?
I don't have any code yet; I'm still at the planning phase. I guess what I'm asking is whether anyone can link me to resources that may help, or give any pointers. Searching Google has yielded some interesting morph algorithms, but they all deal with full images and involve breaking the image into pieces to reshape them, which is not what I'm looking for.
Note that this will be used in JavaScript, but could be precomputed in PHP instead if it's easier.
It's best to break the problem into multiple smaller problems that can be solved independently. That way you also end up with independent pieces of functionality that can be reused elsewhere.
First we need to figure out which pixel in the from_shape goes to which pixel in the to_shape. We can figure that out with the following method:
Place to_shape over from_shape.
For every pixel in from_shape, find its closest to_shape pixel.
Every pixel in a shape must have a unique id; that id can be, for instance, its x,y location.
Now you can record each unique pixel in from_shape, and which unique pixel it goes to in to_shape.
Then separate the overlaid shapes and go back to the original ones; the only difference is that now each pixel in from_shape knows its destination in to_shape.
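A minimal sketch of that mapping in PHP (the question mentions precomputing in PHP), assuming each shape is available as an array of [x, y] pixel coordinates; it is a brute-force nearest-neighbour search, which is fine for small shapes:

    <?php
    // Map every pixel of the source shape to its nearest pixel in the target shape.
    // $fromPixels and $toPixels are arrays of [x, y] pairs (hypothetical input format).
    function mapPixels(array $fromPixels, array $toPixels): array
    {
        $mapping = [];
        foreach ($fromPixels as $from) {
            $best = null;
            $bestDist = PHP_FLOAT_MAX;
            foreach ($toPixels as $to) {
                $dx = $from[0] - $to[0];
                $dy = $from[1] - $to[1];
                $dist = $dx * $dx + $dy * $dy; // squared distance is enough for comparison
                if ($dist < $bestDist) {
                    $bestDist = $dist;
                    $best = $to;
                }
            }
            // Use the "x,y" string as the unique pixel id mentioned above.
            $mapping[$from[0] . ',' . $from[1]] = $best;
        }
        return $mapping;
    }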
We also need to know which 'siblings' each pixel has.
A sibling is a pixel that lies right next to another pixel.
To find them, go to a given pixel and collect all pixels within a radius of one; those that are black are the from-pixel's siblings. This information is necessary to keep the shape a single connected unit while the pixels travel to their destinations. Skipping the siblings would substantially speed up and simplify the morph, but without them the shape might become fragmented during the morph. You might want to begin with a sibling-less version and see how that goes.
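A sketch of the sibling lookup, assuming the shape's pixels are stored in an array keyed by their "x,y" id (a hypothetical format):

    <?php
    // Collect the 8-connected neighbours ("siblings") of each pixel in the shape.
    // $pixelSet is an array keyed by "x,y", e.g. ["10,12" => true, ...].
    function findSiblings(array $pixelSet): array
    {
        $siblings = [];
        foreach (array_keys($pixelSet) as $id) {
            [$x, $y] = array_map('intval', explode(',', $id));
            $siblings[$id] = [];
            for ($dx = -1; $dx <= 1; $dx++) {
                for ($dy = -1; $dy <= 1; $dy++) {
                    if ($dx === 0 && $dy === 0) {
                        continue; // skip the pixel itself
                    }
                    $neighbour = ($x + $dx) . ',' . ($y + $dy);
                    if (isset($pixelSet[$neighbour])) {
                        $siblings[$id][] = $neighbour;
                    }
                }
            }
        }
        return $siblings;
    }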
And finally we implement the morph:
There is morph_time_duration.
For each pixel in from_shape, find the distance to its destination in to_shape.
That distance, divided by morph_time_duration, is the speed of the pixel during the morph.
Also, the angle towards the destination is the direction to travel in.
So now you have speed and angle.
So at each frame in the morphing procedure, a given from_pixel knows its direction of travel, its speed, and its siblings. In each frame, just draw the pixel in its new location, after it has travelled at its speed in its direction, and then draw a line to each of that pixel's siblings.
And that will display your morph.
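Since the question notes this could be precomputed in PHP, here is a sketch of that precomputation; the function and field names are illustrative, and $elapsed should be clamped to the morph duration:

    <?php
    // Precompute speed and angle for each pixel, given the pixel mapping built earlier.
    // $mapping is ["x,y" => [tx, ty], ...]; $duration is the morph duration in frames.
    function precomputeMotion(array $mapping, float $duration): array
    {
        $motion = [];
        foreach ($mapping as $id => $target) {
            [$x, $y] = array_map('intval', explode(',', $id));
            $dx = $target[0] - $x;
            $dy = $target[1] - $y;
            $motion[$id] = [
                'speed' => sqrt($dx * $dx + $dy * $dy) / $duration, // pixels per frame
                'angle' => atan2($dy, $dx),                         // direction of travel
            ];
        }
        return $motion;
    }

    // Position of a pixel after $elapsed frames of travel (clamp $elapsed to the duration
    // so the pixel stops exactly at its destination).
    function positionAt(int $x, int $y, array $m, float $elapsed): array
    {
        return [
            $x + cos($m['angle']) * $m['speed'] * $elapsed,
            $y + sin($m['angle']) * $m['speed'] * $elapsed,
        ];
    }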
I've found a demonstration (using Raphael.js) of outline morphing and motion tweening in JavaScript, showing how Raphael.js can be used to morph one curve into another curve.
Also, this related question (about shape tweening in JavaScript) may contain some answers that are relevant to this question.
The MorpherJS library may also be suitable for this purpose. Some demonstrations of outline morphing with MorpherJS can be found here.
Doing that won't be very easy, but I can give you a couple of starting points. If you want a plain JavaScript implementation, a great starting point would be:
http://raphaeljs.com/animation.html
which is doing exactly what you want. So you can check what methods are invoked and browse through the library source for those methods to see the implementation.
If you instead need to morph two images in PHP, I would suggest you use some sort of extension and not do it in plain PHP. Here is an example using ImageMagick:
http://www.fmwconcepts.com/imagemagick/shapemorph2/index.php
If you want to know more about the internals of it:
http://web.mit.edu/manoli/www/ecimorph/ecimorph.html#algo
Hope one of those helps.
The short answer: if you're trying to roll your own, it's not a straightforward task. There's plenty of math out there on these topics that performs these very transformations (the most common treatments you'll find deal with the most common shapes, obviously), but that may or may not be accessible to you, and it won't be as easy to figure out how to do the non-standard transformations.
If you're just looking for a logical approach, here's where I'd start (not having done the math in years, and not having studied the inner workings of the graphics libraries linked):
Choose a distance, measured in whatever units make sense, pixels perhaps.
Identify each continuous edge in each shape. Pick an arbitrary point on one edge for each shape (say, on a plane where (0,0) represents the upper left corner, the edge point on each shape closest to (0,0)), and align your separate shapes on that point. For the purposes of your transformation, that point will remain static and all other points will conform to it.
If your shape has two or more distinct edges, order them by perimeter length. Consider the shorter lengths to be subordinate to the longer lengths. Use a similar process as in step 2 to pick an arbitrary point to connect these two edges together.
Starting at each of your chosen points, mark points along your edges at the interval of the distance you selected in step 1.
(left as an exercise for the reader) conform your points on your disparate edges together and into the target shape, aligning, reducing or adding points on the edges as necessary.
Alternatively, you could select an arbitrary number of points instead of an arbitrary distance, spread them evenly along the edges at whatever spacing fits, and then conform those points together (see the sketch below).
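A sketch of that alternative, assuming each edge has already been traced into an ordered, closed list of [x, y] points: walk the perimeter and drop a fixed number of evenly spaced points along it.

    <?php
    // Resample a closed outline (ordered [x, y] points) to $count evenly spaced points.
    function resampleOutline(array $outline, int $count): array
    {
        $n = count($outline);
        // Cumulative arc length around the outline (including the closing segment).
        $lengths = [0.0];
        $total = 0.0;
        for ($i = 1; $i <= $n; $i++) {
            $a = $outline[$i - 1];
            $b = $outline[$i % $n];
            $total += hypot($b[0] - $a[0], $b[1] - $a[1]);
            $lengths[] = $total;
        }
        $result = [];
        $seg = 0;
        for ($k = 0; $k < $count; $k++) {
            $target = $total * $k / $count;
            while ($lengths[$seg + 1] < $target) {
                $seg++;
            }
            $a = $outline[$seg];
            $b = $outline[($seg + 1) % $n];
            $segLen = $lengths[$seg + 1] - $lengths[$seg];
            $t = $segLen > 0 ? ($target - $lengths[$seg]) / $segLen : 0;
            // Linear interpolation within the current segment.
            $result[] = [
                $a[0] + ($b[0] - $a[0]) * $t,
                $a[1] + ($b[1] - $a[1]) * $t,
            ];
        }
        return $result;
    }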
Just throwing some ideas out there, I don't honestly know how deep the problem goes.
Related
The Problem
I have lists of GPS coordinates; these coordinates correspond to houses in the same area of a town/city. Each list of coordinates will have a team assigned to it. I would like the team to visit every house in the list, and I would like each team member to visit roughly the same number of houses.
So I would like to assign an equal-sized subset of this list to each team member. The clusters obviously need to group coordinates that are close together. I know that because I need equal-sized clusters they won't be perfect, but it is more important to me that they are of the same size.
The Setup
I think in Python I could use k-means-constrained, so that I can declare a min & max size of my clusters, but I cannot find anything similar in PHP. I also cannot wrap my head around adapting standard k-means to do what I want.
I know I will not get perfect clusters, but it is more important to me that they are all of roughly equal size, than how good the clusters are.
Question
If someone has implemented something to do what I want could you please link me, as I haven't found anything similar in PHP. Or maybe I am looking at the problem in the wrong way, so if anyone has any suggestions they would be very welcome.
Thanks.
I don't know what the best technology to use here is; I know PHP can do vaguely similar things, but point me in the right direction if I'm wrong.
I'm building an online store and I'd like an easy (automated) way to categorise the colours of each item for sale.
I've seen numerous posts on Stack which are related to this, here are some good discussions for those interested:
Programmatically determine human readable colours
Get Image Colour
Detect overall average colour of a picture
These are all well and good. However, my issue is a little different. The images in question are all on different coloured backgrounds, and these affect the "average colour" of the image. I've tried resizing my images down to 1px to get a colour average, but this doesn't quite work.
As you can see, for image #1 the average colour is going to be a lot whiter than the product colour; for #2 and #3 it's going to be a lot more brown.
Can anyone think of any methods I could use to get the right average colour, in an automated way, with PHP, Ruby, Python, or anything similar? My idea was to take a section from the middle of each photo (which is usually where the product in question is) and take the average of that. For instance, get a 30px x 30px square in the centre of the image and process that.
This won't be absolutely perfect though, and I'm completely new to this sort of programming - is there any better way to determine foreground colour?
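As a starting point, here is a sketch of your own centre-crop idea in PHP/GD; the 30x30 box size and the JPEG input are assumptions:

    <?php
    // Average colour of a box at the centre of the image (GD).
    function centerAverageColor(string $path, int $box = 30): array
    {
        $img = imagecreatefromjpeg($path); // assumes a JPEG input
        $w = imagesx($img);
        $h = imagesy($img);
        $x0 = (int) max(0, ($w - $box) / 2);
        $y0 = (int) max(0, ($h - $box) / 2);
        $r = $g = $b = 0;
        $count = 0;
        for ($x = $x0; $x < min($w, $x0 + $box); $x++) {
            for ($y = $y0; $y < min($h, $y0 + $box); $y++) {
                $rgb = imagecolorsforindex($img, imagecolorat($img, $x, $y));
                $r += $rgb['red'];
                $g += $rgb['green'];
                $b += $rgb['blue'];
                $count++;
            }
        }
        return [round($r / $count), round($g / $count), round($b / $count)];
    }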
I'd suggest you explode the image, giving weight to the center of the image.
convert image_source.jpg -implode -32 image_destination.jpg
Then calculate the average color (by scaling to 1x1) or pick an average from a centered box.
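For example, with the Imagick extension; the implode amount mirrors the convert command above and is just a tunable assumption:

    <?php
    // Average colour after an implode/explode, using Imagick.
    $img = new Imagick('image_source.jpg');
    $img->implodeImage(-1);      // negative amount pushes pixels outward, as in the convert example; tune it
    $img->scaleImage(1, 1);      // scaling to 1x1 averages all remaining pixels
    $rgb = $img->getImagePixelColor(0, 0)->getColor(); // ['r' => ..., 'g' => ..., 'b' => ..., 'a' => ...]
    printf("average: rgb(%d, %d, %d)\n", $rgb['r'], $rgb['g'], $rgb['b']);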
If you need more precision, you'll need a computer vision algorithm, to segregate the foreground from the background; you can have a look at OpenCV
What you're up to is quite a hard task.
My suggestion is that you use a little more input:
one picture of only the background (without the object) and one with the object.
Now if you threshold the subtraction you can get the object pixels (I mean, just take those that change between the two pictures).
Using these pixels you could take the histogram and select the most common colours.
(http://php.net/manual/en/imagick.getimagehistogram.php)
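A rough GD-based sketch of the same idea (Imagick's getImageHistogram() linked above would work similarly); the file names and the per-pixel threshold are assumptions, and both images are assumed to be the same size:

    <?php
    // Find pixels that changed between the background-only and with-object shots,
    // then build a histogram of the object's colours and pick the most common one.
    $background = imagecreatefromjpeg('background.jpg');  // assumed file names
    $withObject = imagecreatefromjpeg('with_object.jpg');
    $threshold = 30; // per-pixel difference needed to count as "changed"
    $histogram = [];
    for ($x = 0, $w = imagesx($withObject); $x < $w; $x++) {
        for ($y = 0, $h = imagesy($withObject); $y < $h; $y++) {
            $bg = imagecolorsforindex($background, imagecolorat($background, $x, $y));
            $fg = imagecolorsforindex($withObject, imagecolorat($withObject, $x, $y));
            $diff = abs($bg['red'] - $fg['red'])
                  + abs($bg['green'] - $fg['green'])
                  + abs($bg['blue'] - $fg['blue']);
            if ($diff > $threshold) { // this pixel belongs to the object
                $key = sprintf('%d,%d,%d', $fg['red'], $fg['green'], $fg['blue']);
                $histogram[$key] = ($histogram[$key] ?? 0) + 1;
            }
        }
    }
    arsort($histogram);
    echo 'Most common object colour: ' . array_key_first($histogram) . "\n";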
I run a website with thousands of user-contributed photos on it. What I'd like is a script to help me weed out poor photos from good photos. Obviously this isn't 100% possible, but it should be possible to determine if an image has no discernable focussed area? I think?
I did a bit of googling and couldn't find much on the subject.
I've written a very simple script that iterates over the pixels, and sums the difference in brightness between neighbouring pixels. This gives a high value for sharp contrasty images, and a low value for blurred/out of focus images. It's far from ideal though, as if there's a perfectly focussed small subject in the frame, and a nice bokeh background, it'll give a low value.
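For reference, a sketch of that kind of metric in PHP/GD; this is a reconstruction of the approach described, not the actual script, and the Rec. 601 luma weights and JPEG input are assumptions:

    <?php
    // Sum of brightness differences between horizontally adjacent pixels,
    // normalised by pixel count: higher means sharper / more contrasty.
    function sharpnessScore(string $path): float
    {
        $img = imagecreatefromjpeg($path); // assumes a JPEG input
        $w = imagesx($img);
        $h = imagesy($img);
        $sum = 0.0;
        for ($y = 0; $y < $h; $y++) {
            $prev = null;
            for ($x = 0; $x < $w; $x++) {
                $c = imagecolorsforindex($img, imagecolorat($img, $x, $y));
                $lum = 0.299 * $c['red'] + 0.587 * $c['green'] + 0.114 * $c['blue'];
                if ($prev !== null) {
                    $sum += abs($lum - $prev);
                }
                $prev = $lum;
            }
        }
        return $sum / ($w * $h);
    }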
So I think what I want is a script that can determine if a part of an image is well-focussed, and if none is then to alert me?
Any bright ideas? Am I wasting my time?
I'd be interested in any code that can determine other sorts of "bad" photos too - too dark, too light, too flat, that sort of thing.
Too dark and too light are easy - calculate a colour average as you iterate through every pixel.
For your focus issue, I think you're going to run into a lot of problems with this one. I would strongly recommend looking up kernel convolution, as I have a sinking feeling that you'll need it. This allows you to perform more complex operations on pixels based on neighbors - and is how most Photoshop filters are done!
Once you've got the maths background to do it, what I would do is convert your image to an array of single values (as opposed to RGB triplets) representing brightness. From there, use an edge-finding kernel (the Sobel operator should do the trick) and find the edges. Once that is done, iterate over the image again, mapping the areas with no edges, and calculate the largest square area without an edge from this. It is probably the least CPU-intensive solution, though not the most esoteric.
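A rough GD sketch of that pipeline, under the assumption of a JPEG input and an arbitrary edge threshold: brightness map, Sobel magnitude, then the classic maximal-square scan over edge-free pixels.

    <?php
    // 1. Brightness map, 2. Sobel edge magnitude, 3. largest square containing no edges.
    function largestEdgeFreeSquare(string $path, int $edgeThreshold = 100): int
    {
        $img = imagecreatefromjpeg($path);
        $w = imagesx($img);
        $h = imagesy($img);
        // Brightness map.
        $lum = [];
        for ($y = 0; $y < $h; $y++) {
            for ($x = 0; $x < $w; $x++) {
                $c = imagecolorsforindex($img, imagecolorat($img, $x, $y));
                $lum[$y][$x] = 0.299 * $c['red'] + 0.587 * $c['green'] + 0.114 * $c['blue'];
            }
        }
        // Sobel magnitude -> boolean edge map (borders treated as edge-free).
        $edge = [];
        for ($y = 0; $y < $h; $y++) {
            for ($x = 0; $x < $w; $x++) {
                if ($x === 0 || $y === 0 || $x === $w - 1 || $y === $h - 1) {
                    $edge[$y][$x] = false;
                    continue;
                }
                $gx = -$lum[$y - 1][$x - 1] + $lum[$y - 1][$x + 1]
                    - 2 * $lum[$y][$x - 1] + 2 * $lum[$y][$x + 1]
                    - $lum[$y + 1][$x - 1] + $lum[$y + 1][$x + 1];
                $gy = -$lum[$y - 1][$x - 1] - 2 * $lum[$y - 1][$x] - $lum[$y - 1][$x + 1]
                    + $lum[$y + 1][$x - 1] + 2 * $lum[$y + 1][$x] + $lum[$y + 1][$x + 1];
                $edge[$y][$x] = sqrt($gx * $gx + $gy * $gy) > $edgeThreshold;
            }
        }
        // Maximal square of edge-free pixels (standard dynamic programming).
        $best = 0;
        $dp = [];
        for ($y = 0; $y < $h; $y++) {
            for ($x = 0; $x < $w; $x++) {
                if ($edge[$y][$x]) {
                    $dp[$y][$x] = 0;
                } elseif ($x === 0 || $y === 0) {
                    $dp[$y][$x] = 1;
                } else {
                    $dp[$y][$x] = min($dp[$y - 1][$x], $dp[$y][$x - 1], $dp[$y - 1][$x - 1]) + 1;
                }
                $best = max($best, $dp[$y][$x]);
            }
        }
        // Side length of the largest edge-free square; if it covers most of the
        // image, nothing in the photo is sharply focused.
        return $best;
    }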
What averaging/statistical method should I use when I have an image with a sample of N selections?
I have a unique problem for which I was hoping to get some advice, so that I don't miss out on anything.
The Problem: To find the most favored/liked/important area on an image based on user selection of areas in different selection ratios.
Scenario: Consider an image of a dog, and hundreds of users selecting areas over this image in various resolutions; the obvious area of focus in most selections will be the area containing the dog. I can record the x1,x2,y1,y2 coordinates and put them into a DB. Now if I want to automatically generate versions of this image in a set of resolutions, I should be able to recognize the area that attracted the most users.
The methods I think could work are:
Find the average center point of all selections and base the crop on that - very simple, but not very accurate.
Use some algorithm like k-means or EM clustering, but I don't know which one would be best suited.
Looking forward to a brilliant solution to my problem.
More info on the problem:
The actual image will most probably be a 1024x768 image, and the selections made on it will be at the most common mobile phone resolutions. The objective is to automatically generate mobile phone wallpapers by learning from user selections.
I believe that you have two distinct problems identified above:
ONE: Identification of Points
For this, you will need to develop some sort of heuristic for identifying whether a point should be considered or not.
I believe you mentioned that hundreds of users will be selecting locations over this image? Hundreds may be a lot of points to cluster. Consider excluding outliers, by removing points which do not have a certain number of neighbors within a particular distance (a sketch of this filter follows below).
Anything you can do to reduce your dataset will be helpful.
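Something like this, as a rough sketch; the radius and the minimum neighbour count are arbitrary knobs to tune:

    <?php
    // Keep only points that have at least $minNeighbours other points within $radius.
    // $points is an array of ['x' => ..., 'y' => ...] selection centres.
    function removeOutliers(array $points, float $radius, int $minNeighbours): array
    {
        $kept = [];
        foreach ($points as $i => $p) {
            $neighbours = 0;
            foreach ($points as $j => $q) {
                if ($i === $j) {
                    continue;
                }
                if (hypot($p['x'] - $q['x'], $p['y'] - $q['y']) <= $radius) {
                    $neighbours++;
                }
            }
            if ($neighbours >= $minNeighbours) {
                $kept[] = $p;
            }
        }
        return $kept;
    }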
TWO: Clustering of Points
I believe that K Means Clustering would be best suited for this particular problem.
Your particular problem seems to closely mirror the standard Cartesian coordinate clustering examples used in explaining this algorithm.
What you're trying to do appears to be NP-hard in general, but the classical approximations should be good enough.
Once clustered, you can take an average of the points within that cluster for a rather accurate approximation.
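For illustration, a bare-bones k-means over the selection centres in PHP, with naive seeding (the first k points) that you would probably want to improve:

    <?php
    // Very small k-means over 2D points (['x' => ..., 'y' => ...]).
    // Assumes count($points) >= $k.
    function kMeans(array $points, int $k, int $iterations = 50): array
    {
        $centroids = array_slice($points, 0, $k); // naive seeding
        for ($iter = 0; $iter < $iterations; $iter++) {
            $clusters = array_fill(0, $k, []);
            // Assign each point to its nearest centroid.
            foreach ($points as $p) {
                $best = 0;
                $bestDist = PHP_FLOAT_MAX;
                foreach ($centroids as $i => $c) {
                    $d = ($p['x'] - $c['x']) ** 2 + ($p['y'] - $c['y']) ** 2;
                    if ($d < $bestDist) {
                        $bestDist = $d;
                        $best = $i;
                    }
                }
                $clusters[$best][] = $p;
            }
            // Move each centroid to the mean of its members.
            foreach ($clusters as $i => $members) {
                if (!$members) {
                    continue; // keep the old centroid for an empty cluster
                }
                $centroids[$i] = [
                    'x' => array_sum(array_column($members, 'x')) / count($members),
                    'y' => array_sum(array_column($members, 'y')) / count($members),
                ];
            }
        }
        return ['centroids' => $centroids, 'clusters' => $clusters];
    }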
In Addition:
Your dataset sounds like it will already be tightly clustered (i.e. most people will pick the dog's face, not the side of its torso). You need to be aware of local minima: these can really throw a wrench into your algorithm, especially with a small number of clusters. You may need a bit of extra machinery to combat this; you can usually introduce some variance into your algorithm (random restarts or perturbed centroids, for instance), allowing the average points to "pop out" of these local minima.
Hope this helps!
I think you might be able to approach your problem in a different way. If you have not heard of Seam Carving then I suggest you check it out, because the data you have available to use is perfectly suited to it. The idea is that instead of cropping an image to resize it, you can instead remove paths of pixels that are not necessarily in a straight line. This allows you to resize an image while retaining more of the 'interesting' information.
Ordinarily you choose paths of least energy, where energy here is some measurement of how much the hue/intensity changes along the path. This will fail when you have regions of an image that are very important (like a dog's face), but where the energy of those regions is not necessarily very high. Since you have user data indicating what parts of the image are very important you can make sure to carve around those regions of the image by explicitly adding a little energy to a pixel every time someone selects a region with that pixel.
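A sketch of just that energy adjustment (the carving itself is omitted); the $energy map, the rectangle format and the bonus value are all assumptions:

    <?php
    // Add user-selection weight to a gradient-based energy map.
    // $energy is a 2D array [$y][$x] of gradient energies; $selections is a list of
    // ['x1' => ..., 'y1' => ..., 'x2' => ..., 'y2' => ...] rectangles from your database.
    function addSelectionEnergy(array $energy, array $selections, float $bonus = 50.0): array
    {
        foreach ($selections as $s) {
            for ($y = $s['y1']; $y <= $s['y2']; $y++) {
                for ($x = $s['x1']; $x <= $s['x2']; $x++) {
                    if (isset($energy[$y][$x])) {
                        $energy[$y][$x] += $bonus; // seams will now tend to avoid this region
                    }
                }
            }
        }
        return $energy;
    }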
This video shows seam carving in action; it's cool to watch even if you don't think you'll use it. I think it's worth trying, though. I've used it before for some interesting resizing applications, and it's actually pretty easy to implement.
First off, I don't mean google image search!
I would like to give users the ability to select a hex color value and then have a search programmatically return (from specified sites/directories online) images where the dominant color is the color they specified (or close to it).
Is there a technology that can do this? I'd prefer PHP/MySQL, but I'd be willing to use other languages if it would be simpler.
EDIT
Taking several suggestions, I managed to find this: http://www.coolphptools.com/color_extract which does a decent job at extracting the most common colors from the image.
The next step is calculating the distance from the extracted colors to the color being searched for. I have no issue implementing it, except that I'm unclear on the best way to calculate the color distance.
I've scoured this site and Google for a concrete answer, but come up dry. The tool above extracts colors as hex color codes; I am currently converting these to RGB and using those.
Should I attempt to convert RGB to Y'UV? I'm attempting that by using:
sqrt(((r - r1) * .299)^2 + ((g - g1) * .587)^2 + ((b - b1) * .114)^2)
(based on an answer here: RGB to closest predefined color)
It's not very accurate. What should I swap that color distance formula with so it calculates accurate color distance (to the human eye)?
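For what it's worth, here is a sketch of one commonly cited low-cost approximation, the "redmean" weighted RGB distance, next to a small hex parser; it is not a true perceptual metric (CIELAB would be the heavier-weight route), but it usually behaves better than a plain weighted RGB distance:

    <?php
    // Parse "#rrggbb" (with or without the '#') into [r, g, b].
    function hexToRgb(string $hex): array
    {
        return sscanf('#' . ltrim($hex, '#'), '#%02x%02x%02x');
    }

    // Weighted RGB distance ("redmean" approximation); smaller means more similar.
    function colorDistance(array $a, array $b): float
    {
        $rMean = ($a[0] + $b[0]) / 2;
        $dr = $a[0] - $b[0];
        $dg = $a[1] - $b[1];
        $db = $a[2] - $b[2];
        return sqrt(
            (2 + $rMean / 256) * $dr * $dr
            + 4 * $dg * $dg
            + (2 + (255 - $rMean) / 256) * $db * $db
        );
    }

    // Example: distance between the searched colour and an extracted colour.
    echo colorDistance(hexToRgb('#336699'), hexToRgb('#2f6db0'));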
Interesting.
The first problem is: "What is the dominant colour of an image?" Maybe the one most pixels have. What do you do with similar shades of the same colour? Would you cluster around similar colours?
I would implement it this way:
Grab all images inside your search paths. Cluster the colors used in each of them; the biggest cluster is the dominant color. You will have to play around a bit with cluster sizes and the number of clusters. If this color is within a certain range of hue, saturation and brightness of your searched color, it is a match.
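A minimal sketch of the dominant-colour step, using simple bucketing instead of a full clustering algorithm (each channel is quantised to steps of 32 and the buckets are counted; the bucket size and JPEG input are assumptions):

    <?php
    // Dominant colour by quantising each channel into buckets of 32 and counting.
    function dominantColor(string $path): array
    {
        $img = imagecreatefromjpeg($path); // assumes a JPEG input
        $buckets = [];
        for ($x = 0, $w = imagesx($img); $x < $w; $x++) {
            for ($y = 0, $h = imagesy($img); $y < $h; $y++) {
                $c = imagecolorsforindex($img, imagecolorat($img, $x, $y));
                $key = sprintf(
                    '%d,%d,%d',
                    intdiv($c['red'], 32) * 32,
                    intdiv($c['green'], 32) * 32,
                    intdiv($c['blue'], 32) * 32
                );
                $buckets[$key] = ($buckets[$key] ?? 0) + 1;
            }
        }
        arsort($buckets);
        // The most populated bucket is a rough stand-in for the "biggest cluster".
        return array_map('intval', explode(',', array_key_first($buckets)));
    }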
Firstly, I wonder how you can crawl over the sites/directories to search for a particular image color, unless you have a big list of websites. If that isn't related to your question, then just ignore it.
Back to your question: I personally think this is an interesting question as well. Since it requires quite a bit of research, I just want to point out some ideas for you to consider.
What you need to do is get the user-specified hex colors and convert them into RGB, because most of the image functions in PHP that I know of only work with RGB. Now, if you have a list of directories that you can search, just crawl over them and use some basic functions to get hold of the desired webpage's contents (e.g. file_get_contents, or cURL). Once you have the contents of a specific page, you will need to use DOM functions to get the image URLs from that page (you can work it out yourself using getElementsByTagName() and getAttribute()). Assuming you are now holding a list of image URLs, you need to get their colors and try to match them with the user-specified colors (remember to convert everything into RGB).
In PHP we have the very convenient GD library for working with images. If your server supports GD2, you can have a look at imagecolorclosest(). This function "returns the index of the color in the palette of the image which is 'closest' to the specified RGB value". Note that the function only returns the closest match (not an exact match), so you have to do some comparisons to choose the right images (I believe this is easy because you now have RGB colors with very handy values to work with, say, using some subtraction and adjustment).
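For example, something along these lines; the 16-colour palette size and the file name are assumptions, and the image is quantised first because imagecolorclosest() works against the image's palette:

    <?php
    // Find which of an image's (quantised) palette colours is closest to a searched RGB value.
    $img = imagecreatefromjpeg('product.jpg');          // assumed file name
    imagetruecolortopalette($img, false, 16);           // reduce to a 16-colour palette
    $index = imagecolorclosest($img, 0x33, 0x66, 0x99); // searched colour as R, G, B
    $match = imagecolorsforindex($img, $index);
    printf("closest palette colour: rgb(%d, %d, %d)\n", $match['red'], $match['green'], $match['blue']);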
Moreover, beyond the images themselves, when you have a specific page's content you can also look at the color scheme of that page (by getting its "background-color" value); there are quite a few details that you can extract and play around with :) Of course, an image's color is often related to its page's styling colors, so think a bit more broadly.
If anything I've said is unclear, don't hesitate to comment on my reply :)
Happy coding.