We are developing an application in which we will show houses available for sale on a Google map. The user can select any of those houses on the map and find the shortest driving route between all the houses he/she selected.
Can anyone please tell me how we can find the shortest route and show it on the map? Is there any PHP-based TSP library that can help us achieve what we are trying to do?
A Google search shows many results.
http://scrivna.com/blog/travelling-salesman-problem/ - Brute force PHP implementation guaranteed to get the optimal answer. Only suitable for a limited number of nodes.
http://www.renownedmedia.com/blog/genetic-algorithm-traveling-salesperson-php/ - Genetic algorithm PHP implementation which will approximate the answer. Suitable for large numbers of nodes.
You could probably combine the two, choosing which to run based on the size of the graph.
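For a sense of what the brute-force side looks like, here is a minimal PHP sketch (not the library linked above) that tries every ordering of the stops against a precomputed matrix of pairwise driving distances; obtaining those distances (e.g. from the Distance Matrix API) is assumed to happen elsewhere.

<?php
// Brute-force TSP sketch: tries every ordering of the stops.
// $dist is a symmetric matrix of pairwise driving distances or durations.
function shortestTour(array $dist): array
{
    $n = count($dist);
    $stops = range(1, $n - 1);            // fix stop 0 as the starting house
    $best = ['cost' => PHP_INT_MAX, 'tour' => []];

    $permute = function (array $perm, array $rest) use (&$permute, &$best, $dist) {
        if (!$rest) {
            $tour = array_merge([0], $perm);
            $cost = 0;
            for ($i = 1; $i < count($tour); $i++) {
                $cost += $dist[$tour[$i - 1]][$tour[$i]];
            }
            if ($cost < $best['cost']) {
                $best = ['cost' => $cost, 'tour' => $tour];
            }
            return;
        }
        foreach ($rest as $k => $stop) {
            $remaining = $rest;
            unset($remaining[$k]);
            $permute(array_merge($perm, [$stop]), $remaining);
        }
    };

    $permute([], $stops);
    return $best;                          // e.g. ['cost' => 42.0, 'tour' => [0, 2, 1, 3]]
}

Since this is O(n!), keep it to a handful of houses and switch to the genetic approach beyond that.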
As @Barbar points out in the comments, there is an existing app that does what you're attempting. There is a blog post explaining how it works.
It's old, but it may be useful to people:
https://developers.google.com/maps/documentation/javascript/v2/services#RoutesAndSteps
Just create waypoints for each house and let Google do the math for you...
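If you go through the Directions web service, it can even reorder the intermediate waypoints for you. A rough PHP sketch, assuming the optimize:true waypoint prefix and a valid API key (check the current API docs before relying on this):

<?php
// Ask the Directions web service to optimize the order of the intermediate stops.
// Assumes $key holds your API key and $houses is an array of "lat,lng" strings.
$origin      = array_shift($houses);
$destination = array_pop($houses);

$url = 'https://maps.googleapis.com/maps/api/directions/json?' . http_build_query([
    'origin'      => $origin,
    'destination' => $destination,
    'waypoints'   => 'optimize:true|' . implode('|', $houses),
    'key'         => $key,
]);

$response = json_decode(file_get_contents($url), true);
// waypoint_order gives the optimized visiting order of the intermediate stops.
$order = $response['routes'][0]['waypoint_order'] ?? [];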
If the problem satisfies the triangle inequality, you can try the Christofides algorithm.
Related
I feel like I'm missing something obvious, or at least I feel on the verge of a "revelation", but I still can't get over it :)
I have a client selling transports from city A to city Z. The transport orders are stored in MariaDB.
I want to help him utilize his trucks, so when another client asks for a transport anywhere between A and Z, say C to X, I need to know whether it fits along an existing route :)
I started with the Google Maps API, but getting "cities" out of the result is vague, quite complicated and expensive.
Since the client can work with "major" cities only, we set out to create a graph of all major cities within the country.
Possible solution:
Thinking of using Dijkstra's algorithm to determine a path within the graph, storing all the nodes of one transport order in the DB, and then querying any new orders against the DB to see if the start and finish places fall within any previous transport order...
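To make the "query new orders against the DB" step concrete, here is a rough PDO sketch; order_route_nodes (order_id, city_id, seq) is a hypothetical table you would fill with the cities along each stored route, in visiting order.

<?php
// Does any stored route pass through $fromCity before $toCity?
function findMatchingOrders(PDO $db, int $fromCity, int $toCity): array
{
    $sql = 'SELECT a.order_id
              FROM order_route_nodes a
              JOIN order_route_nodes b ON a.order_id = b.order_id
             WHERE a.city_id = :from_city
               AND b.city_id = :to_city
               AND a.seq < b.seq';          // origin must come before destination on the route

    $stmt = $db->prepare($sql);
    $stmt->execute(['from_city' => $fromCity, 'to_city' => $toCity]);
    return $stmt->fetchAll(PDO::FETCH_COLUMN);
}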
If you can think of something quicker or simple, hit me with a link and a smile :)
Final thoughts:
We're working with a website on nginx/php/mariadb, no substantial frameworks.
Our solution would probably solve the problem on a regional basis, but I am looking for a global solution (think worldwide transports)... I searched Stack Overflow questions, but I am not seeing an answer to this problem.
I am open to the most ridiculous ideas; a friend of mine started talking about XPath in an XML tree, or regex... :)
thanks,
Alexander
Instead of assigning it, how about having a marketplace and having drivers grab jobs?
They know where they are going and where they are at any given time.
They also know if they are full or will have room at that part of their route.
So showing origin, destination, size, and weight for open orders would be good enough, I would think.
Can anyone point me in the direction of a site, blog post, etc. that gives clear and concise examples/information about creating a match-making site? I should state one point before you bite my head off:
This is NOT for a dating website. It's for a site that will attempt to match potential visitors to our site with other like-minded visitors and pros on our site. It's more match-making, and potentially social networking, but not exactly. Good examples/information would be algorithms used or code samples. My language of choice is PHP, but I'm not averse to Ruby on Rails either.
Thanks to all who can contribute.
You could map their preferences onto a multi-dimensional space and then use the Euclidean distance between the two subjects' Cartesian coordinates to determine how "matched up" they are. Then you just need to find the subjects with the shortest distances, and these are your suitable matches.
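A minimal PHP sketch of that idea, assuming each user's preferences are already encoded as a numeric vector (the field names are just illustrative):

<?php
// Distance between two preference vectors; smaller means a better match.
function preferenceDistance(array $a, array $b): float
{
    $sum = 0.0;
    foreach ($a as $dimension => $value) {
        $sum += ($value - $b[$dimension]) ** 2;
    }
    return sqrt($sum);
}

// Example: rank candidates for one user.
$user       = ['music' => 0.9, 'sports' => 0.2, 'travel' => 0.7];
$candidates = [
    'alice' => ['music' => 0.8, 'sports' => 0.1, 'travel' => 0.9],
    'bob'   => ['music' => 0.1, 'sports' => 0.9, 'travel' => 0.2],
];

$scores = [];
foreach ($candidates as $name => $prefs) {
    $scores[$name] = preferenceDistance($user, $prefs);
}
asort($scores);   // closest (best-matched) candidates first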
You could have a look at http://www.socialengine.net
You need to purchase a license, but you can get a one-month demo. It's pretty complete (almost like your own Facebook).
It's built on the Zend Framework and allows you to extend it with modules.
Say I have a collection of 100,000 articles across 10 different topics. I don't know which articles actually belong to which topic, but I have the full text of each article (so I can analyze it for keywords). I would like to group these articles according to their topics. Any idea how I would do that? Any engine (Sphinx, Lucene) is OK.
In terms of machine learning/data mining, this kind of problem is called a classification problem. The easiest approach is to use past data for future prediction, i.e. a statistical approach:
http://en.wikipedia.org/wiki/Statistical_classification, where you can start with the Naive Bayes classifier (commonly used in spam detection).
I would suggest you read this book (although it is written for Python): Programming Collective Intelligence (http://www.amazon.com/Programming-Collective-Intelligence-Building-Applications/dp/0596529325); it has a good example.
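As a toy illustration of the Naive Bayes idea (a sketch under the assumption that you hand-label a few example documents per topic, not production code):

<?php
// Tiny multinomial Naive Bayes with Laplace smoothing.
// $trainingSet maps topic => list of hand-labelled example texts.
function trainNaiveBayes(array $trainingSet): array
{
    $model = ['wordCounts' => [], 'totalWords' => [], 'docCounts' => [], 'vocab' => []];
    foreach ($trainingSet as $topic => $docs) {
        $model['docCounts'][$topic] = count($docs);
        $model['totalWords'][$topic] = 0;
        foreach ($docs as $doc) {
            foreach (str_word_count(strtolower($doc), 1) as $word) {
                $model['wordCounts'][$topic][$word] = ($model['wordCounts'][$topic][$word] ?? 0) + 1;
                $model['totalWords'][$topic]++;
                $model['vocab'][$word] = true;
            }
        }
    }
    return $model;
}

function classify(array $model, string $text): string
{
    $totalDocs = array_sum($model['docCounts']);
    $vocabSize = count($model['vocab']);
    $bestTopic = '';
    $bestScore = -INF;
    foreach ($model['docCounts'] as $topic => $docCount) {
        $score = log($docCount / $totalDocs);                          // prior
        foreach (str_word_count(strtolower($text), 1) as $word) {
            $count = $model['wordCounts'][$topic][$word] ?? 0;
            $score += log(($count + 1) / ($model['totalWords'][$topic] + $vocabSize));
        }
        if ($score > $bestScore) {
            $bestScore = $score;
            $bestTopic = $topic;
        }
    }
    return $bestTopic;
}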
Well, an Apache project providing machine learning libraries is Mahout. Its features include the possibility of:
[...] Clustering takes e.g. text documents and groups them into groups of topically related documents. Classification learns from existing categorized documents what documents of a specific category look like and is able to assign unlabelled documents to the (hopefully) correct category. [...]
You can find Mahout at http://mahout.apache.org/
Although I have never used Mahout, only considered it ;-), it always seemed to require a decent amount of theoretical knowledge. So if you plan to spend some time on the issue, Mahout would probably be a good starting point, especially since it's well documented. But don't expect it to be easy ;-)
Dirt simple way to create a classifier:
Hand read and bucket N example documents from the 100K into each one of your 10 topics. Generally, the more example documents the better.
Create a Lucene/Sphinx index with 10 documents corresponding to each topic. Each document will contain all of the example documents for that topic concatenated together.
To classify a document, submit that document as a query by making every word an OR term. You'll almost always get all 10 results back. Lucene/Sphinx will assign a score to each result, which you can interpret as the document's "similarity" to each topic.
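With Sphinx, for example, the query step could look roughly like this, using the classic SphinxClient PHP API and SPH_MATCH_ANY (which effectively ORs the query terms); treat the exact calls as assumptions to verify against your Sphinx version.

<?php
require 'sphinxapi.php';   // ships with Sphinx; defines SphinxClient

// Score an unlabelled article against the 10 "topic" documents in the index.
function scoreTopics(string $articleText): array
{
    $cl = new SphinxClient();
    $cl->SetServer('localhost', 9312);
    $cl->SetMatchMode(SPH_MATCH_ANY);      // every word becomes an OR term
    $cl->SetLimits(0, 10);                 // there are only 10 topic documents

    $result = $cl->Query($articleText, 'topics_index');
    if (!$result) {
        return [];                         // query failed; see $cl->GetLastError()
    }

    $scores = [];
    foreach ($result['matches'] ?? [] as $id => $match) {
        $scores[$id] = $match['weight'];   // higher weight = closer to that topic
    }
    arsort($scores);
    return $scores;                        // topic document id => similarity score
}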
Might not be super-accurate, but it's easy if you don't want to go through the trouble of training a real Naive Bayes classifier. If you want to go that route you can Google for WEKA or MALLET, two good machine learning libraries.
Excerpt from Chapter 7 of "Algorithms of the Intelligent Web" (Manning 2009):
"In other words, we’ll discuss the adoption of our algorithms in the context of a hypothetical
web application. In particular, our example refers to a news portal, which is inspired by the Google News website."
So, the content of Chapter 7 from that book should provide you with code for, and an understanding of, the problem that you are trying to solve.
You could use Sphinx to search all the articles for each of the 10 different topics, then set a threshold on the number of matches that would link an article to a particular topic, and so on.
I recommend the book "Algorithms of the Intelligent Web" by Haralambos Marmanis and Dmitry Babenko. There's a chapter on how to do this.
I don't think it is possible to completely automate this, but you could do most of it. The problem is where the topics would come from.
Extract a list of the least common words and phrases from each article and use those as tags.
Then I would make a list of topics, assign words and phrases that fall within each topic, and match those against the tags. The problem is that you might get more than one topic per article.
Perhaps the best way would be to use some form of Bayesian classifier to determine which topic best describes the article. It will require that you train the system initially.
This sort of technique is used to determine whether an email is spam or not.
This article might be of some help
I have designed a weighted graph using a normalized adjacency list in MySQL. Now I need to find the shortest path between two given nodes.
I have tried to use Dijkstra in PHP, but I couldn't manage to implement it (too difficult for me). Another problem I felt was that if I use Dijkstra I would need to consider all the nodes, which may be very inefficient in a large graph. So does anybody have code relating to the above problem? It would be great if somebody could at least show me a way of solving this problem. I have been stuck here for almost a week now. Please help.
This sounds like a classic case for the A* algorithm, but if you can't implement Dijkstra, I can't see you implementing A*.
A* on Wikipedia
edit: this assumes that you have a good way to estimate (but it is crucial you don't over-estimate) the cost of getting from one node to the goal.
edit2: you are having trouble with the adjacency-list representation. It occurs to me that if you create an object for each vertex in the map, then you can link directly to these objects wherever there is an edge. So what you'd have, essentially, is a list of objects that each contain a list of pointers (or references, if you will) to the nodes they are adjacent to. Now, if you want to access the path for a new node, you just follow the links. Be sure to maintain a list of the paths you've followed for a given vertex to avoid infinite cycles.
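A rough PHP sketch of that in-memory representation (names purely illustrative):

<?php
// One object per vertex, holding direct references to its neighbours.
class Vertex
{
    public $id;
    public $neighbours = [];   // neighbour id => ['vertex' => Vertex, 'weight' => float]

    public function __construct($id)
    {
        $this->id = $id;
    }

    public function connect(Vertex $other, $weight)
    {
        $this->neighbours[$other->id] = ['vertex' => $other, 'weight' => $weight];
    }
}

// Build the graph once from the DB, then walk it in memory by following the references.
$a = new Vertex('A');
$b = new Vertex('B');
$a->connect($b, 4.0);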
As far as querying the DB each time you need to access something: you're going to need to do this anyway. Your best hope is to only query the DB when you NEED to... this means only querying it when you want info on a specific edge in the graph, or for all edges of one vertex in the graph (the latter would likely be the better route), so you only hit the slow I/O once in a while rather than in gigantic chunks all at once.
Here is a literate version of Dijkstra's algorithm, in Java, that may help you figure out how to implement it in PHP.
http://en.literateprograms.org/Dijkstra%27s_algorithm_%28Java%29
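For comparison, here is a compact PHP sketch of the same algorithm using SplPriorityQueue over an adjacency-list array; note that SplPriorityQueue pops the highest priority first, so distances are negated. It assumes you have already loaded the adjacency list from MySQL into $graph.

<?php
// $graph: node => [neighbour => edge weight].
function dijkstra(array $graph, $source, $target): array
{
    $dist = [$source => 0];
    $prev = [];
    $queue = new SplPriorityQueue();
    $queue->insert($source, 0);

    while (!$queue->isEmpty()) {
        $u = $queue->extract();
        if ($u === $target) {
            break;                            // shortest path to target is final
        }
        foreach ($graph[$u] ?? [] as $v => $weight) {
            $alt = $dist[$u] + $weight;
            if (!isset($dist[$v]) || $alt < $dist[$v]) {
                $dist[$v] = $alt;
                $prev[$v] = $u;
                $queue->insert($v, -$alt);    // negate so the smallest distance comes out first
            }
        }
    }

    if (!isset($dist[$target])) {
        return ['distance' => INF, 'path' => []];   // no route found
    }

    $path = [$target];
    for ($node = $target; $node !== $source; $node = $prev[$node]) {
        array_unshift($path, $prev[$node]);
    }
    return ['distance' => $dist[$target], 'path' => $path];
}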
Dijkstra's algorithm returns the shortest paths from a given vertex to all other vertices.
You can find its pseudo-code on Wikipedia.
But I think you need the Floyd algorithm, which finds the shortest paths between all pairs of vertices in a DIRECTED graph.
The computational complexity of the two is pretty close.
I could find PHP implementations of both of them on the Wiki.
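A minimal PHP sketch of the Floyd algorithm over a weight matrix (assuming INF marks missing edges and 0 sits on the diagonal):

<?php
// Floyd-Warshall: all-pairs shortest distances in O(n^3).
// $w is an n x n matrix of edge weights, INF where there is no edge, 0 on the diagonal.
function floydWarshall(array $w): array
{
    $n = count($w);
    $d = $w;
    for ($k = 0; $k < $n; $k++) {
        for ($i = 0; $i < $n; $i++) {
            for ($j = 0; $j < $n; $j++) {
                if ($d[$i][$k] + $d[$k][$j] < $d[$i][$j]) {
                    $d[$i][$j] = $d[$i][$k] + $d[$k][$j];
                }
            }
        }
    }
    return $d;   // $d[$i][$j] is the shortest distance from $i to $j
}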
Hi, I am working on an online Tic-Tac-Toe game using the miniclip algorithm to calculate the best move. I found a few examples, but I really don't understand the miniclip logic. Some examples would be great.
Thanks!
For a game with such a small number of possible states as Tic-Tac-Toe, it's quite feasible to just build a tree of all possible game states and have your AI only take branches that don't end in a loss.
Beyond that, I think what you're looking for is called minimax, and there's an article here that explains a variation of it in the context of Tic-Tac-Toe.
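To give a flavour of minimax for Tic-Tac-Toe, here is a small PHP sketch; it assumes the board is a 9-element array holding 'X', 'O', or null, and that the AI plays 'O':

<?php
// Returns 'X' or 'O' if someone has won, otherwise null.
function winner(array $b): ?string
{
    $lines = [[0,1,2],[3,4,5],[6,7,8],[0,3,6],[1,4,7],[2,5,8],[0,4,8],[2,4,6]];
    foreach ($lines as [$i, $j, $k]) {
        if ($b[$i] !== null && $b[$i] === $b[$j] && $b[$j] === $b[$k]) {
            return $b[$i];
        }
    }
    return null;
}

// Minimax: the AI ('O') maximises the score, the opponent ('X') minimises it.
function minimax(array $b, string $player): int
{
    $w = winner($b);
    if ($w === 'O') return 1;
    if ($w === 'X') return -1;
    if (!in_array(null, $b, true)) return 0;          // draw

    $scores = [];
    foreach ($b as $i => $cell) {
        if ($cell === null) {
            $b[$i] = $player;
            $scores[] = minimax($b, $player === 'O' ? 'X' : 'O');
            $b[$i] = null;
        }
    }
    return $player === 'O' ? max($scores) : min($scores);
}

// Pick the move (0-8) with the best minimax score for 'O'.
function bestMove(array $b): int
{
    $bestScore = -2;
    $bestMove = -1;
    foreach ($b as $i => $cell) {
        if ($cell === null) {
            $b[$i] = 'O';
            $score = minimax($b, 'X');
            $b[$i] = null;
            if ($score > $bestScore) {
                $bestScore = $score;
                $bestMove = $i;
            }
        }
    }
    return $bestMove;
}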
I guess a decision tree, or more precisely a game tree, is what you're looking for.