I am working on a mobile web site for an MS Bike event. I already have geo code for tagging email requests, and a check-in site to check riders in to a location based on their position. I would like to add the distance to the next rest stop / finish. I know how to figure out the distance between two locations, but all my research on this points to letting Google provide the route. Since this is an event, the riders ride a predetermined route.
Does anyone have any ideas on how to tackle this? I have the lat/long of the route (each corner and turn), and I have it in KML format.
If the resolution of the way-points is fine enough, I see two cases: the nearest way-point is either the next point ahead or the previous point behind.
So if you calculate the distance not only to the nearest point but also to its previous and next neighbours, you should be able to decide which one is the next.
As written, this requires that the resolution between the points is good enough; e.g. if you have a course with a 180-degree curve, things no longer evaluate that well.
The solution is to have enough way-points in those areas. So this might or might not be suitable for your problem. Sorry for the trashy graphics, but I hope they illustrate this well enough. The concept is also a bit rough, but probably good enough to create a first mock-up.
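To make the concept concrete, here is a rough JavaScript sketch, assuming an ordered array of route way-points and a straight-line distance helper (e.g. haversine). The "passed vs. ahead" decision rule is one possible interpretation of the idea above, not the only one:

    // Rough sketch: given the ordered route way-points and the rider's
    // position, decide which way-point is next, then sum the remaining
    // route distance. distance(a, b) is an assumed straight-line
    // (e.g. haversine) helper; points are { lat, lng } objects.
    function distanceToFinish(route, rider, distance) {
      // 1. Find the nearest way-point.
      let nearest = 0;
      for (let i = 1; i < route.length; i++) {
        if (distance(rider, route[i]) < distance(rider, route[nearest])) nearest = i;
      }

      // 2. Decide whether the nearest point is still ahead or already passed:
      // if the rider is closer to the following point than the nearest
      // way-point itself is, the rider has passed the nearest point.
      let next = nearest;
      if (nearest + 1 < route.length &&
          distance(rider, route[nearest + 1]) < distance(route[nearest], route[nearest + 1])) {
        next = nearest + 1;
      }

      // 3. Distance to the next way-point, plus the rest of the route.
      let total = distance(rider, route[next]);
      for (let i = next; i < route.length - 1; i++) {
        total += distance(route[i], route[i + 1]);
      }
      return total;
    }

Distance to the next rest stop falls out of the same loop: stop summing when you reach the way-point flagged as a rest stop.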
I am making a routing application on Android where the user can input the number of hours available for travel, and my application outputs a possible route the user can take.
I am using a genetic algorithm (GA) to generate the route, and I use PHP to execute my GA.
Here comes the problem: for the routing to be effective, I need to know the distance between each pair of cities to verify that the route is possible and the distance is minimized. How can I store the distances between cities to make execution faster? I have tried getting the distance directly from the Google Maps API, but the execution takes too long.
I was thinking of storing the distances in a JSON file, but is that possible? Or is there a more effective way?
Note that the destinations are dynamic. Users can add a new destination, so whenever there is a new destination the distance matrix needs to be updated.
Please help me :) Thank you.
You know the initial position of the user and want to know the distances to different destinations. I suggest you use a deterministic single-source shortest path algorithm like Dijkstra instead of an evolutionary algorithm. An implementation based on a min-priority queue backed by a Fibonacci heap runs in O(E log V), where E is the number of edges and V is the number of vertices. It runs much faster than a genetic algorithm and finds the best answer instead of an approximate one. It also finds the nearest destinations first, which suits your case.
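For illustration, here is a minimal JavaScript sketch of Dijkstra over a city adjacency list. It uses a plain linear scan where a binary or Fibonacci heap would go in a real implementation, and the graph shape is just an assumption:

    // Minimal Dijkstra sketch: graph is an adjacency list of the form
    // { city: [{ to, dist }, ...] }. A real implementation would use a
    // binary or Fibonacci heap instead of the linear scan below.
    function dijkstra(graph, source) {
      const dist = {};
      const visited = new Set();
      for (const node of Object.keys(graph)) dist[node] = Infinity;
      dist[source] = 0;

      while (visited.size < Object.keys(graph).length) {
        // Pick the unvisited node with the smallest tentative distance.
        let current = null;
        for (const node of Object.keys(graph)) {
          if (!visited.has(node) && (current === null || dist[node] < dist[current])) {
            current = node;
          }
        }
        if (current === null || dist[current] === Infinity) break;
        visited.add(current);

        // Relax all outgoing edges of the chosen node.
        for (const edge of graph[current]) {
          const candidate = dist[current] + edge.dist;
          if (candidate < dist[edge.to]) dist[edge.to] = candidate;
        }
      }
      return dist; // shortest distance from source to every city
    }

    // Hypothetical usage: distances in km between three cities.
    const graph = {
      A: [{ to: 'B', dist: 5 }, { to: 'C', dist: 10 }],
      B: [{ to: 'A', dist: 5 }, { to: 'C', dist: 3 }],
      C: [{ to: 'A', dist: 10 }, { to: 'B', dist: 3 }],
    };
    console.log(dijkstra(graph, 'A')); // { A: 0, B: 5, C: 8 }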
I am not skilled in the world of statistics, so I hope this will be easy for someone; my lack of skill also made it very hard to find the right search terms on this topic, so I may have missed an existing answer while searching. Anyway: I am looking at arrays of data, CPU usage for example. How can I capture accurate information in as few data points as possible for, say, one hour of CPU usage sampled at 1-second intervals, where the first 30 minutes are 0% and the second 30 minutes are 100%? Right now, the only single data point I can think of is the mean, which is 50% and not useful at all in this case. Another case is when the usage graph looks like a wave, bouncing evenly between 0 and 100, yet still giving a mean of 50%. How can I capture this data? Thanks.
If I understand your question, it is really more of a statistics question than a programming question. Do you mean, what is the best way to capture a population curve with the fewest variables possible?
Firstly, most standard statistics assume that the system is more or less stable (although if the system is unstable, the numbers you get will let you know, because they will be nonsensical).
The main measures you need to know statistically are the mean, the population size, and the standard deviation. From these, you can calculate the rough bell curve describing the population, and judge the accuracy of that curve based on the scale of the standard deviation.
This gives you a three variable schema for a standard bell curve.
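As a small illustration (plain JavaScript, sample values made up): all of the questioner's cases have a mean of 50, but the standard deviation separates a flat 50% load from a 0/100 square wave, and even a square wave from a sine wave:

    // Summarize a sample array by the three-variable schema above:
    // population size, mean, and standard deviation.
    function summarize(samples) {
      const n = samples.length;
      const mean = samples.reduce((sum, x) => sum + x, 0) / n;
      const variance = samples.reduce((sum, x) => sum + (x - mean) ** 2, 0) / n;
      return { n, mean, stdDev: Math.sqrt(variance) };
    }

    const flat   = Array.from({ length: 3600 }, () => 50);
    const square = Array.from({ length: 3600 }, (_, i) => (i < 1800 ? 0 : 100));
    const wave   = Array.from({ length: 3600 }, (_, i) => 50 + 50 * Math.sin(i / 60));

    console.log(summarize(flat));   // mean 50, stdDev 0
    console.log(summarize(square)); // mean 50, stdDev 50
    console.log(summarize(wave));   // mean ≈ 50, stdDev ≈ 35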
If you want to get into further detail, you can add Cpk and Ppk, which are calculated fields.
Otherwise, you may need to get into non-linear regression and curve fitting which is best handled on a case by case basis (not great for programming).
Check out the following sites for calculating the Cp, Cpk, Pp and Ppk:
http://www.qimacros.com/control-chart-formulas/cp-cpk-formula/
http://www.macroption.com/population-sample-variance-standard-deviation/
I am designing a web app where I need to determine which places listed in my DB are within the user's driving distance.
Here is a broad overview of the process I am currently using:
Get the user's current location via Google's Maps API.
Run through each place in my database (approx. 100), checking whether the place is within the user's driving distance using the Google Places API. I return and parse the JSON with PHP to see if any locations exist given the user's coordinates.
If a place is within the user's driving distance, display the top locations (limited to 20 by Google Places); otherwise don't display it.
This process works fine when I am running through a handful of places, but running through 100 places is much slower and makes 100 API calls. With Google's current limit of 100,000 calls per day, this could become an issue down the road.
So is there a better way to determine which places in my database are within a user's driving distance? I do not want to keep track of addresses in my DB; I want to rely on Google for that.
Thanks.
You can use the formula found here to calculate the distance between zip codes:
http://support.sas.com/kb/5/325.html
This is not precise (doorstep to doorstep), but you can calculate the distance from the user's zip code to the location's zip code.
Using this method, you won't even have to hit Google's API.
I have an unconventional idea for you. It will seem very, very odd when you think about it for the first time, as it works in exactly the opposite order of what you would expect. However, you might see the logic.
In order to put it in action, you'll need a broad category of stuff that you want the user to see. For instance, I'm going to go with "supermarkets".
There is a wonderful API as part of Google Places called nearbySearch. Its true wonder is that it allows you to rank places by distance. We will make use of this.
Pre-requisites
Modify your database to store the unique ID returned for nearbySearch places. This isn't against the ToS, and we'll need it.
Get a list of those IDs.
The plan
When you get the user's location, query nearbySearch for your category, and loop through results with the following constraints:
If the result's ID matches something in your database, you have that result. Bonus #1: it's sorted by distance, ascending! Bonus #2: you already get the lat/long for it!
If the result's ID does not match, you can either silently skip it or use it and add it to your database. This means that you can quite literally update your database on-the-fly with little to no manual work as an added bonus.
When you have run through the results, you will be left with IDs that never came up. Calculate the point-to-point distance of the furthest result in Google's data and you will have the maximum distance from your point. If this is too small, use the technique I described here to do a compounded search.
The only requirement is that you need to know roughly what you are searching for. However, consider this: your normal query cycle takes anywhere between 1 and 100 Google queries. My method takes 1 for a 50 km radius. :-)
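A minimal sketch of the matching step, assuming the JavaScript Maps API with the Places library loaded and a knownIds set built from the IDs stored in your database (findKnownPlacesNearby and onMatch are hypothetical names):

    // Query nearbySearch ranked by distance and match results against
    // the place IDs already in your database.
    function findKnownPlacesNearby(map, userLatLng, knownIds, onMatch) {
      const service = new google.maps.places.PlacesService(map);
      service.nearbySearch(
        {
          location: userLatLng,
          rankBy: google.maps.places.RankBy.DISTANCE, // sorted nearest-first
          type: 'supermarket', // the broad category you decided on
        },
        (results, status) => {
          if (status !== google.maps.places.PlacesServiceStatus.OK) return;
          // Note: only the first page of results; use pagination for more.
          for (const place of results) {
            if (knownIds.has(place.place_id)) {
              // Bonus: coordinates come for free, sorted by distance.
              onMatch(place.place_id, place.geometry.location);
            }
            // Unknown IDs could be skipped here, or inserted into the
            // database on-the-fly as described above.
          }
        }
      );
    }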
To calculate distances, you will need the haversine formula rather than a zip code lookup, by the way. This has the added advantage of being truly international.
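For completeness, a standard haversine implementation (the Earth radius and units are assumptions; here km):

    // Haversine great-circle distance between two lat/long points, in km.
    function haversineKm(lat1, lon1, lat2, lon2) {
      const toRad = (deg) => (deg * Math.PI) / 180;
      const R = 6371; // mean Earth radius in km
      const dLat = toRad(lat2 - lat1);
      const dLon = toRad(lon2 - lon1);
      const a =
        Math.sin(dLat / 2) ** 2 +
        Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
      return 2 * R * Math.asin(Math.sqrt(a));
    }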
Important caveats
This search method depends directly on the trade-off between the places you know about and the search distance. For radii under about 10 km, this method will generate only one request.
If, however, you have to do compounded searching, bear in mind that each request cycle will cost you 3N, where N is the number of queries generated on the last cycle. Therefore, if you only have 3 places in a 100km radius, it makes more sense to look up each place individually.
I was thinking about an idea for auto-generated answers; well, the answer would actually be a URL instead of an actual answer, but that's not the point.
The idea is this:
On our app we've got a reporting module which basically shows page views, clicks, conversions, and details about visitors such as where they're from - pretty much similar to Google Analytics, but much more simplified.
And now I was thinking: instead of making users select things like countries, traffic sources, etc. from dropdown menus (those features would still be available), it would be pretty cool to let them type in questions which would resolve to a link to the expected part of the report. An example:
How many conversions did I have from Japan on variant 3? (One page can have many variants.)
would result in:
/campaign/report/filter/campaign/(current campaign id they're on)/country/Japan/variant/3/
It doesn't seem too hard to do it myself, but it's just that it would take quite a while to make it accurate enough.
I've tried googling but had no luck finding an existing script, so maybe you know of something like my idea that's open source and reliable/flexible enough to suit my needs.
Thanks!
You are talking about natural language processing - an artificial intelligence topic. This can never be perfect, and it eventually boils down to the system responding only to a finite number of permutations of a question.
That said, if that is fine with you - then you simply need to identify "tokens". For example,
how many - evaluate to a count
conversions - evaluate to all "conversions"
from - apply a filter...
japan - ...using Japan
etc.
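A toy JavaScript sketch of that token mapping; the token table, URL scheme, and field names are hypothetical, modelled on the /campaign/report/filter/... example from the question:

    // Map recognized tokens to pieces of a report query.
    const TOKENS = {
      'how many': (q) => { q.metric = 'count'; },
      'conversions': (q) => { q.subject = 'conversions'; },
      'japan': (q) => { q.filters.country = 'Japan'; },
    };

    function parseQuestion(text) {
      const query = { metric: null, subject: null, filters: {} };
      const lower = text.toLowerCase();
      for (const [token, apply] of Object.entries(TOKENS)) {
        if (lower.includes(token)) apply(query);
      }
      // "variant 3"-style tokens need a small pattern instead of a lookup.
      const variant = lower.match(/variant[^0-9]*(\d+)/);
      if (variant) query.filters.variant = variant[1];
      return query;
    }

    // Build the report URL from whatever tokens were recognized.
    function toUrl(campaignId, q) {
      let url = '/campaign/report/filter/campaign/' + campaignId;
      for (const [key, value] of Object.entries(q.filters)) {
        url += '/' + key + '/' + value;
      }
      return url;
    }

    const q = parseQuestion('How many conversions I had from Japan on variant 3');
    console.log(toUrl(42, q));
    // => /campaign/report/filter/campaign/42/country/Japan/variant/3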
I'm stuck here again. I have a database with over 120,000 coordinates that I need displayed on a Google Map integrated into my application. The thing is - and I've found this out the hard way - simply looping through all of the coordinates, creating an individual marker for each, and adding it using the addOverlay function is killing the browser. So that definitely has to be the wrong way to do this. I've read a bit on clustering and zoom-level bunching; I understand that there's no point in rendering all of the markers, especially since most of them won't be seen in non-rendered parts of the map, but I have no idea how to get this to work.
How do I fix this? Please guys, I need some help here :(
There is a good comparison of various techniques here: http://www.svennerberg.com/2009/01/handling-large-amounts-of-markers-in-google-maps/
However, given your volume of markers, you definitely want a technique that only renders the markers that should be seen in the current view (assuming that number is modest - if not there are techniques in the link for doing sensible things)
If you really have more than 120,000 items, there is no way that any of the client-side clusterers or managers will work. You will need to handle the markers server-side.
There is a good discussion here with some options that may help you.
Update: I've posted this on SO before, but this tutorial describes a server-side clustering method in PHP. It's meant to be used with the Static Maps API, but I've built it so that it will return clustered markers whenever the view changes. It works pretty well, though there is a delay in transferring the markers whenever the view changes. Unfortunately I haven't tried it with more than 3,000 markers - I don't know how well it would handle 120,000. Good luck!
I've not done any work with Google maps specifically but many moons ago, I was involved in a project which managed a mobile workforce for a large Telco.
They had similar functionality, in that they had maps they could zoom in on for their allocated jobs (local to the machine rather than over the network), and we solved a problem that sounds very similar to yours. Points of interest on the maps were called landmarks and were indicated by small markers on the map called landmark pointers, which the worker could select to get a textual description.
At the minimum zoom, there would have been a plethora of landmark pointers, making the map useless. We made a command decision to limit the landmark pointers to a smaller number (400). In order to do that, the map was divided into a 20x20 matrix no matter what the zoom level, which gave us 400 matrix elements.
Then, if a landmark shared the same matrix element as another, the application combined them and generated a single landmark pointer with the descriptive text containing the text of all the landmarks in that matrix element.
That way there were never more than 400 landmark pointers. As the minion zoomed in, the landmark pointers were regenerated and landmarks could end up in different matrix elements - in that case, they were no longer combined with other landmarks.
Similarly, zooming out sometimes merged two or more landmarks into a single landmark pointer.
That sounds like what you're trying to achieve with "clustering or zoom level bunching" although, as I said, I have little experience with Google Maps itself so I'm not sure this is possible. But given Google's reputation, I suspect it is.
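For reference, here is a rough JavaScript sketch of that 20x20 matrix approach (the field names and the bounds shape are assumptions):

    // Sketch of the grid clustering described above: bucket points into
    // at most gridSize x gridSize cells for the current viewport, then
    // emit one combined "landmark pointer" per occupied cell.
    function gridCluster(points, bounds, gridSize = 20) {
      const cells = new Map();
      const latSpan = bounds.north - bounds.south;
      const lngSpan = bounds.east - bounds.west;

      for (const p of points) {
        // Skip points outside the current view.
        if (p.lat < bounds.south || p.lat > bounds.north ||
            p.lng < bounds.west || p.lng > bounds.east) continue;

        const row = Math.min(gridSize - 1,
          Math.floor(((p.lat - bounds.south) / latSpan) * gridSize));
        const col = Math.min(gridSize - 1,
          Math.floor(((p.lng - bounds.west) / lngSpan) * gridSize));
        const key = row * gridSize + col;
        if (!cells.has(key)) cells.set(key, []);
        cells.get(key).push(p);
      }

      // One pointer per occupied cell; its text combines all members.
      return [...cells.values()].map((members) => ({
        lat: members.reduce((s, p) => s + p.lat, 0) / members.length,
        lng: members.reduce((s, p) => s + p.lng, 0) / members.length,
        description: members.map((p) => p.description).join('; '),
      }));
    }

Re-running gridCluster on every zoom or pan regenerates the pointers, which is exactly the merge/split behaviour described above.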
I suggest that you use a marker manager class such as this one along with your existing code. A marker manager class allows you to manage thousands of markers and optimizes memory usage. There is a variety of marker managers (there is not just one), and I suggest you Google a bit.
Here is a non-cluster solution if you want to display hundreds or even thousands of markers very quickly. You can use a combination of OverlayView and DocumentFragment.
http://nickjohnson.com/b/google-maps-v3-how-to-quickly-add-many-markers
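In outline, the approach from that article looks something like the following sketch (v3 Maps API; the class name and CSS hook are made up, and the real article handles repositioning more carefully):

    // Sketch of the OverlayView + DocumentFragment idea: build all marker
    // elements off-DOM in a fragment, then attach them in one operation.
    // Assumes the Google Maps JavaScript API (v3) is loaded.
    function FastMarkerOverlay(map, points) {
      this.points = points;
      this.setMap(map); // triggers onAdd/draw
    }
    FastMarkerOverlay.prototype = new google.maps.OverlayView();

    FastMarkerOverlay.prototype.onAdd = function () {
      this.div = document.createElement('div');
      this.div.style.position = 'absolute';
      this.getPanes().overlayLayer.appendChild(this.div);
    };

    FastMarkerOverlay.prototype.draw = function () {
      const projection = this.getProjection();
      const fragment = document.createDocumentFragment();
      this.div.innerHTML = ''; // redraw from scratch on pan/zoom

      for (const p of this.points) {
        const px = projection.fromLatLngToDivPixel(
          new google.maps.LatLng(p.lat, p.lng));
        const el = document.createElement('div');
        el.className = 'tiny-marker'; // styled via CSS as a small dot
        el.style.position = 'absolute';
        el.style.left = px.x + 'px';
        el.style.top = px.y + 'px';
        fragment.appendChild(el); // off-DOM, so no reflow per marker
      }
      this.div.appendChild(fragment); // one DOM insertion for everything
    };

    FastMarkerOverlay.prototype.onRemove = function () {
      this.div.remove();
    };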
If only there were something more powerful than JS for this...
Ok, enough sarcasm : ).
Have you used the Flash Maps API? Kevin Macdonald has successfully used it to cluster not 120K markers but 1,000,000 markers. Check out the Million Marker Map:
http://www.spatialdatabox.com/million-marker-map/million-marker-map.html
Map responsiveness is pretty much un-affected in this solution. If you are interested you can contact him here: http://www.spatialdatabox.com/sdb-contact-sales.html
Try this one:
http://googlegeodevelopers.blogspot.com/2009/04/markerclusterer-solution-to-too-many.html
It's an old question that already has many answers, but Stack Overflow also serves as a reference, so I hope this helps anyone searching for the same problem.
There is a fairly simple solution: use an HTML5 canvas. Though it sounds strange, it's the fastest way to load up to 10,000 markers as well as labels - which I am sure no browser can handle as normal markers. These are not conventional markers, but lightweight ones.
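A bare-bones sketch of the canvas idea: one canvas element and one draw pass instead of thousands of DOM nodes. The latLngToPixel projection helper is assumed to be supplied by the caller, e.g. from an OverlayView like the one sketched earlier:

    // Draw every point as a small dot on a single <canvas>.
    function drawMarkers(canvas, points, latLngToPixel) {
      const ctx = canvas.getContext('2d');
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.fillStyle = '#d33';
      for (const p of points) {
        const { x, y } = latLngToPixel(p.lat, p.lng); // map projection
        ctx.beginPath();
        ctx.arc(x, y, 3, 0, 2 * Math.PI); // one 3px dot per marker
        ctx.fill();
      }
    }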