We have sets of rough coordinates for a few hundred road routes across the UK.
The waypoints along the routes are often 50 metres apart. This means that when we draw a line through the coordinates (our software limits us to straight lines), the lines sometimes cut across roads, buildings, etc.
The plan is to create a PHP script that will run the coordinates through something which will return close, nicely placed road coordinates and insert them into our database, essentially replacing our spread out, 50 metre apart coordinates.
The only technology we've found that can do this is the Google Maps Directions API. If we pass the waypoints along the route, Google can return a perfect road route, with the coordinates of each step on a particular leg/straight of the route.
We'd go ahead and do this now if we weren't uncertain about this being allowed.
https://developers.google.com/maps/faq
We have read through the Google Maps FAQ and can't find anything about caching the road coordinates. We don't want to do anything that would breach the terms of service as our application heavily relies on other Google APIs and Google Maps itself.
How should we continue? If this is breaking Google's terms of service, couldn't we just randomize the coordinates they return slightly so they're not the same? How could it be proved we've broken them then?
I am working on a mobile web site for an MS Bike event. I already have geocoding for tagging email requests, and a check-in site to check riders in to a location based on their position. I would like to add the distance to the next rest stop / finish. I know how to figure out the distance between two locations, and all my research on this points to letting Google provide the route. But since this is an event, there is a predetermined route that the riders ride.
Does anyone have any ideas on how to tackle this? I have the Lat/Long of the routes (each corner and turn) and I have it in a kml format.
If the resolution of the way-points is fine enough, I can see two cases: the nearest way-point is either the next point or the previous point:
So if you calculate the distance not only to the nearest point but also to the ones before and after it, you should be able to simply decide which one is next.
As written, this requires that the resolution between the points is good enough. E.g. if you have a course with a 180 degree curve things don't evaluate that well any longer:
The solution is to have enough way-points in these areas then. So this might or might not be suitable for your problem. Sorry for the trashy graphics, but I hope they illustrate this well enough. Also the concept is a bit rough, but probably good enough to create a first mock.
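A minimal sketch of that idea in Python (the question doesn't name a language; function and variable names are made up for illustration). It assumes planar coordinates for simplicity — for real lat/lng pairs you would substitute a haversine distance:

```python
import math

def dist(a, b):
    # Planar approximation; replace with haversine for real lat/lng.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_waypoint(route, pos):
    """Return the index of the next waypoint ahead of the rider.

    The rider sits between the nearest waypoint and one of its
    neighbours; whichever neighbour is closer tells us which side
    of the nearest point the rider is on.
    """
    nearest = min(range(len(route)), key=lambda i: dist(pos, route[i]))
    if nearest == 0:
        return 1
    if nearest == len(route) - 1:
        return nearest
    if dist(pos, route[nearest + 1]) < dist(pos, route[nearest - 1]):
        return nearest + 1   # already past the nearest point
    return nearest           # not yet reached the nearest point

def distance_remaining(route, pos):
    """Distance from the rider to the end, following the route."""
    n = next_waypoint(route, pos)
    total = dist(pos, route[n])
    for i in range(n, len(route) - 1):
        total += dist(route[i], route[i + 1])
    return total
```

The same lookup works for "distance to the next rest stop": stop summing once you hit the rest stop's index instead of the end of the route.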
I am currently developing a kind of Google Maps overview widget that displays locations as markers on the map. The number of markers varies from several hundred up to thousands (10,000 and up). Right now I am using MarkerClusterer for Google Maps v3 1.0 and the Google Maps JavaScript API v3 (Premier), and it works pretty decently for, let's say, a hundred markers. Since the number of markers will increase, I need a new way of clustering them. From what I've read, the only way to keep performance up is to move the clustering from the client side to the server side. Does anyone know a good PHP5 library which is able to get this done for me?
At the moment I am digging deeper into the layer mechanisms of Google Maps. Maybe there are also a few leading PHP libraries I could start to check out? I also ran across Fusion Tables, but since I need clustering I think this might not be the right solution.
Thanks in advance!
I don't know of a server-side library that'll do the job for you. I can however give you some pointers on how to implement one yourself.
The basic approach to clustering is simply to calculate the distance between your markers and when two of them are close enough you replace them with a single marker located at the mid-point between the two.
Instead of just having a limitation on how close to each other markers may be, you may also (or instead) choose to limit the number of clusters/markers you want as a result.
To accomplish this you could calculate the distance between all pairs of markers, sort them, and then merge from the top until you only have as many markers/clusters as you wish.
To refine the mid-point positioning when forming a cluster you may take into account the number of actual markers represented by each of the two to be merged. Think of that number as a weight and the line between the two markers as a scale. Then instead of always choosing the mid-point, choose the point that would balance the scale.
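A minimal sketch of that weighted merge in Python (names are illustrative, not from any library):

```python
def merge_clusters(c1, c2):
    """Merge two clusters, each a ((x, y), weight) pair, placing the
    result at the weighted balance point between them rather than
    the plain mid-point."""
    (x1, y1), w1 = c1
    (x2, y2), w2 = c2
    w = w1 + w2
    return ((x1 * w1 + x2 * w2) / w, (y1 * w1 + y2 * w2) / w), w
```

For example, merging a 3-marker cluster at (0, 0) with a single marker at (4, 0) places the result at (1, 0) — closer to the heavier side — instead of the naive mid-point (2, 0).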
I'd guess that this simple form of clustering is good enough if you have a limited number of markers. If your data set (number of markers and their positions) is roughly static, you can calculate the clustering on the server once in a while, cache it, and serve clients directly from the cache.
However, if you need to support large scale scenarios potentially with markers all over the world you'll need a more sophisticated approach.
The clustering algorithm mentioned above does not scale. In fact its computational cost typically grows quadratically with the number of markers, since every pair has to be compared.
To remedy this you could split the world into partitions and calculate clustering and serve clients from each partition. This would indeed support scaling since the workload can be split and performed by several (roughly) independent servers.
The question then is how to find a good partitioning scheme. You may also want to consider providing different clustering of markers at different zoom levels, and your partitioning scheme should incorporate this as well to allow scaling.
Google divide the map into tiles with x, y and z-coordinates, where x and y are the horizontal and vertical position of the tile starting from the north-west corner of the map, and where z is the zoom level.
At the minimum zoom level (zero) the entire map consists of a single tile. (All tiles are 256x256 pixels.) At the next zoom level that tile is divided into four sub tiles. This continues, so that at zoom level 2 each of those four tiles has been divided into four sub tiles, which gives us a total of 16 tiles. Zoom level 3 has 64 tiles, level 4 has 256 tiles, and so on. (The number of tiles at any zoom level can be expressed as 4^z.)
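Finding which tile a marker belongs to at a given zoom level is the standard Web Mercator projection; a Python sketch:

```python
import math

def latlng_to_tile(lat, lng, z):
    """Return the (x, y) tile containing a lat/lng at zoom level z,
    counted from the north-west corner (standard Web Mercator)."""
    n = 2 ** z  # tiles per axis, so 4^z tiles in total
    x = int((lng + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad))
             / math.pi) / 2.0 * n)
    return x, y
```

At zoom 0 every marker lands in the single tile (0, 0); each extra zoom level doubles the tile count along each axis.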
Using this partitioning scheme you could calculate clustering per tile starting at the deepest zoom level (highest z-coordinate), bubbling up until you reach the top.
The set of markers to be clustered for a single tile is the union of all markers (some of which may represent clusters) of its four sub tiles.
This gives you a limited computational cost and also gives you a nice way of chunking up the data to be sent to the client. Instead of requesting all markers for a given zoom level (which would not scale) clients can request markers on a tile-by-tile basis as they are loaded into the map.
There is however a flaw in this approach: Consider two adjacent tiles, one to the left and one to the right. If the left tile contains a marker/cluster at its far right side and the right tile contains a marker/cluster at its far left side, then those two markers/clusters should be merged but won't be since we're performing the clustering mechanism for each tile individually.
To remedy this you could post-process tiles after they have been clustered so that you merge markers/clusters that lie on each of the four edges, taking into account each of the eight adjacent tiles of a given tile. This post-merging mechanism only works if we can assume that no single cluster is large enough to affect markers that are not in the same sub tile. This is, however, a reasonable assumption.
As a final note: With the scaled out approach you'll have clients making several small requests. These requests will have locality (i.e. tiles are not randomly requested, but instead tiles that are geographically close to each other are also typically accessed together).
To improve lookup/query performance you would benefit from using search keys (representing the tiles) that also have this locality property (since this would store data for adjacent tiles in adjacent data blocks on disk - improving read time and cache utilization).
You can form such a key using the tile/sub tile partitioning scheme. Let the top tile (the single one spanning the entire map) have the empty string as key. Next, let each of its sub tiles have the keys A, B, C and D. The next level would have the keys AA, AB, AC, AD, BA, BB, ..., DC, DD.
Apply this recursively and you'll end up with a partitioning key that identifies your tiles, allows quick transformation to x,y,z-coordinates and has the locality property. This key naming scheme is sometimes called a Quad Key stemming from the fact that the partitioning scheme forms a Quad Tree. The locality property is the same as you get when using a Z-order curve to map a 2D-value into a 1D-value.
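A sketch of that key scheme in Python, using the A–D letters from the description above (Bing's quadkeys use the digits 0–3 in exactly the same way):

```python
def quad_key(x, y, z):
    """Build the partition key for tile (x, y) at zoom level z.
    Each letter picks one quadrant per level, from the top down:
    A = NW, B = NE, C = SW, D = SE."""
    letters = "ABCD"
    key = ""
    for level in range(z, 0, -1):
        bit = 1 << (level - 1)
        quadrant = (2 if y & bit else 0) + (1 if x & bit else 0)
        key += letters[quadrant]
    return key
```

Keys that share a prefix are sub tiles of the same parent, which is what gives geographically adjacent tiles nearby keys when stored in sorted order.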
Please let me know if you need more details.
This article has some PHP examples for marker clustering:
http://www.appelsiini.net/2008/11/introduction-to-marker-clustering-with-google-maps
You could try my free clustering app. It can handle more pins than the client-side Google Maps API. It offers k-means and grid-based clustering.
https://github.com/biodiv/anycluster
I have a website with around half a million geocoded locations in a database. I want people to be able to search for these via a map. Obviously, that's far too many for a standard Google (or, for that matter, Bing) map display, even when using something like MarkerClusterer.
What I want to do, therefore, is dynamically load the map data as people scroll around on the map so that there are never too many icons, or too much data, loaded at once. Here's an example of a site which already does something like this:
http://www.globrix.com/property/buy/wr11%203dl?ns=true&rd=1&hits=10&br=buy&qt=wr11+3dl&keyword_field=
Unfortunately, I'm not a skilled enough javascript programmer to reverse engineer that code! So I was hoping that there might be an open source project which I can use or adapt instead.
I've mostly used Google maps in the past (and the site currently uses Google maps for small-area search), but I'd be equally happy with Bing if that's easier. The backend is all in PHP.
Any suggestions?
Listen for the 'idle' event on the Map.
You'll want to do some sort of spatial query, using the bounds of the Map.
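Server-side, that spatial query boils down to a bounding-box filter on the viewport the client sends along with the request. A minimal Python sketch (names are illustrative; it assumes the viewport does not cross the antimeridian, and in production this condition would live in an indexed SQL WHERE clause rather than a loop):

```python
def markers_in_bounds(markers, sw, ne):
    """Return only the markers inside the current viewport.
    markers: iterable of (lat, lng) pairs;
    sw, ne: the south-west and north-east corners of the map bounds."""
    (s, w), (n, e) = sw, ne
    return [(lat, lng) for lat, lng in markers
            if s <= lat <= n and w <= lng <= e]
```

On every 'idle' event the client re-sends its bounds and receives only the markers (or clusters) that are currently visible.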
Also, consider using Fusion Tables:
http://google.com/fusiontables
I'm connecting to the Google Maps API from PHP to geocode some starting points for a rental station locator application.
Those starting points don't have to be exact addresses; city names are enough. Geocoding responses with an accuracy equal to or greater than 4 (city/locality level) are used as starting points, and the surrounding rental stations are searched.
The application is supposed to work in Germany. When a locality name is ambiguous (i.e. there is more than one place of that name) I want to display a list of possibilities.
That is not a problem in general: If you make an ambiguous search, Google's XML output returns a list of <PlaceMark> elements instead of just one.
Obviously, I need to bias the geocoding towards Germany, so if somebody enters a postcode or the name of a locality that exists in other countries as well, only hits in Germany actually come up.
I thought I could achieve this by adding , de or , Deutschland to the search query. This works mostly fine, but produces bizarre and intolerable results in some cases.
There are, for example, 27 localities in Germany named Neustadt. (Wikipedia)
When I search for Neustadt alone:
http://maps.google.com/maps/geo?hl=de&output=xml&key=xyz&q=Neustadt
I get at least six of them, which I could live with (it could be that the others are not incorporated, or parts of a different locality, or whatever).
When, however, I search for Neustadt, de, or Neustadt, Deutschland, or Neustadt, Germany, I get only one of the twenty-seven localities, for no apparent reason - it is not the biggest, nor is it the most accurate, nor does it have any other unique characteristics.
Does anybody know why this is, and what I can do about it?
I tried the region parameter, but to no avail - when I do not use , de, postcodes (like 50825) will be resolved to their US counterparts, not the German ones.
My current workaround idea is to add the country name when the input is numeric only, and otherwise filter out only German results, but this sounds awfully kludgy. Does anybody know a better way?
This is definitely not an exhaustive answer, but just a few notes:
You are using the old V2 version of the Geocoding API, which Google has recently deprecated in favour of the new V3 API. Google suggests using the new service from now on, and while I have no real experience with the new version, it seems that they have improved the service on various points, especially the structure of the response. You do not need an API key to use the new service, and you simply need to use a slightly different URL:
http://maps.google.com/maps/api/geocode/xml?address=Neustadt&sensor=false
You mentioned that you were filtering for placemarks on their accuracy property. Note that this field does not appear anymore in the results of the new Geocoding API, but in any case, I think it was still not very reliable in the old API.
You may want to try to use the bounds and region parameters of the new API, but I still doubt that they will solve your problem.
I believe that the Google Geocoder is a great tool for when you give it a full address in at least a "street, locality, country" format, and it is also very reliable in other formats when it doesn't have to deal with any ambiguities (Geocoding "London, UK" always worked for me).
However, in your case, I would really consider pre-computing all the coordinates of each German locality and simply handle the geocoding yourself from within your database. I think this is quite feasible especially since your application is localized to just one country. Each town in the Wikipedia "List of German Towns" appears to have the coordinates stored inside a neat little <span> which looks very easy to parse:
<span class="geo">47.84556; 8.85167</span>
There are sixteen Neustadts in that list, which may be better than Google's six :)
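Extracting those coordinate spans is a short regex; a Python sketch (for anything beyond this one fixed pattern a proper HTML parser would be safer):

```python
import re

# Matches Wikipedia-style coordinate spans like
# <span class="geo">47.84556; 8.85167</span>
GEO_SPAN = re.compile(
    r'<span class="geo">([-+]?[\d.]+);\s*([-+]?[\d.]+)</span>')

def parse_geo_spans(html):
    """Return a list of (lat, lng) floats for every geo span found."""
    return [(float(lat), float(lng)) for lat, lng in GEO_SPAN.findall(html)]
```

Run once over the scraped list pages, the result can be loaded into your database and geocoded against locally, with no API quota or ambiguity issues.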
I found that Google Autocomplete works better than their geocoder:
http://code.google.com/apis/maps/documentation/javascript/places.html#places_autocomplete
I found this question while searching for this exact issue, then realized the Bing Maps API works much better. It has its quirks though. For example, if you pass an airport code, it will show you six different results, one for each terminal of the airport.
I'm stuck here again. I have a database with over 120,000 coordinates that I need displayed on a Google Map integrated into my application. The thing is, as I've found out the hard way, simply looping through all of the coordinates, creating an individual marker for each, and adding it with the addOverlay function kills the browser. So that definitely has to be the wrong way to do this. I've read a bit about clustering and zoom-level bunching, and I understand that there's no point in rendering all of the markers, especially the ones in non-rendered parts of the map that won't be seen, except I have no idea how to get this to work.
How do I fix this? Please guys, I need some help here :(
There is a good comparison of various techniques here: http://www.svennerberg.com/2009/01/handling-large-amounts-of-markers-in-google-maps/
However, given your volume of markers, you definitely want a technique that only renders the markers that should be seen in the current view (assuming that number is modest - if not there are techniques in the link for doing sensible things)
If you really have more than 120,000 items, there is no way that any of the client-side clusterers or managers will work. You will need to handle the markers server-side.
There is a good discussion here with some options that may help you.
Update: I've posted this on SO before, but this tutorial describes a server-side clustering method in PHP. It's meant to be used with the Static Maps API, but I've built it so that it will return clustered markers whenever the view changes. It works pretty well, though there is a delay in transferring the markers whenever the view changes. Unfortunately I haven't tried it with more than 3,000 markers - I don't know how well it would handle 120,000. Good luck!
I've not done any work with Google maps specifically but many moons ago, I was involved in a project which managed a mobile workforce for a large Telco.
They had similar functionality in that they had maps which they could zoom in on for their allocated jobs (local to the machine rather than over the network), and we solved a problem which sounds very similar to yours. Points of interest on the maps were called landmarks and were indicated by small markers on the map called landmark pointers, which the worker could select to get a textual description.
At the minimum zoom, there would have been a plethora of landmark pointers, making the map useless. We made a command decision to limit the landmark pointers to a smaller number (400). In order to do that, the map was divided into a 20x20 matrix no matter what the zoom level, which gave us 400 matrix elements.
Then, if a landmark shared the same matrix element as another, the application combined them and generated a single landmark pointer with the descriptive text containing the text of all the landmarks in that matrix element.
That way there were never more than 400 landmark pointers. As the minion zoomed in, the landmark pointers were regenerated and landmarks could end up in different matrix elements - in that case, they were no longer combined with other landmarks.
Similarly, zooming out sometimes merged two or more landmarks into a single landmark pointer.
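A minimal Python sketch of that fixed-grid scheme (20x20 here, as in the description above; names are illustrative): each landmark is dropped into a grid cell computed from the current view bounds, and any cell holding more than one landmark becomes a single combined pointer.

```python
from collections import defaultdict

def grid_cluster(points, bounds, cells=20):
    """Group points into a cells x cells grid over the visible bounds.
    points: list of (lat, lng); bounds: ((south, west), (north, east)).
    Returns a dict mapping (row, col) -> list of points in that cell,
    so there can never be more than cells * cells pointers on screen."""
    (s, w), (n, e) = bounds
    cell_h = (n - s) / cells
    cell_w = (e - w) / cells
    grid = defaultdict(list)
    for lat, lng in points:
        row = min(int((lat - s) / cell_h), cells - 1)
        col = min(int((lng - w) / cell_w), cells - 1)
        grid[(row, col)].append((lat, lng))
    return grid
```

Zooming in shrinks the bounds, so the same landmarks fall into different cells and split apart again, exactly as described above.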
That sounds like what you're trying to achieve with "clustering or zoom level bunching" although, as I said, I have little experience with Google Maps itself so I'm not sure this is possible. But given Google's reputation, I suspect it is.
I suggest that you use a marker manager class such as this one along with your existing code. A marker manager class allows you to manage thousands of markers and optimizes memory usage. There is a variety of marker managers (there is not just one), so I suggest you Google a bit.
Here is a non-cluster solution if you want to display hundreds or even thousands of markers very quickly. You can use a combination of OverlayView and DocumentFragment.
http://nickjohnson.com/b/google-maps-v3-how-to-quickly-add-many-markers
If only there is something more powerful than JS for this...
Ok enough sarcasm : ).
Have you used the Flash Maps API? Kevin Macdonald has successfully used it to cluster not 120K markers but 1,000,000 markers. Check out the Million Marker Map:
http://www.spatialdatabox.com/million-marker-map/million-marker-map.html
Map responsiveness is pretty much un-affected in this solution. If you are interested you can contact him here: http://www.spatialdatabox.com/sdb-contact-sales.html
Try this one :
http://googlegeodevelopers.blogspot.com/2009/04/markerclusterer-solution-to-too-many.html
It's an old question and it already has many answers, but Stack Overflow also serves as a reference, so I hope this helps anyone who searches for the same problem.
There is a fairly simple solution: use an HTML5 canvas. Though it sounds strange, it's the fastest way to load up to 10,000 markers as well as labels, which I'm sure no browser could handle as normal markers. Not conventional markers, but lightweight ones.