Related to the docs at https://developers.google.com/maps/documentation/geocoding/intro — I am using this API request to figure out the full address data (especially lat/lon):
https://maps.googleapis.com/maps/api/geocode/xml?address=30449+hannover&sensor=false&language=en&key=YOURAPIKEY
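For reference, the same request built up in Python (YOUR_API_KEY is a placeholder, and I leave out the sensor parameter since the API no longer requires it):

```python
# A minimal sketch of the request above; only builds the URL, does not call the API.
from urllib.parse import urlencode

GEOCODE_XML = "https://maps.googleapis.com/maps/api/geocode/xml"

def build_geocode_url(address, language="en", key="YOUR_API_KEY"):
    """Build a Geocoding API request URL for the given address."""
    return GEOCODE_XML + "?" + urlencode(
        {"address": address, "language": language, "key": key}
    )

# the two requests compared in this question:
with_zip = build_geocode_url("30449 hannover")
without_zip = build_geocode_url("hannover")
```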
The language parameter is the important part: it's en for English.
Now let's look at the result: "Niedersachsen". This part is German.
What a pity. I thought Google possibly doesn't know the English name and so returns the default German one, since my request is about a German town. But then I changed the API request a little and removed the ZIP code (30449) from the address:
https://maps.googleapis.com/maps/api/geocode/xml?address=hannover&sensor=false&language=en&key=YOURAPIKEY
Now my result was CORRECT: "Lower Saxony", in English.
So obviously Google knows the name.
Am I doing something wrong? Do you have any ideas? I can't get this issue sorted...
Please advise.
Thanks!
P.S. I think it's somewhat related to this: Google Geocoding API returns wrong language in response, but not completely. I have also already opened a bug report on Google's side, just in case...
I'm having Geocoding issues as well, ever since the new "pay-as-you-go" API nonsense. I installed a new API key after activating a new credit-card-backed account with Google dev. That fixed things for a moment, but it seems like cache issues are breaking the geocoding, as well as the front-side UX with the map embed itself (markers don't show, limited/no functionality). Wish I had an answer for you, but for now, just commiserating...
I'm looking for a way to get a user's default language by country. For example, I have Windows in English, but I would still like to get my country's two-letter language code ("cs").
You can see an example of what I want in the source code of http://search.conduit.com/ (which uses Autocompleteplus). This is what I see:
window.language = "en-us";
window.countryCode = "cz";
window.suggestBaseUrl = "http://api.autocompleteplus.com/?q=UCM_SEARCH_TERM&l=cs&c=cz&callback=acp_new";
You can see the API URL contains "l=cs&c=cz". How did they get this information? I would like to have the same thing: I use the same Autocompleteplus method and just need a way to generate l=(user's true language)&c=(country code). Performance is important as well, since this powers autosuggestions for my website search.
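For what it's worth, the usual server-side approach is: take the language from the browser's Accept-Language request header and the country from an IP-to-country (GeoIP) database. How Autocompleteplus derives them exactly isn't documented here, so treat this as an assumption. A minimal sketch of the header half in Python (function name and defaults are my own):

```python
# Parse an Accept-Language header like "cs,en-US;q=0.8,en;q=0.6" into a list
# of language tags ordered by their q (quality) weight, highest first.
def parse_accept_language(header):
    tags = []
    for part in header.split(","):
        pieces = part.strip().split(";")
        tag = pieces[0].strip()
        q = 1.0  # per the HTTP spec, a missing q defaults to 1
        for p in pieces[1:]:
            p = p.strip()
            if p.startswith("q="):
                try:
                    q = float(p[2:])
                except ValueError:
                    q = 0.0
        if tag:
            tags.append((q, tag))
    tags.sort(key=lambda t: -t[0])
    return [tag for _, tag in tags]
```

The first element of the result is the user's preferred language; the country code would come from a GeoIP lookup on the client's IP address.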
This is Ed from AutoComplete+. Getting the user's country is typically done, when using our API, through server-side implementations. There are, however, some open APIs that can assist you. Regardless, you can use our autocomplete feed without the user's country. Feel free to contact us directly for further info at http://www.autocompleteplus.com
Thanks,
--ed
I'm thinking about a project where I need the kind of information described in the title. Does the Google Maps API provide something like this, or does anyone know how to get this information?
The project will be done in PHP, HTML, and JavaScript.
AFAIK Google doesn't provide this information via the API. The only thing I can think of is fetching the map image and then detecting the colour. A mapping of colour hex values to point types might give you what you need.
However, this may well break the Terms and Conditions, depending on what you're doing.
Is it possible to find a geolocation from a zip code using PHP?
For example, if I enter a zip code, I need to find the place name for that particular zip code.
Is that possible? If yes, kindly explain how.
Thanks in advance.
See the duplicate question on how to turn an address into coordinates using the Google Geocoding service.
A query in the format [zipcode], United States, e.g.
72116, United States
should always work for you.
Try maps.google.com to simulate the request.
The XML returned from the Google Service will contain the official place name, as well as other information like county and state names etc.
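As a sketch of that last step, the place name can be pulled out of the returned XML with a few lines of parsing (shown here in Python; the sample fragment below only mimics the shape of a real response, and a live lookup needs a valid API key):

```python
# Extract the first result's formatted_address from a Geocoding XML response.
import xml.etree.ElementTree as ET

def place_name_from_xml(xml_text):
    root = ET.fromstring(xml_text)
    node = root.find("./result/formatted_address")
    return node.text if node is not None else None

# a fragment mimicking the response for "72116, United States"
sample = (
    "<GeocodeResponse><status>OK</status><result>"
    "<formatted_address>North Little Rock, AR 72116, USA</formatted_address>"
    "</result></GeocodeResponse>"
)
name = place_name_from_xml(sample)  # "North Little Rock, AR 72116, USA"
```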
You can do this through JavaScript and PHP using Google's Maps API - check this question for an example of how this is used.
I don't think it can be done solely through PHP, however, because you'd need to access an API's data, and they normally (though not always) expose this through JavaScript.
You should check the API documentation.
You can access the Geocoding API solely using PHP; you don't even need an API key anymore.
http://code.google.com/apis/maps/articles/phpsqlgeocode.html
So if I understand this correctly, you want to know the city/state of the zip code you have entered?
Maybe you should look into the Access database here:
http://databases.about.com/od/access/a/zipcodedatabase.htm
And if you find it useful, just convert it into a MySQL table and use it as you please.
I'm still stuck on my problem of trying to parse articles from Wikipedia. Specifically, I wish to parse the infobox section of Wikipedia articles: my application has references to countries, and on each country page I would like to show the infobox from the corresponding Wikipedia article for that country. I'm using PHP here - I would greatly appreciate any code snippets or advice on what I should be doing.
Thanks again.
EDIT
Well, I have a DB table with names of countries, and a script that takes a country and shows its details. I would like to grab the infobox - the blue box with all the country details, images, etc. - exactly as it is on Wikipedia and show it on my page. I would like a really simple and easy way to do that - or a script that just downloads the infobox information to a local system which I could access myself later on. I mean, I'm open to ideas here - except that the end result I want is to see the infobox on my page - of course with a little "Content by Wikipedia" link at the bottom :)
EDIT
I think I found what I was looking for on http://infochimps.org - they have loads of datasets, in, I think, YAML format. I could use this information straight up as it is, but I would need a way to update it from Wikipedia now and then, although I believe infoboxes rarely change, especially for countries, unless some nation decides to change its capital city or so.
I'd use the Wikipedia (Wikimedia) API. You can get data back in JSON, XML, PHP's native serialization format, and others. You'll then still need to parse the returned information to extract and format the info you want, but the infobox start, stop, and information types are clear.
Run your query with rvsection=0, as this first section gets you the material before the first section break, including the infobox. Then you'll need to parse the infobox content, which shouldn't be too hard. See en.wikipedia.org/w/api.php for the formal Wikipedia API documentation, and www.mediawiki.org/wiki/API for the manual.
Run, for example, the query: http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=xmlfm&titles=fortran&rvsection=0
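That query can also be assembled programmatically; a small sketch (using format=json rather than xmlfm, since xmlfm is the pretty-printed debugging format meant for browsers):

```python
# Build the rvsection=0 query described above for a given article title.
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def lead_section_query(title):
    """Query URL for the wikitext of section 0, which contains the infobox."""
    return API + "?" + urlencode({
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "rvsection": 0,
        "format": "json",
        "titles": title,
    })

url = lead_section_query("Fortran")
```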
I suggest you use DBpedia instead, which has already done the work of turning the data in Wikipedia into usable, linkable, open forms.
It depends what route you want to go. Here are some possibilities:
Install MediaWiki with appropriate modifications. It is, after all, a PHP app designed precisely to parse wikitext...
Download the static HTML version, and parse out the parts you want.
Use the Wikipedia API with appropriate caching.
DO NOT just hit the latest version of the live page and redo the parsing every time your app wants the box. This is a huge waste of resources for both you and Wikimedia.
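The "appropriate caching" from the API option can be as small as a disk cache with a TTL. A sketch in Python; the path scheme, TTL, and function names are illustrative, and `fetch` stands in for whatever HTTP call you use:

```python
# Cache API responses on disk so Wikimedia isn't hit on every page view.
import json
import os
import time

def cached_fetch(url, cache_dir, fetch, ttl=7 * 24 * 3600):
    """Return fetch(url), reusing a disk copy younger than ttl seconds."""
    os.makedirs(cache_dir, exist_ok=True)
    # hash() is stable within one process; a real cache would use hashlib
    path = os.path.join(cache_dir, str(abs(hash(url))) + ".json")
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < ttl:
        with open(path) as f:
            return json.load(f)
    data = fetch(url)  # e.g. an API call returning decoded JSON
    with open(path, "w") as f:
        json.dump(data, f)
    return data
```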
There are a number of semantic data providers from which you can extract structured data instead of trying to parse it manually:
DBpedia - as already mentioned, provides a SPARQL endpoint which can be used for data queries. There are a number of libraries available for multiple platforms, including PHP.
Freebase - another Creative Commons data provider. The initial dataset is based on parsed Wikipedia data, but some information is taken from other sources. The dataset can be edited by anyone and, in contrast to Wikipedia, you can add your own data into your own namespace using a custom-defined schema. It uses its own query language called MQL, which is based on JSON. Data has WebID links back to the corresponding Wikipedia articles. Freebase also provides a number of downloadable data dumps, and has a number of client libraries, including PHP.
Geonames - a database of geographical locations. Has an API which provides country and region information for given coordinates, as well as nearby locations (e.g. city, railway station, etc.).
OpenStreetMap - a community-built map of the world. Has an API allowing you to query for objects by location and type.
Wikimapia API - another location service.
To load the parsed first section, simply add this parameter to the end of the API URL:
rvparse
Like this:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=xmlfm&titles=fortran&rvsection=0&rvparse
Then parse the HTML to get the infobox table (using a regex):
// fetch the parsed HTML of the lead section for "Niger"
$url = "http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=json&titles=Niger&rvsection=0&rvparse";
$data = json_decode(file_get_contents($url), true);
// the page is keyed by its numeric page id; current() grabs the first entry
$data = current($data['query']['pages']);
// greedy match from the first <table> to the last </table> in the section
$regex = '#<\s*?table\b[^>]*>(.*)</table\b[^>]*>#s';
$code = preg_match($regex, $data["revisions"][0]['*'], $matches);
echo($matches[0]);
If you want to parse all the articles in one pass, Wikipedia makes the complete article set available as XML database dumps:
http://en.wikipedia.org/wiki/Wikipedia_database
Otherwise you can screen-scrape individual articles, e.g.
To update this a bit: a lot of the data in Wikipedia infoboxes is now taken from Wikidata, which is a free database of structured information. See the data page for Germany for example, and https://www.wikidata.org/wiki/Wikidata:Data_access for information on how to access the data programmatically.
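As a rough illustration of what that access looks like: a Wikidata item's statements come back as JSON (e.g. from https://www.wikidata.org/wiki/Special:EntityData/Q183.json), keyed by property IDs such as P1082 (population). A sketch of reading one claim; the sample dict below only mimics the real structure:

```python
# Read the first statement's value for a given property from an entity dict.
def first_claim_value(entity, prop):
    claims = entity.get("claims", {}).get(prop, [])
    if not claims:
        return None
    return claims[0]["mainsnak"]["datavalue"]["value"]

# a stripped-down stand-in for a real Wikidata entity
sample_entity = {
    "claims": {
        "P1082": [  # population (a quantity value)
            {"mainsnak": {"datavalue": {"value": {"amount": "+83000000", "unit": "1"}}}}
        ]
    }
}
population = first_claim_value(sample_entity, "P1082")
```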
import re

import requests
from bs4 import BeautifulSoup

def extract_infobox(term):
    url = "https://en.wikipedia.org/wiki/" + term
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'lxml')
    tbl = soup.find("table", {"class": "infobox"})
    if not tbl:
        return {}
    list_of_table_rows = tbl.findAll('tr')
    info = {}
    for tr in list_of_table_rows:
        th = tr.find("th")
        td = tr.find("td")
        if th is not None and td is not None:
            innerText = ''
            for elem in td.recursiveChildGenerator():
                if isinstance(elem, str):
                    # remove references like [1], leaving empty brackets behind
                    clean = re.sub(r"([\[]).*?([\]])", r"\g<1>\g<2>", elem.strip())
                    # drop the empty brackets and add a space for word separation
                    innerText += clean.replace('[]', '') + ' '
                elif elem.name == 'br':
                    innerText += '\n'
            info[th.text] = innerText
    return info
I suggest performing a web request against Wikipedia. From there you will have the page, and you can simply parse or query out the data you need using a regex, a character crawl, or some other form you are familiar with. Essentially a screen scrape!
EDIT - I would add to this answer that you can use HtmlAgilityPack for those in C# land. For PHP it looks like SimpleHtmlDom is the equivalent. Having said that, it looks like Wikipedia has a more-than-adequate API. This question probably answers your needs best:
Is there a Wikipedia API?