I'm using cURL to retrieve information from Wikipedia. So far I've been able to retrieve basic text information, but I'd really like to retrieve it as HTML.
Here is my code:
// Step 1: use Yahoo BOSS search, restricted to en.wikipedia.org, to find the matching article
$s = curl_init();
$url = 'http://boss.yahooapis.com/ysearch/web/v1/site:en.wikipedia.org+'.$article_name.'?appid=myID';
curl_setopt($s, CURLOPT_URL, $url);
curl_setopt($s, CURLOPT_HEADER, false);
curl_setopt($s, CURLOPT_RETURNTRANSFER, 1);
$rs = curl_exec($s);
$rs = Zend_Json::decode($rs);
$rs = $rs['ysearchresponse']['resultset_web'];
$rs = array_shift($rs); // first search result
$article = str_replace('http://en.wikipedia.org/wiki/', '', $rs['url']);

// Step 2: fetch the article's latest revision (raw wiki markup) from the MediaWiki API
$url = 'http://en.wikipedia.org/w/api.php?';
$url .= 'format=json';
$url .= sprintf('&action=query&titles=%s&rvprop=content&prop=revisions&redirects=1', $article);
curl_setopt($s, CURLOPT_URL, $url);
curl_setopt($s, CURLOPT_HEADER, false);
curl_setopt($s, CURLOPT_RETURNTRANSFER, 1);
$rs = curl_exec($s);
//curl_close( $s );
$rs = Zend_Json::decode($rs);
$rs = array_pop(array_pop(array_pop($rs))); // drill down through query->pages to the single page entry
$rs = array_shift($rs['revisions']);        // the one (latest) revision returned
$articleText = $rs['*'];                    // raw wiki markup, not HTML
However, the text retrieved this way isn't in a displayable state :( it's all raw wiki markup, in this kind of format:
'''Aix-les-Bains''' is a [[Communes of France|commune]] in the [[Savoie]] [[Departments of France|department]] in the [[Rhône-Alpes]] [[regions of France|region]] in southeastern [[France]].
It lies near the [[Lac du Bourget]], {{convert|9|km|mi|abbr=on}} by rail north of [[Chambéry]].
==History== ''Aix'' derives from [[Latin]] ''Aquae'' (literally, "waters"; ''cf'' [[Aix-la-Chapelle]] (Aachen) or [[Aix-en-Provence]]), and Aix was a bath during the [[Roman Empire]], even before it was renamed ''Aquae Gratianae'' to commemorate the [[Emperor Gratian]], who was assassinated not far away, in [[Lyon]], in [[383]]. Numerous Roman remains survive. [[Image:IMG 0109 Lake Promenade.jpg|thumb|left|Lac du Bourget Promenade]]
How do I get the HTML of the wikipedia article?
UPDATE: Thanks, but I'm kinda new to this, and right now I'm trying to run an XPath query [albeit for the first time] and can't seem to get any results. I actually need to know a couple of things here:
How do I request just a part of an article?
How do I get the HTML of the requested article?
I went through this URL on data mining from Wikipedia. It suggested making a second request to the Wikipedia API with the retrieved wikitext as a parameter, which should return the HTML, although that hasn't worked for me so far :( I don't want to just grab the whole article as a mess of HTML and dump it. Basically, what my application does is this: you have some locations and cities pinpointed on a map; you click on a city marker and it requests, via AJAX, details of the city to be shown in an adjacent div. I want to get that information from Wikipedia dynamically. I'll worry about dealing with articles that don't exist for a particular city later on; at this point I just need to make sure it's working.
Does anyone know of a nice working example that does what I'm looking for i.e. read and parse through selected portions of a wikipedia article.
According to the URL provided, I should POST the wikitext to the Wikipedia API for it to return parsed HTML. The issue is that if I POST the information, I get no response, just an error saying I'm denied access. However, if I include the wikitext as GET, it parses with no issue, but of course it fails when I have waaaaay too much text to parse.
Is this a problem with the Wikipedia API? I've been hacking at it for two days now with no luck at all :(
The simplest solution would probably be to grab the page itself (e.g. http://en.wikipedia.org/wiki/Combination ) and then extract the content of <div id="content">, potentially with an xpath query.
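For example, a rough sketch of that approach with DOMDocument and DOMXPath (the id="content" div is an assumption about Wikipedia's markup at the time you fetch the page, so check it against the actual HTML; saveHTML() with a node argument needs PHP 5.3.6+):
$html = file_get_contents('http://en.wikipedia.org/wiki/Combination');
$doc = new DOMDocument();
libxml_use_internal_errors(true); // Wikipedia's HTML is not always well-formed
$doc->loadHTML($html);
libxml_clear_errors();
$xpath = new DOMXPath($doc);
$nodes = $xpath->query('//div[@id="content"]');
if ($nodes->length > 0) {
    // saveHTML() with a node argument returns the markup of just that node
    echo $doc->saveHTML($nodes->item(0));
}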
There is a PEAR Wiki Filter that I have used and it does a very decent job.
Text Wiki
Phil
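If you go the Text_Wiki route, a minimal sketch would look something like this (untested; it assumes the Mediawiki parser and the Xhtml renderer are available in your PEAR install, and that $articleText holds the wiki markup you already retrieved):
require_once 'Text/Wiki.php';

$wiki = Text_Wiki::factory('Mediawiki');         // parse MediaWiki-style markup
$html = $wiki->transform($articleText, 'Xhtml'); // render it as (X)HTML
echo $html;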
Try looking at the printable version of the Wikipedia article in question.
In other words, change this line of your source code:
$url.=sprintf('&action=query&titles=%s&rvprop=content&prop=revisions&redirects=1', $article);
to something like:
$url.=sprintf('&action=query&titles=%s&printable=yes&redirects=1', $article);
Disclaimer: Have not tested, and this is just a guess at how your API might work.
As far as I understand it, the Wikipedia software converts the Wiki markup into HTML when the page is requested. So using your current method, you'll need to deal with the results.
A good place to start is the Mediawiki API. You can also use http://pear.php.net/package/Text_Wiki to format the results retrieved via cURL.
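In particular, the MediaWiki API's action=parse returns the rendered HTML directly, so you can skip converting the markup yourself. A minimal sketch along the lines of the code in the question (the User-Agent header and the section parameter are assumptions worth double-checking against the API docs):
$s = curl_init();
$url = 'http://en.wikipedia.org/w/api.php?action=parse&format=json&prop=text'
     . '&page=' . $article; // $article as derived in the question
     // append '&section=0' if you only want the intro section of the article
curl_setopt($s, CURLOPT_URL, $url);
curl_setopt($s, CURLOPT_RETURNTRANSFER, true);
// Wikipedia asks for a descriptive User-Agent; requests without one are sometimes refused
curl_setopt($s, CURLOPT_USERAGENT, 'MyCityInfoApp/1.0 (me@example.com)');
$rs = Zend_Json::decode(curl_exec($s));
curl_close($s);
$articleHtml = $rs['parse']['text']['*']; // rendered HTML of the article body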
Alright, so I started with something simple just to get familiar with what exactly I'm getting myself into; however, while tinkering I became lost.
Okay, so I am trying to get the contents from the following:
$details = json_decode(file_get_contents("https://beam.pro/api/v1/users/63662"));
what's the best way I can go about doing this?
Currently I can display the username portion using print $details->username; and the id portion using print $details->id; but after this I'm lost. How could I go about pulling the title, for example?
Here is what the title looks like currently in the API:
"name":"Thursday -- BR 2's [NA] w/ beam.pro/para",
Documentation is here
You would use the following:
echo $details->channel->name;
However, if you're more comfortable with arrays, you could do this:
$details = json_decode(file_get_contents("https://beam.pro/api/v1/users/63662"), true);
echo $details['channel']['name'];
Here is the object structure for future reference:
I'm trying to find a way to get latitude and longitude bounds from Google's RouteBoxer into PHP and then query MySQL within those limits. I'll then output the results to JSON or XML to use them with the Android Maps API v2. I found this http://luktek.com/Blog/2011-02-03-google-maps-routeboxer-in-php , but I think it only builds boxes between two points on a map, not boxes around the route itself, which makes it not accurate enough. Using JavaScript is not an option, since I can't use it with the Google Maps API or get the results from my database. Is there any way to accomplish this with some server-side code (preferably PHP, but any other language that works with MySQL can be used as well) that can compute the bounds, query MySQL by them and output the data to JSON or XML so that it can be parsed by Android?
I finally found a solution that I'm satisfied with.
I'm not going to paste every step because it would take like a thousand lines, but here it is in a nutshell:
1. Parse the "overview_polyline" field from the Google Directions API JSON (https://developers.google.com/maps/documentation/directions/#JSON).
2. Decode the polyline to latitude and longitude points with this:
http://unitstep.net/blog/2008/08/02/decoding-google-maps-encoded-polylines-using-php/
3. Download this: https://github.com/bazo/route-boxer. I piled all the code from the GeoTools PHP files into one file, but that's probably not necessary if you know how to use it :)
4. And here is an example of getting those boxes using those scripts:
...
$from = "(Startup point for example: "Turku,Finland")";
$to = "(Destination point fro example: "Porvoo,Finland")";
$json_string = file_get_contents("http://maps.googleapis.com/maps/api/directions/json?origin=$from&destination=$to&sensor=false");
$parsed_json = json_decode($json_string, true);
$polyline = $parsed_json['routes'][0]['overview_polyline']['points'];
$routepoints = decodePolylineToArray($polyline);
$collection = new LatLngCollection($routepoints);
$boxer = new RouteBoxer();
//calculate boxes with 10km distance from the line between points
$boxes = $boxer->box($collection, $distance = 10);
foreach($boxes as $row){
$southWestLtd = $row->southWest->latitude;
$southWestLng = $row->southWest->longitude;
$northEastLtd = $row->northEast->latitude;
$northEastLng = $row->northEast->longitude;
$query = "SELECT * FROM markers WHERE Latitude > $southWestLtd AND Latitude < $northEastLtd AND Longitude > $southEastLng AND Longitude < $norhtEastLng";
}
Run that query and it'll give you only the markers (or whatever you are querying) that are inside those boxes. If you need more detailed instructions, just leave a comment. I'm more than happy to help, since I spent many nights trying to find a reasonable solution to this.
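For the last step (handing the rows to Android as JSON), here's a rough sketch with PDO and json_encode; the table and column names are taken from the query above, while everything else (DSN, credentials) is a placeholder:
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'pass');
$stmt = $pdo->prepare('SELECT * FROM markers WHERE Latitude > :swLat AND Latitude < :neLat AND Longitude > :swLng AND Longitude < :neLng');

$markers = array();
foreach ($boxes as $row) {
    $stmt->execute(array(
        ':swLat' => $row->southWest->latitude,
        ':neLat' => $row->northEast->latitude,
        ':swLng' => $row->southWest->longitude,
        ':neLng' => $row->northEast->longitude,
    ));
    // boxes can overlap, so you may want to de-duplicate rows by their primary key
    $markers = array_merge($markers, $stmt->fetchAll(PDO::FETCH_ASSOC));
}

header('Content-Type: application/json');
echo json_encode($markers); // parse this on the Android side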
Lots of questions here, let me try to tackle some:
Getting lat and long bounds from routeboxer:
Once you have your 'boxes', you can loop through and get these
var northeast = boxes[i].getNorthEast();
var southwest = boxes[i].getSouthWest();
var lat_north = northeast.lat();
var long_east = northeast.lng();
var lat_south = southwest.lat();
var long_west = southwest.lng();
> that this only does boxes between two points on a map, not boxes around the route itself which makes it not accurate enough
I do not know about this person's implementation of RouteBoxer, but the original Google RouteBoxer for the Maps API v3 creates boxes around the entire route. That is what is documented, and I can say with confidence how it works (I've used it):
http://google-maps-utility-library-v3.googlecode.com/svn/trunk/routeboxer/docs/examples.html
> Using javascript is not an option, since I can't use it with google maps api or get the results from my database.
This point makes little sense to me. You DEFINITELY CAN use the Google Maps API with JavaScript; there is a JavaScript library (version 3 already) specifically FOR THIS PURPOSE. If this is a browser application, I would strongly recommend sticking with that, at least initially.
As for getting results from the DB, you can use AJAX calls: basically, just call your PHP script via AJAX, have it return JSON data, then use JavaScript to parse/populate this data onto the user's screen. Alternatively, it doesn't even have to be AJAX: you can use RouteBoxer, then upon some action submit the bounds to your DB and have the next page return with the results rendered. But AJAX really is the more elegant approach (this case is almost an 'ideal use case' for it).
BTW: I do not see the logic of putting RouteBoxer server-side as this programmer has done. He rightly points out that very long, complex routes will take time, but my thought would be: whether the user is waiting for his PC to crunch the data or waiting for a remote PC (the server) to crunch the data, all he knows and cares about is that he is waiting, NOT THE BACKEND METHODOLOGY! (The exception is when one has access to a series of powerful servers and can rewrite the server-side code to take advantage of parallel runs, etc.)
So let's say I have a Google News feed, like this: https://news.google.com/news/feeds?pz=1&cf=all&ned=no_no&hl=no&q=%22something%22&output=atom&num=1
Grabbing the title, author and link would be easy, but how would I go about getting, say, the first 200 characters of the content? It's full of HTML, and mixed in with the title and author as well.
I could use strip_tags on it, but it would still be a mess.
Any way to make Google return a ['description'] maybe?
Or are there perhaps any other good news feeds that give me the content in a way that's easier to manage?
[edit]
Update on how I ended up doing it:
$news = @simplexml_load_string(file_get_contents('https://news.google.com/news/feeds?pz=1&cf=all&ned=no_no&hl=no&q=%22molde+fotballklubb%22+OR+%22tornekrattet%22+OR+%22mfk%22+OR+%22oddmund+bjerkeset%22+-%22moss%22&output=atom&num=1'), 'SimpleXMLElement', LIBXML_NOCDATA);
$data = get_object_vars($news->{'entry'});
// the content blob mixes title, author and body, separated by <font size="-1"> tags
$test = explode('<font size="-1">', $data['content']);
$link = get_object_vars($data['link']);
$return['title'] = strip_tags($test[0]);
$return['author'] = strip_tags($test[1]);
$return['description'] = strip_tags($test[2]);
$return['link'] = $link['#attributes']['href'];
It is still not working properly, but that's because the feed gives me the content in different ways all the time. Sometimes the content of the news article itself will just be metadata, like the authors and image descriptions.
And breaking it up by HTML tags, when the HTML changes from time to time, causes some problems. But I can't figure out any other way of doing it with this feed.
You could try loading the HTML into a DOMDocument instance and extracting the parts you need, or use a wrapper for it like Goutte, which makes it a lot easier to extract the portions you need.
http://php.net/manual/en/class.domdocument.php
https://github.com/fabpot/Goutte
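For instance, a rough sketch with DOMDocument alone, assuming $data['content'] holds the HTML snippet from the feed entry (untested against Google's current markup):
$doc = new DOMDocument();
libxml_use_internal_errors(true); // feed HTML is rarely well-formed
$doc->loadHTML('<?xml encoding="UTF-8">' . $data['content']); // the XML prolog trick forces UTF-8
libxml_clear_errors();

// textContent returns the text of the whole fragment with the tags stripped
$plain = trim($doc->textContent);

// first 200 characters as the description (mb_substr keeps multibyte characters intact)
$return['description'] = mb_substr($plain, 0, 200);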
This is the first time I have come in contact with JSON, and I literally have no idea how to parse it with PHP. I know that functions to decode JSON exist in PHP, but I am unsure how to retrieve specific values defined in the JSON. Here's the JSON for my app:
http://itunes.apple.com/search?term=enoda&entity=software
I require a few values to be retrieved, including the App Icon (artworkUrl100), Price (price) and Version (version).
The things I am having issues with are putting the URL of the App Icon into an actual HTML image tag, and simply retrieving the values defined in the JSON for the Price and Version.
Any help/solutions to this would be fantastic.
Thanks,
Jack
Yeah, I have something similar for my app review website; here is a bit of code:
$context = stream_context_create(array('http' => array('header' => 'Connection: close')));
// pass the context so the Connection: close header is actually sent
$content = file_get_contents("http://ax.phobos.apple.com.edgesuite.net/WebObjects/MZStoreServices.woa/wa/wsLookup?id=$appid&country=de", false, $context);
$content = json_decode($content);
$array = $content->results["0"];
$version = $array->version;
$artistname = $array->artistName;
$artistid = $array->artistId;
That's what I used to get information from the App Store; maybe you can change the link and some names and it will work for you.
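Adapted to the search URL from the question, something along these lines should work; results[0] assumes the first hit is the app you want, and the field names (artworkUrl100, price, version) come straight from the JSON iTunes returns:
$json = file_get_contents('http://itunes.apple.com/search?term=enoda&entity=software');
$data = json_decode($json);

if (!empty($data->results)) {
    $app = $data->results[0]; // first matching app

    // drop the icon URL straight into an img tag
    echo '<img src="' . htmlspecialchars($app->artworkUrl100) . '" alt="App icon" />';
    echo '<p>Price: ' . htmlspecialchars($app->price) . ' / Version: ' . htmlspecialchars($app->version) . '</p>';
}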
In short, I am pulling the feed from my Blogger blog using the Zend GData API in PHP. I need to get the URL that links to that post on Blogger. What is the sequence of functions I need to call to get that URL?
Right now I am pulling the data using:
$query = new Zend_Gdata_Query('http://www.blogger.com/feeds/MYID/posts/default');
$query->setParam('max-results', "1");
$feed = $gdClient->getFeed($query);
$newestPost = $feed->entry[0];
I cannot for the life of me figure out where to go from here to get the URL. I can successfully get the post title using $newestPost->getTitle(), and I can get the body using $newestPost->getContent()->getText(). I have tried a lot of function calls, even ones in the documentation, and most of them error out. I have printed out the entire object to look through it, and I can find the data I want (so I know it is there), but the object is too complex to just look at and see how to get to that data.
If anyone can help me, or at least point me to a good explanation of how that object is organized and how to get to each sub-object within it, that would be greatly appreciated.
EDIT: Never mind I figured it out.
You are almost there; really, all you need to do once you have your feed entry is access the link element inside. I like pretty URLs, so I went with the alternate link rather than the self link in the Atom feed.
$link = $entry->link[4]->href;
where $entry is the entry that you are setting from the feed.
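If hard-coding index 4 feels fragile (the order of the link elements can vary between entries), here's a sketch that looks the link up by its rel attribute instead; it assumes the link objects expose rel the same way href is accessed above:
$postUrl = null;
foreach ($entry->link as $link) {
    // 'alternate' is the public blog URL, 'self' points back to the Atom entry itself
    if ($link->rel == 'alternate') {
        $postUrl = $link->href;
        break;
    }
}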
The solution is:
$query = new Zend_Gdata_Query('http://www.blogger.com/feeds/MyID/posts/default');
$query->setParam('max-results', "1");
$feed = $gdClient->getFeed($query);
$newestPost = $feed->entry[0];
$body = $newestPost->getContent()->getText();
$body now contains the post contents of the latest post (or entry[0]) from the feed. This is just the contents of the body of the post, not the title or any other data or formatting.