I would like to add a video to a playlist using GData. I have no problem creating the playlist, but I can't manage to add a video to it.
Here's what I do:
$playlist = $yt->newPlaylistListEntry();
$playlist->summary = $yt->newDescription()->setText("test");
$playlist->title = $yt->newTitle()->setText("test2");
$postLocation = 'http://gdata.youtube.com/feeds/api/users/default/playlists';
$yt->insertEntry($playlist, $postLocation);
$feedUrl = $playlist->getPlaylistVideoFeedUrl();
$videoEntryToAdd = $yt->getVideoEntry(..given id here..);
$newPlaylistListEntry = $yt->newPlaylistListEntry($videoEntryToAdd->getDOM());
$yt->insertEntry($newPlaylistListEntry, $feedUrl);
And I get the following error:
Notice: Trying to get property of non-object in C:...\library\Zend\Gdata\YouTube\PlaylistListEntry.php on line 296
Which is caused by this code:
$feedUrl = $playlist->getPlaylistVideoFeedUrl();
var_dump() shows that $feedUrl is NULL, and that $playlist is a Zend_Gdata_YouTube_PlaylistListEntry object, so I can't understand why it says "property of non-object".
It seems like some kind of bug in the API, so I've made a little workaround. It may look ugly, but I had no other ideas.
function grab_dump($var)
{
    ob_start();
    var_dump($var);
    return ob_get_clean();
}

function getPlayListLink($playlist) {
    $test = grab_dump($playlist);
    $test = strstr($test, "http://gdata.youtube.com/feeds/api/playlists/");
    return strstr($test, "' countHint='0'", TRUE);
}

function addVideosToPlaylist($videos_arr, $playlistEntry, $yt) {
    $feedUrl = getPlayListLink($playlistEntry);
    foreach ($videos_arr as $video)
    {
        $videoEntryToAdd = $yt->getVideoEntry($video);
        $newPlaylistListEntry = $yt->newPlaylistListEntry($videoEntryToAdd->getDOM());
        $yt->insertEntry($newPlaylistListEntry, $feedUrl);
    }
}
And simply call it like this:
addVideosToPlaylist($vids_id, $playlist, $yt);
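For what it's worth, a possibly cleaner alternative to the var_dump string parsing is to let insertEntry() hydrate a typed entry and read the feed URL from the returned object. This is only a minimal sketch under the assumption that the returned Zend_Gdata_YouTube_PlaylistListEntry carries the feed link after insertion; if it still comes back empty, re-fetching the user's playlists via getPlaylistListFeed() is another option.

// Hypothetical alternative to the workaround above: ask insertEntry() to
// return a typed PlaylistListEntry and read the video feed URL from it.
$postLocation = 'http://gdata.youtube.com/feeds/api/users/default/playlists';
$createdPlaylist = $yt->insertEntry($playlist, $postLocation, 'Zend_Gdata_YouTube_PlaylistListEntry');

// Assumption: the returned entry exposes the playlist video feed link.
$feedUrl = $createdPlaylist->getPlaylistVideoFeedUrl();

if ($feedUrl === null) {
    // Fallback: reload the user's playlist feed and pick the matching entry by title.
    foreach ($yt->getPlaylistListFeed('default') as $entry) {
        if ($entry->getTitle()->getText() === 'test2') {
            $feedUrl = $entry->getPlaylistVideoFeedUrl();
            break;
        }
    }
}

foreach ($vids_id as $videoId) {
    $videoEntryToAdd = $yt->getVideoEntry($videoId);
    $newPlaylistListEntry = $yt->newPlaylistListEntry($videoEntryToAdd->getDOM());
    $yt->insertEntry($newPlaylistListEntry, $feedUrl);
}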
I have the following url:
https://blockchain.info/multiaddr?active=1AT4ES3ee1N6iBzzbdK8xvcAV3CBTRKcbS|1FHcYth4LRJMwNx2y8NR5DH7sYCiVzXs3Y&n=1
I want to access the final_balance from the output of the url.
I have the following code:
$value = file_get_contents($url);
$FinalBalance = $value["final_balance"];
var_dump($FinalBalance);
Error PHP Warning: Illegal string offset 'final_balance'
I also tried the following code:
$value = file_get_contents($url);
$json = json_decode($value);
var_dump($json);
$FinalBalance = $json["final_balance"];
var_dump($Final_Balance);
Error PHP Fatal error: Uncaught Error: Cannot use object of type stdClass as array
You're nearly there. Here is the solution you're after; please have a look.
$url="https://blockchain.info/multiaddr?active=1AT4ES3ee1N6iBzzbdK8xvcAV3CBTRKcbS|1FHcYth4LRJMwNx2y8NR5DH7sYCiVzXs3Y&n=1";
$value = file_get_contents($url);
$FinalBalance = $value;
$data=json_decode($FinalBalance);
echo $data->wallet->final_balance;
echo $data->addresses[0]->final_balance;
echo $data->addresses[1]->final_balance;
exit;
You are accessing a nested structure, so you have to use the syntax that matches its type: -> for objects, [] for arrays.
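If you prefer the array syntax from your first attempt, json_decode() can also return associative arrays instead of objects. A small sketch, with the field names taken from the response shown above:

$value = file_get_contents($url);
$data = json_decode($value, true);   // true => associative arrays instead of stdClass

$walletBalance = $data['wallet']['final_balance'];
$firstAddressBalance = $data['addresses'][0]['final_balance'];
var_dump($walletBalance, $firstAddressBalance);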
Getting the Error Message - Notice: Trying to get property of non-object
I'm trying to experiment/learn with the YELP API but I'm getting stuck with a simple error message I can't seem to figure out. I'm following this: https://github.com/Yelp/yelp-api/tree/master/v2/php
function query_api($term, $location) {
    $response = json_decode(search($term, $location));
    $business_id = $response->businesses[0]->id;

    print sprintf(
        "%d businesses found, querying business info for the top result \"%s\"\n\n",
        count($response->businesses),
        $business_id
    );

    $response = get_business($business_id);

    print sprintf("Result for business \"%s\" found:\n", $business_id);
    print "$response\n";
}
Calling the function
$longopts = array(
    "term::",
    "location::",
);
$options = getopt("", $longopts);
$term = $options['term'] ?: '';
$location = $options['location'] ?: '';
query_api($term, $location);
This notice indicates that $response is not an object, so json_decode() most likely failed and returned NULL, meaning there is no businesses property to read.
Use var_dump($response) to see what the variable actually contains.
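As a quick debugging sketch (search() is the helper from the Yelp sample and is assumed to return the raw JSON string; json_last_error_msg() needs PHP 5.5+):

$raw = search($term, $location);
var_dump($raw);                     // inspect the raw API response first

$response = json_decode($raw);
if ($response === null) {
    // The JSON could not be decoded (e.g. an auth error page was returned).
    die('json_decode failed: ' . json_last_error_msg());
}

if (empty($response->businesses)) {
    // Valid JSON, but no results (or an error object) came back.
    die('No businesses in response: ' . print_r($response, true));
}

$business_id = $response->businesses[0]->id;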
My PHP project uses the reddit JSON API to grab the title of the current page's submission.
Right now I am running some code every time the page is loaded, and I'm running into some problems, even though there is no real API limit.
I would like to store the title of the submission locally somehow. What is the best way to do this? The site is running on AppFog.
This is my current code:
<?php
/* settings */
$url="http://".$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI'];
$reddit_url = 'http://www.reddit.com/api/info.{format}?url='.$url;
$format = 'json'; //use XML if you'd like...JSON FTW!
$title = '';
/* action */
$content = get_url(str_replace('{format}',$format,$reddit_url)); //again, can be xml or json
if ($content) {
    if ($format == 'json') {
        $json = json_decode($content, true);
        foreach ($json['data']['children'] as $child) { // we want all children for this example
            $title = $child['data']['title'];
        }
    }
}
/* output */
/* utility function: go get it! */
function get_url($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 1);
    $content = curl_exec($ch);
    curl_close($ch);
    return $content;
}
?>
Thanks!
Introduction
Here is a modified version of your code:
$url = "http://stackoverflow.com/";
$loader = new Loader();
$loader->parse($url);
printf("<h4>New List : %d</h4>", count($loader));
printf("<ul>");
foreach ($loader as $content) {
    printf("<li>%s</li>", $content['title']);
}
printf("</ul>");
Output
New List : 7
- New podcast from Joel Spolsky and Jeff Atwood.
- Good site for example code/ Pyhton
- stackoverflow.com has clearly the best Web code ever conceived in the history of the Internet and reddit should better start copying it.
- A reddit-like, OpenID using website for programmers
- Great developer site. Get your questions answered and by someone who knows.
- Stack Overflow launched into public
- Stack Overflow, a programming Q & A site. & Reddit could learn a lot from their interface!
The Problem
I see a couple of things you want to achieve here, namely:
I would like to store the title of the submission locally somehow
Right now I am doing running some code every time the page is loaded
From what I understand, what you need is a simple cached copy of your data so that you don't have to load the URL every time.
Simple Solution
A simple cache system you can use is Memcache.
Example A
$url = "http://stackoverflow.com/";
// Start cache
$m = new Memcache();
$m->addserver("localhost");
$cache = $m->get(sha1($url));
if ($cache) {
// Use cache copy
$loader = $cache;
printf("<h2>Cache List: %d</h2>", count($loader));
} else {
// Start a new Loader
$loader = new Loader();
$loader->parse($url);
printf("<h2>New List : %d</h2>", count($loader));
$m->set(sha1($url), $loader);
}
// Oupput all listing
printf("<ul>");
foreach ( $loader as $content ) {
printf("<li>%s</li>", $content['title']);
}
printf("</ul>");
Example B
You can also key the cache on the response's Date header, so that a new copy is only saved when that timestamp changes:

$headers = get_headers(sprintf("http://www.reddit.com/api/info.json?url=%s", $url), true);
$time = strtotime($headers['Date']); // timestamp from the response's Date header
$cache = $m->get($time);

if ($cache) {
    $loader = $cache;
}
Since the class implements JsonSerializable, you can JSON-encode the result and also store it in a database like MongoDB or MySQL:
$data = json_encode($loader);
// Save to DB
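For instance, a minimal sketch of persisting that JSON with PDO and MySQL, purely for illustration (the reddit_cache table, its columns, and the DSN/credentials are assumptions, not part of the original answer):

// Hypothetical table: reddit_cache(url_hash CHAR(40) PRIMARY KEY, payload TEXT, fetched_at DATETIME)
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$stmt = $pdo->prepare(
    'REPLACE INTO reddit_cache (url_hash, payload, fetched_at) VALUES (?, ?, NOW())'
);
$stmt->execute(array(sha1($url), $data));

// Reading it back later:
$stmt = $pdo->prepare('SELECT payload FROM reddit_cache WHERE url_hash = ?');
$stmt->execute(array(sha1($url)));
$cached = json_decode($stmt->fetchColumn(), true);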
Class Used
class Loader implements IteratorAggregate, Countable, JsonSerializable {
    private $request = "http://www.reddit.com/api/info.json?url=%s";
    private $data = array();
    private $total;

    function parse($url) {
        $content = json_decode($this->getContent(sprintf($this->request, $url)), true);
        $this->data = array_map(function ($v) {
            return $v['data'];
        }, $content['data']['children']);
        $this->total = count($this->data);
    }

    public function getIterator() {
        return new ArrayIterator($this->data);
    }

    public function count() {
        return $this->total;
    }

    public function jsonSerialize() {
        return $this->data;
    }

    function getContent($url) {
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 1);
        $content = curl_exec($ch);
        curl_close($ch);
        return $content;
    }
}
I'm not sure what your question is exactly, but the first thing that pops out is the following:
foreach ($json['data']['children'] as $child) { // we want all children for this example
    $title = $child['data']['title'];
}
Are you sure you want to overwrite $title? In effect, that will only hold the last $child title.
Now, to your question. I assume you're looking for some kind of mechanism to cache the contents of the requested URL so you don't have to re-issue the request every time, am I right? I don't have any experience with AppFog, only with orchestra.io, but I believe they have the same restrictions regarding writing to files, as in you can only write to temporary files.
My suggestion would be to cache the (processed) response in either:
APC shared memory with a short TTL
temporary files
database
You could use a hash of the URL + arguments as the lookup key. Doing this check inside get_url() would mean you wouldn't need to change any other part of your code, and it would only take a few lines.
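A minimal sketch of that idea using the APC option (apc_fetch()/apc_store() from the APC extension; the 300-second TTL and key prefix are arbitrary choices):

function get_url($url) {
    $key = 'reddit_' . sha1($url);        // hash of URL (+ any arguments)

    // Return the cached copy if APC still has it.
    $cached = apc_fetch($key, $success);
    if ($success) {
        return $cached;
    }

    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 1);
    $content = curl_exec($ch);
    curl_close($ch);

    // Cache for 5 minutes (short TTL, tweak as needed).
    if ($content !== false) {
        apc_store($key, $content, 300);
    }

    return $content;
}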
After this:
if ($format == 'json') {
    $json = json_decode($content, true);
    foreach ($json['data']['children'] as $child) { // we want all children for this example
        $title = $child['data']['title'];
    }
}
Then store the title in a JSON file and dump it into a local folder in your website path:
$storeTitle = array('title' => $title);

$fp = fopen('../pathToJsonFile/title.json', 'w');
fwrite($fp, json_encode($storeTitle));
fclose($fp);
Then next time you can always read the JSON file back, decode it, and extract the title into a variable for use.
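For example, a small sketch of reading it back (same hypothetical path as above):

$stored = json_decode(file_get_contents('../pathToJsonFile/title.json'), true);
$title = $stored['title'];   // title saved on a previous request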
I usually just store the data as-is in a flat file, like so:
<?php
define('TEMP_DIR', 'temp/');
define('TEMP_AGE', 3600);
function getinfo($url) {
    $temp = TEMP_DIR . urlencode($url) . '.json';

    if (!file_exists($temp) OR time() - filemtime($temp) > TEMP_AGE) {
        $info = "http://www.reddit.com/api/info.json?url=$url";
        $json = file_get_contents($info);
        file_put_contents($temp, $json);
    } else {
        $json = file_get_contents($temp);
    }

    $json = json_decode($json, true);

    $titles = array();
    foreach ($json['data']['children'] as $child) {
        $titles[] = $child['data']['title'];
    }
    return $titles;
}
$test = getinfo('http://imgur.com/');
print_r($test);
PS.
I use file_get_contents() to get the JSON data; you might have your own reasons to use curl.
Also, I don't check for the format, because you clearly prefer JSON.
I'm using the following code to turn user's IP into latitude/longitude information using the hostip web service:
//get user's location
$ip=$_SERVER['REMOTE_ADDR'];
function get_location($ip) {
    $content = file_get_contents('http://api.hostip.info/?ip='.$ip);
    if ($content != FALSE) {
        $xml = new SimpleXmlElement($content);
        $coordinates = $xml->children('gml', TRUE)->featureMember->children('', TRUE)->Hostip->ipLocation->children('gml', TRUE)->pointProperty->Point->coordinates;
        $longlat = explode(',', $coordinates);
        $location['longitude'] = $longlat[0];
        $location['latitude'] = $longlat[1];
        $location['citystate'] = '==>'.$xml->children('gml', TRUE)->featureMember->children('', TRUE)->Hostip->children('gml', TRUE)->name;
        $location['country'] = '==>'.$xml->children('gml', TRUE)->featureMember->children('', TRUE)->Hostip->countryName;
        return $location;
    }
    else return false;
}
$data = get_location($ip);
$center_long=$data['latitude'];
$center_lat=$data['longitude'];
This works fine for me, using $center_long and $center_lat the google map on the page is centered around my city, but I have a friend in Thailand who tested it from there, and he got this error:
Warning: get_location() [function.get-location]: Node no longer exists in /home/bedbugs/registry/index.php on line 21
So I'm completely confused by this: how could he be getting an error if I don't? I tried googling it and it has something to do with parsing XML data, but the parsing process is the same for me and him. Note that line 21 is the one that starts with '$coordinates ='.
You need to check whether the service actually returns an <ipLocation> element. You're doing:
$xml->children('gml', TRUE)->featureMember->children('', TRUE)->Hostip->ipLocation
->children('gml', TRUE)->pointProperty->Point->coordinates
but the XML output for my IP is:
<HostipLookupResultSet version="1.0.1" xsi:noNamespaceSchemaLocation="http://www.hostip.info/api/hostip-1.0.1.xsd">
  <gml:description>This is the Hostip Lookup Service</gml:description>
  <gml:name>hostip</gml:name>
  <gml:boundedBy>
    <gml:Null>inapplicable</gml:Null>
  </gml:boundedBy>
  <gml:featureMember>
    <Hostip>
      <ip>...</ip>
      <gml:name>(Unknown City?)</gml:name>
      <countryName>(Unknown Country?)</countryName>
      <countryAbbrev>XX</countryAbbrev>
      <!-- Co-ordinates are unavailable -->
    </Hostip>
  </gml:featureMember>
</HostipLookupResultSet>
The last part ->children('gml', TRUE)->pointProperty->Point->coordinates gives the error because it has no children (for some IPs).
You can add a basic check to see if the <ipLocation> node exists, like this (assuming the service always returns at least the <Hostip> node):
function get_location($ip) {
    $content = file_get_contents('http://api.hostip.info/?ip='.$ip);
    if ($content === FALSE) return false;

    $location = array('latitude' => 'unknown', 'longitude' => 'unknown');

    $xml = new SimpleXmlElement($content);
    $hostIpNode = $xml->children('gml', TRUE)->featureMember->children('', TRUE)->Hostip;

    if ($hostIpNode->ipLocation) {
        $coordinates = $hostIpNode->ipLocation->children('gml', TRUE)->pointProperty->Point->coordinates;
        $longlat = explode(',', $coordinates);
        $location['longitude'] = $longlat[0];
        $location['latitude'] = $longlat[1];
    }

    $location['citystate'] = '==>'.$hostIpNode->children('gml', TRUE)->name;
    $location['country'] = '==>'.$hostIpNode->countryName;
    return $location;
}
I wrote a class Link which has a method shortTolong(). It should return the real URL for a shortened URL by reading the 'Location' response header. I tested it and it works OK.
Here is the code:
public function shortTolong()
{
    $urlMatch = array();

    $ch = curl_init();
    $options = array(
        CURLOPT_URL => $this->getUrl(),
        CURLOPT_HEADER => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => false,
        CURLOPT_NOBODY => true
    );
    curl_setopt_array($ch, $options);
    $server_output = curl_exec($ch);

    preg_match_all(LINK, $server_output, $urlMatch, PREG_SET_ORDER);

    if ($urlMatch)
    {
        foreach ($urlMatch as $set)
        {
            $extracted_url = $set[2].'://'.$set[3];
        }
        return $extracted_url;
    }
    else
    {
        return $this->getUrl();
    }
}
The problem starts when I try to use this method in another file, which uses FeedParser to get feed entries that contain the short URLs I need to analyze. For some reason I get the short URL as a result instead of the long one. Here is the code:
foreach ($parser->getItems() as $item)
{
    $idpreg = '/\d+/';
    preg_match_all($idpreg, $item['ID'], $statusid);
    $retweetid = ($statusid[0][1]);
    $datetime = $item['PUBLISHED'];
    $user = $item['AUTHOR']['NAME'];

    preg_match_all(LINK, $item['TITLE'], $linkMatch);
    $final = $linkMatch[0][0];
    //if($linkMatch[0][0])

    echo '<p>';
    $link = new Link($final);
    echo $link->getUrl();
    echo '<br>';
    echo $link->shortTolong();
    echo '<br>';
    echo $user;
    echo '<br>';
    echo $retweetid;
    echo '</p>';
}
For some reason I get the same result for getUrl() and shortTolong(), and I know for certain this is wrong.
Any ideas why this is happening?
Thanks
Edit - I added an error notice to the method with curl_error().
I get this error message: "Protocol http not supported or disabled in libcurl"
As I said, I tested this method standalone in the same environment (no changes) and it works fine, so I suspect it has something to do with FeedParser using curl too....
I think you should trim() the URL, and that should resolve the issue.
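That matches the libcurl error: a leading space or newline before "http://..." makes curl see an unknown protocol. A minimal sketch of the idea (this assumes Link simply stores whatever string it is constructed with):

// The libcurl error usually means the URL string has leading whitespace or a
// newline in front of "http://", so curl can't recognize the protocol.
// Trimming at the call site is enough (no change to the Link class needed):
$link = new Link(trim($final));
echo $link->shortTolong();

// Or, defensively inside shortTolong(), before curl_setopt_array():
// CURLOPT_URL => trim($this->getUrl()),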