Now that API v2 is gone, what is a simple way to get an RSS feed of a channel without the v3 API? I'm open to Yahoo Pipes or any workaround simpler than registering an application for the v3 API, if the target is a feed reader. I only need an RSS feed. It was publicly available until now and could stop working any minute (I think). So why not keep it accessible without an API key?
At the RSS Reader section of https://support.google.com/youtube/answer/6098135?hl=en there is an option to export your subscriptions to an OPML file. Then, looking at the contents of the OPML, you can extract the feeds; the structure of each feed is:
https://www.youtube.com/feeds/videos.xml?channel_id=XXXX
So you can generate new feeds from this structure if you know the channel id. These feeds do not get the "https://youtube.com/devicesupport" error, so I expect them to keep working.
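For example, here is a minimal sketch, assuming the export is saved as subscriptions.opml (the file name is hypothetical) and follows the usual OPML layout with xmlUrl attributes, that pulls every feed URL out with SimpleXML:

<?php
// Minimal sketch: extract all feed URLs from the exported OPML file.
// 'subscriptions.opml' is a hypothetical name; use whatever you saved the export as.
$opml = simplexml_load_file('subscriptions.opml');
$feeds = array();
foreach ($opml->xpath('//outline[@xmlUrl]') as $outline) {
    $feeds[] = (string) $outline['xmlUrl']; // e.g. https://www.youtube.com/feeds/videos.xml?channel_id=XXXX
}
print_r($feeds);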
You can get the feeds like this:
https://www.youtube.com/feeds/videos.xml?channel_id=CHANNELID
https://www.youtube.com/feeds/videos.xml?user=USERNAME
https://www.youtube.com/feeds/videos.xml?playlist_id=YOUR_YOUTUBE_PLAYLIST_NUMBER
However, the JSON format that used to be supported (with the additional parameter &alt=JSON) is no longer available.
Additionally, you can request an API key for public access to your YouTube videos from your developer console and get YouTube videos and playlists in JSON format like this:
- Get Channels:
https://www.googleapis.com/youtube/v3/channels?part=snippet%2CcontentDetails&forUsername={YOUR_USER_NAME}&key={YOUR_API_KEY}
- Get Playlists:
https://www.googleapis.com/youtube/v3/playlists?part=snippet%2CcontentDetails&channelId={YOUR_CHANNEL_ID}&key={YOUR_API_KEY}
- Get Playlist Videos:
https://www.googleapis.com/youtube/v3/playlistItems?part=snippet%2CcontentDetails%2Cstatus&playlistId={YOUR_PLAYLIST_ID}&key={YOUR_API_KEY}
More information is available in the YouTube v3 docs.
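As an illustration, here is a minimal sketch (YOUR_API_KEY and YOUR_PLAYLIST_ID are placeholders) that fetches a playlist's videos with the "Get Playlist Videos" call above and prints their titles:

<?php
// Minimal sketch: list the titles of a playlist's videos via the v3 API.
// YOUR_API_KEY and YOUR_PLAYLIST_ID are placeholders you must fill in.
$url = 'https://www.googleapis.com/youtube/v3/playlistItems'
     . '?part=snippet&maxResults=50'
     . '&playlistId=' . urlencode('YOUR_PLAYLIST_ID')
     . '&key=' . urlencode('YOUR_API_KEY');
$data = json_decode(file_get_contents($url), true);
foreach ($data['items'] as $item) {
    echo $item['snippet']['title'], "\n";
}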
In YouTube, click on Subscriptions in the left-hand pane. This will open all your subscriptions in the center of the page. Scroll down and you'll find an "Export to RSS reader" button which produces an XML file of all your subscriptions. I've done this and added it to my preferred RSS reader, Feedly.
If you inspect any YouTube channel page, inside the <head> you will find an RSS <link> element like this:
<link rel="alternate"
type="application/rss+xml" title="RSS"
href="https://www.youtube.com/feeds/videos.xml?channel_id=UCn8zNIfYAQNdrFRrr8oibKw">
This should provide you with the data you need.
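If you'd rather pull that link out programmatically, here is a minimal sketch with DOMDocument, assuming the tag is present in the HTML that YouTube serves:

<?php
// Sketch: scrape the RSS <link> tag from a channel page.
$html = file_get_contents('https://www.youtube.com/user/scishow');
$doc = new DOMDocument();
@$doc->loadHTML($html); // suppress warnings about imperfect markup
foreach ($doc->getElementsByTagName('link') as $link) {
    if ($link->getAttribute('type') === 'application/rss+xml') {
        echo $link->getAttribute('href'), "\n";
    }
}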
Get the channel id by searching for the attribute data-channel-external-id in the source code of the YouTube channel page. (thanks to helq).
This code grabs all video titles and ids from the feed and dumps them into an array:
$channel_id = 'XXX'; // put the channel id here
$youtube = file_get_contents('https://www.youtube.com/feeds/videos.xml?channel_id='.$channel_id);
$xml = simplexml_load_string($youtube, "SimpleXMLElement", LIBXML_NOCDATA);

// Round-trip through JSON to turn the SimpleXMLElement into a plain array.
$json = json_encode($xml);
$youtube = json_decode($json, true);

$yt_vids = array();
$count = 0;
foreach ($youtube['entry'] as $k => $v) {
    // After json_encode(), SimpleXML exposes attributes under the '@attributes' key.
    $href = $v['link']['@attributes']['href'];
    $yt_vids[$count]['id'] = str_replace(array('https://www.youtube.com/watch?v=', 'http://www.youtube.com/watch?v='), '', $href);
    $yt_vids[$count]['title'] = $v['title'];
    $count++;
}
print_r($yt_vids);
I've created a small PHP script that scrapes a YouTube URL for video links and then outputs them as an Atom feed: https://gist.github.com/Skalman/801436d9693ff03bc4ce
URLs such as https://www.youtube.com/user/scishow/videos work.
Caveats:
The tool doesn't scrape dates
Playlists won't include more than 100 videos
Playlists include the "play all" link
Author is correctly set only for channels (e.g. not playlists)
Maybe Youtube will block you if you use this too much (but hopefully the limits are high enough)
Likely several more...
There is also RSS-Bridge, which can extract RSS feeds from many services such as Twitter, Google+, Flickr, YouTube, Identi.ca, etc.
source: https://github.com/sebsauvage/rss-bridge
demo server: https://bridge.suumitsu.eu/
Try using this URL:
https://www.youtube.com/feeds/videos.xml?user=USERNAME
Works fine for me.
From My Blog Post: http://tcodesblog.blogspot.com/search/label/howtofindyouryoutubechannelfeed
HOW TO FIND YOUR YOUTUBE CHANNEL FEED
In the old days (2009) it was easy, but nowadays (2012-present) it is much harder to find. Here is a quick way to find the feed for your YouTube channel. Remember to follow the steps in order!
First, find your channel id: you can do this by visiting your YouTube channel from within the Dashboard.
Copy your channel id and replace channelidgoeshere below with it: https://www.youtube.com/feeds/videos.xml?channel_id=channelidgoeshere
Copy your entire YouTube channel feed and create a simplified feed: you can do this by creating a shorter feed link in FeedBurner at http://www.feedburner.com/ (requires a Google account; free to use), which is also part of Google. Create a new feed (select "I'm A Podcaster!" so your videos appear in the feed and the feed stays compatible with readers such as Digg Reader, the Apple News and Podcasts apps on iPhone, Feedly, etc.), or edit an existing one by copying in your entire YouTube channel feed and then clicking Save Feed Details as normal.
Your YouTube channel feed now works, and your videos can be seen directly in your FeedBurner feed. As an example, mine is on YouTube at https://www.youtube.com/feeds/videos.xml?channel_id=UCvFR6YxwnYfLt_QqRFk_r3g and on FeedBurner at http://feeds.feedburner.com/youtube/warrenwoodhouse (text-only for now, since I still need to update mine to show the videos). You can change various settings in FeedBurner and do other things, so it's worth a try since it's free and easy to use. I highly recommend FeedBurner or another feed creation service; FeedBurner is your best bet since it also includes a cross-feed subscription mechanism (USM, Universal Subscription Mechanism), which means your feed can be read from any compatible device: a computer, a mobile phone (with the correct app installed), or an older web browser such as Internet Explorer, which supports Web Slices and RSS/Atom/XML feeds.
Your feed can also be opened in the Apple News and Podcasts apps on your iPhone, iPod Touch or iPad if you've set the settings correctly to USM. Once this is in effect, your feed can be viewed through different services and devices.
Your feed on FeedBurner also lets you create an email subscription, a Headline Animator (which shows a link to your latest post along with your subscriber count), chiclets and other cool stuff.
I hope this answer proves useful. If you want to see more coding practices by me, please feel free to check out my T-Codes website at http://warrenwoodhouse.webs.com/codes for lots more stuff.
I have created an example Yahoo Pipe here:
http://pipes.yahoo.com/pipes/pipe.info?_id=6eeff0110a81f2ab94e8472620770b11
You can run this pipe by pressing "Run Pipe" without filling in an API key, but you must provide your own API key and channel id (which can be obtained via the channels API) when you clone it. I wanted to automate fetching the channel id from a YouTube username, but that is not easy to do in Pipes.
I've made a batch script that creates an RSS feed of your new subscription videos. You don't need an API key. The script uses two external tools: YouTube-DL and Xidel.
Anyway, read the following thread, and go to post 98 to download the script:
http://code.google.com/p/gdata-issues/issues/detail?id=3946#c98
I hope someone ports this to PHP, Python, JavaScript, PowerShell or Bash.
I think there have been some changes in the YouTube response, so I made some changes to fetch a channel's RSS feed using cURL.
$channel_id = 'XXXXXXXX'; // put the channel id here

// Fetch the feed using cURL.
$url = 'https://www.youtube.com/feeds/videos.xml?channel_id='.$channel_id.'&orderby=published';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($ch);
curl_close($ch);

// Convert the XML into a plain array via JSON.
$response = simplexml_load_string($response);
$json = json_encode($response);
$youtube = json_decode($json, true);

$yt_vids = array();
$count = 0;
if (isset($youtube['entry'][0])) {
    // Several entries in the feed.
    foreach ($youtube['entry'] as $k => $v) {
        $yt_vids[$count]['id'] = str_replace('https://www.youtube.com/watch?v=', '', $v['link']['@attributes']['href']);
        $yt_vids[$count]['title'] = $v['title'];
        $count++;
    }
} else {
    // A single entry is not wrapped in a numeric array.
    $yt_vids[$count]['id'] = str_replace('https://www.youtube.com/watch?v=', '', $youtube['entry']['link']['@attributes']['href']);
    $yt_vids[$count]['title'] = $youtube['entry']['title'];
}
echo "<pre>";
print_r($yt_vids);
I used the code below to integrate a YouTube feed with the WordPress ACF plugin (custom field) and FancyBox:
<?php
$channel_id = get_field('youtube_chanel_id'); // ACF text field
if ($channel_id) { // if channel_id is not empty -- START
    $youtube = file_get_contents('https://www.youtube.com/feeds/videos.xml?channel_id=' . $channel_id);
    $xml = simplexml_load_string($youtube, "SimpleXMLElement", LIBXML_NOCDATA);
    $json = json_encode($xml);
    $youtube = json_decode($json, true);

    echo '<div class="col-md-12 youtube-videos-feed">';
    foreach ($youtube['entry'] as $k => $v) {
        $id = str_replace('yt:video:', '', $v['id']); // remove the "yt:video:" prefix from the id
        //$date = $v['updated']; // video updated date (disabled for now)
        $title = $v['title']; // video title
        echo '<a class="with-video" href="https://www.youtube.com/watch?v=', $id, '&autoplay=1&rel=0&controls=0&showinfo=0&modestbranding=0" data-fancybox="videos" data-caption="', $title, '" title="', $title, '" >
            <div class="col-md-3 main-image post-image img-fancy">
                <img src="https://img.youtube.com/vi/', $id, '/0.jpg" alt="', $title, '" >
            </div>
        </a>';
    }
    echo '</div>';
} // if channel_id is not empty -- END
?>
I found a Chrome extension named Youtube RSS-ify that injects an RSS icon on video, channel and navigation pages. It was just what I was looking for.
I would suggest using a good RSS parser. Many are available, but you can try http://simplepie.org/, one of the best I have used for my personal projects. It's pretty well documented, with some examples.
Usage example
Note: this example uses the CollegeHumor YouTube channel; you can get the channel id from the channel page itself.
<?php
include_once('../autoloader.php');

// Parse the feed.
$feed = new SimplePie();
$feed->set_feed_url('https://www.youtube.com/feeds/videos.xml?channel_id=UCPDXXXJj9nax0fr0Wfc048g');
$feed->enable_cache(false);
$feed->init();

$items = $feed->get_items();
foreach ($items as $item) {
    echo $item->get_title() . "\n";
}
var_dump($feed->get_item_quantity());
Easiest way to get the channel id:
Open Subscription Manager (left panel, down below subscriptions) and click on the desired user.
The url will be in the form:
https://www.youtube.com/channel/XXXXXXXXXXXXXXXXX
So the feed url should be:
https://www.youtube.com/feeds/videos.xml?channel_id=XXXXXXXXXXXXXXXXX
Note: it is better to use channel ids rather than user names, because user names may change.
I have 3 events:
https://www.facebook.com/491594114271958 (Mad Haus)
https://www.facebook.com/569226999799343 (Deuglish)
539504962802119/ (Piffle)
All are being fetched via the PHP SDK:
$config = array();
$config['appId'] = $appId;
$config['secret'] = $secret;
$config['fileUpload'] = false; // optional
$facebook = new Facebook($config);
$ev = $facebook->api('/'.$id."?fields=cover,description,location,name,owner,venue",'GET');
print_r($ev);
For some reason Mad Haus and Piffle do not return venue data but Deuglish does. All events return basic data such as title, description and start time. When I run them through the Graph API explorer the venue data is returned as expected but not through the PHP API, any ideas? I can not for the life of me see the difference with these 3 events.
Thanks,
Caroline
Well, if you don't want to wait for the Facebook developers to fix the bug, you can try the same by making a simple GET request (of course along with an Access Token) to Graph API using PHP cURL. Since you are able to retrieve the venues while using the Graph API explorer, I'm pretty sure you'll be able to get them by making a simple GET request using PHP cURL.
This, however, involves the overhead of parsing the received JSON object, and dealing with errors will be a little more difficult. But you can give it a shot.
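As a sketch of what that might look like (ACCESS_TOKEN is a placeholder, the event id and field list are taken from the question, and error handling is left out):

<?php
// Minimal sketch: fetch one event straight from the Graph API with cURL.
// ACCESS_TOKEN is a placeholder for a valid token.
$event_id = '569226999799343';
$url = 'https://graph.facebook.com/' . $event_id
     . '?fields=' . urlencode('cover,description,location,name,owner,venue')
     . '&access_token=ACCESS_TOKEN';

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($ch);
curl_close($ch);

$ev = json_decode($response, true); // parse the received JSON yourself
print_r($ev);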
So I'm trying to get details of a specific venue using PHP. Here's my code that attempts to use a GET request to the Foursquare API to return results and then process them as JSON and display the name, address and city:
$curlhandle = curl_init();
curl_setopt($curlhandle, CURLOPT_URL, "https://api.foursquare.com/v2/venues/4b522afaf964a5200b6d27e3");
curl_setopt($curlhandle, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($curlhandle);
curl_close($curlhandle);
$json = json_decode($response);
foreach ($json->groups[0]->venues as $result)
{
echo $result->name.' - '.$result->address.' '.$result->city."<p />";
}
What am I doing wrong? I'm completely new to PHP and the Foursquare API so it could be something glaringly obvious.
You don't need to authenticate using the OAuth flow to get venue information, but you do need to add your Client ID and Client Secret to the API call.
So, the URL should be something like:
https://api.foursquare.com/v2/venues/4b522afaf964a5200b6d27e3?client_id=CLIENT_ID&client_secret=CLIENT_SECRET
In JavaScript, the URL should be
`https://api.foursquare.com/v2/venues/${venue_id}?client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET}&v=20180323`
Note that I am using template literals. Don't forget v=20180323, because the Foursquare API no longer supports requests that do not pass in a version parameter. Of course, you can change the version number to stay up to date.
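In PHP, a minimal sketch of the same authenticated call (CLIENT_ID and CLIENT_SECRET are placeholders, and the venue id is the one from the question) could look like this:

<?php
// Minimal sketch: venue details with userless auth and a version parameter.
$url = 'https://api.foursquare.com/v2/venues/4b522afaf964a5200b6d27e3'
     . '?client_id=CLIENT_ID&client_secret=CLIENT_SECRET&v=20180323';
$json = json_decode(file_get_contents($url));

// The venue details endpoint wraps the result in response->venue.
$venue = $json->response->venue;
echo $venue->name . ' - ' . $venue->location->address . ' ' . $venue->location->city . "<p />";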
You need to authenticate your request; if you go to the URL, you get this:
{"meta":{"code":400,"errorType":"invalid_auth","errorDetail":"Missing access credentials. See https:\/\/developer.foursquare.com\/docs\/oauth.html for details."},"response":{}}
So I'd say you need to authenticate by following this: https://developer.foursquare.com/overview/auth.html
This worked for me (in Python):
import requests

# The self.* attributes come from the class this snippet lives in;
# substitute your own place id and credentials.
url = 'https://api.foursquare.com/v2/venues/{0}'.format(self.placeid)
params = dict(
    client_id=self.clientid,
    client_secret=self.clientsecret,
    v='20170801'
)
r = requests.get(url=url, params=params)
I have a text file with 5,000 Twitter users:
JRJSHEARD
KMM_1979
ELMOCHLOE
ANNIEMMERSON
PATLOCKLEY
LISSYNUMBER
CAL32INCHSCREEN
PRINGLEDUDE
CORESMUSIC
I have found this API http://api.twitter.com/1/users/show.xml?screen_name=JRJSHEARD which is really useful and just what I need.
How would I write a PHP function to loop through the user names in the text file and collect each bio, found between these tags (<description> </description>)?
Is this possible? Any help would be gratefully received.
If you want to harvest user info in bulk from Twitter, use users/lookup rather than users/show. The users/lookup API call returns 100 user objects at a time, and you can pass either user IDs or screen names when you make the call; however, you will need to authenticate using OAuth in order to use it.
I recommend using JSON, since it is a much more lightweight format than XML. You will typically transfer only about 1/3 to 1/2 as much data over the wire, and I find that (in my experience) Twitter times out less often when serving JSON.
http://api.twitter.com/1/users/lookup.json?screen_name=JRJSHEARD,KMM_1979,ELMOCHLOE
That's the direct API call, but if you're just starting out, I would recommend using a Twitter service implementation rather than trying to do all the heavy lifting yourself. I'm not a PHP person, but my PHP-using Twitter buddies recommend Zend: http://framework.zend.com/manual/en/zend.service.twitter.html
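To make the batching concrete, here is a minimal sketch that chunks the screen names from the text file into users/lookup calls of 100 each; the OAuth signing the call requires is omitted, so treat this as an outline rather than working code:

<?php
// Sketch: batch screen names into users/lookup requests (100 per call).
// The required OAuth request signing is omitted for brevity.
$users = file('users.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$bios = array();
foreach (array_chunk($users, 100) as $batch) {
    $url = 'http://api.twitter.com/1/users/lookup.json?screen_name=' . urlencode(implode(',', $batch));
    $data = json_decode(file_get_contents($url), true); // add OAuth credentials here
    foreach ($data as $user) {
        $bios[$user['screen_name']] = $user['description'];
    }
}
print_r($bios);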
$api = "http://api.twitter.com/1/users/show.xml?screen_name=";
$users = file("users.txt", FILE_IGNORE_NEW_LINES);
$i = 0;
foreach($users as $user){
$data = curl("$api$user");
preg_match("#<description>(.*?)</description>#is", $data, $matches);
$bio[$i]["user"] = $user;
$bio[$i]["description"] = $matches[1];
$i++;
}
function curl($url){
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_close($ch);
return curl_exec($ch);
}
Using the Twitter API (and OAuth), if I call for a user's followers (statuses/followers), only 99 results are returned.
Is there a way I can return 99, then call again starting at follower 100, and keep looping like this until the total number of followers has been returned?
Or just return ALL followers?
You need to specify the cursor parameter as described in the API documentation. E.g. specify cursor=-1 to request the first page, then use the next_cursor value returned in the first response:
http://twitter.com/statuses/followers/barackobama.xml?cursor=-1
http://twitter.com/statuses/followers/barackobama.xml?cursor=1300794057949944903
<?php
$trends_url = "http://api.twitter.com/1/statuses/followers/fawadghafoor.json";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $trends_url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$curlout = curl_exec($ch);
curl_close($ch);
$response = json_decode($curlout, true);
foreach ($response as $friends) {
    $thumb = $friends['profile_image_url'];
    $url = $friends['screen_name'];
    $name = $friends['name'];
?>
<a title="<?php echo $name; ?>" href="http://www.twitter.com/<?php echo $url; ?>"><img class="photo-img" src="<?php echo $thumb; ?>" border="0" alt="" width="40" /></a>
<?php } ?>
Be sure you're using the right call. followers/ids gives you 5000 at a time (but it's just a list of ids). This call, too, uses the cursor to let you step through pages of users. You get a zero back when you have them all.
The Twitter API restricts calls to the followers/ids method to 15 requests per 15 minutes. If you make more than this, the API will give you a "Rate limit exceeded" error message.
For more information on Twitter API rate limits, visit https://dev.twitter.com/docs/rate-limiting/1.1 and https://dev.twitter.com/docs/rate-limiting/1.1/limits
Twitter only allows a certain number of API requests per hour (and possibly per minute), so you may not be able to retrieve more than 99 followers in one request.
Though I asked this quite a while ago, I came back to building something quite similar recently (+ new programming skills).
I noticed the Twitter API has a method to get all of a user's follower (or following) IDs in one request. I found the best way was to array_chunk the IDs into batches of 100 (and only take the first 30 arrays, as I don't want to use up all of the user's API requests that hour; they might want to actually tweet!). Then there is a method that lets you get user info for up to 100 users at a time (from the view of the currently authenticated user), so I just loop over the batches (sleeping a bit in between), and then you've got 30,000 Twitter followers!
I would recommend doing this asynchronously in a queue system, because if you do it on the fly when the user requests a page on the site, it could be very slow and you might be prone to an HTTP timeout. Also, cache them like hell!
Sorry I didn't post any code, but hopefully this thought process will help someone :)
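For what it's worth, here is a rough sketch of that thought process (the endpoints are followers/ids and users/lookup; OAuth signing and paging beyond the first ids response are omitted, so this is an outline, not working code):

<?php
// Rough sketch: fetch follower ids, then hydrate them 100 at a time.
// OAuth request signing is required in practice and omitted here.
$ids = json_decode(file_get_contents('https://api.twitter.com/1/followers/ids.json?screen_name=SOME_USER&cursor=-1'), true);

$followers = array();
// Only take the first 30 batches so we don't burn the user's whole rate limit.
foreach (array_slice(array_chunk($ids['ids'], 100), 0, 30) as $batch) {
    $url = 'https://api.twitter.com/1/users/lookup.json?user_id=' . implode(',', $batch);
    foreach (json_decode(file_get_contents($url), true) as $user) {
        $followers[] = $user['screen_name'];
    }
    sleep(1); // be gentle between calls
}
print_r($followers);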
$cursor = -1;
$account_from = 'twitter_account';
$rows = array();
do
{
    $json = file_get_contents('http://api.twitter.com/1/statuses/followers/' . $account_from . '.json?cursor=' . $cursor);
    $accounts = json_decode($json);
    foreach ($accounts->users as $account)
    {
        // Collect one row per follower (e.g. as bind parameters for a prepared statement).
        $rows[] = array(
            ':twitter_id' => $account->id_str,
            ':account' => $account->screen_name,
            ':description' => $account->description,
        );
    }
    $cursor = $accounts->next_cursor;
}
while ($cursor > 0);