Well, what I'm trying to do is quite simple. I am retrieving tweets as shown below:
$options = 'q='.urlencode($hash_tag);
$options .= '&page=15';
$options .= '&rpp=100';
$options .= '&result_type=recent';
$url = 'https://search.twitter.com/search.atom?'.$options ;
$ch = curl_init($url);
curl_setopt ($ch, CURLOPT_RETURNTRANSFER, TRUE);
$xml = curl_exec ($ch);
curl_close ($ch);
$affected = 0;
$twelement = new SimpleXMLElement($xml);
foreach ($twelement->entry as $entry) {
$text = trim($entry->title);
$author = trim($entry->author->name);
$time = strtotime($entry->published);
$id = $entry->id;
echo '<hr>';
echo "Yazan : ".$author;
echo "</br>";
echo "Tarih : ".date('Ymd H:i:s',$time);
echo "</br>";
echo "Tweet : ".$text;
echo "</br>";
}
and as you can check at this link: linkToTrial, I can receive tweets. But they are far too old for me! I want to receive the most recent tweets, at least those from the last 5 minutes. Here it says:
This sounds like something you can do on your end, as created_at is one of the fields returned in the result set. Just do your query, and only use the ones that are within the last 5 seconds.
but when you check my example, you will see that I'm not even receiving the latest tweets. Where am I going wrong?
Any answer will be appreciated. Thanks for your responses.
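For reference, the filter that quote describes would be roughly the following inside my loop above, using 5 minutes instead of 5 seconds:
// Inside the foreach loop: skip entries published more than 5 minutes ago.
if ($time < time() - 5 * 60) {
    continue;
}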
You're using a deprecated API (search.twitter.com) that will cease functioning on May 7, 2013 -- you'll want to move to the v1.1 Search API -- see https://dev.twitter.com/docs/api/1.1/get/search/tweet for docs.
It looks like the specific reason you're getting older results with this query is that you're starting on page 15 -- the end of the result set. The most recent tweets will be at the beginning of the result set -- page 1.
In API v1.1, the concept of paging no longer exists for the Search API. Instead, you navigate through the result set using since_id and max_id, details here: https://dev.twitter.com/docs/working-with-timelines
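For illustration, a minimal sketch of a v1.1 request that only fetches tweets newer than the last one you've processed might look like this. Note that v1.1 requires OAuth, which is stubbed out here as a hypothetical buildOAuthHeader() helper; the endpoint and the since_id / result_type parameters are the ones documented at the links above.
<?php
// Sketch only: v1.1 search using since_id so each request returns only tweets
// newer than the last one processed. buildOAuthHeader() is a hypothetical
// helper that produces the OAuth 1.0a Authorization header v1.1 requires.
$since_id = 0; // highest tweet ID seen so far, e.g. loaded from your database

$url = 'https://api.twitter.com/1.1/search/tweets.json?' . http_build_query(array(
    'q'           => '#myhashtag',
    'count'       => 100,
    'result_type' => 'recent',
    'since_id'    => $since_id,
));

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authorization: ' . buildOAuthHeader($url)));
$response = json_decode(curl_exec($ch), true);
curl_close($ch);

foreach ($response['statuses'] as $tweet) {
    echo $tweet['user']['screen_name'] . ': ' . $tweet['text'] . '<br/>';
}

// Statuses come newest first, so the first ID is the one to store for next time.
if (!empty($response['statuses'])) {
    $since_id = $response['statuses'][0]['id_str'];
}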
I'm new to PHP and tried to get a JSON object from the Twitch API to retrieve one of its values and output it.
I need to get the information from this link: https://api.twitch.tv/kraken/users/USERNAME/follows/channels/CHANNELSNAME
Plus, I need to be able to modify the USERNAME and CHANNELSNAME parts of the URL. I want it to be an API that returns how long user XY has been following channel XY, and it will be called using Nightbot's $customapi function.
The value I need from the JSON is "created_at".
Since we were able to clear out the errors, here's the final PHP file that works, in case anyone encounters similar errors:
<?php
$url = "https://api.twitch.tv/kraken/users/" . $_GET['username'] . "/follows/channels/" . $_GET['channel'];
$result = file_get_contents($url);
$result = json_decode($result, true);
echo $result["created_at"];
?>
You have a typo in your code on the first line and you're not storing the result of your json_decode anywhere.
<?php
$url = "https://api.twitch.tv/kraken/users/" . $_GET['username'] . "/follows/channels/" . $_GET['channel'];
$result = file_get_contents($url);
$result = json_decode($result, true);
echo $result["created_at"];
You have to call the page this way: page.php?username=yeroise&channel=ceratia in order to output the created_at value for this user and this channel.
In your code you're using 2 different ways to get the content of the page, and you only need one (either file_get_contents or cURL). I chose file_get_contents here, as the other method adds complexity for no reason in this case.
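If you want, a slightly hardened version of the same approach could check and URL-encode the query parameters before building the URL, roughly like this:
<?php
// Same approach as above, with the query parameters checked and URL-encoded
// before the request is made.
if (!isset($_GET['username'], $_GET['channel'])) {
    die('Missing username or channel parameter');
}

$url = "https://api.twitch.tv/kraken/users/" . urlencode($_GET['username'])
     . "/follows/channels/" . urlencode($_GET['channel']);

$result = json_decode(file_get_contents($url), true);
echo isset($result['created_at']) ? $result['created_at'] : 'Not following';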
Currently, I am trying to use the Twitch API to grab stats such as current viewers, title, and more. I am running into an issue with file_get_contents: my requests seem to be delayed or not coming in as quickly as I refresh, i.e. I think the results may be cached.
For example, here is my old code:
$twitch = json_decode(curl_get_file_contents('https://api.twitch.tv/kraken/channels/'.$twitch_channel), true);
$display_name = $twitch['display_name'];
$game = $twitch['game'];
$status = $twitch['status'];
$url = $twitch['url'];
$avatar = $twitch['logo'];
$views = $twitch['views'];
$followers = $twitch['followers'];
The main issue with this is that it didn't seem to update every time I refreshed, so I looked into using cURL for better results; plus, I heard it's much quicker in load time!
Here is my current cURL code:
$requesturl='https://api.twitch.tv/kraken/channels/' . $twitch_username;
$ch=curl_init($requesturl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$cexecute=curl_exec($ch);
curl_close($ch);
$twitch = json_decode($cexecute,true);
$display_name = $twitch['display_name'];
$game = $twitch['game'];
$status = $twitch['status'];
$url = $twitch['url'];
$avatar = $twitch['logo'];
$views = $twitch['views'];
$followers = $twitch['followers'];
Your log is showing PHP notices; you don't have any errors. I changed your code a little to test it and it's working, so you're probably just not printing your vars.
Check your code with print_r.
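For example, dumping the decoded array makes it easy to see whether the keys you are reading actually exist:
<?php
// Dump the decoded response so you can see which keys it actually contains.
echo '<pre>';
print_r($twitch);
echo '</pre>';

// Then print the individual fields.
echo $twitch['display_name'] . ' is playing ' . $twitch['game'];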
So ultimately I figured out what the issue was: XAMPP's cURL isn't working, for whatever reason. It's set up properly in php.ini, and I have the two .DLLs in my System32, so I don't know why this isn't working.
I'm making a website where I'd like the user to be able to start typing in a band name (for example, "Rad") and have Discogs API display 10 most similar suggestions to them (for example, "Radical Face", "Radiohead", etc). These suggestions could be sorted either alphabetically or, ideally, by popularity.
The problem is that I don't know how to make such a request to the Discogs API. Here's the code I'm working with now, which retrieves the content of http://api.discogs.com/releases/1 and parses it.
Any insight would be appreciated. Thank you.
<?php
$url = "http://api.discogs.com/releases/1"; // add the resource info to the url. Ex. releases/1
//initialize the session
$ch = curl_init();
//Set the User-Agent Identifier
curl_setopt($ch, CURLOPT_USERAGENT, 'SiteName/0.1 +http://your-site-here.com');
//Set the URL of the page or file to download.
curl_setopt($ch, CURLOPT_URL, $url);
//Ask cURL to return the contents in a variable instead of simply echoing them
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
//Execute the curl session
$output = curl_exec($ch);
//close the session
curl_close ($ch);
function textParser($text, $css_block_name){
$end_pattern = '], "';
switch($css_block_name){
# Add your pattern here to grab any specific block of text
case 'description';
$end_pattern = '", "';
break;
}
# Name of the block to find
$needle = "\"{$css_block_name}\":";
# Find start position to grab text
$start_position = stripos($text, $needle) + strlen($needle);
$text_portion = substr($text, $start_position, stripos($text, $end_pattern, $start_position) - $start_position + 1);
$text_portion = str_ireplace("[", "", $text_portion);
$text_portion = str_ireplace("]", "", $text_portion);
return $text_portion;
}
$blockStyle = textParser($output, 'styles');
echo $blockStyle. '<br/>';
$blockDescription = textParser($output, 'description');
echo $blockDescription. '<br/>';
?>
With the Discogs API you can easily execute a search. I think you have already viewed the documentation: https://www.discogs.com/developers/#page:database,header:database-search
There you can even specify that you only want to search for artists. When you retrieve the results, you must either sort them alphabetically yourself or rely on the order of the results. As far as I can see from the documentation, that order already reflects some kind of popularity ranking by Discogs, and it is the same implementation as the search integrated into the website.
You should keep in mind that the result set can be very large, so sorting alphabetically wouldn't be the best idea, as you would have to retrieve all result pages. Instead, increase the per_page parameter to its maximum of 100 items per page.
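A rough sketch of such a search request could look like this. Note that the database search endpoint requires authentication; YOUR_TOKEN below is a placeholder for a personal access token.
<?php
// Sketch: artist search against the Discogs database search endpoint.
// YOUR_TOKEN is a placeholder; the search endpoint requires authentication
// (a personal access token or consumer key/secret).
$query = 'Rad';
$url = 'https://api.discogs.com/database/search?' . http_build_query(array(
    'q'        => $query,
    'type'     => 'artist',
    'per_page' => 10,
    'page'     => 1,
));

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_USERAGENT, 'SiteName/0.1 +http://your-site-here.com');
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authorization: Discogs token=YOUR_TOKEN'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$results = json_decode(curl_exec($ch), true);
curl_close($ch);

// Print the suggestions in the order Discogs returns them.
foreach ($results['results'] as $artist) {
    echo $artist['title'] . '<br/>';
}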
I'm developing a Twitter App and have a problem I cannot resolve. Could you help me please?
The app is for a promotion for a brand. We need to count every tweet using a hashtag and give the author of tweet #50000 a prize. How can we take that data from the Twitter API and identify tweet #50000? Thanks for your help!
We use PHP and MySQL.
I would start by looking into Phirehose, which will allow you to obtain the tweets. You can also use the Ruby Twitter gem, which is fairly well documented and seems to be easy to use if you are comfortable with Ruby.
Here is PHP source code that gets the tweet count for a hashtag (#) on Twitter:
<?php
global $total, $hashtag;
//$hashtag = '#supportvisitbogor2011';
$hashtag = '#australialovesjustin';
$total = 0;
function getTweets($hash_tag, $page) {
global $total, $hashtag;
$url = 'http://search.twitter.com/search.json?q='.urlencode($hash_tag).'&';
$url .= 'page='.$page;
$ch = curl_init($url);
curl_setopt ($ch, CURLOPT_RETURNTRANSFER, TRUE);
$json = curl_exec ($ch);
curl_close ($ch);
//echo "<pre>";
//$json_decode = json_decode($json);
//print_r($json_decode->results);
$json_decode = json_decode($json);
$total += count($json_decode->results);
if($json_decode->next_page){
$temp = explode("&",$json_decode->next_page);
$p = explode("=",$temp[0]);
getTweets($hashtag,$p[1]);
}
}
getTweets($hashtag,1);
echo $total;
?>
Thanks..
I was looking this up last night. You can request the URL, e.g. http://search.twitter.com/search.json?q=%23hashtag
(Here's the docs page: http://dev.twitter.com/doc/get/search)
Then, on say a 5-minute cron script, keep a record of the last tweet ID you got and pass it to the search URL's since_id parameter, while also keeping a count of how many tweets you have counted, optionally storing each tweet in a table for reference. That's my 2 cents.
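Roughly, such a cron script could look like the sketch below. The last seen ID and the running total are kept in a small JSON file purely for illustration; a database table would work just as well.
<?php
// Sketch of a cron-driven counter using since_id against the search API.
// The last seen ID and running total are kept in a JSON file for illustration.
$state_file = 'tweet_count.json';
$state = file_exists($state_file)
    ? json_decode(file_get_contents($state_file), true)
    : array('since_id' => 0, 'count' => 0);

$url = 'http://search.twitter.com/search.json?'
     . 'q=' . urlencode('#somehashtag')
     . '&rpp=100&since_id=' . $state['since_id'];

$json = json_decode(file_get_contents($url), true);

if (!empty($json['results'])) {
    $state['count'] += count($json['results']);
    $state['since_id'] = $json['results'][0]['id_str']; // newest tweet comes first
}

file_put_contents($state_file, json_encode($state));
echo 'Tweets counted so far: ' . $state['count'];
If more than one page of new tweets can appear between runs, you would also have to follow next_page, as the recursive example above does.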
I'm building an event in PHP based on the current count of tweets that contain a particular hashtag.
On the main page I have a counter that says: "tweets so far: xxx". I used this php script:
global $total, $hashtag;
$hashtag = '#somehashtag';
$total = 0;
function getTweets($hash_tag, $page) {
global $total, $hashtag;
$url = 'http://search.twitter.com/search.json?q='.urlencode($hash_tag).'&';
$url .= 'page='.$page;
$ch = curl_init($url);
curl_setopt ($ch, CURLOPT_RETURNTRANSFER, TRUE);
$json = curl_exec ($ch);
curl_close ($ch);
$json_decode = json_decode($json);
$total += count($json_decode->results);
if($json_decode->next_page){
$temp = explode("&",$json_decode->next_page);
$p = explode("=",$temp[0]);
getTweets($hashtag,$p[1]);
}
}
getTweets($hashtag,1);
Next, I save the result in my database:
$updateCount = "UPDATE tweets SET currentcount = '$total'";
$doQuery = mysql_query($updateCount, $con) or die(mysql_error());
This script runs from a cron job that fires every 2 minutes.
It works fine, but this script returns only 1500 tweets, since Twitter doesn't allow any more.
How can I still keep track of the current tweet count? I know there is no way of going above the Twitter API's maximum, but maybe through timestamps or database checks?
Thanks in advance!
The way I would do it:
Save the last tweet ID you got in the database along with the current count.
On the next run, get the tweets from that tweet onwards, add the number of tweets to the count, and update the last-tweet entry with the new last tweet ID.
Lather, rinse, repeat.
One problem with the approach: if there are more than 1500 tweets in less than 2 minutes, some will not be counted.
To get the tweets from your last tweet on, you can use the since_id parameter:
since_id: returns tweets with status ids greater than the given id.
Twitter Search API
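A minimal sketch of the database side, assuming your tweets table gets an extra lastid column (that column name is just an example), could look like this:
<?php
// Sketch: read the stored count and last seen ID, fetch only newer tweets,
// then write both values back. The "lastid" column is an assumed addition
// to the existing "tweets" table.
$row = mysql_fetch_assoc(mysql_query("SELECT currentcount, lastid FROM tweets", $con));

$url = 'http://search.twitter.com/search.json?q=' . urlencode('#somehashtag')
     . '&rpp=100&since_id=' . $row['lastid'];
$json = json_decode(file_get_contents($url));

if (!empty($json->results)) {
    $newCount  = $row['currentcount'] + count($json->results);
    $newLastId = $json->results[0]->id_str; // newest tweet comes first
    mysql_query("UPDATE tweets SET currentcount = '$newCount', lastid = '$newLastId'", $con)
        or die(mysql_error());
}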