Probably like many people, my site has been affected by Twitter deprecating basic authentication, so I was looking at implementing OAuth. But all I want to do is pull the last couple of tweets from my account - I don't need to post anything, it's just read-only access to the user timeline. I've seen a couple of posts showing how to do this easily with JavaScript, so I'm thinking it might be similarly straightforward with PHP (i.e. not requiring OAuth)? One reason for using PHP instead of JavaScript is that I need to check when the rate limit is about to be exceeded, and then cache the last couple of tweets for the required amount of time.
If OAuth is the best solution, I'll get on with that - grateful for any suggestions though!
I wrote an article showing how to do this:
http://philsturgeon.co.uk/news/2009/07/How-to-Create-a-Twitter-feed-with-full-syntax-support
but the basics are even easier. It all boils down to:
$tweets = json_decode(
file_get_contents('http://twitter.com/statuses/user_timeline/philsturgeon.json?count=10')
);
Enjoy.
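If you also need the caching and rate-limit check mentioned in the question, a minimal sketch along these lines should work on top of that snippet. The cache file path, the five-minute lifetime and the use of the old rate_limit_status.json call are my own assumptions, not part of the answer above:
<?php
$cacheFile = '/tmp/tweets.json';   // hypothetical cache location
$cacheTime = 300;                  // keep the cached copy for five minutes

$useCache = file_exists($cacheFile)
    && (time() - filemtime($cacheFile)) < $cacheTime;

if (!$useCache) {
    // On the old v1 API, rate_limit_status reported the unauthenticated
    // calls remaining for your IP address.
    $status = json_decode(
        file_get_contents('http://twitter.com/account/rate_limit_status.json')
    );

    if ($status && $status->remaining_hits > 5) {
        $json = file_get_contents(
            'http://twitter.com/statuses/user_timeline/philsturgeon.json?count=10'
        );
        if ($json !== false) {
            file_put_contents($cacheFile, $json);   // refresh the cache
        }
    }
    // If we are nearly out of calls (or the fetch failed), fall back to
    // whatever is already in the cache file.
}

$tweets = json_decode(file_get_contents($cacheFile));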
So I searched here and there, yet I couldn't find anything except the outdated FQL that's going to get limited or removed (not quite sure). I'm building an app and I want to make it post to, let's say, all of a user's friends' walls, but I don't want to use a for loop because that will eat the host's CPU like a mad dog. My question is: can you suggest a method?
What I currently have:
Long lived tokens
JQuery + PHP login
Let's clarify a couple of things here:
Facebook Query Language (FQL) is not outdated and as far as I know, there are no plans to deprecate it!
Posting to a friend's wall is going to be removed in February 2013
Facebook always recommends using user-initiated sharing models instead of automating the process.
Read about the Batch Requests concept here:
https://developers.facebook.com/docs/reference/api/batch/
This will enable you to make multiple Graph API calls at once.
However, it is strongly discouraged to make 1000 calls with it, and it probably won't work at all because you will easily hit Facebook's call timeout.
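To make the idea concrete, here is a rough sketch of what a small batch call could look like from PHP; the access token, friend IDs and message are placeholders, and this is only meant for a handful of requests, not a thousand:
<?php
// Sketch of a Graph API batch request: several wall posts bundled into
// one HTTP call. Token, IDs and message below are placeholders.
$accessToken = 'USER_ACCESS_TOKEN';
$friendIds   = array('111111', '222222', '333333');   // a small, hand-picked list

$batch = array();
foreach ($friendIds as $id) {
    $batch[] = array(
        'method'       => 'POST',
        'relative_url' => $id . '/feed',
        'body'         => http_build_query(array('message' => 'Hello from my app')),
    );
}

$ch = curl_init('https://graph.facebook.com/');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'access_token' => $accessToken,
    'batch'        => json_encode($batch),
)));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$responses = json_decode(curl_exec($ch), true);   // one response per batched request
curl_close($ch);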
FQL would be a better method in this case because you can do it in one query (with start and limit).
I am trying to build a small, useful application with Twitter. I will publish it as an open source project once I am done. I am trying to decide what is the best way to do the following:
I want to get the latest 200 tweets from Washington, for example, and see the most important thing these 200 tweets share. For example, if 20 tweets have shared the same link, this is probably an important story in Washington. Or if 50 tweets mention the same specific subject, that means it is important and I could get information about it.
What is the best way to do that? And is there a better way to get this information without fetching the latest 200 tweets (other than trends)?
If you feel this is not clear enough, please ask some questions and I will clear it up.
Thank you all for the help.
I don't think there is going to be any "custom" trending available, so you are going to have to parse out the links from the search results yourself.
You would use the Search API:
http://search.twitter.com/search.atom?geocode=40.757929%2C-73.985506%2C25km
After that, it should be pretty trivial to maintain a list of trends and links over the past 24 hours.
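As a rough sketch of that idea, you could pull the JSON flavour of the same search call and count how often each link turns up; the geocode, the rpp count and the URL-matching regex here are my own assumptions, not something prescribed by the API:
<?php
// Sketch: pull geocoded search results and count how often each link appears.
$feed = json_decode(file_get_contents(
    'http://search.twitter.com/search.json?geocode=40.757929%2C-73.985506%2C25km&rpp=100'
));

$linkCounts = array();
foreach ($feed->results as $tweet) {
    // Crude URL extraction; a real app would also resolve shortened links.
    if (preg_match_all('#https?://\S+#', $tweet->text, $matches)) {
        foreach ($matches[0] as $url) {
            $linkCounts[$url] = isset($linkCounts[$url]) ? $linkCounts[$url] + 1 : 1;
        }
    }
}

arsort($linkCounts);          // most shared links first
print_r(array_slice($linkCounts, 0, 10, true));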
I would suggest you use a PHP Twitter library that already does the things you describe.
Please have a look at this question to find a library that fits your needs: https://stackoverflow.com/questions/422879/best-twitter-php-library
Disclaimer: I have no Twitter API experience nor have I used Twitter until today.
I've been given the task of creating a 'tweeting contest' - if anyone has Twitter API experience and/or has done this in the past, I would appreciate any useful tips that you may have.
So the basic rules are that in order for a user to enter the contest, said user must follow the contest's Twitter account and must retweet with a specific message, such as 'just entered a contest for http://foo.com/contest'.
Questions:
To get the entrants, I have to parse the RSS feed of the contest. http://twitter.com/statuses/user_timeline/21586418.rss seems to only list the last few posts, so I would probably have to interact with the Twitter API in order to get all the messages. Can someone recommend documentation or a page that covers this?
I'm not exactly sure if I should store the actual users in a local XML file or rely on querying the Twitter API. If I store them, I would have a cached local copy of the users... a database would be overkill, so if I were to store them it would be better off in an XML file, right?
Related to #1, should I parse for the exact message that the user has to tweet, e.g. "just entered a contest", as an exact string when I parse through the feed of all the tweets? Or is there some sort of tagging system I can use?
Related to #1, I would have to determine whether the user is a follower or not. I can't determine that just by parsing an entry/tweet, so would I have to query the user's ID and check the people he/she follows?
You could search for the URL, but the best approach would be to use a hashtag:
just entered #supercoolcontest for http://foo.com/contest
You can search for incidences of #supercoolcontest which contain the required contest URL or whatever other keywords you might want. This will ensure users don't have to be text-precise when retweeting, and also gives people a way to talk about the contest in a general way that is trackable.
You can pull all tweets with a hashtag by using the search API:
http://search.twitter.com/search.json?q=%23supercoolcontest
This is probably the most efficient approach, since you are guaranteed to only pull the tweets you're interested in, instead of n tweets from n users, only a tiny fraction of which has anything to do with you.
Every time you scrape that API feed (every n minutes), insert new unique users. I'd use a database - not hard or time consuming to stand something up with a table or two. Easier to query against later.
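A minimal sketch of that polling step, assuming a hypothetical entrants table with a unique key on the screen name and using the old search.json endpoint:
<?php
// Sketch: poll the search API for the contest hashtag and record each
// entrant once. The DSN, table and column names are placeholders.
$db = new PDO('mysql:host=localhost;dbname=contest', 'dbuser', 'dbpass');

$feed = json_decode(file_get_contents(
    'http://search.twitter.com/search.json?q=' . urlencode('#supercoolcontest http://foo.com/contest')
));

// INSERT IGNORE assumes a unique key on screen_name, so re-running the
// poll does not create duplicate entrants.
$insert = $db->prepare(
    'INSERT IGNORE INTO entrants (screen_name, tweet_id) VALUES (?, ?)'
);

foreach ($feed->results as $tweet) {
    // from_user is the entrant's screen name in the old search API results.
    $insert->execute(array($tweet->from_user, $tweet->id));
}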
To answer your last question, you do need to make a separate API call to determine if a given user follows another user.
I know this is an old question and is probably not relevant to meder anymore; nonetheless, I want to comment that there is now another way to solve this problem, using Twitter's Streaming API: http://dev.twitter.com/pages/streaming_api. The advantage of this approach is that you tell Twitter to send you all the tweets that meet certain conditions right when they are generated.
With the search API you need to poll Twitter for new tweets all the time, and there is a bigger chance that some of them will be missing from the search results. With the streaming API you keep an open connection to Twitter and process the tweets as they come. Twitter won't guarantee that you will get every tweet that meets the conditions, but from my experience the risk is much lower.
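For completeness, here is a rough sketch of consuming the old statuses/filter stream from PHP; the credentials and the track keyword are placeholders, and this assumes the era when the streaming endpoints still accepted basic auth:
<?php
// Sketch: keep a connection open to the streaming API and handle each
// tweet as it arrives. Username/password and keyword are placeholders.
$ch = curl_init('https://stream.twitter.com/1/statuses/filter.json');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, 'track=' . urlencode('#supercoolcontest'));
curl_setopt($ch, CURLOPT_USERPWD, 'username:password');
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $chunk) {
    // Each chunk is (roughly) one newline-delimited JSON tweet; a real
    // consumer would buffer until it has a complete line.
    $tweet = json_decode(trim($chunk));
    if ($tweet && isset($tweet->text)) {
        echo $tweet->user->screen_name . ': ' . $tweet->text . "\n";
    }
    return strlen($chunk);   // tell cURL the data was consumed
});
curl_exec($ch);   // blocks and keeps processing until the connection drops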
I am losing my mind here.
I'm looking at a beginner's OAuth PHP package that has a 700-line file. I'm used to using 10-12 lines of cURL, or just a couple of lines with SimpleXML, to get the same data. Is there a very meat-and-potatoes way to convey the concepts of interfacing with Twitter via OAuth without totally alienating someone?
I'm used to learning by downloading an example and tooling around with it. The only examples I can find are so confusing that I'd have to take a course just to begin to understand the demo.
Specific Question:
I have a user's access token. The API address is
http://twitter.com/statuses/friends_timeline.xml
How do I take the token, mash it into the address, and make that give me the data that I want? I'm willing to learn, but I can't learn if I don't understand what is going on to begin with. I get the basics: you send a request, the user approves, you get a token - I get that. I don't get how you make the requests with your token as authorization in place of the plain-text username and password.
Have you considered looking at one of the Twitter API PHP libraries listed on the Twitter API wiki?
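If it helps demystify things, here is a minimal sketch of roughly what those libraries do under the hood when they sign a request with the user's access token (OAuth 1.0a with HMAC-SHA1). The keys, secrets and the .json variant of the timeline URL are placeholders:
<?php
// Minimal sketch of signing a GET request with OAuth 1.0a (HMAC-SHA1).
// In practice you would let a library do this; the values are placeholders.
$consumerKey    = 'YOUR_CONSUMER_KEY';
$consumerSecret = 'YOUR_CONSUMER_SECRET';
$token          = 'USER_ACCESS_TOKEN';
$tokenSecret    = 'USER_ACCESS_TOKEN_SECRET';

// JSON flavour of the endpoint from the question, for easy decoding.
$url    = 'http://twitter.com/statuses/friends_timeline.json';
$method = 'GET';

// 1. OAuth protocol parameters that travel with every signed request.
$oauth = array(
    'oauth_consumer_key'     => $consumerKey,
    'oauth_token'            => $token,
    'oauth_nonce'            => md5(uniqid(mt_rand(), true)),
    'oauth_timestamp'        => time(),
    'oauth_signature_method' => 'HMAC-SHA1',
    'oauth_version'          => '1.0',
);

// 2. Build the signature base string: METHOD & URL & sorted, encoded params.
$params = $oauth;   // plus any query/body parameters, if the request had them
ksort($params);
$pairs = array();
foreach ($params as $key => $value) {
    $pairs[] = rawurlencode($key) . '=' . rawurlencode($value);
}
$baseString = strtoupper($method) . '&' . rawurlencode($url) . '&'
            . rawurlencode(implode('&', $pairs));

// 3. Sign it with the consumer secret and the token secret.
$signingKey = rawurlencode($consumerSecret) . '&' . rawurlencode($tokenSecret);
$oauth['oauth_signature'] = base64_encode(
    hash_hmac('sha1', $baseString, $signingKey, true)
);

// 4. Send the oauth_* values (including the signature) in the Authorization header.
$headerParts = array();
foreach ($oauth as $key => $value) {
    $headerParts[] = rawurlencode($key) . '="' . rawurlencode($value) . '"';
}
$header = 'Authorization: OAuth ' . implode(', ', $headerParts);

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_HTTPHEADER, array($header));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$timeline = json_decode(curl_exec($ch));
curl_close($ch);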
I felt exactly the same, but then I found EpiTwitter, which makes the whole process much, much easier. Check out the author's blog for specific examples that actually work :)
I haven't tried this library, but someone posted a link to a simple OAuth library a few days ago here.
I have an app where I pull in tweets with a certain hashtag. When I find the hashtag, the app automatically creates a user if they don't exist. When the user logs in via Twitter, I want to be able to present them with their friends who are also using the app. The problem is that for Twitter users with a ton of friends there is a maximum response of 100, so I'd have to hit the API 10 times to get the friends of someone with 1000 friends.
Also, when pulling the friends' info, should I just cache the friends in an array and move them to a matched array so I don't have to hit the API again?
Given that most Twitter apps have a per hour limit on API calls you really should cache pretty much everything. Check the cache to see if you have the data first before pulling down any information.
If you are worried about how up-to-date the data is, then put a timestamp in the cache. When you try to access something from the cache, check whether the time difference to now is larger than some defined amount (depending on how fresh your data needs to be and how hard you can keep hitting the server with requests) and, if it is, go and refresh the data.
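As a tiny sketch of that idea (the cache file, the ten-minute threshold and the users/show call are arbitrary examples, not something the answer prescribes):
<?php
// Sketch: return cached data unless it is older than $maxAge seconds,
// in which case call $fetch() and refresh the cache.
function cached($cacheFile, $maxAge, $fetch)
{
    if (file_exists($cacheFile) && (time() - filemtime($cacheFile)) < $maxAge) {
        return json_decode(file_get_contents($cacheFile));
    }
    $data = $fetch();   // only hit the API when the cache is stale or missing
    file_put_contents($cacheFile, json_encode($data));
    return $data;
}

// Example: cache a user's profile for ten minutes.
$profile = cached('/tmp/profile_someuser.json', 600, function () {
    return json_decode(file_get_contents(
        'http://api.twitter.com/1/users/show.json?screen_name=someuser'
    ));
});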
This is a little like writing a good web crawler (which Jeff Atwood seems to suggest has only been done by Google). It is easy to write something that will attempt to pull down everything from the internet at once but it is more difficult to write something that will do it in a sustainable, manageable way.
Twitter have been sensible in forcing people to think through these issues by placing a "per-hour access count" on their API.
I found an API call that returns just the IDs of a Twitter user's friends; it returns upwards of 5000 and, in fact, tries to return them all. The docs for the call are here: http://apiwiki.twitter.com/Twitter-REST-API-Method:-friends%C2%A0ids
What I did was take the response from the API call and create a SQL statement utilizing IN. This way, I can now handle all my sorting and so forth via SQL, rather than doing a nasty array compare.
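A rough sketch of that approach, assuming the old friends/ids call and a hypothetical users table keyed by Twitter ID:
<?php
// Sketch: fetch the full list of friend IDs in one call, then let SQL
// figure out which of them already have accounts in the app.
$friendIds = json_decode(file_get_contents(
    'http://api.twitter.com/1/friends/ids.json?screen_name=someuser'
));
// Depending on the API version the response is either a bare array of IDs
// or wrapped in an "ids" field; adjust accordingly.

if (!empty($friendIds)) {
    $db = new PDO('mysql:host=localhost;dbname=myapp', 'dbuser', 'dbpass');

    // One placeholder per ID keeps the values safely bound.
    $placeholders = implode(',', array_fill(0, count($friendIds), '?'));
    $stmt = $db->prepare(
        "SELECT twitter_id, screen_name FROM users WHERE twitter_id IN ($placeholders)"
    );
    $stmt->execute($friendIds);

    $friendsUsingApp = $stmt->fetchAll(PDO::FETCH_ASSOC);
}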