Netflix: Fetching new remote data when it becomes readily available - php

I hope this is an appropriate question: I'm using the Netflix API and I'm wondering how I could automatically receive new data when it becomes available (in this case, recently watched films, as soon as a Netflix user finishes watching one). The only way I can think of is sending requests at regular intervals to query their feed. And would PHP be my best bet?

That's right, Netflix doesn't provide any push notifications through their API. You'll have to poll their feed periodically, but not too often: your consumer key is limited to a certain number of requests per second and requests per day.
I'm not exactly sure what you're trying to do, so I can't say whether PHP would be the right choice. OAuth libraries are available for pretty much every major language, so it's up to you.
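If it helps, here is a rough sketch of what a scheduled PHP poller could look like. The feed URL, response shape, and field names below are placeholders, and the real Netflix feeds need OAuth-signed requests:

```php
<?php
// poll_netflix.php - run from cron, e.g. every 15 minutes:
//   */15 * * * * php /path/to/poll_netflix.php
// NOTE: the feed URL and the response fields below are placeholders; the real
// Netflix API requires OAuth-signed requests and has its own schema.

$feedUrl   = 'https://api.example.com/users/USER_ID/rental_history/watched'; // placeholder
$stateFile = __DIR__ . '/last_seen.json';

// Load the IDs we have already processed.
$seen = array();
if (file_exists($stateFile)) {
    $seen = json_decode(file_get_contents($stateFile), true) ?: array();
}

$ch = curl_init($feedUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 15);
$response = curl_exec($ch);
curl_close($ch);

if ($response === false) {
    exit(1); // network error; just try again on the next cron run
}

$items = json_decode($response, true);
foreach ((array) $items as $item) {
    $id = $item['id']; // placeholder field name
    if (!in_array($id, $seen, true)) {
        // A new "recently watched" entry - handle it here (store it, notify, etc.)
        $seen[] = $id;
    }
}

file_put_contents($stateFile, json_encode($seen));
```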

Related

How to share real time updates on a website between users on different computers?

I'm trying to figure out a way for users of a website (say a student and teacher) to share a secure connection where real time updates on one page are viewed by both of them.
From research I've concluded that some of the real time updates could be performed using ajax and javascript.
But I'm stumped as to how users could share a connection where only the two users would be viewing the updates that take place on the website (such as flash animations of a drawing board.) I'm also confused how you would even begin to set up a connection like this.
I've looked into PHP sessions and cookies, but I'm not sure I'm doing the right research.
Any pointers as to how two specific users could share a secure connection where real time updates are viewed by the two of them only? I don't want a terse response please. I'm looking for specific details like functions and syntax specific to PHP. I appreciate the help and will rate you up if you give me good answers!
You cannot share a secure connection (e.g. HTTPS): it's one client to one server.
If both clients are logged in and have a background AJAX task running in the browser, is it acceptable to have each client "pull" the same data every few seconds to display for both users?
This would require the "drawing board" updates to also be sent continuously back to the server to share the updated data with the other client. I'm sure there will be an event you can use to trigger the post of data (e.g. on mouse up).
If performance is an issue, you'd want to use a better server technology like Java that is able to keep session state between requests without having to persist to a database.
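As a rough illustration of that pull approach (the table layout, column names, and session key below are made up for this sketch), both clients could hit the same PHP endpoint: POST to save the latest board state, GET every few seconds to fetch it:

```php
<?php
// board.php - shared between exactly two users identified by a room id.
// Assumed table: boards(room_id VARCHAR PRIMARY KEY, state TEXT, updated_at TIMESTAMP).
session_start();
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$room = $_SESSION['room_id']; // set when the teacher/student pair is created

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // Client pushes its latest drawing-board state (e.g. on mouse up).
    $stmt = $pdo->prepare(
        'REPLACE INTO boards (room_id, state, updated_at) VALUES (?, ?, NOW())'
    );
    $stmt->execute(array($room, $_POST['state']));
} else {
    // Client polls every few seconds for the current shared state.
    $stmt = $pdo->prepare('SELECT state, updated_at FROM boards WHERE room_id = ?');
    $stmt->execute(array($room));
    header('Content-Type: application/json');
    echo json_encode($stmt->fetch(PDO::FETCH_ASSOC) ?: new stdClass());
}
```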
You can look at AJAX push techniques. I used Comet once, where an administrator posted messages and everybody who was logged in saw the message appear on their screen. I don't know whether there are good Comet implementations for PHP; I've only used it with JSP. Just search for "ajax push" in Google.
Flash allows for connections between users; I think they refer to them as sockets.
If you want to use Ajax, et al, you need a server side technology that supports push.
Node is the standard in this regard, and you can set up a Heroku instance for free.
There are others, and you need to learn tools before even beginning to learn application.
Among the many overviews, this might interest you:
http://arstechnica.com/business/2012/05/say-hello-to-the-real-real-time-web/?1
A few good examples where this is happening:
Google Docs
Etherpad
HTML5 Games: Multi player
Techniques you can use (with varying browser support)
HTML5 WebSockets (Wikipedia; MDN; HTML5 Demo)
Comet (Wikipedia)
Really pushing data to a web browser client from a server (which would do that when it receives something from another client) is only possible with WebSockets as far as I know. Other mechanisms would either require browser plugins or a stand-alone application.
However, with Comet (through AJAX) you can get really close to pushing data by polling the server periodically. Contrary to traditional polling (e.g. where a client asks for data every 5 seconds), with the Comet principle the server holds that periodic request hostage for, say, up to 30 seconds. The server will not send back data until either it has data or the timeout is reached. That way, during those 30 seconds, any data that the server receives can be pushed back to the other clients instantly. Right after that, the client starts a new 30-second request, and so forth.
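To make the principle concrete, a bare-bones long-poll endpoint in PHP might look like the sketch below. The file-based message store is purely for illustration, and PHP's one-process-per-request model makes it a poor fit for many concurrent long-polls, which is part of why NodeJS is recommended next:

```php
<?php
// longpoll.php - the client requests this with the timestamp of the last
// data it has seen; the server holds the request until there is newer
// data or ~30 seconds have passed, then the client immediately reconnects.
set_time_limit(40);
$messageFile = __DIR__ . '/messages.json'; // illustration only
$lastSeen    = isset($_GET['since']) ? (int) $_GET['since'] : 0;

$deadline = time() + 30;
do {
    clearstatcache(false, $messageFile);
    $modified = file_exists($messageFile) ? filemtime($messageFile) : 0;
    if ($modified > $lastSeen) {
        // New data arrived while we were waiting: push it back immediately.
        header('Content-Type: application/json');
        echo json_encode(array(
            'since'    => $modified,
            'messages' => json_decode(file_get_contents($messageFile), true),
        ));
        exit;
    }
    usleep(500000); // check twice a second
} while (time() < $deadline);

// Timed out with no new data; the client simply asks again.
header('Content-Type: application/json');
echo json_encode(array('since' => $lastSeen, 'messages' => array()));
```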
Although both Comet and WebSockets should work with a PHP backend served by Apache, I'd recommend looking into NodeJS (as server technology) for this.
There is a lot of information regarding Comet on the internet, I suggest you google it, maybe start at Wikipedia.
The great thing about Comet is that it is more a principle than a technology. It uses what we already have (simple HTTP requests with AJAX), so browser support is very wide.
You could also do a combination, where you use sockets if supported and fallback to Comet.
I'm sure you have looked into this. The opinion that this can be done via AJAX is misleading, because it suggests that two users of a website can communicate via JavaScript alone.
As you are aware, JavaScript happens on the client, and AJAX is essentially 'talking to the server without a page change or refresh'.
The communication between two users of the website has to happen via the server - PHP and some chosen datastore.
Hope that wasn't terse.
cheers, Rob

Server-side or client-side for fetching tweets?

I run this website for my dad which pulls tweets from his Twitter feed and displays them in an alternative format. Currently, the tweets are pulled using JavaScript, so entirely client-side. Is this the most efficient way of doing things? The website has next to no hit rate, but I'm just interested in what would be the best way to scale it. Any advice would be great. I'm also thinking of including articles in the stream at some point. What would be the best way to implement that?
Twitter API requests are rate limited to 150 an hour. If your page is requested more than that, you will get an error from the Twitter API (an HTTP 400 error). Therefore, it is probably a better idea to request the tweets on the server and cache the response for a certain period of time. You could request the latest tweets up to 150 times an hour, and any time your page is requested it receives the cached tweets from your server side script, rather than calling the API directly.
From the Twitter docs:
Unauthenticated calls are permitted 150 requests per hour. Unauthenticated calls are measured against the public facing IP of the server or device making the request.
I recently did some work integrating with the Twitter API in exactly the same way you have. We ended up hitting the rate limit very quickly, even just while testing the app. That app does now cache tweets at the server, and updates the cache a few times every hour.
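A minimal sketch of that server-side cache in PHP follows. The endpoint URL matches the old unauthenticated v1 timeline call those limits applied to, and the cache is just a flat file; adjust both for whatever you actually use:

```php
<?php
// tweets.php - returns cached tweets, refreshing from Twitter at most
// once every 10 minutes so the 150 requests/hour limit is never reached.
$cacheFile = __DIR__ . '/tweets_cache.json';
$maxAge    = 600; // seconds

if (!file_exists($cacheFile) || time() - filemtime($cacheFile) > $maxAge) {
    $url  = 'https://api.twitter.com/1/statuses/user_timeline.json?screen_name=example&count=20';
    $json = @file_get_contents($url);
    if ($json !== false) {
        file_put_contents($cacheFile, $json); // refresh the cache
    }
    // On failure we silently fall back to the stale cache, if there is one.
}

header('Content-Type: application/json');
if (file_exists($cacheFile)) {
    readfile($cacheFile);
} else {
    echo '[]'; // no cache yet and the API call failed
}
```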
I would recommend using the client side to call the Twitter API and avoiding calls to your server. The only downside to using client-side JS is that you cannot control whether or not the viewer has JS deactivated.
What kind of article did you want to include in the stream? Like blog posts directly on your website or external articles?
By pulling the tweets server side, you're routing all tweet traffic through your server. So, all your traffic is then coming from your server, potentially causing a decrease in the performance of your website.
If you don't do anything with those tweets that isn't possible client-side, I would stick with your current solution. Nothing wrong with it, and it scales tremendously (assuming that you don't outperform Twitter's servers, of course ;))
Pulling your tweets from the client side is definitely better in terms of scalability. I don't understand what you are looking for in your second question about adding articles.
I think if you can do them client side, go for it! It pushes the bandwidth usage to the browser and means less load on your server. I think it is scalable too. As long as your clients can make a web request they can display your site! It doesn't get any easier than that! Your server will never be a bottleneck for them!
If you can get articles through an API, I would stick to the current setup and keep everything client side.
For really low demand stuff like that, it's really not going to matter a whole lot. If you have a large number of tasks per user then you might want to consider server side. If you have a large number of users, and only a few tasks (tweets to be pulled in or whatever) per user, client side AJAX is probably the way to go. As far as including articles, I'd probably go server side there because of the size of the data you'll be working with.

how to practice good ethics while executing many curl requests in php

I have done a fair amount of reading on this and I am not quite sure what the correct way to go about this is.
I am accessing a website's API that provides information that I am using on my site. On average I will be making over 400 different API requests, which means over 400 curl requests. What is the proper way to make my code pause for an amount of time, then continue? The site does not limit the number of hits, so I will not get banned for just pulling all of the stuff at once, but I would not want to be that server when 10,000 people like me do the same thing. What I am trying to do is pause my code and politely use the service they offer.
What is the best method to pause php execution with resource consumption in mind?
What is the most courteous amount of requests per wait cycle?
What is the most courteous amount of wait per cycle?
With all of these questions I would also like to obtain the information as fast as possible while attempting to stay with in the above questions.
sample eve central API response
Thank you in advance for your time and patience.
Here's a thought: have you asked? If an API can't handle a high load, the providers usually include a limit in their terms. If not, I'd recommend emailing the service provider, explaining what you want to do, and asking what they think would be a reasonable load. It's quite possible that their servers are perfectly capable of handling any load you might reasonably want to give them, which is why they don't specify.
If you want to do good by the service provider, don't just guess what they want. Ask, and then you'll know exactly how far you can go without upsetting the people who built the API.
For the actual mechanics of pausing, I'd use the method alex suggested (but has since deleted): PHP's usleep.
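For illustration, a hedged sketch of pacing a batch of curl requests with sleep/usleep is below. The batch size and delays are guesses; replace them with whatever the provider says is reasonable:

```php
<?php
// Fetch ~400 API URLs politely: small batches with a pause between them.
// The batch size and delays here are guesses, not a documented limit.
$urls         = file('urls.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$batchSize    = 10; // requests per cycle
$pauseSeconds = 2;  // pause between cycles

foreach (array_chunk($urls, $batchSize) as $batch) {
    foreach ($batch as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        $body = curl_exec($ch);
        curl_close($ch);

        if ($body !== false) {
            // ... parse and store the response ...
        }
        usleep(250000); // 0.25 s between individual requests
    }
    // Sleeping is cheap on resources: the process just idles and uses no CPU.
    sleep($pauseSeconds);
}
```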

Push notification to the client browser

I'd like to create an application where when a Super user clicks a link the users should get a notification or rather a content like a pdf for them to access on the screen.
Use Case: When a teacher wants to share a PDF with his students he should be able to notify his students about the pdf available for download and a link has to be provided to do the same.
There are several ways you can accomplish this. The most supported way is through a technique called Comet or long-polling. Basically, the client sends a request to the server and the server doesn't send a response until some event happens. This gives the illusion that the server is pushing to the client.
There are other methods and technologies that actually allow pushing to the client instead of just simulating it (i.e. Web Sockets), but many browsers don't support them.
As you want to implement this in CakePHP (so I assume it's a web-based application), the user will have to have an 'active' page open in order to receive the push messages.
It's worth looking at the first two answers to this, but also just think about how other sites might achieve this. Sites like Facebook, BBC, Stackoverflow all use techniques to keep pages up to date.
I suspect Facebook just uses some AJAX that runs in a loop/timer to periodically pull updates in a way that makes it look like push. If the update requests are frequent enough (a short time period), it'll look almost realtime. If it's a long time period, it'll look like a pull. Finding the right balance between up-to-dateness and browser/processor/network thrashing is the key.
The actual request shouldn't thrash the system, but the reply in some applications may be much bigger. In your case, the data in each direction is tiny, so you could make the request loop quite short.
Experiment!
The standard HTTP protocol doesn't allow push from server to client. You can emulate this by using, for example, AJAX requests at a small interval.
Have a look at php-amqplib and RabbitMQ. Together they can help you implement AMQP (Advanced Message Queuing Protocol). Essentially your web page can be made to update by pushing a message to it.
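For reference, publishing a notification with php-amqplib looks roughly like the sketch below. The queue name and payload are made up, and something else (a consumer daemon or a WebSocket bridge, for instance) still has to deliver the message to the browsers:

```php
<?php
// publish_notification.php - push a "new PDF available" message onto a
// RabbitMQ queue with php-amqplib.
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel    = $connection->channel();

// Durable queue named 'notifications' (example name).
$channel->queue_declare('notifications', false, true, false, false);

$payload = json_encode(array(
    'type' => 'pdf_shared',
    'url'  => '/files/lecture-notes.pdf', // example link for the students
));
$channel->basic_publish(new AMQPMessage($payload), '', 'notifications');

$channel->close();
$connection->close();
```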
[EDIT] I recently came across Pusher which I have implemented for a project. It is a HTML5 WebSocket powered realtime messaging service. It works really well and has a free bottom tier plan. It's also extremely simple to implement.
Check out node.js in combination with socket.io and express. Great starting point here

Real time activity feed - code / platform implementation?

I am defining specs for a live activity feed on my website. I have the backend of the data model done, but the open area is the actual code development, where my development team is lost on the best way to make the feeds work. Is this purely done by writing custom code, or do we need to use existing frameworks to make the feeds work in real time? One suggestion thrown to me was to use reverse AJAX for this. Someone mentioned having the client poll the server every x seconds, but I don't like this because it is unwanted server traffic if there are no updates. Someone also mentioned a push engine like Lightstreamer to push from server to browser.
So in the end: what is the way to go? Is it code related, purely pushing SQL queries, using frameworks, using platforms, etc.?
My platform is written in PHP (CodeIgniter) and the DB is MySQL.
The activity stream will have lots of activities. There are 42 components on the social network I am developing, and each component has approximately 30 unique activities that can be streamed.
Check out http://www.stream-hub.com/
I have been using superfeedr.com with Rails and I can tell you it works really well. Here are a few facts about it:
Pros
Julien, the lead developer is very helpful when you encounter a problem.
Immediate push of new feed entries for feeds which support PubSubHubbub.
JSON responses, which are easy to parse however you'd like.
Retrieve API in case the update callback fails and you need to retrieve the latest entries for a given feed.
Cons
Documentation is not up to the standards I would like, so you'll likely end up searching the web to find obscure implementation details.
You can't control how often Superfeedr fetches each feed; they use a secret algorithm to determine that.
The web interface allows you to manage your feeds but becomes difficult to use when you subscribe to a lot of them.
The subscription verification mechanism works synchronously, so you need to make sure the object URL is ready for the Superfeedr callback to hit it (they do provide an async option, which does not seem to work well).
Overall I would recommend superfeedr as a good solution for what you need.
