HTTP overhead in API-centric PHP application - php

I am reorganizing an existing PHP application to separate data access (private API calls) from the application itself.
The purpose of doing this is to allow for another application on the intranet to access the same data without duplicating the code to run the queries and such. I am also planning to make it easier for developers to write code for the current web application, while only a few team members would be adding features to the API.
Currently the application has a structure like this (this is only one of many pages):
GET /notes.php - gets the page for the user to view notes (main UI page)
GET /notes.php?page=view&id=6 - get the contents of note 6
POST /notes.php?page=create - create a note
POST /notes.php?page=delete - delete a note
POST /notes.php?page=append - append to a note
The reorganized application will have a structure like this:
GET /notes.php
Internal GET /api/notes/6
Internal POST /api/notes
Internal DELETE /api/notes/6
Internal PUT /api/notes (or perhaps PATCH, depending on whether a full representation will be sent)
In the web application I was thinking of doing HTTP requests to URLs on https://localhost/api/ but that seems really expensive. Here is some code to elaborate on what I mean:
// GET notes.php
switch ($_GET['page']) {
    case 'view':
        $data = \Requests::get(
            "https://localhost/api/notes/{$_GET['id']}",
            array(),
            array('auth' => ... )
        );
        // do things with $data if necessary and send back to browser
        break;
    case 'create':
        $response = \Requests::post( ... );
        if ($response->status_code === 201) {
            // things
        }
        break;
    // etc...
}
I read this discussion and one of the members posted:
Too much overhead, do not use the network for internal communications. Instead use much more readily available means of communications between different process or what have you. This depends on the system its running on of course...Now you can mimic REST if you like but do not use HTTP or the network for internal stuff. Thats like throwing a whale into a mini toilet.
Can someone explain how I can achieve this? Both the web application and API are on the same server (at least for now).
Or is the HTTP overhead aspect just something of negligible concern?
Making HTTP requests directly from the JavaScript/browser to the API is not an option at the moment due to security restrictions.
I've also looked at the two answers in this question but it would be nice for someone to elaborate on that.

The HTTP overhead will be significant, as every internal call goes through a full request cycle: HTTP server overhead, a separate PHP process, the OS networking layer, and so on. Whether it is negligible or not really depends on the type of your application, traffic, infrastructure, response time requirements, etc.
In order to suggest a better solution, one would need to see your reasoning for considering this approach in the first place. Factors to consider also include the current application architecture, requirements, frameworks used, etc.
If security is your primary concern, this is not necessarily a good way to go in the first place, as you will now need to store some session-related data in yet another layer.
Also, despite the additional overhead, the final application could potentially perform faster given the right caching mechanisms. It really depends on your final solution.

I am building the same kind of application framework and ran into the same problem, so I settled on the following design:
For processes that are located remotely (on a different machine) I use cURL or similar calls to the remote resource. For example, if users are stored on a different server and I need a user's status, I call API->Execute(https://remote.com/user/currentStatus/getid/6) and it returns the status.
For local calls, say Events requires Alerts (two separate packages with their own data models, but on the same machine), I make a local API-like call, something like this:
API->Execute(array('Alerts', Param1, Param2))
API->Execute then knows Alerts is a local object: it resolves the object's local physical path, initializes it, passes the data, and returns the results into the context. No remote execution, no protocol overhead.
For example, if you want to keep an encryption service with its keys and whatnot away from the rest of the applications, you can send data securely and get back the encrypted value; that service is always called over a remote API (https://encryptionservice.com/encrypt/this/value)
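To make the local branch concrete, here is a minimal sketch of such a dispatcher. The class names (Api, AlertsService), the handle() interface, and the registry of local packages are hypothetical; the remote branch reuses the Requests library from the question with placeholder credentials:

<?php
// Route a call either in-process (no HTTP) or over the network, depending on where the package lives.
class Api
{
    // Packages that live on this machine map to plain PHP classes (hypothetical registry).
    private static $localHandlers = array(
        'Alerts' => 'AlertsService',
    );

    public static function execute($target, array $params = array())
    {
        if (is_string($target)) {
            // Remote resource: fall back to an HTTP call.
            $response = \Requests::get($target, array(), array('auth' => array('user', 'pass')));
            return json_decode($response->body, true);
        }

        // Local resource: instantiate the class directly - no network, no serialization.
        list($package) = $target;
        if (!isset(self::$localHandlers[$package])) {
            throw new InvalidArgumentException("Unknown local package: $package");
        }
        $class   = self::$localHandlers[$package];
        $service = new $class();
        return $service->handle($params);
    }
}

class AlertsService
{
    public function handle(array $params)
    {
        // ...query the Alerts data model directly here...
        return array('alerts' => array(), 'params' => $params);
    }
}

// $status = Api::execute('https://remote.com/user/currentStatus/getid/6');
// $alerts = Api::execute(array('Alerts'), array('eventId' => 42));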

Related

How to avoid repeating business logic between client and server?

As the needs of web apps have grown, I have found myself writing more and more API driven web applications. I use frameworks like AngularJS to build rich web clients that communicate with these APIs. Currently I am using PHP (Lumen or Laravel) for the server side / API.
The problem is, I find myself repeating business logic between the client and the server side often.
When I say business logic I mean rules like the following for an order form:
You can buy X if you buy Y.
You cannot buy Y if you have Z.
If you buy 10 of these you get 10% off.
Height x Width x Depth x Cost = Final Cost.
Height must be between 10 and 20 if your width is greater than 5.
Etc etc.
To make this app both responsive and fast, the logic for calculations (along with other business logic) is being done on the client side. Since we shouldn't trust the client, I then re-verify those numbers on the server side. This logic can get pretty complex and writing this complex logic in both places feels dangerous.
I have three solutions in mind:
Make everything that requires business logic make an ajax call to the API. All the business logic would live in one place and can be tested once. This could be slow since the client would have to wait for each and every change they make to the order form to get updated values and results. Having a very fast API would help with this. The main downside is that this may not work well when users are on poor connections (mobile devices).
Write the business logic on the client side AND on the server side. The client gets instant feedback as they make changes on the form, and we validate all data once they submit on the server. The downside here is that we have to duplicate all the business logic, and test both sides. This is certainly more work and would make future work fragile.
Trust the client!?! Write all the business logic on the client side and assume they didn't tamper with the data. In my current scenario I am working on a quote builder which would always get reviewed by a human, so maybe this is actually ok.
Honestly, I am not happy about any of the solutions which is why I am reaching out to the community for advice. I would love to hear your opinions or approaches to this problem!
You can do one more thing.
Create your validation and business logic code with JavaScript only. But make it very loosely coupled, as much as possible. If possible, only take JSON as input and give JSON as output.
Then set up a separate NodeJS server alongside the existing PHP server to serve that logic to the client, so that on the client side it can be used without an AJAX call.
Then from the PHP server, when you need to validate and run all those business logic rules, use cURL to call the NodeJS business logic and validate the data. That means an HTTP call from the PHP server to the NodeJS server. The NodeJS server will have additional code which will take the data, validate with the same code, and return the result.
This way you get:
Faster development - one place to unit test your logic.
Faster client code execution - no need for AJAX, since the same validation JavaScript code is being served by NodeJS to your client.
All business logic lives in the NodeJS server - when business logic changes, you only need to touch this part; so that in the near future, if you need to create some other additional interfaces, then you can use this server to validate your data. It will work just like your Business Rule Server.
The only thing you need to do is set up a NodeJS server alongside your PHP server. You do not need to change all of your code to run on the NodeJS server.
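A minimal sketch of the PHP side of that cURL call, assuming the NodeJS server exposes a hypothetical /validate endpoint that takes the form data as JSON and answers with a JSON verdict:

<?php
// Ask the NodeJS business-rule server to validate an order (endpoint and response shape are assumptions).
function validateOrder(array $order)
{
    $ch = curl_init('http://localhost:3000/validate');   // hypothetical NodeJS endpoint
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => json_encode($order),
        CURLOPT_HTTPHEADER     => array('Content-Type: application/json'),
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT        => 5,
    ));
    $body  = curl_exec($ch);
    $error = curl_error($ch);
    curl_close($ch);

    if ($body === false) {
        throw new RuntimeException('Business-rule server unreachable: ' . $error);
    }
    // Expected shape (assumption): { "valid": true|false, "errors": [ ... ] }
    return json_decode($body, true);
}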
I had the same issue when I decided to create an application using Laravel for the back end and Angular 2 for the front end. And it seems to me there is no way to avoid duplicating the business logic so far, because:
At the moment PHP and JavaScript cannot be converted from one to the other. It would be nice if we could write the business logic in one language and then embed it in both the back end and the front end. Which leads me to another point:
To achieve that goal, we should write the business logic in one language only, and so far JavaScript is the best candidate. As you know, TypeScript/ECMAScript helps us write the code in an OOP way, and the Meteor framework's NodeJS infrastructure lets us write JavaScript that runs on both the back end and the front end.
So from my point of view, we can use TypeScript/ECMAScript to write packages for the business logic; for example, a validation class written in JavaScript can be used on both sides, so you write it only once, but it is called from both the front end and the back end.
That's my point. Hope to see some other solutions for this very interesting topic.
One possible solution is to declare your validation rules in a declarative, abstract format like XML or JSON Schema.
Then on the client side, say in AngularJS, you can feed these rules into an off-the-shelf form renderer, so you end up with forms that validate against the declared rules.
Then on your server-side API you create a reusable validation engine that validates based on the same defined rules.
What you end up with is a single place, your JSON Schema or wherever you declaratively define your rules, where your form and validation rules are defined.
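As a minimal sketch of the server-side half, assuming the rules live in a small home-grown JSON format (rather than full JSON Schema) that both the form renderer and this engine read:

<?php
// rules.json, shared with the client, might look like:
// { "width":  { "type": "number", "min": 1,  "max": 100 },
//   "height": { "type": "number", "min": 10, "max": 20  } }

function validateAgainstRules(array $data, array $rules)
{
    $errors = array();
    foreach ($rules as $field => $rule) {
        $value = isset($data[$field]) ? $data[$field] : null;
        if ($rule['type'] === 'number' && !is_numeric($value)) {
            $errors[$field][] = 'must be a number';
            continue;                    // skip range checks on a non-number
        }
        if (isset($rule['min']) && $value < $rule['min']) {
            $errors[$field][] = 'must be at least ' . $rule['min'];
        }
        if (isset($rule['max']) && $value > $rule['max']) {
            $errors[$field][] = 'must be at most ' . $rule['max'];
        }
    }
    return $errors;                      // an empty array means the data passed
}

$rules  = json_decode(file_get_contents('rules.json'), true);
$errors = validateAgainstRules($_POST, $rules);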
I was also in this position when I worked on some of my own projects. It is always tempting to make use of the power of the client's device to do the heavy lifting and then just validate the results on the server side, which results in the business logic appearing twice, on both the front-end and the back-end.
I think option 1 is the best option, it makes the most sense and seems most logical as well. If you want to expand your web app to native mobile apps in the future you will be able to re-use all of the business logic through calling those APIs. To me, this is a massive win.
If the worry is making too many API requests and that this could hurt mobile performance, then maybe try to group some of the requests together and perform a single check at the end: instead of doing a check for each field in a form, do a check when the user submits the entire form. Also, most internet connections will be sufficient if you keep the request and response data to a minimum, so I wouldn't worry about this.
A bigger problem I normally come across is that your web app will be broken down into sections, with each section calling the relevant APIs, so the state of the app becomes much more complex to understand, since the user can jump between these states. You will need to think very carefully about the user journey and ensure that the process is not buggy.
Here are some of the common issues I had to deal with:
Does the front-end display error if the API returns one?
If the user made a mistake and submitted the form, he/she should see an error. But once the user fixes the mistake and submits again, the error should be hidden and a success message shown.
What if the API is buggy or the internet connection is unstable, so nothing is returned? Will the front-end hang?
What if there are multiple error messages - can/does the front-end display them all?
I would recommend having a lot of unit tests on the front-end to ensure it is stable, even if the business logic is only on the back-end.
First of all: Never trust the client.
That being said, I deal with this all the time, and sadly I haven't found an easy solution. You need to do validation on both sides, BUT you don't need to do the whole validation on both.
What I do is try to balance it out. On the client side, do most of the simple (but valuable) validation: the normal stuff, numbers must be numbers, dates must be dates, data within range, etc. When the form is submitted, it goes to the server to get fully validated, but you have made sure on the client side that most of the information is at least in its proper format and that some (or most) of it is already validated. The real business logic is still done server-side, but since most of the data is already correct, the server-side validation will most likely approve the request, so you avoid a lot of resubmits.
Now, how do you avoid having to change things on both sides? Sometimes you won't be able to avoid it, when major changes are required, BUT business-logic parameters can be shared, and as you suggested, this can be done through AJAX. You make a PHP file where you have all your business-logic parameters, and with an AJAX request you load it on the client side, only once (when the script is loaded). You need to optimize this so you fetch only the parameter values; everything else should already be on the client side. So if some parameter value in the business logic changes, you only change it in your parameter file. (If a parameter changes after the script was loaded, validation will fail on the server side; then you have to decide whether to force a script reload so the parameters are refreshed. I make them reload.)
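A minimal sketch of such a parameter file and its AJAX endpoint; the parameter names and values are made up for illustration:

<?php
// business-params.php - single source of truth for tweakable business-logic values.
return array(
    'bulk_discount_qty' => 10,      // buy this many to get the discount
    'bulk_discount_pct' => 0.10,    // 10% off
    'height_min'        => 10,
    'height_max'        => 20,
    'width_threshold'   => 5,       // height limits only apply above this width
);

<?php
// params.php - the AJAX endpoint the client script loads once at startup;
// server-side validation simply does: $params = require 'business-params.php';
header('Content-Type: application/json');
echo json_encode(require __DIR__ . '/business-params.php');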
I think you get the idea. This is what I do, and it works pretty ok for me, saves me a lot of recoding.
I hope you find this helpful.
I feel option 1 is the best way going forward. API-first development allows all the business logic to be tested and working properly and lets any interface access it. You should NEVER ever ever trust the user!
The power of API-first development is unlimited compared to coding the same logic again and again for each interface that needs it.
Here's a similar thread about whether to put logic client-side or server-side. At the end of the day, each situation is unique and warrants a different plan, but there are some good, guiding tips in this thread.
Client-side vs. Server-side
Today the solution is clearly the one from #ParthaSarathiGhosh, but the near future will certainly give us another solution...
WebAssembly is a low-level assembly-like language that can be shipped with your application and run in the browser. It allows your JavaScript to call into the compiled code. It is aimed at heavy scripts that run client-side, but it will at the same time allow you to reuse your back-end code in the front end. That way, you can write your logic once for the back end and reuse it in the front.
Today this technology is already supported in most modern browsers, but it is only practical from C/C++. So you can already use it if you have those skills.
It is certainly planned to expand it to other languages as well (there is already some research for C#, e.g. Blazor, and other languages), but the maturity level does not seem stable enough for production (even the Blazor development team doesn't recommend it for production yet).
It's only my own opinion, but: logic in NodeJS is a solution for reusing the JavaScript code, yet I still feel the need for a strongly typed language when it comes to big, maintainable logic code. (Yes, I know TypeScript and it's really good, but I miss something.) WebAssembly is still a bit young, but it will surely bring a big improvement in respecting the DRY principle.
Very interesting problem - another caveat can be that we want to support offline mode, i.e. the app must run offline as well.
A further complication is if, say, your server side is all in one technology like Java or .NET, while on the client side you are choosing between something like native tools or Xamarin, which is unfortunately not the same as the server.
So Partha's approach seems most promising, but as stated it will not work in a completely offline mode. A slightly modified approach is to treat the validation rules as data. But not simple data - rather, say that "the whole damn code is data". You can choose any interpreted language you like - Groovy, JavaScript, CScript, etc. - but the one rule you follow 100% is that ALL BUSINESS LOGIC IS IN THAT CODE!
If you are able to achieve this, then in offline mode, when you are syncing data, you will also sync this very special type of data, i.e. the code! (so there is no risk of "trusting" the client)
Then the offline API and the online API are 100% the same code - but the code is in our interpreted language. I think this approach will not only solve this problem but also make business-logic maintenance much simpler. We often create highly complex data models to support rules, when in fact in 2019 you could simply express the rule with ifs/elses and it would be much simpler. We could train end users in a very simple scripting tool and achieve better things with less code.
I have put together a blog post with these ideas: https://medium.com/#thesaadahmad/business-logic-conundrum-offline-mobile-apps-a06ecc134aee

Should I use WebSockets in a social network app, or will PHP/AJAX suffice?

I would like your opinion on a project. The way I started it is slowly revealing many gaps and problems that, now or in the future, will create big issues.
The system will have a notification system, a friends system, a private message system, and other such systems. All of these I have set up with:
jQuery, PHP, and MySQLi round-trips (to keep it short), which brings me to what the title says.
If all of this is done with simple PHP code and POST/GET requests, then for 3-4 online users it will work great. The thing is, when I have many more users, what can I do to make better use of the server's resources? So I started looking around and found things like socket.io.
I just want someone who knows more to tell me what would be best to look into. Consider how the update/notification system works now: jQuery with POST, repeated every 3-5 seconds, which is by no means right.
If your goal is to set up a highly scalable notification service, then probably not.
That's not a strict no, because there are other factors than speed to consider, but when it comes to speed, read on.
WebSockets do give the user a consistently open, bi-directional connection that is, by its very nature, very fast. Also, the client doesn't need to request new information; it is sent when either party deems it appropriate to send.
However, the time savings that the connection itself gives is negligible in terms of the costs to generate the content. How many database calls do you make to check for new notifications? How much structured data do you generate to let the client know to change the notification icon? How many times do you read data from disk, or from across the network?
These same costs do not go away when using any WebSocket server; it just makes one mitigation technique more obvious: Keep the user's notification state in memory and update it as notifications change to prevent costly trips to disk, to the database, and across the server's local network.
Known proven techniques to mitigate the time costs of serving quickly changing web content:
Reverse proxy (Varnish-Cache)
Sits on port 80 and acts as a very thin web server. If a request is for something that isn't in the proxy's in-RAM cache, it sends the request on down to a "real" web server. This is especially useful for serving content that very rarely changes, such as your images and scripts, and has edge-side includes for content that mostly remains the same but has some small element that can't be cached... For instance, on an e-commerce site, a product's description, image, etc., may all be cached, but the HTML that shows the contents of a user's cart can't, so is an ideal candidate for an edge-side include.
This will help by greatly reducing the load on your system, since there will be far fewer requests that use disk IO, which is a resource far more limited than memory IO. (A hard drive can't seek for a database resource at the same time it's seeking for a cat jpeg.)
In Memory Key-Value Storage (Memcached)
This will probably give the most bang for your buck, in terms of creating a scalable notification system.
There are other in-memory key-value storage systems out there, but this one has support built right into PHP, not just once, but twice! (In the grand tradition of PHP core development, rather than fixing a broken implementation, they decided to consider the broken version deprecated without actually marking that system as deprecated and throwing the appropriate warnings, etc., that would get people to stop using the broken system. mysql_ v. mysqli_, I'm looking at you...) (Use the memcached version, not memcache.)
Anyways, it's simple: When you make a frequent database, filesystem, or network call, store the results in Memcached. When you update a record, file, or push data across the network, and that data is used in results stored in Memcached, update Memcached.
Then, when you need data, check Memcached first. If it's not there, then make the long, costly trip to disk, to the database, or across the network.
Keep in mind that Memcached is not a persistent datastore... That is, if you reboot the server, Memcached comes back up completely empty. You still need a persistent datastore, so still use your database, files, and network. Also, Memcached is specifically designed to be a volatile storage, serving only the most accessed and most updated data quickly. If the data gets old, it could be erased to make room for newer data. After all, RAM is fast, but it's not nearly as cheap as disk space, so this is a good tradeoff.
Also, no key-value storage systems are relational databases. There are reasons for relational databases. You do not want to write your own ACID guarantee wrapper around a key-value store. You do not want to enforce referential integrity on a key-value store. A fancy name for a key-value store is a No-SQL database. Keep that in mind: You might get horizontal scalability from the likes of Cassandra, and you might get blazing speed from the likes of Memcached, but you don't get SQL and all the many, many, many decades of evolution that RDBMSs have had.
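For example, a minimal cache-aside sketch with PHP's memcached extension; the key name, TTL, and notifications table are made up for illustration:

<?php
// Cache-aside with the memcached extension (not memcache).
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

function getUnseenNotificationCount($userId, Memcached $cache, PDO $db)
{
    $key   = "notif_count:$userId";
    $count = $cache->get($key);
    if ($count !== false) {
        return (int) $count;            // cache hit: no disk, no database
    }

    // Cache miss: make the long, costly trip to the database...
    $stmt = $db->prepare('SELECT COUNT(*) FROM notifications WHERE user_id = ? AND seen = 0');
    $stmt->execute(array($userId));
    $count = (int) $stmt->fetchColumn();

    // ...and remember the answer briefly. Also update the cache wherever notifications change.
    $cache->set($key, $count, 60);
    return $count;
}

// $count = getUnseenNotificationCount(42, $cache, $db);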
And, finally:
Don't mix languages
If, after implementing a reverse proxy and an in-memory cache you still want to implement a WebSocket server, then more power to you. Just keep in mind the implications of which server you choose.
If you want to use Socket.io with Node.js, write your entire application in Javascript. Otherwise, choose a WebSocket server that is written in the same language as the rest of your system.
Example of a 1 language solution:
<?php // ~/projects/MySocialNetwork/core/users/myuser.php
class MyUser {
    public function getNotificationCount() {
        // Note: Don't go to the DB first, if we can help it.
        // 0 is false-ish, so explicitly check for "no result" (null) instead of truthiness.
        // ($memcachedWrapper is assumed to be available in this scope, e.g. injected.)
        if (($notifications = $memcachedWrapper->getNotificationCount($this->userId)) !== null) {
            return $notifications;
        }
        $userModel = new MyUserModel($this->userId);
        return $userModel->getNotificationCount();
    }
}
...
<?php // ~/projects/WebSocketServerForMySocialNetwork/eventhandlers.php
function websocketTickCallback() {
    global $connectedUsers;   // assumed to be maintained by the socket server
    foreach ($connectedUsers as $user) {
        if ($user->getHasChangedNotifications()) {
            $notificationCount = $user->getNotificationCount();
            $contents = json_encode(array('Notification Count' => $notificationCount));
            $message = new WebsocketResponse($user, $contents);
            $message->send();
            $user->resetHasChangedNotifications();
        }
    }
}
If we were using socket.io, we would have to write our MyUser class twice, once in PHP and once in Javascript. Who wants to bet that the classes will implement the same logic in the same ways in both languages? What if two developers are working on the different implementations of the classes? What if a bugfix gets applied to the PHP code, but nobody remembers the Javascript?

PHP Web Service optimisations and testing methods

I'm working on a web service in PHP which accesses an MSSQL database and have a few questions about handling large amounts of requests.
I don't actually know what constitutes 'high traffic' and I don't know if my service will ever experience it, but would optimisations in this area come down largely to server processing speed and database access speed?
Currently when a request is sent to the server I do the following:
Open database connection
Process Request
Return data
Is there any way I can 'cache' this database connection across multiple requests, so that the connection stays valid from one request to the next?
Can I store the user's session id and limit the number of requests per hour from a particular session?
How can I create 'dummy' clients to send requests to the web server? I guess I could just spam requests in a for loop or something - are there better methods?
Thanks for any advice
You never know when high traffic will occur. High traffic might result from your search engine ranking, a blog writing a post about your web service, or any other unforeseen random event. You had better prepare yourself to scale up. By scaling up, I don't primarily mean adding more processing power, but firstly optimizing your code. Common performance problems are:
unoptimized SQL queries (do you really need all the data you actually fetch?)
too many SQL queries (try to never execute queries in a loop)
unoptimized databases (check your indexing)
transaction safety (are your transactions fast? keep in mind that all incoming requests need to be synchronized when calling database transactions. If you have many requests, this can easily lead to a slow service.)
unnecessary database calls (if your access is read only, try to cache the information)
unnecessary data in your frontend (does the user really need all the data you provide? does your service provide more data than your frontend uses?)
Of course you can cache. You should indeed cache read-only data that does not change upon every request. There is a useful blog post on PHP caching techniques. You might also want to consider the caching package of the framework of your choice or use a standalone PHP caching library.
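As a minimal illustration, here is a tiny read-through cache helper using the APCu extension; the key, TTL, and query are arbitrary, and any other cache backend could be swapped in:

<?php
// Read-through cache helper (requires the apcu extension).
function cached($key, $ttlSeconds, callable $producer)
{
    $value = apcu_fetch($key, $hit);
    if ($hit) {
        return $value;                      // served from memory, no database round trip
    }
    $value = $producer();                   // the expensive part: SQL query, API call, ...
    apcu_store($key, $value, $ttlSeconds);
    return $value;
}

// Usage: cache a read-only product list for five minutes.
// $products = cached('product_list', 300, function () use ($db) {
//     return $db->query('SELECT id, name FROM products')->fetchAll();
// });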
You can limit the service usage, but I would not recommend doing this by session id, IP address, etc. It is very easy to renew these and then your protection fails. If you have authenticated users, then you can limit the requests on a per-account basis like Google does (using an API key per user for all their publicly available services).
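As a rough sketch of per-account throttling, here is a counter keyed by API key and hour, stored in the database. The table and column names are made up, and the read-then-write is not race-safe, so treat it as an illustration only:

<?php
// Allow at most $limit requests per API key per hour.
// Assumes a table like: api_usage(api_key, hour_bucket, requests) with a unique key on (api_key, hour_bucket).
function allowRequest(PDO $db, $apiKey, $limit = 1000)
{
    $bucket = gmdate('Y-m-d H');   // one counter row per key per hour, e.g. "2016-05-03 14"

    $stmt = $db->prepare('SELECT requests FROM api_usage WHERE api_key = ? AND hour_bucket = ?');
    $stmt->execute(array($apiKey, $bucket));
    $count = $stmt->fetchColumn();

    if ($count === false) {
        $db->prepare('INSERT INTO api_usage (api_key, hour_bucket, requests) VALUES (?, ?, 1)')
           ->execute(array($apiKey, $bucket));
        return true;
    }

    $db->prepare('UPDATE api_usage SET requests = requests + 1 WHERE api_key = ? AND hour_bucket = ?')
       ->execute(array($apiKey, $bucket));

    return ($count + 1) <= $limit;
}

// if (!allowRequest($db, $apiKey)) { http_response_code(429); exit; }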
To do HTTP load and performance testing you might want to consider a tool like Siege, which exactly does what you expect.
I hope to have answered all your questions.

Caching data retrieved by jQuery

I am using jQuery's $.ajax method to retrieve some JSON from an API.
Each time the page is loaded, a call to the API is made, regardless of whether the user has received this data before - which means that when a large number of users are on the page, the API rate limiting would come into effect.
My thought on how to deal with this would be to first push the data to a database (via a PHP script), and then check the database to see if anything is cached before going back to the API to get more up-to-date information if required.
Is this a viable method? What are the alternatives?
It just seems like jQuery is actually a hurdle here, rather than doing it all in PHP to begin with, but as I'm learning the language, I would like to use it as much as I can!
In order to help distinguish between opinions and recommended techniques, let's first break down your problem to make sure everyone understands your scenario.
Let's say we have two servers: 'Server A' and 'Server B'. Call 'Server A' our PHP web server and 'Server B' our API server. I'm assuming you don't have control over the API server, which is why you are calling it separately and can't scale it in parallel to your demand. Let's say it's some third-party application like Flickr or Harvest or something... and let's say this third-party API server throttles requests per hour by your developer API key, effectively limiting you to 150 requests per hour.
When one of your pages loads in the end-users browser, the source is coming from 'Server A' (our php server) and in the body of that page is some nice jQuery that performs an .ajax() call to 'Server B' our API server.
Now your developer API key only allows 150 requests per hour, while hypothetically you might see 1000 requests within one hour to 'Server A', your PHP server. So how do we handle this discrepancy in load, given the assumption that we can't simply scale up the API server (the best choice if possible)?
Here are a few things you can do in this situation:
Just continue as normal, and when jQuery.ajax() returns a 503 Service Unavailable error due to throttling (what most third-party APIs do), tell your end user politely that you are experiencing higher than normal traffic and to try again later. This is not a bad idea to implement even if you also add in some caching.
Assuming the data being retrieved by the API is cacheable, you could run a proxy on your PHP server. This is particularly well suited when the same ajax request would return the same response repeatedly over time (for example, you are fetching some description for an object, and the same object request should return the same description response for some period of time). This could be a PHP pass-through proxy or a lower-level proxy like the SQUID caching proxy. In the case of a PHP pass-through proxy you would use the "save to DB or filesystem" strategy for caching, and could even rewrite the expires headers to suit your desired cache lifetime. (A sketch of such a pass-through proxy follows this list.)
If you have established that the response data is cacheable, you can allow the client side to also cache the ajax response using cache:true. jQuery.ajax() actually defaults to cache:true, so you simply need to not set cache to false for it to cache responses.
If your response data is small, you might consider caching it client-side in a cookie. But experience says that most users who clear their temporary internet files will also clear their cookies. So maybe the built-in caching with jQuery.ajax() is just as good?
HTML5 local storage for client-side caching is cool, but lacks widespread support. If you control your user base (such as in a corporate environment) you may be able to mandate the use of an HTML5-compliant browser. Otherwise, you will likely need a cookie-based fallback or polyfill for browsers lacking HTML5 local storage, in which case you might just reconsider the other options above.
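Here is a minimal sketch of the PHP pass-through proxy mentioned in the second option. The endpoint name, cache directory, and cache lifetime are all made up; it assumes allow_url_fopen is enabled and that cache/ is writable, and a cURL-based version would work the same way:

<?php
// api-proxy.php - the page's jQuery calls this instead of the third-party API directly,
// so repeated requests for the same resource don't count against the hourly API limit.
$resource  = isset($_GET['resource']) ? $_GET['resource'] : '';
$resource  = preg_replace('/[^a-zA-Z0-9_\/-]/', '', $resource);    // crude whitelist
$cacheFile = __DIR__ . '/cache/' . md5($resource) . '.json';
$ttl       = 300;                                                   // seconds (arbitrary)

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
    $body = file_get_contents($cacheFile);          // cache hit: no API call at all
} else {
    $body = @file_get_contents('https://api.example.com/' . $resource);  // hypothetical API
    if ($body !== false) {
        file_put_contents($cacheFile, $body);       // refresh the cached copy
    } elseif (is_file($cacheFile)) {
        $body = file_get_contents($cacheFile);      // API unavailable: serve the stale copy
    } else {
        http_response_code(503);                    // nothing cached and API unavailable
        exit;
    }
}

header('Content-Type: application/json');
header('Cache-Control: max-age=' . $ttl);           // let the browser cache it as well
echo $body;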
To summarize, you should be able to present the user with a friendly service unavailable message no matter what caching techniques you employ. In addition to this you may employ either or both server-side and client-side caching of your API response to reduce the impact. Server-side caching saves repeated requests to the same resource while client side caching saves repeated requests to the same resources by the same user and browser. Given the scenario described, I'd avoid Html5 LocalStorage because you'll need to support fallback/polyfill options which make the built in request caching just as effective in most scenarios.
As you can see jQuery won't really change this situation for you much either way vs calling the API server from PHP server-side. The same caching techniques could be applied if you performed the API calls in PHP on the server side vs performing the API calls via jQuery.ajax() on the client side. The same friendly service unavailable message should be implemented one way or another for when you are over capacity.
If I've misunderstood your question please feel free to leave a comment and clarify and/or edit your original question.
No, don't do it in PHP. Use HTML5 LocalStorage to cache the first request, then do your checking. If you must support older browsers, use a fallback (try these plugins).

Instant Challenge/Notifications system

My setup: Currently running a dedicated server with an Apache, PHP, MYSQL.
My DB is all set up and stores everything correctly. I'm just trying to figure out how to best display things live in an efficient way.
This would be a live challenging system for a web based game.
User A sends a challenge to User B
User B is alerted immediately and must take action on whether to Accept or Decline
Once User B accepts, he and User A are both taken to a specific page that is served up by the DB (nothing special happens on this page, and they don't need to be in sync or anything)
The response from User B is a simple yes or no; no other parameters are set by User B, and the page they are going to has already been defined when User A sends the challenge.
Whichever config I implement for this challenge system, I am assuming it will also work for instant sitewide notifications. The only difference is that notifications do not require an instant response from User B.
I have read up on long-polling techniques, comet, etc., but I'm still looking for opinions on the best way to achieve this and make it scalable.
I am open to trying anything as long as it will work with (or in tandem with) my current PHP and MySQL setup. Thanks!
You're asking about Notifications from a Server to a Client. This can be implemented either by having the Client poll frequently for changes, or having the Server hold open access to the Client, and pushing changes. Both have their advantages and disadvantages.
EDIT: More Information
Pull Method Advantages:
Easy to implement
Server can be pretty naïve about who's getting data
Pull Method Disadvantages:
Resource intensive on the client side, regardless of polling frequency
Time vs. Resource debacle: More frequent polls mean more resource utilization. Less resource utilization means less immediate data.
Push Method Advantages:
Server has more control overall
Data is immediately sent to the client
Push Method Disadvantages:
Potentially very resource intensive on the server side
You need to implement some way for the server to know how to reach each individual client (for example, Apple uses Device UUIDs for their APNS)
What Wikipedia has to say (some really good stuff, actually): Pull, Push. If you are leaning toward a Push model, you might want to consider setting up your app as a Pushlet
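Since the question mentions long polling, here is a rough PHP sketch of a long-poll endpoint as a middle ground between the pull and push models above. The table, column, and session key names are made up, and each waiting request ties up a PHP process, which is the resource cost to weigh against plain frequent polling:

<?php
// poll.php - the client requests this and the request "hangs" until there is news
// or the deadline passes, instead of hammering the server every few seconds.
// Assumes a table like: challenges(id, from_user, to_user, status, created_at).
session_start();
$userId = (int) $_SESSION['user_id'];   // assumed to be set at login
session_write_close();                  // don't hold the session lock while waiting

$db = new PDO('mysql:host=localhost;dbname=game', 'dbuser', 'dbpass');  // placeholder credentials
$deadline = time() + 25;                // stay under typical 30s gateway timeouts

do {
    $stmt = $db->prepare(
        "SELECT id, from_user FROM challenges WHERE to_user = ? AND status = 'pending'"
    );
    $stmt->execute(array($userId));
    $challenges = $stmt->fetchAll(PDO::FETCH_ASSOC);

    if ($challenges) {
        header('Content-Type: application/json');
        echo json_encode(array('challenges' => $challenges));
        exit;                           // the client shows the alert, then polls again
    }
    sleep(2);                           // re-check every couple of seconds while waiting
} while (time() < $deadline);

header('Content-Type: application/json');
echo json_encode(array('challenges' => array()));    // nothing new: the client re-polls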
