My setup: currently running a dedicated server with Apache, PHP, and MySQL.
My DB is all set up and stores everything correctly. I'm just trying to figure out how best to display things live in an efficient way.
This would be a live challenge system for a web-based game.
User A sends a challenge to User B
User B is alerted immediately and must take action on whether to
Accept or Decline
Once User B accepts, he and User A are both taken to a specific page
that is served up by the DB (nothing special happens on this
page, and they don't need to be in sync or anything)
The response from User B is a simple yes or no; no other parameters are set by User B. The page they are going to has already been defined when User A sends the challenge.
Whichever config I implement for this challenge system, I am assuming it will also work for instant sitewide notifications. The only difference is that notifications do not require an instant response from User B.
I have read up on long-polling techniques, Comet, etc., but I'm still looking for opinions on the best way to achieve this and make it scalable.
I am open to trying anything as long as it will work with (or in tandem with) my current PHP and MySQL setup. Thanks!
You're asking about notifications from a server to a client. This can be implemented either by having the client poll the server frequently for changes, or by having the server hold a connection open to the client and push changes down it. Both have their advantages and disadvantages.
EDIT: More Information
Pull Method Advantages:
Easy to implement
Server can be pretty naïve about who's getting data
Pull Method Disadvantages:
Resource intensive on the client side, regardless of polling frequency
Time vs. resource trade-off: more frequent polls mean more resource utilization; less resource utilization means less immediate data.
Push Method Advantages:
Server has more control overall
Data is immediately sent to the client
Push Method Disadvantages:
Potentially very resource intensive on the server side
You need to implement some way for the server to know how to reach each individual client (for example, Apple uses device tokens for APNs)
What Wikipedia has to say (some really good stuff, actually): Pull, Push. If you are leaning toward a push model, you might want to consider setting up your app as a Pushlet.
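For the pull model on your PHP/MySQL stack, the browser would simply poll a small endpoint every few seconds via AJAX. Here's a minimal sketch; the challenges table and its columns (id, from_user, to_user, status, page_url) are hypothetical, not taken from your schema:

<?php
// poll_challenges.php - pull-model endpoint; the browser requests this
// every few seconds via AJAX. Table and column names are illustrative.
session_start();
$userId = (int) $_SESSION['user_id'];

$db = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass');

// Fetch any challenge still waiting for this user's answer.
$stmt = $db->prepare(
    'SELECT id, from_user, page_url FROM challenges
     WHERE to_user = ? AND status = "pending"'
);
$stmt->execute(array($userId));

header('Content-Type: application/json');
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));

The same endpoint works for sitewide notifications: it's just another table queried in the same request, and the client decides whether a response is required.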
Related
What approach and mechanisms (and probably code) should one apply to fully implement Model-to-Views data updates (transfer) on a Model-State-Change event with pure PHP?
If I'm not mistaken, the MVC pattern states an implicit requirement for data to be sent from the Model layer to all active Views, specifying that "the View is updated on Model change". (Otherwise it doesn't make any sense, as users working with the same source would see its data non-live and absolutely disconnected from reality.)
But PHP is a scripting language, so it's limited to "connection threads" via processes, and its lifetime is limited to the request-response cycle (as tereško kindly noted).
Thus, one has to solve a couple of issues:
Client must have a live tunnel connection to the server (Server-Sent Events),
Server must be able to push data to the client (flush(), ob_flush()); see the sketch after this list,
Model-State-Change event must be raised & related data packed for transfer,
(?) Data must be sent to all active clients (connected to the same exact resource/URL) together, not just the one currently working with its own process & instance of ModelClass.php file...
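A minimal sketch of points 1 and 2, a Server-Sent Events endpoint in plain PHP using flush()/ob_flush() as mentioned above; the payload and the state check are placeholders:

<?php
// sse.php - Server-Sent Events endpoint; the browser subscribes with:
//   var source = new EventSource('sse.php');
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
set_time_limit(0);

while (true) {
    // Placeholder for a real Model-State-Change check (e.g. a DB query
    // or a flag in shared storage).
    $payload = json_encode(array('time' => date('H:i:s')));

    echo "data: {$payload}\n\n";
    if (ob_get_level() > 0) {
        ob_flush();        // flush PHP's output buffer...
    }
    flush();               // ...and push it through to the client

    sleep(2);              // re-check model state every 2 seconds
}

Note that this still ties up one PHP process per connected client, which is exactly the scaling issue raised below.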
UPDATE 1: So, it seems that "simultaneous" interaction with multiple users in PHP involves implementing a web server over sockets of some sort, independent of nginx and others... making its core non-blocking I/O, storing connections & "simply" looping over connections, serving data...
Thus, if I'm not mistaken, the easiest way is still to go and get some ready solution like Ratchet, be it a 'concurrency framework' or a web server on sockets...
Too much overhead for a couple of messages a day, though...
AJAX short polling seems to be quite a solution for this dilemma....
Is simultaneously updating multiple clients easier with some backend other than PHP, I wonder? Look at C#: it's event-based, not limited to "connection threads" and the query-reply life cycle, if I remember correctly... But it's still the web (over the same HTTP?)...
I am reorganizing an existing PHP application to separate data access (private API calls) from the application itself.
The purpose of doing this is to allow for another application on the intranet to access the same data without duplicating the code to run the queries and such. I am also planning to make it easier for developers to write code for the current web application, while only a few team members would be adding features to the API.
Currently the application has a structure like this (this is only one of many pages):
GET /notes.php - gets the page for the user to view notes (main UI page)
GET /notes.php?page=view&id=6 - get the contents of note 6
POST /notes.php?page=create - create a note
POST /notes.php?page=delete - delete a note
POST /notes.php?page=append - append to a note
The reorganized application will have a structure like this:
GET /notes.php
Internal GET /api/notes/6
Internal POST /api/notes
Internal DELETE /api/notes/6
Internal PUT /api/notes (or perhaps PATCH, depending on whether a full representation will be sent)
In the web application I was thinking of making HTTP requests to URLs under https://localhost/api/, but that seems really expensive. Here is some code to elaborate on what I mean:
// GET notes.php
switch ($_GET['page']) {
    case 'view':
        $data = \Requests::get(
            "https://localhost/api/notes/{$_GET['id']}",
            array(),
            array('auth' => ... )
        );
        // do things with $data if necessary and send back to browser
        break;
    case 'create':
        $response = \Requests::post( ... );
        if ($response->status_code === 201) {
            // things
        }
        break;
    // etc...
}
I read this discussion and one of the members posted:
Too much overhead, do not use the network for internal communications. Instead use much more readily available means of communication between different processes or what have you. This depends on the system it's running on, of course... Now you can mimic REST if you like, but do not use HTTP or the network for internal stuff. That's like throwing a whale into a mini toilet.
Can someone explain how I can achieve this? Both the web application and API are on the same server (at least for now).
Or is the HTTP overhead aspect just something of negligible concern?
Making HTTP requests directly from the JavaScript/browser to the API is not an option at the moment due to security restrictions.
I've also looked at the two answers in this question but it would be nice for someone to elaborate on that.
The HTTP overhead will be significant, as you have to go through the full request/response cycle. This includes HTTP server overhead, a separate PHP process execution, the OS networking layer, etc. Whether it is negligible or not really depends on the type of your application, traffic, infrastructure, response-time requirements, etc.
In order to suggest a better solution, you would need to present your reasoning for considering this approach in the first place. Factors to consider also include the current application architecture, requirements, frameworks used, etc.
If security is your primary concern, this is not necessarily a good way to go in the first place, as you will now need to store some session-related data in yet another layer.
Also, despite the additional overhead, the final application could potentially perform faster given the right caching mechanisms. It really depends on your final solution.
I am building a similar application framework and had the same problem, so I settled on the following design:
For processes that are located remotely (on a different machine), I use cURL or similar calls to the remote resource. For example, if users are stored on a different server and I need a user's status, I call API->Execute('https://remote.com/user/currentStatus/getid/6') and it returns the status.
For local calls, say when Events require Alerts (these are 2 separate packages with their own data models, but on the same machine), I make a local API-like call, something like this:
API->Execute(array('Alerts', $param1, $param2));
API->Execute then knows that it's a local object. It will get the object's local physical path, initialize it, pass the data, and return the results into context. No remote execution, no protocol overhead.
For example, if you want to keep an encryption service with keys and whatnot away from the rest of the applications, you can send data securely and get back the encrypted value; that service is then always called over a remote API (https://encryptionservice.com/encrypt/this/value).
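A rough sketch of what such a dispatcher could look like, written here as a static method; the packages/ directory layout and the run() method are made up for illustration:

<?php
// A hypothetical API facade that routes calls either to a local class
// or to a remote HTTP endpoint, mirroring the design described above.
class API
{
    public static function Execute($target)
    {
        if (is_array($target)) {
            // Local call: array('Alerts', $param1, ...) - load the
            // package's class directly, no HTTP or network involved.
            $package = array_shift($target);
            require_once __DIR__ . "/packages/{$package}.php";
            $instance = new $package();
            return call_user_func_array(array($instance, 'run'), $target);
        }

        // Remote call: a full URL - go over the network with cURL.
        $ch = curl_init($target);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $result = curl_exec($ch);
        curl_close($ch);
        return $result;
    }
}

// Local: no protocol overhead.
$alert = API::Execute(array('Alerts', 'newEvent', 42));
// Remote: plain HTTP(S).
$status = API::Execute('https://remote.com/user/currentStatus/getid/6');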
I currently have a server built using Netty/MySQL which I'm in the process of optimizing. It's incredibly simple, essentially does the following:
Accepts persistent connections
Makes database queries on behalf of the client (depending on client message and authorization)
Updates state of client (right now with local variables per client/channel- i.e. "numberOfQueriesMadeForThisClientSession")
Forces disconnects based on database, authentication and knowledge of other clients (i.e. if clientA is connected and clientB sends a special command, if verified by server, clientA is disconnected)
Reacts to disconnects (updates database, etc.)
Checks AES-encrypted content
However, I'm a little worried about the kinds of things that can happen at scale... for example, handling timeout disconnects gracefully rather than the user actually quitting or being force-disconnected, race conditions, etc.
Odds are PubNub has thought about this stuff with more tests than I have... so I'm wondering: what would be the basic structure for migrating my Netty/MySQL server to use PubNub? At a glance, it seems to me like PubNub is a pure message relay without any database or business-logic processing...?
My language of choice is PHP, but at this point I'm most interested in the basic architecture.
OK, now that I've given it some thought, I think it is totally doable for my specific use case, but PubNub is not really designed to use (or require) additional persistent servers.
By using unique channel names and withholding subscribe/publish keys, different approaches can be taken; it just isn't the same as traditional server models.
It can also be accomplished by having a "super client" subscribe/process/publish, but in my opinion that kind of brings the thing back to square one.
My advice to myself and others: think of PubNub (and similar services) in a completely different way and try to avoid using a persistent "super client"/server at all if possible :)
I'm trying to figure out a way for users of a website (say a student and teacher) to share a secure connection where real time updates on one page are viewed by both of them.
From research I've concluded that some of the real-time updates could be performed using AJAX and JavaScript.
But I'm stumped as to how users could share a connection where only the two of them would see the updates that take place on the website (such as Flash animations of a drawing board). I'm also confused about how you would even begin to set up a connection like this.
I've looked into PHP sessions and cookies, but I'm not sure I'm doing the right research.
Any pointers as to how two specific users could share a secure connection where real-time updates are viewed by both of them only? I don't want a terse response, please; I'm looking for specific details like functions and syntax specific to PHP. I appreciate the help and will rate you up if you give me good answers!
You cannot share a secure connection (e.g. HTTPS); it's one client to one server.
If both clients are logged in and have a background AJAX task running in the browser, is it acceptable to have each client "pull" the same data every few seconds so it can be displayed to both users?
This would require the "drawing board" updates to also be sent continuously back to the server, to share the updated data with the other client. I'm sure there will be an event you can use to trigger the posting of data (e.g. on mouse up).
If performance is an issue, you'd want to use a server technology like Java that can keep session state between requests without having to persist it to a database.
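As a concrete sketch of that pull approach in PHP: both clients poll (and post to) one endpoint keyed by a shared board ID, so only those two users exchange updates. The board_updates table and parameters are invented, and the authentication check that restricts the board to its two users is omitted:

<?php
// board.php - both users in a pair share a board_id. POST stores a
// board update; GET returns anything newer than the client's last poll.
// A real version must verify the logged-in user belongs to this board.
$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$boardId = (int) $_GET['board_id'];

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // e.g. triggered on mouse-up, as suggested above
    $stmt = $db->prepare(
        'INSERT INTO board_updates (board_id, payload, created_at)
         VALUES (?, ?, NOW())'
    );
    $stmt->execute(array($boardId, file_get_contents('php://input')));
    exit;
}

// GET: return updates newer than the timestamp the client last saw.
$since = isset($_GET['since']) ? $_GET['since'] : '1970-01-01';
$stmt = $db->prepare(
    'SELECT payload, created_at FROM board_updates
     WHERE board_id = ? AND created_at > ?'
);
$stmt->execute(array($boardId, $since));

header('Content-Type: application/json');
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));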
You can look at AJAX push techniques. I used Comet once, where an administrator posted messages and everybody who was logged in saw that message appear on their screen. I don't know if the Comet implementation I used supports PHP; I only used it with JSP. Just search for "ajax push" in Google.
Flash allows for connections between users; I think they refer to them as sockets.
If you want to use AJAX et al., you need a server-side technology that supports push.
Node is the standard in this regard, and you can set up a Heroku instance for free.
There are others, and you need to learn the tools before even beginning to learn the application.
Among the many overviews, this might interest you:
http://arstechnica.com/business/2012/05/say-hello-to-the-real-real-time-web/?1
A few good examples where this is happening:
Google Docs
Etherpad
HTML5 Games: Multi player
Techniques you can use (with varying browser support)
HTML5 WebSockets (Wikipedia; MDN; HTML5 Demo)
Comet (Wikipedia)
Really pushing data from the server to a web browser client (which the server would do when it receives something from another client) is only possible with WebSockets, as far as I know. Other mechanisms would require either browser plugins or a stand-alone application.
However, with Comet (through AJAX) you can get really close to pushing data by polling the server periodically. Contrary to traditional polling (e.g. where a client asks for data every 5 seconds), with the Comet principle the server holds that periodic request hostage for, say, up to 30 seconds. The server will not send back data until either it has data or the timeout is reached. That way, during those 30 seconds, any data that the server receives can be instantly pushed back to the other clients. And right after that, the client starts a new 30-second request, and so forth.
Although both Comet and WebSockets should work with a PHP backend served by Apache, I'd recommend looking into Node.js (as the server technology) for this.
There is a lot of information regarding Comet on the internet, I suggest you google it, maybe start at Wikipedia.
The great thing about Comet is that it is more a principle than a technology. It uses what we already have (simple HTTP requests with AJAX), so browser support is very wide.
You could also do a combination, where you use WebSockets if supported and fall back to Comet.
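To make the hold-the-request-hostage principle concrete, here's a bare-bones long-poll endpoint in PHP; the messages table and last_id parameter are made-up placeholders:

<?php
// longpoll.php - holds the request open for up to 30 seconds, answering
// early only if new data arrives in the meantime.
$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$lastSeenId = (int) $_GET['last_id'];
$timeout = 30;                 // seconds to hold the request open
$start = time();

while (time() - $start < $timeout) {
    // Anything newer than the last message this client has seen?
    $stmt = $db->prepare('SELECT id, body FROM messages WHERE id > ?');
    $stmt->execute(array($lastSeenId));
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    if ($rows) {
        header('Content-Type: application/json');
        echo json_encode($rows);
        exit;                  // answer immediately; the client re-polls
    }
    usleep(500000);            // wait half a second before checking again
}

// Timeout reached with no data; the client starts a new 30-second request.
header('Content-Type: application/json');
echo json_encode(array());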
I'm sure you have looked into this. The opinion that this can be done via AJAX can mislead you into believing that two users of a website can communicate directly via JavaScript.
As you are aware, JavaScript happens on the client, and AJAX is essentially "talking to the server without a page change or refresh".
The communication between two users of the website has to happen via the server: PHP and some chosen datastore.
Hope that wasn't terse.
cheers, Rob
I'm working on a chat application which I would love to use an SQL DB for.
My problem is, after a few Google searches, I have people on one site telling me that using a DB would be much slower than using a normal file (e.g. a text or JSON file), but then on some other sites people are saying the complete opposite. And I don't know about you guys, but when it comes to creating web apps for users, the users always come first.
So as much as I'd love to use an SQL DB, since 1) I have good experience with it and 2) it allows me to make the application much cooler (more features), if it would slow things down on the users' end (a noticeable lag), then it's a no-no.
Either way, I will be "polling" the server continuously with AJAX and PHP to check the file/DB (for new messages, contact requests, etc.).
Also, in case you're wondering, the application won't be a 1-to-1 chat; it will have "rooms" where multiple users can join and talk with everyone who has joined. Users will also be able to request a "private chat" with another user, where a 1-to-1 connection opens up.
So, a MySQL database OR a boring TEXT/JSON/OTHER file, with regard to performance?
Oh, one more thing: I don't want to use any third-party libraries or APIs. I hate relying on other people's work (been let down too many times).
If you're looking to implement an IRC clone, I think you've chosen all the wrong tools.
The best way to do this would be to write a custom HTTP server that handles everything in memory. No databases, no constant polling of files. When a message arrives, you simply loop through the correct in-memory list and dispatch the message to the other users. For the browser-to-server connection, I suggest "Comet" (with WebSockets for browsers that support them, if you're feeling up to it).
PHP likely isn't the language of choice for this, because pretty much all work done with PHP is based on traditional short, isolated requests. For a long-running process which serves multiple clients in real time, I'd suggest something like Python or Node.js.
You don't really want to be storing chats in files; that can create a management nightmare. I would recommend you go with MySQL, and to make sure it performs well, go with sockets instead of AJAX polling; sockets will scale really well.
However, there isn't much around about how you can integrate socket-based chats with MySQL.
I have done a few tests and have a basic example working here: https://github.com/andrefigueira/PHP-MySQL-Sockets-Chat
It makes use of Ratchet (http://socketo.me/) for the creation of the chat server in PHP.
And you can send chat messages to the DB by sending the server JSON with the information about who is chatting (if, of course, you have user sessions).
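For reference, here is a trimmed-down sketch of the kind of Ratchet chat handler that example uses, with MySQL persistence; the messages table, its columns, and the JSON shape are placeholders:

<?php
// chat-server.php - run from the CLI: php chat-server.php
// Requires Ratchet via Composer (composer require cboden/ratchet).
require __DIR__ . '/vendor/autoload.php';

use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;

class Chat implements MessageComponentInterface
{
    protected $clients;
    protected $db;

    public function __construct(PDO $db)
    {
        $this->clients = new \SplObjectStorage();
        $this->db = $db;
    }

    public function onOpen(ConnectionInterface $conn)
    {
        $this->clients->attach($conn);
    }

    public function onMessage(ConnectionInterface $from, $msg)
    {
        // Expecting JSON like {"user": "...", "room": "...", "body": "..."}
        $data = json_decode($msg, true);

        // Persist the message, then fan it out to everyone else.
        $stmt = $this->db->prepare(
            'INSERT INTO messages (user, room, body) VALUES (?, ?, ?)'
        );
        $stmt->execute(array($data['user'], $data['room'], $data['body']));

        foreach ($this->clients as $client) {
            if ($client !== $from) {
                $client->send($msg);
            }
        }
    }

    public function onClose(ConnectionInterface $conn)
    {
        $this->clients->detach($conn);
    }

    public function onError(ConnectionInterface $conn, \Exception $e)
    {
        $conn->close();
    }
}

$db = new PDO('mysql:host=localhost;dbname=chat', 'user', 'pass');
$server = IoServer::factory(new HttpServer(new WsServer(new Chat($db))), 8080);
$server->run();

A real version would also filter the fan-out by room rather than broadcasting to every connection.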