I'm attempting to build a live-updating system in PHP/MySQL (and jQuery).
The question I have is whether the approach I'm using is good/correct, or whether I should look into another way of doing it:
Using jQuery and AJAX I have made:
setInterval(function() {
    $.get('check_status.php');
}, 4000);
In check_status.php I use memcache to check whether the result is 1 or 0:
$memcache = new Memcache;
$memcache->connect('127.0.0.1', 11211) or die("");
$key = md5($userid . "live_feed");
$result = $memcache->get($key);
if ($result == 1) {
    doSomething();
}
The idea is that user A does something, which updates the memcache entry for user B.
The memcache value is then checked every 4 seconds via jQuery, and that way I can show a live feed to user B.
The reason I use memcache is to limit the MySQL queries and thereby the load on the database.
So the question is: am I totally off here? Is there a better way of doing this to reduce the server load?
Thank you
The server load on this is going to be rough because every user will be hitting your web server every 4 seconds. If I were you I would look into two other options.
Option 1, and the better of the two in my opinion, is WebSockets. WebSockets allow for persistent communication between the server and the client. The WebSocket server can be written in PHP or something else. You can then have all clients connect to the same WebSocket server and send data to all or individual connected clients. On the client side this is done with JavaScript, with a Flash fallback for older browsers that don't support WebSockets.
Option 2 is a technique called long polling. Right now your clients hit the web server every 4 seconds no matter what. With long polling, the client sends an AJAX request and the server doesn't send back a response until your memcache status changes. So basically you put the code you have now in a while loop, with a pause to prevent it from using 100% of your server resources, and only run doSomething() when something has changed. On the client side, as soon as a response arrives you initiate a new AJAX call that waits for the next one. So while your current setup hits the server 15 times a minute regardless of activity, with this method the client only hits the server when there is actual activity, which normally saves you a ton of useless round trips.
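The wait-and-respond loop at the heart of long polling can be sketched in plain JavaScript (a stand-in for the PHP while loop; readStatus here plays the role of the memcache check, and a real server loop would sleep between iterations rather than spin):

```javascript
// Sketch of the long-polling decision loop described above.
// `readStatus` stands in for a memcache lookup; names are illustrative.
function longPoll(readStatus, maxTries) {
  // Loop until the flag flips to 1 or we give up (the real loop
  // would sleep ~1s between checks to avoid burning CPU).
  for (let tries = 0; tries < maxTries; tries++) {
    if (readStatus() === 1) {
      return { changed: true, tries: tries + 1 };
    }
  }
  // Timed out: tell the client to reconnect and wait again.
  return { changed: false, tries: maxTries };
}

// Simulated memcache: status flips to 1 on the third check.
let calls = 0;
const fakeStatus = () => (++calls >= 3 ? 1 : 0);
console.log(longPoll(fakeStatus, 10)); // { changed: true, tries: 3 }
```

On a timeout the server responds with "no change" and the client simply opens a new request, so the connection is re-established well before any HTTP timeout fires.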
Look up both of these options and see which one would work better in your situation. Long polling is easier to implement but not nearly as efficient.
I'd rather use a version id for the content. For example, say user A's content is retrieved and the version id is set to 1000. This is then saved in memcached with the key userA_1000.
If user B's action, or any other action, affects user A's content, the version id is bumped to 1001.
The next time you check memcached, you will look for the key userA_1001, which does not exist, telling you that the content is outdated. Hence you re-create the content, save it in memcached and send it back with the Ajax response.
The user's version key can also be saved in memcached so that you don't have to do a DB query each time.
So
User A's key/val --- user_a_key => userA_1001
User A's content --- userA_1001 => [Content]
When a change has happened that affects user A's content, simply bump the version and update the key of the content in memcached
User A's key/val --- user_a_key => userA_1002
So the next time your Ajax request looks for the user's content in memcached, it will notice that there is nothing saved for the key userA_1002, which prompts it to re-create the content. If the key is found, simply respond saying there is no need to update anything.
Use well-designed class methods to handle when and how the content is updated and how the keys are invalidated.
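As a rough illustration of the versioning scheme above, here is the key juggling in JavaScript, with a Map standing in for memcached (all key names are just the examples from above):

```javascript
// Sketch of the version-key invalidation scheme, with a Map
// standing in for memcached; key names are illustrative.
const cache = new Map();

function writeContent(user, version, content) {
  cache.set(`${user}_key`, `${user}_${version}`); // user_a_key => userA_1001
  cache.set(`${user}_${version}`, content);       // userA_1001 => [Content]
}

function bumpVersion(user, newVersion) {
  // Another user's action invalidates the content by pointing the
  // version key at a version that has no cached entry yet.
  cache.set(`${user}_key`, `${user}_${newVersion}`);
}

function fetchContent(user) {
  const contentKey = cache.get(`${user}_key`);
  // A miss here means the version moved on: rebuild from the DB,
  // writeContent() the result, and return it with the Ajax response.
  return cache.has(contentKey) ? cache.get(contentKey) : null;
}

writeContent('userA', 1001, '[Content]');
bumpVersion('userA', 1002);
console.log(fetchContent('userA')); // null => stale, re-create
```

The nice property is that invalidation is one cheap key write; the stale userA_1001 entry just ages out of memcached on its own.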
Related
I have made a simple chat application in PHP/Ajax/MySQL.
I am regularly making these two calls (passing function references rather than strings, which avoids an implicit eval):
setInterval(GetChatParticipants, 5000);
setInterval(GetChatMessages, 1000);
Now, as per the client's design, there is no Logout or Sign Out button, so I want to detect when the user closes the browser and delete the participant's record from the database at that point. I don't want to use onbeforeunload, as Chrome refuses to send Ajax requests during the window unload event.
So I am thinking of implementing it in Ajax by making repeated requests. But the question is how exactly I should do that. Shall I update the Participant table with a last_active_time, and if the difference between the current time and last_active_time is, say, more than 10 minutes, delete the user? Is that a practical solution, or could there be a better one?
You have the best solution IMO. In Javascript create a "ping" function that pings every minute / two minutes etc with a session ID unique to that browser/session. Have a sessions table on your server and update when you get a ping. Have another script that looks for entries that have not been pinged for long periods and close the sessions.
I've had this structure running for thousands of concurrent users on one site, and I had to drop the ping from every 1 minute to every 2 minutes when load got heavy (that was on 2 load-balanced servers while running the rest of the site too). Obviously you make your ping interval approximately 45% of the timeout time (so that if one ping fails, a second should still hit before the timeout). It's a simple process that can handle the load.
Edit: don't use setInterval for the ping; use setTimeout when each ping either returns or fails. Otherwise, with setInterval, if your server gets too loaded the pings queue up, never respond, and queue some more. You get a server meltdown as all server sockets are used up, and then you can't even connect by ssh to fix it... not a moment I was proud of.
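A minimal sketch of that setTimeout chain, with sendPing standing in for the real ajax call (the timer function is injectable only so the sketch is easy to exercise; in the browser you would just use setTimeout):

```javascript
// Self-rescheduling ping: the next ping is armed only after the
// previous one settles, so pings can never queue up the way
// setInterval pings do on a loaded server.
function startPing(sendPing, intervalMs, setTimer = setTimeout) {
  function tick() {
    // Re-arm in the completion callback, whether the ping
    // succeeded or failed.
    sendPing(function done() {
      setTimer(tick, intervalMs);
    });
  }
  tick();
}

// Example usage: ping immediately, then every 90s
// (≈45% of an assumed 200s server-side timeout):
// startPing(cb => $.post('ping.php', {sid: SESSION_ID}).always(cb), 90000);
```

Because each timer is created only after the previous response, a slow server automatically slows the ping rate down instead of being buried under a growing queue.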
Yes, it is a practical solution, and the only one you can rely on; so no matter what else you use in addition, this final check should be in your logic at all times. That said, combining several methods will give you more chances to detect a user leaving early: add onbeforeunload for browsers that support it, add a "log out" button for clients in insecure locations who want to log out right away when leaving the PC, and finally check for inactivity.
Yes, it is a practical solution. The only question is where to store the ping information. You can do it in the database or in a key-value store like memcached. I prefer the latter, as I think it uses fewer resources.
I'm writing a browser-based game. The client is in jQuery and the server is in PHP/MySQL.
It's a turn-based game, so most of the client-server communication is a call-response cycle (jQuery-PHP). These calls happen whenever the user clicks a button or any other active element. After the jQuery call, the PHP controller creates some classes, connects to the database and returns a response. That communication works quite well for me and doesn't cause problems with the number of connections etc. (it's similar to the standard traffic of using the website).
Unfortunately, I also need some 'calls' from the server side. An example is the list of active games to join: the client must be notified any time the game list changes, and the maximum delay for that is no more than 1 second.
For now I do this by sending a 'ping' call from the client (jQuery), and the server answers with "nothing" most of the time, or "game2 created" etc. But those 'pings' will be sent every second by each of the players, and for each of them the server creates classes and connects to MySQL, which results in a "Database connection error".
Is there any way to minimise MySQL connections and/or Ajax calls?
I use a standard web server and don't have a root account.
Start with this:
But that 'pings' will be send every second from each of the players
Instead of having every connected player call the server every second (which is actually 2 calls for two players, with the number going up for every player connected), you can optimise by tracking idle time, i.e. how long nothing has happened: if nothing has been returned for 2 consecutive calls, increase the call delay to 2 seconds, then to 4 seconds, and so on (just play with setInterval and adjust it continuously).
This will give your app some breathing room (I used this in my own game).
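One possible way to express that growing delay is a tiny helper that doubles the poll interval while responses are empty and snaps back to the base interval as soon as something happens (the base and cap values below are arbitrary):

```javascript
// Back-off helper for the "increase the delay while idle" idea:
// double the interval after each empty response, reset on activity.
function nextDelay(currentMs, gotData, baseMs = 1000, maxMs = 8000) {
  if (gotData) return baseMs;             // activity: poll fast again
  return Math.min(currentMs * 2, maxMs);  // idle: back off, but cap it
}

let delay = 1000;
delay = nextDelay(delay, false); // 2000
delay = nextDelay(delay, false); // 4000
delay = nextDelay(delay, true);  // back to 1000
```

Feed the returned value into the next setTimeout and the polling rate adapts on its own.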
The next thing to look at is the calling policy. Instead of calling the server on every player command, you can store the player's commands in a JS array and send that array off every X seconds; if there are no commands, there is no ajax call. Yes, you'll get a delay, but think of many users connected to a possibly underpowered server...
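The command-queue idea could look something like this (queueCommand and flush are illustrative names; send stands in for the real ajax call):

```javascript
// Buffer player commands locally and flush them in one ajax call
// every X seconds; send nothing when the queue is empty.
const pending = [];

function queueCommand(cmd) {
  pending.push(cmd);
}

function flush(send) {
  if (pending.length === 0) return false;  // no commands, no ajax call
  send(pending.splice(0));                 // send the batch, empty the queue
  return true;
}

// e.g. setInterval(() => flush(batch => $.post('cmd.php', {batch})), 3000);
```

With two players idling, that is zero requests instead of two per second.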
You can also use comet technology if you want to push things further..
Having said that, this may be a possible duplicate, as mentioned by eggyal.
About ping and response (client pings server scenario):
1) Have you considered writing temporary files (i.e. a Smarty compiled file with caching time X)?
Once player Y has taken their turn, remove the file (or write something into it).
Each player makes an AJAX request with a game_id (or anything unique) which checks for the existence of that compiled file. This will save you many MySQL calls; you only hit MySQL when the caching time of the file has expired.
2) Ugly: see whether MySQL persistent connections help (I am not sure they will).
Well, I don't know the most efficient way to do this. Could someone help me find a better algorithm?
Take Facebook as an example: when a user publishes a post, it shows up for his friends without any page refresh, and we know that's done with Ajax requests. But how can we know that someone has posted something new? Maybe by setting a timer that fires every 2 seconds, sending an Ajax request that checks a table to see whether some user has posted something? But is there a way to do it without setting a timer? Performing that operation every 2 seconds could cause severe server load, I think. I just want to know whether there is a better way than setting a timer.
Any help is greatly appreciated.
Currently what Facebook and Google employ, is a technique called long polling.
It's a simple system whereby the client makes an AJAX request to the server. The server takes the request and checks to see if it has the data the request needs. If not, the request is left open but deferred by the server. The second the server has the data, the request is handled and returned to the client.
If you open up Facebook, you'll see requests being posted to Facebook that take around 55 seconds to complete. The same goes for Gmail and a few other web applications that appear to have some kind of push system.
Here's a simple example of how these requests might be handled:
Client:
Initial AJAX request which has the timestamp 0
Server:
Compares the request's timestamp 0 with the timestamp of the data on the server. Let's say the data on the server has the timestamp 234.
The client stamp is different from the current stamp of the server data, so we return the new data.
Client:
The client gets the data and immediately posts a new AJAX request with timestamp 234.
Then it processes the new data and updates the web page appropriately.
Server:
Compares the request's timestamp 234 with the current stamp of the data on the server.
The stamp values are the same, so the request is put to sleep.
The server data is updated and the stamp value is now timestamp 235.
Sleeping requests are woken up and returned with the updated value.
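The server-side decision in the walkthrough above boils down to a single comparison, sketched here (names are illustrative):

```javascript
// Respond immediately when the stamps differ, otherwise park the
// request until the data changes.
function decide(clientStamp, serverStamp) {
  return clientStamp !== serverStamp
    ? { action: 'respond', stamp: serverStamp }  // client is stale: send data
    : { action: 'wait' };                        // up to date: hold the request
}

console.log(decide(0, 234));   // { action: 'respond', stamp: 234 }
console.log(decide(234, 234)); // { action: 'wait' }
```

When the stamp later moves to 235, every parked request is re-run through decide() and returned.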
It's worth reading a more in-depth explanation of the more modern mechanisms for live updates.
Just my two cents, but I have previously solved a similar problem with two webservices: one with a method along the lines of HaveNewData() and another with GetData() (in my case I had a number of webservices I wanted to call).
So call HaveNewData() on a regular basis (I think calling every 2 seconds is bad design and unnecessary in any case) and return a simple 0 or 1 from this method (minimal data). If HaveNewData() returns 1, make your expensive webservice call. Alternatively, you could just return a null value from the primary webservice when no new data is available; in my experience this scales at least "pretty" reasonably.
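A sketch of that guard pattern, with both webservices faked as plain functions (HaveNewData/GetData are the names from the answer; the wiring is illustrative):

```javascript
// Cheap HaveNewData() check guards the expensive GetData() call:
// only pay for the heavy call when the 1-byte answer says so.
function pollOnce(haveNewData, getData) {
  return haveNewData() === 1 ? getData() : null;
}

let expensiveCalls = 0;
const fakeGetData = () => { expensiveCalls++; return 'payload'; };
console.log(pollOnce(() => 0, fakeGetData)); // null, expensive call skipped
console.log(pollOnce(() => 1, fakeGetData)); // 'payload'
```

When nothing has changed, the server does only the trivial check, so the steady-state cost per poll stays near zero.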
Long polling is a great technique, but as with any solution its efficacy depends on many variables, including the hardware setup, so there are no absolute answers.
Be aware that with long polling you are keeping connections alive, which may cause performance issues with many clients.
Your solution should take into account :-
Does it need to be near-realtime (for example, a stock ticker)?
How often the ajax data/output changes (chat vs. a new comment).
How often the event that changes the ajax data/output fires; this will dictate how the cache is generated.
You should be frugal when it comes to ajax: request and response should happen on a need basis. An ajax implementation is incomplete without a well-thought-out caching solution that is event-based rather than request-based.
Below is a simplified version of one of the techniques we found useful in a project:-
Every Ajax poll request contains an output_hash, the digest of the data the server returned previously.
The server checks this output_hash against the hash of the output it would generate now, preferably from a data source stored in cache.
If they differ, it serves the new content along with a new output_hash; otherwise it sends a small "not modified" response to indicate there is no new content.
In our solution we also calculated the interval of the next poll dynamically. Keeping the interval dynamic lets the server throttle the requests. For example, assume most comments/answers happen in the first hour; beyond that there is no point keeping the interval at 1 second, so the server can increase it to 2, 3 or even 5 seconds dynamically as time passes, rather than hard-coding the interval at 2 seconds. Similarly, the interval can be decreased if there is a flurry of activity on an old post.
We also checked for idle clients and other things.
I have a basic server in PHP, and a mobile device sends data to it and saves it every minute.
Whenever my mobile phone loses its connection, I would like the server to notice: if the mobile hasn't inserted anything into the database for longer than 2 minutes, I would like to call a function on the server saying that the mobile lost its connection. Every time the mobile phone sends data to the server, the timer should reset.
I am not familiar with PHP, and I searched but couldn't find anything similar. I am sure there must be an easy way of doing it, such as setting a listener or creating a countdown timer.
You can't set a timer in your PHP code, as it only runs when a client requests a page.
As you don't have access to cron jobs, you can't do it from the CLI side either.
However, some web services will periodically call a page of your choice, so you could run save() on every http://something/save.php call. Take a look at http://www.google.fr/search?ie=UTF-8&q=online+cron for more information.
Note that if someone gets hold of the save() URL, they can easily overload your server. Secure it as much as possible, perhaps with a user/password combination passed as parameters.
Finally I got it working. What I learned is that the server can't do anything if there is no request :) So I created a timestamp field in the database and update it with the current time on each request. Before updating, I read the field, compare it with the current time to see when the last request was, and work out the time difference; if the difference is bigger than 2 minutes, I change the status. I hope this helps other people as well.
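The timestamp comparison reduces to a small helper, sketched here with the 2-minute threshold from the question (the function name is illustrative):

```javascript
// Compare the stored last-request time with "now": if the gap
// exceeds the threshold, the device is considered disconnected.
function isDisconnected(lastSeenMs, nowMs, thresholdMs = 2 * 60 * 1000) {
  return nowMs - lastSeenMs > thresholdMs;
}

// On each incoming request: read the stored timestamp, run this
// check, update the status if needed, then store the new time.
```

Note the check only runs when some request arrives; a device that goes silent is detected the next time anything touches the server (a cron-like external ping closes that gap).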
I've a particularly long operation that is going to get run when a
user presses a button on an interface and I'm wondering what would be the best
way to indicate this back to the client.
The operation is populating a fact table for a number of years worth of data
which will take roughly 20 minutes so I'm not intending the interface to be
synchronous. Even though it is generating large quantities of data server side,
I'd still like everything to remain responsive since the data for the month the
user is currently viewing will be updated fairly quickly which isn't a problem.
I thought about setting a session variable after the operation has completed
and polling for that session variable. Is this a feasible way to do such a
thing? However, I'm particularly concerned about the user navigating away or
closing their browser, after which all status about the long-running job is
lost.
Would it be better to insert a record somewhere, logging the processing job when it starts and finishes, and then build some other sort of interface so the user (or users) can monitor the jobs that are currently executing/finished/failed?
Has anyone any resources I could look at?
How'd you do it?
The server side portion of code should spawn or communicate with a process that lives outside the web server. Using web page code to run tasks that should be handled by a daemon is just sloppy work.
You can't expect them to hang around for 20 minutes. Even the most cooperative users in the world are bound to go off and do something else, forget, and close the window. Allowing such long connection times screws up any chance of a sensible HTTP timeout and leaves you open to trivial DOS too.
As Spencer suggests, use the first request to start a process which is independent of the http request, pass an id back in the AJAX response, store the id in the session or in a DB against that user, or whatever you want. The user can then do whatever they want and it won't interrupt the task. The id can be used to poll for status. If you save it to a DB, the user can log off, clear their cookies, and when they log back in you will still be able to retrieve the status of the task.
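A sketch of that id-based flow, with an in-memory Map standing in for the session/DB store (all names are illustrative; persisting to a real DB is what lets status survive logout):

```javascript
// First request registers a task and returns its id; later polls
// look the status up by id, independent of the HTTP connection.
const tasks = new Map();
let nextId = 1;

function startTask(userId) {
  const id = nextId++;
  tasks.set(id, { userId, status: 'running' });
  return id;                        // returned in the AJAX response
}

function finishTask(id) {
  // Called by the background process when the 20-minute job ends.
  if (tasks.has(id)) tasks.get(id).status = 'done';
}

function pollStatus(id) {
  const t = tasks.get(id);
  return t ? t.status : 'unknown';
}
```

The browser can close and reopen at will; as long as the id (or the user-to-task mapping) is stored server-side, the next poll picks the status right back up.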
Sessions are not that reliable; I would probably design some sort of task list, so I can keep records of tasks per user. With this design I would be able to show "done" tasks, to keep the user aware.
I would also move the long operation out of the worker process. This is required because web servers can be restarted.
And, yes, I would request the status from the server every few dozen seconds with ajax calls.
You can have a JS timer that periodically pings your server to see if any jobs are done. If the user goes away and comes back, you restart the timer. When a job is done, you indicate that to the user so they can click the link and open the report (I would not recommend forcefully loading something, though it can be done).
From my experience, the best way to do this is to record on the server side which reports are running for each user, along with their statuses. The client then polls this status periodically.
Basically, instead of checkStatusOf(int session), have the client ask the server for getRunningJobsFor(int userId), returning all running jobs and their statuses.
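A sketch of getRunningJobsFor over an illustrative in-memory job list (a real version would query the jobs table by user id):

```javascript
// Report every job a user owns, with status, instead of polling
// one session-bound job at a time.
const jobs = [
  { id: 1, userId: 'u1', status: 'running' },
  { id: 2, userId: 'u1', status: 'done' },
  { id: 3, userId: 'u2', status: 'running' },
];

function getRunningJobsFor(userId) {
  return jobs.filter(j => j.userId === userId)
             .map(j => ({ id: j.id, status: j.status }));
}

console.log(getRunningJobsFor('u1'));
// [ { id: 1, status: 'running' }, { id: 2, status: 'done' } ]
```

Keying the poll on the user rather than a session means a fresh login, a new tab, or a second device all see the same job list.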