I'm using PHP as my scripting language and MySQL as my database.
I'm wondering whether saving to the database on form submit is better than saving on every keypress.
Which is better in terms of usability/UX, and which in terms of performance?
Saving on every keypress means:
You'll need a server capable of handling more requests
You'll need smarter logic than firing on every keypress, since most people can press keys faster than most connections can round-trip an HTTP request. Possibly something like: "Every 30 seconds, but only if the data has changed, and delayed until a response is received for the last request (or it times out)".
Saving on every keypress is definitely a very bad idea! A typical form with 10 fields, at roughly 10 keypresses per field, will require at least 100 calls to the database. Not very optimal, even from a UX point of view. Instead, you can set up a timer that saves the user's input, say, every 30 seconds (started after the first field has been changed, for example), as in the sketch below.
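A minimal client-side sketch of that idea in jQuery. The endpoint name save-draft.php, the form id and the 30-second interval are assumptions for illustration:

// Autosave sketch: every 30 seconds, send the form to the server,
// but only if something has actually changed since the last save.
var formChanged = false;

$('#myForm :input').on('change keyup', function () {
    formChanged = true;
});

setInterval(function () {
    if (!formChanged) {
        return; // nothing new to save
    }
    formChanged = false;
    $.post('save-draft.php', $('#myForm').serialize(), function () {
        // draft saved; the real save still happens on form submit
    });
}, 30000);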
I made a private chat system. So far the chat has 3 jQuery AJAX POST scripts calling the server in a loop for new data:
Message window between the current user and the target user (the AJAX gets the timestamp of the last message in the DB and compares it to the timestamp of the last message that was displayed, gets all messages newer than that timestamp and displays them in the message window; the AJAX loops every 5 seconds after the last return).
Who's-online checker (checks the DB for who is online; the AJAX loops every 30 seconds after the last return).
Who messaged the current user (checks for and gets users who are not the current target user in the message window and have messaged the current user; the AJAX loops every 15 seconds after the last return).
So far the above 3 are the only AJAX loops I have, and I am still double-checking my code for areas where I can trim it down.
My question is: would it conserve more server resources to group the above 3 AJAX POSTs into 1 AJAX POST and loop it every 5-8 seconds, or should I leave it as is?
I ask this because I previously got a warning from my host that I was consuming too much of their server's resources (due to a very stupid experiment). If I mess up again they're going to cut my hosting, so I hope you understand why I ask this kind of question.
Extra details: I use jQuery AJAX to talk to a PHP script that gets the data from a MySQL DB. The loop for the requests is done client-side.
WebSockets are tricky. So if you decide to go with AJAX, there are a couple of factors to consider:
The frequency. Efficient systems usually use a sort of tick system. In your case a tick would be 5 seconds, since all of your intervals fit into a 5-second cycle. And yes, of course you group all the transmission needs of a tick into 1 transmission (see the sketch after this list).
The data quantity. Try not to send more than 1 KB per tick. E.g. use compact formats like CSV rather than XML, set hard limits on the number of entries, compress, things like that. Network traffic is sent in packets, so sending 1025 bytes can end up allocating 2 KB of resources.
User inactivity. Act on the user's inactivity somehow. E.g. stop using every tick for the "message window between current user and target user" if the user has been inactive for more than a minute, and add a sort-of-session timeout of 20 minutes or so.
The computation. Make the server-side tick response quick and small. Consider using memory tables or a memory cache for the tick handling, and then have an agent that runs every ten minutes or so to write whatever needs to be persisted to disk. Try to avoid heavy operations (e.g. more than 3 DB round trips) in the tick response.
The host. That was also mentioned in another comment. A quick additional hint: you could ask whether you are allowed to implement such a thing before you sign the contract, or whether you are able to change the contract. Sometimes things like video and instant messaging are explicitly mentioned in the general terms of service.
There are probably more things, but these come to mind immediately...
In general maybe you should also check out https://developers.google.com/speed/docs/best-practices/rtt
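A rough sketch of that single-tick approach in jQuery. The endpoint chat-tick.php, the response fields and the helper functions are assumptions; the real payload is whatever your PHP script returns:

// One combined tick: ask for everything in a single request every 5 seconds,
// and let the server decide what actually needs to be included this tick.
var lastMessageTimestamp = 0;

function tick() {
    $.post('chat-tick.php', {
        since: lastMessageTimestamp,  // only messages newer than this
        target: currentTargetUserId   // hypothetical: whose window is open
    }, function (data) {
        if (data.messages.length > 0) {
            lastMessageTimestamp = data.messages[data.messages.length - 1].time;
            appendMessages(data.messages);   // hypothetical render helper
        }
        updateOnlineList(data.online);       // who is online
        updateNotifications(data.senders);   // who messaged the current user
        setTimeout(tick, 5000);              // schedule the next tick only after this one returned
    }, 'json');
}
tick();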
Our company deals with sales. We receive orders and our PHP application allows our CSRs to process these orders.
There is a record in the database that is constantly changing depending on which order is currently being processed by a specific CSR - there is one of these fields for every CSR.
Currently, a completely separate page polls the database every second using an XMLHttpRequest and receives the response. If the response is not blank (which only happens when the value has changed in the database), it performs an action.
As you can imagine, this amounts to one database query per second as well as one HTTP request per second.
My question is, is there a better way to do this? Possibly a listener using sockets? Something that would ping my script when a change has been performed without forcing me to poll the database and/or send an http request.
Thanks in advance
First off, 1 query/second and 1 request/second really isn't much, especially since this number won't change as you get more CSRs or sales. If you were executing 1 query/order/second or something, you might have to worry, but as it stands, if it works well I probably wouldn't change it. It may be worth running some metrics on the query to ensure that it runs quickly, selecting on an indexed column and the like. Most databases offer a way to check how a query is executing, like the EXPLAIN syntax in MySQL.
That said, there are a few options.
Use database triggers to either perform the required updates when an edit is made, or to call an external script. Some reference materials for MySQL: http://dev.mysql.com/doc/refman/5.0/en/create-trigger.html
Have whatever software the CSRs are using call a second script directly when making an update.
Reduce polling frequency.
You could use an asynchronous architecture based on a message queue. When a CSR starts to handle an order and the record in the database is changed, a message is added to the queue. Your script can either block on requests for the latest queue item, or you could implement a queue that automatically notifies your script when messages are added.
Unless you have millions of these events happening simultaneously, this kind of setup will cause the action to be executed within milliseconds of the event occurring, and you won't be constantly making useless polling requests to your database.
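On the consumer side, the "block on requests for the latest queue item" part can be approximated with a long-poll loop in jQuery. This is only a sketch: the endpoint order-changes.php (a script that holds the request open until a queue message arrives or a timeout is hit) and the handler are assumptions:

// Long-poll consumer sketch: keep exactly one request open at a time.
function waitForChange() {
    $.ajax({
        url: 'order-changes.php',    // assumed blocking endpoint
        dataType: 'json',
        timeout: 60000,              // give the server up to 60s to answer
        success: function (msg) {
            if (msg) {
                handleOrderChange(msg);  // hypothetical handler for the CSR screen
            }
        },
        complete: function () {
            // Re-issue the request after a short pause, whether it succeeded
            // or timed out, so there is always one request waiting for the next event.
            setTimeout(waitForChange, 1000);
        }
    });
}
waitForChange();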
Background:
Two minutes before every hour, the server stops access to the site, returning a busy screen while it processes data received in the previous hour. This can take less than two minutes, in which case it sleeps until the two minutes are up; if it takes longer than two minutes, it runs as long as it needs to and then returns. The block flag is kept in its own table with one field and one value in that field.
Currently the user is only informed of the block when they try to perform an action (click a link, send a form, etc.). I was planning to update the code to automatically bring down a lightbox with the blocking message via the BlockUI jQuery plugin.
There are basically 2 methods I can see to achieve my aim:
Polling every N seconds (via PeriodicalUpdater or similar)
Long polling (Comet)
You can reduce the server load for option 1 by checking the local time and only starting the polling loop when it gets close to the blocking time. This can be made more accurate by sending the local time to the server and returning the difference mod 60. It still leaves 100+ people querying the server, which causes an additional hit on the DB.
Option 2 is the more attractive choice. It removes the repeated hit on the web server, but doesn't relieve the repeated check on the DB. However, option 2 is not really available to Apache 2.0 users like us, and even though we own our server, none of us are web admins and we don't want to break it; people pay real money to play, so if it isn't broke, don't fix it (hence why we are still running PHP 4/MySQL 3).
Because of the problems with option 2, we are back with option 1, which is sub-optimal.
So my question is really two-fold:
Are there any other possibilities I've missed?
Is long polling really such a problem at this size? I understand it doesn't scale, but I am more concerned about the point at which it starves Apache of threads. Also, are there any options you can adjust in Apache so it scales slightly further?
Could you just send the page how much time is left before the server starts processing the data received in the previous hour? Let's say that when sending the HTML you record that the processing will start in 1 minute, and create a JS timer that fires after that 1 minute and shows the lightbox, as in the sketch below.
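A minimal sketch of that timer, assuming the server writes the number of seconds until the next processing window into the page when it renders it (the variable name and message text are illustrative):

// Written out by PHP when the page is built, e.g. 60 seconds from now.
var secondsUntilBlock = 60;

setTimeout(function () {
    // BlockUI is the plugin already mentioned in the question.
    $.blockUI({ message: '<h2>Hourly processing in progress, please wait...</h2>' });
}, secondsUntilBlock * 1000);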
The alternative I see is to get the processing done faster, so there is less downtime from the user's perspective. To do that I would use a distributed system, such as Hadoop, for the actual data processing behind the hourly update, and then use whichever method is most appropriate for that shorter downtime to update the page.
I am trying to create a live orders page for my website. I need the page to auto-update to show that a new order has been placed, and to display an alert/sound whenever a new order row appears in the database.
Any ideas on how I can easily achieve this?
Thanks,
-AJay
You will need to use something like Comet to be able to push data to the client.
Then, use a MySQL trigger to somehow raise an event in your server application that's holding the Comet connection open to push the appropriate data.
The less elegant way that many developers use (at least until WebSockets become popular) is to poll with AJAX for changes, but this has a high bandwidth overhead and a longer latency.
On the AJAX side you can use JavaScript timers, like this:
// First parameter is the function to run, second is the delay in milliseconds.
setTimeout(UpdateFunction, 2000);
A better pattern is to have the function reschedule itself, so the next call only happens after the previous one has finished:
// Kick off the first run after 5 seconds.
setTimeout(UpdateFunction, 5000);
function UpdateFunction() {
    // (do something here, e.g. fetch and render the latest orders)

    // Schedule the next run 5 seconds after this one completes.
    setTimeout(UpdateFunction, 5000);
}
Your UpdateFunction() should call a PHP or ASP page which returns the updated list of orders.
I would think a polling approach would serve you well, as server push has many negative implications for the browser.
If going the polling route, I would suggest having a timed event on your page that calls a web method. The web method would return something small (such as a list of IDs) describing the queued orders. Compare that list of IDs to what's currently rendered on the page and, if the newly returned list contains something that doesn't exist yet (or vice versa), call a separate method to retrieve the additional details needed to display the new orders or remove old entries.
This way, you do not need to keep a steady stream open to the server (which can block the user's browser from making additional content requests).
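A rough jQuery sketch of that poll-and-diff idea; the endpoints order-ids.php and order-details.php, the JSON shapes and the rendering helper are assumptions:

// Poll for the list of order IDs and fetch details only for new ones.
var knownIds = [];

setInterval(function () {
    $.getJSON('order-ids.php', function (ids) {
        var newIds = $.grep(ids, function (id) {
            return $.inArray(id, knownIds) === -1;  // not rendered yet
        });
        if (newIds.length > 0) {
            $.getJSON('order-details.php', { ids: newIds.join(',') }, renderNewOrders);
        }
        knownIds = ids;
    });
}, 10000);

function renderNewOrders(orders) {
    // Hypothetical rendering/alert step: append a row per new order.
    $.each(orders, function (i, order) {
        $('#orders').append('<tr><td>' + order.id + '</td><td>' + order.customer + '</td></tr>');
    });
}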
I hope that helped at all.
I was just wondering how PHP behaves in the background.
Say I have a PHP script which creates an array and populates it with names.
$names = Array("Anna", "Jackson" .... "Owen");
Then I have an input field which sends its value on every keypress to the PHP script, to check for names containing the value.
Will the array be created on every call? I also sort the array before looping through it, so the output will be alphabetical. Will this take up time in the AJAX call?
If the answer is yes, is there some way to get around that, so the array is ready to be looped through on every call?
There's no difference between an AJAX request and a "normal" HTTP request, so yes, a new PHP instance will be created for each request. If the time it takes to parse the script(s) is a problem, you can use an opcode cache such as APC.
If those arrays are created at runtime and the time this takes is a problem, you might store and share the values between requests in something like memcache.
No matter what method you use to create the array (whether it's in the code, pulled from a database, a text file or any other source), when the web server gets an HTTP request (whether it is AJAX or not), it will start executing the PHP script, allocate its space in memory, and create the array.
There's only one entry point for a PHP script, and that is its first line, when an HTTP request points to it (or when another script includes it, which is the same thing).
As far as I know, it will have to create the array each time, as the AJAX will make a new server request on each keypress in the input field, and each server request will create the array if your script is written to do so.
A better method would be to use a database to store the names.
Yes it will be created and destroyed every time you run the PHP script.
If this is a problem you could look at persisting this data somewhere (e.g. in a Session or in a Database), but I would ask whether it is really causing you so much of a performance problem that you need to do this?
(It's not a direct answer to your question, but it can help if you are concerned about performance.)
You say this:
Then I have an input field which sends its value on every keypress to the PHP script
In this case, it is common practice not to send the request to the server as soon as a key is pressed: you generally wait a short time (between 100 ms and 150 ms, I'd say) to see if there is not another keypress in that interval (see the sketch after this list):
if the user types several keys, they usually type faster than the time you are waiting, so you only send a request for the last keypress, and not for every keypress
if the user types 4 letters, you only do 1 request instead of 4, which is great for the health of your server :-)
speaking of time for the user: waiting 100 ms plus the time to go to the server, run the script, and get back from the server is almost the same as without waiting the 100 ms first; so, not bad for the user either
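A minimal debounce sketch in jQuery; the input id, the endpoint search.php and the results container are assumptions:

// Wait ~150 ms after the last keypress before asking the server for matches.
var debounceTimer = null;

$('#name-input').on('keyup', function () {
    var value = $(this).val();
    clearTimeout(debounceTimer);
    debounceTimer = setTimeout(function () {
        $.get('search.php', { q: value }, function (names) {
            $('#results').html(names);  // render whatever the PHP script returns
        });
    }, 150);
});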
As a sidenote: if your list of data is not too big (20 names is definitely OK, 100 names would probably be OK, 1000 might be too much), you could store it directly as a JavaScript array and not do an AJAX request at all: it's the fastest way (no client-server call), and it won't load your server at all. A sketch of that variant follows.
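Purely client-side variant, sketched under the assumption that PHP writes the array into the page when it renders it (hard-coded here for illustration):

// Small list of names emitted by the server at page-render time.
var names = ["Anna", "Jackson", "Owen"];

$('#name-input').on('keyup', function () {
    var query = $(this).val().toLowerCase();
    var matches = $.grep(names, function (name) {
        return name.toLowerCase().indexOf(query) !== -1;  // case-insensitive "contains"
    });
    $('#results').html(matches.sort().join('<br>'));  // alphabetical, like the PHP version
});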