I need to run long (minutes to hours) MATLAB code on the server side and send the user its progress status (0-100%). I can't send the data directly to the client side because the client may disconnect and check the status hours later.
Should I do it through the database? I thought about updating the database through MATLAB/PHP while the client side (PHP via JavaScript/AJAX) queries the database every few seconds, but I am afraid that's very "expensive" (many read and write operations for only one user).
What should I do?
By the way, it's an internal network, dozens of users, no more.
You did not mention the kind of database you are using.
If it is MySQL, and since you are only on an internal network with some dozens of users: yes, you can use the database. If you want to keep read/write operations cheap, you can use the MEMORY storage engine for that purpose.
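For illustration, here is a minimal sketch of that approach; the job_progress table, its columns, and the credentials are all invented for the example:

<?php
// Assumed schema (created once):
//   CREATE TABLE job_progress (job_id INT PRIMARY KEY, percent TINYINT)
//   ENGINE=MEMORY;
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Called by the long-running job (e.g. a PHP wrapper around the MATLAB run):
function setProgress(PDO $pdo, int $jobId, int $percent): void
{
    $pdo->prepare('REPLACE INTO job_progress (job_id, percent) VALUES (?, ?)')
        ->execute([$jobId, $percent]);
}

// Called by the AJAX poller every few seconds:
function getProgress(PDO $pdo, int $jobId): int
{
    $stmt = $pdo->prepare('SELECT percent FROM job_progress WHERE job_id = ?');
    $stmt->execute([$jobId]);
    return (int) $stmt->fetchColumn();
}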
Also, you can use Memcache for inter-process communication. One process writes into Memcache, and another process reads the value out.
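A hedged sketch of that variant, using PHP's Memcached extension (the key name is made up):

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

// Writer side (the long-running job):
$mc->set('progress:job42', 73);            // job is 73% done

// Reader side (the AJAX status endpoint):
$percent = $mc->get('progress:job42');     // returns false if the key is missing
echo json_encode(['percent' => $percent === false ? 0 : (int) $percent]);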
I am trying to build a tracking system wherein an Android app sends GPS data to a web server using Laravel. I have read tutorials on how to build realtime apps, but as far as I understand, most of the guides only cover receiving data in realtime. I haven't yet seen examples of sending data every second or so.
I guess it's not good practice to POST data to a web server every second, especially when you already have a thousand users. I hope someone can suggest how to approach this.
Also, as much as possible, I would like to use only Laravel, without any NodeJS server.
Handle requests quickly
First you should estimate server capacity. With FPM, if you have 32 PHP processes and every POST request is handled by the server within 0.01 sec, capacity can be roughly estimated as N = 32 / 0.01 = 3200 requests per second.
So just make the handling fast. If your request takes 0.1 sec to handle, that is too slow to serve a lot of clients from a single server. Enable OPcache; it can decrease the time 5x. Inserting data into MySQL is a slow operation, so you probably need to work around it. Say, add the data to a fast cache (Redis/Memcached), and when the cache already contains 1000 elements, or was created more than 0.5 seconds ago, move it to the database as a single insert query, as sketched below.
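A rough sketch of that buffering idea with phpredis; the list name, table, and columns are illustrative, and a production version would drain the list atomically (e.g. with MULTI/EXEC) and also flush by age from a cron job:

<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// On each POST: push the point onto an in-memory list (fast).
$point = json_encode([
    'device' => $_POST['device'],
    'lat'    => $_POST['lat'],
    'lng'    => $_POST['lng'],
    'ts'     => time(),
]);
$redis->rPush('gps_buffer', $point);

// When the buffer is big enough, drain it into MySQL as ONE insert.
if ($redis->lLen('gps_buffer') >= 1000) {
    $points = $redis->lRange('gps_buffer', 0, -1);
    $redis->del('gps_buffer');                 // not atomic; fine for a sketch

    $values = [];
    $params = [];
    foreach ($points as $json) {
        $p = json_decode($json, true);
        $values[] = '(?, ?, ?, ?)';
        array_push($params, $p['device'], $p['lat'], $p['lng'], $p['ts']);
    }
    $pdo = new PDO('mysql:host=localhost;dbname=tracking', 'user', 'pass');
    $pdo->prepare('INSERT INTO gps_points (device, lat, lng, ts) VALUES '
                  . implode(',', $values))
        ->execute($params);
}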
Randomize send times
Most smartphones have the correct time, so the start of each second can bring a thousand simultaneous requests: in the first 0.01 sec the server handles 1000 requests, and for the next 0.99 sec it sleeps. Insert into the mobile code a random delay of 0-0.9 sec that is fixed for every device and defined at first install or first request. It will load the server uniformly.
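One way to realize the "defined at first request" part is to let the server assign the fixed offset on the device's first contact; this is a sketch of that variant, with the table and field names invented:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=tracking', 'user', 'pass');
$deviceId = $_POST['device'];

$stmt = $pdo->prepare('SELECT send_offset_ms FROM devices WHERE id = ?');
$stmt->execute([$deviceId]);
$offset = $stmt->fetchColumn();

if ($offset === false) {                       // first contact: pick once, keep forever
    $offset = random_int(0, 900);              // 0-0.9 sec, uniform across devices
    $pdo->prepare('INSERT INTO devices (id, send_offset_ms) VALUES (?, ?)')
        ->execute([$deviceId, $offset]);
}

// The app shifts every scheduled send by this many milliseconds.
echo json_encode(['send_offset_ms' => (int) $offset]);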
There are at least two really important things you should consider:
Client's internet consumption
Server capacity
If you have a thousand users, every second would mean a lot of requests for your server to handle.
You should consider using some push technique, like the one described in this answer by #Dipin:
And when it comes to the server, you should consider using a queue system to handle those jobs, like described in this article. There's probably some package providing integration with Firebase or GCM to handle that for you.
Good luck, hope it helps o/
Which one is best to choose: server-side or client-side?
I have a PHP function something like:
function insert($argument)
{
    // do some heavy MySQL work, such as a stored-procedure call,
    // that takes about 1.5 seconds
}
I have to call this function about 500 times.
for ($i = 1; $i <= 500; $i++)
{
    insert($argument);
}
I have two options:
a) call it in a loop in PHP (server-side) --> the server may time out
b) call it in a loop in JavaScript (AJAX) --> takes a long time
Please suggest the best one, or a third option if there is any.
If I understand correctly, your server still needs to do all the work, so you can't use the client's computer to lessen the load on your server. That leaves you with a choice between the following:
Let the client ask the server 500 times. This easily lets you show progress to the client, giving him the satisfying knowledge that something is happening, or
Let the server do everything, skipping the 500 extra round trips and the extra overhead needed to process the 500 requests.
I would probably go with 1 if it's important that the client doesn't give up early, or 2 if it's important that the job runs all the way through, as the client might stop the requests after 300.
EDIT: With regard to your comment, I would then suggest having a "start work" button on the client that tells the server to start the job. Your server then tells a background service (which can be created in PHP) to do the work, and that service can write its progress to a file or a database or something. Then the client and the PHP server are free to time out and log out without problems, and you can later refresh the page to see whether the background work is completed, reading the progress from the database or file or whatever. That way you minimize both time and dependencies.
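A minimal sketch of that setup, split across three small scripts; the file names, paths, and the progress file are invented, and insert()/$argument come from the question:

<?php // start.php: kick the job off in the background and return immediately
file_put_contents('/tmp/job_progress.txt', '0');
exec('php /path/to/worker.php > /dev/null 2>&1 &');   // detach from this request
echo 'started';

<?php // worker.php: the background service doing the real work
set_time_limit(0);                                    // may run for a long time
for ($i = 1; $i <= 500; $i++) {
    insert($argument);                                // the slow function from the question
    file_put_contents('/tmp/job_progress.txt', (string) floor($i / 5));  // 0-100
}

<?php // status.php: the page polls this to read progress (0-100)
echo (int) file_get_contents('/tmp/job_progress.txt');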
You have not given any context for what you are trying to achieve. Of key importance here are performance and whether the set of values should be treated as a single transaction.
The further the loop is from the physical storage (not just the DBMS), the bigger the performance impact. For most web applications the biggest performance bottleneck is the network latency between the client and the webserver: even if you are relatively close, say 50 milliseconds away, and have keepalives working properly, it will take a minimum of 25 seconds to carry out this operation for 500 data items.
For optimal performance you should be sending the data to the DBMS in the smallest number of DML statements. You've mentioned MySQL, which supports multiple-row inserts, and if you're using MySQLi you can also submit multiple DML statements in the same database call (although the latter only eliminates the chatter between PHP and the DBMS, while a single DML statement inserting multiple rows also reduces the chatter between the DBMS and the storage). Depending on the data structure and optimization, this should take on the order of tens of milliseconds to insert hundreds of rows; both methods will be much, MUCH faster than running the loop in the client, even if the latency were 0.
The length of time the transaction is in progress determines the likelihood of the transaction failing, so the faster method will be thousands of times more reliable than the AJAX method.
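As a hedged illustration of the multiple-row-insert point (the table and column names are invented):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Build ONE statement with 500 value tuples instead of 500 statements.
$values = [];
$params = [];
for ($i = 1; $i <= 500; $i++) {
    $values[] = '(?)';
    $params[] = $i;                    // stand-in for the real argument
}

$pdo->prepare('INSERT INTO items (argument) VALUES ' . implode(', ', $values))
    ->execute($params);                // one round trip from PHP to the DBMS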
As Krycke suggests, using the client to do some of the work will not save resources on your system: there is the additional overhead of the webserver, the PHP instances, and the DBMS connection. Although these are relatively small, they add up quickly. If you test both approaches you will find that having the loop in PHP or in the database results in significantly less effort and therefore greater capacity on your server.
Once I had a script which ran for tens of minutes. My solution was doing a long request through AJAX with a timeout of 1 second and checking for the result in other AJAX threads. The experience for the user is better than waiting too long for a response from PHP without AJAX.
$.ajax({
    ...              // url, data, success handler, etc. go here
    timeout: 1000    // give up on this request after one second
});
So finally I got this:
a) Use AJAX if you want to be sure that it will complete. It is also user-friendly, as the user gets regular feedback between AJAX calls.
b) Use a server-side script if you are fairly sure the server will not go down in between and you want less load on the client.
Now I am using the server-side script, with a waiting-message window for the user; the user waits for the successful-submission message, otherwise he has to try again. The probability that it will succeed on the first attempt is 90-95%.
I am creating a web application named Online Exam using PHP+MySQL and AngularJS. Now I am running into some trouble with the project, such as updating the user's logged-in status. Let us take this scenario as an example:
Suppose an authorized user/student successfully logs in to the online exam section (after a successful login, the current time is inserted into the db in the exam_start_time column in unix timestamp format, and exam_status is set to ACTIVE).
A 1 hr (60 min) countdown timer is initialized for him/her based on the exam_start_time inserted in the db.
Now suppose that after 15 min the system shuts down unexpectedly. If the user then logs in again (on the same system or another), the countdown timer should be set for the remaining 45 minutes only.
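(For the arithmetic: the remaining time can be derived purely from the stored start time. A minimal sketch, with an example timestamp standing in for the db value:)

<?php
$examDurationSec = 3600;                            // 60-minute exam
$examStartTime   = 1700000000;                      // read from exam_start_time
$remaining = max(0, $examStartTime + $examDurationSec - time());
echo gmdate('i:s', $remaining);                     // e.g. "45:00" after a 15-min outage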
Previously I was updating the last_activity_time column in our table every 10 sec (using AJAX calls), but now I want to change this approach. Is there any other way (socket or network programming using PHP) to update the column?
Here is my table structure, which manages the user's logged-in status.
Please give me some suggestions on it.
A PHP socket server programming tutorial: http://www.christophh.net/2012/07/24/php-socket-programming/
Sockets, as Pascal Le Merrer mentioned, are IMO your best option. But beware of Apache! Every WebSocket holds one Apache thread, and Apache wasn't designed for that: when too many clients connect to your site (and by too many I mean a few dozen), it will crash. (I've been there while trying to implement long polling/comet; I ended up using NodeJS.) If you're using nginx, it is more likely to stay light on resources and effective, but there are also other ways. Take a look here:
Using WebSocket on Apache server
If you find this uncomfortable or hard to learn, also try another idea:
Try adding a hidden iframe to your exam page, pointing to a prepared page that updates the database row. Use JavaScript to refresh this page every 10-15 seconds. Every refresh causes an update of the specific row in the DB, using the current date and time. It should work (not tested, but give it a try); a sketch follows.
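A sketch of that prepared page, assuming an invented user_sessions table; the exam page would embed it with <iframe src="heartbeat.php" style="display:none"></iframe>:

<?php // heartbeat.php: each reload bumps last_activity_time for the logged-in user
session_start();
$pdo = new PDO('mysql:host=localhost;dbname=exam', 'user', 'pass');
$pdo->prepare('UPDATE user_sessions SET last_activity_time = ? WHERE user_id = ?')
    ->execute([time(), $_SESSION['user_id']]);
?>
<script>
    // Reload this hidden page every 10 seconds so the row keeps getting updated.
    setTimeout(function () { location.reload(); }, 10000);
</script>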
I need to check for updates at a (max) one-second interval.
I'm now looking for the fastest way to do that using AJAX for the requests and PHP and MySQL.
Solution 1
Every time new data that needs to be retrieved by other clients is added to the MySQL database, a file.txt is updated to contain 1. AJAX makes a request to a PHP file which checks whether file.txt contains a 1 or a 0. If it contains a 1, the script gets the data from the MySQL database and returns it to the client.
Solution 2
Every AJAX request calls a PHP file which checks the MySQL database directly for new data.
Solution ..?
If there is any faster solution I'd be happy to know! (Considering I can only use PHP/MySQL and AJAX.)
Avoiding the database will probably not make the process significantly faster, if at all.
You can use a comet-style AJAX request to get near-real-time polling. Basically, create an AJAX request as usual to a PHP script, but on the server side poll the database and sleep for a short interval if there is nothing new. Repeat until there is something of interest for the client. If nothing appears within a timeframe of e.g. 60 seconds, close the connection. On the client side, only open a new connection once the first has terminated (either with a response or as a timeout).
See: https://en.wikipedia.org/wiki/Comet_(programming)
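A hedged sketch of such a comet-style endpoint; the updates table and the last_id protocol are invented for the example:

<?php
set_time_limit(70);                    // allow the request to run past the default limit
session_write_close();                 // don't block the user's other requests

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$lastId   = (int) ($_GET['last_id'] ?? 0);
$deadline = time() + 60;               // close the connection after ~60 seconds

do {
    $stmt = $pdo->prepare('SELECT id, payload FROM updates WHERE id > ? ORDER BY id');
    $stmt->execute([$lastId]);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    if ($rows) {
        echo json_encode($rows);       // client stores the newest id and reconnects
        exit;
    }
    usleep(250000);                    // nothing new: sleep 0.25 sec and poll again
} while (time() < $deadline);

echo json_encode([]);                  // timeout: client simply opens a new request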
I have a db with over 5 million rows, and for each row I have to do an HTTP POST to a server with some parameters, at a maximum of 500 concurrent connections. Each POST request takes 12 secs to process, so as old connections complete I have to open new ones and maintain ~500 connections. I then have to update the DB with the values returned from these web calls.
How do I make the web calls described above?
My app is in PHP. Can I use PHP, or should I switch to something else for this?
Actually, you can definitely do this with PHP using a technique called long polling. Basically, how it works is that the client machine pings the server and says "Do you have anything for me?", and the server sees that it does not. Instead of responding, it holds onto the request and responds when it has something to send.
Long polling is a method used by both DrupalChat and the APE project (AJAX Push Engine).
http://drupal.org/project/drupalchat
http://www.ape-project.org/
Here is some more info on push tech: http://en.wikipedia.org/wiki/Push_technology and http://en.wikipedia.org/wiki/Comet_%28programming%29
And here is a stackoverflow post about it: How do I implement basic "Long Polling"?
Now I have to say that 12 seconds is really dang long for a DB query to run. It sounds like either the query needs to be optimized or the DB does (or both). Have you normalized the database and set up good table and inter-table indexing?
Now, as for preventing DB update collisions, you need to use transactions (which both PostgreSQL and newer versions of MySQL offer, along with most enterprise DB systems). Transactions allow you to roll back db changes, reserve table IDs, and things like that.
http://en.wikipedia.org/wiki/Database_transaction
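For reference, the PDO transaction pattern looks like this (table name and values are invented):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    $pdo->prepare('UPDATE results SET response = ? WHERE id = ?')
        ->execute(['ok', 42]);         // illustrative values
    // ... more statements that must succeed or fail together ...
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();                  // nothing is persisted if anything failed
    throw $e;
}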
PHP isn't the right tool for long-running scripts, since by default it has a maximum execution time which is pretty short. You might look into using Python for this task. Also note that you can call external scripts from PHP (such as Python scripts) using the system() function, if the only reason you're using PHP is easy integration with a web front-end.
However, you *can* do this in PHP with a cron job by simply having your PHP script handle only a single row at a time, and having the cron job call the PHP script every second. Just maintain the index into the table elsewhere (either elsewhere in the DB, or just write the number to a file).
If you wanted to saturate your 500-connection limit, have your script do 40 rows at a time: 40 rows/second is roughly 500 rows per 12 seconds, so about 480 requests would be in flight at once.
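A sketch of one such cron run using curl_multi; the queue table, its columns, and the URL are invented for the example:

<?php
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$rows = $pdo->query("SELECT id, param FROM queue WHERE status = 'pending' LIMIT 40")
            ->fetchAll(PDO::FETCH_ASSOC);

$mh      = curl_multi_init();
$handles = [];
foreach ($rows as $row) {
    $ch = curl_init('https://api.example.com/endpoint');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => ['param' => $row['param']],
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT        => 30,
    ]);
    curl_multi_add_handle($mh, $ch);
    $handles[$row['id']] = $ch;
}

// Run all ~40 transfers in parallel; each takes ~12 sec, but they overlap.
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);
} while ($running > 0);

// Write each response back to its row.
$upd = $pdo->prepare("UPDATE queue SET status = 'done', response = ? WHERE id = ?");
foreach ($handles as $id => $ch) {
    $upd->execute([curl_multi_getcontent($ch), $id]);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);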