Hi there, we are developing a website where students take online tests.
We are working with PHP and MySQL.
The questions for all the tests are stored in one table, with each question carrying the test_id of the test it belongs to.
Problem:
The questions of a test are loaded from the server, and this sometimes takes time.
As these are timed online tests, the test taker feels his time is being wasted.
The loading time may be a result of:
a slow internet connection
database search time
Question(s)
What is the best way to give the test-taker a smooth, lag-free experience irrespective of his internet speed and PC configuration?
From your wording, I'm assuming each individual question has its own time limit.
Eliminating a user's slow connection is impossible; if you measure the time on the client to try to avoid that, you open it up to cheating (the client can hack the JavaScript to present a false time).
However, you can eliminate database query time: set up a websocket server, have the user connect to it when they start the test, load all of the relevant questions in advance into a queue on the server, and when the user requests a question, immediately record the current time and send the next question from the queue out over the websocket connection.
Also make sure that upon receiving the question, the client side JS displays it immediately and doesn't have to e.g. make further AJAX calls or requests before it can display it. If additional information is needed, that should be looked up by your websocket server and bundled in with the question.
By doing this you should be able to get the response time below 50ms if the user has a decent internet connection and is in the same country as your websocket server.
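As a rough illustration of that setup, here is a minimal sketch using the Ratchet websocket library for PHP (an assumption; any websocket server will do). loadQuestionsForTest() and recordQuestionStart() are hypothetical helpers standing in for your own question loading and timing bookkeeping:

```
<?php
// Minimal Ratchet-based sketch: preload the questions, then serve each one
// from memory the moment the client asks for it.
require __DIR__ . '/vendor/autoload.php';

use Ratchet\ConnectionInterface;
use Ratchet\Http\HttpServer;
use Ratchet\MessageComponentInterface;
use Ratchet\Server\IoServer;
use Ratchet\WebSocket\WsServer;

class QuestionFeed implements MessageComponentInterface
{
    private $queues; // per-connection queue of preloaded questions

    public function __construct()
    {
        $this->queues = new \SplObjectStorage();
    }

    public function onOpen(ConnectionInterface $conn)
    {
        $this->queues[$conn] = [];
    }

    public function onMessage(ConnectionInterface $from, $msg)
    {
        $data = json_decode($msg, true);

        if ($data['action'] === 'start') {
            // Load every question for this test up front, so no database
            // query happens while the clock is running (hypothetical helper).
            $this->queues[$from] = loadQuestionsForTest($data['test_id']);
            return;
        }

        // action === 'next': record the hand-out time and reply immediately.
        $queue = $this->queues[$from];
        recordQuestionStart($from, microtime(true)); // hypothetical bookkeeping
        $from->send(json_encode(array_shift($queue)));
        $this->queues[$from] = $queue;
    }

    public function onClose(ConnectionInterface $conn)
    {
        unset($this->queues[$conn]);
    }

    public function onError(ConnectionInterface $conn, \Exception $e)
    {
        $conn->close();
    }
}

IoServer::factory(new HttpServer(new WsServer(new QuestionFeed())), 8080)->run();
```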
You cannot speed up loading regardless of the user's internet connection. Of course, you can (and should) optimize all SQL queries and long-running tasks so they perform as well as possible.
To avoid issues with the test time running out, I would recommend loading all questions before the timer starts. All the data can then be kept in the client's local storage, but take into account that this will only work if the browser supports local storage.
Another possibility is to load/generate all data and use a server-side cache (like memcached, or a simple file cache). On every new action, that cache can be queried without having to fetch everything from the database again. Of course, this will only speed things up if the performance issues lie in long-running queries, database speed, etc., not if the user's internet connection is too slow.
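For the server-side cache option, a minimal memcached sketch (assuming the Memcached PHP extension and mysqlnd for get_result(); the questions table and test_id column come from your description, everything else is a placeholder):

```
<?php
// Cache all questions of a test for 10 minutes so only the first request
// after a cold cache ever hits the database.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

function getQuestions(Memcached $cache, mysqli $db, int $testId): array
{
    $key = 'test_questions_' . $testId;
    $questions = $cache->get($key);
    if ($questions === false) {
        // Cache miss: run the (potentially slow) query once, then cache it.
        $stmt = $db->prepare('SELECT * FROM questions WHERE test_id = ?');
        $stmt->bind_param('i', $testId);
        $stmt->execute();
        $questions = $stmt->get_result()->fetch_all(MYSQLI_ASSOC);
        $cache->set($key, $questions, 600);
    }
    return $questions;
}
```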
Related
I am working on a backend application that makes some remote API requests, which take 1-3 seconds to respond depending on the load on the remote services.
This backend will get a lot of requests per second, and I am trying to get the best performance I can out of my server.
Should I close the MySQL connection before calling the API and reopen it after receiving the reply, to free up some resources?
If I do so, what could go wrong?
What should I do if I can't connect again?
I MUST store and update the database after receiving the request.
I am using plain PHP with MySQL (MySQLi).
If your application has to perform some time-consuming task that is not related to the database, then you should close the connection so that your PHP process doesn't block resources on the MySQL server.
You can open a new connection after the API call is over. The obvious downside is that you lose the ability to perform transactions: a transaction cannot remain open across two separate MySQL sessions.
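A sketch of that close/reopen pattern with mysqli (credentials, the jobs table, and the API URL are placeholders); note the lost-transaction comment and the reconnect failure path, which covers the "what if I can't connect again" case:

```
<?php
// Close the MySQL connection for the duration of the slow API call,
// then reconnect to store the reply.
$db = new mysqli('localhost', 'user', 'pass', 'app');
$db->query("UPDATE jobs SET status = 'calling_api' WHERE id = 1");
$db->close(); // frees the MySQL thread during the 1-3 second API call
              // (any open transaction would be rolled back here)

$ch = curl_init('https://api.example.com/endpoint');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$reply = curl_exec($ch);
curl_close($ch);

$db = new mysqli('localhost', 'user', 'pass', 'app');
if ($db->connect_errno) {
    // Reconnect failed: keep the reply somewhere durable and retry later.
    error_log('Reconnect failed: ' . $db->connect_error);
    file_put_contents('/tmp/pending_reply.json', $reply);
    exit;
}
$stmt = $db->prepare("UPDATE jobs SET status = 'done', reply = ? WHERE id = 1");
$stmt->bind_param('s', $reply);
$stmt->execute();
```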
An even better option would be to detach the long-running process from your main process. There are a number of ways this can be achieved. If you can run the process in the background, that would be the best option. Caching long-running processes is also viable. If the user must wait for the whole job to complete, then you can employ a two- or three-step process:
User sends a request to the server. Your PHP code does some DB operations and redirects the user to the second step
Another PHP process calls the API and handles only the API-related logic. No database connection is established. Once it is done, the application redirects the user to the last step.
Your PHP application performs another set of DB activities and returns the final response to the user.
Of course, with such complex operations there is a greater risk of your application losing transactional correctness, so you have to weigh the advantages and disadvantages.
One last point: if your server is set up correctly, your available PHP processes should be matched by a corresponding number of MySQL connections. This ensures that if your PHP processes are utilized at 100% and each one needs to perform some DB operations, your MySQL server is not the bottleneck. In that case, why not keep it simple and leave the database connection open while performing the long-running API call? It saves you a lot of trouble.
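If you do decide to detach the work into a background process instead, here is a minimal hand-off sketch (worker.php and the api_jobs table are hypothetical; the request returns immediately and holds no MySQL connection while the worker runs):

```
<?php
// Request handler: record the job, start a detached worker, return at once.
$db = new mysqli('localhost', 'user', 'pass', 'app');

$payload = json_encode($_POST);
$stmt = $db->prepare("INSERT INTO api_jobs (payload, status) VALUES (?, 'queued')");
$stmt->bind_param('s', $payload);
$stmt->execute();
$jobId = $db->insert_id;

// worker.php makes the slow API call and writes the result back on its own
// connection; this request does not wait for it.
exec(sprintf('php worker.php %d > /dev/null 2>&1 &', $jobId));

echo json_encode(['job' => $jobId, 'status' => 'queued']);
```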
I have a web application written in PHP.
One function of this application is a document archive, which is a MySQL database on another server. This archive server is pretty unreliable performance-wise and is not under my control. It often holds long table locks, which results in getting a connection but not getting any data.
This often leads to open MySQL connections that saturate the resources of the web application server. As a result, the whole web application becomes slow or inaccessible.
I would like to decouple the two systems.
I thought the logical way would be for my PHP application to abort a SELECT query if it takes longer than 1 or 2 seconds, to free up resources and present the user with a message that the remote system is not responding in time.
But how is it best to implement such a solution?
UPDATE: the set_time_limit() option looks promising, but it is not fully satisfying, as I'm not able to present the user with a message; at least it might help prevent the saturation of resources.
I think you should use the maximum execution time limit functionality provided by MySQL and PHP.
You can cap how long a query may run on the MySQL side,
or you can make your PHP connection give up after a couple of seconds; a sketch of both follows.
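A sketch of both options with mysqli (assumptions: MYSQLI_OPT_READ_TIMEOUT needs PHP 7.2+, and the MAX_EXECUTION_TIME hint needs MySQL 5.7.8+; host, credentials, and the documents table are placeholders):

```
<?php
$db = mysqli_init();

// Option 2 (in PHP code): make the connection itself give up quickly.
$db->options(MYSQLI_OPT_CONNECT_TIMEOUT, 2); // stop trying to connect after 2 s
$db->options(MYSQLI_OPT_READ_TIMEOUT, 2);    // stop waiting for results after 2 s

if (!@$db->real_connect('archive-host', 'user', 'pass', 'archive')) {
    exit('The document archive is not responding right now, please try again later.');
}

// Option 1 (on the MySQL side): cap this SELECT at 2000 ms. The same limit
// can be set server-wide via the max_execution_time variable (MySQL 5.7.8+).
$result = $db->query('SELECT /*+ MAX_EXECUTION_TIME(2000) */ * FROM documents WHERE archived = 1');
if ($result === false) {
    echo 'The document archive did not answer in time.';
}
```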
I think the second solution (handling it in your PHP code) might be better for you.
Then, if the timeout error is raised, you can tell the user that the remote server did not respond.
I have built a small PHP client/server script. When I say client/server, I mean it acts as both client and server, alternating between the two modes every 5 seconds.
The code runs on two servers and is triggered by a cron job.
On rare occasions they end up perfectly in sync with each other, and then they either establish a connection at the very last microsecond, by which time the PHP code has already switched to client mode, or they never manage to establish a connection at all.
Before this whole dance starts, they run some database queries to select some information that can be big or small and is never identical on the two servers, so adding some randomness to the timings has only made these incidents rarer, not made them disappear completely.
Anyone ever manage to do something like this successfully? How?
You have designed a race condition here. No matter how you try to synchronize these, you'll get in trouble eventually.
The way to solve this is to have each process act as a server all the time, and perform client functionality on demand.
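A rough sketch of that layout using PHP's stream sockets (the port, the peer address, and the handleRequest/hasPendingUpdates/buildUpdate helpers are all placeholders): the listening socket is always open, and the client role only runs when there is actually something to send.

```
<?php
// Always listen; only dial out when this node has something to say.
$server = stream_socket_server('tcp://0.0.0.0:9000', $errno, $errstr);
stream_set_blocking($server, false);

while (true) {
    // Server role: accept whatever arrives, without ever switching modes.
    $peer = @stream_socket_accept($server, 0);
    if ($peer !== false) {
        $request = fread($peer, 8192);
        fwrite($peer, handleRequest($request)); // hypothetical handler
        fclose($peer);
    }

    // Client role, on demand only.
    if (hasPendingUpdates()) { // hypothetical check against the local DB
        $out = @stream_socket_client('tcp://other-host:9000', $errno, $errstr, 5);
        if ($out !== false) {
            fwrite($out, buildUpdate()); // hypothetical payload
            fclose($out);
        }
    }

    usleep(100000); // 100 ms pause per iteration
}
```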
I am trying to write a client-server app.
Basically, there is a Master program that maintains a MySQL database keeping track of the processing done on the server side,
and a Slave program that queries the database to see what to do to keep in sync with the Master. There can be many slaves at the same time.
All the programs must be able to run from anywhere in the world.
For now, I have tried setting up a MySQL database on a shared hosting server to host the DB,
and made C++ programs for the master and slave that use the cURL library to make requests to a PHP file (e.g. www.myserver.com/check.php) located on my hosting server.
The master program calls the URL every second, and some PHP code is executed to keep the database up to date. I did a test with a single slave program that also calls the URL every second and executes PHP code that queries the database.
With that setup, however, my web host suspended my account and told me that I was 'using too much CPU resources' and that I would need a dedicated server ($200 per month rather than $10), based on their analysis of the CPU resources needed. And that was with one master and only one slave, so no more than 5-6 MySQL queries per second. What would it be with 10 slaves?
Am I missing something?
Would there be a better setup than what I was planning to use in order to achieve the syncing mechanism that I need between two and more far apart programs?
I would use Google App Engine for storing the data. You can read about its free quotas and pricing in the App Engine documentation.
I think the syncing approach you are taking is probably fine.
The more significant question you need to ask yourself is: what is the maximum acceptable time between syncs? If you truly need virtually real-time syncing between two databases on opposite sides of the world, then you will be using significant bandwidth, and you will unfortunately have to pay for it, as your host pointed out.
Figure out what is acceptable to you in terms of time. Is it okay for the databases to sync only once a minute? Once every 5 minutes?
Also, when running syncs like this in rapid succession, it is important to make sure they do not overlap: before starting a sync, test whether one is already in progress and has not finished yet. If a sync is still happening, don't start another; if not, go ahead. This prevents a lot of unnecessary overhead and syncs piling up on top of each other.
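One simple way to implement that check in PHP is a lock file with flock(); the lock path and runSync() are placeholders for your own sync routine:

```
<?php
// Skip this run if the previous sync has not finished yet.
$lock = fopen('/tmp/db-sync.lock', 'c');
if (!flock($lock, LOCK_EX | LOCK_NB)) {
    exit(0); // another sync is still running, don't pile up
}
try {
    runSync(); // hypothetical: the actual master/slave sync work
} finally {
    flock($lock, LOCK_UN);
    fclose($lock);
}
```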
Are you using a shared web host? What you are doing sounds like excessive use for a shared (cPanel-type) host; use a VPS instead. You can get an unmanaged VPS with 512 MB of RAM for 10-20 USD per month depending on spec.
Edit: if your bottleneck is CPU rather than bandwidth, have you tried bundling updates inside a transaction? Let's say you are getting 10 updates per second, and you decide you are happy with a propagation delay of 2 seconds. Rather than opening a connection and a transaction for each of those 20 statements, bundle them together in a single transaction that executes every two seconds. That would substantially reduce your CPU usage.
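A sketch of that batching idea with mysqli (the tasks table and fetchBufferedUpdates() are placeholders; the point is one transaction per two-second window instead of one per statement):

```
<?php
// Run this every two seconds (e.g. from a loop or a frequent scheduled task).
$db = new mysqli('db-host', 'user', 'pass', 'sync');
$pending = fetchBufferedUpdates(); // hypothetical: ~20 updates buffered since the last run

$db->begin_transaction();
$stmt = $db->prepare('UPDATE tasks SET state = ?, updated_at = NOW() WHERE id = ?');
foreach ($pending as $row) {
    $stmt->bind_param('si', $row['state'], $row['id']);
    $stmt->execute();
}
$db->commit(); // one commit for the whole batch instead of 20 autocommits
```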
I have a game running on N EC2 servers, each with its own players inside (let's assume it is a self-contained game inside each server).
What is the best way to develop a frontend for this game that gives me near-real-time information on all the players on all servers?
My initial approach was:
Have a general-purpose shared-hosting PHP website polling data from each server (one socket for each server). Because most shared solutions don't really offer permanent sockets, this would require creating and tearing down a connection every 5 seconds or so. Because there is no cron job with that granularity, I would end up using the request of one unfortunate client to perform this update. There is so much wrong here, let's consider this the worst-case scenario.
The best scenario (I guess) would be to create a small EC2 instance with some Python/Ruby/PHP web-based frontend, with a server application designed just for polling the game servers and saving the data in the website's database. Although this should work fine, I was looking for a solution where I don't need to spend that much money (even a micro instance is expensive for such a pet project).
What's the best and cheapest solution for this?
Is there a reason you can't have one server poll the others, stash the results in a JSON file, and then push that file to the web server in question? The clients could then use AJAX to update the listings in near real time pretty easily.
If you don't control the game servers, I'd pass the work of updating the JSON off to one of the random client requests. It's not as bad as you think, though.
Consider the following:
Deliver the (now expired) data to the client, including a timestamp.
Call flush(). (Test that the page is fully rendered; you may need to send whitespace or something to fill the buffer, depending on how the web server is configured. Appending flush(); sleep(4); echo "hi"; to a PHP script should be an easy way to test.)
Call ignore_user_abort() (http://php.net/manual/en/function.ignore-user-abort.php) so your script will continue executing regardless of what the user does.
Poll all the servers and update your file.
The client waits a suitable amount of time before attempting to fetch the updated stats via AJAX.
Yes, that one client ends up with a request that takes a long time, but it doesn't affect their page load, so they might not even notice.
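A sketch of that flush-and-continue trick (stats.json and pollGameServers() are placeholders; depending on the web server you may also need the buffer-filling workaround mentioned above):

```
<?php
ignore_user_abort(true); // keep running even if the visitor disconnects
set_time_limit(0);       // the polling below may take a while

// Serve the last known (possibly stale) stats immediately.
$payload = @file_get_contents('stats.json') ?: '[]';
header('Content-Type: application/json');
header('Content-Length: ' . strlen($payload));
header('Connection: close');
echo $payload;

// Push everything out so the visitor's request completes now.
while (ob_get_level() > 0) {
    ob_end_flush();
}
flush();

// The visitor is no longer waiting: poll each game server and rewrite the
// file that the next request (and the AJAX updates) will read.
file_put_contents('stats.json', json_encode(pollGameServers()));
```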
You don't provide the information needed to make a decision on this. It depends on the number of players, number of servers, number of games, communication between players, amount of memory and CPU needed per game/player, delay and transfer rate of the communication channels, geographical distribution of your players, required update rate, allowed movement of the players, and mutual visibility. A database should not initially be part of the solution, as it only adds extra delay and complexity. Make it work in real time first.
A really cheap option would be to use netnews for this.