I have a web application written in PHP.
One function of this application is a document archive, which is a MySQL database on another server. This archive server is pretty unreliable performance-wise, but not under my control. It often has long table locks, which results in getting a connection but not getting any data.
This often leads to open MySQL connections that saturate the resources of the web application server. As a result, the whole web application becomes slow or inaccessible.
I would like to decouple the two systems.
I thought the logical way would be for my PHP application to abort a SELECT query if it takes longer than 1 or 2 seconds, to free up resources and present the user with a message that the remote system is not responding in time.
But what is the best way to implement such a solution?
UPDATE: the set_time_limit() option looks promising, but it is not fully satisfying, as I'm not able to present the user with a message. At least it might help prevent the saturation of resources, though.
I think you should use a maximum execution time limit.
You can set a timeout on the MySQL side like this.
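A sketch, assuming MySQL 5.7.8 or newer, where the max_execution_time system variable (in milliseconds, SELECT statements only) caps the query server-side:

    -- Cap SELECT statements at 2 seconds for this connection
    -- (max_execution_time is in milliseconds; MySQL 5.7.8+ only).
    SET SESSION MAX_EXECUTION_TIME = 2000;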
Or you can set it in your code like this.
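A minimal mysqli sketch; the host, credentials, and query are placeholders, and MYSQLI_OPT_READ_TIMEOUT requires PHP 7.2+:

    <?php
    // Abort the connection attempt and the wait for data after 2 seconds each.
    $mysqli = mysqli_init();
    $mysqli->options(MYSQLI_OPT_CONNECT_TIMEOUT, 2); // seconds to get a connection
    $mysqli->options(MYSQLI_OPT_READ_TIMEOUT, 2);    // seconds to wait for a result

    if (!@$mysqli->real_connect('archive-host', 'user', 'pass', 'archive')) {
        exit('The remote archive is not responding, please try again later.');
    }

    $result = $mysqli->query('SELECT * FROM documents LIMIT 20');
    if ($result === false) {
        // The read timeout (or a table lock) cut the query off.
        echo 'The remote archive did not respond in time.';
    }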
I think the second solution might be better for you.
Then, if a timeout error is raised, you can tell the user that the remote server did not respond in time.
I have a PHP application that is executed up to one hundred times simultaneously, and very often (it's a Telegram anti-spam bot with 250k+ users).
The script itself makes various DB calls (ticker updates, counters, etc.), but on each run it also loads some more or less 'static' data from the database, like regexes or JSON config files.
My script is also doing image manipulation, so the server's CPU and RAM are sometimes under pressure.
A few days ago I ran into a problem: the OOM killer was killing the mysql server process because apache2 was exhausting the available memory. The mysql server was not restarting automatically, leaving my script broken for hours.
I have already made some code optimisations that let my server breathe, but what I'm looking for now is a caching method to store data between script executions, with the possibility of refreshing it on a time interval.
First I thought about a flat file where I could serialize data, but I would like to know whether that is a good idea performance-wise.
In my case, is there a benefit to caching data over repeating the MySQL queries?
What are the pros/cons regarding speed of access and speed of execution?
Finally, what caching method should I implement?
I know that the simplest solution is to upgrade my server capacity, and I plan to do so soon.
Server is running Debian 11, PHP 8.0
Thank you.
If you could use a NoSQL store to serve those queries, it would speed things up dramatically.
Now, if that is a no-go, you can go old school and keep that "static" data in the filesystem.
You can then create a timer of your own that runs, for example, every 20 minutes to update the files.
When you ask about speed of access and speed of execution, the answer will always be "it depends", but from what you said it would be better to hit the filesystem than to constantly query the database for the same information.
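If you go the flat-file route, a minimal sketch (the function, key, and file names are made up for illustration; $pdo is your existing connection) could look like this:

    <?php
    // Return cached data if it is fresher than $ttl seconds,
    // otherwise rebuild it with $produce() and rewrite the file.
    function cached(string $key, int $ttl, callable $produce)
    {
        $file = sys_get_temp_dir() . '/cache_' . md5($key);
        if (is_file($file) && (time() - filemtime($file)) < $ttl) {
            return unserialize(file_get_contents($file));
        }
        $data = $produce(); // e.g. the MySQL query for the 'static' data
        file_put_contents($file, serialize($data), LOCK_EX);
        return $data;
    }

    // Usage: refresh the regex list from MySQL at most every 20 minutes.
    $regexes = cached('spam_regexes', 1200, function () use ($pdo) {
        return $pdo->query('SELECT pattern FROM regexes')->fetchAll();
    });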
The complexity, consistency issues, etc., lead me to recommend against a caching layer. Instead, let's work a bit more on other optimizations.
OOM implies that something is tuned improperly. Show us what you have in my.cnf. How much RAM do you have? How much RAM does the image processing take? (PHP's image* library is something of a memory hog.) We need to start by knowing how much RAM MySQL can have.
For tuning, please provide the output of SHOW GLOBAL STATUS and SHOW GLOBAL VARIABLES. See http://mysql.rjweb.org/doc.php/mysql_analysis
That link also shows how to gather the slowlog. From it we should be able to find the "worst" queries and work on optimizing them. For "one hundred times simultaneously", even fast queries need to be further optimized. When providing the 'worst' queries, please also provide SHOW CREATE TABLE for the tables involved.
Another technique is to decrease the number of children that Apache is allowed to run; Apache will queue up the rest. "Hundreds" is too many for Apache or MySQL; it is better to delay starting some requests than to have hundreds stumbling over each other.
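With Apache 2.4's mpm_prefork, that limit is MaxRequestWorkers; the numbers below are only an illustration and must be sized to your RAM:

    # e.g. /etc/apache2/mods-available/mpm_prefork.conf (Debian layout)
    <IfModule mpm_prefork_module>
        MaxRequestWorkers  50
        ServerLimit        50
    </IfModule>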
Hi there, we are developing a website for students to take online tests.
We are working with PHP and MySQL.
The questions for all the tests are stored in one table, with a test_id associating each question with its test.
Problem:
As the questions are loaded from the server, they sometimes take time to load.
Since these tests are timed (online tests), the test-taker feels his time is being wasted.
The loading time may be a result of:
a slow internet connection
database search
Question(s):
What is the best way to give a smooth, lag-free experience to the test-taker irrespective of their internet speed and PC configuration?
From your wording, I'm assuming each individual question has its own time limit.
Eliminating a user's slow connection is impossible; if you measure the time on the client to try to work around it, you open yourself up to cheating (the client can hack the JavaScript to report a false time).
However, you can eliminate database query time: set up a websocket server, have the user connect to it when they start the test, preload all of the relevant questions into a queue on the server, and when the user requests a question, immediately record the current time and send the next question from the queue over the websocket connection.
Also make sure that upon receiving the question, the client side JS displays it immediately and doesn't have to e.g. make further AJAX calls or requests before it can display it. If additional information is needed, that should be looked up by your websocket server and bundled in with the question.
By doing this you should be able to get the response time below 50ms if the user has a decent internet connection and is in the same country as your websocket server.
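As a rough illustration of that queueing idea, here is a sketch built on the Ratchet websocket library (my assumption; any websocket server works, and loadQuestionsForTest() is a hypothetical helper you would write):

    <?php
    require 'vendor/autoload.php';

    use Ratchet\MessageComponentInterface;
    use Ratchet\ConnectionInterface;
    use Ratchet\Http\HttpServer;
    use Ratchet\WebSocket\WsServer;
    use Ratchet\Server\IoServer;

    class QuestionFeed implements MessageComponentInterface
    {
        private $queues = []; // connection id => preloaded questions

        public function onOpen(ConnectionInterface $conn)
        {
            // Load every question up front so "next" never touches MySQL.
            $this->queues[$conn->resourceId] = loadQuestionsForTest($conn); // hypothetical
        }

        public function onMessage(ConnectionInterface $from, $msg)
        {
            if ($msg === 'next' && $this->queues[$from->resourceId]) {
                $question = array_shift($this->queues[$from->resourceId]);
                // Record the server-side start time together with the question.
                $from->send(json_encode(['started' => microtime(true), 'question' => $question]));
            }
        }

        public function onClose(ConnectionInterface $conn)
        {
            unset($this->queues[$conn->resourceId]);
        }

        public function onError(ConnectionInterface $conn, \Exception $e)
        {
            $conn->close();
        }
    }

    IoServer::factory(new HttpServer(new WsServer(new QuestionFeed())), 8080)->run();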
You cannot speed up loading regardless of the user's internet connection. Of course, you can (and should) optimize all SQL queries and long-running tasks to have them perform as well as possible.
To avoid issues with the test time running out, I would recommend loading all questions before the timer starts. Then all the data can be stored in the client's local storage (refer to this link for some more info), but please take into account that this will only work if the browser supports local storage.
Another possibility is to load/generate all the data up front and keep it in some server-side cache (like memcached, or a simple file cache). On every new action, that cache can be queried without having to fetch all the data from the database. Of course, this will only speed up the process if the performance issues are in long-running queries, database speed, etc., not if the user's internet connection is too slow.
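For the memcached variant, a short sketch (this assumes the php-memcached extension and a local memcached daemon; $pdo and $testId come from your application):

    <?php
    $mc = new Memcached();
    $mc->addServer('127.0.0.1', 11211);

    $key = 'test_questions_' . $testId;
    $questions = $mc->get($key);
    if ($questions === false) {
        // Cache miss: query MySQL once, keep the result for 10 minutes.
        $stmt = $pdo->prepare('SELECT * FROM questions WHERE test_id = ?');
        $stmt->execute([$testId]);
        $questions = $stmt->fetchAll();
        $mc->set($key, $questions, 600);
    }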
I'm programming a C++ service that makes a SELECT query with LIMIT 1 against a MySQL server every second, computes something, and then makes an INSERT, looping like this forever and ever.
I'd like to detect server overload so I can switch to SELECTs with a bigger LIMIT, for example LIMIT 10, at longer intervals, like every 5 seconds or so. I'm not sure whether this would actually lighten the server's load.
My problem is how to detect these overloads, and I'm not even sure what I mean by overload :) It could be anything, but my application is a web application in PHP (a chat), so overload could be detected on the Apache2 side, or the MySQL side, or by measuring how many users send how many inputs (chat messages) in a time interval. I don't know :/
Thank you!
EDIT: Okay, I turned my C++ application into a socket server and it's really fast that way. Now I'm struggling with memory leaks, but that's another story.
So thank you @brianbeuning for the helpful thoughts about my problem.
Better to get rid of that forever-and-ever loop; it's not a good idea.
If that loop really is a must, then use some caching technique.
For detecting "overload" (I would call it high MySQL CPU usage), try calling external commands supported by the operating system.
For example, if you are on Linux, play with the ps command.
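For instance, a rough sketch in PHP (the 30% threshold is an arbitrary assumption, and the same shell call would work from your C++ service):

    <?php
    // Ask ps for mysqld's CPU usage; back off when the server looks busy.
    $cpu = (float) shell_exec('ps -C mysqld -o %cpu= | head -n 1');
    if ($cpu > 30.0) {
        // overloaded: switch to LIMIT 10 and poll every 5 seconds instead
    }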
EDIT:
I realize now that you are programming a chat server.
Using MySQL as a middleman is NOT a good idea.
Try solving this without MySQL, and then if you need to save the chat log, save it to MySQL occasionally (e.g. every 10 seconds or so).
I bet it is a CPU hog right now with just 20 intensive users.
Try to make direct client-to-client communication that does not require the server (use the server only to establish the connection between two clients).
Another approach would be to buffer the data in your app and use a connection pool of sorts to manage the load. Keep a rolling buffer of data that needs to be inserted, and manage the 'limit' based on the size of the buffer.
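A sketch of such a buffer in PHP (the table and the flush threshold are illustrative): rows pile up in memory and are flushed in one multi-row INSERT.

    <?php
    function queueMessage(PDO $pdo, array &$buffer, string $user, string $text): void
    {
        $buffer[] = [$user, $text];
        if (count($buffer) >= 50) { // flush threshold is arbitrary
            $placeholders = rtrim(str_repeat('(?, ?),', count($buffer)), ',');
            $stmt = $pdo->prepare("INSERT INTO messages (user, text) VALUES $placeholders");
            $stmt->execute(array_merge(...$buffer));
            $buffer = [];
        }
    }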
My Linux server's websites keep going down again and again, but SSH, FTP, etc. are still alive. So I had a look at the server through SSH and used the top command, which lists all the processes. It shows that when certain PHP pages are executed, MySQL's CPU usage reaches 100%. Is there any command or log that can be used to find out which PHP pages are causing so much MySQL usage? Thank you...
You may want to take a look at your Apache log format to see if it includes the %D parameter, as this indicates the amount of time taken to serve a request in microseconds.
If you exclude everything but requests to PHP scripts, you should get an idea of which scripts are taking the longest, suggesting a high execution time. Obviously this could also mean a very large response payload...
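If %D is missing, adding it is a couple of lines in the Apache config (the format name "timed" is arbitrary; the log path assumes a Debian layout):

    LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
    CustomLog ${APACHE_LOG_DIR}/access.log timed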
There are multiple aspects to resource consumption.
As mobius mentioned, you can use SHOW FULL PROCESSLIST in MySQL to see what is currently running. Look at the processes taking longer than you would expect and check out their queries to find hints about where they originate in your application.
The problem may not be with the application. It might simply be a matter of tuning MySQL, which most of the time means adding or changing indexes. EXPLAIN is the command that will help you analyze the execution plan MySQL decided to use. Reading EXPLAIN output takes some practice. The best reference I have is High Performance MySQL.
You can also use the MySQL slow query log to get information about slow queries that happen when you are not in front of the server.
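Enabling it takes a few lines in my.cnf (variable names as of MySQL 5.6+; the 1-second threshold is just a starting point):

    [mysqld]
    slow_query_log      = 1
    slow_query_log_file = /var/log/mysql/slow.log
    long_query_time     = 1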
If MySQL is running at 100%, you will probably find the problem from there. If you really want to track the usage from the PHP side, you can set up XHProf, a high-performance profiler created by Facebook to run on production sites. You can set it up to sample one request out of 100 and get a bigger picture of the performance of your site. There are a few articles out there that explain how to set it up.
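A sampling sketch, assuming the xhprof extension is loaded (the output path and the 1-in-100 rate are arbitrary choices):

    <?php
    // Profile roughly 1 request in 100 and dump the raw data for later analysis.
    if (mt_rand(1, 100) === 1) {
        xhprof_enable();
        register_shutdown_function(function () {
            $data = xhprof_disable();
            file_put_contents('/tmp/xhprof_' . uniqid() . '.json', json_encode($data));
        });
    }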
Finally, XDebug and KCacheGrind can be used in development to profile one request at a time.
If MySQL is getting stuck at 100%, then you've probably got some badly tuned MySQL queries inside one of your PHP applications. That time is clocked up in the MySQL daemon, so it won't show up in the %D value. It could also be out-of-date indexes.
If you have access to the DB at the command prompt through SSH, you could try running ANALYZE TABLE and OPTIMIZE TABLE on any large tables. Also look at "The Slow Query Log" in the MySQL documentation.
Unfortunately, fixing this will probably require you to get into the application internals.
mytop - http://jeremy.zawodny.com/mysql/mytop/ (a live view of SHOW FULL PROCESSLIST on your MySQL)
Xdebug Profiler - http://xdebug.org/docs/profiler
I am trying to write a client-server app.
Basically, there is a Master program that needs to maintain a MySQL database keeping track of the processing done on the server side,
and a Slave program that queries the database to see what to do to keep in sync with the Master. There can be many slaves running at the same time.
All the programs must be able to run from anywhere in the world.
For now, I have tried setting up a MySQL database on a shared hosting server to host the DB,
and made C++ programs for the master and slave that use the cURL library to make requests to a PHP file (e.g. www.myserver.com/check.php) located on my hosting server.
The master program calls the URL every second, and some PHP code is executed to keep the database up to date. I did a test with a single slave program that also calls the URL every second and executes PHP code that queries the database.
With that setup, however, my web host suspended my account and told me that I was 'using too much CPU resources' and that I would need a dedicated server ($200 per month rather than $10) based on their analysis of the CPU resources required. And that was with one Master and only one Slave, so no more than 5-6 MySQL queries per second. What would it be with 10 slaves, then?
Am I missing something?
Would there be a better setup than the one I was planning, to achieve the syncing mechanism that I need between two or more far-apart programs?
I would use Google App Engine for storing the data. You can read about free quotas and pricing here.
I think the syncing approach you are taking is probably fine.
The more significant question you need to ask yourself is: what is the maximum acceptable time between syncs? If you truly need virtually realtime syncing between two databases on opposite sides of the world, then you will be using significant bandwidth, and you will unfortunately have to pay for it, as your host pointed out.
Figure out what is acceptable to you in terms of time. Is it okay for the databases to sync only once a minute? Once every 5 minutes?
Also, when running syncs like this in rapid succession, it is important to make sure they do not overlap: before a sync starts, test whether one is already in progress and has not finished yet. If a sync is still happening, don't start another; if not, go ahead. This will prevent a lot of unnecessary overhead and syncs piling up on top of each other.
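A simple way to enforce that in PHP is a non-blocking lock file (the path is arbitrary and runSync() is a hypothetical stand-in for your actual sync routine):

    <?php
    $lock = fopen('/tmp/sync.lock', 'c');
    if (!flock($lock, LOCK_EX | LOCK_NB)) {
        exit("A sync is already running; skipping this round.\n");
    }
    runSync(); // hypothetical: the actual sync work
    flock($lock, LOCK_UN);
    fclose($lock);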
Are you using a shared web host? What you are doing sounds like excessive use for a shared (cPanel-type) host; use a VPS instead. You can get an unmanaged VPS with 512 MB of RAM for 10-20 USD per month depending on spec.
Edit: if your bottleneck is CPU rather than bandwidth, have you tried bundling updates inside a transaction? Let's say you are getting 10 updates per second, and you decide you are happy with a propagation delay of 2 seconds. Rather than opening a connection and a transaction for each of those 20 statements, bundle them together in a single transaction that executes every two seconds. That would substantially reduce your CPU usage.
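A sketch of that batching with PDO (the table, columns, and $pendingUpdates are made-up names for illustration):

    <?php
    // Commit ~2 seconds' worth of updates in one transaction
    // instead of one transaction per statement.
    $pdo->beginTransaction();
    $stmt = $pdo->prepare('UPDATE state SET value = ? WHERE id = ?');
    foreach ($pendingUpdates as $u) { // the ~20 statements queued up so far
        $stmt->execute([$u['value'], $u['id']]);
    }
    $pdo->commit();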