I'm running an app that talks to a MySQL database using native PDO; I'm not using Laravel or any other framework.
So most of the API's logic is to fetch from the DB and/or insert into it.
When running in production I can see high memory usage from the MySQL tasks; see the top output below:
I have a couple of questions.
Is such high memory usage normal? And what's the best practice for managing PHP-MySQL connections in a multi-threaded, production-level app?
When an intensive query is running (fetching historical data to plot a graph), CPU usage jumps to 100% until the query finishes, then drops back to 2-3%. During that time the system is completely unresponsive.
I'm considering hardware-based solutions, such as separating the DB server from the app server (currently they both run on the same node) and managing a cluster with read-only nodes.
But I'd like to know if there are other options, and what's the most efficient way to handle PHP-MySQL connections.
You could also check whether the queries are fully optimized, and whether connections are closed when no longer needed (see the sketch below).
You can also reduce the load on the MySQL server by shifting some of the work to PHP.
Also take a look at your query_cache_size, innodb_io_capacity, innodb_buffer_pool_size and max_connections settings in your my.cnf.
Sometimes upgrading PHP and adding some caching at the Apache level can also help reduce RAM usage.
https://phoenixnap.com/kb/improve-mysql-performance-tuning-optimization
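To make the connection-handling point concrete, here is a minimal PDO sketch; the DSN, credentials, and table names are placeholders, not taken from the question:

```php
<?php
// One connection per request, reused for every query in that request.
// DSN and credentials below are placeholders.
$pdo = new PDO(
    'mysql:host=127.0.0.1;dbname=app;charset=utf8mb4',
    'app_user',
    'secret',
    [
        PDO::ATTR_ERRMODE            => PDO::ERRMODE_EXCEPTION, // fail loudly
        PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
        PDO::ATTR_EMULATE_PREPARES   => false, // real server-side prepares
    ]
);

// Prepared statements let the server reuse the parse/plan work.
$stmt = $pdo->prepare('SELECT id, name FROM users WHERE id = ?');
$stmt->execute([42]);
$row = $stmt->fetch();

// Drop references as soon as you are done; PDO closes the connection
// once no references to the object remain.
$stmt = null;
$pdo  = null;
```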
Normally, innodb_buffer_pool_size is set to about 70% of available RAM, and MySQL will consume that, plus other space that it needs. Scratch that; you have only 1GB of RAM? The buffer_pool needs to be a smaller percentage. It seems that the current value is pretty good.
Your top shows only 37.2% of RAM in use by MySQL -- the many threads are sharing the same memory.
Sort by memory (in top, use < or >). Something else is consuming a bunch of RAM.
query_cache_size should be zero.
max_connections should be, say, 10 or 20.
CPU -- You have only one core? Then a big query hogging the CPU is to be expected. It also suggests that I/O is not the issue. You could show us the query, plus SHOW CREATE TABLE and EXPLAIN SELECT ...; perhaps we can suggest how to speed it up.
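If it helps, both diagnostics can be gathered straight from PHP; the query and table below are made-up stand-ins for the graph query:

```php
<?php
// $pdo is an open PDO connection; 'readings' is a hypothetical table.
$sql = 'SELECT taken_at, value FROM readings WHERE sensor_id = 7 ORDER BY taken_at';

// EXPLAIN shows which index (if any) MySQL would use for the query.
foreach ($pdo->query('EXPLAIN ' . $sql) as $row) {
    print_r($row);
}

// SHOW CREATE TABLE reveals the engine, column types, and indexes.
print_r($pdo->query('SHOW CREATE TABLE readings')->fetch());
```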
Related
I have a PHP application that is executed up to one hundred times simultaneously, and very often (it's a Telegram anti-spam bot with 250k+ users).
The script itself makes various DB calls (ticker updates, counters, etc.), but each run it also loads some more or less 'static' data from the database, like regexes or JSON config files.
My script is also doing image manipulation, so the server's CPU and RAM are sometimes under pressure.
A few days ago I ran into a problem: the OOM killer was killing the MySQL server process due to lack of available memory. The MySQL server was not restarting automatically, leaving my script broken for hours.
I've already made some code optimisations that let my server breathe, but what I'm looking for now is a caching method to store data between script executions, with the possibility of refreshing it on a time interval.
First I thought about a flat file where I could serialize the data, but I'd like to know whether that's a good idea performance-wise.
In my case, is there a benefit to caching the data over re-running the MySQL queries?
What are the pros/cons regarding speed of access and speed of execution?
Finally, what caching method should I implement?
I know that the simplest solution is to upgrade my server capacity, and I plan to do so soon.
Server is running Debian 11, PHP 8.0
Thank you.
If you could use a NoSQL store to serve those queries, it would speed things up dramatically.
If that's a no-go, you can go old school and keep that "static" data in the filesystem.
You can then create a timer of your own that runs, for example, every 20 minutes to update the files.
When you ask about speed of access and speed of execution, the answer will always be "it depends", but from what you've described it is better to read from the filesystem than to constantly query the database for the same info.
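A minimal sketch of that flat-file approach, assuming serialized PHP data; the path, the 20-minute TTL, and the query are made up:

```php
<?php
// Return cached data while the file is fresh; otherwise rebuild it.
function cached(string $file, int $ttl, callable $loader)
{
    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return unserialize(file_get_contents($file));
    }
    $data = $loader(); // e.g. the MySQL query for the "static" data
    file_put_contents($file, serialize($data), LOCK_EX);
    return $data;
}

// Usage: re-read the regexes from MySQL at most every 20 minutes.
$regexes = cached('/var/cache/bot/regexes.dat', 20 * 60, function () use ($pdo) {
    return $pdo->query('SELECT pattern FROM spam_regexes')->fetchAll();
});
```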
The complexity, consistency issues, etc., lead me to recommend against a caching layer. Instead, let's work a bit more on other optimizations.
OOM implies that something is tuned improperly. Show us what you have in my.cnf. How much RAM do you have? How much RAM does the image processing take? (PHP's image* library is something of a memory hog.) We need to start by knowing how much RAM MySQL can have.
For tuning, please provide GLOBAL STATUS and VARIABLES. See http://mysql.rjweb.org/doc.php/mysql_analysis
That link also shows how to gather the slowlog. In it we should be able to find the "worst" queries and work on optimizing them. For "one hundred times simultaneously", even fast queries need to be further optimized. When providing the 'worst' queries, please provide SHOW CREATE TABLE.
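If the slowlog is not already enabled, it can be switched on at runtime; a sketch only (requires SUPER or an equivalent privilege, and the 0.1s threshold is just a starting point):

```php
<?php
// Log every statement slower than 0.1s to the slow query log file.
$pdo->exec("SET GLOBAL slow_query_log = 'ON'");
$pdo->exec("SET GLOBAL long_query_time = 0.1"); // applies to new connections
$pdo->exec("SET GLOBAL log_output = 'FILE'");   // file named by slow_query_log_file
```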
Another technique is to decrease the number of children that Apache is allowed to run; Apache will queue up the rest. "Hundreds" is too many for Apache or MySQL; it is better to delay starting some of them than to have hundreds stumbling over each other.
I'm using my own PHP script with MySQL and when there are many users on the site I can see that CPU Load is somewhat high and RAM usage is low. For example, CPU Usage is about 45% and used RAM is 3GB out of 64GB.
How can I make it use more RAM and less CPU? I'm using MyISAM as the MySQL engine, PHP 7.0. I don't need a step-by-step answer, but I would appreciate any pointers, because I don't know how to get started.
I have a dedicated server using cPanel, WHM, Apache and I have full control over what is on the server.
One good way to use RAM to relieve CPU load is caching.
That is, if your app needs some data results that are pretty computationally expensive to produce, you should pre-compute them and store them in RAM, then the next time your app needs those results, they can be fetched from the cache, probably a lot more cheaply than recomputing them.
Popular tools for this are Memcached and Redis.
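As a rough sketch with the Memcached extension (the key name, TTL, and query are invented for illustration):

```php
<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$key    = 'report:category-counts';
$report = $mc->get($key);

if ($mc->getResultCode() === Memcached::RES_NOTFOUND) {
    // Cache miss: run the expensive query once, keep the result 10 minutes.
    $report = $pdo->query(
        'SELECT category, COUNT(*) AS n FROM items GROUP BY category'
    )->fetchAll();
    $mc->set($key, $report, 600);
}
```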
I have a shop on a PHP platform (badly developed) with a lot of bad queries (long queries without indexes, ORDER BY RAND(), dynamic counting, ...).
I don't have the option of changing the queries for now, but I have to tune the server to stay alive.
I've tried everything I know. I've handled very big databases before, which I optimized by changing the MySQL engine and tuning it, but this is the first project where everything goes down.
Current situation:
1. Frontend (PHP) - AWS m4.large instance, Amazon Linux (latest), PHP 7.0 with opcache enabled (apache module), mod_pagespeed enabled with external Memcached on AWS Elasticache (t2.micro, 5% average load)
2. SQL - Percona DB with TokuDB engine on AWS c4.xlarge instance (50% average load)
I need to lower the instances, mainly the c4.xlarge, but if I switch down to c4.large, after a few hours there is a big overload.
The database has only 300MB of data; I tried the query cache, InnoDB, XtraDB, ... with no success, and the traffic is also very low. PHP uses the MySQLi extension.
Do you know any possible solution, how to lower the load without refactoring the queries?
Use of www.mysqlcalculator.com would be a quick way to get a brain check on about a dozen memory consumption factors in less than 2 minutes.
Would love to see your SHOW GLOBAL STATUS and SHOW GLOBAL VARIABLES, if you could get them posted, to allow analysis of the activity and configuration. 3 minutes of your GENERAL_LOG during a busy time would also be interesting and could identify troublesome/time-consuming SELECTs, etc.
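To capture that 3-minute GENERAL_LOG sample, something along these lines works during the busy window (needs SUPER or an equivalent privilege; a sketch only):

```php
<?php
// The general log can be toggled at runtime, no restart needed.
$pdo->exec("SET GLOBAL log_output = 'FILE'");
$pdo->exec("SET GLOBAL general_log = 'ON'");
sleep(180); // ~3 minutes of traffic
$pdo->exec("SET GLOBAL general_log = 'OFF'");
// The resulting file path is in the general_log_file variable.
```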
Lately my site has been getting about 2.5 million hits per day (on average). I record hits to each and every page (it's an adult site), so I'm able to have a Top 10 sort of thing that shows top websites, models, galleries and images. I record the hit as well as the user's IP, so those individual sections only get incremented once per user every 24 hours. The problem is that this updates the MySQL database on every hit, so of course my site has started getting 504 errors.
I looked around and saw that memcached might be a solution: store hits in memory and push them to the database every X minutes. I also saw some people suggest MongoDB, which to my understanding is also memory-based storage. Would this be the way to go? Would you recommend memcached or MongoDB for what I'm trying to do? Or is this not the way to proceed, because it just means more MySQL calls in a shorter time frame (one huge batch every minute would mean 60 seconds' worth of hits, versus smaller batches every second)?
I have both memcached and MongoDB installed on my server, so either is an option.
There may be much easier ways to obtain better database performance without new software packages; the volumes you mention are not particularly large.
I'll list just a few of many possibilities (item 2 has a sketch after this list).
1. If you are on a version of MySQL older than 5.6, then updating to 5.6+ will almost certainly yield a very significant improvement, because the storage engine is much better from 5.6 on.
2. If the busiest tables use a storage engine other than InnoDB, switch them to InnoDB. (You can do this with phpMyAdmin.)
3. Get some help tuning the buffer sizes in my.ini (it takes some skill) and/or increase the RAM on the database server(s).
4. Consider spreading the workload across more drives and/or moving part or all of the database to solid-state drives (or better conventional drives).
5. If the database server(s) are memory- or compute-bound, then bigger or more servers may be needed.
6. Make sure the bottleneck is not external to the database server(s).
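For item 2, a hedged sketch of the engine switch done from PHP instead of phpMyAdmin (the table name is made up):

```php
<?php
// Check the current engine of the busiest table first.
$engine = $pdo->query(
    "SELECT ENGINE FROM information_schema.TABLES
     WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'hits'"
)->fetchColumn();

if ($engine !== 'InnoDB') {
    // Rebuilds the table and briefly locks it; run during a quiet period.
    $pdo->exec('ALTER TABLE hits ENGINE = InnoDB');
}
```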
The way this site (stackoverflow.com) implements it is by maintaining an in-memory data structure of question views, which gets flushed to the DB every 15 minutes or so. There is no need to stress the DB by saving each hit -- that's too much I/O. The in-memory structure could live inside your application as a map of IP to hits/time, or it could be in memcached; I don't think you really need memcached for this purpose.
So the general idea to do batch updates that you had is a good one.
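A sketch of that batching idea using APCu as the in-memory map; the names are invented, and it assumes mod_php/PHP-FPM, where APCu memory is shared across requests:

```php
<?php
// Per page view: a shared-memory increment instead of a DB write.
function record_hit(PDO $pdo, int $imageId): void
{
    $key = "hits:$imageId";
    apcu_add($key, 0); // no-op if the counter already exists
    apcu_inc($key);

    // apcu_add() succeeds only when the key is absent, so exactly one
    // request per 60-second window wins the right to flush.
    if (apcu_add('hits:flush-lock', 1, 60)) {
        flush_hits($pdo);
    }
}

// One batched UPDATE per counter instead of one write per hit.
function flush_hits(PDO $pdo): void
{
    $stmt = $pdo->prepare('UPDATE images SET hits = hits + ? WHERE id = ?');
    foreach (new APCUIterator('/^hits:\d+$/') as $entry) {
        [, $imageId] = explode(':', $entry['key']);
        $stmt->execute([$entry['value'], (int) $imageId]);
        apcu_delete($entry['key']); // hits arriving in between can be lost
    }
}
```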
I am seeing very high CPU spikes from the mysqld process (greater than 100%; I even saw 300% at one point). My load averages are around 0.25, 0.34, 0.28.
I read this great post about this issue: MySQL high CPU usage
One of the main suggestions there is to disable persistent connections. So I checked my php.ini: mysql.allow_persistent = On and mysql.max_persistent = -1, which means no limit.
This raises a few questions for me before changing anything just to be sure:
If my mysqld process is spiking over 100% every couple of seconds, shouldn't my load averages be higher than they are?
What will disabling persistent links do -- will my scripts continue to function as-is?
If I turn this off and reload PHP, what does it mean for my current users, given that there will be many active users?
EDIT:
CPU Info: Core2Quad Q9400 2.6 GHz
Persistent connections won't use any CPU by themselves - if nothing's using a connection, it's just sitting idle and only consumes a bit of memory and occupies a socket.
Load averages are just that: averages. If you have a process that alternates between 0% and 100% ten times a second, you'd get a load average of 0.5. They're good for spotting long-term sustained high CPU, but by their nature they hide/obliterate signs of spikes.
Persistent connections with MySQL are usually not needed. MySQL has a relatively fast connection protocol, so any time savings from using persistent connections are fairly minimal. The downside is that once a connection goes persistent, it can be left in an inconsistent state. E.g., if an app using the connection dies unexpectedly, MySQL will not see that and start cleaning up. This means that any server-side variables created by the app, any locks, any transactions, etc. will be left in the state they were in when the app crashed.
When the connection gets re-used by another app, you'll start out with what amounts to dirty dishes in the sink and an unflushed toilet. It can quite easily cause deadlocks because of the dangling transactions/locks - the new app won't know about them, and the old app is no longer around to relinquish those.
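For reference, here is the knob in question on the PDO side (with mysqli you get the same by prefixing the host with "p:"); a sketch with placeholder credentials, not a recommendation either way:

```php
<?php
$dsn  = 'mysql:host=127.0.0.1;dbname=app'; // placeholder values
$user = 'app_user';
$pass = 'secret';

// Default: a fresh connection per request, torn down cleanly by MySQL
// when the script ends. Usually the safe choice.
$pdo = new PDO($dsn, $user, $pass, [PDO::ATTR_PERSISTENT => false]);

// Persistent: the connection is pooled and reused by later requests --
// together with any session variables, locks, or open transactions a
// previous script left behind if it died mid-flight.
$pdoPersistent = new PDO($dsn, $user, $pass, [PDO::ATTR_PERSISTENT => true]);
```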
Spikes are fine. This is MySQL doing work. Your load average seems appropriate.
Disabling persistent links simply means that the scripts cannot reuse an existing connection to the database. I wouldn't recommend disabling them. At the very least, if you do want to disable them, do it at the application layer rather than in MySQL. This might even increase load slightly, depending on the conditions.
Finally, DB persistence has nothing to do with the users on your site (generally). Users make a request, and once all of the page resources are loaded, that's it until the next request (except in a few specific cases). In any case, while the request is happening, the script will still be connected to the DB.