How to Optimize MySQL (CentOS) - php

I'm having trouble optimizing MySQL on my VPS. I have a plan at RamNode with the following specs:
- Intel® Xeon® CPU E3-1240 V2 @ 3.40GHz (4 cores)
- 4 GB RAM
- 135 GB SSD RAID 10
One of the applications I host has problems: it is slow, and sometimes it fails with the error "max user connections".
Below are the results from MySQLTuner:
Storage Engine Statistics
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
[--] Data in MyISAM tables: 136M (Tables: 300)
[--] Data in InnoDB tables: 44M (Tables: 202)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
[!!] Total fragmented tables: 220
Performance Metrics
[--] Up for: 1d 20h 25m 13s (3M q [23.681 qps], 251K conn, TX: 9B, RX: 605M)
[--] Reads / Writes: 57% / 43%
[--] Total buffers: 528.0M global + 3.6M per thread (400 max threads)
[!!] Query cache prunes per day: 7322
[!!] Sorts requiring temporary tables: 69% (144K temp sorts / 208K sorts)
[!!] Joins performed without indexes: 21719
-------- Recommendations -----------------------------------------------------
General recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance. Enable the slow query log to troubleshoot bad queries. Adjust your join queries to always utilize indexes
Variables to adjust:
query_cache_size (> 64M)
sort_buffer_size (> 2M)
read_rnd_buffer_size (> 236K)
join_buffer_size (> 128.0K, or always use indexes with joins)
Below is my my.cnf:
[mysqld]
max_connections = 400
max_user_connections=40
key_buffer_size = 256M
myisam_sort_buffer_size = 16M
read_buffer_size = 1M
table_open_cache = 2048
thread_cache_size = 128
wait_timeout = 20
connect_timeout = 10
tmp_table_size = 128M
max_heap_table_size = 64M
max_allowed_packet=268435456
net_buffer_length = 5500
max_connect_errors = 10
concurrent_insert = 2
read_rnd_buffer_size = 242144
bulk_insert_buffer_size = 2M
query_cache_limit = 2M
query_cache_size = 64M
query_cache_type = 1
query_prealloc_size = 87382
query_alloc_block_size = 21845
transaction_alloc_block_size = 2730
transaction_prealloc_size = 1364
max_write_lock_count = 2
log-error
external-locking=FALSE
open_files_limit=15000
default-storage-engine=MyISAM
innodb_file_per_table=1
[mysqld_safe]
[mysqldump]
quick
max_allowed_packet = 8M
[isamchk]
key_buffer = 128M
sort_buffer = 128M
read_buffer = 64M
write_buffer = 64M
[myisamchk]
key_buffer = 128M
sort_buffer = 128M
read_buffer = 64M
write_buffer = 64M
#### Per connection configuration ####
sort_buffer_size = 2M
join_buffer_size = 2M
thread_stack = 192K
log-slow-queries
If you can help me, thank you :)

You seem to be using a mixture of MyISAM and InnoDB tables. It would be best to settle on one or the other - almost certainly InnoDB is going to be better for optimisation purposes.
You should convert all tables in your solution to InnoDB. If your solution can create new tables, change the default-storage-engine type to InnoDB as well.
default-storage-engine=InnoDB
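As a sketch of the conversion itself: each existing table is switched with ALTER TABLE (`my_table` and `mydb` below are placeholder names, not from the question), and information_schema can generate the statements for you in bulk:

```sql
-- Convert one table (placeholder name):
ALTER TABLE my_table ENGINE=InnoDB;

-- Generate an ALTER statement for every MyISAM table in a schema
-- (replace 'mydb' with your database name, then run the output):
SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name,
              '` ENGINE=InnoDB;') AS stmt
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema = 'mydb';
```

Note that ALTER TABLE rebuilds the table, so expect it to take a while on larger tables.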
Then, since you have plenty of RAM, tune to ensure that all your data fits in RAM, so MySQL doesn't have to keep doing expensive disk reads all the time. Set innodb_buffer_pool_size to be at least 1G, maybe more if your database is going to grow rapidly. You can probably safely go up to 2G, so long as you don't have anything else that's really ram intensive running on this VPS.
innodb_buffer_pool_size=2G
Currently you are running with the default InnoDB config options, which looks OK (144 MB pool size, data size of only 44 MB), but you may as well configure for growth and take advantage of the RAM you have at your disposal. Also, if you convert the ~136 MB of MyISAM tables to InnoDB (recommended), then you really need this figure to be higher. This setting is probably the most important, and likely to make the biggest difference to performance.
Also, as you are hitting the max number of connections, you need to increase that value too. Your VPS should be able to handle more than the default of 40 - try anywhere between 50-100, depending on how many concurrent users/connections you expect.
max_user_connections=100
The stats also show that you have a large number of queries being executed without indexes
[!!] Joins performed without indexes: 21719
This needs attention. If it's not obvious to work out which fields need indexing (normally any fields used in join statements, and some fields used for searching and filtering, especially if they are numeric and have only a limited selection of values), you can try running queries in your favourite mysql client, with the EXPLAIN statement. This will give you detailed information about which parts of the query are performing poorly.
See the Mysql manual https://dev.mysql.com/doc/refman/5.6/en/explain.html for more information.
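As a minimal illustration (the table and column names here are hypothetical, not from the question), you simply prefix the slow query with EXPLAIN:

```sql
-- Hypothetical two-table join; prefix any slow SELECT with EXPLAIN:
EXPLAIN SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.status = 'pending';
-- In the output: type = ALL on a large table means a full scan,
-- and key = NULL means no index was used for that table.
```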
It is always worth trying the addition of indexes to see whether they improve query performance. If they don't, remove them again, as indexes can make insert/update operations more expensive in terms of server resource (as the indexes need to be updated as well as the underlying data).
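A sketch of that add-measure-remove cycle, with hypothetical table and column names:

```sql
-- Index the columns used for joining and filtering:
ALTER TABLE orders ADD INDEX idx_customer_status (customer_id, status);
-- ...re-run the slow query (with EXPLAIN) and compare timings.
-- If there is no improvement, drop the index again:
ALTER TABLE orders DROP INDEX idx_customer_status;
```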
I highly recommend a MySQL GUI such as SQLyog for playing around with indexing. The MySQL client tools are also OK.
For more details see posts such as https://www.percona.com/blog/2007/11/01/innodb-performance-optimization-basics/ (a bit old but still talks sense).
If you have a compelling reason why you'd want to use MyISAM in preference to InnoDB (and I really can't think of one), then you would need different advice as the recommendations for innodb pool size are of no use to you.

Run OPTIMIZE TABLE to defragment tables for better performance.
That's bogus advice. It is almost never useful, especially for InnoDB.
2G for the buffer pool is dangerously large for a tiny 4GB system, especially while you have MyISAM tables in use.
While you have a mixture:
key_buffer_size = 200M
innodb_buffer_pool_size = 500M
After you switch to InnoDB:
key_buffer_size = 30M
innodb_buffer_pool_size = 1000M
Swapping is much worse than lowering a value.
See my tips on converting from MyISAM to InnoDB.
The Query cache is doing a lot of prunes. Every write to a table forces a prune of all entries for that table. In general, the QC should be turned off for production servers. Raising the size above 64M would make things worse.
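A minimal way to turn the query cache off (this takes effect immediately since the cache is currently enabled; mirror the settings in my.cnf so they survive a restart):

```sql
SET GLOBAL query_cache_type = OFF;
SET GLOBAL query_cache_size = 0;
-- and in the [mysqld] section of my.cnf:
--   query_cache_type = 0
--   query_cache_size = 0
```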
max_user_connections=40
Is one "user" connecting more than 40 times at once? Or do you have Apache with MaxClients > 40?
Show us a couple of your slow queries, together with SHOW CREATE TABLE; we may be able to speed them up.

Related

Compute Engine MYSQL Server CPU Strange

I couldn't think what else to title this strange problem.
We have a "Worker" Compute Engine instance which is a MySQL slave. Its primary role is to process a large set of data and then place it back on the master, all handled via a PHP script.
Processing the data takes roughly 4 hours to complete. During this time we noticed the following CPU pattern.
What you can see above is solid 50% CPU starting after a server reboot. Then, after about 2 hours, it starts to produce an ECG-style pattern on the CPU: roughly every 5-6 minutes the CPU spikes to ~48%, then drops over the next 5 minutes.
My question is: why? Can anyone please explain? We ideally want this server to be maxing out its CPU at 100% (50% shown, as there are 2 cores).
The spec of the server: 2 vCPUs with 7.5 GB memory.
As mentioned, if we can have this running at full throttle it would be great. Below is the my.cnf:
symbolic-links=0
max_connections=256
innodb_thread_concurrency = 0
innodb_additional_mem_pool_size = 1G
innodb_buffer_pool_size = 6G
innodb_flush_log_at_trx_commit = 1
innodb_io_capacity = 800
innodb_flush_method = O_DIRECT
innodb_log_file_size = 24M
query_cache_size = 1G
query_cache_limit = 512M
thread_cache_size = 32
key_buffer_size = 128M
max_allowed_packet = 64M
table_open_cache = 8000
table_definition_cache = 8000
sort_buffer_size = 128M
read_buffer_size = 8M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 128M
tmp_table_size = 256M
query_cache_type = 1
join_buffer_size = 256M
wait_timeout = 300
server-id = 2
relay-log = /var/log/mysql/mysql-relay-bin.log
log_bin = /var/log/mysql/mysql-bin.log
log-error=/var/log/mysqld.log
read-only = 1
innodb_flush_log_at_trx_commit=2
I have cleaned up the above to remove any configs with private information, which are not relevant to performance.
UPDATE
I have noticed that when the CPU starts dropping during the heartbeat section of the graph, the PHP script is no longer running. This seemed impossible, as I know the script takes 4 hours. There are no errors, and after another 4 hours the data is where I expected it.
Changing innodb_io_capacity = 800 to 1500 will likely reduce your 4 hour elapsed time to process by raising the limit to what you know you can achieve with your slave processing.
For the 7.5 GB environment you indicated, your configuration has
innodb_additional_mem_pool_size=1G
innodb_buffer_pool_size=6G
query_cache_size=1G
so before you start, you are overcommitted.
Another angle to consider: with
max_connections=256
max_allowed_packet=64M
a fully busy 256 connections could need 16 GB+ just for this function to survive.
It is unlikely that max_allowed_packet at 64M is reasonable.
Changing read_rnd_buffer_size = 4M to SET GLOBAL read_rnd_buffer_size=16384; could be significant on your slave, then 24 hours later on the master. They can be different, but if the change significantly reduces your 4 hours on the slave, implement it on both instances. Let us know what this single change does for you, please.
The 50% CPU utilization you are seeing is the script maxing out the single core that it is capable of utilizing, as indicated by PressingOnAlways recently. You cannot tune around a limit in your running script.
For a more thorough analysis, provide from SLAVE AND MASTER
RAM size (nnG)
SHOW GLOBAL STATUS
SHOW GLOBAL VARIABLES
SHOW ENGINE INNODB STATUS
CPU % is measured by all the cores - so 100% cpu usage == both cores maxing out. PHP by default runs in a single thread and does not utilize multi-cores. The 50% cpu utilization you are seeing is the script maxing out the single core that it is capable of utilizing.
In order to utilize 100% cpu, consider spawning 2 PHP scripts that work on 2 separate datasets - e.g. script 1 processes records 1-1000000, while script 2 processes 1000001-2000000.
Other option is to rewrite the script to utilize threads. You may want to consider changing the language altogether for something that is more conducive to threads, like Golang? Though this might not be necessary if the main work is done within mysql.
The other issue you're seeing when the graph is below 50% may be due to IO wait. It's hard to tell from a graph though, you may be having a data flow transfer bottleneck where your CPU isn't working and waiting while large bits of data is transferred.
Optimizing CPU utilization is an exercise in finding the bottlenecks and removing them - good luck.
A 'Monitoring Service' may have been enabled that periodically captures a 'health check' of your system, since the spikes appear to be on a 6-minute cycle.
SHOW GLOBAL STATUS LIKE 'Com_show_%status' may confirm activity of this nature.
Divide your Com_show_%status counters by (uptime/3600) to get the rate per hour.
10 times an hour would be every 6 minutes.
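That division can also be done in one query (a sketch assuming information_schema.GLOBAL_STATUS is available, as on MySQL 5.6 / MariaDB; on MySQL 5.7+ use performance_schema.global_status instead):

```sql
SELECT s.VARIABLE_NAME,
       s.VARIABLE_VALUE / (u.VARIABLE_VALUE / 3600) AS per_hour
FROM information_schema.GLOBAL_STATUS AS s
JOIN information_schema.GLOBAL_STATUS AS u
  ON u.VARIABLE_NAME = 'UPTIME'
WHERE s.VARIABLE_NAME LIKE 'COM_SHOW%STATUS';
```

A per_hour value near 10 would match the 6-minute cycle seen in the graph.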

Big database optimisation

I created a web service that receives a huge number of requests every second.
Sometimes the MySQL service seems to go down for a few seconds, then works well again.
The main table contains more than 4,420,115 rows after one month.
Storage engine: InnoDB
The server configuration:
CPU: Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
Cores: 16
Cache: 12288 KB
RAM: 4 × 32 GB
Disks: 2 × 480 GB
The my.cnf :
skip-external-locking
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
query_cache_limit = 1M
query_cache_size = 16M
Can I have advice on how to avoid this problem and increase MySQL performance?
Thanks.
It sounds like it may be time to iterate on the database structure and application logic to reduce the number of DB interactions. Another option is adding a memcached or Redis layer between the application and the SQL database to provide near-millisecond response times for read actions.

Server Bogs down when using eBeSucher traffic exchange

I have 6 sites, all on the same dedicated server; 2 of the sites get very steady traffic. I would say there are always at least 90 to 120 people on those 2 sites and 15 to 25 on the rest.
The sites ran decently until the last 3 days, when a paid traffic-share company was brought in, giving the other 4 sites just as much traffic as the main ones, all hosted on the same server. This causes the sites to instantly time out before making a proper request, and if a site does decide to load after 2 or 3 refreshes, it takes 30+ seconds to pull up. The sites will start loading faster once you start clicking inward, but always drop back shortly after.
I've been using mysqltuner to make changes to the my.cnf file, but the settings are not working as hoped, despite tweaking them. The server has 20 TB of bandwidth, more than enough to support itself.
The sites are all WordPress with WP Super Cache installed, so they should be more than fast. Below are my config settings, which I'm sure have become a bit off balance. Please note that the table errors flagged for cleanup are not tables from the WP sites and have nothing to do with the performance.
mysqltuner results
-------- Performance Metrics -------------------------------------------------
[--] Up for: 10h 23m 10s (11M q [295.373 qps], 666K conn, TX: 77G, RX: 1G)
[--] Reads / Writes: 50% / 50%
[--] Binary logging is disabled
[--] Total buffers: 4.8G global + 4.6M per thread (1000 max threads)
[OK] Maximum reached memory usage: 5.3G (67.58% of installed RAM)
[!!] Maximum possible memory usage: 9.3G (119.39% of installed RAM)
[OK] Slow queries: 0% (678/11M)
[OK] Highest usage of available connections: 10% (104/1000)
[OK] Aborted connections: 0.08% (528/666673)
[OK] Query cache efficiency: 69.3% (2M cached / 3M selects)
[!!] Query cache prunes per day: 130766
[OK] Sorts requiring temporary tables: 0% (109 temp sorts / 179K sorts)
[!!] Temporary tables created on disk: 58% (111K on disk / 191K total)
[OK] Thread cache hit rate: 99% (120 created / 666K connections)
[OK] Table cache hit rate: 40% (2K open / 5K opened)
[OK] Open file limit used: 19% (1K/10K)
[OK] Table locks acquired immediately: 99% (1M immediate / 1M locks)
-------- MyISAM Metrics ------------------------------------------------------
[!!] Key buffer used: 29.0% (19M used / 68M cache)
[OK] Key buffer size / total MyISAM indexes: 65.0M/64.1M
[OK] Read Key buffer hit rate: 100.0% (30M cached / 13K reads)
[!!] Write Key buffer hit rate: 13.0% (239K cached / 208K writes)
-------- InnoDB Metrics ------------------------------------------------------
[--] InnoDB is enabled.
[!!] InnoDB buffer pool / data size: 4.0G/8.7G
[!!] InnoDB buffer pool instances: 1
[!!] InnoDB Used buffer: 23.71% (62152 used/ 262144 total)
[OK] InnoDB Read buffer efficiency: 99.99% (390201875 hits/ 390245584 total)
[!!] InnoDB Write Log efficiency: 85.21% (1596393 hits/ 1873419 total)
[OK] InnoDB log waits: 0.00% (0 waits / 277026 writes)
-------- ThreadPool Metrics --------------------------------------------------
[--] ThreadPool stat is disabled.
-------- AriaDB Metrics ------------------------------------------------------
[--] AriaDB is disabled.
-------- TokuDB Metrics ------------------------------------------------------
[--] TokuDB is disabled.
-------- Galera Metrics ------------------------------------------------------
[--] Galera is disabled.
-------- Replication Metrics -------------------------------------------------
[--] No replication slave(s) for this server.
[--] This is a standalone server..
-------- Recommendations -----------------------------------------------------
General recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance
Set up a Secure Password for user@host ( SET PASSWORD FOR 'user'@'SpecificDNSorIp' = PASSWORD('secure_password'); )
MySQL started within last 24 hours - recommendations may be inaccurate
Reduce your overall MySQL memory footprint for system stability
Increasing the query_cache size over 128M may reduce performance
Temporary table size is already large - reduce result set size
Reduce your SELECT DISTINCT queries without LIMIT clauses
Variables to adjust:
*** MySQL's maximum memory usage is dangerously high ***
*** Add RAM before increasing MySQL buffer variables ***
query_cache_size (> 128M) [see warning above]
innodb_buffer_pool_size (>= 8G) if possible.
innodb_buffer_pool_instances(=4)
And this is the my.cnf file
[mysqld]
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under different user or group,
# customize your systemd unit file for mysqld according to the
# instructions in http://fedoraproject.org/wiki/Systemd
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
tmpdir=/dev/shm
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
open_files_limit=10000
query_cache_size=128M
query_cache_type=1
max_connections=1000
max_user_connections=25
wait_timeout=300
tmp_table_size=512M
max_heap_table_size=512M
thread_cache_size=64
key_buffer_size=65M
max_allowed_packet=268435456
table_cache=2048
table_definition_cache=2048
#delayed_insert_timeout=20 # Turn on if max_connections being reached due to delayed inserts
#delayed_queue_size=300 # Turn on if max_connections being reached due to delayed inserts
myisam_sort_buffer_size=32M # can be increased per sessions if needed for alter tables (indexes, repair)
query_cache_limit=2M # leave at default unless there is a good reason
join_buffer=2M # leave at default unless there is a good reason
sort_buffer_size=2M # leave at default unless there is a good reason
#read_rnd_buffer_size=256K # leave at default unless there is a good reason
#read_buffer_size=2M # leave at default unless there is a good reason
collation_server=utf8_unicode_ci
character_set_server=utf8
general_log=0
slow_query_log=1
log-output=TABLE # select * from mysql.general_log order by event_time desc limit 10;
long_query_time=5 # select * from mysql.slow_log order by start_time desc limit 10;
low_priority_updates=1
innodb_file_per_table=1
innodb_buffer_pool_size=4G # check mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool%';" - free vs total
innodb_additional_mem_pool_size=62M
innodb_log_buffer_size=62M
innodb_thread_concurrency=8 # Number of physical + virtual CPU's, preset when server is provisioned to have correct # of cores
default-storage-engine=MyISAM
[mysqld_safe]
I did hear that traffic-sharing sites can hurt you by over-pinging your server to get stats on the traffic they send you; I'm not sure if this is a rabbit hole worth going down or just poor configuration on my end. Any help, thoughts, or ideas would be most appreciated.
Many thanks!
"Fragmented tables" -- bogus; don't run optimize.
"dangerously high memory usage" -- no, it's not. But to shut it up, decrease max_connections to 200.
query_cache_size is somewhat high at 128M; do not raise it.
Lots of disk-based tmp tables -- Lower long_query_time to 1 (second) and turn on the SlowLog. Come back after a day or two and let's see what the naughty queries are. Note that this will also raise the "slow queries" above 0%. I see that you have it turned on and sent to a TABLE. So, use select * from mysql.slow_log order by query_time desc limit 5 to get the interesting queries. Let's discuss them, together with SHOW CREATE TABLE.
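Both of those changes can be made at runtime, without a restart (note that SET GLOBAL long_query_time only applies to connections opened afterwards; mirror the values in my.cnf to persist them):

```sql
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;
-- Since log-output=TABLE is set here, the captured queries land in a table:
SELECT * FROM mysql.slow_log ORDER BY query_time DESC LIMIT 5;
```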
The MyISAM and InnoDB metrics are not so bad; no action needed.
tmp_table_size=512M and max_heap_table_size=512M are dangerously high; keep them under 1% of RAM.
table_cache=2048 -- there is some thrashing even in the first 10 hours of being up; increase to 4K.
You seem to be using both MyISAM and InnoDB.

Memory usage high on server compared to wamp

Lately my site (with 260,000 posts, 12,000 images, 2,360,987 MySQL rows, and 450.7 MiB size) is running slow and at times not loading for many minutes.
I installed this Debug bar plugin https://wordpress.org/plugins/debug-bar/
Memory usage
on server is: 174,319,288 bytes
Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz, 16 GB
(PHP: 5.5.23, MySQL: 5.6.23, Apache 2.4)
Even when I tried disabling all plugins it doesn't help much... it comes down to 160-163,xxx,xxx bytes.
on wamp is : 37,834,920 bytes
(PHP: 5.5.12, MySQL: 5.6.17)
Why is the difference so huge? How do I detect the problem?
Been using the following plugins
Acunetix WP Security
Akismet
Antispam Bee
CloudFlare
Contact Form 7
Custom Post Type UI
Debug Bar
Login LockDown
Redirection
Theme Test Drive
W3 Total Cache
WordPress SEO
WP-Optimize
WP Missed Schedule
my.cnf values for the above server are
[mysqld]
slow-query-log=1
long-query-time=1
slow-query-log-file="/var/log/mysql-slow.log"
default-storage-engine = MyISAM
local-infile = 0
innodb_buffer_pool_size = 1G
innodb_log_file_size = 256M
innodb_file_per_table=1
innodb_stats_on_metadata=0
max_connections=360
wait_timeout=60
connect_timeout = 15
thread_cache_size=20
thread_concurrency=8
key_buffer_size = 1024M
join_buffer_size = 2M
sort_buffer_size=1M
query_cache_limit=64M
query_cache_size=128M
query_cache_type=1
max_heap_table_size=32M
tmp_table_size=32MB
table_open_cache=1000
table_definition_cache=1024
open_files_limit=10000
max_allowed_packet=268435456
low_priority_updates=1
concurrent_insert=2
#port = 8881
#innodb_force_recovery=0
#innodb_purge_threads=0
The "server" has Apache; that accounts for some (all?) of the difference.
Windows and Unix handle memory differently, and measure it differently. So, the difference may be irrelevant.
The numbers you have quoted are not big; what is the problem?
"Tried restarting the server and checked it in initial moments" -- That's mostly irrelevant. Programs tend to grow over time, up to some limit. Let's see the values in "steady state" with a typical load.
You have enough RAM to cache the entire dataset in RAM. But, due to inactivity, probably most of the data is not touched, hence has not been read into cache.
"High" memory usage is when you are swapping. Actually, that is probably "too high". So, say, 90% is "high". Your numbers are nowhere near that.
innodb_buffer_pool_size=200M -- is not enough to hold the entire 450.7MB dataset, but, as I say, most of the data is probably not actively used.
Edit (after posting of settings)
table_cache=10M
That is terrible! You won't be opening 10 million tables. Set it to 1000.
max_heap_table_size=512M
tmp_table_size=512MB
Those are dangerous. If you have multiple connections, each needing a tmp table (because of a complex query), you could run out of memory fast. Set them down to 32M.
innodb_force_recovery=3
Comment out that line -- It is to be used once, then removed.
The rest of the settings look harmless for this discussion.

Joomla database memory leak

My client has a pretty large Joomla-based website hosted on Amazon EC2 with 1.5 GB of RAM. The server hosts both Apache and MySQL. Right now the database size is around 250 MB and the website gets daily traffic of about 5,000. It looks like there is a severe memory leak on the website, as sometimes MySQL uses about 99% of CPU and memory and then crashes. I have tried optimizing database tables and modifying my.cnf, but there is still no improvement.
There are finder tables used by Joomla smart search which occupy over 100MB of db size. I have disabled smart search, but still the problem occurs.
Friends, please throw some suggestions in fixing this.
Thanks.
Below is the my.cnf file
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
bind-address = 127.0.0.1
default-storage-engine=innodb
transaction-isolation = REPEATABLE-READ
character-set-server = UTF8
collation-server = UTF8_general_ci
max_connections = 5000
wait_timeout = 30
connect_timeout = 60
#interactive_timeout = 600
#max_connect_errors = 1000000
#max_allowed_packet = 10M
skip-external-locking
key_buffer_size = 384M
max_allowed_packet = 1M
table_open_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 32M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8
slow_query_log
long_query_time = 2
[mysqld_safe]
log-error=/var/log/mysqld.log
myisam_sort_buffer_size = 64M
My bet would be that you are being hit by a rogue robot - one of the many SEO spiders out there, or tools like 80legs that let people program a network of bots to carry out tasks - often with errors in their programming that result in a heavy bombardment.
I can never remember which of the MySQL settings take memory once and which are per connection - but as you are set to allow up to 5000 simultaneous connections and some of the buffers are 2 and 8 MB I'd bet that the total memory usage under heavy load could easily be in excess of the total ram available.
Your current settings would allow all of your daily traffic to hit simultaneously. I'd knock that down to a setting of a hundred or less and see if that gives more stability.
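If you want to test a lower limit before committing it to my.cnf, max_connections can be changed at runtime (100 here is just an illustrative value):

```sql
SET GLOBAL max_connections = 100;
SHOW VARIABLES LIKE 'max_connections';
```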
There are various MySQL tuner scripts out there that can help you spot where too much memory is allocated.
If you have access logs from around the time of the crashes / high load, I'd check for malicious bots; we've had a constant battle to rein them in on some sites we monitor/control.
You might also check the thread_concurrency value - depending upon how many CPUs you have available.
