Lately my site (260,000 posts, 12,000 images, 2,360,987 MySQL rows, 450.7 MiB database size) has been running slowly and at times does not load for several minutes.
I installed the Debug Bar plugin: https://wordpress.org/plugins/debug-bar/
Memory usage on the server is: 174,319,288 bytes
Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz, 16 GB RAM
(PHP: 5.5.23, MySQL: 5.6.23, Apache 2.4)
I even tried disabling all plugins; it doesn't help much, memory usage only comes down to 160-163,xxx,xxx bytes.
Memory usage on WAMP is: 37,834,920 bytes
(PHP: 5.5.12, MySQL: 5.6.17)
Why is the difference so huge, and how can I track down the problem?
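One thing I can do to compare the two environments request-by-request is to log peak memory at shutdown. A minimal sketch (this is not part of Debug Bar; in WordPress it could live in a small mu-plugin, and the output goes wherever error_log is configured):

<?php
// Log peak memory for every request at shutdown (PHP 5.3+).
register_shutdown_function(function () {
    $uri  = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'cli';
    $peak = memory_get_peak_usage(true); // bytes actually allocated from the OS
    error_log(sprintf('peak memory for %s: %d bytes', $uri, $peak));
});

Running the same request on the server and on WAMP and comparing the logged numbers should show whether the gap comes from a specific page or from everything equally.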
I have been using the following plugins:
Acunetix WP Security
Akismet
Antispam Bee
CloudFlare
Contact Form 7
Custom Post Type UI
Debug Bar
Login LockDown
Redirection
Theme Test Drive
W3 Total Cache
WordPress SEO
WP-Optimize
WP Missed Schedule
The my.cnf values for the above server are:
[mysqld]
slow-query-log=1
long-query-time=1
slow-query-log-file="/var/log/mysql-slow.log"
default-storage-engine = MyISAM
local-infile = 0
innodb_buffer_pool_size = 1G
innodb_log_file_size = 256M
innodb_file_per_table=1
innodb_stats_on_metadata=0
max_connections=360
wait_timeout=60
connect_timeout = 15
thread_cache_size=20
thread_concurrency=8
key_buffer_size = 1024M
join_buffer_size = 2M
sort_buffer_size=1M
query_cache_limit=64M
query_cache_size=128M
query_cache_type=1
max_heap_table_size=32M
tmp_table_size=32M
table_open_cache=1000
table_definition_cache=1024
open_files_limit=10000
max_allowed_packet=268435456
low_priority_updates=1
concurrent_insert=2
#port = 8881
#innodb_force_recovery=0
#innodb_purge_threads=0
The "server" has Apache; that accounts for some (all?) of the difference.
Windows and Unix handle memory differently, and measure it differently. So, the difference may be irrelevant.
The numbers you have quoted are not big; what is the problem?
"Tried restarting the server and checked it in initial moments" -- That's mostly irrelevant. Programs tend to grow over time, up to some limit. Let's see the values in "steady state" with a typical load.
You have enough RAM to cache the entire dataset in RAM. But, due to inactivity, probably most of the data is not touched, hence has not been read into cache.
"High" memory usage is when you are swapping. Actually, that is probably "too high". So, say, 90% is "high". Your numbers are nowhere near that.
innodb_buffer_pool_size=200M -- is not enough to hold the entire 450.7MB dataset, but, as I say, most of the data is probably not actively used.
Edit (after posting of settings)
table_cache=10M
That is terrible! You won't be opening 10 million tables. Set it to 1000.
max_heap_table_size=512M
tmp_table_size=512MB
Those are dangerous. If you have multiple connections, each needing a tmp table (because of a complex query), you could run out of memory fast: 100 connections each materializing a 512M temp table would want 50GB. Set them down to 32M.
innodb_force_recovery=3
Comment out that line; it is meant to be used once, then removed.
The rest of the settings look harmless for this discussion.
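After editing my.cnf and restarting mysqld, it is worth confirming which values actually took effect (a quick sanity check using standard statements; the variable names match the config above):

SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL VARIABLES LIKE 'max_heap_table_size';
SHOW GLOBAL VARIABLES LIKE 'tmp_table_size';

A setting that was misspelled, or placed under the wrong section, simply will not show up with the new value.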
Related
My software makes a lot of MySQL queries to my server, and I have never had any issues with it in the past, but recently nothing was loading: no webpages, no SQL running, nothing. I managed to get onto WHM for my server and kill the process, only to watch it spike back up to 300%. Nothing I have done has made it go down.
What information do I need to share to get help with this? I am not a sysadmin, nor do I have one or the resources for one. I wouldn't usually ask for help and would just optimize all my queries, but this wasn't a problem for the past 3 months and suddenly became one out of nowhere, at least as far as I noticed. At this point my program is saying that one of my database tables has crashed and needs to be repaired. What can I do? Thanks in advance for any help.
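For the crashed table, I gather the standard first step for a MyISAM table (my config below sets default-storage-engine=MyISAM) is something like the following, where mytable is a placeholder for the table named in the error:

CHECK TABLE mytable;
REPAIR TABLE mytable;

But I would still like to understand the root cause rather than just patch the symptom.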
I have already considered optimization, but I was hoping for a quick solution to implement since I have customers waiting; afterwards I can spend a few days optimizing the SQL that, like I said, wasn't having any issues before. I am confused about it.
Also, I am not sure if this helps, but tracing the process in WHM prints this repeatedly and nothing else:
fcntl(16, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(16, F_SETFL, O_RDWR|O_NONBLOCK) = 0
accept(16, {sa_family=AF_LOCAL, NULL}, [2]) = 35
fcntl(16, F_SETFL, O_RDWR) = 0
setsockopt(35, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
futex(0x13298a4, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x13298a0, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x1327240, FUTEX_WAKE_PRIVATE, 1) = 1
poll([{fd=14, events=POLLIN}, {fd=16, events=POLLIN}], 2, -1) = 1 ([{fd=16, revents=POLLIN}])
/etc/my.cnf
innodb_file_per_table=1
default-storage-engine=MyISAM
performance-schema=0
max_allowed_packet=268435456
open_files_limit=10000
This is all that is available to me as far as the my.cnf file goes. The error log doesn't exist in /var/log, so I don't have anything to give in that regard.
MySQL version:
[Server] # mysql -V
mysql Ver 14.14 Distrib 5.6.41, for Linux (x86_64) using EditLine wrapper
I have an additional question to add on to this. I don't know if it makes much of a difference, but say my code is running and the mysql process is using 30% CPU; I can actually stop the code and the mysql process's CPU usage will not change. What does this mean?
Edit: (these are all expiring within a week from 12/09/2018)
Global Status
Current Settings
ulimit -a
df -h
mysqltuner report
The my.cnf contents I listed are all that was there, nothing else. I will capture top and iostat -xm 5 3 when I am running the software at full speed again to see the results.
Rate Per Second = RPS. Suggestions to consider based on your Linux ulimit -a report:
Run ulimit -n 16384 to raise the Open Files limit from 1024 to support your activities.
For this to persist over a Linux shutdown/restart, review this URL:
https://glassonionblog.wordpress.com/2013/01/27/increase-ulimit-and-file-descriptors-limit/
Your specifics may be slightly different depending on your version of Linux.
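For reference, the usual form of the persistent setting (assuming a classic pam_limits setup; exact file locations vary by distribution, and a systemd-managed mysqld takes its limit from LimitNOFILE in the unit file instead) is two lines in /etc/security/limits.conf:

mysql soft nofile 16384
mysql hard nofile 16384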
Suggestions to consider for your my.cnf [mysqld] section:
innodb_lru_scan_depth=100 # from 1024 to reduce CPU busy every second. 93% savings for this one function.
thread_cache_size=32 # from 9 for thread breathing room and growth.
innodb_io_capacity=1800 # from 200 to take advantage of your HDD IOPS capacity
key_cache_age_threshold=7200 # from 300 seconds to reduce key_reads RPS of 16
query_cache_size=0 # from 1M to conserve RAM - QC is OFF and not used
query_cache_limit=0 # from 1M to conserve RAM - QC is OFF and not used
key_buffer_size=128M # from 8M which had NO free space at the end of your work day
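One way to check whether the key_buffer_size change is paying off, using standard MyISAM status counters:

SHOW GLOBAL STATUS LIKE 'Key_read%';
-- Key_reads / Key_read_requests is the key-cache miss rate;
-- it should drop noticeably after raising key_buffer_size from 8M to 128M.

Sample the counters at the start and end of a work day so the ratio reflects real load.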
I couldn't think what else to title this strange problem.
We have a "Worker" Compute Engine which is a MySQL SLAVE. Its primary role is to process a large set of data and then place it back on the Master. All handled via a PHP Script.
Now the processing of data takes roughly 4 hours to complete. During this time we noticed the following CPU pattern.
What you can see above is the 50% solid CPU starts after a server reboot. Then after about 2 hours its starts to produce a ECG style pattern on the CPu. Around every 5/6 minutes CPU spikes to ~48% then drops over the 5 minutes.
My question is, why. Can anyoen please explain why. We ideally want this server to be Maxing out ots cpu at 100% (50% as there are 2 cores)
The spec of the server: 2 VCPU's with 7.5GB Memory.
As mentioned, if we can have this running full throttle it would be great. Below is the my.cnf
symbolic-links=0
max_connections=256
innodb_thread_concurrency = 0
innodb_additional_mem_pool_size = 1G
innodb_buffer_pool_size = 6G
innodb_flush_log_at_trx_commit = 1
innodb_io_capacity = 800
innodb_flush_method = O_DIRECT
innodb_log_file_size = 24M
query_cache_size = 1G
query_cache_limit = 512M
thread_cache_size = 32
key_buffer_size = 128M
max_allowed_packet = 64M
table_open_cache = 8000
table_definition_cache = 8000
sort_buffer_size = 128M
read_buffer_size = 8M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 128M
tmp_table_size = 256M
query_cache_type = 1
join_buffer_size = 256M
wait_timeout = 300
server-id = 2
relay-log = /var/log/mysql/mysql-relay-bin.log
log_bin = /var/log/mysql/mysql-bin.log
log-error=/var/log/mysqld.log
read-only = 1
innodb_flush_log_at_trx_commit=2
I have cleaned up the above to remove any configs with private information that are not relevant to performance.
UPDATE
I have noticed that when the CPU starts dropping during the heartbeat section of the graph, the PHP script is no longer running. That seems impossible, as I know the script takes 4 hours; there are no errors, and after the full 4 hours the data is where I expected it.
Changing innodb_io_capacity = 800 to 1500 will likely reduce your 4-hour elapsed processing time by raising the limit toward what you know your hardware can achieve.
For your indicated 7.5G environment, the configuration has
innodb_additional_mem_pool_size=1G
innodb_buffer_pool_size=6G
query_cache_size=1G
so before you even start, you are overcommitted: 1G + 6G + 1G = 8G of fixed allocations on a 7.5G machine.
Another angle to consider: with
max_connections=256
max_allowed_packet=64M
a fully busy 256 connections could need 16GB+ (256 x 64M) just for this one function to survive. It is unlikely that max_allowed_packet at 64M is reasonable.
Changing read_rnd_buffer_size = 4M to SET GLOBAL read_rnd_buffer_size=16384; could be significant on your slave, and then 24 hours later on the master. They can be different, but if the change significantly reduces your 4 hours on the slave, implement it on both instances. Let us know what this single change does for you, please.
The 50% CPU utilization you are seeing is the script maxing out the single core that it is capable of utilizing, as PressingOnAlways indicated. You cannot tune around that limit in your running script.
For a more thorough analysis, provide the following from both the SLAVE and the MASTER:
RAM size (nnG)
SHOW GLOBAL STATUS
SHOW GLOBAL VARIABLES
SHOW ENGINE INNODB STATUS
CPU % is measured across all cores, so 100% CPU usage means both cores are maxed out. PHP by default runs in a single thread and does not utilize multiple cores. The 50% CPU utilization you are seeing is the script maxing out the single core that it is capable of utilizing.
In order to utilize 100% CPU, consider spawning 2 PHP scripts that work on 2 separate datasets, e.g. script 1 processes records 1-1000000 while script 2 processes 1000001-2000000 (see the sketch below).
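A hedged sketch of that two-workers idea using pcntl_fork() (CLI only; requires the pcntl extension). process_range() is a hypothetical stand-in for the real per-record work, and each child must open its own MySQL connection rather than reuse the parent's:

<?php
$ranges = array(array(1, 1000000), array(1000001, 2000000));
$pids = array();
foreach ($ranges as $range) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("fork failed\n");
    } elseif ($pid === 0) {
        // Child process: gets its own core to max out.
        process_range($range[0], $range[1]); // hypothetical worker; opens its own DB connection
        exit(0);
    }
    $pids[] = $pid; // parent keeps track of its children
}
foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status); // wait for both workers to finish
}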
The other option is to rewrite the script to use threads. You may want to consider changing the language altogether to something more conducive to threading, like Go, though this might not be necessary if the main work is done within MySQL.
The other issue you're seeing, when the graph drops below 50%, may be due to IO wait. It's hard to tell from a graph, though; you may have a data-transfer bottleneck where your CPU sits idle while large chunks of data are transferred.
Optimizing CPU utilization is an exercise in finding the bottlenecks and removing them - good luck.
A 'monitoring service' could be enabled that periodically captures a 'health check' of your system, since the spikes appear to be on a 6-minute cycle.
SHOW GLOBAL STATUS LIKE 'Com_show_%status'; may confirm activity of this nature.
Divide your Com_show_%status counters by (uptime/3600) to get a rate per hour; 10 times an hour would be every 6 minutes. For example, a counter of 240 against an uptime of 86400 seconds (24 hours) works out to 10 per hour.
I've done quite a bit of reading before asking this, so let me preface by saying I am not running out of connections, memory, or CPU, and from what I can tell I am not running out of file descriptors either.
Here's what PHP throws at me when MySQL is under heavy load:
Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (11 "Resource temporarily unavailable")
This happens randomly under load, and the more I push, the more frequently PHP throws this at me. While it is happening I can always connect locally through the console, and from PHP through 127.0.0.1 instead of "localhost" (which uses the faster unix socket).
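As a stopgap I could fall back to TCP in code when the socket connect fails, roughly like this (mysqli, placeholder credentials; client error 2002 is "can't connect through socket"), but I would rather understand the underlying cause:

<?php
$db = @mysqli_connect('localhost', $user, $pass, $dbname); // 'localhost' = unix socket
if (!$db && mysqli_connect_errno() == 2002) {
    // Socket refused; retry over TCP, which stays responsive during the outages.
    $db = mysqli_connect('127.0.0.1', $user, $pass, $dbname);
}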
Here are a few system variables to weed out the usual problems:
cat /proc/sys/fs/file-max = 4895952
lsof | wc -l = 215778 (during "outages")
Highest usage of available connections: 26% (261/1000)
InnoDB buffer pool / data size: 10.0G/3.7G (plenty o room)
soft nofile 999999
hard nofile 999999
I am actually running MariaDB (Server version: 10.0.17-MariaDB MariaDB Server)
These results occur both under normal load and when running mysqlslap during off-hours, so slow queries are not an issue, just high connection counts.
Any advice? I can report additional settings/data if necessary; mysqltuner.pl says everything is A-OK.
And again, the revealing thing here is that connecting via IP works just fine and is fast during these outages; I just can't figure out why.
Edit: here is my my.cnf (some values may seem a bit high from my recent troubleshooting changes, and please keep in mind that there are no errors in the MySQL logs, system logs, or dmesg):
socket=/var/lib/mysql/mysql.sock
skip-external-locking
skip-name-resolve
table_open_cache=8092
thread_cache_size=16
back_log=3000
max_connect_errors=10000
interactive_timeout=3600
wait_timeout=600
max_connections=1000
max_allowed_packet=16M
tmp_table_size=64M
max_heap_table_size=64M
sort_buffer_size=1M
read_buffer_size=1M
read_rnd_buffer_size=8M
join_buffer_size=1M
innodb_log_file_size=256M
innodb_log_buffer_size=8M
innodb_buffer_pool_size=10G
[mysql.server]
user=mysql
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
open-files-limit=65535
Most likely it is due to net.core.somaxconn.
What is the value of /proc/sys/net/core/somaxconn?
net.core.somaxconn
# The maximum number of "backlogged sockets". Default is 128.
These are connections in the queue that have not yet been accepted; anything beyond that queue length is rejected, which is exactly what I suspect in your case. Try increasing it according to your load.
As the root user, run:
echo 1024 > /proc/sys/net/core/somaxconn
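The echo above only lasts until the next reboot. One common way to make it persistent, assuming a classic /etc/sysctl.conf setup (the file location can vary by distribution):

echo 'net.core.somaxconn = 1024' >> /etc/sysctl.conf
sysctl -p

Note also that the kernel silently caps a listener's backlog at somaxconn, so the back_log=3000 in your my.cnf cannot actually take effect while somaxconn is 128.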
This is something that can and should be solved by analysis; learning how to do this is a great skill to have.
Analyzing just what happens under heavy load (number of queries, execution times) should be your first step. Determine the load and then make the proper DB config settings. You might find you need to optimize the SQL queries instead!
Then make sure the PHP DB driver settings are aligned as well, to fully utilize the database connections.
Here is a link to the MariaDB threadpool documentation. I know it says version 5.5, but it's still relevant, and the page does reference version 10. There are settings listed there that may not be in your .cnf file which you can use:
https://mariadb.com/kb/en/mariadb/threadpool-in-55/
Off the top of my head, I can think of max_connections as a possible source of the problem. I'd increase the limit, to at least eliminate the possibility.
Hope it helps.
My client has a pretty large Joomla-based website hosted on Amazon EC2 with 1.5GB of RAM. The server hosts both Apache and MySQL. Right now the database size is around 250MB and the website gets daily traffic of about 5,000 visits. It looks like there is a severe memory leak somewhere: sometimes MySQL consumes about 99% of CPU/memory and then crashes. I have tried optimizing database tables and modifying my.cnf, but there is still no improvement.
There are finder tables used by Joomla Smart Search which occupy over 100MB of the database. I have disabled Smart Search, but the problem still occurs.
Please throw out some suggestions for fixing this.
Thanks.
Below is the my.cnf file
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
bind-address = 127.0.0.1
default-storage-engine=innodb
transaction-isolation = REPEATABLE-READ
character-set-server = UTF8
collation-server = UTF8_general_ci
max_connections = 5000
wait_timeout = 30
connect_timeout = 60
#interactive_timeout = 600
#max_connect_errors = 1000000
#max_allowed_packet = 10M
skip-external-locking
key_buffer_size = 384M
max_allowed_packet = 1M
table_open_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 32M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8
slow_query_log
long_query_time = 2
[mysqld_safe]
log-error=/var/log/mysqld.log
myisam_sort_buffer_size = 64M
My bet would be that you are being hit by a rogue robot: one of the many SEO spiders out there, or a tool like 80legs that lets people program a network of bots to carry out tasks, often with errors in their programming that result in a heavy bombardment.
I can never remember which of the MySQL settings take memory once and which are per connection, but as you are set to allow up to 5000 simultaneous connections, and some of the per-connection buffers are 2MB and 8MB, I'd bet that total memory usage under heavy load could easily exceed the total RAM available: 5000 connections touching just the 8MB read_rnd_buffer_size would ask for around 40GB on a 1.5GB machine.
Your current settings would allow all of your daily traffic to hit simultaneously. I'd knock max_connections down to a hundred or less and see if that gives more stability.
There are various MySQL tuner scripts out there that can help you spot where too much memory is allocated.
If you have access logs from around the time of the crashes/high load, I'd check for malicious bots; we've had a constant battle to rein them in on some sites we monitor and control.
You might also check the thread_concurrency value, depending on how many CPUs you have available.
I'd like to ask for your help on a longstanding issue with PHP/MySQL connections.
Every time I execute a SHOW PROCESSLIST command, it shows me about 400 idle (Status: Sleep) connections to the database server originating from our 5 web servers.
That was never much of a problem (and I never found a quick solution), until recently traffic numbers increased; since then MySQL repeatedly reports "too many connections" problems, even though 350+ of those connections are in the Sleep state. Also, a server sometimes cannot get a MySQL connection even while there are sleeping connections from that same server.
All those connections vanish when an Apache server is restarted.
The PHP code used to create the database connections uses the plain "mysql" module, the "mysqli" module, PEAR::DB, and the Zend Framework DB adapter (different projects). None of the projects uses persistent connections.
Raising the connection limit is possible, but it doesn't seem like a good solution since it's 450 now and there are only 20-100 "real" connections at a time anyway.
My question:
Why are there so many connections in sleep state and how can I prevent that?
-- Update:
The number of Apache requests running at a time never exceeds 50 concurrent requests, so I guess there is a problem with closing the connection, or Apache keeps the port open without a PHP script attached, or something like that(?)
my.cnf in case it's helpful:
innodb_buffer_pool_size = 1024M
max_allowed_packet = 5M
net_buffer_length = 8K
read_buffer_size = 2M
read_rnd_buffer_size = 8M
query_cache_size = 512M
myisam_sort_buffer_size = 128M
max_connections = 450
thread_cache = 50
key_buffer_size = 1280M
join_buffer_size = 16M
table_cache = 2048
sort_buffer_size = 64M
tmp_table_size = 512M
max_heap_table_size = 512M
thread_concurrency = 8
log-slow-queries = /daten/mysql-log/slow-log
long_query_time = 1
log_queries_not_using_indexes
innodb_additional_mem_pool_size = 64M
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table
Basically, you get connections in the Sleep state when :
a PHP script connects to MySQL
some queries are executed
then, the PHP script does some stuff that takes time
without disconnecting from the DB
and, finally, the PHP script ends
which means it disconnects from the MySQL server
So you generally end up with many processes in a Sleep state when you have a lot of PHP processes that stay connected without actually doing anything on the database side.
A basic idea, then: make sure you don't have PHP processes that run for too long, or force them to disconnect as soon as they no longer need to access the database (a minimal sketch follows).
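A minimal sketch of that disconnect-early pattern, assuming mysqli; the credentials are placeholders, and fetch_everything_needed() / do_expensive_processing() are hypothetical stand-ins for your own code:

<?php
$db   = mysqli_connect('localhost', $user, $pass, $dbname); // placeholder credentials
$rows = fetch_everything_needed($db); // hypothetical: run all queries up front
mysqli_close($db);                    // no Sleep process lingers while we work
do_expensive_processing($rows);       // hypothetical long-running, DB-free part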
Another thing that I often see when there is some load on the server:
There are more and more requests coming to Apache
which means many pages to generate
Each PHP script, in order to generate a page, connects to the DB and does some queries
These queries take more and more time, as the load on the DB server increases
Which means more processes keep stacking up
A solution that can help is to reduce the time your queries take, optimizing the longest ones first (see the sketch below).
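To find candidates, something along these lines works; orders and customer_id here are hypothetical stand-ins for one of your own queries:

-- See what is running right now and for how long:
SHOW FULL PROCESSLIST;
-- Take the longest-running statement and inspect its plan:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;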
Solutions like running the query
SET SESSION wait_timeout=600;
will only work until MySQL is restarted. For a persistent solution, edit my.cnf and add the following after [mysqld]:
wait_timeout=300
interactive_timeout = 300
where 300 is the number of seconds you want.
Increasing max_connections will not solve the problem.
We were experiencing the same situation on our servers. This is what happens:
A user opens a page/view that connects to the database and runs one or more queries; before the queries finish, the user leaves the page or moves to some other page.
So the connection that was opened remains open, and the number of connections keeps growing as more users connect to the DB and do something similar.
You can lower interactive_timeout in MySQL; by default it is 28800 (8 hours). To set it to 1 hour:
SET interactive_timeout=3600;
Before increasing the max_connections variable, check how many non-interactive connections you have by running the SHOW PROCESSLIST command.
If you have many sleeping connections, decrease the value of the wait_timeout variable so non-interactive connections are closed after waiting for some time.
To show the wait_timeout value:
SHOW SESSION VARIABLES LIKE 'wait_timeout';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wait_timeout | 28800 |
+---------------+-------+
The value is in seconds; it means that a non-interactive connection can stay up for as long as 8 hours.
To change the value of "wait_timeout" variable:
SET session wait_timeout=600;
Query OK, 0 rows affected (0.00 sec)
After 10 minutes, if the sleeping connection is still sleeping, MySQL or MariaDB will drop it.
Alright, so after trying every solution out there to solve this exact issue on a WordPress blog, I may have done something either really stupid or genius. With no idea why there was an increase in MySQL connections, I used the PHP script below in my header to kill all sleeping processes.
So every visitor to my site helps kill the sleeping processes:
<?php
// Kill every connection currently idling in the Sleep state.
// Uses the old mysql_* API (PHP 5); assumes a connection is already open.
$result = mysql_query("SHOW PROCESSLIST");
while ($myrow = mysql_fetch_assoc($result)) {
    if ($myrow['Command'] == "Sleep") {
        mysql_query("KILL {$myrow['Id']}");
    }
}
?>
So, I was running 300 PHP processes simultaneously and was getting a rate of between 60 and 90 per second (my process involves 3 queries). I upped it to 400 and the rate fell to about 40-50 per second. I dropped it to 200 and am back to between 60 and 90!
So my advice to anyone with this problem is to experiment with running fewer processes rather than more, and see if throughput improves. Less memory and CPU will be in use, so the processes that do run will have more headroom and overall speed may improve.
Look into persistent MySQL connections: I connected using new mysqli("p:$HOSTNAME") (double quotes, so the variable actually interpolates) and had Laravel database.php settings like:
'options' => [
PDO::ATTR_PERSISTENT => true,
],
For some reason, for some time, I believed it was smart to keep connections persistent, as I thought my applications would share them. They didn't; they just opened connections and left them unused until they timed out.
After I gave up my mad dream of persistence, I went from 120-150+ connections from several hosts to only a handful, most of the time actually just one (the one running SHOW PROCESSLIST).