MySQLTuner: optimize config and tables, report - php

Here is a report from MySQLTuner. Can you help me? How do I disable InnoDB in the latest MySQL version on Debian stable, and why are more than 30% of connections aborted?
[!!] InnoDB is enabled but isn't being used
[OK] Total fragmented tables: 0
-------- Performance Metrics -------------------------------------------------
[--] Up for: 13h 49m 59s (18K q [0.368 qps], 2K conn, TX: 5M, RX: 1M)
[--] Reads / Writes: 63% / 37%
[--] Total buffers: 176.0M global + 2.7M per thread (25 max threads)
[OK] Maximum possible memory usage: 243.2M (23% of installed RAM)
[!!] Slow queries: 7% (1K/18K)
[OK] Highest usage of available connections: 20% (5/25)
[OK] Key buffer size / total MyISAM indexes: 8.0M/185.0K
[OK] Key buffer hit rate: 96.0% (27K cached / 1K reads)
[OK] Query cache efficiency: 41.0% (3K cached / 8K selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 291 sorts)
[OK] Temporary tables created on disk: 25% (204 on disk / 794 total)
[OK] Thread cache hit rate: 99% (5 created / 2K connections)
[!!] Table cache hit rate: 2% (64 open / 2K opened)
[OK] Open file limit used: 9% (94/1K)
[OK] Table locks acquired immediately: 99% (9K immediate / 9K locks)
[!!] Connections aborted: 32%
-------- Recommendations -----------------------------------------------------
General recommendations:
Add skip-innodb to MySQL configuration to disable InnoDB
MySQL started within last 24 hours - recommendations may be inaccurate
Increase table_open_cache gradually to avoid file descriptor limits
Read this before increasing table_open_cache over 64: http://bit.ly/1mi7c4C
Your applications are not closing MySQL connections properly
Variables to adjust:
table_open_cache (> 64)

InnoDB is needed in the latest MySQL version (5.6) as the default storage engine, so you should not disable it.
Aborted connections > 0 means that connections are not being closed correctly by your applications.
https://dev.mysql.com/doc/refman/5.6/en/common-errors.html
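For reference, a hedged sketch of how these recommendations would look in my.cnf; the table_open_cache increase is the part worth applying, while skip-innodb only makes sense on MySQL 5.5 and earlier and should not be used on 5.6+, where InnoDB is the default engine:
[mysqld]
# Raise gradually and re-check the open file limit after each increase
table_open_cache = 128
# Only on MySQL 5.5 and earlier, and only if no table uses InnoDB:
# skip-innodb
# default-storage-engine = MyISAM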

MariaDB: Error Reading Communication Packets

I have a problem. When I moved my MariaDB from the main server to a separate server (the database server runs MariaDB in Docker from the latest tag), I got an error:
Got an error writing communication packets
I have two servers: one is a web server (no DB), the other is an Ubuntu 20.04 machine with 4 GB RAM and 4 cores (2 GHz per core).
The port is open and my ping is less than 1 ms.
I tried with a basic WordPress site database and the connection was OK, no problem, but my real database is about 1 GB, and I guess this is what causes the problem.
I also tried connecting over the private network (192.168.100.25) instead of the public IP, but the problem is the same.
Here is my MariaDB log:
Aborted connection 3 to db: 'wpdb' user: 'root' host: 'myip' (Got an error reading communication packets)
Aborted connection 5 to db: 'wpdb' user: 'root' host: 'myip' (Got an error writing communication packets)
I also edited the MariaDB config:
increased max_allowed_packet to 1 GB
increased net_buffer_length to 1000000
but nothing changed! (The server-side variables usually involved are sketched below.)
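A sketch of the [mysqld] settings that the MySQL/MariaDB documentation associates with "error reading/writing communication packets"; the values are illustrative, not recommendations:
[mysqld]
# Largest single packet/row the server accepts (already raised to 1G above)
max_allowed_packet = 1G
# Timeouts after which the server drops an idle or stalled connection,
# which produces exactly this kind of aborted-connection log entry
wait_timeout = 600
interactive_timeout = 600
net_read_timeout = 60
net_write_timeout = 120
# Handshake errors tolerated per host before the host is blocked
max_connect_errors = 100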
Here are my MariaDB server variables:
https://pastebin.ubuntu.com/p/yHFRh7CnVC/
SHOW GLOBAL STATUS:
https://pastebin.pl/view/b3db2b91
SHOW FULL PROCESSLIST:
8,root,31.56.66.249:60612,,Query,0,starting,SHOW FULL PROCESSLIST,0
ulimit on the host server:
ubuntu@rangoabzar:~$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15608
max locked memory (kbytes, -l) 65536
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 15608
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
ulimit in docker container:
root@63aa95764534:/# ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15608
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
iostat and htop output: (screenshots not reproduced here)
I got this same error by doing a query that returned multiple rows, then processing those rows using rows.Next() (in Golang), and exiting early because of an unrelated error without calling rows.Close(). The confusing part was that it worked the first few times, but eventually failed, indicating some resource (connections?) was being used up.
I was able to take advantage of Golang's defer statement, and just do
defer rows.Close()
before ever calling rows.Next(), but calling rows.Close() before every early exit from the rows.Next() loop works as well.
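A minimal Go sketch of that pattern, using database/sql with the go-sql-driver/mysql driver; the DSN and query are placeholders:
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Placeholder DSN; adjust credentials, host, and database name.
	db, err := sql.Open("mysql", "user:pass@tcp(192.168.100.25:3306)/wpdb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query("SELECT option_name FROM wp_options")
	if err != nil {
		log.Fatal(err)
	}
	// Guarantees the result set (and its connection) is released even if we
	// bail out of the loop early, so connections are not leaked and later
	// reported as aborted on the server side.
	defer rows.Close()

	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			return // early exit is safe: the deferred rows.Close() still runs
		}
		log.Println(name)
	}
	if err := rows.Err(); err != nil {
		log.Println(err)
	}
}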

How to reduce PHP FPM and MySQL memory usage in Debian 9

I'm managing a server on AWS, a t2.micro instance (1 GiB of memory) running Debian 9.
Main services installed are:
Nginx (active)
MySQL (active)
Supervisor (stopped)
Redis (active)
These services serve 10 enabled Laravel (PHP) projects.
The problem is that free memory is always between 60 MB and 75 MB, and I can't even start the Supervisor service or install new project dependencies via Composer without crashing everything (including the SSH session):
$ free -m
total used free shared buff/cache available
Mem: 994 477 71 140 444 233
Swap: 0 0 0
The processes consuming memory are:
$ ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n
...
10.9492 MB php-fpm:
104.473 MB php-fpm:
120.109 MB php-fpm:
144.262 MB php-fpm:
380.344 MB /usr/sbin/mysqld
Actually, I have only 2 (not large) MySQL databases. Why is MySQL consuming 380 MB? Is there a way to optimise it?
And what about PHP-FPM: is there really a need to run 4 different processes of ~100 MB each? How can I reduce this?
The default MySQL settings are tuned for general situations, so if it consumes 380 MB (a small amount of memory these days), that is probably normal. Still, there are a few things you can do with MySQL:
use MyISAM instead of InnoDB (you can turn the InnoDB engine off entirely; refer to the MySQL docs)
change some memory-cache parameters (see http://www.tocker.ca/2014/03/10/configuring-mysql-to-use-minimal-memory.html and the MySQL documentation; a sketch is shown after this list), but in that case you may see performance degradation on your MySQL server
Best of all, consider cheaper hosting, since AWS is relatively expensive; you can get a more powerful server for the same money.
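A rough sketch of the kind of my.cnf changes the linked article describes for a 1 GiB machine; the numbers are illustrative starting points, not tuned values:
[mysqld]
# Shrink the InnoDB buffer pool if the data set is small (default is 128M)
innodb_buffer_pool_size = 32M
# The performance schema can reserve a significant amount of memory on 5.6+
performance_schema = OFF
# Fewer allowed connections means fewer per-thread buffers held in reserve
max_connections = 30
# Keep the remaining caches modest
key_buffer_size = 8M
tmp_table_size = 16M
max_heap_table_size = 16M
query_cache_size = 0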

MediaWiki's file cache is ignored after migration

Problem: I have set up a MediaWiki with file caching enabled, but when I migrate the file cache to another MediaWiki, the cache is bypassed.
Background: I have set up MediaWiki 1.26.2 with Apache 2 as the front-end web server and MariaDB as the database, populated with the Danish Wikipedia.
I have enabled the file cache in LocalSettings.php to improve performance:
# Enable file caching.
$wgUseFileCache = true;
$wgFileCacheDirectory = "/tmp/wikicache";
$wgShowIPinHeader = false;
# Enable sidebar caching.
$wgEnableSidebarCache=true;
# Enable page compression.
$wgUseGzip = true;
# Disable pageview counters.
$wgDisableCounters = true;
# Enable miser mode.
$wgMiserMode = true;
Goal: Migrate the file cache, which is located under /tmp/wikicache, to another MediaWiki server. This does not seem to work, as the cache is skipped.
Use case: node server hosts MediaWiki, where I have migrated (copied) the file cache from another MediaWiki server, as well as the same LocalSettings.php.
Here is a cached page:
root@server:~# find /tmp/ -name DNA*
/tmp/wikicache/3/39/DNA.html.gz
On another node, client, I use the Apache benchmark tool ab to measure the connection time when requesting that page. TL;DR: only 10% of the requests succeed, with a time of ~20 seconds, which is roughly the time needed to query the database and retrieve the whole page.
root@client:~# ab -n 100 -c 10 http://172.16.100.3/wiki/index.php/DNA
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.16.100.3 (be patient).....done
Server Software: Apache/2.4.7
Server Hostname: 172.16.100.3
Server Port: 80
Document Path: /wiki/index.php/DNA
Document Length: 1184182 bytes
Concurrency Level: 10
Time taken for tests: 27.744 seconds
Complete requests: 100
Failed requests: 90
(Connect: 0, Receive: 0, Length: 90, Exceptions: 0)
Total transferred: 118456568 bytes
HTML transferred: 118417968 bytes
Requests per second: 3.60 [#/sec] (mean)
Time per request: 2774.370 [ms] (mean)
Time per request: 277.437 [ms] (mean, across all concurrent requests)
Transfer rate: 4169.60 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 123 2743 7837.1 145 27743
Waiting: 118 2735 7835.6 137 27723
Total: 123 2743 7837.2 145 27744
Percentage of the requests served within a certain time (ms)
50% 145
66% 165
75% 168
80% 170
90% 24788
95% 26741
98% 27625
99% 27744
100% 27744 (longest request)
If I subsequently request the same page again, it is served in ~0.15 seconds. I observe the same performance even if I flush MySQL's query cache with RESET QUERY CACHE:
root@client:~# ab -n 100 -c 10 http://172.16.100.3/wiki/index.php/DNA
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.16.100.3 (be patient).....done
Server Software: Apache/2.4.7
Server Hostname: 172.16.100.3
Server Port: 80
Document Path: /wiki/index.php/DNA
Document Length: 1184179 bytes
Concurrency Level: 10
Time taken for tests: 1.564 seconds
Complete requests: 100
Failed requests: 41
(Connect: 0, Receive: 0, Length: 41, Exceptions: 0)
Total transferred: 118456541 bytes
HTML transferred: 118417941 bytes
Requests per second: 63.93 [#/sec] (mean)
Time per request: 156.414 [ms] (mean)
Time per request: 15.641 [ms] (mean, across all concurrent requests)
Transfer rate: 73957.62 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 0
Processing: 129 150 18.8 140 189
Waiting: 120 140 18.0 130 171
Total: 129 150 18.8 141 189
Percentage of the requests served within a certain time (ms)
50% 141
66% 165
75% 169
80% 170
90% 175
95% 181
98% 188
99% 189
100% 189 (longest request)
So, why isn't the file cache working when I migrate it to another MediaWiki server?

Wordpress: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

While trying to fix the error caused by hitting the max_connections limit, I used mysqltuner to adjust a few more things and apparently messed something else up very, very badly.
I tried switching to 127.0.0.1:3306, but that did not work either; I started getting the same error, only with (111) at the end.
Also, whenever I got this error and restarted MySQL straight after it, I got: "ERROR! MySQL server PID file could not be found!"
Error (2) means the file is missing, but it is definitely there; I did a search. Can the file somehow get deleted?
I have looked at a lot of similar questions, but I did not find a fix.
I am just a newbie, so I don't have much experience.
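One way to narrow this down (a sketch; errors (2) and (111) usually mean the server is not running, or is listening somewhere other than where the client is looking, rather than the socket file having been deleted):
# Is mysqld actually running, and is anything listening on 3306?
sudo systemctl status mysql        # or: sudo service mysql status
sudo ss -lntp | grep 3306
# If the server is up, ask it where it puts its socket and PID file, and make
# sure the [client]/[mysqld] sections of my.cnf agree on the same socket path:
mysql -h 127.0.0.1 -P 3306 -u root -p -e "SHOW VARIABLES LIKE 'socket'; SHOW VARIABLES LIKE 'pid_file';"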
On a side note, I also enabled APC and started using it with W3 Total Cache.
Also, here is the current output from mysqltuner; the temporary tables on disk increased a lot too after I edited my.cnf, but alas, I did not take a backup!
>> MySQLTuner 1.4.0 - Major Hayden
>> Bug reports, feature requests, and downloads at http://mysqltuner.com/
>> Run with '--help' for additional options and output filtering
[OK] Currently running supported MySQL version 5.6.23
[!!] Switch to 64-bit OS - MySQL cannot currently use all of your RAM
-------- Storage Engine Statistics -------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
[--] Data in MyISAM tables: 109M (Tables: 159)
[--] Data in InnoDB tables: 168M (Tables: 71)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 52)
[!!] Total fragmented tables: 9
-------- Security Recommendations -------------------------------------------
[OK] All database users have passwords assigned
-------- Performance Metrics -------------------------------------------------
[--] Up for: 8m 30s (44K q [86.443 qps], 953 conn, TX: 4B, RX: 5M)
[--] Reads / Writes: 96% / 4%
[--] Total buffers: 345.0M global + 1.8M per thread (100 max threads)
[OK] Maximum possible memory usage: 526.2M (22% of installed RAM)
[OK] Slow queries: 0% (0/44K)
[OK] Highest usage of available connections: 34% (34/100)
[OK] Key buffer size / total MyISAM indexes: 16.0M/21.7M
[OK] Key buffer hit rate: 99.9% (2M cached / 3K reads)
[OK] Query cache efficiency: 76.3% (29K cached / 39K selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 1% (47 temp sorts / 3K sorts)
[!!] Temporary tables created on disk: 39% (389 on disk / 997 total)
[OK] Thread cache hit rate: 89% (96 created / 953 connections)
[OK] Table cache hit rate: 97% (325 open / 332 opened)
[OK] Open file limit used: 3% (380/10K)
[OK] Table locks acquired immediately: 99% (9K immediate / 9K locks)
[OK] InnoDB buffer pool / data size: 185.0M/169.0M
[OK] InnoDB log waits: 0
Edit: Here is the output from mysqltuner just after I hit the error; the temporary tables on disk are pretty high.
>> MySQLTuner 1.4.0 - Major Hayden <major@mhtx.net>
>> Bug reports, feature requests, and downloads at http://mysqltuner.com/
>> Run with '--help' for additional options and output filtering
[OK] Currently running supported MySQL version 5.6.23
[!!] Switch to 64-bit OS - MySQL cannot currently use all of your RAM
-------- Storage Engine Statistics -------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM
[--] Data in MyISAM tables: 109M (Tables: 159)
[--] Data in InnoDB tables: 168M (Tables: 71)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 52)
[!!] Total fragmented tables: 7
-------- Security Recommendations -------------------------------------------
[OK] All database users have passwords assigned
-------- Performance Metrics -------------------------------------------------
[--] Up for: 1m 59s (30K q [259.672 qps], 426 conn, TX: 2B, RX: 3M)
[--] Reads / Writes: 99% / 1%
[--] Total buffers: 345.0M global + 1.8M per thread (100 max threads)
[OK] Maximum possible memory usage: 526.2M (22% of installed RAM)
[OK] Slow queries: 0% (0/30K)
[OK] Highest usage of available connections: 28% (28/100)
[OK] Key buffer size / total MyISAM indexes: 16.0M/22.1M
[OK] Key buffer hit rate: 99.7% (947K cached / 2K reads)
[OK] Query cache efficiency: 88.4% (25K cached / 28K selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 1% (13 temp sorts / 1K sorts)
[!!] Temporary tables created on disk: 46% (193 on disk / 415 total)
[OK] Thread cache hit rate: 83% (69 created / 426 connections)
[OK] Table cache hit rate: 94% (114 open / 121 opened)
[OK] Open file limit used: 1% (101/10K)
[OK] Table locks acquired immediately: 100% (3K immediate / 3K locks)
[OK] InnoDB buffer pool / data size: 185.0M/169.0M
[OK] InnoDB log waits: 0

APC making PHP 5.3 slower?

I recently learned about APC (I know, I'm late to the show) and decided to try it out on my development server. I did some benchmarking with ApacheBench, and to my surprise I've found that things are running slower than before.
I haven't made any code optimizations to use apc_fetch or anything, but I was under the impression that opcode caching should have a positive impact on its own?
C:\Apache24\bin>ab -n 1000 http://localhost/
This is ApacheBench, Version 2.3 <$Revision: 1178079 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Finished 1000 requests
Server Software: Apache/2.4.2
Server Hostname: localhost
Server Port: 80
Document Path: /
Document Length: 22820 bytes
Concurrency Level: 1
Time taken for tests: 120.910 seconds
Complete requests: 1000
Failed requests: 95
(Connect: 0, Receive: 0, Length: 95, Exceptions: 0)
Write errors: 0
Total transferred: 23181893 bytes
HTML transferred: 22819893 bytes
Requests per second: 8.27 [#/sec] (mean)
Time per request: 120.910 [ms] (mean)
Time per request: 120.910 [ms] (mean, across all concurrent requests)
Transfer rate: 187.23 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.4 0 1
Processing: 110 120 7.2 121 156
Waiting: 61 71 7.1 72 103
Total: 110 121 7.2 121 156
Percentage of the requests served within a certain time (ms)
50% 121
66% 122
75% 123
80% 130
90% 131
95% 132
98% 132
99% 137
100% 156 (longest request)
Here's the APC section of my php.ini. I've left most things at their defaults, except for increasing the cache size from the default 32 MB to 128 MB.
[APC]
apc.enabled = 1
apc.enable_cli = 1
apc.ttl=3600
apc.user_ttl=3600
apc.shm_size = 128M
apc.slam_defense = 0
Am I doing something wrong, or do I just need to use apc_fetch/store to really get a benefit from APC?
Thanks for any insight you guys can give.
Enabling APC with default settings will make a noticeable (to say the least) difference in response times for your PHP script. You don't have to use any of its specific store/fetch functions to get benefits from APC. In fact, normally you don't even need a benchmark to tell the difference; the difference should be apparent by simply navigating through your site.
If you don't see any difference and your benchmarks don't have some kind of error, then I'd suggest that you start debugging the issue (enable error reporting, check the logs, etc).
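If you want more than a benchmark to confirm the opcode cache is doing anything, here is a small sketch (assuming PHP 5.3 with the APC extension; the apc.php dashboard bundled with APC shows the same data with a UI):
<?php
// Request this over the web, not the CLI: CLI processes use a separate cache.
header('Content-Type: text/plain');
// Opcode-cache counters; hits/misses should grow between requests.
print_r(apc_cache_info('', true));   // limited = true omits the per-file list
// Shared-memory usage, to check the 128M shm_size is actually being used.
print_r(apc_sma_info(true));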
