MySQL Config Optimization Suggestion - PHP

I have a server with 8 GB RAM and 8 cores, running one website. I have done the MySQL configuration below, and the server normally runs fine, but sometimes (1 to 2 times in 24 hours) the load spikes very high. The OS is CentOS.
I need suggestions.
[mysqld]
#connection settings
max_connect_errors=10
max_connections=500
max_user_connections=700
wait_timeout=60
connect_timeout=10
interactive_timeout=60
innodb_buffer_pool_size=6GB
#cache settings
query_cache_limit=128M
query_cache_size=10M
query_cache_type=1
table_open_cache=5000
thread_cache_size=250
#buffer sizes
key_buffer=1288M
sort_buffer_size=20M
read_buffer_size=20M
join_buffer_size=20M
#tmpdir / temp table sizes
tmp_table_size=256M
max_heap_table_size=256M
#misc. settings
default-storage-engine = MYISAM
datadir=/var/lib/mysql
skip-external-locking
server-id = 1
open-files-limit = 96384
max_allowed_packet = 300M
#innodb settings
innodb_data_file_path = ibdata1:10M:autoextend
innodb_thread_concurrency = 10
innodb_buffer_pool_size = 1000M
innodb_log_buffer_size = 500M
innodb_file_per_table = 1
[mysqld_safe]
open-files-limit = 16384
[mysqldump]
quick
max_allowed_packet=600M
[myisamchk]
key_buffer = 128M
sort_buffer = 128M
read_buffer = 128M
write_buffer = 16M
[mysql]
no-auto-rehash

Right now you have assigned more RAM to MySQL than you physically have on the server. Please check my comments below against the variables:
[mysqld]
#connection setting
max_connect_errors=10
max_connections=500 #each connection consumes server resources, so allow only as many as you actually need.
max_user_connections=700 #this should always be less than max_connections; better yet, remove it.
wait_timeout=60
connect_timeout=10
interactive_timeout=60
innodb_buffer_pool_size=6GB #duplicate; this variable is set again in the innodb section below
#cache setting
query_cache_limit=128M #this should be less than query_cache_size; 2M will be sufficient, since we should not cache heavy queries as that slows things down.
query_cache_size=10M #20M is a reasonable start; increase as per requirements.
query_cache_type=1
table_open_cache=5000
thread_cache_size=250 #does not impact performance much, but still should not be more than 100.
#buffer sizes
key_buffer=1288M #keep it at 20M, since your server should be running InnoDB
sort_buffer_size=20M #this variable consumes resources for each connection; 2M should be sufficient, but you can increase it based on your queries.
read_buffer_size=20M #this variable consumes resources for each connection; 2M should be sufficient, but you can increase it based on your queries.
join_buffer_size=20M #this variable consumes resources for each connection; 2M should be sufficient, but you can increase it based on your queries.
#tmpdir / temp table sizes
tmp_table_size=256M
max_heap_table_size=256M
#misc. settings
default-storage-engine = MYISAM # change to InnoDB
datadir=/var/lib/mysql
skip-external-locking
server-id = 1
open-files-limit = 96384
max_allowed_packet = 300M # you can reduce this to 64M or whatever is really required; a large value is fine only if your application actually sends large packets.
#innodb settings
innodb_data_file_path = ibdata1:10M:autoextend
innodb_thread_concurrency = 0 #0 lets MySQL manage concurrency itself; read this > http://dba.stackexchange.com/questions/5666/possible-to-make-mysql-use-more-than-one-core
innodb_buffer_pool_size = 1000M # increase this as much as you can, up to about 80% of RAM on a dedicated InnoDB server.
innodb_log_buffer_size = 500M #8M is fine; increase it as per the write volume in your environment, but not to 500M, since this buffer is allocated once globally and rarely needs more than a few MB.
innodb_file_per_table = 1
[mysqld_safe]
log-error=/var/log/mysql/mysqld.log
[mysqldump]
quick
Note: You can leave out the other sections; only [mysqld], [mysqld_safe] and [mysqldump] matter here.
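Before and after applying changes, it is worth confirming what the running server actually uses (a typo in my.cnf silently falls back to the default) and whether the query cache earns its keep. A small sketch using standard status counters:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'query_cache%';
-- cache hits vs. queries that had to execute, plus evictions under memory pressure:
SHOW GLOBAL STATUS WHERE Variable_name IN ('Qcache_hits', 'Com_select', 'Qcache_lowmem_prunes');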
Finally, you can test your server with the configuration below:
[mysqld]
#connection setting
max_connect_errors=10
max_connections=400
wait_timeout=60
connect_timeout=10
interactive_timeout=60
#cache setting
query_cache_limit=2M
query_cache_size=50M
query_cache_type=1
table_open_cache=5000
thread_cache_size=100
#buffer sizes
key_buffer_size=20M
sort_buffer_size=2M
read_buffer_size=2M
join_buffer_size=2M
#tmpdir / temp table sizes
tmp_table_size=256M
max_heap_table_size=256M
#misc. settings
default-storage-engine = InnoDB
datadir=/var/lib/mysql
skip-external-locking
server-id = 1
open-files-limit = 65535
max_allowed_packet = 64M
#innodb settings
innodb_data_file_path = ibdata1:10M:autoextend
innodb_thread_concurrency = 0
innodb_buffer_pool_size = 4G
innodb_log_buffer_size = 8M
innodb_file_per_table = 1
[mysqldump]
quick
Note: If you can reduce total connections, then you can increase innodb_buffer_pool_size and get better performance. With 250 users you can increase innodb_buffer_pool_size from 4GB to 5GB.
Still, test your server first and adjust up or down as per requirements/performance.
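As a rough sanity check on the suggested numbers (worst case only; this ignores tmp tables, thread stacks and the OS itself), you can add up the global buffers plus the per-connection buffers:
-- values hard-coded in MB from the configuration above:
SELECT 4096 + 20 + 50          -- innodb_buffer_pool_size + key_buffer_size + query_cache_size
     + 400 * (2 + 2 + 2)       -- max_connections * (sort + read + join buffer)
     AS worst_case_mb;         -- = 6566 MB, leaving headroom on an 8 GB server
This is also why the note above works: 250 connections instead of 400 frees roughly 900 MB in the worst case, which can go to the buffer pool.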

Related

How to make queries faster on a remote MySQL server

I have 2 servers at OVH in different datacenters in Europe (SBG1 and GRA1). The measured ping between them is 10 ms.
My websites run many INSERT and SELECT queries.
When I use my local MySQL server it is very fast, but when I use the remote MySQL server the queries are delayed.
My remote MySQL configuration:
# Percona Server template configuration
[mysqld]
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
pid-file=/var/run/mysqld/mysqld.pid
skip-name-resolve
bind-address = 5.196.77.XXX
# skip-networking
sql-mode = 'NO_AUTO_CREATE_USER'
explicit_defaults_for_timestamp = 1
# MyISAM #
key_buffer_size = 2G
# SAFETY #
max_allowed_packet = 10G
# CACHES AND LIMITS #
tmp_table_size = 32M
max_heap_table_size = 32M
query_cache_type = 1
query_cache_size = 2M
query_cache_limit = 1M
join_buffer_size = 6M
max_connections = 600
thread_cache_size = 100
open_files_limit = 65535
table_definition_cache = 4096
table_open_cache = 4096
# INNODB #
innodb_flush_method = O_DIRECT
innodb_log_files_in_group = 2
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit = 1
innodb_file_per_table = 1
innodb_buffer_pool_size = 20G
innodb_data_file_path =ibdata1:20M:autoextend
# LOGGING #
log_error = /var/log/mysql/mysql-error.log
log-queries-not-using-indexes = 1
slow_query_log = 1
long_query_time = 3
slow_query_log_file = /var/log/mysql/mysql-slow.log
I don't know how to connect the servers over a local connection.
Do I have to use vRack? (I have one, but I don't know how to use it.)
Or do I have to move my servers to the same location?
What should I do?
Replication enables data from one MySQL database server (the master)
to be copied to one or more MySQL database servers (the slaves).
Replication is asynchronous by default; slaves do not need to be
connected permanently to receive updates from the master.
If I understood correctly, you have a server S-A with database D-A, a server S-B with a database D-B, and a local server S-Local with database D-Local.
You can replicate the D-A and D-B databases on your local server S-Local, so your queries will be faster.
I'm not sure whether there is communication between S-A and S-B, but you could even replicate database D-A on server S-B and database D-B on server S-A.
My team has to query a distant server, and it is very slow. With replication, our queries go to a local replicated server, and it is very useful.
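For completeness, a minimal sketch of setting up a classic master-slave pair on MySQL/Percona 5.x; the replication user, password, and log coordinates are placeholders you must replace with your own:
-- on the remote master (my.cnf already needs log-bin=mysql-bin and a unique server-id):
CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
SHOW MASTER STATUS;  -- note File and Position
-- on the local replica (different server-id in my.cnf):
CHANGE MASTER TO
  MASTER_HOST='5.196.77.XXX',
  MASTER_USER='repl', MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',  -- File and Position from SHOW MASTER STATUS
  MASTER_LOG_POS=154;
START SLAVE;
SHOW SLAVE STATUS\G  -- Slave_IO_Running and Slave_SQL_Running should both be Yes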

How to optimize MySQL settings for a fully cached WordPress site

I have a site with MySQL 5.5, where I have installed WordPress and a cache plugin that generates HTML pages. The site has about 50k visitors per day, and sometimes my site "crashes" (backend only, or sometimes also the frontend), so I need to stop the SQL server and reboot it. (These crashes can be one in months or more than one in a week; they happen randomly.)
(When I reboot the SQL server, the site keeps working because of the HTML cache + "Cloudflare Always On".) But I want to avoid these crashes. This is my SQL config (OVH).
PS: my site doesn't have WP users, only admins, so the cache is always on for every visitor.
[mysqld]
tmp_table_size=400M
query_cache_size=1M
skip-external-locking
key_buffer_size = 12M
max_allowed_packet = 1M
table_cache = 4
table_open_cache = 96
sort_buffer_size = 64K
read_buffer_size = 256K
read_rnd_buffer_size = 256K
net_buffer_length = 2K
thread_stack = 128K
thread_cache_size = 4
max_heap_table_size = 600M
max_binlog_cache_size = 1M
max_join_size = 1M
max_seeks_for_key = 2M
max_write_lock_count = 512K
myisam_max_sort_file_size = 1M
########################
##Configuration Innodb##
##Uncomment the next line to disable Innodb
skip-innodb
default-storage-engine=myisam
innodb_buffer_pool_size = 16M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 10M
innodb_log_buffer_size = 4M
innodb_flush_log_at_trx_commit=1
How can I optimize this configuration?
Or maybe I need to upgrade my plan?
Currently I have this: https://www.ovh.co.uk/web-hosting/performance-web-hosting.xml (Web Performance 1 + private SQL with 128MB RAM + Cloudflare CDN/cache)
Also, the plugin used for caching is this one: https://wordpress.org/plugins/wp-fastest-cache/faq/ and I run "optimize/repair" queries very often.
I think you should increase the log file size to > 2GB.
I configured a Magento site with > 20k products and 3k users/day with the my.cnf values below:
innodb_buffer_pool_size = 8G
innodb_change_buffering=all
innodb_log_buffer_size=16M
innodb_additional_mem_pool_size=20M
innodb_log_file_size = 1536M
innodb_autoextend_increment=512
thread_concurrency = 3
thread_cache_size = 32
table_cache = 1024
query_cache_size = 512M
query_cache_limit = 512M
join_buffer_size = 256M
tmp_table_size = 512M
key_buffer = 256M
max_heap_table_size = 512M
read_buffer_size = 512M
read_rnd_buffer_size = 512M
bulk_insert_buffer_size = 128M
The important values are
innodb_log_file_size
innodb_buffer_pool_size
Hope that makes sense.
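Rather than guessing at innodb_log_file_size, you can measure how fast InnoDB actually writes redo log and size the files to hold roughly an hour of writes; a sketch using standard status counters (the 60-second window is arbitrary):
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';  -- note the value
SELECT SLEEP(60);
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';  -- note it again
-- (second value - first value) * 60 gives bytes of redo per hour; divided by
-- innodb_log_files_in_group, that is a reasonable target per log file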
At 50k visitors a day, you are receiving about 0.58 visitors a second. In that case, I suggest upgrading your server to an SQL plan with more than just 128MB of RAM. If that's not an option, caching is your last resort.
Note, though, that no matter how hard you push caching, if the server behind it is weak to begin with, caching can only take you so far.

MySQL does not release memory

I have a project that uses 5 different databases (Oracle, MSSQL, IBM Informix, etc. ... and ... MySQL).
A cron job periodically imports data into MySQL. Unfortunately, we have one very large cron script that imports nearly 400k rows. I made it import the data in small batches, but still, when cron runs this script, MySQL takes nearly 2 GB of memory and waits for the next call. Then it takes 2 GB more, and this repeats until MySQL crashes.
We have already faced the problem of InnoDB corruption, and had to run recovery modes 1 through 4 and restart the database several times.
Total server virtual memory is 12 GB.
Our configuration in /etc/my.cnf.d/server.cnf looks like this:
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
default_storage_engine=innodb
character-set-server=utf8
back_log = 50
max_connections = 100
max_connect_errors = 10
table_open_cache = 2048
max_allowed_packet = 64M
binlog_cache_size = 16M
max_heap_table_size = 1G
read_buffer_size = 8M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M
thread_cache_size = 32
# You should try [number of CPUs]*(2..4) for thread_concurrency
thread_concurrency = 8
query_cache_size = 64M
query_cache_limit = 1M
tmp_table_size = 1G
thread_stack = 240K
transaction_isolation = REPEATABLE-READ
ft_min_word_len = 4
log-bin=mysql-bin
binlog_format=mixed
#slow_query_log
#long_query_time = 10
group_concat_max_len = 10000
#*** MyISAM Specific options
key_buffer_size = 16M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 64M
myisam_max_sort_file_size = 10G
myisam_repair_threads = 1
myisam_recover
# *** INNODB Specific options ***
innodb_file_per_table
innodb_buffer_pool_size=8G
#innodb_data_file_path = ibdata1:10M:autoextend
innodb_write_io_threads = 8
innodb_read_io_threads = 8
innodb_thread_concurrency = 16
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 8M
innodb_log_file_size = 512M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120
expire_logs_days=5
[mysqldump]
# Do not buffer the whole result set in memory before writing it to
# file. Required for dumping very large tables
quick
max_allowed_packet = 256M
[mysql]
no-auto-rehash
# Only allow UPDATEs and DELETEs that use keys.
safe-updates
[myisamchk]
key_buffer_size = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M
[mysqlhotcopy]
interactive-timeout
[mysqld_safe]
# Increase the amount of open files allowed per process. Warning: Make
# sure you have set the global system limit high enough! The high value
# is required for a large number of opened tables
open-files-limit = 8192
If any info is necessary to find out why this happens, I will provide it.
Please help me find a solution! Should I reinstall MySQL, or is it still possible to resolve this with config modifications?
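One thing worth checking with a configuration like this is the worst-case memory demand, since several of those buffers are allocated per connection; a rough back-of-the-envelope using the values above:
-- approximate worst case in MB for the [mysqld] section above:
SELECT 8192 + 64 + 16            -- innodb_buffer_pool_size + query_cache_size + key_buffer_size
     + 100 * (8 + 16 + 8 + 8)    -- max_connections * (read + read_rnd + sort + join buffer)
     AS worst_case_mb;           -- = 12272 MB, already above the 12 GB the server has,
                                 -- before any 1 GB in-memory temporary tables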

Remote connection to a MySQL server [PHP]

I manage a file-sharing website (like Mediafire, Hotfile, etc.), and there is a problem when people try to download from one of the servers.
It goes like this: I have one main server, which hosts the MySQL database and the website itself, and I have more servers to host the files.
The download process goes like this: the user gets a link to the outside server. That page connects remotely to the main server and runs queries against the database. After the queries, the download starts.
Now, all of the downloads work fine, except from one of the servers. The weirdest thing is that sometimes the download from this server works, and sometimes it doesn't!
About 70% of the time the download from the problematic server works, and 30% of the time it doesn't.
When the download isn't working, the error message is:
Connect failed: Lost connection to MySQL server at 'reading authorization packet', system error: 0
The my.cnf looks like this:
[mysqld]
skip-name-resolve
bulk_insert_buffer_size = 8M
concurrent_insert = 2
connect_timeout = 10
default-storage-engine = MyISAM
innodb_buffer_pool_size=16M
interactive_timeout = 35
join_buffer_size = 2M
key_buffer_size = 192M
local-infile=0
log-error=/var/log/mysql/error.log
log-slow-queries
log-slow-queries=/var/log/mysql/mysql-slow.log
long_query_time=1
max_allowed_packet = 32M
max_connections = 3000
max_heap_table_size = 256M
max_user_connections= 400
max_write_lock_count = 8
myisam_max_sort_file_size = 256M
myisam_sort_buffer_size = 64M
open_files_limit=128K
query_alloc_block_size = 65536
query_cache_limit = 16M
query_cache_size = 128M
query_cache_type = 1
query_prealloc_size = 262144
range_alloc_block_size = 4096
read_buffer_size = 2M
read_rnd_buffer_size = 1M
sort_buffer_size = 2M
table_cache = 48K
thread_cache_size = 512
tmp_table_size = 256M
transaction_alloc_block_size = 4096
transaction_prealloc_size = 4096
wait_timeout = 100
max_connect_errors = 5000
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
How can I fix the problem?
Thank you very much, and sorry for my English.
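For what it's worth, "Lost connection ... at 'reading authorization packet'" usually means the client's handshake did not finish within connect_timeout, which points at an unreliable or saturated network path to that one server rather than at query load. A hedged first check (the value 20 is just an example):
-- how many connection attempts have died mid-handshake so far:
SHOW GLOBAL STATUS LIKE 'Aborted_connects';
-- give slow clients more time to authenticate (also raise connect_timeout in my.cnf to persist):
SET GLOBAL connect_timeout = 20;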

Updating an 11M-row database and my.cnf optimisation

I have to update 11M rows of a database from a PHP script.
After some time, the script freezes or crashes, and I have to restart EasyPHP 12 and reload it.
My configuration:
Windows 7 Pro 64-bit
Intel Core i7 860 2.8GHz
8GB RAM
My my.cnf file:
port = 3306
socket = /tmp/mysql.sock
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-external-locking
key_buffer_size = 384M
max_allowed_packet = 1M
table_open_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 32M
thread_concurrency = 8
log-bin=mysql-bin
server-id = 1
[mysqldump]
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[myisamchk]
key_buffer_size = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout
Here is the pseudocode that crashes:
`For i to 100000 { do magic (check content on the web); UPDATE table; }`
You have to look into php.ini, not my.cnf.
I suppose you are performing some logic on a single record, updating that record, and then moving on to the next one. In that case, the update of a single record (or a subset of records) should not take that long.
The freeze or crash happens because your script hits either its memory limit or its execution time limit.
You are probably running into an execution time error. You could create a shell script and run it via the command line.
for example (pseudo):
<?php
$db = new mysqli('localhost', 'user', 'pass', 'mydb');  // placeholder credentials
for ($offset = 0; ; $offset += 1000) {                  // work in chunks, not 11M rows at once
    $result = $db->query("SELECT id FROM `table` LIMIT $offset, 1000");
    if ($result->num_rows == 0) break;                  // nothing left: done
    while ($row = $result->fetch_assoc()) {
        $db->query("UPDATE `table` SET magic_done = 1 WHERE id = {$row['id']}");  // do some magic, then save (placeholder column)
    }
    error_log("processed up to offset $offset");        // leave a trail for restarts
}
If you call this file my_update.php, run it by typing php path/to/my_update.php and watch the magic (where php is the PHP executable).
Be smart and log every action! Then, when the script fails, you have a nice trail and don't have to start all over again. That is exactly why I added the LIMIT to the query: it doesn't have to buffer 11M rows, just a few. After the first batch of rows it simply moves on to the next LIMIT. Sort of like pagination, but without visual output.
Sources:
http://php.net/manual/en/features.commandline.php
You need to increase the maximum execution time of the script via the max_execution_time setting in the php.ini file.
