I am using MySQL and PHP with 2 application servers and 1 database server.
As the number of users grows (around 1000 by now), I'm getting the following error:
SQLSTATE[08004] [1040] Too many connections
The parameter max_connections is set to 1000 in my.cnf and mysql.max_persistent is set to -1 in php.ini.
There are at most 1500 apache processes running at a time since the MaxClients apache parameter is equal to 750 and we have 2 application servers.
Should I raise the max_connections to 1500 as indicated here?
Or should I set mysql.max_persistent to 750 (we use PDO with persistent connections for performance reasons since the database server is not the same as the application servers)?
Or should I try something else?
Thanks in advance!
I think your connections aren't being closed fast enough, so they stack up until the default timeout is reached. I had the same problem and solved it with wait_timeout.
You can try to setup in my.cnf
max_connections = 1000        # maximum simultaneous connections overall
max_user_connections = 100    # maximum simultaneous connections per user account
wait_timeout = 60             # seconds an idle connection may stay open
MySQL will then terminate any connection that has been idle for more than 60 seconds.
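If you prefer not to restart MySQL, the same timeout can be applied at runtime (only new sessions pick it up; keep the my.cnf entry so it survives a restart):

```sql
-- apply without a restart; only new sessions inherit this value
SET GLOBAL wait_timeout = 60;

-- verify the global value
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
```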
I think you should check out the PHP code if you can get rid of the persistent connections.
The problem with persistent connections is that the PHP instance keeps them open even after the script exits, until the data has been sent to the client and the PHP instance is freed up for the next request.
Another problem with persistent connections is that some PHP code might leave the connection in a different state than at startup, for example with different session settings, different locales, or leftover temporary tables.
If you can rewrite the code so that each request makes only one or a few mysql_connect calls, and the database handle is passed to the different parts of the code (or kept in a global), the performance impact of losing persistent connections is negligible.
And, of course, there's little harm in doubling max_connections. It's not very useful with PHP anyway, since the PHP/Apache child processes exit quite often and close their handles. max_connections is more useful in other environments.
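A minimal sketch of that one-handle-per-request pattern, using PDO rather than the old mysql_* API (host and credentials are placeholders):

```php
<?php
// One database handle per request, created lazily and shared everywhere.
function getDb(): PDO
{
    static $db = null;
    if ($db === null) {
        // placeholder DSN/credentials – substitute your own
        $db = new PDO(
            'mysql:host=db_server_ip;dbname=db_name',
            'db_user',
            'db_user_pass',
            [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
        );
    }
    return $db; // every caller reuses the same handle
}

// any part of the code can now do:
$rows = getDb()->query('SELECT 1')->fetchAll();
```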
Related
I have a large PHP/MySQL application where infrequently it will break with the error:
"already has more than max_user_connections active connections in xyz.php"
The max_user_connections limit is already set to 100, but I would prefer to check whether there is an issue with my application, such as leaked or unclosed connections, that is causing this.
I've monitored the active processes in phpMyAdmin and cannot see any that particularly hang or seem to be an issue.
Are there any suggestions on how to debug my code or find possible causes for this? Would it be specifically in the xyz.php file mentioned, or is that just because my MySQL connection classes are in there? The error is so temporary, only for seconds, that I'm at a loss on how to hunt the cause down.
Connections beyond the max_user_connections limit are always rejected by MySQL. To deal with this situation:
You could use a connection pool to manage connections to MySQL, so that connections are reused. If you do not want to use a pool, remember to close every connection properly.
You can also turn up max_user_connections in my.cnf, but you should measure your host's capacity to find a good value. Never set an infinite or unlimited value: an overly high number of connections will exhaust the host's resources and cause poor query performance due to contention.
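If you do not use a pool, a sketch of opening and explicitly closing a connection with mysqli (names are placeholders):

```php
<?php
// open, use, and explicitly release the connection when done
$db = new mysqli('db_host', 'db_user', 'db_pass', 'db_name');

$result = $db->query('SELECT 1');
// ... use $result ...
$result->free();  // release the result set
$db->close();     // release the server-side connection immediately
```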
I'm tuning my project's MySQL database. Like many people, I got the suggestion to reduce
wait_timeout
, but it is unclear to me whether this session variable includes query execution time or excludes it. I have set it to 5 seconds, taking into account that I sometimes have queries that run for 3-5 seconds (yes, that is slow; there are only a few of them, but they exist), so MySQL connections have at least 1-2 seconds to be picked up by PHP scripts before MySQL closes them.
The MySQL docs give no clear explanation of when the server starts counting that timeout or whether it includes execution time. Perhaps your experience may help. Thanks.
Does the wait_timeout session variable exclude query execution time?
Yes, it excludes query time.
max_execution_time controls how long the server will keep a long-running query alive before stopping it.
Do you use PHP connection pools? If so, 5 seconds is an extremely short wait_timeout. Make it longer; 60 seconds might be good. Why? The whole point of connection pools is to hold some idle connections open from PHP to MySQL, so PHP can handle requests from users without the overhead of opening connections.
Here's how it works.
1. php sits there listening for incoming requests.
2. A request arrives, and the php script starts running.
3. The php script asks for a database connection.
4. php (the mysqli or PDO module) looks to see whether it has an idle connection waiting in the connection pool. If so, it passes the connection to the php script to use.
5. If there's no idle connection, php creates one and passes it to the php script to use. This connection creation starts the wait_timeout countdown.
6. The php script uses the connection for a query. This stops the wait_timeout countdown and starts the max_execution_time countdown.
7. The query completes. This stops the max_execution_time countdown and restarts the wait_timeout countdown. Repeat steps 6 and 7 as often as needed.
8. The php script releases the connection, and php inserts it into the connection pool. Go back to step 1. The wait_timeout countdown continues for that connection while it sits in the pool.
9. If the connection's wait_timeout expires, php removes it from the connection pool.
If step 9 happens a lot, then step 5 also must happen a lot, and php will respond more slowly to requests. You can make step 9 happen less often by increasing wait_timeout.
(Note: this is simplified: there's also provision for a maximum number of connections in the connection pool.)
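In PHP, the "pool" described above is the persistent-connection mechanism built into mysqli and PDO; a sketch of enabling it (host and credentials are placeholders):

```php
<?php
// PDO: request a persistent connection via ATTR_PERSISTENT
$pdo = new PDO('mysql:host=db_host;dbname=app', 'user', 'pass',
               [PDO::ATTR_PERSISTENT => true]);

// mysqli: prefix the host with "p:" for the same effect
$mysqli = new mysqli('p:db_host', 'user', 'pass', 'app');
```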
MySQL also has an interactive_timeout variable. It's like wait_timeout but used for interactive sessions via the mysql command line program.
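You can inspect both timeouts on your server:

```sql
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
SHOW GLOBAL VARIABLES LIKE 'interactive_timeout';
```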
What happens when a web user makes a request and then abandons it before completion? For example, a user might stop waiting for a report and go to another page. In some cases, the host language processor detects the closing of the user connection, kills the MySQL query, and returns the connection to the pool. In other cases the query either completes or hits the max_execution_time barrier. Then the connection is returned to the pool. In all cases the wait_timeout countdown is only active when a connection is open but has no query active on it.
A MySQL server timeout can occur for many reasons, but happens most often when a command is sent to MySQL over a closed connection. The connection could have been closed by the MySQL server because of an idle-timeout; however, in most cases it is caused by either an application bug, a network timeout issue (on a firewall, router, etc.), or due to the MySQL server restarting.
It is clear from the documentation that it does not include the query execution time. It is basically the maximum idle-time allowed between two activities. If it crosses that limit, server closes the connection automatically.
The number of seconds the server waits for activity on a
noninteractive connection before closing it.
From https://www.digitalocean.com/community/questions/how-to-set-no-timeout-to-mysql :
Configure the wait_timeout to be slightly longer than the application connection pool's expected connection lifetime. This is a good safety check.
Consider changing the wait_timeout value online. This does not require a MySQL restart, and wait_timeout can be adjusted on the running server without incurring downtime. You would issue SET GLOBAL wait_timeout=60 and any new sessions created would inherit this value. Be sure to preserve the setting in my.cnf. Any existing connections will still hit the old value of wait_timeout if the application abandoned the connection. If you do have reporting jobs that do longer local processing while in a transaction, you might consider having such jobs issue SET SESSION wait_timeout=3600 upon connecting.
From the reference manual,
The number of seconds the server waits for activity on a noninteractive connection before closing it.
In elementary terms: how many seconds will you tolerate someone reserving your resources while doing nothing? It is likely that your 'think time' before deciding which action to take is frequently more than 5 seconds. Be liberal. For a web application, some processes take more than 5 seconds to run a query, and you are causing them to be terminated at 5 seconds. The default of 28800 seconds is not reasonable either. 60 seconds could be a reasonable time to expect any web-based process to complete. If your application is also used in a traditional workplace environment, a 15-minute break is not unreasonable. Be liberal to avoid 'bad feedback'.
Our project has been running online for a while now, and monitoring shows that the number of MySQL connections gets particularly high every night. The project is developed in PHP, and I don't know how to control the number of connections; after all, unlike memory-resident languages, PHP can't implement a connection pool at the code level.
By default 151 is the maximum permitted number of simultaneous client connections in MySQL 5.5. If you reach the limit of max_connections you will get the “Too many connections” error when you try to connect to your MySQL server – which means all available connections are in use by other clients.
MySQL permits one extra connection on top of the max_connections limit which is reserved for the database user having SUPER privilege in order to diagnose connection problems. Normally the administrator user has this SUPER privilege. You should avoid granting SUPER privilege to app users.
MySQL uses one thread per client connection, and many active threads are a performance killer. Usually a high number of concurrent connections executing queries in parallel can cause significant slowdown and increase the chances of deadlocks. Prior to MySQL 5.5 it didn't scale well, and although MySQL has been getting better and better since then, if you have hundreds of active connections doing real work (this doesn't count sleeping [idle] connections) the memory usage will grow. Each connection requires per-thread buffers. Also, implicit in-memory temporary tables require more memory, on top of the memory required for global buffers. In addition, each connection may use up to tmp_table_size/max_heap_table_size, although that memory is not allocated immediately per new connection.
Most of the time, an overly high number of connections is the result of either bugs in applications not closing connections properly, or wrong design, where a connection to MySQL is established but the application then gets busy doing something else before closing the MySQL handle. In cases where an application doesn't close connections properly, wait_timeout is an important parameter to tune: discarding unused or idle connections minimizes the number of active connections to your MySQL server, which ultimately helps to avoid the “Too many connections” error. Some systems run fine even with a high number of connected threads, because most of the connections are idle. In general, sleeping threads do not take too much memory – 512 KB or less. Threads_running is a valuable metric to monitor, as it doesn't count sleeping threads – it shows the number of threads actively processing queries – while the Threads_connected status variable shows all connected threads, including idle connections.
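A quick way to compare the two counters mentioned above on a running server:

```sql
-- threads actively executing queries (excludes sleeping connections)
SHOW GLOBAL STATUS LIKE 'Threads_running';

-- all connected threads, idle ones included
SHOW GLOBAL STATUS LIKE 'Threads_connected';
```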
If you are using a connection pool on the application side, max_connections should be bigger than the maximum number of connections on the pool side. Connection pooling is also a good alternative if you are expecting a high number of connections. Now, what should the recommended value for max_connections be?
There is no single right answer to that question. It depends on the amount of RAM available and the memory usage of each connection. Increasing the max_connections value increases the number of file descriptors that mysqld requires. Note: there is no hard limit on the maximum max_connections value, so you have to choose max_connections wisely, based on your workload, the number of simultaneous connections to the MySQL server, and so on. In general, allowing too high a max_connections value is not recommended: in case of locking conditions or slowdowns, all those connections running at once can raise huge contention issues. With active connections using temporary/memory tables, memory usage can go even higher. On systems with small RAM, or with a hard limit on connections enforced on the application side, small max_connections values like 100-300 work well. On systems with 16 GB of RAM or more, max_connections=1000 is a good idea (of course, the per-connection buffers should have good/default values). Some systems run with up to 8k max connections, but such systems usually go down in case of load spikes.
To deal with this, Oracle and the MariaDB team implemented a thread pool. With a properly configured thread pool you may expect throughput to NOT decrease for up to a few thousand concurrent connections, at least for some types of workload.
NOTE: Beware, that in MySQL 5.6 a lot of memory is allocated when you set max_connections value too high.
The web startup I'm working at gets a spike in the number of concurrent web users from 5,000 on a normal day to 10,000 on weekends. This Saturday the traffic was so high that we started getting a "too many connections" error intermittently. Our CTO fixed this by simply increasing the max_connections value on the database servers. I want to know if using one persistent connection is a better solution here?
i.e. instead of using:
$db = new mysqli('db_server_ip', 'db_user', 'db_user_pass', 'db_name');
We use:
$db = new mysqli('p:db_server_ip', 'db_user', 'db_user_pass', 'db_name');
We're already using multiple MySQL servers and as well as multiple web servers (Apache + mod_php).
You should share the database connection across multiple web requests. Every process running on the application server should get its own MySQL connection, which is kept open as long as the process is running and reused for every web request that comes in.
From the PHP Docs:
Persistent connections are good if the overhead to create a link to your SQL server is high.
And
Note, however, that this can have some drawbacks if you are using a database with connection limits that are exceeded by persistent child connections. If your database has a limit of 16 simultaneous connections, and in the course of a busy server session, 17 child threads attempt to connect, one will not be able to.
Persistent connections aren't the solution to your problem. Your problem is that your burst usage is beyond the limits set in your database configuration, and potentially your infrastructure. What your CTO did, increasing the connection limit, is a good first step. Now you need to monitor the resource utilization on your database servers to make sure they can handle the increased load from additional connections. If they can, you're fine. If you start seeing the database server running out of resources, you'll need to set up additional servers to handle the burst in traffic.
Too Many Connections
Cause
This error is caused by
a lot of simultaneous connections, or
old connections not being released soon enough
You already did SHOW VARIABLES LIKE "max_connections"; and increased the value.
Permanent Connections
If you use permanent or persistent database connections, you always have to take the MySQL directive wait_timeout into account. Closing won't work, but you can lower the timeout so that used resources become available again sooner. Use netstat to find out exactly what's going on, as described here: https://serverfault.com/questions/355750/mysql-lowering-wait-timeout-value-to-lower-number-of-open-connections.
Do not forget to free your result sets, to avoid wasting DB server resources.
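For example, with the old mysql_* API this answer refers to, freeing and closing looks like this (names are placeholders):

```php
<?php
$link = mysql_connect('db_host', 'db_user', 'db_pass');
mysql_select_db('db_name', $link);

$result = mysql_query('SELECT id FROM users', $link);
while ($row = mysql_fetch_assoc($result)) {
    // ... process $row ...
}

mysql_free_result($result); // free the result set
mysql_close($link);         // and release the connection itself
```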
Be advised to use temporary, short-lived connections instead of persistent connections.
Introducing persistence pretty much goes against the whole web request-response flow, which is stateless. You know: one pconnect request causes an 8-hour persistent connection to dangle around on the DB server, waiting for a next request which never comes. Multiply that by the number of users and look at your resources.
Temporary connections
If you use mysql_connect() - do not forget to mysql_close().
Set the new_link parameter to false and pass the CLIENT_INTERACTIVE flag.
You might also adjust interactive_timeout, which helps stop old connections from blocking up the work.
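With mysqli, the CLIENT_INTERACTIVE flag is passed through real_connect, which makes the session use interactive_timeout instead of wait_timeout (names are placeholders):

```php
<?php
$db = mysqli_init();
// the MYSQLI_CLIENT_INTERACTIVE flag selects interactive_timeout
$db->real_connect('db_host', 'db_user', 'db_pass', 'db_name',
                  3306, null, MYSQLI_CLIENT_INTERACTIVE);
```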
If the problem persists, scale
If the problem remains, then decide to scale.
Either by adding another DB server and putting a proxy in front,
(MySQL works well with HAProxy) or by switching to an automatically scaling cloud-service.
I really doubt that your stack is configured correctly.
How can this be a problem, when you are already running multiple MySQL servers, as well as multiple web servers? Please describe your load balancing setup.
Sounds like Apache 2.2 + mod_php + MySQL + unknown balancer, right?
Maybe try
Apache 2.4 + mod_proxy_fcgi + PHP 5.5/5.6 (php-fpm) + MySQL (InnoDb) + HAProxy or
Nginx + PHP 5.5/5.6 (php-fpm) + MySQL (InnoDb) + HAProxy.
In a load test of our PHP-based web application, we can easily reach our DB's hard limit of 150 max connections. We run Kohana with ORM to manage the DB connections.
This causes connection exceptions (and thus failed transactions), mysql_pconnect seems to perform even worse.
We're looking for a solution to have graceful degradation under load. Options considered:
A DB connection pool (uh, that's not possible with PHP right?)
Re-try a failed connection when the failure was due to max connections reached
2 seems logical, but Kohana/ORM manages the DB connection process. Can we configure this somehow?
Is there something I'm not thinking of?
EDIT
This is an Amazon AWS RDS database instance; Amazon sets the 150 limit for me, and the server is most certainly configured correctly. I just want to ensure graceful degradation under load with whichever database I'm using. Clearly I can always upgrade the DB and get a higher connection limit, but I want to guard against a failure situation in case we hit our limit unexpectedly. Graceful degradation under load.
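To make option 2 concrete, here is a hypothetical retry-with-backoff sketch outside of Kohana (host, credentials, and attempt limits are placeholders); whether Kohana/ORM exposes a hook for this is exactly my question:

```php
<?php
// retry the connection when MySQL returns error 1040 (Too many connections)
function connectWithRetry(int $maxAttempts = 5): PDO
{
    $delayMs = 100;
    for ($attempt = 1; ; $attempt++) {
        try {
            return new PDO('mysql:host=db_host;dbname=app', 'user', 'pass');
        } catch (PDOException $e) {
            // 1040 = ER_CON_COUNT_ERROR; rethrow anything else, or give up
            if ($attempt >= $maxAttempts || (int) $e->getCode() !== 1040) {
                throw $e;
            }
            usleep($delayMs * 1000); // back off before the next attempt
            $delayMs *= 2;
        }
    }
}
```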
When you say load testing, I am assuming you are pushing roughly 150 concurrent requests and not that you are hitting the connection limit because you make multiple connections within the same request. If so, check out mysql_pconnect. To enable it in Kohana, simply enable persistent = true in the config/database file for your connections.
If that doesn't work, then you'll have to find an Amazon product that allows more connections since PHP does not share resources between threads.
This answers your question about PHP database connection pooling.
If the limit is 150 for connections (default for max_connections is 151), you are most likely running mysql without a config file
You will need to create a config file to raise that number
Create /etc/my.cnf and put in these two lines
[mysqld]
max_connections=300
You do not have to restart mysql (you could if you wish)
You could just run this MySQL command to raise it dynamically
SET GLOBAL max_connections = 300;
UPDATE 2012-04-06 12:39 EDT
Try using mysql_pconnect instead of mysql_connect. If Kohana can be configured to use mysql_pconnect, you are good to go.