I have a large PHP/MySQL application that infrequently breaks with the error:
"already has more than max_user_connections active connections in xyz.php"
The max_user_connections limit is already set to 100, but I would prefer to find out whether an issue in my application, such as leaked or unclosed connections, is causing this.
I've monitored the active processes in PHPMyAdmin and cannot see any that hang or look problematic.
Are there any suggestions on how to debug my code or track down possible causes? Would the cause be specifically in the xyz.php file mentioned, or does the error appear there just because my MySQL connection classes live in that file? The error is so fleeting, lasting only seconds, that I'm at a loss as to how to hunt the cause down.
Once the number of connections exceeds the max_user_connections setting, MySQL will reject new connections. To deal with this situation:
You could use a connection pool to manage MySQL connections, so that connections are reused. If you do not want to use a pool, remember to close every connection properly, as in the sketch below.
You can also raise max_user_connections in my.cnf, but measure your host's capacity first to choose a sensible value. Never set an infinite or unlimited value: an overly high number of connections will exhaust the host's resources and degrade query performance through contention.
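A minimal sketch of the "close it properly" advice, assuming mysqli; the credentials and table name are placeholders:

<?php
// Open the connection only when it is actually needed.
$db = new mysqli('localhost', 'db_user', 'db_pass', 'db_name');
if ($db->connect_error) {
    die('Connect failed: ' . $db->connect_error);
}

$result = $db->query('SELECT id, name FROM users LIMIT 10');
if ($result !== false) {
    while ($row = $result->fetch_assoc()) {
        // ... process $row ...
    }
    $result->free();   // free the result set as soon as you are done with it
}

$db->close();          // close explicitly instead of waiting for script shutdown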
My host has a really, really low number of max connections for a database user. This is the error my users are getting:
User 'username_here' has exceeded the 'max_user_connections' resource (current value: 15).
I don't think it's in my power to raise that value, unless I upgrade to a much more expensive plan, so I'm looking for a way to use these 15 connections effectively.
My way of handling connections is the following: I connect to the database in a script I load at the start of every page load and then I pass that connection to a function that runs the queries. I thought I could minimize the time a connection is open by opening the connection inside the query function and closing it right after the return statement, is that fine or am I making things more complicated for no reason?
As a last resort, I was thinking of putting the connection inside of a try/catch and attempt to reconnect every few seconds for a few more times. Would that be something wise to do, or is it even worse?
Here's how you can optimize the number of connections:
Make sure that you are not using persistent connections anywhere. This is the easiest way to lose track of open connections and the most common reason for running out of available connections. In mysqli, a persistent connection is opened by prepending p: to the hostname when connecting.
Make sure that you are only opening a single connection per HTTP request. Don't open and close connections repeatedly, as this can quickly get out of hand and will hurt your application's performance. Have a single global connection that you pass around to the functions that need it (see the sketch after this list).
Optimize your queries so that they are processed faster and free up the connection quicker. This also applies to optimizing indexes and getting rid of the N+1 problem. (From experience I can say that PDO helps a lot in refactoring your code to avoid poorly designed queries.)
If you need to perform some other time-demanding task in the same process, do all your SQL operations first and then close the connection. Same applies to opening the connection. Open it only when you know you will need it.
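A minimal sketch of the single-connection-per-request pattern described above, assuming mysqli; the function, table, and credential names are hypothetical:

<?php
// Opened once at the start of the HTTP request.
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // throw on errors
$db = new mysqli('localhost', 'db_user', 'db_pass', 'db_name');

// Every function receives the existing connection instead of opening its own.
function getUser(mysqli $db, $id)
{
    $stmt = $db->prepare('SELECT id, name FROM users WHERE id = ?');
    $stmt->bind_param('i', $id);
    $stmt->execute();
    return $stmt->get_result()->fetch_assoc();
}

$user = getUser($db, 42);
$db->close(); // one open and one close per request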
If you find yourself running into the 'max_user_connections' limit, it means that your web server is not configured properly. In an ideal scenario the MySQL connections would be unlimited, but on shared hosting this limitation has to be put in place to protect against resource abuse (either accidental or deliberate). However, the number of available MySQL connections should match the number of available server threads. This can be a very opinionated topic, but I would say that if your application needs to perform some SQL operation on every request, then the number of available server connections should not exceed the number of available MySQL connections. On Apache, you can calculate the number of possible connections as shown in this link.
On a reasonably designed application, even with 15 concurrent MySQL connections you should still be able to handle a satisfactory number of requests per second. For example, if each request holds a connection for 100 ms, each connection can serve 10 requests per second, so 15 connections can handle about 150 requests per second.
Our project has been running in production for a while now, and monitoring shows that the number of MySQL connections gets particularly high every night. The project is developed in PHP, and I don't know how to control the number of connections; after all, PHP is not a memory-resident language that can implement a connection pool at the code level.
By default, 151 is the maximum permitted number of simultaneous client connections in MySQL 5.5. If you reach the limit of max_connections, you will get the "Too many connections" error when you try to connect to your MySQL server, which means all available connections are in use by other clients.
MySQL permits one extra connection on top of the max_connections limit, which is reserved for a database user with the SUPER privilege, in order to diagnose connection problems. Normally the administrator user has this SUPER privilege. You should avoid granting the SUPER privilege to application users.
MySQL uses one thread per client connection, and many active threads are a performance killer. A high number of concurrent connections executing queries in parallel can cause significant slowdown and increase the chance of deadlocks. MySQL did not scale well prior to 5.5 and has been getting better since, but if you have hundreds of active connections doing real work (not counting sleeping/idle connections), memory usage will grow: each connection requires per-thread buffers, implicit in-memory temporary tables require additional memory, and global buffers need memory on top of that. Each connection may also use up to tmp_table_size/max_heap_table_size, although this memory is not allocated immediately when a new connection opens.
Most of the time, an overly high number of connections is the result of either bugs in applications not closing connections properly, or of wrong design, such as establishing a connection to MySQL and then leaving it open while the application is busy doing something else before closing the handle. In cases where an application doesn't close connections properly, wait_timeout is an important parameter to tune: it discards unused or idle connections to minimize the number of active connections to your MySQL server, which ultimately helps to avoid the "Too many connections" error. Some systems run fine with even a high number of connected threads, because most of the connections are idle, and sleeping threads generally do not take much memory (512 KB or less). Threads_running is a valuable metric to monitor because it excludes sleeping threads: it shows how many queries are actually executing at the moment, while the Threads_connected status variable counts all connected threads, including idle ones.
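A small sketch for watching these two counters from PHP; Threads_running and Threads_connected are the real MySQL status variable names, while the connection details are placeholders:

<?php
// Compare actively working threads against all connected (including idle) ones.
$db = new mysqli('localhost', 'db_user', 'db_pass');
$result = $db->query("SHOW GLOBAL STATUS WHERE Variable_name IN ('Threads_running', 'Threads_connected')");
while ($row = $result->fetch_assoc()) {
    echo $row['Variable_name'] . ' = ' . $row['Value'] . PHP_EOL;
}
$result->free();
$db->close();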
If you are using a connection pool on the application side, max_connections should be bigger than the maximum number of connections in the pool. Connection pooling is also a good alternative if you are expecting a high number of connections. Now, what should the recommended value for max_connections be?
There is no single right answer to that question. It depends on the amount of RAM available and the memory usage of each connection. Increasing the max_connections value increases the number of file descriptors that mysqld requires. Note that there is no hard limit on the maximum max_connections value, so you have to choose it wisely, based on your workload, the number of simultaneous connections to the MySQL server, and so on. In general, allowing too high a max_connections value is not recommended: in case of locking conditions or slowdowns, all those connections running at once can raise a huge contention issue, and if active connections use temporary or in-memory tables, memory usage can climb even higher. On systems with little RAM, or with a hard cap on connections enforced on the application side, small max_connections values like 100-300 are appropriate. On systems with 16 GB of RAM or more, max_connections=1000 is a good idea, provided the per-connection buffers keep sensible default values. Some systems run with up to 8k max connections, but such systems usually go down during load spikes.
To deal with this, Oracle and the MariaDB team implemented a thread pool. With a properly configured thread pool you may expect throughput NOT to decrease for up to a few thousand concurrent connections, at least for some types of workload.
NOTE: Beware that in MySQL 5.6, a lot of memory is allocated when you set the max_connections value too high.
The web startup I'm working at gets a spike in the number of concurrent web users, from 5,000 on a normal day to 10,000 on weekends. This Saturday the traffic was so high that we started getting a "too many connections" error intermittently. Our CTO fixed this by simply increasing the max_connections value on the database servers. I want to know whether using one persistent connection is a better solution here.
i.e. instead of using:
$db = new mysqli('db_server_ip', 'db_user', 'db_user_pass', 'db_name');
We use:
$db = new mysqli('p:db_server_ip', 'db_user', 'db_user_pass', 'db_name');
We're already using multiple MySQL servers as well as multiple web servers (Apache + mod_php).
You should share the database connection across multiple web requests. Every process running on the application server should get its own MySQL connection, which is kept open as long as the process is running and reused for every web request that comes in.
From the PHP Docs:
Persistent connections are good if the overhead to create a link to your SQL server is high.
And
Note, however, that this can have some drawbacks if you are using a database with connection limits that are exceeded by persistent child connections. If your database has a limit of 16 simultaneous connections, and in the course of a busy server session, 17 child threads attempt to connect, one will not be able to.
Persistent connections aren't the solution to your problem. Your problem is that your burst usage is beyond the limits set in your database configuration, and potentially your infrastructure. What your CTO did, increasing the connection limit, is a good first step. Now you need to monitor the resource utilization on your database servers to make sure they can handle the increased load from additional connections. If they can, you're fine. If you start seeing the database server running out of resources, you'll need to set up additional servers to handle the burst in traffic.
Too Many Connections
Cause
This error is caused by either
a lot of simultaneous connections, or
old connections not being released soon enough
You already did SHOW VARIABLES LIKE "max_connections"; and increased the value.
Permanent Connections
If you use permanent or persistent database connections, you always have to take the MySQL wait_timeout directive into account. Closing such a connection won't work, but you can lower the timeout so that used resources become available again sooner (see the sketch at the end of this section). Use netstat to find out exactly what is going on, as described here: https://serverfault.com/questions/355750/mysql-lowering-wait-timeout-value-to-lower-number-of-open-connections.
Do not forget to free your result sets to avoid wasting database server resources.
Be advised to use temporary, short-lived connections instead of persistent connections.
Introducing persistence goes against the whole web request-response flow, which is stateless. You know how it goes: one pconnect request causes an 8-hour persistent connection to dangle around at the database server, waiting for a next request that never comes. Multiply that by the number of users and look at your resources.
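A minimal sketch of lowering the timeout from PHP; 60 seconds is an illustrative value, and note that changing the server-wide setting with SET GLOBAL requires the SUPER privilege:

<?php
// Lower wait_timeout for this session so an idle connection is dropped sooner.
$db = new mysqli('p:localhost', 'db_user', 'db_pass', 'db_name'); // persistent
$db->query('SET SESSION wait_timeout = 60'); // seconds a connection may idle

// With a persistent connection this setting sticks to the pooled link,
// which is exactly the kind of leftover state the persistent-connection
// caveats elsewhere in this thread warn about.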
Temporary connections
If you use mysql_connect(), do not forget to call mysql_close().
Set new_link to false and pass the CLIENT_INTERACTIVE flag, as in the sketch below.
You might also adjust interactive_timeout, which helps to stop old connections from blocking up the work.
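A minimal sketch of those two points, using the legacy mysql extension (removed in PHP 7, so this only applies to older codebases); the credentials are placeholders:

<?php
// new_link = false reuses an identical already-open link instead of opening
// a new one; MYSQL_CLIENT_INTERACTIVE makes the server apply interactive_timeout
// rather than wait_timeout when deciding to drop the idle connection.
$link = mysql_connect('localhost', 'db_user', 'db_pass', false, MYSQL_CLIENT_INTERACTIVE);

// ... run queries ...

mysql_close($link); // do not rely on script shutdown to release the connection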
If the problem persists, scale
If the problem remains, then decide to scale.
Either add another DB server and put a proxy in front of it (MySQL works well with HAProxy),
or switch to an automatically scaling cloud service.
I really doubt that your stack is correctly configured.
How can this be a problem, when you are already running multiple MySQL servers, as well as multiple web servers? Please describe your load balancing setup.
Sounds like Apache 2.2 + mod_php + MySQL + unknown balancer, right?
Maybe try
Apache 2.4 + mod_proxy_fcgi + PHP 5.5/5.6 (php-fpm) + MySQL (InnoDb) + HAProxy or
Nginx + PHP 5.5/5.6 (php-fpm) + MySQL (InnoDb) + HAProxy.
I am using MySQL and PHP with 2 application servers and 1 database server.
With the increase in the number of users (around 1,000 by now), I'm getting the following error:
SQLSTATE[08004] [1040] Too many connections
The parameter max_connections is set to 1000 in my.cnf and mysql.max_persistent is set to -1 in php.ini.
There are at most 1,500 Apache processes running at a time, since the Apache MaxClients parameter is set to 750 and we have 2 application servers.
Should I raise the max_connections to 1500 as indicated here?
Or should I set mysql.max_persistent to 750 (we use PDO with persistent connections for performance reasons since the database server is not the same as the application servers)?
Or should I try something else?
Thanks in advance!
I think your connections aren't closing fast enough, so they stack up until the default timeout is reached. I had the same problem and solved it with wait_timeout.
You can try the following in my.cnf:

max_connections = 1000       # max simultaneous connections
max_user_connections = 100   # max simultaneous connections per user
wait_timeout = 60            # seconds a connection may sit idle

MySQL will then terminate any connection that has been idle for more than 60 seconds.
I think you should check the PHP code to see if you can get rid of the persistent connections.
The problem with persistent connections is that the PHP instance keeps them open even after the script exits, until the data has been sent to the client and the PHP instance is freed for the next request.
Another problem with persistent connections is that some PHP code might leave the socket with settings different from those at startup, such as a different locale or leftover temporary tables.
If you can rewrite the code so that each request makes only one or a few mysql_connect() calls, and the database handle is passed to the different parts of the code or kept in a GLOBAL, the performance impact of losing persistent connections is negligible.
And, of course, there's little harm in doubling max_connections. It's not very useful with PHP anyway, as the PHP/Apache child processes exit quite often and close their handles. The max_connections setting matters more in other environments.
Trying to separate my LAMP application out onto two servers, one for PHP and one for MySQL. So far the application connects locally through a Unix socket and works fine.
I'm worried about the number of connections I can establish if it goes over the network. I have been benchmarking TCP connections on Unix, and I know that you cannot exceed a certain number of connections per second, beyond which everything stalls due to lack of resources (be it sockets, file handles, or whatever). I also understand that PHP does not implement connection pooling, so for each page load a new connection over the network must be made. I also looked into pconnect for PHP, and it seems to bring more problems.
I know this is a very very common setup (php+mysql), can anyone provide some typical usage and statistics they get out of their servers? Thanks!
The problem is not related to running out of connections allowed by MySQL. The main problem is that Unix cannot create and tear down TCP connections quickly enough. Sockets end up in TIME_WAIT, and you have to wait out that period before more sockets are freed up to connect again. The two screenshots below clearly show this pattern: MySQL keeps up to a certain point and then pauses because the web server has run out of sockets; after a certain amount of time has passed, the web server is able to make new connections again.
(Screenshots: http://img35.imageshack.us/img35/3809/picture4k.png and http://img35.imageshack.us/img35/4580/picture2uyw.png)
I think the limit is 65535, the size of the TCP port range. So you'd have to hold 65535 connections at the same time to hit that limit, since a regular MySQL connection closes automatically.
mysql_connect()
Note: The link to the server will be closed as soon as the execution of the script ends, unless it's closed earlier by explicitly calling mysql_close().
But if you're using a persistent mysql connection, then you can run into trouble.
Using persistent connections can require a bit of tuning of your Apache and MySQL configurations to ensure that you do not exceed the number of connections allowed by MySQL.
Each MySQL connection actually uses several megabytes of RAM for various buffers, and takes a while to set up, which is why MySQL is limited to 100 concurrent open connections by default (151 since MySQL 5.5). You can raise that limit, but it's better to spend your time trying to limit concurrent connections, via the methods below.
Beware of raising the connection limit too high, as you can run out of memory (which, I believe, crashes mysql), or you may push important things out of memory. e.g. MySQL's performance is highly dependent on the OS automatically caching the data it reads from disk in memory; if you set your connection limit too high, you'll be contending for memory with the cache.
If you don't raise your connection limit, you'll run out of connections long before you run out of sockets, file handles, etc. If you do increase your connection limit, you'll run out of RAM long before you run out of sockets, file handles, etc.
Regarding limiting concurrent connections:
Use a connection pooling solution. You're right, there isn't one built in to PHP, but there are plenty of standalone ones out there to choose from. This saves expensive connection setup/tear down time.
Only open database connections when you absolutely need them. In my current project, we automatically open a database connection when the first query is issued, and not a moment before; we also release the connection after we've done all our database work, but before the page's HTML is actually generated (see the first sketch after this list). The shorter the period of time you hold connections open, the fewer connections will be open simultaneously.
Cache what you can in a lighter-weight solution like memcached. My current project temporarily caches pages displayed to anonymous users (since every anonymous user gets the same HTML in the end, so why bother running the same database queries all over again a few scant milliseconds later?), meaning no database connection is necessary at all (see the second sketch after this list). This is especially useful for bursts of anonymous traffic, like a front-page digg.
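A minimal sketch of the lazy open/early release pattern, assuming mysqli; the class name and credentials are hypothetical:

<?php
// Hypothetical wrapper: opens the connection on the first query, not at page start.
class LazyDb
{
    private $db = null;

    public function query($sql)
    {
        if ($this->db === null) {
            // First query issued: connect now, and not a moment before.
            $this->db = new mysqli('localhost', 'db_user', 'db_pass', 'db_name');
        }
        return $this->db->query($sql);
    }

    public function release()
    {
        // Call this once all database work is done, before the HTML is generated.
        if ($this->db !== null) {
            $this->db->close();
            $this->db = null;
        }
    }
}

Route all of the page's queries through one such object and call release() as soon as the last query has run; pages that never query never connect at all.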
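And a sketch of the memcached idea, using PHP's Memcached extension; the key scheme, the render_page_from_database() helper, and the 60-second TTL are all illustrative assumptions:

<?php
// Serve anonymous users a cached copy of the page, skipping MySQL entirely.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$key  = 'page:' . $_SERVER['REQUEST_URI'];
$html = $cache->get($key);

if ($html === false) {
    // Cache miss: build the page; this is where the database queries happen.
    $html = render_page_from_database(); // hypothetical page-building function
    $cache->set($key, $html, 60);        // cache the HTML for 60 seconds
}

echo $html;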