I've already read all the questions/answers on this subject here on Stack Overflow, but unfortunately none of them has resolved my problem.
In the last few days the MySQL error "too many connections" keeps showing up in the website logs, and it hangs the entire website for every client. In fact, it hangs all the websites on the server.
So here are my questions / remarks:
There are about 50 different client databases, plus 2 that are common to all clients.
pconnect is already set to FALSE for all connections.
In php.ini the variable "allowpersistent" is ON. Does this make MySQL connections persistent even if I set pconnect = FALSE? (I can't change "allowpersistent" myself; I would have to ask the hosting company.)
There are 3 files that load databases: one loads the client's DB and the other two load the databases common to all clients. They are called in the constructor of every model, but CI supposedly closes MySQL connections when it is done with them AND ignores any "load->database" already loaded.
"db->close" apparently does nothing, because this->db->database keeps its value after I close it :P
Threads_connected is up to 1000 as I write this, and the website is down :(
The MySQL configuration has max_connections = 1000; can it be increased further? I see no change in free memory, so what could go wrong?
Should I change to PDO? I'm using dbdriver "mysqli".
Should I ask the hosting company to lower the MySQL variable 'wait_timeout', so connections are closed more quickly?
Should I update CodeIgniter? I have version 3.1.4 and it's now at 3.1.9.
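For reference, the relevant CodeIgniter 3 settings live in application/config/database.php; this is a sketch matching the values described above (other keys omitted for brevity):

```php
// application/config/database.php (CodeIgniter 3)
// Sketch of the settings discussed above; not the full config array.
$db['default']['dbdriver'] = 'mysqli';
$db['default']['pconnect'] = FALSE;   // no persistent connections
```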
Many thanks for your help!
In our case the solution was lowering the MySQL variable "wait_timeout" from 8 hours (the default, wtf!?) to 180 seconds, and it could still be lowered further if needed. This had to be done by the hosting company, as we do not have root access to our server.
All the other things I mentioned in the question were not working, like "pconnect = false" and "db->close".
"Threads_connected" is now always under 100 or 200, instead of almost 1000 as before this fix.
My team wrestled with this problem for two days, and there are lots of people on the web asking for solutions without any (working) answers.
Cheers :)
I also encountered the same problem. When I checked the current number of active connections in MySQL with this query:
show processlist
there were many connections in sleep mode. After searching around, I found out that:
When a database connection is created, a session is also created on the database server at the same time; if that connection and session are not closed properly, the connection sits in sleep mode until the wait time is over.
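One way to see those sleeping sessions directly is to query information_schema; a sketch (it requires the PROCESS privilege to see other users' sessions):

```sql
-- List sleeping sessions, longest-idle first
SELECT id, user, host, db, time
FROM information_schema.processlist
WHERE command = 'Sleep'
ORDER BY time DESC;
```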
To resolve this problem, I did as #Pedro suggested and changed wait_timeout to 180 by running this command in MySQL:
SET GLOBAL wait_timeout = 180;
With this method, you do not need to restart the MySQL service for the change to take effect.
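Note that SET GLOBAL only lasts until the next server restart. To make the change permanent it also needs to go into the MySQL configuration file; the path and the inclusion of interactive_timeout below are assumptions that vary by setup:

```ini
# /etc/my.cnf (path varies by distribution)
[mysqld]
wait_timeout        = 180
interactive_timeout = 180
```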
You can read more here: https://www.znetlive.com/blog/what-is-sleep-query-in-mysql-database/
This is my first nervous question on SO because all of my questions in the last decade have already had excellent answers.
I have searched all the terms that I can think of with no hits that appear to address the problem - either on SO or Google generally...
For the last 15 years we have used phpMyAdmin to administer a linux MySQL manufacturing database of about 100 tables, some of which are now 50 to 300 million records each. Ongoing development is constant, and manual lookup of various tables to correct erroneous data, or to modify table indexes etc are frequent as the size of the data grows. All of this is internal to our fast network - i.e. accessed via our intranet. Most queries are short, and the database runs responsively at a low average loading.
As may be understood, DBA mistakes happen. For example to speed up a slow query, an additional index may be added to a large table without enough thought. At this point, the re-indexing may take 30 minutes, and the manufacturing applications (written in php for Apache2 also on a linux server) come to an immediate halt. This is not appreciated in the factory.
And here is the real problem. I cannot then, from my development PC, open a second instance of phpMyAdmin to kill the unwanted MySQL process while it is still busy, which is the very time I need to most :-) The browser just waits for the phpMyAdmin page to load until after the long query is finished.
If I happen to have a second instance of phpMyAdmin open already, I can look up the process and kill it satisfactorily. Normally, my only resort is to restart Apache2 and/or MySQL on the server. This is too drastic and requires restarting many client machines as well in order to re-establish the necessary manufacturing connections to the database.
I have seen reference on SO that Apache will queue requests from the same IP address in the case of php programs using file-based session management, but it seems to me that I have no control over how phpMyAdmin uses its sessions.
I also read some time ago that if multiple CPU cores were brought into play on the database server, multiple simultaneous connections could be made despite one such query still being busy. I cannot now find any reference to this concept.
Does anyone please know how to permit or force a second phpMyAdmin connection from the same PC to the same database server using phpMyAdmin while the first instance of phpMyAdmin is still tied up with a previous slow query?
Many thanks, Jem Stanners
Try MySQL Workbench:
https://dev.mysql.com/downloads/workbench/
Try upgrading the server's RAM and processors.
Consider cleaning the tables and deleting rows if possible.
Consider shifting to Oracle (though cost needs to be considered).
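Given the file-based PHP session locking mentioned in the question, a more direct workaround is to bypass phpMyAdmin entirely and kill the runaway statement from the mysql command-line client on the server (or from Workbench), since that path is not serialized by PHP sessions. A sketch, where 1234 stands in for the actual process Id shown in the list:

```sql
-- Find the long-running re-index; the Time column gives seconds elapsed
SHOW FULL PROCESSLIST;

-- Kill it by Id (1234 is a placeholder for the Id from the list above)
KILL 1234;
```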
This question already has answers here:
How to solve MySQL max_user_connections error
Why am I getting this error on my website http://elancemarket.com/ again and again?
Error establishing a database connection
SQL ERROR [ mysqli ]
User elancema_user already has more than 'max_user_connections' active connections [1203]
Warning: mysqli::mysqli(): (HY000/1203): User elancema_user already has more than 'max_user_connections' active connections in /home/elancemarket/public_html/ask/qa-include/qa-db.php on line 66
I am on a very expensive VPS!
Your best bet is to increase max_user_connections. For a MySQL instance serving three different web apps (raw PHP, WordPress, phpBB), you probably want a value of at least 60.
Issue this command and you'll find out how many global connections you have available:
show global variables like '%connections%'
You can find out how many connections are in use at any given moment like this:
show status like '%connected%'
You can find out what each connection is doing like this:
show full processlist
I would try for a global value of at least 100 connections if I were you. Your service provider ought to be able to help you if you don't have access to do this; it needs to be set in MySQL's my.cnf configuration file. Don't set it too high, or you run the risk of your MySQL server process gobbling up all your RAM.
A second approach allows you to allocate those overall connections to your different MySQL users. If you have a different MySQL username for each of your web apps, this approach will work for you. It is written up here: https://www.percona.com/blog/2014/07/29/prevent-mysql-downtime-set-max_user_connections/
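A hedged sketch of that per-user allocation (MySQL 5.7+ ALTER USER syntax; the usernames below are placeholders, not ones from the question):

```sql
-- Cap each application's user separately so one runaway app
-- cannot starve the others of connections
ALTER USER 'wordpress_user'@'localhost' WITH MAX_USER_CONNECTIONS 40;
ALTER USER 'phpbb_user'@'localhost' WITH MAX_USER_CONNECTIONS 20;
```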
The final approach to controlling this problem is more subtle. You're probably using the Apache web server as the underlying tech. You can reduce the number of Apache tasks running at the same time to, paradoxically, increase throughput. That's because Apache queues up requests, and a few tasks efficiently banging through the queue are often faster than many tasks, because there's less contention. It also requires fewer MySQL connections, which will solve your immediate problem. That's explained here: Restart Mysql automatically when ubuntu on EC2 micro instance kills it when running out of memory
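As a sketch of that Apache-side throttle, assuming mpm_prefork and purely illustrative numbers (tune them to your RAM and per-process footprint):

```apache
# Fewer workers means fewer concurrent MySQL connections;
# excess requests wait in Apache's queue instead.
<IfModule mpm_prefork_module>
    StartServers            5
    MaxRequestWorkers       25
    MaxConnectionsPerChild  1000
</IfModule>
```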
By the way, web apps like WordPress use a persistent connection pool. That is, they establish connections to the MySQL database, hold them open, and reuse them. If your apps are busy, each connection's lifetime ought to be several minutes. (Based on the oversimplified statement that "db connections are created and deleted in fractions of a second", your hosting provider's support tech doesn't understand the subtlety of this part of web app operation.)
The server throws you this error:
SQL ERROR [ mysqli ] User database_user already has more than 'max_user_connections' active connections [1203]
That means the specific database user has already used up all of its allowed concurrent connections at that moment, and more cannot be processed.
The only fix that requires no programming changes is to raise the max_user_connections value in the MySQL configuration. The configuration file is usually /etc/my.cnf on Linux.
Other solutions might be:
Running fewer queries on the website itself.
Creating separate database users for different projects if they currently share the same database user.
You can check the current value for the user by running the command SHOW VARIABLES LIKE 'max_user_connections';.
I'm troubleshooting a series of recurring errors: WordPress database error MySQL server has gone away for query ...
I think I've found a solution here, but it's a few years old, and I want to better understand the MySQL wait_timeout and its relationship to WordPress before I start monkeying with core files or reconfiguring my server. (I'm on a virtual dedicated server, so I have the option to change wait_timeout on the server.)
I checked by running SHOW VARIABLES; from phpMyAdmin and wait_timeout is currently set to 35. That seems low to me, but I don't fully understand what it does. I'm considering changing it to 600.
My main question is whether this is a responsible thing to do or not. But I think that broader question can be divided into smaller parts:
1. Do I have the option to override this setting with PHP (Wordpress)?
2. What is the optimal setting for a medium-large WordPress site?
3. Are there any Wordpress configuration options or filters that I could use to change the setting without modifying core files?
Thanks.
wait_timeout is basically the time MySQL will hold a non-interactive connection open before closing it.
So increasing it to 600 seconds could solve your problem. However, if you set it to 600 seconds and lots of people run a slow page on your site at the same time, you can get to a point where MySQL starts refusing connections; Apache will then queue requests until it starts refusing them too, and your server takes a dive.
My suggestion would be to find out why a single request is taking over 35 seconds, because, to be honest, that seems a rather long load time for a single page of a blog.
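One way to find those slow requests is MySQL's slow query log; a my.cnf sketch with an illustrative path and threshold:

```ini
[mysqld]
# Log every statement slower than 5 seconds (threshold and path are
# illustrative; lower long_query_time once the worst offenders are fixed)
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 5
```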
I'm having a problem that I hope someone can help me out with.
Currently, every now and again we receive an error when our scripts (Java and PHP) try to connect to the localhost MySQL database:
Host 'myhost' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'.
This issue appears mainly to occur in the early hours of the morning. After a lot of searching to figure out why this may be occurring, I have finally come to the conclusion that it may be because our hosting company runs their backup processes around that time. My theory is that during this backup process (which is also our busiest period) we end up using all our connections, and so this error occurs.
I have talked to our hosts about changing the times these backups occur, but they have stated that this is not possible, and that those are simply the times the backups must start to ensure they finish in time for the day (even though we have informed them that our critical period is at the precise times the backups occur).
The things I have connecting to the server are:
PHP website
PHP files run using cron jobs
A couple of Java applications that run as socket listeners, listening for incoming port connections and using the MySQL database to check user credentials and outstanding messages.
We typically have anywhere from 300-600 socket connections open at any one time, and average activity on these is about 1-3 requests per second.
I have also installed monit and munin with some MySQL plugins on the server in the hope that they may help auto-resolve this issue; however, they do not seem to resolve it.
My questions are:
Is there something I can do to automatically poll the MySQL database so that, if this occurs, I can automatically flush the hosts to clear the block?
Is this potentially even related to the server backup? It seems a coincidence that it happens 95% of the time during the period the backups occur.
Any other ideas that may help. Links to other websites, or questions I could put to our host to help out.
We are currently running on a PHP Version 5.2.6-1+lenny9 server with Apache.
If any more information is required to help, please let me know. Thanks.
UPDATE:
I am operating on a shared virtual host and am pretty sure I close my website connections, as I have this code in my database class:
function __destruct() {
#mysql_close($this->link);
}
I'm pretty sure I'm not using persistent connections in my PHP script, as I connect to the db with the #mysql_connect command.
UPDATE:
So I changed the max_connections limit from 100 to 200, and I changed the mysql.persistent variable from On to Off in php.ini. Now, for two nights running, the server has gone down, mainly the connection to the MySQL database. I have 1GB of RAM on the server, but usage never seems to get close to that. Also, looking at my munin logs, the connections never seem to hit the 200 mark, and yet I get errors in my log files something like:
SQLException: Too many connections
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
SQLException: null, message from server: "Can't create a new thread (errno 12); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug.
SQLState :: SQLException: HY000, VendorError :: SQLException: 1135
We've had a similar problem with our large ecommerce installation using MySQL as a backend. I'd suggest you raise the "max_connections" setting of the MySQL instance, then (if necessary) raise the number of file descriptors using "ulimit" before starting MySQL (we use "ulimit -n 32768" in /etc/init.d/mysql).
It's been suggested I post an answer to this question, although I never really got it sorted.
In the end I implemented a Java connection-pooling class, which enabled me to share connections while maintaining an upper limit on the number of connections I wanted. It was also suggested that I increase the RAM and the number of max connections. I did both of these things, although they were just band-aids for the problem. We also ended up moving hosting providers, as the one we were with was not very cooperative.
After these minor changes I haven't noticed the issue occur for at least 8 months, which is good enough for me.
Another suggestion over time has been to also implement a thread-pooling facility; however, current demand does not require it.
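The pooling approach described above can be sketched as follows. This is a minimal illustration in Python rather than the author's Java, and the ConnectionPool class and its factory argument are hypothetical names for the purpose of the sketch; a real pool would open actual MySQL connections in the factory and handle timeouts and dead connections.

```python
import queue


class ConnectionPool:
    """Minimal fixed-size pool sketch: reuses idle connections and
    never opens more than max_size of them."""

    def __init__(self, factory, max_size):
        self._factory = factory      # callable that opens a new connection
        self._idle = queue.Queue()   # connections currently not in use
        self._created = 0            # how many we have opened so far
        self._max_size = max_size

    def acquire(self):
        try:
            return self._idle.get_nowait()   # reuse an idle connection
        except queue.Empty:
            if self._created < self._max_size:
                self._created += 1
                return self._factory()       # open a new one, under the cap
            return self._idle.get()          # cap reached: wait for a release

    def release(self, conn):
        self._idle.put(conn)                 # hand it back for reuse


# Usage with a stand-in factory (a real app would connect to MySQL here)
pool = ConnectionPool(factory=object, max_size=2)
a = pool.acquire()
b = pool.acquire()
pool.release(a)
c = pool.acquire()   # reuses `a` instead of opening a third connection
print(c is a)        # True
```

The key property is the hard upper bound: once max_size connections exist, callers block until one is released, which is exactly what keeps Threads_connected from climbing toward the server's limit.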
We have an application that is comprised of a couple of off the shelf PHP applications (ExpressionEngine and XCart) as well as our own custom code.
I did not do the actual analysis, so I don't know precisely how it was determined, but I am not surprised to hear that too many MySQL connections are being left unclosed. (I am not surprised because I have been seeing significant memory leakage on our dev server: over the course of a day or two, starting from 100MB upon initial boot, the entire gig of RAM gets consumed, and very little of it is cached.)
So, how do we go about determining precisely which PHP code is the culprit? I've got prior experience with XDebug, and have suggested that, once we've gotten our separate staging environment reasonably stable, we retrofit XDebug on dev and use it to do some analysis. Is this reasonable, and/or does anybody have more specific or additional suggestions?
You can use the
SHOW PROCESSLIST
SQL command to see what processes are running. That will tell you the username, host, database, etc that are in use by each process. That should give you some idea what's going on, especially if you have a number of databases being accessed.
More here: https://dev.mysql.com/doc/refman/8.0/en/show-processlist.html
This should not be caused by PHP code, because MySQL connections are supposed to be closed automatically.
cf. http://www.php.net/manual/function.mysql-connect.php :
The link to the server will be closed as soon as the execution of the script ends, unless it's closed earlier by explicitly calling mysql_close().
Some suggestions :
does your developer technically have direct access to your production MySQL server? If yes, then they have probably just left their MySQL manager open :)
do you have some daily batch processes? If yes, maybe there are some zombie processes in memory
PHP automatically closes any MySQL connections when the script ends. The only reason a PHP web application would have too many unclosed MySQL connections is that either 1) you're using connection pooling, or 2) there's a bug in the MySQL server or the connector.
but if you really want to look at your code to find where it's connecting, see http://xdebug.org/docs/profiler
As others said, PHP terminates MySQL connections created through mysql_connect or the mysqli/PDO equivalents.
However, you can create persistent connections with mysql_pconnect. It will look for existing connections open and use those; if it can't find one, it will open a new one. If you had a lot of requests at once, it could have caused loads of connections to open and stay open.
You could lower the maximum number of connections, or lower the timeout for persistent connections. See the comments at the bottom of the man page for more details.
I used to run a script that polled SHOW STATUS for the thread count, and I noticed that using mysql_pconnect always encouraged high numbers of threads. I found that very disconcerting, because I then couldn't tell when my connection rate was actually dropping. So I made sure to centralize all the places where mysql_connect() was called and eliminated mysql_pconnect().
The next thing I did was look at the connection timeouts and adjust them to more like 30 seconds, so I adjusted my my.cnf with
connect-timeout=30
so I could actually see the number of connections drop off. The number of connections you need open depends on how many Apache workers you're running times the number of database connections each one opens.
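That workers-times-connections rule can be spelled out as a quick sanity check (all numbers are illustrative, not recommendations):

```python
# Rough sizing arithmetic for max_connections
apache_workers = 25      # Apache MaxRequestWorkers / MaxClients
conns_per_worker = 2     # DB connections each request opens (e.g. app + shared DB)
headroom = 10            # cron jobs, admin tools, monitoring

needed = apache_workers * conns_per_worker + headroom
print(needed)  # 60
```

If that figure comes out above your MySQL max_connections, either the worker count or the per-request connection count has to come down.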
The other thing I started doing was adding a note to my queries in order to spot them in SHOW PROCESSLIST or mytop, I would add a note column to my results like:
$q = "SELECT '".__FILE__.'.'.__LINE__."' as _info, * FROM table ...";
This would show me the file issuing the query when I looked at mytop, and it didn't foil the MySQL query cache like using
/* __FILE__.'.'.__LINE__ */
at the start of my query would.
I suppose another couple of things I can do, with regard to the general memory issue (as opposed to MySQL specifically), and particularly within the context of our own custom code, would be to wrap our code with calls to one or the other of the following PHP built-in functions:
memory_get_usage
memory_get_peak_usage
In particular, since I am currently working on logging from some custom code, I can log the memory usage while I'm at it.