Why have I got MySQL sleeping connections when not using mysql_pconnect() - php

An Apache Bench test revealed that high throughput on the database caused the following...
Apache threads stuck in "Sending Reply" state, all related to a particular PHP file (as seen in Apache extended status).
MySQL sleeping connections with the same user as used by the PHP file.
Note: Apache Bench was used from a remote location as a mechanism for stressing the database only, i.e. a script that just connected to the DB and ran 5 queries per request.
Running the same Apache Bench tests, but introducing a mysql_close() at the end of the script solved the problem. What I'd like to understand is why this happens.
A popular theory we hold internally is that:
The increased throughput on the database somehow prevented Apache from serving its requests properly. A buffer somewhere, either in MySQL or at the OS level got filled up.
The Apache requests therefore got stuck in a "Sending Reply" state, and because there was no mysql_close() at the end of the PHP file, each database connection remained open - probably only to be closed once MySQL's connection timeout (wait_timeout) was reached. To be clear, we are not using mysql_pconnect(); we are using mysql_connect().
As above, whilst we've 'solved' the problem we'd love to get to the bottom of this with a solid answer rather than an educated guess.
Has anyone experienced this before and found a solution? Any tricks, thoughts, or methods for identifying the cause would be appreciated.
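For reference, a minimal sketch of the kind of change that fixed it (legacy ext/mysql, as used in the question; host, credentials, and queries are placeholders):

<?php
// Connect, run the benchmark queries, then close explicitly. Without the
// mysql_close(), the connection is only released when PHP tears the script
// down, and under load it can linger in "Sleep" until wait_timeout expires.
$link = mysql_connect('localhost', 'bench_user', 'secret');
mysql_select_db('bench_db', $link);
for ($i = 0; $i < 5; $i++) {
    mysql_query('SELECT 1', $link); // stand-in for the five real queries
}
mysql_close($link); // the line whose absence left the sleeping connections

To watch the sleepers accumulate during a bench run, SHOW PROCESSLIST (or mysqladmin processlist) on the database box is enough: connections belonging to the script's user sit in the "Sleep" command state.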

Related

MySQL "Sending to client" processes backing up

I'm currently trying to fix an issue with our production server not being able to handle SQL queries.
Looking at the process list, MySQL is taking 120+ seconds to complete queries that, when I run them myself through HeidiSQL, complete in less than a second. So why would the same queries take significantly longer (and in most cases time out) when they come from PHP than when they go straight through from HeidiSQL?
You are probably using a persistent connection, which can cause problems like this if the previous PHP code that used the connection was stopped mid-execution and never finished.
read more here: What are the disadvantages of using persistent connection in PDO
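One quick way to test that theory is to force persistence off when the handle is created. A minimal PDO sketch (hypothetical DSN and credentials):

<?php
// PDO::ATTR_PERSISTENT defaults to false, but setting it explicitly rules
// out an ini setting or framework default quietly re-enabling persistence.
$pdo = new PDO(
    'mysql:host=localhost;dbname=app', // placeholder DSN
    'user',
    'secret',
    array(PDO::ATTR_PERSISTENT => false)
);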
Turns out the problem was that the server where PHP was running (at a different hosting provider, as we're migrating to the cloud) had a throttled network connection and was unable to handle all of the data being sent back from MySQL. Turning on caching on the PHP side solved the problem.

I am receiving this error: Error : (1226) User 'qe' has exceeded the 'max_user_connections' resource (current value: 30)

So, after researching this a lot, I am seeking help from somebody who has encountered this and found a way out.
We developed a PTC script for a client and it worked fine, but as the user base grew it started displaying the error below:
Error : (1226) User 'qe' has exceeded the 'max_user_connections' resource (current value: 30)
After seeking help, some people said it's a server-related issue, while others pointed to the database design of the script.
I'm looking for a way to solve this problem and have already tried tons of things.
We're using GoDaddy hosting at the moment; they increased the limit from 30 to 50, but I'm sure the problem will show up again.
There's no problem with the database, the problem is in how you handle database connections from your software.
The way your script is set up, every connection to your web server also opens a connection to MySQL. That's not the scenario you want.
Raising the limit won't fix the issue; it will just postpone the next error. What you should do is use persistent connections.
One of the reasons why using php-fpm instead of server APIs such as mod_php is preferred is that a set number of PHP processes is booted and a pool of connections to services is created.
The flow would be the following:
use php-fpm. Apache and nginx can use the FastCGI interface to talk to php-fpm processes
raise a relatively low number of child processes for php-fpm. This shouldn't be overly large; the default config usually works out. I'll guess you don't run a hexa-core system, so 4-6 child processes should be fine
use persistent MySQL connections
What does this do? Your server accepts the request and hands it to php-fpm, which processes it when a worker becomes free. Each process uses one connection to MySQL, which means you can never hit a hard limit like the one you're hitting now.
If your server is busy, it should queue up the requests until PHP is capable of handling them. Whether you use Apache or nginx, this approach will work well.
If your site is busy, it's likely that the web server can accept connections and serve static content faster than PHP can process dynamic content. In that case you have the option of adding another physical machine (or more) that runs php-fpm. Instructing your web server to round-robin between the machines that serve PHP is trivial for both of the mentioned web servers.
Bottom line is that you want to utilize your resources in an optimal way. Opening and closing MySQL connections on every request isn't optimal. Pooling connections is.
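To make that concrete, a minimal sketch of the persistent-connection side (hypothetical credentials; the "p:" host prefix is how ext/mysqli requests a persistent connection, and pm.max_children is the php-fpm pool setting referred to above):

<?php
// Assumes a small static php-fpm pool, e.g. pm = static and
// pm.max_children = 6 in the pool config, so each worker holds at most
// one MySQL connection.
// The "p:" prefix tells mysqli to reuse an idle connection for the same
// host/user/password instead of opening a new one per request.
$db = mysqli_connect('p:localhost', 'qe', 'secret', 'ptc_db'); // placeholders
if (!$db) {
    die('Connect failed: ' . mysqli_connect_error());
}
// Run queries as usual; do NOT call mysqli_close() on a persistent link -
// the worker keeps it alive for the next request.

With 6 workers, the user can never hold more than 6 MySQL connections, comfortably below even the original limit of 30.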

MongoDB - Too Many Connection Error

We developed a chat module using Node.js and MongoDB sharding and went live on our production server. But today it reached 20,000 connections in MongoDB and we are getting "too many connections" errors in the logs. After restarting the Node server it is back to normal, but we need to know how to solve this problem for good.
Is there any configuration in MongoDB to kill a connection when it is not being used, or to set an expiry time when establishing the connection?
Please help us close this issue.
Regards,
Kumaran
You're probably not running into a MongoDB issue. There's a cap on the number of connections you can make to MongoDB, which is usually roughly equal to the maximum number of file descriptors available to it.
It sounds like there is a bug in your code (likely) or in mongoose (less likely) that either creates more connections than it closes, or never closes connections in the first place. In Java, for example, creating a new "Mongo" class instance for each query would result in this sort of problem, but I don't work with node.js/mongoose, so I don't know what the JS equivalent of that is.
Keep an eye on mongostat and check whether the connection count always increases or sometimes decreases. If it's the former, your code never releases connections for whatever reason. If it's the latter, you're simply creating them faster than idle connections are disconnected. That's usually due to doing something heavyweight (like the driver initialising its connection pool) for every query rather than once.
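The asker's stack is Node.js/mongoose, which the answer above doesn't cover, but the shape of the fix is the same in any driver: construct one client at startup and share it, rather than one per query. A PHP sketch for illustration only (assuming the mongodb/mongodb library; database and collection names are hypothetical):

<?php
require 'vendor/autoload.php';

// Anti-pattern: `new MongoDB\Client(...)` per request/query makes the
// connection count grow with traffic. Create the client once and reuse it;
// the driver maintains its own connection pool behind it.
$client = new MongoDB\Client('mongodb://localhost:27017'); // created once

function latestMessages(MongoDB\Client $client)
{
    // Reuses the shared client's pool instead of opening fresh connections.
    return $client->chat->messages->find(array('room' => 'lobby'))->toArray();
}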

MySQL database needs a flush tables every now and again. Can I script something to resolve this?

I'm having a problem that I hope someone can help me out with.
Currently, every now and again we receive an error when our scripts (Java and PHP) try to connect to the localhost mysql database.
Host 'myhost' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'.
This issue appears to mainly occur in the early hours of the morning. After a lot of searching to figure out why this may be occurring, I have come to the conclusion that it may be because our hosting company runs its backup processes around this time. My theory is that during this backup process (which is also our busiest period) we end up using all our connections, and so this error occurs.
I have talked to our hosts about changing the times these backups occur, but they have stated that this is not possible: the backups simply start at those times to ensure they are finished in time for the day (even though we have informed them that our critical period is at precisely the times the backups occur).
The things I have connecting to the server are:
PHP website
PHP files run as cron jobs
A couple of Java applications that run as socket listeners, listening for incoming port connections and using the MySQL database to check user credentials and outstanding messages.
We typically have anywhere from 300-600 socket connections open at any one time, and the average activity on these is about 1-3 requests per second.
I have also installed monit and munin with some MySQL plugins on the server in the hope they might help auto-resolve this issue, but they do not seem to.
My questions are:
Is there something I can do to automatically poll the MySQL server so that, if this occurs, I can automatically flush the hosts to clear the block?
Is this potentially even related to the server backup? It seems too much of a coincidence that it happens 95% of the time during the period the backups occur.
Any other ideas that may help? Links to other websites, or questions I could put to our host, would be welcome.
We are currently running on a PHP Version 5.2.6-1+lenny9 server with Apache.
If any more information is required to help, please let me know. Thanks.
UPDATE:
I am operating on a shared virtual host and am pretty sure I close my website connections, as I have this code in my database class:
function __destruct() {
    mysql_close($this->link);
}
I'm pretty sure I'm not using persistent connections in my PHP script, as I connect to the db with the mysql_connect command.
UPDATE:
So I changed the max_connections limit from 100 to 200 and changed the mysql.allow_persistent setting from On to Off in php.ini. For two nights running since then, the server has gone down - mainly the connection to the MySQL database. I have 1GB of RAM on the server, but usage never seems to get close to that. Also, looking at my munin logs, the connections never seem to hit the 200 mark, and yet I get errors in my log files something like:
SQLException: Too many connections
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
SQLException: null, message from server: "Can't create a new thread (errno 12); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug.
SQLState :: SQLException: HY000, VendorError :: SQLException: 1135
We've had a similar problem with our large ecommerce installation using MySQL as a backend. I'd suggest you raise the "max_connections" setting of the MySQL instance, then (if necessary) raise the number of file descriptors using "ulimit" before starting MySQL (we use "ulimit -n 32768" in /etc/init.d/mysql).
It's been suggested that I post an answer to this question, although I never really got it sorted.
In the end I implemented a Java connection-pooling class, which enabled me to share connections whilst maintaining an upper limit on the number of connections I wanted. It was also suggested that I increase the RAM and the max connections limit. I did both of these things, although they were just band-aids for the problem. We also ended up moving hosting providers, as the ones we were with were not very cooperative.
After these minor changes I haven't noticed this issue occur for at least 8 months, which is good enough for me.
Other suggestions over time have been to also implement a thread-pooling facility; however, current demand does not require it.
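For the auto-poll/auto-flush idea raised in the question, a minimal cron-able sketch (legacy ext/mysql to match the asker's code; credentials are placeholders, and FLUSH HOSTS requires the RELOAD privilege):

<?php
// Run this from cron on the database server itself: a host blocked by
// max_connect_errors can no longer connect remotely, but a local
// socket connection still can.
$link = mysql_connect('localhost', 'admin', 'secret');
if (!$link) {
    die('watchdog: ' . mysql_error());
}
mysql_query('FLUSH HOSTS', $link); // same effect as `mysqladmin flush-hosts`
mysql_close($link);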

Debug MySQL's "too many connections"

I'm trying to debug an error I got on a production server. Sometimes MySQL gives up and my web app can't connect to the database (I'm getting the "too many connections" error). The server has a few thousand visitors a day, and during the night I run a few cron jobs which sometimes do some heavy MySQL work (looping through 50,000 rows, inserting and deleting duplicates, etc.).
The server runs both Apache and MySQL on the same machine
MySQL has a pretty standard configuration (default max connections)
The web app is using PHP
How do I debug this issue? Which log files should I read? How do I find the "evil" script? The strange thing is that if I restart the MySQL server, it starts working again.
Edit:
Different apps/scripts are using different connectors to the database (mostly mysqli, but also Zend_Db)
First, use innotop (Google for it) to monitor your connections. It's mostly geared toward InnoDB statistics, but it can be set to show all connections, including those not in a transaction.
Otherwise, the following are helpful: use persistent connections / connection pools in your web apps, and increase your max connections.
It's not necessarily a long-running SQL query.
If you open a connection at the start of a page, it won't be released until the PHP script terminates - even if no query is running.
You should add some stats to your pages to find the slowest and most-hit ones. Closing the connection early would help, if possible.
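A sketch of "closing early" (legacy ext/mysql to match the era of the question; render_page() is a hypothetical stand-in for the slow template work):

<?php
$link = mysql_connect('localhost', 'user', 'secret'); // placeholders
mysql_select_db('app', $link);
$result = mysql_query('SELECT id, title FROM posts', $link);
$rows = array();
while ($row = mysql_fetch_assoc($result)) {
    $rows[] = $row;
}
mysql_close($link); // release the connection before the slow part
render_page($rows); // hypothetical: no MySQL connection is held during rendering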
Try using persistent connections (mysql_pconnect); it will help reduce the server load caused by constantly opening and closing MySQL connections.
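A one-line example (placeholder credentials):

<?php
// Same signature as mysql_connect(), but PHP reuses an idle connection
// previously opened by this process with the same host/user/password.
$link = mysql_pconnect('localhost', 'user', 'secret');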
The starting point is probably to use mysqladmin processlist to get a list of the processes on the mysql server. The next step depends on what you find.
