MySQL "Gone Away" Error with Persistent PHP Connection - php

I'm hosting a website locally on a WAMP stack. I recently switched the PHP connection to be persistent by adding array(PDO::ATTR_PERSISTENT => true) to the PDO constructor options argument. I've noticed a material drop in the response time as a result (hooray!).
The downside seems to be a "MySQL server has gone away" error when the machine wakes up. This never happened before changing the connection style.
Is it possible that the cached connection is closed, but continues to be returned? Is it possible to reset a PDO connection or reset the connection pool via PHP inside a catch block?

I've kicked this around for a few days, and based on the prevalence of similar issues on the web, this appears to be a deficiency of PDO that prevents efficient management of persistent connections.
Answers to the obvious questions:
PHP 5.4.22
Driver settings in php.ini have persistent connections turned on
Session limits are not bounded (set to -1)
Pool limits are not bounded (set to -1)
I can recreate the issue by doing the following:
Issue the following statements on the MySQL database.
set @@GLOBAL.interactive_timeout := 10;
set @@GLOBAL.wait_timeout := 10;
Issue a few requests against the server to generate some cached connections. You can see the thread count increase compared to doing this with non-persistent connections via:
echo $conn->getAttribute(PDO::ATTR_SERVER_INFO);
Wait at least 10 seconds and start issuing more requests. You should start receiving 'gone away' messages.
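For reference, here's a minimal sketch of the kind of request script those steps assume (DSN and credentials are placeholders):
<?php
// Persistent PDO connection; the pool is keyed by DSN, user and options.
$conn = new PDO(
    'mysql:host=localhost;dbname=test;charset=utf8',
    'user',
    'pass',
    array(PDO::ATTR_PERSISTENT => true, PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)
);
// Server status string, which includes the current thread count.
echo $conn->getAttribute(PDO::ATTR_SERVER_INFO);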
The issue is that MySQL closes the connections, and subsequent calls to the PDO constructor return these closed connections without reconnecting them.
This is where PDO is deficient. There is no way to force a connection open and no good way to even detect state.
The way I'm currently getting around this (admittedly a bit of a hack) is issuing these MySQL statements
set @@GLOBAL.interactive_timeout := 86400;
set @@GLOBAL.wait_timeout := 86400;
These variables are set to 28800 seconds (8 hours) by default. Note that you'll want to restart Apache to clear out cached connections, or you won't notice a difference until all connections in the pool have been cycled (I have no idea how or when that happens). I chose 86400, which is 24 hours, and since I'm on this machine daily this should cover the basic need.
After this update I let my machine sit for at least 12 hours, which was how long it had sat previously when I started getting 'gone away' messages. It looks like the problem is solved.
I've been thinking that while I can't force a connection open, it may be possible to remove a bad connection from the pool. I haven't tried this, but a slightly more elegant solution might be to detect the 'gone away' message and then set the object to NULL, telling PHP to destroy the resource. If the database logic made a few attempts like this (there'd have to be a limit in case a more severe error occurred), it might help keep these errors to a minimum.
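A rough, untested sketch of that idea (the connect() helper is hypothetical and the 'gone away' string match is crude):
<?php
function connect() {
    // Placeholder DSN/credentials; same persistent options as above.
    return new PDO('mysql:host=localhost;dbname=test', 'user', 'pass',
        array(PDO::ATTR_PERSISTENT => true, PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
}

$conn = connect();
$attempts = 0;
do {
    try {
        $stmt = $conn->query('SELECT 1');   // any cheap statement to exercise the link
        break;                              // it worked, the connection is alive
    } catch (PDOException $e) {
        if (strpos($e->getMessage(), 'gone away') === false || ++$attempts >= 3) {
            throw $e;                       // different error, or retry limit reached
        }
        $conn = null;                       // hope PHP destroys the dead resource
        $conn = connect();                  // ask the pool for a fresh connection
    }
} while (true);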

For what it's worth, I'm looking into using persistent connections on php-fpm 7.3 behind nginx, trying to reproduce that behaviour with a static pool of 1 child, and so far I can't.
Via SHOW PROCESSLIST in a separate terminal I can see the database close the persistent connection 5 seconds after a request that does a SELECT, but the next request just opens a new one and works just as well. On the other hand, if I bombard the API with a load-testing tool, the same connection is maintained and all requests succeed.
Maybe it was because you used Apache + mod_php instead of the php-fpm worker pool, or maybe there's been a genuine fix between PHP 5.4 and 7.3.
Versions tested:
PHP-FPM: 7.3.13
mysqlnd (underlying PDO_MYSQL driver): 5.0.12-dev - 20150407
MySQL Server: 5.7.29 and 8.0.19
MariaDB Server (MySQL drop-in replacement): 10.1.43
P.S. Thanks for laying out the reproduction steps and your thought process; it was invaluable.

Yes, you will need to reconnect if the connection closes.
http://brady.lucidgene.com/2013/04/handling-pdo-lost-mysql-connection-error/

Related

How should I handle a "too many connections" issue with mysql?

The web startup I'm working at gets a spike in the number of concurrent web users from 5,000 on a normal day to 10,000 on weekends. This Saturday the traffic was so high that we started getting a "too many connections" error intermittently. Our CTO fixed this by simply increasing the max_connections value on the database servers. I want to know if using a persistent connection is a better solution here.
i.e. instead of using:
$db = new mysqli('db_server_ip', 'db_user', 'db_user_pass', 'db_name');
We use:
$db = new mysqli('p:db_server_ip', 'db_user', 'db_user_pass', 'db_name');
We're already using multiple MySQL servers as well as multiple web servers (Apache + mod_php).
You should share the database connection across multiple web requests. Every process running on the application server should get its own MySQL connection, which is kept open as long as the process is running and reused for every web request that comes in.
From the PHP Docs:
Persistent connections are good if the overhead to create a link to your SQL server is high.
And
Note, however, that this can have some drawbacks if you are using a database with connection limits that are exceeded by persistent child connections. If your database has a limit of 16 simultaneous connections, and in the course of a busy server session, 17 child threads attempt to connect, one will not be able to.
Persistent connections aren't the solution to your problem. Your problem is that your burst usage is beyond the limits set in your database configuration, and potentially your infrastructure. What your CTO did, increasing the connection limit, is a good first step. Now you need to monitor the resource utilization on your database servers to make sure they can handle the increased load from additional connections. If they can, you're fine. If you start seeing the database server running out of resources, you'll need to set up additional servers to handle the burst in traffic.
Too Many Connections
Cause
This error is caused by
a lot of simultaneous connections, or
by old connections not being released soon enough
You already did SHOW VARIABLES LIKE "max_connections"; and increased the value.
Permanent Connections
If you use permanent or persistent database connections, you always have to take the MySQL directive wait_timeout into account. Closing won't work, but you could lower the timeout so that used resources become available again sooner. Use netstat to find out what's going on, exactly as described here: https://serverfault.com/questions/355750/mysql-lowering-wait-timeout-value-to-lower-number-of-open-connections.
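For example (the value here is only illustrative), you can check and lower the timeout at runtime:
SHOW VARIABLES LIKE "wait_timeout";
set @@GLOBAL.wait_timeout := 300;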
Do not forget to free your result sets to reduce wasted DB server resources.
Be advised to use temporary, short-lived connections instead of persistent connections.
Introducing persistence goes pretty much against the whole web request-response flow, because it is stateless. You know: one pconnect request causes an 8-hour persistent connection to dangle around at the DB server, waiting for the next request, which never comes. Multiply that by the number of users and look at your resources.
Temporary connections
If you use mysql_connect() - do not forget to mysql_close().
Set new_link to false and pass the CLIENT_INTERACTIVE flag.
You might adjust interactive_timeout, which helps stop old connections from blocking up the work.
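Roughly, with the old mysql_* API this refers to (credentials are placeholders, untested):
<?php
// 4th argument: new_link = false (reuse an existing link if one is already open).
// 5th argument: client flags; MYSQL_CLIENT_INTERACTIVE makes the server apply
// interactive_timeout to this session instead of wait_timeout.
$link = mysql_connect('localhost', 'db_user', 'db_user_pass', false, MYSQL_CLIENT_INTERACTIVE);
$result = mysql_query('SELECT 1', $link);
// Free the result set and close the link as soon as the work is done.
mysql_free_result($result);
mysql_close($link);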
If the problem persists, scale
If the problem remains, then decide to scale.
Either by adding another DB server and putting a proxy in front (MySQL works well with HAProxy), or by switching to an automatically scaling cloud service.
I really doubt that your setup is correctly configured.
How can this be a problem, when you are already running multiple MySQL servers, as well as multiple web servers? Please describe your load balancing setup.
Sounds like Apache 2.2 + mod_php + MySQL + unknown balancer, right?
Maybe try
Apache 2.4 + mod_proxy_fcgi + PHP 5.5/5.6 (php-fpm) + MySQL (InnoDb) + HAProxy or
Nginx + PHP 5.5/5.6 (php-fpm) + MySQL (InnoDb) + HAProxy.

MongoDB - Too Many Connection Error

We have developed a chat module using Node.js and MongoDB sharding, and have gone live on the production server. But today it reached 20,000 connections in MongoDB and we are getting a "Too many connections" error in the logs. After that we restarted the Node server and started again; now it is back to normal. But we need to know how to solve this problem immediately.
Is there any configuration in MongoDB to kill connections that are not being used, or to set an expiry time when establishing the connection?
Please help us to close this issue.
Regards,
Kumaran
You're probably not running into a MongoDB issue. There's a cap on the number of connections you can make to MongoDB, which is usually roughly equal to the maximum number of file descriptors available to it.
It sounds like there is a bug in your code (likely) or mongoose (less likely) that either creates more connections than it closes or never closes connections in the first place. In Java for example creating a new "Mongo" class instance for each query would result in this sort of problem but I don't work with node.js/mongoose so I do not know what the JS equivalent of that is.
Keep an eye on mongostat and check whether the connection count always increases or whether it sometimes decreases. If it's the former, your code never releases connections for whatever reason. If it's the latter, you're simply creating them faster than idle connections are disconnected. That's usually due to doing something heavyweight (like the driver initialising its connection pool) for every query rather than once.

MongoDB Optimal Performance - How Many Persistent Connections

I have a mongodb server in production serving on an EC2 instance. According to the mongodb official documentation, persistent DB connections should ALWAYS be used in production. I've been experimenting with about 50 persistent connections and was getting frequent connection errors (approx 33% of the time) while testing. I'm using this code:
$pid = 'db_'.rand(1,50);
$mongo = new Mongo("mongodb://{$user}:{$pass}@{$host}", array('persist' => $pid));
Some background on the application, it's a link tracking application that is still ramping up - and is in the range of 500 - 1k writes per hour, nothing too crazy... yet.
I'm wondering if I simply need to allow more persistent connections? How does one determine the right balance of persistent connections versus server resources available?
Thanks in advance everyone.
The persist value is no longer supported as of the most recent driver (1.2.0).
Truth is, it was never really clear what it did in typical Apache+PHP setups. There are several comments on Google Groups and elsewhere asking for details, but I did not find any evidence that persist or persistent was ever tested in any depth.
Instead, it's all been replaced by connection pooling "out of the box". The connection pooling has obviously been through some changes within the 1.2 line with the addition of the MongoPool class.
There is still no detailed explanation of how the pooling works with Apache, but at least you don't have to worry about persist.
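If I remember the 1.2 line correctly, the pool is exposed through static methods on MongoPool; a hedged sketch (the size is arbitrary):
<?php
// Cap the pool at 50 connections per server (illustrative value).
MongoPool::setSize(50);
$m = new Mongo("mongodb://{$user}:{$pass}@{$host}");
// Dump the current state of the pools when debugging connection counts.
var_dump(MongoPool::info());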
Now despite all of this mess, I have handled 1000 times that traffic on a single MongoDB server via the PHP driver without lots of connection problems.
Are you catching the exceptions?
Can you provide more details about the exact exception?
There may be a code solution.
Are you opening a new connection for each PHP page request, or using a connection pool with 50 persistent connections? If you're opening a new connection each time then you might be quickly running out of resources.
Each connection uses an additional thread on the server, so you could be hitting a limit on the number of threads or network connections; check your server logs in /var/lib/mongodb for errors.
If you're using the official MongoDB PHP driver, then as far as I know it should handle connection pooling for you automatically. If you're connecting to Mongo from 50 separate clients, then consider putting a queue in front of Mongo to buffer the writes.
http://php.net/manual/en/mongo.connecting.php
Without persistent connections, x1000: it takes approximately 18 seconds to execute.
Persistent: it takes less than 0.02 seconds.
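That comparison boils down to a loop along these lines (a sketch; the persist ID 'x' is arbitrary and only applies to pre-1.2 drivers):
<?php
// 1000 throwaway connections: every iteration pays the full connection handshake.
for ($i = 0; $i < 1000; $i++) {
    $m = new Mongo();
}

// 1000 constructions sharing one persistent link keyed by the persist ID.
for ($i = 0; $i < 1000; $i++) {
    $m = new Mongo('mongodb://localhost:27017', array('persist' => 'x'));
}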

MongoDB connection gradually increase. How to resolve this problem?

I'm currently using MongoDB on my development server, using the PHP driver 1.1.4 and connecting to MongoDB with the persist option.
But somehow my DB connection count is gradually increasing, and those connections never seem to end; perhaps there is no timeout?
I'm worried that if I deploy my source, a full connection pool might prevent people from using MongoDB at all.
How can I set a shorter timeout, or otherwise resolve this gradually increasing connection problem, even though there is only one user?
Each instance of your running code uses its own pool of persistent connections.
Are the operations assigned to each connection finishing quickly? There might be slow queries in your code. Do share a snapshot of mongostat from your running instance; that would help. If mongostat shows everything is fine, then it might be a PHP MongoDB driver bug.
See :
http://groups.google.com/group/mongodb-user/browse_thread/thread/ac005be798b6adea?pli=1
What I would suggest is to use non-persistent connections and explicitly close them at the end of your script.
http://php.net/manual/en/mongo.close.php
There is a small performance hit, since persistent connections are faster, but that can be ignored for moderate traffic.
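In other words, something like this (localhost and the db/collection names are placeholders):
<?php
$m = new Mongo();                                  // non-persistent connection
$collection = $m->selectDB('mydb')->selectCollection('mycol');
$collection->insert(array('ts' => new MongoDate()));
$m->close();                                       // release the connection explicitly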

Debug MySQLs "too many connections"

I'm trying to debug an error I got on a production server. Sometimes MySQL gives up and my web app can't connect to the database (I'm getting the "too many connections" error). The server has a few thousand visitors a day, and at night I run a few cron jobs which sometimes do some heavy MySQL work (looping through 50,000 rows, inserting and deleting duplicates, etc.).
The server runs both apache and mysql on the same machine
MySQL has a pretty standard configuration (default max_connections)
The web app is using PHP
How do I debug this issue? Which log files should I read? How do I find the "evil" script? The strange thing is that if I restart the MySQL server it starts working again.
Edit:
Different apps/scripts use different connectors to the database (mostly mysqli but also Zend_Db).
First, use innotop (Google for it) to monitor your connections. It's mostly geared to InnoDB statistics, but it can be set to show all connections, including those not in a transaction.
Otherwise, the following are helpful: Use persistent connections / connection pools in your web apps. Increase your max connections.
It's not necessarily a long-running SQL query.
If you open a connection at the start of a page, it won't be released until the PHP script terminates - even if there is no query running.
You should add some stats to your pages to find out the slowest ones, and the most-hit ones. Closing the connection early would help, if possible.
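For example, with mysqli, grab everything you need up front and release the connection before the slow parts of the page (the table and query are made up):
<?php
$db = new mysqli('db_server_ip', 'db_user', 'db_user_pass', 'db_name');
$result = $db->query('SELECT id, title FROM articles LIMIT 10');
$rows = $result->fetch_all(MYSQLI_ASSOC);
$result->free();
$db->close();   // give the connection back before the slow rendering work

// ...expensive templating or external API calls happen here without
// holding a MySQL connection open...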
Try using persistent connections (mysql_pconnect); it will help reduce the server load caused by constantly opening and closing MySQL connections.
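i.e. something along these lines (placeholder credentials):
<?php
// mysql_pconnect() reuses an existing persistent link if one is already open.
$link = mysql_pconnect('localhost', 'db_user', 'db_user_pass');
mysql_select_db('db_name', $link);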
The starting point is probably to use mysqladmin processlist to get a list of the processes on the mysql server. The next step depends on what you find.
