I am randomly receiving the error below in my PHP backend job and PHP web page logs. I have an app server which runs the PHP backend jobs, and PHP web servers; both connect to the same database server. I am using the PHP mysqli object-oriented library for connecting to the database, and I have set max_connections to 750 in my.cnf. I don't see anywhere near that many connections being reached.
PHP Warning: mysqli::mysqli(): (HY000/2003): Can't connect to MySQL server on '77.777.120.81' (99) in /usr/local/dev/classes/Admin.php on line 15
Failed to connect to MySQL: Can't connect to MySQL server on '77.777.120.81' (99)
As described excellently in this Percona Database Performance Blog article, your problem is that your application cannot open another connection to the MySQL server: you are running out of local TCP ports. As a solution, I would propose tweaking the following TCP parameter settings.
tcp_tw_reuse (Boolean; default: disabled; since Linux 2.4.19/2.6)
Allow to reuse TIME_WAIT sockets for new connections when it
is safe from protocol viewpoint. It should not be changed
without advice/request of technical experts.
It is possible to force the kernel to reuse a connection hanging in the TIME_WAIT state by setting /proc/sys/net/ipv4/tcp_tw_reuse to 1. What happens in practice is that you'll keep seeing the closed connections hanging in TIME_WAIT until they either expire or a new connection is requested. In the latter case, the connection will be "revived".
tcp_tw_recycle (Boolean; default: disabled; since Linux 2.4)
Enable fast recycling of TIME_WAIT sockets. Enabling this
option is not recommended since this causes problems when
working with NAT (Network Address Translation).
When you enable /proc/sys/net/ipv4/tcp_tw_recycle, closed connections will not show under TIME_WAIT anymore; they disappear from netstat altogether. But as soon as you open a new connection (within the 60-second mark) it will recycle one of those. However, everyone writing about this alternative seems to advise against its use. Bottom line: it's preferable to reuse a connection rather than recycle it.
tcp_max_tw_buckets (integer; default: see below; since Linux 2.4)
The maximum number of sockets in TIME_WAIT state allowed in
the system. This limit exists only to prevent simple denial-
of-service attacks. The default value of NR_FILE*2 is
adjusted depending on the memory in the system. If this
number is exceeded, the socket is closed and a warning is
printed.
When it comes to connecting to database servers, many applications choose to open a new connection for a single request only, closing it right after the request is processed. Even though the connection is closed by the client (application), the local port it was using is not immediately released by the OS for reuse by another connection: it will sit in the TIME_WAIT state for (usually) 60 seconds, and this value cannot be easily changed as it is hard-coded in the kernel.
The tcp_max_tw_buckets parameter rules how many connections can remain in the TIME_WAIT state concurrently: the kernel will simply kill connections hanging in that state above this number. For example, in a scenario where the server has a TCP port range composed of only 6 ports, if /proc/sys/net/ipv4/tcp_max_tw_buckets is set to 5 and you open 6 concurrent connections to MySQL and then immediately close all 6, you would find only 5 of them hanging in the TIME_WAIT state: as with tcp_tw_recycle, one of them would simply disappear from the netstat output. This allows a new connection to be opened immediately, whereas without the limit it would have to wait for one of the connections in TIME_WAIT to expire and release its local port. The secret here, then, is to find a compromise between the number of available network ports and the number of connections we allow to remain in the TIME_WAIT state. The default value of this setting is 65536, which means that by default the system allows all possible connections to go through the TIME_WAIT state when closed.
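Independently of the kernel tuning, you can also reduce how fast ports are consumed by reusing one connection per PHP process instead of opening one per query. A minimal sketch (the class name, host and credentials are placeholders, not from the question):
<?php
// Reuse a single mysqli connection per PHP process instead of
// opening and closing one for every query; fewer connects means
// fewer sockets left behind in TIME_WAIT.
class Db
{
    private static $conn = null;

    public static function get()
    {
        if (self::$conn === null) {
            // Placeholder host and credentials -- replace with your own.
            self::$conn = new mysqli('db.example.com', 'user', 'secret', 'mydb');
            if (self::$conn->connect_errno) {
                throw new RuntimeException(self::$conn->connect_error);
            }
        }
        return self::$conn;
    }
}

// Every call site now shares the same TCP connection:
$result = Db::get()->query('SELECT 1');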
PS: There are more possible solutions to your problem; read the full article for a detailed description of the problem.
Update 1:
tcp_tw_reuse looks like the better solution. Here is a description of why:
tcp_tw_reuse vs tcp_tw_recycle : Which to use (or both)?
Original answer:
MySQL error (99) means that you are running out of local TCP ports.
Enabling TCP recycle should fix it:
echo 1 >/proc/sys/net/ipv4/tcp_tw_recycle
Credits.
Related
I have a site that is about 6 years old, and in the last 6 months, we have been getting the same issue over and over with maxing out the mysql connections.
Now every couple of days the site becomes unavailable because there are no sql connections left. As it's on a shared server, I need to get the host company to flush the connections.
Here's what I tried:
Made sure every page now has a close on it
Upgraded to PHP7 and converted to mysqli
Pulled my hair out
Everywhere I look, it says not to use persistent connections (I am not explicitly using them) and that PHP will automatically release connections after the page completes. Hmmm...
The host company is usually really helpful, but not over this. So I have 2 questions...
What is causing the connections to not close either on my mysqli_close() or on PHP exit? Perhaps pages are aborting or too slow and failing?
Why don't the connections close themselves? It can be hours later when I report it to the host company and they flush the connections. Why are they not being tidied up? In the meantime the site is unavailable...
Please help..
Cris.
Ah... I have just read the host company's responses more carefully... (it's been a hectic few days). They say: "The issue is that the database server on the shared hosting platform is designed to lock the database once it hits its maximum allowed connections, so it never closes any connections once it hits 25 simultaneous connections."
So the question is: why am I hitting 25 connections? Is it that I'm holding connections open for slow-loading pages and so limiting the number available?
Whenever you open a mysqli connection
$mysqli = connect_db();
you have to explicitly close it, or connections linger around and the MySQL server eventually becomes unhappy:
$mysqli->close();
While explicitly closing open connections and freeing result sets is optional, doing so is recommended. This will immediately return resources to PHP and MySQL, which can improve performance. http://php.net/manual/en/mysqli.close.php
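For illustration, the full open/use/free/close pattern might look like this (a minimal sketch; host, credentials and table are placeholders):
<?php
$mysqli = new mysqli('localhost', 'user', 'secret', 'mydb');
if ($mysqli->connect_errno) {
    die('Failed to connect to MySQL: ' . $mysqli->connect_error);
}

$result = $mysqli->query('SELECT id, name FROM users');
while ($row = $result->fetch_assoc()) {
    echo $row['id'], ' ', $row['name'], "\n";
}

$result->free();   // hand the result set memory back immediately
$mysqli->close();  // hand the connection back to MySQL immediately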
I use Ratchet/React. If I have fewer than 1000 connections it works fine, but when the number of connections grows, the websockets close automatically right after connecting.
What is the reason?
cat /proc/sys/fs/file-nr
5696 0 815941
open files (-n) 16384
cat /proc/sys/fs/file-max
815941
On socketo.me this is addressed in the Deployment tab.
A Unix philosophy is "everything is a file". This means each user connecting to your WebSocket application is represented as a file somewhere. A security feature of every Unix based OS is to limit the number of file descriptors one running application may have open at a time. On many systems this default is 1024. This would mean if you had 1024 users currently connected to your WebSocket server anyone else attempting to connect would fail to do so.
They also suggest changing some minor configurations to allow more connections. If the problem is not solved, you could try using libevent or disabling XDebug, although that might not be necessary.
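If you prefer to check the descriptor limit from inside PHP rather than from the shell, something like this should work (a small sketch; requires the POSIX extension):
<?php
// Each connected WebSocket client consumes one file descriptor,
// so the per-process 'openfiles' limit caps concurrent users.
$limits = posix_getrlimit();
echo 'soft openfiles: ', $limits['soft openfiles'], "\n";
echo 'hard openfiles: ', $limits['hard openfiles'], "\n";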
I have two servers (main and database). If too many accesses are made to the MySQL database from the main server, I get a "Can't create TCP/IP socket (105)" error. I have tried activating/deactivating a persistent PDO connection and setting the max_connections parameter very high, but that does not help. What causes this error?
It sounds like your web server's ("main" server's) TCP stack is running out of resources.
Some things to try:
Configure your web server to restrict the number of simultaneously running client connections. In Apache this is the MaxClients parameter (http://httpd.apache.org/docs/2.0/mod/mpm_common.html#maxclients). What happens when the limit is reached? Other connection requests are held in the connect/listen queue.
Check your PHP code to make sure you're correctly releasing your database resources. With MySQL, it's necessary to actually retrieve your result sets; some PHP code does a SELECT and then just looks at the rowCount() method.
Make sure you aren't constructing PDO objects in a loop (see the sketch after this list).
Use the netstat command to figure out who's hogging ports.
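To illustrate the two PDO points above, a hedged sketch of what correct usage looks like (connection and statement created once, result sets fully retrieved; all names are placeholders):
<?php
// Create the PDO object once, outside any loop.
$pdo = new PDO('mysql:host=db.example.com;dbname=mydb', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$userIds = array(1, 2, 3); // example data
$stmt = $pdo->prepare('SELECT name FROM users WHERE id = ?');
foreach ($userIds as $id) {
    $stmt->execute(array($id));
    $rows = $stmt->fetchAll(); // actually retrieve the result set
    $stmt->closeCursor();      // release it before the next execute
}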
Try checking the max_connect_errors parameter. Most likely your host is under attack, or some badly designed application could not connect and reached the limit of attempts.
Oh, and don't forget to restart mysqld afterwards.
Hope it helps!
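To inspect the current limit and unblock hosts at runtime, a hedged mysqli sketch (credentials are placeholders; FLUSH HOSTS needs the RELOAD privilege):
<?php
$mysqli = new mysqli('db.example.com', 'admin', 'secret');
if ($mysqli->connect_errno) {
    die($mysqli->connect_error);
}

// Show how many failed connection attempts per host are tolerated.
$row = $mysqli->query("SHOW VARIABLES LIKE 'max_connect_errors'")->fetch_row();
echo $row[0], ' = ', $row[1], "\n";

// Clear the host cache so blocked hosts may connect again.
$mysqli->query('FLUSH HOSTS');
$mysqli->close();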
In a load test of our PHP-based web application we can easily reach our DB's hard limit of 150 max connections. We run Kohana with ORM to manage the DB connections.
This causes connection exceptions (and thus failed transactions); mysql_pconnect seems to perform even worse.
We're looking for a solution to have graceful degradation under load. Options considered:
1. A DB connection pool (uh, that's not possible with PHP, right?)
2. Re-try a failed connection when the failure was due to max connections being reached
Option 2 seems logical, but Kohana's ORM manages the DB connection process. Can we configure this somehow?
Is there something I'm not thinking of?
EDIT
This is an Amazon AWS RDS database instance, Amazon sets the 150 limit for me, and the server is most certainly configured correctly. I just want to ensure graceful degradation under load with whichever database I'm using. Clearly I can always upgrade the DB and have a higher connection limit, but I want to guard against a failure situation in case we do hit our limit unexpectedly. Graceful degradation under load.
When you say load testing, I assume you are pushing roughly 150 concurrent requests, not hitting the connection limit because you make multiple connections within the same request. If so, check out mysql_pconnect. To enable it in Kohana, simply set persistent = true in the config/database file for your connections.
If that doesn't work, then you'll have to find an Amazon product that allows more connections since PHP does not share resources between threads.
This answers your question about PHP database connection pooling.
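Regarding option 2 from the question (retrying a failed connection): even if Kohana's ORM does not expose this, you can wrap the connect call yourself. A hedged sketch with plain mysqli, assuming mysqli's traditional warning-based error mode (MySQL error 1040 is "Too many connections"):
<?php
// Retry the connection with exponential backoff while MySQL
// keeps answering 1040 "Too many connections".
function connect_with_retry($host, $user, $pass, $db, $maxAttempts = 5)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        $mysqli = @new mysqli($host, $user, $pass, $db);
        if (!$mysqli->connect_errno) {
            return $mysqli; // connected
        }
        if ($mysqli->connect_errno != 1040 || $attempt == $maxAttempts) {
            throw new RuntimeException($mysqli->connect_error, $mysqli->connect_errno);
        }
        usleep(100000 * pow(2, $attempt - 1)); // 0.1s, 0.2s, 0.4s, ...
    }
}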
If the limit is 150 connections (the default for max_connections is 151), you are most likely running mysql without a config file.
You will need to create a config file to raise that number
Create /etc/my.cnf and put in these two lines
[mysqld]
max_connections=300
You do not have to restart mysql (though you could if you wish);
you can just run this MySQL command to raise it dynamically:
SET GLOBAL max_connections = 300;
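To confirm the change took effect, you can check the running value, e.g. from PHP (placeholder credentials):
<?php
$mysqli = new mysqli('localhost', 'root', 'secret');
$row = $mysqli->query("SHOW VARIABLES LIKE 'max_connections'")->fetch_row();
echo $row[0], ' = ', $row[1], "\n"; // expect: max_connections = 300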
UPDATE 2012-04-06 12:39 EDT
Try using mysql_pconnect instead of mysql_connect. If Kohana can be configured to use mysql_pconnect, you are good to go.
For about 2 weeks I have been dealing with one of the weirdest problems in a LAMP stack.
Long story short: randomly, the connection to the MySQL server fails with the error message:
Warning: mysqli::real_connect(): (HY000/2002): Cannot assign requested address in ..
MySQL is on a different "box", hosted at Rackspace Cloud.
Today we downgraded its version to:
Ver 14.14 Distrib 5.1.42, for debian-linux-gnu (x86_64)
The DB server is pretty busy, dealing with Queries per second avg: 5327.957 according to its status variables.
MySQL runs with log-warnings=9, but no warnings for refused connections are logged.
Both the site and the gearman worker scripts fail with that error at, let's say, 1% probability.
Server load does NOT seem to be a factor; we monitor CPU load, IO load and MySQL load.
The maximum number of DB connections (max_connections) is set to 200, but we have never dealt with more than 100 simultaneous connections to the database.
It happens with and without the firewall software.
I suspect a TCP networking problem rather than a PHP/MySQL configuration problem.
Can anyone give me a clue how to find it?
UPDATE:
The connection code is:
$this->_mysqli = mysqli_init();
$this->_mysqli->options(MYSQLI_OPT_CONNECT_TIMEOUT, 120);
$this->_mysqli->real_connect($dbHost, $dbUserName, $dbPassword, $dbName);
if (!is_null($this->_mysqli->connect_error)) {
    $ping = $this->_mysqli->ping();
    if (!$ping) {
        $error = 'HOST: {' . $dbHost . '};MESSAGE: ' . $this->_mysqli->connect_error . "\n";
        DataStoreException::raiseHostUnreachable($error);
    }
}
I had this problem and solved it using persistent connection mode, which can be activated in mysqli by prefixing the database hostname with 'p:':
$link = mysqli_connect('p:localhost', 'fake_user', 'my_password', 'my_db');
From:
http://php.net/manual/en/mysqli.persistconns.php :
The idea behind persistent connections is that a connection between a
client process and a database can be reused by a client process,
rather than being created and destroyed multiple times. This reduces
the overhead of creating fresh connections every time one is required,
as unused connections are cached and ready to be reused.
...
To open a persistent
connection you must prepend p: to the hostname when connecting.
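The same works with the object-oriented style used in the question (a minimal sketch; host and credentials are placeholders):
<?php
// The 'p:' prefix asks mysqli for a persistent connection.
$mysqli = new mysqli('p:db.example.com', 'fake_user', 'my_password', 'my_db');
if ($mysqli->connect_errno) {
    die('Failed to connect: ' . $mysqli->connect_error);
}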
MySQL: Using giant number of connections
What are the dangers of frequent connects?
It works well, with the exception of some extreme cases. If you get hundreds of connects per second from the same box, you may run out of local port numbers. Ways to fix this: decrease /proc/sys/net/ipv4/tcp_fin_timeout on Linux (this bends the TCP/IP standard, but you might not care on your local network), or increase /proc/sys/net/ipv4/ip_local_port_range on the client. Other OSes have similar settings. You may also use more web boxes or multiple IPs for your same database host to work around the problem. I've really seen this in production.
Some background about this problem:
A TCP/IP connection is identified by localip:localport remoteip:remoteport. We have the MySQL IP and port as well as the client IP fixed in this case, so we can only vary the local port, which has a finite range. Note that even after you close a connection, the TCP/IP stack has to keep the port reserved for some time; this is where tcp_fin_timeout comes in.
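You can estimate the resulting ceiling on new connections per second from those two settings. A back-of-the-envelope sketch (paths are standard Linux; it adopts the thread's simplification of using tcp_fin_timeout as the port hold time):
<?php
// Rough ceiling on outbound connects/sec to a single ip:port target:
// usable ephemeral ports divided by how long a closed socket keeps
// its local port reserved.
$range = trim(file_get_contents('/proc/sys/net/ipv4/ip_local_port_range'));
list($low, $high) = array_map('intval', preg_split('/\s+/', $range));
$hold = (int) trim(file_get_contents('/proc/sys/net/ipv4/tcp_fin_timeout'));

$ports = $high - $low + 1;
// e.g. with the common default range 32768-60999 and a 60 s hold,
// this is roughly 28232 / 60, about 470 new connections per second.
printf("%d ports / %d s = about %d new connections/sec\n",
    $ports, $hold, (int) ($ports / max(1, $hold)));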
With Vicidial I have run into the same problem frequently. Due to the kind of programming used, new MySQL connections have to be established (very) frequently from a number of Vicidial components; we have systems hammering the db server with over 10000 connections per second, most of which are serviced within a few ms and closed within a second or less. From experience I can tell you that in a local network with close to no lost packets, tcp_fin_timeout can be reduced all the way down to 3 with no problems showing up.
Typical Linux commands to diagnose whether connections waiting to be closed are your problem:
netstat -anlp | grep :3306 | grep TIME_WAIT -wc
which will show you the number of connections that are waiting to be closed completely.
netstat -nat | awk '{print $5}' | cut -d ":" -f1 | sort | uniq -c | sort -n
which will show the connections per connected host, allowing you to identify which other host is flooding your system if there are multiple candidates.
To test the fix you can just run:
cat /proc/sys/net/ipv4/tcp_fin_timeout
echo "3" > /proc/sys/net/ipv4/tcp_fin_timeout
which will temporarily set tcp_fin_timeout to 3 seconds and tell you how many seconds it was set to before, so you can revert to the old value after testing.
As a permanent fix I would suggest you add the following line to /etc/sysctl.conf
net.ipv4.tcp_fin_timeout=3
Within a good local network this should not cause any trouble; if you do run into problems, e.g. because of packet loss, you can try
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_tw_recycle=0
net.ipv4.tcp_fin_timeout=10
which allows more time for the connection to close and tries to reuse the same ip:port combinations for new connections to the same host:service combination.
OR
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_fin_timeout=10
which will even more aggressively try to reuse connections; this can, however, create new problems with other applications, for example with your webserver. So you should try the simple solution first: in most cases it will already fix your problem without any bad side effects!
Good Luck!
Vicidial servers regularly require increasing the connection limit in MySQL. Many installations (and we've seen and worked on a lot of them) have had to do this by modifying the limit.
Additionally, when the problem turns out to be networking related, there have been reports of nf_conntrack_max requiring an increase:
/sbin/sysctl -w net.netfilter.nf_conntrack_max=196608
Also note that Vicidial has some specific suggested settings, and even some enterprise settings, for MySQL configuration. Have a look at my-bigvici.cnf in /usr/src/astguiclient/conf for some configuration ideas that may open your MySQL server up a bit.
So far, no problems have resulted from increasing connection limits, just additional resources used. Since the purpose of the server is to make this application work, dedicating resources to this application does not seem like a problem. LOL
We had the same problem. Although the tcp_fin_timeout and ip_local_port_range solutions worked, the real problem was a poorly written PHP script which created a new connection for almost every second query it made to the database. Rewriting the script to connect just once solved all the trouble.
Please be aware that lowering the tcp_fin_timeout value may be dangerous, as some code may depend on the DB connection still being there some time after it was opened. It's more of a dirty duct-tape-and-bubble-gum patch than a real solution.