PHP-MySQLi connection randomly fails with "Cannot assign requested address"

For about two weeks I've been dealing with one of the weirdest problems I've seen in a LAMP stack.
Long story short: connections to the MySQL server randomly fail with the error message:
Warning: mysqli::real_connect(): (HY000/2002): Cannot assign requested address in ..
MySQL is on a different box, hosted at Rackspace Cloud.
Today we downgraded its version to
Ver 14.14 Distrib 5.1.42, for debian-linux-gnu (x86_64).
The DB server is pretty busy, handling Queries per second avg: 5327.957 according to its status variables.
MySQL runs with log-warnings=9, but no warnings about refused connections are logged.
Both the site and the Gearman worker scripts fail with that error at, let's say, 1% probability.
Server load does not seem to be a factor according to our monitoring (CPU load, IO load, MySQL load).
The maximum number of DB connections (max_connections) is set to 200, but we have never seen more than 100 simultaneous connections to the database.
It happens with and without the firewall software.
I suspect a TCP networking problem rather than a PHP/MySQL configuration problem.
Can anyone give me a clue how to track it down?
UPDATE:
The connection code is:
$this->_mysqli = mysqli_init();
$this->_mysqli->options(MYSQLI_OPT_CONNECT_TIMEOUT, 120);
$this->_mysqli->real_connect($dbHost, $dbUserName, $dbPassword, $dbName);
if (!is_null($this->_mysqli->connect_error)) {
    $ping = $this->_mysqli->ping();
    if (!$ping) {
        $error = 'HOST: {' . $dbHost . '};MESSAGE: ' . $this->_mysqli->connect_error . "\n";
        DataStoreException::raiseHostUnreachable($error);
    }
}

I had this problem and solved it by using persistent connection mode, which can be activated in mysqli by prefixing the database hostname with 'p:':
$link = mysqli_connect('p:localhost', 'fake_user', 'my_password', 'my_db');
From:
http://php.net/manual/en/mysqli.persistconns.php :
The idea behind persistent connections is that a connection between a
client process and a database can be reused by a client process,
rather than being created and destroyed multiple times. This reduces
the overhead of creating fresh connections every time one is required,
as unused connections are cached and ready to be reused.
...
To open a persistent
connection you must prepend p: to the hostname when connecting.
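Applied to the connection code from the question, that is a one-line change (a sketch reusing the same variables as above):
$this->_mysqli = mysqli_init();
$this->_mysqli->options(MYSQLI_OPT_CONNECT_TIMEOUT, 120);
// Prepending "p:" makes mysqli reuse a cached connection instead of opening
// a fresh TCP socket (and consuming a local port) on every request.
$this->_mysqli->real_connect('p:' . $dbHost, $dbUserName, $dbPassword, $dbName);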

MySQL: Using giant number of connections
What are the dangers of frequent connects?
It works well, with the exception of some extreme cases. If you get hundreds of connects per second from the same box, you may run out of local port numbers. The fix could be to decrease "/proc/sys/net/ipv4/tcp_fin_timeout" on Linux (this bends the TCP/IP standard, but you might not care on your local network), or to increase "/proc/sys/net/ipv4/ip_local_port_range" on the client. Other OSes have similar settings. You can also use more web boxes, or multiple IPs for the same database host, to work around the problem. I've really seen this happen in production.
Some background about this problem:
A TCP/IP connection is identified by the tuple local_ip:local_port, remote_ip:remote_port. The MySQL IP and port, as well as the client IP, are fixed in this case, so we can only vary the local port, which has a finite range. Note that even after you close a connection, the TCP/IP stack has to keep the port reserved for some time; this is where tcp_fin_timeout comes in.
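To put rough numbers on this (a back-of-the-envelope sketch, assuming the common Linux default ephemeral port range of 32768-61000 and the usual 60-second TIME_WAIT):
$localPorts = 61000 - 32768; // about 28k usable local ports (typical Linux default)
$timeWait   = 60;            // seconds a closed socket lingers in TIME_WAIT
// Each short-lived connect ties up one local port for $timeWait seconds, so the
// sustainable rate of new connections to a single remote ip:port is roughly:
echo floor($localPorts / $timeWait) . " connects/sec\n"; // about 470
This is why a box doing hundreds or thousands of connects per second can hit the wall even though the database itself is far from overloaded.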

With Vicidial I have run into the same problem frequently. Due to the kind of programming used, new MySQL connections have to be established (very) frequently from a number of Vicidial components; we have systems hammering the DB server with over 10000 connections per second, most of which are serviced within a few ms and closed within a second or less. From experience I can tell you that in a local network with close to no lost packets, tcp_fin_timeout can be reduced all the way down to 3 with no problems showing up.
Typical Linux commands to diagnose whether connections waiting to be closed are your problem:
netstat -anlp | grep :3306 | grep TIME_WAIT -wc
which will show you the number of connections that are waiting to be closed completely.
netstat -nat | awk '{print $5}' | cut -d ":" -f1 | sort | uniq -c | sort -n
which will show the number of connections per connected host, allowing you to identify which host is flooding your system if there are multiple candidates.
To test the fix you can just
cat /proc/sys/net/ipv4/tcp_fin_timeout
echo "3" > /proc/sys/net/ipv4/tcp_fin_timeout
The first command shows the current value in seconds (so you can revert after testing); the second temporarily sets tcp_fin_timeout to 3 seconds.
As a permanent fix, I would suggest adding the following line to /etc/sysctl.conf:
net.ipv4.tcp_fin_timeout=3
Within a good local network this should not cause any trouble; if you do run into problems, e.g. because of packet loss, you can try:
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_tw_recycle=0
net.ipv4.tcp_fin_timeout=10
Which allows more time for connections to close and tries to reuse the same ip:port combinations for new connections to the same host:service combination.
OR
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_fin_timeout=10
Which will even more aggressively try to reuse connections. This can, however, create new problems with other applications, for example with your web server. So try the simple solution first; in most cases it will already fix your problem without any bad side effects!
Good Luck!

Vicidial servers regularly require increasing the connection limit in MySQL. Many installations (and we've seen and worked on a lot of them) have had to do this by raising max_connections.
Additionally, there have been reports of nf_conntrack_max requiring an increase:
/sbin/sysctl -w net.netfilter.nf_conntrack_max=196608
when the problem turns out to be networking related.
Also note that Vicidial has some specific suggested settings, and even some enterprise settings, for MySQL configuration. Have a look at my-bigvici.cnf in /usr/src/astguiclient/conf for some configuration ideas that may open your MySQL server up a bit.
So far, no problems have resulted from increasing connection limits, just additional resources used. Since the purpose of the server is to make this application work, dedicating resources to this application does not seem like a problem. LOL

We had the same problem. Although the "tcp_fin_timeout" and "ip_local_port_range" solutions worked, the real problem was a poorly written PHP script which created a new connection for almost every query it made to the database. Rewriting the script to connect just once solved all the trouble.
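A minimal sketch of that connect-once pattern (class name and credentials are hypothetical):
class Db {
    private static $conn = null;

    // Lazily open one shared connection and return the same object
    // for every subsequent query in this request.
    public static function get() {
        if (self::$conn === null) {
            self::$conn = new mysqli('db_host', 'db_user', 'db_pass', 'db_name');
        }
        return self::$conn;
    }
}

$result = Db::get()->query('SELECT 1'); // every caller shares one TCP connection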
Please be aware that lowering the "tcp_fin_timeout" value may be dangerous, as some code may depend on the DB connection still being there some time after connecting. It's duct tape and bubble gum rather than a real solution.

Related

mysqli php random connect error

I am receiving the below error randomly in the PHP backend jobs and the PHP web page logs. I have an app server which runs PHP backend jobs, and PHP web servers; both connect to the same database server, using the PHP mysqli object-oriented library. I have set max connections to 750 in my.cnf and don't see that many connections being reached.
PHP Warning: mysqli::mysqli(): (HY000/2003): Can't connect to MySQL server on '77.777.120.81' (99) in /usr/local/dev/classes/Admin.php on line 15
Failed to connect to MySQL: Can't connect to MySQL server on '77.777.120.81' (99)
As described excellently in this Percona Database Performance Blog article, your problem is that your application cannot open another connection to the MySQL server: you are running out of local TCP ports. As a solution I would propose tweaking the TCP parameter settings:
tcp_tw_reuse (Boolean; default: disabled; since Linux 2.4.19/2.6)
Allow to reuse TIME_WAIT sockets for new connections when it
is safe from protocol viewpoint. It should not be changed
without advice/request of technical experts.
It is possible to force the kernel to reuse a connection hanging in TIME_WAIT state by setting /proc/sys/net/ipv4/tcp_tw_reuse to 1. In practice, you'll keep seeing closed connections hanging in TIME_WAIT until either they expire or a new connection is requested. In the latter case, the connection will be "revived".
tcp_tw_recycle (Boolean; default: disabled; since Linux 2.4)
Enable fast recycling of TIME_WAIT sockets. Enabling this
option is not recommended since this causes problems when
working with NAT (Network Address Translation).
When you enable /proc/sys/net/ipv4/tcp_tw_recycle, closed connections no longer show under TIME_WAIT; they disappear from netstat altogether. But as soon as you open a new connection (within the 60-second mark), it will recycle one of those. Everyone writing about this alternative seems to advise against its use, though. The bottom line is: it's preferable to reuse a connection than to recycle it.
tcp_max_tw_buckets (integer; default: see below; since Linux 2.4)
The maximum number of sockets in TIME_WAIT state allowed in
the system. This limit exists only to prevent simple denial-
of-service attacks. The default value of NR_FILE*2 is
adjusted depending on the memory in the system. If this
number is exceeded, the socket is closed and a warning is
printed.
This parameter rules how many connections can remain in TIME_WAIT state concurrently: the kernel will simply kill connections hanging in that state beyond this number. For example, in a scenario where the server's TCP port range is composed of only 6 ports, if /proc/sys/net/ipv4/tcp_max_tw_buckets is set to 5, then if you open 6 concurrent connections to MySQL and immediately close all 6, you will find only 5 of them hanging in TIME_WAIT state; as with tcp_tw_recycle, one of them simply disappears from netstat output. This allows you to immediately open a new connection without having to wait a minute for a local port.
When it comes to connecting to database servers, many applications choose to open a new connection for a single request only, closing it right after the request is processed. Even though the connection is closed by the client (application), the local port it was using is not immediately released by the OS for reuse by another connection: it will sit in TIME_WAIT state for (usually) 60 seconds, a value that cannot be easily changed since it is hard-coded in the kernel.
However, a further connection won't be able to open until one of the connections in TIME_WAIT expires and releases the local port it was using. The secret here, then, is to find a compromise between the number of available network ports and the number of connections we allow to remain in TIME_WAIT state. The default value of this setting is 65536, which means by default the system allows all possible connections to go through the TIME_WAIT state when closed.
PS: There are more possible solutions to your problem; read the full article for a detailed description of the problem.
Update 1:
tcp_tw_reuse looks like the better solution. Here is a description of why:
tcp_tw_reuse vs tcp_tw_recycle : Which to use (or both)?
Original answer:
MySQL error (99) means that you are running out of TCP ports.
Enabling TCP recycle should fix it:
echo 1 >/proc/sys/net/ipv4/tcp_tw_recycle
Credits.

How should I handle a "too many connections" issue with mysql?

The web startup I'm working at gets a spike in the number of concurrent web users, from 5,000 on a normal day to 10,000 on weekends. This Saturday the traffic was so high that we started getting a "too many connections" error intermittently. Our CTO fixed this by simply increasing the max_connections value on the database servers. I want to know if using persistent connections is a better solution here.
i.e. instead of using:
$db = new mysqli('db_server_ip', 'db_user', 'db_user_pass', 'db_name');
We use:
$db = new mysqli('p:db_server_ip', 'db_user', 'db_user_pass', 'db_name');
We're already using multiple MySQL servers and as well as multiple web servers (Apache + mod_php).
You should share the database connection across multiple web requests. Every process running on the application server should get its own MySQL connection, which is kept open as long as the process is running and reused for every web request that comes in.
From the PHP Docs:
Persistent connections are good if the overhead to create a link to your SQL server is high.
And
Note, however, that this can have some drawbacks if you are using a database with connection limits that are exceeded by persistent child connections. If your database has a limit of 16 simultaneous connections, and in the course of a busy server session, 17 child threads attempt to connect, one will not be able to.
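To make the quoted warning concrete, here is a back-of-the-envelope check (the numbers are illustrative, not measurements): with persistent connections, max_connections must cover one link per web-server worker across all boxes.
$webServers    = 4;   // Apache + mod_php boxes
$workersPerBox = 150; // MaxClients / MaxRequestWorkers on each box
$needed = $webServers * $workersPerBox; // one persistent link per worker
echo $needed; // 600 -- max_connections must comfortably exceed this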
Persistent connections aren't the solution to your problem. Your problem is that your burst usage is beyond the limits set in your database configuration, and potentially your infrastructure. What your CTO did, increasing the connection limit, is a good first step. Now you need to monitor the resource utilization on your database servers to make sure they can handle the increased load from additional connections. If they can, you're fine. If you start seeing the database server running out of resources, you'll need to set up additional servers to handle the burst in traffic.
Too Many Connections
Cause
This error is caused by
a lot of simultaneous connections, or
by old connections not being released soon enough
You already did SHOW VARIABLES LIKE "max_connections"; and increased the value.
Permanent Connections
If you use permanent or persistent database connections, you always have to take the MySQL directive wait_timeout into account. Closing won't work, but you can lower the timeout so that used resources become available again faster. Use netstat to find out exactly what's going on, as described here: https://serverfault.com/questions/355750/mysql-lowering-wait-timeout-value-to-lower-number-of-open-connections.
Do not forget to free your result sets, to reduce the waste of DB server resources.
Be advised to use temporary, short-lived connections instead of persistent connections.
Introducing persistence is pretty much against the whole web request-response flow, which is stateless. You know: one pconnect request causes an 8-hour persistent connection to dangle around on the DB server, waiting for the next request, which never comes. Multiply that by the number of users and look at your resources.
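If you do keep persistent connections, a sketch of reining them in from PHP (the 60-second value is illustrative; the connect string reuses the one from the question):
$db = new mysqli('p:db_server_ip', 'db_user', 'db_user_pass', 'db_name');
// Ask the server to drop this connection after 60 idle seconds
// instead of the default 8-hour wait_timeout:
$db->query('SET SESSION wait_timeout = 60');
$res = $db->query('SELECT 1');
$res->free(); // free result sets promptly, as noted above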
Temporary connections
If you use mysql_connect() - do not forget to mysql_close().
Set new_link to false and pass the CLIENT_INTERACTIVE flag.
You might also adjust interactive_timeout, which helps stop old connections from clogging up the works.
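For the legacy ext/mysql API this advice refers to (removed in PHP 7), a sketch would look like this:
// new_link = false reuses an identical already-open link instead of creating
// another one; MYSQL_CLIENT_INTERACTIVE makes interactive_timeout apply.
$link = mysql_connect('db_server_ip', 'db_user', 'db_user_pass',
                      false, MYSQL_CLIENT_INTERACTIVE);
// ... run queries ...
mysql_close($link); // release the connection as soon as you are done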
If the problem persists, scale
If the problem remains, then it's time to scale,
either by adding another DB server and putting a proxy in front
(MySQL works well with HAProxy), or by switching to an automatically scaling cloud service.
I really doubt that your stack is configured correctly.
How can this be a problem when you are already running multiple MySQL servers as well as multiple web servers? Please describe your load-balancing setup.
It sounds like Apache 2.2 + mod_php + MySQL + unknown balancer, right?
Maybe try
Apache 2.4 + mod_proxy_fcgi + PHP 5.5/5.6 (php-fpm) + MySQL (InnoDb) + HAProxy or
Nginx + PHP 5.5/5.6 (php-fpm) + MySQL (InnoDb) + HAProxy.

MySQL database needs a flush tables every now and again. Can I script something to resolve this?

I'm having a problem that I hope someone can help me out with.
Currently, every now and again we receive an error when our scripts (Java and PHP) try to connect to the localhost mysql database.
Host 'myhost' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'.
This issue appears to mainly occur in the early hours of the morning. After a lot of searching to figure out why this may be occurring, I have finally come to the conclusion that it may be because our hosting company runs their backup processes around this time. My theory is that during this backup process (which is also our busiest period) we end up using all of our connections, and so this error occurs.
I have talked to our hosts about changing the times these backups occur, but they have stated that this is not possible, and that these are simply the times the backups must start in order to be finished in time for the day (even though we have informed them that our critical period is at precisely the times the backups occur).
The things I have connecting to the server are:
PHP website
PHP files run via cron jobs
A couple of Java applications that run as socket listeners, listening for incoming port connections and using the MySQL database to check user credentials and outstanding messages.
We typically have anywhere from 300-600 socket connections open at any one time, and the average activity on these is about 1-3 requests per second.
I have also installed monit and munin, with some MySQL plugins, on the server in the hope that they might help auto-resolve this issue; however, they do not seem to.
My questions are:
Is there something I can do to automatically poll the MySQL database, so that if this occurs I can automatically flush the hosts to clear the block?
Is this potentially even related to the server backup? It seems a coincidence that it happens 95% of the time during the period the backups occur.
Any other ideas that might help - links to other websites, or questions I could put to our host.
We are currently running on a PHP Version 5.2.6-1+lenny9 server with Apache.
If any more information is required to help, please let me know. Thanks.
UPDATE:
I am operating on a shared virtual host and am pretty sure I close my website connections, as I have this code in my database class:
function __destruct() {
    @mysql_close($this->link);
}
I'm pretty sure I'm not using persistent connections in my PHP script, as I connect to the DB with the @mysql_connect command.
UPDATE:
So I changed the max_connections limit from 100 to 200, and I changed the MySQL persistent-connections setting from On to Off in php.ini. Now for two nights running the server has gone down, mainly the connection to the MySQL database. I have 1GB of RAM on the server, but it never seems to get close to that. Also, looking at my munin logs, the connections never seem to hit the 200 mark, and yet I get errors in my log files, something like:
SQLException: Too many connections
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
SQLException: null, message from server: "Can't create a new thread (errno 12); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug.
SQLState :: SQLException: HY000, VendorError :: SQLException: 1135
We've had a similar problem with our large e-commerce installation using MySQL as a backend. I'd suggest you alter the max_connections setting of the MySQL instance, then (if necessary) alter the number of file descriptors using ulimit before starting MySQL (we use "ulimit -n 32768" in /etc/init.d/mysql).
It was suggested that I post an answer to this question, although I never really got it sorted.
In the end I implemented a Java connection-pooling class, which enabled me to share connections while maintaining an upper limit on the number of connections. It was also suggested that I increase the RAM and the max connections; I did both of these things, although they were just band-aids for the problem. We also ended up moving hosting providers, as the ones we were with were not very cooperative.
After these minor changes I haven't noticed the issue occur for at least 8 months, which is good enough for me.
Other suggestions over time have been to implement a thread-pooling facility as well, but current demand does not require it.

MongoDB Optimal Performance - How Many Persistent Connections

I have a mongodb server in production serving on an EC2 instance. According to the mongodb official documentation, persistent DB connections should ALWAYS be used in production. I've been experimenting with about 50 persistent connections and was getting frequent connection errors (approx 33% of the time) while testing. I'm using this code:
$pid = 'db_'.rand(1,50);
$mongo = new Mongo("mongodb://{$user}:{$pass}@{$host}", array('persist' => $pid));
Some background on the application, it's a link tracking application that is still ramping up - and is in the range of 500 - 1k writes per hour, nothing too crazy... yet.
I'm wondering if I simply need to allow more persistent connections? How does one determine the right balance of persistent connections versus server resources available?
Thanks in advance everyone.
The persist value is no longer supported as of the most recent driver (1.2.0).
Truth is, it was never really clear what it did in typical Apache+PHP setups. There are several comments on Google Groups and elsewhere asking for detail, but I did not find any evidence that persist or persistent was ever tested in any depth.
Instead, it's all been replaced by connection pooling "out of the box". The connection pooling has obviously been through some changes within the 1.2 line with the addition of the MongoPool class.
There is still no detailed explanation of how the pooling works with Apache, but at least you don't have to worry about persist.
Now despite all of this mess, I have handled 1000 times that traffic on a single MongoDB server via the PHP driver without lots of connection problems.
Are you catching the exceptions?
Can you provide more details about the exact exception?
There may be a code solution.
Are you opening a new connection for each PHP page request, or using a connection pool with 50 persistent connections? If you're opening a new connection each time then you might be quickly running out of resources.
Each connection uses an additional thread on the server, so you could be hitting a limit on the number of threads or network connections; check your server logs (e.g. in /var/log/mongodb) for errors.
If you're using the official MongoDB PHP driver, then as far as I know it should handle connection pooling for you automatically. If you're connecting to Mongo from 50 separate clients, then consider putting a queue in front of Mongo to buffer the writes.
http://php.net/manual/en/mongo.connecting.php
Without persistent connections, 1000 connection opens take approximately 18 seconds to execute; with persistent connections, it takes less than 0.02 seconds.

Debug MySQLs "too many connections"

I'm trying to debug an error I get on a production server. Sometimes MySQL gives up and my web app can't connect to the database (I'm getting the "too many connections" error). The server has a few thousand visitors a day, and during the night I run a few cron jobs which sometimes do some heavy MySQL work (looping through 50,000 rows, inserting and deleting duplicates, etc.).
The server runs both apache and mysql on the same machine
MySQL has a pretty standard configuration (max connections)
The web app is using PHP
How do I debug this issue? Which log files should I read? How do I find the "evil" script? The strange thing is that if I restart the MySQL server, it starts working again.
Edit:
Different apps/scripts use different connectors to the database (mostly mysqli, but also Zend_Db).
First, use innotop (Google for it) to monitor your connections. It's mostly geared toward InnoDB statistics, but it can be set to show all connections, including those not in a transaction.
Otherwise, the following are helpful: Use persistent connections / connection pools in your web apps. Increase your max connections.
It's not necessarily a long-running SQL query.
If you open a connection at the start of a page, it won't be released until the PHP script terminates - even if there is no query running.
You should add some stats to your pages to find out which are the slowest and which are the most-hit. Closing the connection early would help, if possible.
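A sketch of closing early (table and column names are made up): fetch what the page needs, release the connection, then do the slow rendering work.
$db = new mysqli('localhost', 'db_user', 'db_pass', 'db_name');
$res  = $db->query('SELECT id, title FROM posts LIMIT 20');
$rows = $res->fetch_all(MYSQLI_ASSOC); // fetch_all requires mysqlnd
$res->free();
$db->close(); // connection released before the expensive templating below
// ... render the page using $rows; no DB connection is held meanwhile ...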
Try using persistent connections (mysql_pconnect), it will help reduce the server load caused by constantly opening and closing MySQL connections.
The starting point is probably to use mysqladmin processlist to get a list of the processes on the mysql server. The next step depends on what you find.
