I am having a problem that I can't seem to figure out, involving a long PHP script that does not complete due to a database connection failure.
I am using PHP 5.5.25 and MySQL 5.6.23, with the mysqli interface.
The script uses the TCPDF library to create a PDF of a financial report. Overall it runs fine. However, when the data set gets large (the report can iterate over numerous accounts to create a multi-page report covering all the accounts that match the criteria), it fails after about 30 seconds (not exactly 30; sometimes a couple of seconds more, going by timestamps). It runs fine for about 25-35 loops, but more than that causes the problem.
I don't think it's an issue of timing out (although it certainly could be). I have PHP set to fairly generous resource limits for processing this:
max_execution_time = 600
memory_limit = 2048M
The script does hit the DB pretty hard, with hundreds of queries per second. As best as I can tell from some stats from the DB, there are only a couple of active connections at a time, so it does not appear that I am anywhere close to the default limit of 150 max connections.
This is the error I get when it eventually fails with a large data set.
Warning: mysqli::mysqli(): (HY000/2002): Can't assign requested address in...
Fatal error: Database connection failed: Can't assign requested address in...
Does anyone have any suggestions on what may be causing the script to eventually be unable to connect to the DB and fail to complete? I've tried searching for answers, but pretty much everything I have found so far about database connection failures is about not being able to connect at all, rather than losing the ability to connect midway through a large script.
Thanks in advance for any advice.
I don't think it's an issue of timing out
You should know.
It seems strange that the issue is arising so long after the start of execution. Would it have been so hard to check what the timeout is? To try changing it? To add some logging to your code?
The other thing you should be checking is whether the script is opening a single connection and reusing it or constantly opening new connections.
Without seeing the code, it's hard to say for sure, but a single script executing hundreds of queries per second for tens of seconds sounds like the split between SQL and PHP logic has been very poorly thought out.
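For what it's worth, "Can't assign requested address" (2002) usually means the OS has run out of ephemeral ports because a fresh TCP connection is being opened on every iteration, each old one lingering in TIME_WAIT. A minimal sketch of the fix, with placeholder credentials and a made-up account loop:

// Open ONE connection before the loop and reuse it everywhere.
$db = new mysqli('localhost', 'user', 'pass', 'reports');
if ($db->connect_error) {
    die('Connect failed: ' . $db->connect_error);
}
$accounts = range(1, 100); // stand-in for the real account list
foreach ($accounts as $account) {
    // Reuse $db here instead of calling new mysqli() on every pass.
    $stmt = $db->prepare('SELECT balance FROM ledger WHERE account_id = ?');
    $stmt->bind_param('i', $account);
    $stmt->execute();
    $stmt->close();
}
$db->close();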
Related
I have a SOAP service giving me trouble at a client's. One specific call to the API returns the error Fatal error: Uncaught SoapFault exception: [HTTP] Error Fetching http headers
I've read enough to see that sometimes it's connection issues or execution time running out, but I tried extending the run time to the moon to confirm, and it doesn't help.
1) Same service version, two databases: one works, the client's one doesn't.
2) On the client's database, all other calls succeed except the problem one.
3) Maybe the call and database structure is the problem? Nope: debugging the app, the breakpoint gets hit or missed at random depending on how long it takes for the fetch error to drop in.
So it's not a single point of failure in my procedure. I've seen issues in the past where a corrupt DB could potentially cause the connection to it to drop, but I don't know how I could investigate that.
I tried restoring a DB backup from the client's, since restoring sometimes fixes the icky stuff, but no dice. Any suggestions on what to look at next?
So I figured it out: one of the queries done to the database takes too much time and must be dropping the connection.
It doesn't really make sense to me, because if I put the execution time to a million the whole call still never ends, and I can still make it past the query with the debugger before it crashes. So on a logical front I might be misunderstanding the order of operations, error catching, or how a service and SQL queries are handled, but optimizing one of the SQL queries so that it runs instantly has stopped giving me issues.
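In case it helps anyone who can't optimize the query right away: raising the socket read timeout makes SoapClient wait longer for the response headers. A sketch, with a made-up WSDL URL:

// default_socket_timeout governs how long PHP waits for the HTTP response.
ini_set('default_socket_timeout', 600);
$client = new SoapClient('https://example.com/service?wsdl', array(
    'connection_timeout' => 30, // this option only covers establishing the connection
));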
First of all, I already went through some related posts, but without luck.
I followed the solution provided in MySQL server has gone away - in exactly 60 seconds
setting these values at the very beginning:
ini_set('mysql.connect_timeout', 300);
ini_set('default_socket_timeout', 300);
but it seems that the error persists.
The error occurs just before performing a query: the database class used for handling the MySQL operations performs a ping (mysqli_ping) in order to refresh the connection (I guess that's the point of pinging), but at a certain point, ~60 seconds in, it throws this warning:
Warning: mysqli_ping(): MySQL server has gone away in...
Is there something I'm missing?
UPDATED
I figured out where exactly the issue is.
Let me explain my workflow further.
I establish two different DB connections: the first one is just for retrieving data, and the second one is used to insert all the data obtained (row by row). Since the second connection is the one performing operations, I thought it was the one producing the "server has gone away", but it turns out that the error is raised by the idle connection (the first one).
The workaround I made is to close the first connection (since it will no longer be used) just after the data is queried.
The second connection has enough time to not reach a timeout.
The first connection is to a remote database and the second one is made to my local server, so I have complete control over the second one.
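In code, the workaround looks roughly like this (a sketch with placeholder hosts, credentials, and table names, not my actual class):

// Connection 1: remote, read-only. Fetch everything up front, then close it immediately.
$remote = new mysqli('remote.example.com', 'user', 'pass', 'source_db');
$rows = $remote->query('SELECT id, payload FROM source_table')->fetch_all(MYSQLI_ASSOC); // fetch_all needs mysqlnd
$remote->close(); // closed before the slow inserts start, so it can never time out idle

// Connection 2: local. It works row by row and never sits idle long enough to be dropped.
$local = new mysqli('localhost', 'user', 'pass', 'target_db');
$stmt = $local->prepare('INSERT INTO target_table (id, payload) VALUES (?, ?)');
foreach ($rows as $row) {
    $stmt->bind_param('is', $row['id'], $row['payload']);
    $stmt->execute();
}
$stmt->close();
$local->close();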
The weird thing with the connection I make to the remote MySQL server is that when I connect from my PHP script, it reaches the timeout after 60 seconds or so, but if I connect from the console, the connection does not time out. Do you know how I can manage to avoid that timeout (server has gone away)? As I said above, I already have a workaround, but I'd like to know why the connection from PHP times out after ~60 seconds whereas from the console I can stay connected for hours.
Those configs that you're setting change the client side; this problem is on the server side: it's the server that is closing the connection. To change this behaviour you must raise the value of wait_timeout in my.cnf. (That also explains the console difference: the mysql command-line client connects as an interactive client, so the usually much larger interactive_timeout applies to it instead of wait_timeout.)
Sources:
http://dev.mysql.com/doc/refman/5.7/en/gone-away.html
http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_wait_timeout
Just one more thing: changing this may not be the best thing you can do. Try to improve your query first.
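If you can't touch my.cnf (say, on a remote server you don't administer), a per-session alternative is to raise the timeout right after connecting; a sketch with placeholder credentials:

// SET SESSION needs no server restart and no special privilege for the session scope.
$mysqli = new mysqli('remote.example.com', 'user', 'pass', 'db');
$mysqli->query('SET SESSION wait_timeout = 28800'); // seconds; 28800 (8 hours) is the server default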
I'm getting the following errors in my script:
mysqli_connect(): (08004/1040): Too many connections
mysqli_connect(): (HY000/1040): Too many connections
What is the difference and how can I solve this problem?
"Too many connections" indicates, that your script is opening at least more than one connection to the database. Basically, to one server, only one connection is needed. Getting this error is either a misconfiguration of the server (which I assume isn't the case because max connections = zero isn't an option) or some programming errors in your script.
Check for re-openings of your database connections (mysqli_connect). There should only be one per script (!), and usually you should take care to reuse open connections OR close them properly after script execution (mysqli_close).
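A minimal sketch of the one-connection-per-script pattern, with placeholder credentials:

// Create the connection once and hand out the same handle everywhere.
function db() {
    static $conn = null; // survives across calls within a single request
    if ($conn === null) {
        $conn = mysqli_connect('localhost', 'user', 'pass', 'db');
    }
    return $conn;
}

$result = mysqli_query(db(), 'SELECT 1');
mysqli_close(db()); // optional: PHP closes the link at script end anyway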
Steps to resolve that issue:
Check the MySQL connection limit in the configuration file (my.cnf).
Run the command below to check:
mysql -e "show variables like '%connection%';"
You will see something like this:
max_connections = 500
Increase it as needed:
max_connections = 50000
Restart the MySQL service:
$ service mysql restart
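If you prefer to confirm the new limit from PHP rather than the shell, a quick check (placeholder credentials):

$mysqli = new mysqli('localhost', 'user', 'pass', 'db');
$row = $mysqli->query("SHOW VARIABLES LIKE 'max_connections'")->fetch_assoc();
echo $row['Variable_name'] . ' = ' . $row['Value'] . PHP_EOL;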
Now check your website; I hope the error no longer occurs!
Thank You!
While I could not tell you the difference between the two error numbers above, I can tell you what causes this.
Your MySQL database only allows so many connections at the same time. If you connect to MySQL via PHP, you generally open a new connection every time a page on your site loads, so if you've got too much traffic to your site, this can cause the issue.
I think it is pretty common for people to have one connection to their database per page load, and multiple queries for sure. So really what it comes down to is a couple of points (and let me just tell you now, persistent connections will not solve your issue):
1) If you have access to your server's CLI/SSH, try to increase the limit by modifying your MySQL configuration (don't forget to restart the service for the changes to take effect). This will of course consume more system resources on your database server.
2) If you have a lot of AJAX requests or other internal database connections, you should try to get these down to a single script with a single call. Your site may make multiple AJAX calls to various PHP files that pull MySQL data, each using a whole database connection. Instead, create a single PHP file to collect all the data you need on a given page; that script can gather everything while using only one database connection (see the sketch below).
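A sketch of that consolidation, with made-up endpoint and table names:

// Before: the page fires getUser.php, getOrders.php, getStats.php = 3 connections.
// After: one endpoint, one connection, one JSON response (all names hypothetical).
$db = new mysqli('localhost', 'user', 'pass', 'db'); // placeholder credentials
$data = array(
    'user'   => $db->query('SELECT name FROM users WHERE id = 1')->fetch_assoc(),
    'orders' => $db->query('SELECT COUNT(*) AS n FROM orders')->fetch_assoc(),
);
$db->close();
header('Content-Type: application/json');
echo json_encode($data);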
As far as the difference between the two, I believe HY000 is a PDO exception, whereas 08004 is actually coming from MySQL. Error 1040 is the code for "Too Many Connections".
You should also check whether your disk is full; this can cause the same error:
df -h
will show you the remaining space on each partition; you probably have to check the root partition / (or /var/ in case you have an extra partition for it):
df -h /
I have a lot of cronjobs (50-100) which all start at the same time (refreshing data for every single client). There are many different jobs to do in a single hour, so I can't stagger the job times. And I decided not to use a loop, but single jobs, so that possible errors don't affect the refreshes of the other clients.
At first all was ok - but now - having about 100 clients - nearly 30% of the jobs end up with a
A Database Error Occurred
Unable to connect to your database server using the provided settings.
Filename: core/Loader.php
Line Number: 346
But the max connections of MySQL are NOT reached. I've already tried to switch between connect and pconnect, but that has no effect.
Any idea where the bottleneck is? And how to avoid this?
The default max connections is set to 150. If you have 100 clients and 50 to 100 cronjobs that do database queries, I come up with 100*100 = at least 10,000 connections.
If you have 10,000 connects at the same time, you can get weird errors, for example timeouts or concurrency problems (one script locks a table and another tries to access it; this should not give an "unable to connect" error, though in some cases it does). You can try to bundle the queries.
What happens if you raise the max connections to 400 or so? Does it reduce the number of errors?
A workaround might be that when a job fails, you wait a second or so and try it again. More stable would be the use of a queuing mechanism like Gearman, which helps to spread the load.
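A minimal retry sketch (the attempt count and delays are arbitrary choices, and the credentials are placeholders):

$conn = false;
for ($attempt = 1; $attempt <= 5; $attempt++) {
    $conn = @mysqli_connect('localhost', 'user', 'pass', 'db');
    if ($conn !== false) {
        break; // connected, stop retrying
    }
    sleep($attempt); // back off a little longer each time
}
if ($conn === false) {
    error_log('Giving up after 5 attempts: ' . mysqli_connect_error());
    exit(1);
}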
Edit:
Codeigniter closes connections for you, but you can do it also manually by using
$this->db->close();
We have an application that is comprised of a couple of off-the-shelf PHP applications (ExpressionEngine and XCart) as well as our own custom code.
I did not do the actual analysis, so I don't know precisely how it was determined, but I am not surprised to hear that too many MySQL connections are being left unclosed. (I am not surprised because I have been seeing significant memory leakage on our dev server, where over the course of a day or two, starting from 100MB upon initial boot, the entire gig of RAM gets consumed, and very little of it is cached.)
So, how do we go about determining precisely which PHP code is the culprit? I've got prior experience with XDebug, and have suggested that, once we've gotten our separate staging environment reasonably stable, we retrofit XDebug on dev and use it to do some analysis. Is this reasonable, and/or does anybody else have more specific and/or additional suggestions?
You can use the
SHOW PROCESSLIST
SQL command to see what processes are running. That will tell you the username, host, database, etc that are in use by each process. That should give you some idea what's going on, especially if you have a number of databases being accessed.
More here: https://dev.mysql.com/doc/refman/8.0/en/show-processlist.html
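If it's easier to run from PHP than from a MySQL shell, the same command works through mysqli; a sketch with placeholder credentials:

$mysqli = new mysqli('localhost', 'user', 'pass', 'db');
$result = $mysqli->query('SHOW PROCESSLIST');
while ($row = $result->fetch_assoc()) {
    // Columns: Id, User, Host, db, Command, Time, State, Info
    echo $row['Id'] . ' ' . $row['User'] . '@' . $row['Host'] . ' ' . $row['Time'] . "s\n";
}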
This should not be caused by PHP code, because MySQL connections are supposed to be closed automatically. Cf. http://www.php.net/manual/function.mysql-connect.php:
"The link to the server will be closed as soon as the execution of the script ends, unless it's closed earlier by explicitly calling mysql_close()."
Some suggestions :
does your developer technically have direct access to your production MySQL server? If yes, then they probably just leave their MySQL manager open :)
do you have some daily batch process? If yes, maybe there are some zombie processes in memory
PHP automatically closes any MySQL connections when the page ends. The only reason a PHP web application would have too many unclosed MySQL connections is either 1) you're using connection pooling, or 2) there's a bug in the MySQL server or the connector.
But if you really want to look at your code to find where it's connecting, see http://xdebug.org/docs/profiler
As others said, PHP terminates MySQL connections created through mysql_connect or the mysqli/PDO equivalents.
However, you can create persistent connections with mysql_pconnect. It will look for an existing open connection and use it; if it can't find one, it will open a new one. If you had a lot of requests at once, that could have caused loads of connections to open and stay open.
You could lower the maximum number of connections, or lower the timeout for persistent connections. See the comments at the bottom of the man page for more details.
I used to run a script that polled SHOW STATUS for thread count and I noticed that using mysql_pconnect always encouraged high numbers of threads. I found that very disconcerting because then I couldn't tell when my connection rate was actually dropping. So I made sure to centralize all the places where mysql_connect() was called and eliminate mysql_pconnect().
The next thing I did was look at the connection timeouts and adjust them to more like 30 seconds. So I adjusted my my.cnf with

connect-timeout=30

so I could actually see the number of connections drop off. The number of connections you need open depends on how many Apache workers you're running times the number of database connections each one opens; for example, 150 Apache workers each holding a single connection already calls for max_connections of at least 150.
The other thing I started doing was adding a note to my queries in order to spot them in SHOW PROCESSLIST or mytop; I would add a note column to my results like:
$q = "SELECT '".__FILE__.'.'.__LINE__."' as _info, * FROM table ...";
This would show me the file issuing the query when I looked at mytop, and it didn't foil the MySQL query cache like using
/* __FILE__.'.'.__LINE__ */
at the start of my query would.
I suppose another couple of things I can do, with regard to the general memory issue (as opposed to MySQL specifically), and particularly within the context of our own custom code, would be to wrap our code with calls to one or the other of the following PHP built-in functions:
memory_get_usage
memory_get_peak_usage
In particular, since I am currently working on logging from some custom code, I can log the memory usage while I'm at it.
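Something like this, as a sketch (the log destination and format are arbitrary):

// Drop these around suspect code paths to see where the RAM goes.
error_log(sprintf(
    '[%s:%d] mem=%.1fMB peak=%.1fMB',
    __FILE__,
    __LINE__,
    memory_get_usage(true) / 1048576, // true = real size of memory allocated from the OS
    memory_get_peak_usage(true) / 1048576
));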