I regularly have the following error:
PHP Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000] [1129] Host 'MY SERVER' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
It is easy to work around the problem with a scheduled mysqladmin flush-hosts command (via crontab, for example) or by increasing the max_connect_errors system variable, as written here.
But what are "many successive interrupted connection requests", and why is this happening?
I'd rather prevent the problem at its source than keep clearing the block after the fact.
MySQL version: 5.5.12. I'm using Zend Framework 1.11.10 and Doctrine 2.1.6.
There are no mysql_close() or mysqli_close() calls in my PHP code.
max_connect_errors has the default value, 10, and I don't want to increase it yet; first I want to understand why I'm getting the errors. For now I run a cron job every 5 minutes that issues a mysqladmin flush-hosts command.
This behaviour is by design, as a security measure, and is the result of reaching MySQL's max_connect_errors value. Here's a link Oracle provides which details most of the possible causes and solutions.
Ultimately it means that there have been so many successive connection failures that MySQL stops responding to connection attempts from that host.
For now I run a cron job every 5 minutes that issues a mysqladmin flush-hosts command.
As you are reaching this limit so quickly, there are only a few likely culprits:
- The server is not correctly configured to use PDO.
- The running code creates new connections very frequently. This quickly exhausts max_connections, causing all subsequent connection attempts to fail... and thus quickly reaching the max_connect_errors limit.
- The code is hitting an infinite loop or a cascading failure (e.g. pageA calls pageB and pageC, and pageC calls pageA). An obvious possibility, but it must be mentioned.
- PDO is running fine, but some scripts take a long time to run, or never end. The easiest way to catch this is to turn down max_execution_time.
Whatever the case turns out to be, it will likely be difficult to track down:
- Log a stack trace of every MySQL connection attempt to find out which code is responsible (see the sketch below).
- Check the mysql.err log file.
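One way to log those stack traces, sketched below, is to subclass PDO and route every connection through the wrapper. The class name and log path are illustrative assumptions, not from the original setup; you would also need to point your bootstrap (or your framework's connection configuration) at the wrapper class.

// Hypothetical wrapper: use it wherever "new PDO(...)" is called, so every
// connection attempt leaves a stack trace in a log file.
class LoggingPDO extends PDO
{
    public function __construct($dsn, $user = null, $pass = null, array $options = array())
    {
        // Capture who is connecting, and from where, before the attempt
        $e = new Exception();
        error_log(date('c') . " PDO connect attempt:\n" . $e->getTraceAsString() . "\n",
                  3, '/tmp/pdo_connections.log');
        parent::__construct($dsn, $user, $pass, $options);
    }
}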
While PDO does not require you to explicitly close MySQL connections, for cases like this a few practices can prevent this kind of server-admin hunt:
- Always explicitly close MySQL connections.
- Build a simple class to handle all connections: open, return an array of results, close (sketched below). The only time you need to keep a connection open is for cursors (dev.mysql.com/doc/refman/5.0/en/cursors.html).
- Always define connection arguments in one and only one file, included everywhere it is needed.
- Never increase max_execution_time unless you know you need it and you know the server can handle it. If you do need it, explicitly increase the value only for the script that needs it: php.net/manual/en/function.set-time-limit.php
- If you increase max_execution_time, increase max_connections as well.
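A minimal sketch of that "open, query, return array, close" helper; the class name, DSN and credentials are placeholders, not part of the original answer:

// Illustrative one-shot helper: opens a connection, fetches all rows
// into an array, and drops the connection before returning.
class OneShotDb
{
    public static function fetchAll($sql, array $params = array())
    {
        $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'dbuser', 'dbpass',
                       array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
        $stmt = $pdo->prepare($sql);
        $stmt->execute($params);
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
        // PDO has no close() method; releasing the references closes
        // the connection when this method returns.
        $stmt = null;
        $pdo  = null;
        return $rows;
    }
}

Usage: $users = OneShotDb::fetchAll('SELECT * FROM users WHERE id = ?', array(42));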
It means that mysqld has received many connection requests from the given host that were interrupted in the middle. Check out this link from the documentation for more info.
I have a PHP Socket Server that I can connect to via Telnet. Once connected, I am able to send messages in a certain format and they're saved in the database.
What happens is that when the PHP Socket receives a message, it opens a Database connection, executes the query, then closes the connection. When it receives another message, it opens the connection again, executes the query, then closes the connection.
So far, this works when I'm sending messages at intervals of 5-10 minutes. However, when the interval grows to an hour or more, I get a MySQL server has gone away error.
Upon doing some research, the common solution seems to be increasing the wait timeout, which is not an option for me. The PHP socket server is supposed to be open 24/7, and I doubt there's a way to increase the wait timeout to infinity.
The other option is to check in PHP itself whether the MySQL server has gone away and, depending on the result, reset the connection and try to establish it again.
Does anyone know how to do this in PHP? Or if anyone has other methods of keeping a MySQL connection constantly alive in a PHP socket server, I would also be open to that idea.
You'll get this error if the database connection has timed out (perhaps from a long-running process since the last time the connection was used).
You can easily check the connection and restore it if necessary with a single command:
$mysqli->ping()
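Note that ping() only re-establishes a dropped connection by itself when the mysqli.reconnect ini setting is enabled, and that setting has no effect under the mysqlnd driver. A safer sketch, with placeholder credentials, reconnects manually:

// Before reusing a long-idle connection, ping it and reconnect by hand
// if the ping fails.
if (!$mysqli->ping()) {
    $mysqli->close();
    $mysqli = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');
}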
This error means that the connection was established to the DBMS but subsequently lost. I've run sites with hundreds of concurrent connections to the same database instance, handling thousands of queries per minute, and have only seen this error when a query was deliberately killed by an admin. You should check your config for interactive_timeout and read your server logs.
Given your description of the error (you really need to gather more data and characterize the circumstances of the error), the only explanation that springs to mind is a deadlock somewhere.
I would question whether there is any benefit to closing the connection after each message (it depends on how the system is used). Artistic phoenix's comment is somewhat confused; if there are capacity issues, I'd suggest using persistent connections with your existing open/close model, but I doubt that is relevant to the problem you describe here.
I've been using php-activerecord for a short while now and I absolutely love it. Php-activerecord is an open-source ORM library based on the ActiveRecord pattern. However, I recently tried to use it in combination with a websocket application based on Wrench.
This works perfectly, but to keep the websockets always available the script has to run as a daemon on Linux. After a short while of not using the application and then trying to use it again, it throws some database exceptions:
At first it gives a warning:
PHP Warning: Error while sending QUERY packet. PID=XXXXX in /home/user/domains/example.com/public_html/vendor/php-activerecord/php-activerecord/lib/Connection.php on line 322
Then it throws a fatal error:
PHP Fatal error: Uncaught exception 'ActiveRecord\DatabaseException' with message 'exception 'PDOException' with message 'SQLSTATE[HY000]: General error: 2006 MySQL server has gone away' in /home/user/domains/example.com/public_html/vendor/php-activerecord/php-activerecord/lib/Connection.php:322
Stack trace:
#0 /home/user/domains/example.com/public_html/vendor/php-activerecord/php-activerecord/lib/Connection.php(322): PDOStatement->execute(Array)
#1 /home/user/domains/example.com/public_html/vendor/php-activerecord/php-activerecord/lib/Table.php(218): ActiveRecord\Connection->query('SELECT * FROM ...', Array)
#2 /home/user/domains/example.com/public_html/vendor/php-activerecord/php-activerecord/lib/Table.php(209): ActiveRecord\Table->find_by_sql('SELECT * FROM `...', Array, false, NULL)
#3 /home/user/domains/example.com/public_html/vendor/php-activerecord/php-activerecord/lib/Model.php(1567): ActiveRecord\Table->find(Array)
#4 in /home/user/domains/example.com/public_html/vendor/php-activerecord/lib/Connection.php on line 325
It seems like php-activerecord keeps the MySQL connection open the whole time the websocket server is running. That of course would not be a problem if it automatically tried to reconnect and run the query again, but it doesn't.
I've read something about setting MYSQL_OPT_RECONNECT, but I'm not sure whether that works or how to set that option using php-activerecord. Does anybody here have experience in this area?
Edit: Here are my global timeout config variables
VARIABLE_NAME VARIABLE_VALUE
DELAYED_INSERT_TIMEOUT 300
WAIT_TIMEOUT 28800
CONNECT_TIMEOUT 10
LOCK_WAIT_TIMEOUT 31536000
INNODB_ROLLBACK_ON_TIMEOUT OFF
THREAD_POOL_IDLE_TIMEOUT 60
NET_WRITE_TIMEOUT 60
INNODB_LOCK_WAIT_TIMEOUT 50
INTERACTIVE_TIMEOUT 28800
DEADLOCK_TIMEOUT_LONG 50000000
SLAVE_NET_TIMEOUT 3600
DEADLOCK_TIMEOUT_SHORT 10000
NET_READ_TIMEOUT 30
PHP ActiveRecord uses PDO. There is no explicit way to close a PDO connection; it is the wrong DB layer for long-running background tasks.
You can try to influence the disconnection of a PDO connection with the following snippet.
// if not using ZF2 libraries, disconnect in some other way
$db->getDriver()->getConnection()->disconnect();
$db = null;
gc_collect_cycles();
Disconnect, set your reference to null, then run the garbage collector. The hope is that this will trigger PDO's internal destructor and actually close the connection.
You must manage your DB connections in your own long running script. You must disconnect if your worker hasn't had to process work in a while, and you must reconnect when you have work.
The real solution is to not use PDO, so that you can disconnect and reconnect normally.
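A rough sketch of that disconnect-when-idle, reconnect-on-work pattern using mysqli; wait_for_next_message() is a hypothetical stand-in for your socket read, and all names and thresholds are illustrative:

$db = null;
$lastWork = time();
$idleLimit = 60; // seconds of idleness before we drop the connection

while (true) {
    $message = wait_for_next_message(5); // hypothetical socket read with timeout

    if ($message === null) {
        // No work arrived: close an idle connection rather than letting
        // the server time it out from under us.
        if ($db !== null && time() - $lastWork > $idleLimit) {
            $db->close();
            $db = null;
        }
        continue;
    }

    if ($db === null) {
        $db = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb'); // reconnect lazily
    }
    $stmt = $db->prepare('INSERT INTO messages (body) VALUES (?)');
    $stmt->bind_param('s', $message);
    $stmt->execute();
    $stmt->close();
    $lastWork = time();
}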
If you simply set both server and client library timeouts to be infinite, you'll run into problems with out of control scripts that never die, forcing you to restart the entire server (not a good idea to mess with timeouts).
EDIT: I actually had this exact problem and used this exact solution at work last year. It solved 99% of my problems, but every once in a while there was still a stray connection exception that I could not catch in order to reconnect. I simply restart the processes once a day to rid myself of those stray connection errors. That's why my answer is: don't use PDO. Switch now and get real control over disconnects and reconnects.
The most common reason for the MySQL server has gone away error is that the server timed out and closed the connection.
Try making the following change in your my.cnf file:
max_allowed_packet=64M
If you handle a lot of requests, you can also raise the connection limit, but don't set it higher than your environment actually needs:
max_connections=1000
Adding these lines to my.cnf may solve your problem. Restart the MySQL service once you are done with the change.
Read more on MySQL server has gone away
If that does not work, try this auto-reconnect function as well.
As others have said, the MySQL connection in a PHP script times out when there is no communication between the two for some time. That is a good thing, since idle connections would otherwise eat up your server's resources.
The "server has gone away" error mostly happens when a relatively lengthy computation takes place between two queries.
In order to prevent that, you can:
- periodically execute a SELECT 1 query during long computations;
- create a wrapper around your queries which checks that the connection is valid before executing (see the sketch below);
- use the answer from this post.
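A minimal sketch of that wrapper idea using mysqli; the class name and credentials are illustrative:

// Illustrative wrapper: check the link (and reconnect if needed) before
// each query instead of letting a timed-out connection fail mid-script.
class CheckedDb
{
    private $mysqli;

    public function __construct()
    {
        $this->connect();
    }

    private function connect()
    {
        $this->mysqli = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');
    }

    public function query($sql)
    {
        if (!$this->mysqli->ping()) { // link dropped while we were busy
            $this->connect();
        }
        return $this->mysqli->query($sql);
    }
}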
However, I believe that reconfiguring MySQL to keep connections open longer encourages careless programming, and I would advise against it.
It could also be the size of the query, as ORMs sometimes combine queries to improve performance. Try setting max_allowed_packet=128M; at the very least it should be useful as a diagnostic.
If your DB is not handling many concurrent connections and queries, you could set "infinite" timeouts; this won't affect DB resources significantly. The better approach, though, is to send ping packets (SELECT 1) to renew the timeout and keep the connection alive.
In order to solve this kind of problem, I suggest that you:
Distribute your processes using Gearman job server (http://gearman.org/)
Manage those processes easily using Supervisor (http://supervisord.org/)
Here is how.
Run your websocket application as a daemon, just as you already do (perhaps started via cron). Or, even better, manage it using Supervisor: configure it so that the daemon is started when Supervisor starts and automatically restarted if it dies.
Example configuration:
[program:my-daemon]
command=/usr/bin/php /path/to/your/daemon/script
autostart=true
autorestart=true
Next, instead of running the query processing inside the application daemon, create a Gearman Worker to handle it. Once registered, the Worker waits to be run/called. You call the Worker from your websocket application, passing a workload parameter where necessary (refer to the Gearman website for an explanation of the workload term).
In the Worker, have it stop/exit once it finishes the job requested by the daemon. This way you won't have the "MySQL server has gone away" problem, because the connection is closed immediately.
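A minimal sketch of such a worker using the pecl gearman extension; the function name, query and credentials are placeholders:

// Illustrative Gearman worker: register one function, handle a single
// job, then exit so Supervisor respawns a fresh process. The MySQL
// connection never outlives one job.
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);

$worker->addFunction('process_message', function (GearmanJob $job) {
    $db = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');
    $stmt = $db->prepare('INSERT INTO messages (body) VALUES (?)');
    $body = $job->workload();
    $stmt->bind_param('s', $body);
    $stmt->execute();
    $stmt->close();
    $db->close(); // connection is gone before the process can idle
});

$worker->work(); // block until exactly one job is done, then fall through and exit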
Finally, we have to make the Worker available at all times, just like the daemon. So, similarly to the daemon, configure Supervisor to autostart the Worker and restart it whenever it dies or stops, like this:
[program:my-worker]
command=/usr/bin/php /path/to/your/worker/script
autostart=true
autorestart=true
Another interesting thing is that you can keep as many Workers as you like alive and waiting. Just add the following configuration:
numprocs=9 #change it to any number
process_name=%(program_name)s_%(process_num)02d #to identity worker number
Since we told Supervisor to autorestart each process, we always have a constant pool of Workers running in the background.
Here is another explanation about this strategy: http://www.masnun.com/2011/11/02/gearman-php-and-supervisor-processing-background-jobs-with-sanity.html
Hope that helps!
I'm getting the following errors in my script:
mysqli_connect(): (08004/1040): Too many connections
mysqli_connect(): (HY000/1040): Too many connections
What is the difference and how can I solve this problem?
"Too many connections" indicates, that your script is opening at least more than one connection to the database. Basically, to one server, only one connection is needed. Getting this error is either a misconfiguration of the server (which I assume isn't the case because max connections = zero isn't an option) or some programming errors in your script.
Check for re-openings of your database connections (mysqli_connect). There should only be one per script (!) and usually you should take care of reusing open connections OR close them properly after script execution (mysqli_close)
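One simple way to enforce "one connection per script" is a lazy accessor that every caller shares; a sketch with placeholder credentials:

// Illustrative shared-connection accessor: the first call opens the
// link, every later call reuses it, so the script can't accidentally
// open a second connection.
function db()
{
    static $mysqli = null;
    if ($mysqli === null) {
        $mysqli = mysqli_connect('localhost', 'dbuser', 'dbpass', 'mydb');
    }
    return $mysqli;
}

Usage anywhere in the script: $result = mysqli_query(db(), 'SELECT 1');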
Steps to resolve that issue:
Check the MySQL connection limit in the configuration file (my.cnf).
Run the command below to check it:
mysql -e "show variables like '%connection%';"
You will see something like this:
max_connections = 500
Increase it as needed:
max_connections = 50000
Restart the MySQL service:
$ service mysql restart
Now check your website; I hope the error will no longer occur!
While I could not tell you the difference between the 2 error numbers above, I can tell you what causes this.
Your MySQL database only allows so many connections at the same time. If you connect to MySQL via PHP, you generally open a new connection every time a page on your site loads, so too much traffic to your site can cause this issue.
It is pretty common to have one connection to the database per page load, and certainly multiple queries per connection. So really it comes down to the following points. (Let me just tell you now: persistent connections will not solve your issue.)
If you have access to your server's CLI/SSH, try to increase the limit by modifying your MySQL configuration (don't forget to restart the service for the change to take effect). This will of course consume more system resources on your database server.
If you have a lot of AJAX requests or other internal database connections, you should try to consolidate them into a single script with a single call. Your site may make multiple AJAX calls to various PHP files that pull MySQL data, using a whole database connection for each one. Instead, create a single PHP file to collect all the data you need on a given page; that script can gather everything while using only one database connection.
As far as the difference between the two goes, I believe that HY000 is a PDO exception, whereas 08004 is actually coming from MySQL. Error 1040 is the code for "Too many connections".
You should also check if your disk is full, this can cause the same error:
df -h
will show you the remaining space on each partition. You probably want to check the root partition / (or /var/, in case you have a separate partition for it):
df -h /
So I was wondering whether I should or should not ping the mysql server (mysqli_ping) to ensure that the server is always alive before running query?
You shouldn't ping MySQL before a query, for three reasons:
- It's not a reliable way of checking that the server will be up when you attempt to execute your query; it could very well go down in the time between the ping response and the query.
- Your query may fail even if the server is up.
- As the traffic to your website scales up, you will be adding a lot of extra overhead to the database. It's not uncommon in enterprise apps that have used this method to see a huge share of the database's resources wasted on pings.
The best way to deal with database connections is error handling (try/catch), retries and transactions.
More on this on the MySQL performance blog:
Checking for a live database connection considered harmful
In that blog post you'll see that 73% of the load on one MySQL instance was caused by applications checking whether the DB was up.
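A hedged sketch of that try/catch-and-retry approach with PDO; the DSN, credentials, retry count and backoff are all illustrative choices:

// Illustrative retry loop: attempt the query, and on failure reconnect
// and retry a bounded number of times instead of pinging up front.
function runWithRetry($sql, $maxAttempts = 3)
{
    $attempt = 0;
    while (true) {
        try {
            $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'dbuser', 'dbpass',
                           array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
            return $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);
        } catch (PDOException $e) {
            if (++$attempt >= $maxAttempts) {
                throw $e; // give up and surface the real error
            }
            usleep(200000 * $attempt); // brief backoff before retrying
        }
    }
}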
I don't do this. I rely on the fact that I'll have a connection error if the server's gone and I try to do something.
Doing the ping might save you a bit of time and appear to be more responsive to the user, but a fast connection error isn't much better than waiting a few seconds followed by a connection error. Either way, the user can't do anything about it.
No.
Do you ping SO before you navigate there in a browser, just to be sure the server is running?
So I was wondering whether I should or should not ping the mysql server (mysqli_ping) to ensure that the server is always alive before running a query?
Not really. If it is not alive, you will find out through the error messages raised when connecting to the database or running your queries. You can get the MySQL error with:
mysql_error()
Example:
mysql_connect(......) or die(mysql_error());
This is not the standard way of dealing with it... If there's an exception, you'll deal with it then.
It's somewhat like the difference between checking that a file exists before trying to open it, versus catching the file-not-found exception when it occurs... If an error is very common and likely, it may be worth checking beforehand, but usually execution should proceed normally and exceptions should be caught and handled when they occur.
Generally speaking, no.
However, if you have a long-running script, for example some back-end process run as a cron job where there may be a long gap between connecting and subsequent queries, mysqli_ping() may be useful. Setting mysqli.reconnect to true in php.ini helps in this case.
No.
Just because the ping succeeds doesn't mean the query will. What if the server becomes unavailable between the time you ping it and the time you execute the query?
For this reason, you'll have to have proper error-catching around the query anyway.
And if you do, you might as well simply rely on this as your primary error trap.
Adding the ping just adds unnecessary round-trips, ultimately slowing down your code.
The only time I can think of to do this is if the database:
1. is non-critical to the functioning of your app, and
2. has a tendency to be offline.
Other than that, no.
The only time it would be worthwhile to use ping is if you were implementing your own DB connection-pooling system. Even in that case, I wouldn't ping before every query, just on each "connect"/checkout from the pool.
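For completeness, that checkout-time ping might look something like this toy pool (purely illustrative, with placeholder credentials):

// Toy in-process pool: ping only when a link is handed out, never per query.
class TinyPool
{
    private $idle = array();

    public function checkout()
    {
        while ($link = array_pop($this->idle)) {
            if ($link->ping()) {
                return $link; // still alive, reuse it
            }
            $link->close();   // stale link: discard and try the next one
        }
        return new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');
    }

    public function checkin(mysqli $link)
    {
        $this->idle[] = $link;
    }
}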
We have an application that comprises a couple of off-the-shelf PHP applications (ExpressionEngine and XCart) as well as our own custom code.
I did not do the actual analysis, so I don't know precisely how it was determined, but I am not surprised to hear that too many MySQL connections are being left unclosed. I am not surprised because I have seen significant memory leakage on our dev server: over the course of a day or two, starting from 100MB upon initial boot, the entire gig of RAM gets consumed, and very little of it is cached.
So, how do we go about determining precisely which PHP code is the culprit? I have prior experience with XDebug, and have suggested that once our separate staging environment is reasonably stable, we retrofit XDebug onto dev and use it to do some analysis. Is this reasonable, and does anybody have more specific or additional suggestions?
You can use the
SHOW PROCESSLIST
SQL command to see what processes are running. That will tell you the username, host, database, etc that are in use by each process. That should give you some idea what's going on, especially if you have a number of databases being accessed.
More here: https://dev.mysql.com/doc/refman/8.0/en/show-processlist.html
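If you'd rather capture it from PHP, something along these lines works (credentials are placeholders, and the account needs the PROCESS privilege to see other users' threads):

// Illustrative: dump who is connected and what each thread is doing.
$db = new mysqli('localhost', 'adminuser', 'adminpass');
$result = $db->query('SHOW FULL PROCESSLIST');
while ($row = $result->fetch_assoc()) {
    printf("%s\t%s\t%s\t%s\t%ss\n",
           $row['Id'], $row['User'], $row['Host'], $row['db'], $row['Time']);
}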
This should not be caused by PHP code, because MySQL connections are supposed to be closed automatically. Cf. http://www.php.net/manual/function.mysql-connect.php:
The link to the server will be closed as soon as the execution of the script ends, unless it's closed earlier by explicitly calling mysql_close().
Some suggestions:
- Do your developers have direct access to your production MySQL server? If yes, they may simply be leaving their MySQL manager sessions open :)
- Do you have a daily batch process? If yes, maybe some zombie processes are lingering in memory.
PHP automatically closes any MySQL connections when the page ends. The only reasons a PHP web application would have too many unclosed MySQL connections are either 1) you're using connection pooling, or 2) there's a bug in the MySQL server or the connector.
But if you really want to look at your code to find where it's connecting, see http://xdebug.org/docs/profiler
As others have said, PHP terminates MySQL connections created through mysql_connect or the mysqli/PDO equivalents.
However, you can create persistent connections with mysql_pconnect. It looks for existing open connections and reuses them; if it can't find one, it opens a new one. If you had a lot of requests at once, this could have caused loads of connections to open and stay open.
You could lower the maximum number of connections, or lower the timeout for persistent connections. See the comments at the bottom of the manual page for more details.
I used to run a script that polled SHOW STATUS for thread count and I noticed that using mysql_pconnect always encouraged high numbers of threads. I found that very disconcerting because then I couldn't tell when my connection rate was actually dropping. So I made sure to centralize all the places where mysql_connect() was called and eliminate mysql_pconnect().
The next thing I did was look at the connection timeouts and bring them down to more like 30 seconds, so I adjusted my my.cnf with
connect-timeout=30
so I could actually see the number of connections drop off. The number of connections you need open depends on how many Apache workers you're running times the number of database connections each of them will open.
The other thing I started doing was adding a note to my queries in order to spot them in SHOW PROCESSLIST or mytop. I would add a note column to my results like this:
$q = "SELECT '".__FILE__.'.'.__LINE__."' as _info, * FROM table ...";
This would show me the file and line issuing the query when I looked at mytop, and it didn't foil the MySQL query cache the way putting
/* __FILE__.'.'.__LINE__ */
at the start of the query would.
I suppose another couple of things I can do, with regard to the general memory issue (as opposed to MySQL specifically), and particularly within our own custom code, would be to wrap our code with calls to one or both of the following PHP built-in functions:
memory_get_usage
memory_get_peak_usage
In particular, since I am currently working on logging from some custom code, I can log the memory usage while I'm at it.
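For instance, a tiny helper like this could be dropped into that logging (the name and format are illustrative):

// Illustrative helper: append current and peak memory usage to the log.
function log_memory($label)
{
    error_log(sprintf('%s: %s using %.2f MB (peak %.2f MB)',
        date('c'), $label,
        memory_get_usage(true) / 1048576,
        memory_get_peak_usage(true) / 1048576));
}

Example: log_memory('after building report');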