I've been using php-activerecord for a short while now and I absolutely love it. Php-activerecord is an open source ORM library based on the ActiveRecord pattern. However, I recently tried to use it in combination with a websocket application based on Wrench.
This works perfectly, but to keep the websockets always available the application has to run as a daemon on Linux. After a short while of not using the application and then trying to use it again, it throws some database exceptions:
At first it gives a warning:
PHP Warning: Error while sending QUERY packet. PID=XXXXX in /home/user/domains/example.com/public_html/vendor/php-activerecord/php-activerecord/lib/Connection.php on line 322
Then it throws a fatal error:
PHP Fatal error: Uncaught exception 'ActiveRecord\DatabaseException' with message 'exception 'PDOException' with message 'SQLSTATE[HY000]: General error: 2006 MySQL server has gone away' in /home/user/domains/example.com/public_html/vendor/php-activerecord/php-activerecord/lib/Connection.php:322
Stack trace:
#0 /home/user/domains/example.com/public_html/vendor/php-activerecord/php-activerecord/lib/Connection.php(322): PDOStatement->execute(Array)
#1 /home/user/domains/example.com/public_html/vendor/php-activerecord/php-activerecord/lib/Table.php(218): ActiveRecord\Connection->query('SELECT * FROM ...', Array)
#2 /home/user/domains/example.com/public_html/vendor/php-activerecord/php-activerecord/lib/Table.php(209): ActiveRecord\Table->find_by_sql('SELECT * FROM `...', Array, false, NULL)
#3 /home/user/domains/example.com/public_html/vendor/php-activerecord/php-activerecord/lib/Model.php(1567): ActiveRecord\Table->find(Array)
#4 in /home/user/domains/example.com/public_html/vendor/php-activerecord/lib/Connection.php on line 325
It seems like php-activerecord keeps the MySQL connection open the whole time the websocket server is running. That of course should not be a problem if it then automatically tried to reconnect and run the query again, but it doesn't.
I've read something about setting MYSQL_OPT_RECONNECT, but I'm not sure if that works or how to set that option using php-activerecord. Does anybody here have some experience in this area?
Edit: Here are my global timeout config variables
VARIABLE_NAME VARIABLE_VALUE
DELAYED_INSERT_TIMEOUT 300
WAIT_TIMEOUT 28800
CONNECT_TIMEOUT 10
LOCK_WAIT_TIMEOUT 31536000
INNODB_ROLLBACK_ON_TIMEOUT OFF
THREAD_POOL_IDLE_TIMEOUT 60
NET_WRITE_TIMEOUT 60
INNODB_LOCK_WAIT_TIMEOUT 50
INTERACTIVE_TIMEOUT 28800
DEADLOCK_TIMEOUT_LONG 50000000
SLAVE_NET_TIMEOUT 3600
DEADLOCK_TIMEOUT_SHORT 10000
NET_READ_TIMEOUT 30
PHP ActiveRecord uses PDO. There is no explicit way to close a PDO connection; it is the wrong DB layer for long-running background tasks.
You can try to influence the disconnection of a PDO connection with the following snippet.
// if not using ZF2 libraries, disconnect in some other way
$db->getDriver()->getConnection()->disconnect();
$db = NULL;
gc_collect_cycles();
Disconnect, set your reference to null, then run the garbage collector. The hope is that this will trigger PDO's internal __destruct method and actually close the connection.
You must manage your DB connections in your own long running script. You must disconnect if your worker hasn't had to process work in a while, and you must reconnect when you have work.
The real solution is to not use PDO and disconnect and reconnect normally.
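As a rough illustration of that advice, here is a minimal sketch of a worker loop using mysqli, which supports an explicit close() and a clean reconnect. The 60-second idle threshold, the credentials, and the fetch_next_job() helper are hypothetical placeholders, not part of any library mentioned here:

$mysqli = null;
$lastWork = time();

while (true) {
    $job = fetch_next_job(); // hypothetical: however your daemon receives work

    if ($job === null) {
        // No work: close the connection after 60 seconds of idling.
        if ($mysqli !== null && (time() - $lastWork) > 60) {
            $mysqli->close();
            $mysqli = null;
        }
        sleep(1);
        continue;
    }

    // Work arrived: reconnect if needed, then run the query.
    if ($mysqli === null) {
        $mysqli = new mysqli('localhost', 'user', 'secret', 'app');
    }
    $stmt = $mysqli->prepare('UPDATE jobs SET done = 1 WHERE id = ?');
    $stmt->bind_param('i', $job['id']);
    $stmt->execute();

    $lastWork = time();
}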
If you simply set both server and client library timeouts to be infinite, you'll run into problems with out of control scripts that never die, forcing you to restart the entire server (not a good idea to mess with timeouts).
EDIT: I actually had this exact problem and used this exact solution at work last year. It solved 99% of my problems, but every once in a while there was still a stray connection exception that I could not catch in order to reconnect. I simply restarted the processes once a day to rid myself of those stray connection errors. That's why my answer is: don't use PDO. Switch now and get real control over disconnects and reconnects.
The most common reason for the MySQL server has gone away error is that the server timed out and closed the connection.
Try making the following change in your my.cnf file:
max_allowed_packet=64M
If you get a lot of requests, also set the following; don't set it higher than necessary, because the right value depends on your environment:
max_connections=1000
Adding these lines to the my.cnf file might solve your problem. Restart the MySQL service once you are done with the change.
Read more on MySQL server has gone away
If it does not work try this auto-reconnect function as well.
As said, MySQL in PHP scripts times out when there is no communication between the two for some time.
That is a good thing, since idle connections would eat up your server resources.
"Server has gone away" error mostly happens when a relatively lenghty computation happens between two queries.
In order to prevent that, you can
Periodically execute a SELECT 1 query during your execution
Create a wrapper around your queries which checks whether the connection is valid before executing (see the sketch after this list)
Use answer from this post
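A minimal sketch of such a wrapper, assuming plain PDO with placeholder credentials; the class name ReconnectingPdo is illustrative, not part of php-activerecord or PDO:

class ReconnectingPdo
{
    private $pdo = null;
    private $dsn;
    private $user;
    private $password;

    public function __construct($dsn, $user, $password)
    {
        $this->dsn = $dsn;
        $this->user = $user;
        $this->password = $password;
    }

    private function ensureConnected()
    {
        if ($this->pdo !== null) {
            try {
                // Cheap ping; throws if the server closed the connection.
                $this->pdo->query('SELECT 1');
                return;
            } catch (PDOException $e) {
                $this->pdo = null; // drop the dead handle and reconnect below
            }
        }
        $this->pdo = new PDO($this->dsn, $this->user, $this->password, array(
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ));
    }

    public function run($sql, array $params = array())
    {
        $this->ensureConnected();
        $stmt = $this->pdo->prepare($sql);
        $stmt->execute($params);
        return $stmt;
    }
}

// Usage: $db = new ReconnectingPdo('mysql:host=localhost;dbname=app', 'user', 'secret');
//        $rows = $db->run('SELECT * FROM users')->fetchAll();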
However, I believe that reconfiguring MySQL to keep connections open for longer encourages careless programming, and I would advise against it.
It could also be the size of the query, as ORMs sometimes combine queries to improve performance.
Try setting max_allowed_packet=128M; at the very least it should be useful as a diagnostic.
If your DB is not handling multiple concurrent connections and queries, you could set "infinite" timeouts. This won't affect DB resources significantly. The best approach is to send ping packets (SELECT 1) to renew the timeout and keep the connection alive.
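A minimal sketch of that keep-alive idea inside a long-running loop, assuming plain PDO with placeholder credentials and an illustrative 60-second interval (it just needs to be shorter than the server's wait_timeout):

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret', array(
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
));
$lastPing = time();

while (true) {
    // ... handle websocket events here ...

    if (time() - $lastPing >= 60) {
        $pdo->query('SELECT 1'); // renews the server-side idle timer
        $lastPing = time();
    }

    usleep(100 * 1000); // sleep 0.1 s between iterations
}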
In order to solve such a problem, I suggest that you:
Distribute your processes using Gearman job server (http://gearman.org/)
Manage those processes easily using Supervisor (http://supervisord.org/)
Here is how.
Run your websocket application as a daemon, just like you already do now (perhaps starting it from cron). Or even better, manage it with Supervisor. Configure it so that Supervisor starts it when Supervisor itself starts, and automatically restarts the daemon if it dies.
Example configuration:
[program:my-daemon]
command=/usr/bin/php /path/to/your/daemon/script
autostart=true
autorestart=true
Next, instead of running the query processing inside the application daemon, create a Gearman Worker to handle it. Once registered, the Worker will be waiting to be run/called. You call the Worker from your websocket application, together with a workload parameter if needed (refer to the Gearman website for an explanation of this workload term).
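A minimal sketch of calling the Worker from the websocket daemon, assuming the PECL gearman extension, a job server on localhost, and an illustrative function name and payload:

$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);

// Fire-and-forget: the Worker does the DB work in its own process.
$client->doBackground('process_chat_message', json_encode(array(
    'body' => 'Hello from the websocket server',
)));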
In the Worker, make it stop/exit once it has finished the job requested by the daemon. With this, you won't have the "MySQL server has gone away" problem, because the connection is closed as soon as the Worker exits.
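A minimal sketch of such a Worker, again assuming the PECL gearman extension; the function name, table, and credentials are placeholders:

$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);

$worker->addFunction('process_chat_message', function (GearmanJob $job) {
    $payload = json_decode($job->workload(), true);

    // Open a fresh connection, do the work, and let it close when we exit.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret', array(
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ));
    $stmt = $pdo->prepare('INSERT INTO messages (body) VALUES (?)');
    $stmt->execute(array($payload['body']));

    return 'ok';
});

// Handle exactly one job, then exit; Supervisor will start a new process.
$worker->work();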
Finally, we have to keep the Worker available at all times, just like the daemon. So, similar to the daemon, configure Supervisor to autostart the Worker and to autorestart it whenever it dies/stops, like this:
[program:my-worker]
command=/usr/bin/php /path/to/your/worker/script
autostart=true
autorestart=true
Another interesting thing is that you can add as many Workers as you like waiting in the background. Just add the following configuration:
; change numprocs to any number of worker processes you want
numprocs=9
; %(process_num)02d identifies each worker process
process_name=%(program_name)s_%(process_num)02d
Since we told Supervisor to autorestart each process, we always have a constant number of Workers running in the background.
Here is another explanation about this strategy: http://www.masnun.com/2011/11/02/gearman-php-and-supervisor-processing-background-jobs-with-sanity.html
Hope that helps!
Related
So I have a script which does the following:
for ($i = $start;$i<=$end;$i++){
shell_exec("php /home/mysql_box/script.php phrases $i {$argv[3]} > /dev/null 2>&1 &");
}
On one of my servers I would like this script to run 400 times. As part of the call it connects to a MySQL server on a different box, but for some weird reason it hits about 130 connections and then errors with "MySQL server has gone away".
I appreciate this is quite common and the MySQL manual states:
You can also encounter this error with applications that fork child processes, all of which try to use the same connection to the MySQL server. This can be avoided by using a separate connection for each child process.
However, surely I wouldn't be using the same connection? My wait_timeout is the default - which is huge - so why would some connections simply not be able to connect?
A couple of other points from the manual:
The connection uses the IP not host name
I didn't start mysql with skip-networking
The query is a single one liner - its not big
I am using Ubuntu
I am connecting with PDO
I hope I've given enough info to help.
So I believe I have discovered the issue and I hope this helps anyone else. The reason was that the initial query run on each loop took approximately 15 seconds. For some reason this reduced the number of concurrent new connections and as such crashed out. The solution was therefore to reduce the query time required, so if you experience this problem as well, play around with the queries themselves.
You are spawning 400 PHP processes and letting them run in the background. Handling more than 130 simultaneous connections might be hard for your MySQL server.
You might want to wait a little between launching all these processes:
for ($i = $start;$i<=$end;$i++){
shell_exec("php /home/mysql_box/script.php phrases $i {$argv[3]} > /dev/null 2>&1 &");
if ($i%50 === 0) //every 50 process wait 1 second, so you give a chance that they're finished, and hopefully your db server will be able to handle more connections
sleep(1);
}
I think that firing 400 queries almost simultaneously from the scripts invoked in the for loop is a bad idea per se.
Even if each query takes little time to execute, the loop will spawn processes faster than they finish.
You may hit the maximum number of allowed connections set on the server running MySQL: max_connections. I'm not 100% sure this is the issue you have, because normally the error you would get is Too many connections: see Too many connections
You may also hit the maximum number of network connections allowed by the destination server (this is more likely to lead to a Mysql server has gone away)
Consider also that you're spawning 400 processes at once, each one running the PHP interpreter. That is quite heavy for the local server.
The quickest dirty fix is to set some delay between scripts invocation.
for ($i = $start;$i<=$end;$i++){
shell_exec("php /home/mysql_box/script.php phrases $i {$argv[3]} > /dev/null 2>&1 &");
usleep( 1000 * 1000 * 100 ); // 0.1 SECS
}
However, this is pushing you toward a "synchronous" approach, which may not be what you want.
The proper solution would be to implement a system to know when a spawned script has ended. Then start running the scripts in parallel, but avoid having more than, e.g., 50 running simultaneously (see the sketch below).
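A rough sketch of that idea using proc_open, assuming the same script path and arguments as above; the limit of 50 and the polling interval are illustrative:

$maxConcurrent = 50;
$running = array();

for ($i = $start; $i <= $end; $i++) {
    // Wait until a slot frees up before starting the next child.
    while (count($running) >= $maxConcurrent) {
        foreach ($running as $key => $proc) {
            $status = proc_get_status($proc);
            if (!$status['running']) {
                proc_close($proc);
                unset($running[$key]);
            }
        }
        usleep(100 * 1000); // poll every 0.1 s
    }

    $cmd = "php /home/mysql_box/script.php phrases $i " . escapeshellarg($argv[3]);
    $running[] = proc_open($cmd, array(), $pipes);
}

// Wait for the remaining children to finish.
foreach ($running as $proc) {
    proc_close($proc); // proc_close() waits for the process to exit
}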
You may consider implementing multithreading directly in your PHP scripts, see:
PThreads / PHP PThreads
PHPThread
GPhpThread
If "hitting the server more gently" (you may even try as a problem-isolation-strategy to run the scripts with high delay or syncronously) doesn't solve the problem (but you should issue less queries at once anyway in my opinion) then the problem falls into the MySQL server has gone away category.
I quote from the linked docs what I think the likely causes are (for the full list go to the link above):
You tried to run a query after closing the connection to the server. This indicates a logic error in the application that should be corrected.
A client application running on a different host does not have the necessary privileges to connect to the MySQL server from that host.
You have encountered a timeout on the server side and the automatic reconnection in the client is disabled (the reconnect flag in the MYSQL structure is equal to 0).
You may also see the MySQL server has gone away error if MySQL is started with the --skip-networking option.
I'm hosting a website locally on a WAMP stack. I recently switched the PHP connection to be persistent by adding array(PDO::ATTR_PERSISTENT => true) to the PDO constructor options argument. I've noticed a material drop in the response time as a result (hooray!).
The downside seems to be a gone away error when the machine wakes up. This never happened before changing the connection style.
Is it possible that the cached connection is closed, but continues to be returned? Is it possible to reset a PDO connection or reset the connection pool via PHP inside a catch block?
I've kicked this around for a few days, and based on the prevalence of similar issues on the web, this appears to be a deficiency in PDO that prevents efficient management of persistent connections.
Answers to the obvious questions:
PHP 5.4.22
Driver settings in php.ini have persistent connections turned on
Session limits are not bounded (set to -1)
Pool limits are not bounded (set to -1)
I can recreate the issue by doing the following:
Issue the following statements on the MySQL database.
set @@GLOBAL.interactive_timeout := 10;
set @@GLOBAL.wait_timeout := 10;
Issue a few requests against the server to generate some cached connections. You can see the thread count increase compared to doing this with non-persistent connections via:
echo $conn->getAttribute(PDO::ATTR_SERVER_INFO);
Wait at least 10 seconds and start issuing more requests. You should start receiving 'gone away' messages.
The issue is that MySQL closes the connections, and subsequent calls to the PDO constructor return these closed connections without reconnecting them.
This is where PDO is deficient. There is no way to force a connection open and no good way to even detect state.
The way I'm currently getting around this (admittedly a bit of a hack) is issuing these MySQL statements
set @@GLOBAL.interactive_timeout := 86400;
set @@GLOBAL.wait_timeout := 86400;
These variables are set to 28800 seconds (8 hours) by default. Note that you'll want to restart Apache to clear out cached connections, or you won't notice a difference until all connections in the pool have been cycled (I have no idea how or when that happens). I chose 86400, which is 24 hours, and I'm on this machine daily, so this should cover the basic need.
After this update I let my machine sit for at least 12 hours, which was how long it sat previously when I started getting 'gone away' messages. It looks like problem solved.
I've been thinking that while I can't force open a connection, it may be possible to remove a bad connection from the pool. I haven't tried this, but a slightly more elegant solution might be to detect the 'gone away' message and then set the object to NULL, telling PHP to destroy the resource. If the database logic made a few attempts like this (there'd have to be a limit in case a more severe error occurred), it might help keep these errors to a minimum.
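A rough sketch of that untested idea, with a placeholder DSN/credentials and an illustrative limit of 3 attempts:

function runWithRetry($sql, array $params = array(), $maxAttempts = 3)
{
    $attempts = 0;
    while (true) {
        $conn = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret', array(
            PDO::ATTR_PERSISTENT => true,
            PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
        ));
        try {
            $stmt = $conn->prepare($sql);
            $stmt->execute($params);
            return $stmt;
        } catch (PDOException $e) {
            $conn = null; // ask PHP to destroy the (dead) pooled resource
            if (++$attempts >= $maxAttempts || strpos($e->getMessage(), 'gone away') === false) {
                throw $e; // out of attempts, or a more severe error
            }
        }
    }
}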
For what it's worth, I'm looking into using persistent connections on php-fpm 7.3 behind nginx, and trying to reproduce that behaviour with a static pool of 1 child, and so far I can't.
I can see through SHOW PROCESSLIST on a separate terminal how the database closes the persistent connection 5 seconds after a request that does a SELECT, but the next request just opens a new one and works just as well. On the other hand, if I bombard the API with a load-testing tool, the same connection is maintained and all requests succeed.
Maybe it was because you used Apache+mod_php instead of the php-fpm worker pool, or maybe there has been a genuine fix between PHP 5.4 and 7.3.
Versions tested:
PHP-FPM: 7.3.13
mysqlnd (underlying PDO_MYSQL driver): 5.0.12-dev - 20150407
MySQL Server: 5.7.29 and 8.0.19
MariaDB Server (MySQL drop-in replacement): 10.1.43
P.S. Thanks for laying out the reproduction steps and your thought process, it was invaluable.
Yes you will need to reconnect if the connection closes.
http://brady.lucidgene.com/2013/04/handling-pdo-lost-mysql-connection-error/
I regularly have the following error:
PHP Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000] [1129] Host 'MY SERVER' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
It is easy to solve the problem with a regular (like crontab) mysqladmin flush-hosts command or increasing the max_connect_errors system variable, as written here.
BUT! What are "many successive interrupted connection requests", and why is this happening?
I'd rather prevent the problem upstream than keep correcting the blocking after the fact.
MySQL version : 5.5.12. I'm using Zend Framework 1.11.10 and Doctrine 2.1.6.
There are no mysql_close() nor mysqli_close() in my PHP Code.
max_connect_errors has the default value, 10, and I don't want to increase it yet, I want to understand why I've got the errors. I use a cron, every 5 minutes which does a mysqladmin flush-hosts command.
This response is by design as a security measure, and is the result of reaching the max_connection_errors value for mysql. Here's a link Oracle provides which details most of the possible causes and solutions.
Ultimately this means that there are so many successive connection failures that MySql stops responding to connection attempts.
I use a cron, every 5 minutes which does a mysqladmin flush-hosts command.
As you are reaching this limit so quickly, there are only a few likely culprits:
Server is not correctly configured to use PDO.
Running code includes very frequently creating new connections.
Results in quickly reaching the max_connections value, causing all subsequent connection attempts to fail... thus quickly reaching the max_connection_errors limit.
Code is hitting an infinite loop, or cascading failure.
Obvious possibility, but must be mentioned.
(e.g.: pageA calls pageB and pageC, and pageC calls pageA)
PDO is running fine, but some scripts take a long time to run, or never end.
Easiest way to catch this is turn down the max_execution_time.
It is likely that whatever the case, this will be difficult to track down.
Log a stack-trace of every mysql connection attempt to find what code is causing this.
Check the mysql.err logfile
While PDO does not require explicitly closing MySQL connections, for cases like this there are a few practices that can prevent such server-admin hunts.
Always explicitly close mysql connections.
Build a simple class to handle all connections: open, return array, close (see the sketch after this list).
The only time you need to keep a connection open is for cursors.
Always define connection arguments in one and only one file included everywhere it is needed.
Never increase max_execution_time unless you know you need it and you know the server can handle it. IF you need it, explicitly increase the value only for the script that needs it. php.net/manual/en/function.set-time-limit.php
If you increase max_execution_time, increase max_connections.
dev.mysql.com/doc/refman/5.0/en/cursors.html
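A minimal sketch of such an open/return/close helper, assuming PDO with placeholder credentials; the class name Db is illustrative:

class Db
{
    public static function fetchAll($sql, array $params = array())
    {
        // Open a connection only for the duration of this one query.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret', array(
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ));
        $stmt = $pdo->prepare($sql);
        $stmt->execute($params);
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
        $pdo = null; // release the connection as soon as we have the rows
        return $rows;
    }
}

// Usage: every call opens its own short-lived connection.
$rows = Db::fetchAll('SELECT id, name FROM users WHERE active = ?', array(1));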
It means that mysqld has received many connection requests from the given host that were interrupted in the middle. Check out this link from the documentation for more info.
In a load test of our PHP-based web application we can easily reach our DB's hard limit of 150 max connections. We run Kohana with ORM to manage the DB connections.
This causes connection exceptions (and thus failed transactions), mysql_pconnect seems to perform even worse.
We're looking for a solution to have graceful degradation under load. Options considered:
1. A DB connection pool (uh, that's not possible with PHP, right?)
2. Re-try a failed connection when the failure was due to max connections reached
2 seems logical, but Kohana/ORM manages the DB connection process. Can we configure this somehow?
Is there something I'm not thinking of?
EDIT
This is an Amazon AWS RDS database instance, Amazon sets the 150 limit for me, and the server is most certainly configured correctly. I just want to ensure graceful degradation under load with whichever database I'm using. Clearly I can always upgrade the DB and have a higher connection limit, but I want to guard against a failure situation in case we do hit our limit unexpectedly. Graceful degradation under load.
When you say load testing, I am assuming you are pushing roughly 150 concurrent requests, and not that you are hitting the connection limit because you make multiple connections within the same request. If so, check out mysql_pconnect. To enable it in Kohana, simply set persistent = true in the config/database file for your connections.
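For reference, a sketch of what that might look like in a Kohana 3.x application/config/database.php; the hostname and credentials are placeholders:

return array(
    'default' => array(
        'type'       => 'MySQL',
        'connection' => array(
            'hostname'   => 'your-rds-endpoint',
            'database'   => 'app',
            'username'   => 'user',
            'password'   => 'secret',
            'persistent' => TRUE, // reuse the connection between requests
        ),
        'table_prefix' => '',
        'charset'      => 'utf8',
    ),
);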
If that doesn't work, then you'll have to find an Amazon product that allows more connections since PHP does not share resources between threads.
This answers your question about PHP database connection pooling.
If the limit is 150 for connections (default for max_connections is 151), you are most likely running mysql without a config file
You will need to create a config file to raise that number
Create /etc/my.cnf and put in these two lines
[mysqld]
max_connections=300
You do not have to restart mysql (you could if you wish)
You could just run this MySQL command to raise it dynamically
SET GLOBAL max_connections = 300;
UPDATE 2012-04-06 12:39 EDT
Try using mysql_pconnect instead of mysql_connect. If Kohana can be configured to use mysql_pconnect, you are good to go.
We have developed a chat module using Node.js and MongoDB sharding and have gone live on our production server. But today it reached 20,000 connections in MongoDB and we are getting a "Too many connections" error in the logs. After restarting the Node server things are back to normal, but we need to know how to solve this problem for good.
Is there any configuration in MongoDB to kill a connection when it is not used, or to set an expiry time when establishing the connection?
Please help us to close this issue.
Regards,
Kumaran
You're probably not running into a MongoDB issue. There's a cap on the number of connections you can make to MongoDB that's usually roughly equal to the maximum number of file descriptors available to it.
It sounds like there is a bug in your code (likely) or mongoose (less likely) that either creates more connections than it closes, or never closes connections in the first place. In Java, for example, creating a new "Mongo" class instance for each query would result in this sort of problem, but I don't work with node.js/mongoose, so I do not know what the JS equivalent of that is.
Keep an eye on mongostat and check whether the connection count always increases or sometimes decreases. If it's the former, your code never releases connections for whatever reason. If it's the latter, you're simply creating them faster than idle connections are disconnected. That's usually due to doing something heavyweight (like the driver initialising its connection pool) for every query rather than once.