So I have a script which does the following:
for ($i = $start; $i <= $end; $i++) {
    shell_exec("php /home/mysql_box/script.php phrases $i {$argv[3]} > /dev/null 2>&1 &");
}
On one of my servers I would like this script to run 400 times. As part of each call it connects to a MySQL server on a different box, but for some weird reason it hits about 130 connections and then errors with "MySQL server has gone away".
I appreciate this is quite common and the MySQL manual states:
You can also encounter this error with applications that fork child processes, all of which try to use the same connection to the MySQL server. This can be avoided by using a separate connection for each child process.
However, surely I wouldn't be using the same connection? My wait_timeout is the default, which is huge, so why would some connections simply fail to connect?
A couple of other points from the manual:
The connection uses the IP address, not the host name
I didn't start mysql with skip-networking
The query is a single one-liner; it's not big
I am using Ubuntu
I am connecting with PDO
I hope I've given enough info to help.
So I believe I have discovered the issue, and I hope this helps anyone else. The cause was that the initial query run on each loop iteration took approximately 15 seconds. For some reason this reduced the number of new connections that could be made concurrently, and the script crashed out as a result. The solution was therefore to reduce the time the query needed, so if you experience this problem as well, play around with the queries themselves.
You are spawning 400 PHP processes and letting them run in the background. Handling more than 130 simultaneous connections might be hard for your MySQL server.
You might want to wait a little between launching these processes:
for ($i = $start; $i <= $end; $i++) {
    shell_exec("php /home/mysql_box/script.php phrases $i {$argv[3]} > /dev/null 2>&1 &");
    if ($i % 50 === 0) {
        // Every 50 processes, wait 1 second so some of them have a chance to finish
        // and hopefully your DB server will be able to handle the new connections.
        sleep(1);
    }
}
I think that firing 400 queries almost simultaneously from the scripts invoked in the for loop is a bad idea per se.
Even if each query takes little time to execute, the loop will spawn new processes faster than the old ones finish.
You may hit the maximum number of allowed connections set on the server running MySQL: max_connections. I'm not 100% sure this is the issue you have, because normally the error you would get is Too many connections: see Too many connections.
You may also hit the maximum number of network connections allowed by the destination server (this is more likely to lead to MySQL server has gone away).
Consider also that you're spawning 400 processes at once, each one running the PHP interpreter. That is quite heavy for the local server.
The quickest, dirty fix is to set some delay between script invocations:
for ($i = $start; $i <= $end; $i++) {
    shell_exec("php /home/mysql_box/script.php phrases $i {$argv[3]} > /dev/null 2>&1 &");
    usleep(100 * 1000); // 0.1 secs (usleep takes microseconds)
}
However this is leading you towards a "synchronous" approach, which may not be what you want.
The proper solution would be to implement a system that knows when a spawned script has ended, then run the scripts in parallel while avoiding more than, e.g., 50 running simultaneously.
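For illustration, here is a minimal sketch of that idea using proc_open(); the command line and the limit of 50 come from the question, while the variable names and the 0.1-second poll interval are made up for the example.
$maxConcurrent = 50;
$running = array();

for ($i = $start; $i <= $end; $i++) {
    // Wait until one of the 50 slots frees up.
    while (count($running) >= $maxConcurrent) {
        foreach ($running as $key => $proc) {
            $status = proc_get_status($proc);
            if (!$status['running']) {
                proc_close($proc);
                unset($running[$key]);
            }
        }
        usleep(100 * 1000); // poll every 0.1 seconds
    }

    $cmd = "php /home/mysql_box/script.php phrases $i " . escapeshellarg($argv[3]);
    $running[] = proc_open($cmd, array(), $pipes);
}

// Wait for the remaining children to finish.
foreach ($running as $proc) {
    proc_close($proc); // proc_close() blocks until the process exits
}
The same effect can be achieved with pcntl_fork()/pcntl_wait() if the pcntl extension is available.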
You may consider implementing multithreading directly in your PHP scripts, see:
PThreads / PHP PThreads
PHPThread
GPhpThread
If "hitting the server more gently" (you may even try as a problem-isolation-strategy to run the scripts with high delay or syncronously) doesn't solve the problem (but you should issue less queries at once anyway in my opinion) then the problem falls into the MySQL server has gone away category.
I quote from the linked docs what may be the causes in my opinion (for the full list go to ht elink above)
You tried to run a query after closing the connection to the server. This indicates a logic error in the application that should be corrected.
A client application running on a different host does not have the necessary privileges to connect to the MySQL server from that host.
You have encountered a timeout on the server side and the automatic reconnection in the client is disabled (the reconnect flag in the MYSQL structure is equal to 0).
You may also see the MySQL server has gone away error if MySQL is started with the --skip-networking option.
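Since the question mentions connecting with PDO, here is a purely illustrative sketch (not part of the quoted manual text) of how a client could detect the "gone away" error and retry once on a fresh connection; the DSN, credentials and query are placeholders.
$dsn = 'mysql:host=192.168.0.10;dbname=mydb'; // placeholder DSN/credentials
$options = array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION);
$pdo = new PDO($dsn, 'user', 'pass', $options);
$sql = 'SELECT 1'; // stand-in for the real one-liner query

try {
    $stmt = $pdo->query($sql);
} catch (PDOException $e) {
    if (strpos($e->getMessage(), 'server has gone away') === false) {
        throw $e; // a different problem, don't hide it
    }
    // Re-open the connection and retry once.
    $pdo = new PDO($dsn, 'user', 'pass', $options);
    $stmt = $pdo->query($sql);
}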
Related
I'm having trouble investigating an issue with many sleeping MySQL connections.
Once every one or two days I notice that all (151) MySQL connections are taken, and all of them seem to be sleeping.
I investigated this, and one of the most reasonable explanations is that the PHP script was just killed, leaving a MySQL connection behind. We log visits at the beginning of the request, and update that log when the request finishes, so we can tell that indeed some requests do start, but don't finish, which indicates that the script was indeed killed somehow.
Now, the worrying thing is, that this only happens for 1 specific user, and only on 1 specific page. The page works for everyone else, and when I log in as this user on the Production environment, and perform the exact same action, everything works fine.
Now, I have two questions:
I'd like to find out why the PHP script is killed. Could this possibly have anything to do with the client? Can a client do 'something' to end the request and kill the PHP script? If so, why don't I see any evidence of that in the Apache logs? Or maybe I don't know what to look for? How do I find out whether the script was indeed killed, or what caused it?
How do I prevent this? Can I somehow set a limit to the number of MySQL connections per PHP session? Or can I somehow detect long-running and sleeping MySQL connections and kill them? It isn't an option for me to set the connection timeout to a shorter time, because there are processes which run considerably longer, and the 151 connections are used up in less than 2 minutes. Also, increasing the number of connections is no solution. So, basically: how do I kill connections which have been sleeping for more than, say, 1 minute?
The best solution would be to find out why the requests of this 1 user can eat up all the database connections and basically bring down the whole application, and how to prevent this.
Any help greatly appreciated.
You can decrease the wait_timeout variable of the MySQL server. This specifies the number of seconds MySQL waits for activity on a non-interactive connection before it aborts the connection. The default value is 28800 seconds, which seems quite high. You can set this dynamically by executing SET GLOBAL wait_timeout = X; once.
You can still increase it again for cron jobs. Just execute the query SET SESSION wait_timeout = 28800; at the beginning of the cron job. This only affects the current connection.
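For example, at the top of the cron job (a sketch only; the DSN and credentials are placeholders):
$pdo = new PDO('mysql:host=127.0.0.1;dbname=mydb', 'user', 'pass');
$pdo->exec('SET SESSION wait_timeout = 28800'); // only this connection keeps the long timeout
// ... long-running cron work here ...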
Please note that this might cause problems too if you set it too low, although I do not see many potential problems. Most scripts should finish in less than a second, so setting wait_timeout=5 should cause no harm…
I'm having a problem that I hope someone can help me out with.
Currently, every now and again we receive an error when our scripts (Java and PHP) try to connect to the localhost MySQL database.
Host 'myhost' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'.
This issue appears to mainly occur in the early hours of the morning. After a lot of searching to figure out why this may be occurring, I have finally come to the conclusion that it may be due to the fact that our hosting company runs their backup processes around this time. My theory is that during this backup process (this is also our busiest period) we end up using up all our connections, and so this error occurs.
I have talked to our hosts about changing the times these backups occur, but they have stated that this is not possible and that these are simply the times the backups must start so that they are finished in time for the day (even though we have informed them that our critical period is precisely when the backups occur).
The things I have connecting to the server are:
PHP website
PHP files run using cron jobs
A couple of Java applications that run as socket listeners, listening for incoming port connections and using the MySQL database to check user credentials and outstanding messages.
We typically have anywhere from 300 to 600 socket connections open at any one time, and the average activity on these is about 1-3 requests per second.
I have also installed monit and munin with some MySQL plugins on the server in the hope they may help auto-resolve this issue; however, they do not seem to resolve it.
My questions are:
Is there something I can do to automatically poll the MySQL database, so that if this occurs I can automatically flush the hosts to clear the block?
Is this potentially even related to the server backup? It seems a coincidence that it happens 95% of the time during the period the backups occur.
Any other ideas that may help? Links to other websites, or questions I could put to our host to help out, are welcome.
We are currently running on a PHP Version 5.2.6-1+lenny9 server with Apache.
If any more information is required to help, please let me know. Thanks.
UPDATE:
I am operating on a shared virtual host and am pretty sure I close my website connections, as I have this code in my database class:
function __destruct() {
    @mysql_close($this->link);
}
I'm pretty sure I'm not using persistent connections in my PHP script, as I connect to the db with the @mysql_connect command.
UPDATE:
So I changed the max_connections limit from 100 to 200 and I changed the mysql.allow_persistent setting from On to Off in php.ini. Now, for two nights running, the server has gone down, and mainly the connection to the MySQL database. I have 1GB of RAM on the server but it never seems to get close to that. Also, looking at my munin logs, the connections never seem to hit the 200 mark, and yet I get errors in my log files something like:
SQLException: Too many connections
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
SQLException: null, message from server: "Can't create a new thread (errno 12); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug."
SQLState :: SQLException: HY000, VendorError :: SQLException: 1135
We've had a similar problem with our large ecommerce installation using MySQL as a backend. I'd suggest you alter the "max_connections" setting of the MySQL instance, then (if necessary) alter the number of file descriptors using "ulimit" before starting MySQL (we use "ulimit -n 32768" in /etc/init.d/mysql).
It's been suggested I post an answer to this question, although I never really got it sorted.
In the end I implemented a Java connection-pooling class, which enabled me to share connections whilst maintaining an upper limit on the maximum number of connections I wanted. It was also suggested I increase the RAM and the maximum number of connections. I did both of these things, although they were just band-aids for the problem. We also ended up moving hosting providers, as the ones we were with were not very co-operative.
After these minor implementations I haven't noticed this issue occur for at least 8 months which is good enough for me.
Other suggestions over time have been to also implement a thread-pooling facility; however, current demand does not require this.
I have a long-running script that dies out for no reason. It's supposed to run for over 8 hours, but dies out after an hour or two with no errors, nothing. I tried running it via the CLI and via HTTP, no difference.
I have the following parameters set:
set_time_limit(0);
ini_set('memory_limit', '1024M');
I've been monitoring the memory usage, and it doesn't go over 200M.
Is there anything else that I'm missing? Why would it die out?
One possible explanation could be that the PHP garbage collector is interfering with the script. That could be why you're seeing random die offs. When the garbage collector is turned on, the cycle-finding algorithm is executed whenever the root buffer runs full.
The PHP manual states:
The rationale behind the ability to turn the mechanism on and off, and to initiate cycle collection yourself, is that some parts of your application could be highly time-sensitive.
You could try disabling the PHP garbage collector using gc_disable. The manual recommends you call gc_collect_cycles right before disabling to free the buffer.
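A sketch of what that looks like at the top of the long-running script:
gc_collect_cycles(); // free whatever is already sitting in the root buffer
gc_disable();        // stop the cycle collector from running mid-script
// ... the long-running work ...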
Another explanation could be the code itself. An 8 hour script is a long script and if it's complex, it could easily be hitting a snag that causes the script to exit. I think for your troubleshooting now, you should definitely turn error reporting to report everything using error_reporting(-1);.
Also, if your script is communicating with other services, say a database for example, it's quite possible that could be the issue. If the database server runs out of memory or times out, it could be causing your script to hang and die. If this is the case, you could split up your connections to the database and connect/disconnect at specific timed intervals during the script to keep that connection fresh. The same mentality could be applied to any other service you may be communicating with.
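A rough sketch of that reconnect-at-intervals idea, assuming PDO; the 15-minute interval, the DSN/credentials and the $workItems loop are arbitrary placeholders:
$dsn = 'mysql:host=127.0.0.1;dbname=mydb'; // placeholder
$pdo = new PDO($dsn, 'user', 'pass');
$lastConnect = time();

foreach ($workItems as $item) {
    if (time() - $lastConnect > 900) { // refresh the connection every 15 minutes
        $pdo = null;                   // drop the old handle
        $pdo = new PDO($dsn, 'user', 'pass');
        $lastConnect = time();
    }
    // ... process $item using $pdo ...
}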
You could, for testing purposes only, purposely make your script write to a log file on each successful query, making sure to include a timestamp for when the query begins and another for when it ends. You might not get any errors, but it may help you determine if there is a specific problem query, or if a query is hanging for longer than usual. You could also check that your MySQL connection is still valid and print something out to inform you of that as well.
An example log file:
[START 2011/01/21 13:12:23] MySQL Connection: TRUE [END 2011/01/21 13:12:28] Query took 5s
[START 2011/01/21 13:12:28] MySQL Connection: TRUE [END 2011/01/21 13:12:37] Query took 9s
[START 2011/01/21 13:12:39] MySQL Connection: TRUE [END 2011/01/21 13:12:51] Query took 12s
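A sketch of a wrapper that would produce log lines like the ones above (the function name and log format are made up for the example, and SELECT 1 is used as a crude "is the connection alive" probe):
function runLoggedQuery(PDO $pdo, $sql, $logFile)
{
    $start = time();
    $alive = ($pdo->query('SELECT 1') !== false); // crude connection check
    $pdo->query($sql);
    $end = time();
    $line = sprintf(
        "[START %s] MySQL Connection: %s [END %s] Query took %ds\n",
        date('Y/m/d H:i:s', $start),
        $alive ? 'TRUE' : 'FALSE',
        date('Y/m/d H:i:s', $end),
        $end - $start
    );
    file_put_contents($logFile, $line, FILE_APPEND);
}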
It's probably something related to the code.
I have scripts running weeks and months with no trouble.
Your database connection might time out and output an error.
It's also possible that you run out of file descriptors if you open connections or files, or that your shared memory region is full. It depends on the code.
Check the system logs to make sure SELinux is not messing with you; in that case your script would not print any error. From the system logs you can also see whether you have crossed user limits on any system resources (see ulimit).
It's really strange that you run it in the CLI and get nothing, not even a segfault. Did you check both stdout and stderr?
Maybe it segfaults.
Try launching your script this way:
$ ulimit -c unlimited
$ php script.php
and see if you find a core dump file (core.xxxx) in the working directory when it dies.
Apache also has its own script timeout; you will need to tweak the httpd.conf file.
Is there a way to force MySQL from PHP to kill a query if it didn't return within a certain time frame?
I sometimes see expensive queries running for hours (obviously by this time the HTTP connection has timed out or the user has left). Once enough such queries accumulate, they start to affect the overall performance badly.
Long-running queries are a sign of poor design. It is best to inspect the queries in question and work out how they could be optimised; killing them would just be ignoring the problem.
If you still want this, you could use the SHOW PROCESSLIST command to get all running processes, and then use KILL x to kill a client connection. For this to work you have to check it from another PHP script, since multithreading is not yet possible. Also, it is not advisable to give the users utilised by PHP the privileges to mess with the server like this.
Warning: this kind of daemon behaviour should now be implemented with something like upstart.
You want to create daemons for this sort of thing, but you could also use cron.
Just have a script that looks, at set intervals, for queries running longer than xyz time and kills them.
-- this query generates a KILL statement for every process that has been executing for more than 10 seconds
SELECT CONCAT('KILL ', id, ';')
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE state = "executing" AND `time` >= 10;
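Note that the SELECT above only builds the KILL statements; something still has to run them. A cron-able PHP sketch of that (credentials are placeholders, and the account needs the PROCESS privilege plus the right to kill other users' threads):
$pdo = new PDO('mysql:host=127.0.0.1', 'admin_user', 'pass'); // placeholder credentials
$ids = $pdo->query(
    'SELECT id FROM INFORMATION_SCHEMA.PROCESSLIST
     WHERE state = "executing" AND `time` >= 10'
)->fetchAll(PDO::FETCH_COLUMN);

foreach ($ids as $id) {
    $pdo->exec('KILL ' . (int) $id); // terminates the connection running the long query
}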
However, if queries are running for such a long time, they really must be optimised.
On the other hand, you may be trying to administer a shared server with some rogue users. In that scenario you should specify in the terms of service that scripts will be monitored and disabled, and that is exactly what should be done to the offending ones.
I'm trying to debug an error I got on a production server. Sometimes MySQL gives up and my web app can't connect to the database (I'm getting the "Too many connections" error). The server has a few thousand visitors a day, and during the night I run a few cron jobs which sometimes do some heavy MySQL work (looping through 50 000 rows, inserting rows and deleting duplicates, etc.).
The server runs both Apache and MySQL on the same machine.
MySQL has a pretty standard configuration (default max connections).
The web app is using PHP
How do I debug this issue? Which log files should I read? How do I find the "evil" script? The strange thing is that if I restart the MySQL server, it starts working again.
Edit:
Different apps/scripts are using different connectors to the database (mostly mysqli, but also Zend_Db).
First, use innotop (Google for it) to monitor your connections. It's mostly geared towards InnoDB statistics, but it can be set to show all connections, including those not in a transaction.
Otherwise, the following are helpful: use persistent connections / connection pools in your web apps, and increase your max connections.
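For example, with PDO a persistent connection is just a constructor flag (DSN/credentials are placeholders; mysqli and Zend_Db have their own ways of doing the same thing):
$pdo = new PDO('mysql:host=127.0.0.1;dbname=mydb', 'user', 'pass', array(
    PDO::ATTR_PERSISTENT => true, // reuse the underlying MySQL connection across requests
));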
It's not necessarily a long-running SQL query.
If you open a connection at the start of a page, it won't be released until the PHP script terminates - even if there is no query running.
You should add some stats to your pages to find out the slowest ones, and the most-hit ones. Closing the connection early would help, if possible.
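A sketch of both ideas on a single page, assuming a PDO handle in $pdo; the 1-second threshold and the use of error_log() are arbitrary choices for the example:
$pageStart = microtime(true);

// ... run the page's queries ...

$pdo = null; // release the MySQL connection before the slow, non-database part of the page

// ... render templates, call external APIs, etc. ...

$elapsed = microtime(true) - $pageStart;
if ($elapsed > 1.0) { // log only the slow pages
    error_log(sprintf('%s took %.3fs', $_SERVER['REQUEST_URI'], $elapsed));
}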
Try using persistent connections (mysql_pconnect); it will help reduce the server load caused by constantly opening and closing MySQL connections.
The starting point is probably to use mysqladmin processlist to get a list of the processes on the mysql server. The next step depends on what you find.