I'm tuning my project's MySQL database and, like many people, I was advised to reduce
wait_timeout
, but it is unclear to me whether this session variable includes query execution time or excludes it. I have set it to 5 seconds, taking into account that I sometimes have queries which run for 3-5 seconds (yes, that is slow; there are only a few of them, but they still exist), so MySQL connections have at least 1-2 seconds to be picked up by PHP scripts before they are closed by MySQL.
In the MySQL docs there is no clear explanation of when that timeout starts counting and whether it includes execution time. Perhaps your experience may help. Thanks.
Does the wait_timeout session variable exclude query execution time?
Yes, it excludes query time.
max_execution_time controls how long the server will keep a long-running query alive before stopping it.
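If you do need a cap on query time, MySQL 5.7.8+ exposes this as a session/global variable measured in milliseconds, and it applies only to SELECT statements; a sketch:

```sql
-- Cap SELECTs in this session at 5 seconds (the value is in milliseconds)
SET SESSION max_execution_time = 5000;
```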
Do you use php connection pools? If so, 5 seconds is an extremely short wait_timeout. Make it longer; 60 seconds might be good. Why? The whole point of connection pools is to hold some idle connections open from php to MySQL, so php can handle requests from users without the overhead of opening connections.
Here's how it works.
1. php sits there listening for incoming requests.
2. A request arrives, and the php script starts running.
3. The php script asks for a database connection.
4. php (the mysqli or PDO module) looks to see whether it has an idle connection waiting in the connection pool. If so, it passes the connection to the php script to use.
5. If there's no idle connection, php creates one and passes it to the php script to use. This connection creation starts the wait_timeout countdown.
6. The php script uses the connection for a query. This stops the wait_timeout countdown and starts the max_execution_time countdown.
7. The query completes. This stops the max_execution_time countdown and restarts the wait_timeout countdown. Repeat steps 6 and 7 as often as needed.
8. The php script releases the connection, and php inserts it into the connection pool. Go back to step 1. The wait_timeout countdown keeps running for that connection while it sits in the pool.
9. If the connection's wait_timeout expires, php removes it from the connection pool.
If step 9 happens a lot, then step 5 must also happen a lot, and php will respond more slowly to requests. You can make step 9 happen less often by increasing wait_timeout.
(Note: this is simplified: there's also provision for a maximum number of connections in the connection pool.)
MySQL also has an interactive_timeout variable. It's like wait_timeout but used for interactive sessions via the mysql command line program.
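Both can be inspected and changed at runtime; a quick sketch (60 is just an example value):

```sql
-- See the timeout values currently in effect
SHOW VARIABLES LIKE '%_timeout';

-- Applies to new non-interactive (application) connections
SET GLOBAL wait_timeout = 60;

-- Applies to new interactive sessions (mysql command line)
SET GLOBAL interactive_timeout = 28800;
```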
What happens when a web user makes a request and then abandons it before completion? For example, a user might stop waiting for a report and go to another page. In some cases, the host language processor detects the closing of the user connection, kills the MySQL query, and returns the connection to the pool. In other cases the query either completes or hits the max_execution_time barrier; then the connection is returned to the pool. In all cases the wait_timeout countdown is only active while a connection is open but has no query active on it.
A MySQL server timeout can occur for many reasons, but happens most often when a command is sent to MySQL over a closed connection. The connection could have been closed by the MySQL server because of an idle-timeout; however, in most cases it is caused by either an application bug, a network timeout issue (on a firewall, router, etc.), or due to the MySQL server restarting.
It is clear from the documentation that it does not include the query execution time. It is basically the maximum idle time allowed between two activities. If a connection crosses that limit, the server closes it automatically.
The number of seconds the server waits for activity on a
noninteractive connection before closing it.
From https://www.digitalocean.com/community/questions/how-to-set-no-timeout-to-mysql :
Configure the wait_timeout to be slightly longer than the application connection pool's expected connection lifetime. This is a good safety check.
Consider changing the wait_timeout value online. This does not require a MySQL restart, and wait_timeout can be adjusted in the running server without incurring downtime. You would issue SET GLOBAL wait_timeout=60, and any new sessions created would inherit this value. Be sure to preserve the setting in my.cnf. Any existing connections will still hit the old value of wait_timeout if the application abandoned the connection. If you do have reporting jobs that do longer local processing while in a transaction, you might consider having such jobs issue SET SESSION wait_timeout=3600 upon connecting.
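The advice above, sketched out (60 and 3600 are the example values from the text):

```sql
-- Online change: new sessions inherit it; existing sessions keep the old value
SET GLOBAL wait_timeout = 60;

-- Issued by a long-running reporting job right after connecting
SET SESSION wait_timeout = 3600;
```

To survive a restart, also persist the setting in my.cnf under the [mysqld] section (wait_timeout = 60).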
From the reference manual,
Time in seconds that the server waits for a connection to become active before closing it.
In elementary terms: how many SECONDS will you tolerate someone reserving your resources and doing nothing? It is likely your 'think time' before you decide which action to take is frequently more than 5 seconds. Be liberal. For a web application, some processes sit idle for more than 5 seconds between queries, and you are causing their connections to be terminated at 5 seconds. The default is 28800 seconds, which is unreasonably high. 60 seconds could be a reasonable time to expect any web-based process to complete. If your application is also used in a traditional workplace environment, a 15-minute break is not unreasonable. Be liberal to avoid 'bad feedback'.
Related
I have a script which attempts to stop long-running queries on a MySQL server. The logic: when the server starts to slow down for whatever reason, it accumulates a rush of queries as each user refreshes his page, each connection hanging in a queue, not being stopped by PHP's time limit, and preventing new connections. In addition, a mistaken query might use a lot of resources.
I have encountered a strange situation recently with this system. We have two cron scripts running constantly. Normally, their connections don't show a Time greater than 1 in "SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST". For some reason, the other day, these connections' Time rose to 50+ seconds, but they did not have a query attached to them. I was not able to see this live, but it was recorded in the log clearly enough to be traced back to these processes.
My question is: why would these connections suddenly increase in duration? Especially given that they did not have a query, but were in Sleep mode. (As proof, my logs showed Time=78, State='', Info=0 - not sure why 0.) In PHP, I am using PDO with the standard options, except an ATTR_TIMEOUT of 30 for CLI scripts. Also, slowness on the site was reported at the time of these problem connections.
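When chasing connections like these, a snapshot query along these lines helps (the 30-second threshold is just an example):

```sql
-- List connections that are idle ("Sleep") and have been so for a while
SELECT ID, USER, HOST, DB, COMMAND, TIME, STATE, INFO
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND = 'Sleep' AND TIME > 30
ORDER BY TIME DESC;
```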
I'm having trouble investigating an issue with many sleeping MySQL connections.
Once every one or two days I notice that all (151) MySQL connections
are taken, and all of them seem to be sleeping.
I investigated this, and one of the most reasonable explanations is that the PHP script was just killed, leaving a MySQL connection behind. We log visits at the beginning of the request, and update that log when the request finishes, so we can tell that indeed some requests do start, but don't finish, which indicates that the script was indeed killed somehow.
Now, the worrying thing is, that this only happens for 1 specific user, and only on 1 specific page. The page works for everyone else, and when I log in as this user on the Production environment, and perform the exact same action, everything works fine.
Now, I have two questions:
1. I'd like to find out why the PHP script is killed. Could this possibly have anything to do with the client? Can a client do 'something' to end the request and kill the PHP script? If so, why don't I see any evidence of that in the Apache logs? Or maybe I don't know what to look for? How do I find out whether the script was indeed killed, and what caused it?
2. How do I prevent this? Can I somehow set a limit to the number of MySQL connections per PHP session? Or can I somehow detect long-running and sleeping MySQL connections and kill them? It isn't an option for me to set the connection timeout to a shorter time, because there are processes which run considerably longer, and the 151 connections are used up in less than 2 minutes. Also, increasing the number of connections is no solution. So, basically: how do I kill processes which have been sleeping for more than, say, 1 minute?
The best solution would be to find out why the requests of 1 user can eat up all the database connections and basically bring down the whole application, and how to prevent this.
Any help greatly appreciated.
You can decrease the wait_timeout variable of the MySQL server. This specifies the number of seconds MySQL waits for activity on a non-interactive connection before it aborts the connection. The default value is 28800 seconds, which seems quite high. You can set this dynamically by executing SET GLOBAL wait_timeout = X; once.
You can still increase it for cronjobs again. Just execute the query SET SESSION wait_timeout = 28800; at the beginning of the cronjob. This only affects the current connection.
Please note that this might cause problems too if you set it too low, although I do not foresee many. Most scripts should finish in less than a second, so setting wait_timeout=5 should cause no harm…
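If you prefer to reap sleepers explicitly rather than rely on wait_timeout, one common sketch (requires a privileged account; the 60-second threshold is an example):

```sql
-- Generate KILL statements for connections idle longer than 60 seconds;
-- review the output, then run the generated statements
SELECT CONCAT('KILL ', ID, ';') AS kill_stmt
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE COMMAND = 'Sleep' AND TIME > 60;
```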
Possible Duplicate:
MySQL server has gone away - in exactly 60 seconds
We're using the mysql_connect method in PHP to create database handles for a daemon. The problem is that a connection might sit unused for more than 8 hours (sometimes even for weeks).
We're running into issues where MySQL will end the session because the wait_timeout is reached.
http://dev.mysql.com/doc/refman/5.0/en/gone-away.html
We don't want to increase this value for ALL connections. Is there a way to increase the timeout for that handle only through PHP? Through MySQL?
It's not good to hang onto DB connections for long periods because the DB only provides a fixed number of connections at any one time; if you're using one up for ages it means your DB has less capacity to deal with other requests, even if you're not actually doing anything with that connection.
I suggest dropping the connection if the program is finished using it for the time being, and re-connect when the time comes to do more DB work.
In addition, this solution will protect your program from possible database downtime, ie if you need to reboot your DB server (it happens, even in the best supported network). If you keep the connection alive (ie a DB ping as per the other answers), then an event like that will leave you with exactly the same problem you have now. With a properly managed connection that is dropped when not needed, you can safely leave your daemon running even if you have planned downtime on your DB; as long as it remains idle for the duration, it needn't be any the wiser.
(As an aside, I'd also question the wisdom of writing a PHP program that runs continuously; PHP is designed for short-duration web requests. It may be capable of running long-term daemon programs, but there are better tools for the job.)
For this problem, the best way is to use mysql_ping().
See the PHP manual page for mysql_ping().
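A minimal sketch of that idea, assuming a long-lived handle in $link (credentials are placeholders; the mysql_* API matches the question, though note it was removed in PHP 7):

```php
<?php
// Before reusing a long-lived handle, ping it and reconnect if the
// server has closed the connection (e.g. wait_timeout expired).
if (!mysql_ping($link)) {
    mysql_close($link);
    $link = mysql_connect('localhost', 'user', 'password');
    mysql_select_db('mydb', $link);
}
```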
I am using MySQL and PHP with 2 application servers and 1 database server.
With the increase in the number of users (around 1000 by now), I'm getting the following error:
SQLSTATE[08004] [1040] Too many connections
The parameter max_connections is set to 1000 in my.cnf and mysql.max_persistent is set to -1 in php.ini.
There are at most 1500 apache processes running at a time since the MaxClients apache parameter is equal to 750 and we have 2 application servers.
Should I raise the max_connections to 1500 as indicated here?
Or should I set mysql.max_persistent to 750 (we use PDO with persistent connections for performance reasons since the database server is not the same as the application servers)?
Or should I try something else?
Thanks in advance!
I think your connections aren't closing fast enough and they stack up until the default timeout is reached. I had the same problem, and with wait_timeout I sorted things out.
You can try to set this up in my.cnf (note: the old set-variable = syntax is deprecated, and comments in my.cnf start with #):

max_connections = 1000        # max simultaneous connections
max_user_connections = 100    # max simultaneous connections per account
wait_timeout = 60             # seconds a connection may sit idle

MySQL will then close any connection that has been idle for 60 seconds.
I think you should check the PHP code to see whether you can get rid of the persistent connections.
The problem with persistent connections is that the PHP instance keeps them open even after the script exits, until the data has been sent to the client and the PHP instance is freed for the next request.
Another problem with persistent connections is that some PHP code might leave the socket with different settings than at startup, with different locales or with temporary tables.
If you can rewrite the code so that there is only one mysql_connect (or a few) per request, and the database handle is passed to different parts of the code or kept in a GLOBAL, the performance impact of losing persistent connections is negligible.
And, of course, there's little harm in doubling max_connections. It's not very useful with PHP anyway, as a PHP/Apache child exits quite often and closes its handles. max_connections matters more in other environments.
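One way to express that rewrite, as a sketch (DSN and credentials are placeholders):

```php
<?php
// Lazily create one PDO handle per request and share it everywhere,
// instead of using persistent connections.
function db() {
    static $pdo = null;
    if ($pdo === null) {
        $pdo = new PDO('mysql:host=dbhost;dbname=app', 'user', 'password');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    }
    return $pdo;
}
```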
Trying to separate out my LAMP application into two servers, one for php and one for mysql. So far the application connects locally through a file socket and works fine.
I'm worried about the number of connections I can establish if it goes over the network. I have been testing TCP connections on Unix for benchmarking purposes, and I know that you cannot exceed a certain number of connections per second, otherwise everything halts due to a lack of resources (be it sockets, file handles, or whatever). I also understand that PHP does not implement connection pooling, so for each page load a new connection over the network must be made. I also looked into pconnect for PHP, and it seems to bring more problems.
I know this is a very very common setup (php+mysql), can anyone provide some typical usage and statistics they get out of their servers? Thanks!
The problem is not related to running out of connections allowed by MySQL. The main problem is that Unix cannot create and tear down TCP connections very quickly. Sockets end up in TIME_WAIT, and you have to wait for a period before more sockets are freed up to connect again. These two screenshots clearly show the pattern: MySQL works up to a certain point and then pauses because the web server ran out of sockets; after a certain amount of time passes, the web server is able to make new connections again.
Screenshot 1: http://img35.imageshack.us/img35/3809/picture4k.png
Screenshot 2: http://img35.imageshack.us/img35/4580/picture2uyw.png
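To put a rough number on that ceiling (a back-of-the-envelope sketch; the ephemeral port range and TIME_WAIT interval below are common Linux defaults, and both are tunable):

```python
# Ephemeral ports a client can use for outbound connections
# (Linux default range 32768-60999)
ephemeral_ports = 60999 - 32768 + 1      # 28232 ports

# Seconds a closed socket lingers in TIME_WAIT (a common default)
time_wait_seconds = 60

# Sustainable rate of new connections before ports run out
max_new_conn_per_sec = ephemeral_ports // time_wait_seconds
print(max_new_conn_per_sec)  # → 470
```

At only a few hundred new connections per second, a connection-per-page PHP setup can hit this wall quickly, which is consistent with the stall-and-recover pattern in the screenshots.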
I think the limit is 65535 ports, so you'd have to have 65535 connections open at the same time to hit it, since a regular MySQL connection closes automatically.
mysql_connect()
Note: The link to the server will be closed as soon as the execution of the script ends, unless it's closed earlier by explicitly calling mysql_close().
But if you're using a persistent mysql connection, then you can run into trouble.
Using persistent connections can require a bit of tuning of your Apache and MySQL configurations to ensure that you do not exceed the number of connections allowed by MySQL.
Each MySQL connection actually uses several megabytes of RAM for various buffers, and takes a while to set up, which is why MySQL is limited to 100 concurrent open connections by default. You can raise that limit, but it's better to spend your time trying to limit concurrent connections, via various methods.
Beware of raising the connection limit too high, as you can run out of memory (which, I believe, crashes mysql), or you may push important things out of memory. e.g. MySQL's performance is highly dependent on the OS automatically caching the data it reads from disk in memory; if you set your connection limit too high, you'll be contending for memory with the cache.
If you don't raise your connection limit, you'll run out of connections long before you run out of sockets/file handles/etc. If you do increase your connection limit, you'll run out of RAM long before you run out of sockets/file handles/etc.
Regarding limiting concurrent connections:
Use a connection pooling solution. You're right, there isn't one built in to PHP, but there are plenty of standalone ones out there to choose from. This saves expensive connection setup/tear down time.
Only open database connections when you absolutely need them. In my current project, we automatically open a database connection when the first query is issued, and not a moment before; we also release the connection after we've done all our database work, but before the page's HTML is actually generated. The shorter the period of time you hold connections open, the fewer connections will be open simultaneously.
Cache what you can in a lighter-weight solution like memcached. My current project temporarily caches pages displayed to anonymous users (since every anonymous user gets the same HTML, in the end -- why bother running the same database queries all over again a few scant milliseconds later?), meaning no database connection is necessary at all. This is especially useful for bursts of anonymous traffic, like a front-page digg.