MySQL wait_timeout and WordPress - php

I'm troubleshooting a series of recurring errors: WordPress database error MySQL server has gone away for query ...
I think I've found a solution here, but it's a few years old, and I want to better understand the MySQL wait_timeout and its relationship to WordPress before I start monkeying with core files or reconfiguring my server. (I'm on a virtual dedicated server, so I have the option to change the wait_timeout on the server.)
I checked by running SHOW VARIABLES; from phpMyAdmin and wait_timeout is currently set to 35. That seems low to me, but I don't fully understand what it does. I'm considering changing it to 600.
My main question is whether this is a responsible thing to do or not. But I think that broader question can be divided into smaller parts:
1. Do I have the option to override this setting with PHP (WordPress)?
2. What is the optimal setting for a medium-to-large WordPress site?
3. Are there any WordPress configuration options or filters that I could use to change the setting without modifying core files?
Thanks.

Basically, wait_timeout is how long MySQL will keep an idle non-interactive connection open before closing it.
So increasing it to 600 seconds could solve your problem. However, if you set it to 600 seconds and lots of people hit a slow page on your site at the same time, you can get to a point where MySQL starts refusing connections; Apache will then queue requests until it, too, starts refusing them, and your server takes a dive.
My suggestion would be to find out why a single request is taking over 35 seconds in the first place, because to be honest, that seems a rather long load time for a single page of a blog.
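If you want to experiment before touching the server at all, you can raise the timeout for WordPress's own connection only. A minimal sketch as a must-use plugin (the file name is an invention; add_action and $wpdb are standard WordPress):

<?php
// wp-content/mu-plugins/raise-wait-timeout.php (hypothetical file name)
// Raise wait_timeout for this request's database session only;
// the server-wide default stays untouched, and no core files change.
add_action( 'init', function () {
    global $wpdb;
    $wpdb->query( 'SET SESSION wait_timeout = 600' );
} );

Because it is a SESSION variable, this also answers the "override it from PHP" part of the question without reconfiguring MySQL.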

Related

Error "too many connections" on CodeIgniter website

I've already read all the questions / answers about this subject here on Stack Overflow, but unfortunately none have resolved my problem.
In the last few days the MySQL error "too many connections" keeps showing up in the website logs and hangs the entire website for every client. In fact, it hangs all the websites on the server.
So here are my questions / remarks:
- There are about 50 different client databases, besides 2 which are common to all clients.
- pconnect is already = FALSE for all connections (see the sketch after this list for the relevant settings).
- In php.ini the variable "allowpersistent" is ON. Does this make the MySQL connections persistent even if I write pconnect = FALSE? (I can't change the "allowpersistent" variable; I would have to ask the hosting company.)
- There are 3 files that load databases: one loads the client's DB and the other two load databases common to all clients. They are called in the constructor of every model, but CI supposedly should close the MySQL connections after it's done with them AND ignore any "load->database" already loaded.
- "db->close" apparently does nothing, because this->db->database keeps its value after I close it :P
- Threads_connected is up to 1000 as I write this and the website is down :(
- The MySQL configuration has max_connections = 1000; can it be increased further? I see no change in free memory, so what could happen?
- Should I change to PDO? I'm using dbdriver "mysqli".
- Should I ask the hosting company to lower the MySQL variable 'wait_timeout', so as to close DB connections more quickly?
- Should I update CodeIgniter? I have version 3.1.4 and it's now at 3.1.9.
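For reference, this is roughly what the relevant parts of the setup look like (a sketch with only the keys that matter here; 'default' is CodeIgniter's standard connection group):

<?php
// application/config/database.php (CodeIgniter 3), relevant keys only
$db['default']['dbdriver'] = 'mysqli';
$db['default']['pconnect'] = FALSE; // persistent connections disabled

// ...and in a model/controller, releasing the connection explicitly
// once the work is done, instead of waiting for script shutdown:
$this->db->close();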
Many thanks for your help!
In our case the solution was lowering the MySQL variable "wait_timeout" from 8 hours (the default, wtf!?) to 180 seconds. And it can still be lowered further if needed. This had to be done by the hosting company, as we do not have root access to our server.
All the other solutions I mentioned in the question were not working, like "pconnect = false" and "db->close".
"Threads_connected" are now always under 100 or 200, instead of the almost 1000 from before this fix.
My team wrestled with this problem for two days, and there's lots of people on the Web asking for solutions but without any (working) answers.
Cheers :)
I also encountered the same problem. When I checked the current number of active connections in MySQL with this query:
SHOW PROCESSLIST;
there were a great many connections in sleep mode. After searching around, I found out that:
When a database connection is created, a session is also created on the database server at the same time, but if that connection and session are not closed properly, the connection lingers in sleep mode until the wait timeout runs out.
To resolve this problem, I did as @Pedro suggested and changed wait_timeout to 180 by running this command in MySQL:
SET GLOBAL wait_timeout = 180;
With this method, you do not need to restart the MySQL service for the change to take effect; note, though, that SET GLOBAL only applies to connections opened after the change.
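A value set with SET GLOBAL does revert when the server restarts, so to make the change permanent you would also put it in the MySQL configuration file. A sketch (the file path varies by distribution):

# e.g. /etc/mysql/my.cnf
[mysqld]
wait_timeout        = 180
interactive_timeout = 180  # same idea for interactive clients such as the mysql CLI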
You can read more here: https://www.znetlive.com/blog/what-is-sleep-query-in-mysql-database/

MySQL number of threads_connected slow down my website

I have a store running on PrestaShop 1.5.4, and I keep having a problem with the site's behaviour.
Every time I check this number:
user@server:~$ mysql -se "show status like '%threads_connected%'"
if it gets above 25, my site becomes really slow: opening a page takes forever, and page load can get as high as 1-2 minutes.
The only (temporary) solution is for me to restart the Apache service.
I am pretty sure 25 is quite a low number.
In case you need to know, I don't have direct access to my.ini or to the MySQL server configuration; my database is hosted on a shared DBaaS.
I do, however, have full access to my web server (apache2.conf, php.ini, www.example.com.conf).
A little info that might help:
-se "show variables like '%max%'"
Outputs:
I guess my question is: how can I improve performance? Can I improve this by limiting the number of "threads_connected"?
Footnote:
I am aware that there is probably a bad query somewhere in the code; however, at the moment I need a quick fix, as reviewing all the queries will take some time.
Can I improve this by limiting the number of "threads_connected"?
Absolutely not.
All you could do by limiting the number of connections would be to cause errors for users of your site, because their particular Apache process wouldn't be able to connect to the database.
The problem is not the number of threads connected. That is a symptom of the real problem, which is that you have one or more queries that perform poorly, or that you do not have enough memory for Apache to scale up when traffic gets heavy, forcing your machine into heavy swapping and thereby slowing Apache down to the point that it keeps connections open longer.
I need a quick solution for this
Sorry... but there is no quick solution other than to find the actual problem and fix it.
More useful than the number of connections is what those connections are doing right now.
SHOW FULL PROCESSLIST; in MySQL will answer that question. If they are just sleeping, they aren't hurting anything on the MySQL side, and you may want to limit the number of Apache processes; but if that is a side effect of the level of site traffic, then your server may actually be too small, or you may need to disable HTTP keepalive, or tweak its timeout, if browser connections are holding open idle Apache children and consuming memory.
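If idle keepalive connections turn out to be part of the problem, the relevant knobs live in apache2.conf. A sketch of the two options just mentioned (the values are illustrative, not recommendations):

# apache2.conf: either disable keepalive entirely...
KeepAlive Off
# ...or keep it, but release idle children much sooner:
# KeepAlive On
# KeepAliveTimeout 2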

MySQL database needs a flush tables every now and again. Can I script something to resolve this?

I'm having a problem that I hope someone can help me out with.
Currently, every now and again we receive an error when our scripts (Java and PHP) try to connect to the localhost mysql database.
Host 'myhost' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'.
This issue appears to mainly occur in the early hours of the morning. After a lot of searching to figure out why this may be occurring, I have finally come to the conclusion that it may be due to the fact that our hosting company runs their backup processes around this time. My theory is that during this backup process (this is also our busiest period) we end up using up all our connections, and so this error occurs.
I have talked to our hosts about changing the times these backups occur, but they have stated that this is not possible: those are simply the times the backups must start so that they finish in time for the day (even though we have informed them that our critical period is at precisely the times the backups occur).
The things I have connecting to the server are:
PHP website
PHP files run using cron jobs
A couple of Java applications that run as socket listeners, listening for incoming port connections and using the MySQL database to check user credentials and outstanding messages.
We typically have anywhere from 300-600 socket connections open at any one time, with an average activity of about 1-3 requests per second.
I have also installed monit and munin with some MySQL plugins on the server in the hope they might help auto-resolve this issue; however, these do not seem to resolve it.
My questions are:
Is there something I can do to automatically poll the MySQL database so that, if this occurs, I can automatically run flush-hosts to clear the block?
Is this potentially even related to the server backup? It seems a coincidence that it happens 95% of the time during the period the backups occur.
Any other ideas that may help? Links to other websites, or questions I could put to our host?
We are currently running on a PHP Version 5.2.6-1+lenny9 server with Apache.
If any more information is required to help, please let me know. Thanks.
UPDATE:
I am operating on a shared virtual host and am pretty sure I close my website connections, as I have this code in my database class:
function __destruct() {
    @mysql_close($this->link);
}
I'm pretty sure I'm not using persistent connections in my PHP script, as I connect to the db with the @mysql_connect command (not mysql_pconnect).
UPDATE:
So I changed the max_connections limit from 100 to 200, and I changed the mysql.allow_persistent variable from On to Off in php.ini. Now, for two nights running, the server has gone down, mainly the connection to the MySQL database. I have 1GB of RAM on the server, but usage never seems to get close to that. Also, looking at my munin logs, the connections never seem to hit the 200 mark, and yet I get errors in my log files something like:
SQLException: Too many connections
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
SQLException: null, message from server: "Can't create a new thread (errno 12); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug.
SQLState :: SQLException: HY000, VendorError :: SQLException: 1135
We've had a similar problem with our large e-commerce installation using MySQL as a backend. I'd suggest you alter the "max_connections" setting of the MySQL instance, then (if necessary) alter the number of file descriptors using "ulimit" before starting MySQL (we use "ulimit -n 32768" in /etc/init.d/mysql).
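For example (the number is illustrative; every allowed connection costs memory, so size it to your RAM):

SET GLOBAL max_connections = 300;  -- takes effect immediately, reverts on restart
-- to persist it, also set max_connections = 300 under [mysqld] in my.cnf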
It was suggested that I post an answer to this question, although I never really got it sorted.
In the end I implemented a Java connection-pooling class, which enabled me to share connections while maintaining an upper limit on the number of connections I wanted. It was also suggested that I increase the RAM and the max connections limit. I did both of these things, although they were just band-aids for the problem. We also ended up moving hosting providers, as the ones we were with were not very cooperative.
After these minor implementations, I haven't noticed this issue occur for at least 8 months, which is good enough for me.
Other suggestions over time have been to also implement a thread-pooling facility; however, current demand does not require it.

How to get php page load time statistics?

Recently we've been having problems with our LAMP setup, and we've started to see the number of MySQL database connections spike up every now and then. We suspect that some MySQL operation is taking longer than usual, and Apache just starts to build a backlog of connections to deal with incoming requests.
The question is: is there a way to get per-page statistics on things like average load time, median load time, and max/min load time for each PHP page (page1.php, page2.php, page3.php, etc.), so that we can narrow down where the problem is? Is there such a thing included as part of Apache? Maybe a separate module?
From the log-format side, you can just log the time taken (%D) in your access logs and, after an incident, sort on time-taken and check the URLs. I'm not aware of any application that checks this out of the box, but a lot of applications can handle Apache's access logs, so chances are some of them can work with it. I seldom look at page-specific logs, only server totals, so I can't help you there.
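A sketch of the Apache side of that (%D is the time taken in microseconds; the format name "timed" is arbitrary):

# in apache2.conf or the vhost configuration
LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
CustomLog /var/log/apache2/access_timed.log timed

After an incident, sorting the log on the last field shows the slowest URLs first.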
If MySQL is busy / the cause:
Close a connection to MySQL as soon as you're done with it, so the connection is released sooner.
Increase the maximum allowed connections if you really need them.
If you still have hanging processes, check the output of SHOW FULL PROCESSLIST to see what queries are being performed.
You can enable the slow_query_log, logging all queries that run longer than a given threshold (fractions of a second are supported in newer versions; older versions only supported whole seconds) or that don't use indexes. The command-line tool mysqldumpslow can then accurately group and count the queries.
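A sketch of turning that on at runtime (the threshold is illustrative, and on older versions long_query_time must be a whole number of seconds):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 0.5;  -- in seconds; fractional on newer versions
SET GLOBAL log_queries_not_using_indexes = 'ON';

Afterwards, something like mysqldumpslow /var/lib/mysql/slow.log on the shell groups and counts the entries (the log's path and name depend on your configuration).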
If you have access to php.ini, you can use Xdebug: http://xdebug.org/

What different settings affect PHP and/or Apache timeouts?

I was asked to help troubleshoot someone's website. It is written in PHP, on a Linux box, using an Apache server and MySQL, and I have never worked with any of these before (except maybe Linux in school).
I got most of the issues fixed (most code is really the same no matter what language it is); however, there is still one page that times out when processing huge files. I'm fairly sure the problem is a timeout somewhere, but I have no idea where all the PHP timeouts would be.
I have adjusted max_execution_time, max_input_time, mysql.connect_timeout, default_socket_timeout, and realpath_cache_ttl in php.ini, but it is still timing out after about 10 minutes. What other settings might exist that I could increase to try and fix this?
As a side note, I'm aware that 10 minutes is generally not desirable when processing a file; however, this section of the site is only used by one person once or twice a week, and she doesn't mind, provided the process finishes as expected (and I really don't want to rewrite someone else's bad code in a language I don't understand, for a process I don't understand).
EDIT: The SQL process finishes in the background; it's just the webpage itself that times out.
Per Frank Farmer's suggestion, I added flush() to the code and it works now. Definitely a browser timeout. Thanks, Frank!
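For anyone who lands here with the same symptom, the pattern is roughly this (a sketch; process_row stands in for whatever the real work is):

<?php
// Long-running processing loop: emit a little output as you go so the
// browser (and any proxy in between) sees a live connection.
foreach ($rows as $row) {
    process_row($row); // hypothetical worker function
    echo ' ';          // a byte of output...
    flush();           // ...pushed to the client immediately
                       // (call ob_flush() first if output buffering is on)
}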
You can use set_time_limit(); if you set it to zero, the script should not time out at all.
This is placed within your script, not in any config file etc...
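For example, at the top of the script doing the heavy processing:

<?php
set_time_limit(0); // 0 = no execution time limit for this script
// ...long-running import/processing code follows...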
Edit: Try changing Apache's timeout settings. In the config, look for the TimeOut directive (it should be the same for Apache 2.x and Apache 1.3.x); once changed, restart Apache and check again.
Edit 3:
Did you go to the link I provided? It lists the default, which is 300 seconds (5 minutes). Also, if the setting IS NOT in the config file, you CAN add it.
According to the docs:
The TimeOut directive currently defines the amount of time Apache will wait for three things:
The total amount of time it takes to receive a GET request.
The amount of time between receipt of TCP packets on a POST or PUT request.
The amount of time between ACKs on transmissions of TCP packets in responses.
So it is possible it doesn't relate, but try it and see.
