My WordPress website got an "error establishing connection to database" message.
My host informed me it was because my "user" had too many database connections open at once. This caused additional connection attempts to fail, and thus the message.
This has been corrected by killing deadlocked database connections. There were a number of connections copying data to temporary tables, but the deadlock was caused by a large set of lookups waiting for one update.
Can someone explain to me how this might have happened, and how to avoid it?
(p.s: that WP installation has over 2000 posts)
One thing I've seen help a great deal with WP and database speed is to clean your database of post and page revisions. WP keeps a full copy of each edit revision, and with 2000 posts, your database could be huge. Run this as an SQL query in phpmyadmin to clear revisions. I've seen databases drop 75% in size and run much faster after clearing revisions. Change the table prefix if you changed it when you installed WP, and run a backup beforehand.
DELETE a,b,c
FROM wp_posts a
LEFT JOIN wp_term_relationships b ON (a.ID = b.object_id)
LEFT JOIN wp_postmeta c ON (a.ID = c.post_id)
WHERE a.post_type = 'revision'
Then optimize tables after you run that query to finish clearing the revisions, either from the dropdown menu in phpmyadmin to optimize the whole database, or by another query just for the posts table:
OPTIMIZE TABLE wp_posts;
Then you can prevent post/page revisions from accumulating again by adding this line to wp-config.php to stop revisions:
define ('WP_POST_REVISIONS', FALSE);
Or this line to select the number of revisions to keep:
define('WP_POST_REVISIONS', 3);
If you have access to your MySQL config file, look into tuning MySQL for better performance with a utility like MySQLTuner (major/MySQLTuner-perl on GitHub).
In a shared hosting environment this behavior will occur sooner or later as your blog starts seeing more traffic - the specifics you mentioned sound like they may be related to poorly-written WordPress plugins (for performance's sake, make sure all your plugins are updated along with the WordPress core).
You might also want to consider WP Super Cache if you haven't already.
There are two options you may want to look at:
PHP Persistent connections
MySQL max_connections setting
Persistent connections try to reuse the same connection to the MySQL server over and over if one is available (the connection is not closed between PHP requests).
The MySQL max_connections setting lets you increase the number of connections the server will accept.
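A minimal sketch of the first option using PDO (the DSN and credentials are placeholders; PDO::ATTR_PERSISTENT plays the same role as mysql_pconnect()):
<?php
// Reuse an already-open connection between requests instead of opening a new
// one each time; the connection is not closed when the script ends.
$db = new PDO(
    'mysql:host=localhost;dbname=wordpress',   // placeholder DSN
    'db_user',
    'db_password',
    array(PDO::ATTR_PERSISTENT => true)
);
Raising max_connections, by contrast, is done on the server side (in the MySQL configuration file or with SET GLOBAL), not from PHP, and needs enough server memory to back the extra connections.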
I'm not familiar with wordpress specifically, so here is my take from a DB perf tuning perspective:
First, I would dig in to understand why the update is taking so long. Perhaps there is a bad query plan which requires tuning your DB's indexing strategy.
If the update cannot be sped up, and you are willing for your lookups to potentially read data that isn't fully committed (which might be ok for a blog, but not an accounting application for example), then you can change the SELECTs to include NOLOCK hints to avoid blocking on the update.
See this SO question for more info
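One caveat for this particular stack: NOLOCK is SQL Server syntax, and MySQL (which WordPress runs on) has no NOLOCK table hint. The closest equivalent is to lower the isolation level for the reading session, roughly like this (a sketch, assuming an existing PDO connection $pdo; the column and table names are standard WordPress ones):
<?php
// Allow dirty reads for this session only, so the SELECT does not block
// behind the long-running UPDATE (the MySQL analogue of a NOLOCK hint).
$pdo->exec("SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
$posts = $pdo->query("SELECT ID, post_title FROM wp_posts WHERE post_status = 'publish'")
             ->fetchAll(PDO::FETCH_ASSOC);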
"Too many connections" occurs when you have too many database connections in use (obviously) - that is, more people running queries on your site at a time than the maximum allowed connections. How many connections does your MySQL server allow?
Are you using mysql_pconnect() or just mysql_connect()? With the former the connection will stay open for longer and you cannot force it to close.
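If you are not sure, you can check both values from PHP (a quick sketch, assuming an existing PDO connection; the same statements work in phpmyadmin):
<?php
// $pdo is an existing PDO connection to the server.
$max = $pdo->query("SHOW VARIABLES LIKE 'max_connections'")->fetch();
$now = $pdo->query("SHOW STATUS LIKE 'Threads_connected'")->fetch();
echo $max['Value'] . " connections allowed, " . $now['Value'] . " currently open";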
I have WordPress with thousands of categories/custom taxonomies and tens of thousands of posts.
I have a hard time keeping it online without caching, because the processor reaches 100% (used by the MySQL server, not PHP).
I have isolated the problem to a MySQL UPDATE:
WordPress database error: [MySQL server has gone away]
UPDATE wphn_options SET option_value = ........... ' WHERE option_name = 'rewrite_rules'
This is executed on every page load. Below is an example of what the option_value looks like (this is not even 1% of the query, just a short preview).
Does anyone know how I can stop this query from executing?
UPDATE `wphn_options` SET `option_value` = 'a:7269:{s:18:\"sitemap_trolio.xml\";s:33:\"index.php?aiosp_sitemap_path=root\";s:29:\"sitemap_trolio_(.+)_(\\d+).xml\";s:71:\"index.php?aiosp_sitemap_path=$matches[1]&aiosp_sitemap_page=$matches[2]\";s:23:\"sitemap_trolio_(.+).xml\";s:40:\"index.php?aiosp_sitemap_path=$matches[1]\";s:34:\"sitemap(-+([a-zA-Z0-9_-]+))?
Reading the content of that update to the options table, you can see it's related to the sitemap of your site. You may have a sitemap plugin. That sitemap plugin may do something on every page load. Try disabling it.
If you have access to phpmyadmin, first make a backup of your installation and database (if you aren't doing so already). Then issue the SQL command OPTIMIZE TABLE wphn_options; and see if it helps. If it does, great. Try optimizing some of the other tables as well. OPTIMIZE TABLE wphn_posts; might be a good one to try.
But look: your WordPress installation is underprovisioned. You need better server resources. You've gone to the trouble of creating tens of thousands of posts; by using such a weak server configuration, you are effectively concealing those posts from your audience, just to save a few coins.
And you're running the risk of corrupting your site by using a weak server. Is this not the very definition of "penny wise, pound foolish"?
Your question is like "My car's battery is low. I want to stop wasting electricity on my brake lights. Please tell me how to cut the wires to the brake lights." With respect, the only rational answer is "Are you crazy? You'll risk smashing your car to avoid fixing your battery? Fix your battery!"
I have found the solution. It seems that because of the large number of posts and categories the query became too large for the server to handle, and the MySQL server dropped the connection to protect itself.
I fixed the issue by adding max_allowed_packet=256M to the MySQL configuration file.
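For reference, that setting lives in the standard [mysqld] section of the MySQL configuration file (restart MySQL after changing it):
[mysqld]
max_allowed_packet = 256M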
Is there any way to configure PDO so that SELECTs are executed on the SLAVE DB server and INSERT/UPDATE/DELETE are executed on the MASTER DB server, or do I need to create a PHP handler to do that?
Situation:
We have Master-Master replication for MySQL. We are going to add two new servers, so it will be Master/Slave - Master/Slave.
I want to create some handling for SELECT queries: SELECTs should be executed on the SLAVE instead of the MASTER, while all UPDATE/INSERT/DELETE queries are executed on the MASTER. Is this possible with some setting?
Thanks!
No, you can't configure PDO or any of PHP's database extensions to do this. That is simply because each PDO (or MySQLi, etc.) instance represents a single connection, to a single server.
So yes, you'll need a handler that is aware of multiple connections to do that. Some popular ORMs and other database-abstraction layers do provide such functionality.
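A minimal sketch of such a handler, assuming two ordinary PDO connections and routing on the first keyword of the SQL (hostnames, credentials and table names are placeholders; real abstraction layers do this more robustly):
<?php
// Hypothetical helper: routes reads to the slave, everything else to the master.
class ReadWriteSplitter
{
    private $master;
    private $slave;

    public function __construct(PDO $master, PDO $slave)
    {
        $this->master = $master;
        $this->slave  = $slave;
    }

    public function query($sql, array $params = array())
    {
        // SELECT (and SHOW/EXPLAIN) go to the slave; INSERT/UPDATE/DELETE go to the master.
        $isRead = preg_match('/^\s*(SELECT|SHOW|EXPLAIN)\b/i', $sql);
        $pdo    = $isRead ? $this->slave : $this->master;

        $stmt = $pdo->prepare($sql);
        $stmt->execute($params);
        return $stmt;
    }
}

// Usage:
$master = new PDO('mysql:host=master.db.local;dbname=app', 'user', 'pass');
$slave  = new PDO('mysql:host=slave.db.local;dbname=app', 'user', 'pass');
$db = new ReadWriteSplitter($master, $slave);

$db->query('INSERT INTO posts (title) VALUES (?)', array('Hello'));   // goes to the master
$rows = $db->query('SELECT * FROM posts')->fetchAll();                // goes to the slave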
I recommend not doing it even if you could. Replication is "asynchronous". That is, when you insert into the Master, there is no assurance that it will arrive at the Slave before you try to read it. Nor even any guarantee that it will arrive today!
If your user posts a comment on a blog and then goes to a page that shows the comment, they will be annoyed if the comment does not show. They may assume that the comment was lost and repost it. This causes you grief when users complain about double-posting.
This is called a "critical read". The simple way to avoid the mess is to be careful about what you send to the Slaves -- namely, nothing that would lead to "disappearing" posts.
There are various "proxy" packages that allow for the read-write split you describe; some try to avoid the "critical read", but I don't trust them.
A Galera Cluster (see PXC, MariaDB) can do synchronous reads, so it can avoid the critical-read problem. (There is, however, a setting you need to apply.)
I am using MariaDB in a PHP application. The problem is the following: using Doctrine DBAL with the MySQL adaptor I do an insert from one page and then redirect to another one, in which a SELECT is done. Both are very basic queries.
The problem is that the SELECT does not reflect the actual data, but an older version of it. I am hosting this application on shared hosting, so please consider that I won't have all DB configuration options/permissions available.
I have tried flushing after the first INSERT, but that does not help either; it still shows outdated data. I believe the query cache is invalidated when the data changes, and that it should not apply here anyway because it is, in fact, a different query.
I do not use transactions either, so the commit is supposedly done after the insert. Any idea on how to get the most recent data possible?
It sounds like you are doing Replication and the proxy for directing queries is oblivious to "Critical Reads".
In a replication setup (MariaDB or MySQL), there is one Master server and one Slave (at least). INSERTs (and other writes) must occur on the Master. Then they are replicated to the Slave(s). SELECTs, on the other hand, may be done on either server, but, for sharing the load, it is better to do them on the Slave(s).
Replication is "asynchronous". That is, the write is eventually sent to the Slave and performed there. Normally, the delay is sub-second. But, for a number of reasons, the delay could be arbitrarily large. One should not depend on how quickly writes are replicated.
But... There is a thing called a "Critical Read". This is when the SELECT needs to "see" the thing that was just written.
You have a "critical read".
I don't know what is deciding to direct your SELECT to a Slave.
If you are using the Galera clustering option of MariaDB, then you can protect yourself from the critical read problem by changing your select to
SET SESSION wsrep_sync_wait = 1;
SELECT ... (as before)
SET SESSION wsrep_sync_wait = 0;
However, the SETs must go to the same 'node' as the SELECT. Without knowing what kind of proxying is going on, I cannot be more specific.
I hope you are not reconnecting before each statement. That would be really bad.
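For example, with a single PDO connection all three statements naturally run on the same node (a sketch; $pdo, the table and the query are stand-ins for your own):
<?php
// All three statements go over the same connection, and therefore to the same node.
$pdo->exec("SET SESSION wsrep_sync_wait = 1");
$rows = $pdo->query("SELECT * FROM comments WHERE post_id = 123")   // the "critical read"
            ->fetchAll(PDO::FETCH_ASSOC);
$pdo->exec("SET SESSION wsrep_sync_wait = 0");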
More on Galera issues for developers
If you are using replication and Doctrine DBAL has nothing for critical reads, complain to them!
I am designing a "high" traffic application which relies mainly on PHP and MySQL database queries.
I am designing the database tables so they can hold 100,000 rows; each page load queries the DB for user data.
Could I experience slow performance or database errors when there are, say, 1,000 users connected?
I'm asking because I cannot find any specification of the real performance limits of MySQL databases.
Thanks
If the user data remains unchanged from one page load to the next, you could think about storing that information in a session.
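For example (a sketch; loadUserFromDatabase() and $userId are hypothetical stand-ins for whatever lookup you currently run on every page):
<?php
session_start();

// Hit the database only if the user data is not already cached in the session.
if (!isset($_SESSION['user'])) {
    // loadUserFromDatabase() is a hypothetical stand-in for your existing query.
    $_SESSION['user'] = loadUserFromDatabase($userId);
}
$user = $_SESSION['user'];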
Also, you should analyze what the read/write ratio in your database (and on specific tables) is. MyISAM and InnoDB are very different when it comes to locking. Many connections can slow down your server; try to cache connections.
Take a look at http://php.net/manual/en/pdo.connections.php
If designed wrongly, one user might kill your server. You need to run performance tests and find bottlenecks by profiling your code. Use EXPLAIN for your queries...
Well-designed databases can handle tens of millions of rows, but poorly designed ones can't.
Don't worry about performance; try to design it well.
It's just hard to say whether a design is good or not; you should always do some stress tests before you launch your application or website to get a feel for the performance. Tools I often use are mysqlslap (for MySQL only) and Apache's ab command; you can google them for details.
I have a report class that saves data like summary page views, banner impressions and so on. These are mostly INSERT and UPDATE queries. This takes a pounding.
Would it be a good idea to move these tables to a separate database and use a separate connection? I'm wondering if there would be any major advantages in relation to performance, scalability, etc. Thanks
Yes, but only if you are trying to manage load. If there are a lot of inserts and updates going on, that could cause locking issues if your tables are MyISAM. If you are doing replication, then heavy logging can cause slaves to fall behind because MySQL replication is serial. You can do 10 inserts simultaneously on 10 different tables, but replication causes them to run one after another. That can cause more important inserts to take longer to replicate.
Having a separate server for logging will help performance and scalability. Keep in mind though that you can't join across servers.
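A sketch of what that split looks like from PHP (hostnames, credentials, table and variable names are placeholders):
<?php
// Main application database.
$app = new PDO('mysql:host=app-db.example;dbname=site', 'app_user', 'app_pass');

// Separate connection (and server) just for report/logging writes, so the
// heavy INSERT/UPDATE traffic does not compete with the main tables.
$log = new PDO('mysql:host=log-db.example;dbname=reports', 'log_user', 'log_pass');

$stmt = $log->prepare('INSERT INTO banner_impressions (banner_id, viewed_at) VALUES (?, NOW())');
$stmt->execute(array($bannerId));   // $bannerId is a placeholder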