WordPress "MySQL server has gone away" - PHP

I have a WordPress site with thousands of categories/custom taxonomies and tens of thousands of posts.
I have a hard time keeping it online without caching, because the processor reaches 100% (used by the MySQL server, not PHP).
I have isolated the problem to a MySQL update:
WordPress database error: [MySQL server has gone away]
UPDATE wphn_options SET option_value = '...' WHERE option_name = 'rewrite_rules', and this is executed on every page load.
Does anyone know how I can stop this query from executing?
This is an example of what the option_value looks like (this is not even 1% of the query, just a short preview):
UPDATE `wphn_options` SET `option_value` = 'a:7269:{s:18:\"sitemap_trolio.xml\";s:33:\"index.php?aiosp_sitemap_path=root\";s:29:\"sitemap_trolio_(.+)_(\\d+).xml\";s:71:\"index.php?aiosp_sitemap_path=$matches[1]&aiosp_sitemap_page=$matches[2]\";s:23:\"sitemap_trolio_(.+).xml\";s:40:\"index.php?aiosp_sitemap_path=$matches[1]\";s:34:\"sitemap(-+([a-zA-Z0-9_-]+))?

Reading the content of that update to the options table, you can see it's related to the sitemap of your site. You may have a sitemap plugin. That sitemap plugin may do something on every page load. Try disabling it.
If you have access to phpMyAdmin, first make a backup of your installation and database (if you aren't doing so already). Then issue the SQL command OPTIMIZE TABLE wphn_options; and see if it helps. If it does, great; try optimizing some of the other tables as well. OPTIMIZE TABLE wphn_posts; might be a good one to try.
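As a sketch, the maintenance commands above look like this (the table names assume the wphn_ prefix from the question; adjust to your own prefix):

```sql
-- Back up first. Then defragment the busiest tables:
OPTIMIZE TABLE wphn_options;
OPTIMIZE TABLE wphn_posts;

-- Check data/index sizes and overhead before and after:
SHOW TABLE STATUS LIKE 'wphn_%';
```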
But look: your WordPress installation is underprovisioned. You need better server resources. You've gone to the trouble of creating tens of thousands of posts; by running them on such a weak server configuration, you are effectively concealing those posts from your audience, just to save a few coins.
And, you're running the risk of corrupting your site by using a weak server. Is this not the very definition of "penny wise, pound foolish?"
Your question is like "My car's battery is low. I want to stop wasting electricity on my brake lights. Please tell me how to cut the wires to the brake lights." With respect, the only rational answer is "Are you crazy? You'll risk smashing your car to avoid fixing your battery? Fix your battery!"

I have found the solution. It seems that because of the large number of posts and categories, the rewrite-rules query grew too large for the server to accept, and MySQL dropped the connection to protect itself.
I fixed the issue by adding max_allowed_packet=256M to the MySQL configuration file.
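For reference, this is roughly what that change looks like; the file location and the 256M value are assumptions and may differ on your system:

```ini
# /etc/mysql/my.cnf (location varies by distribution)
[mysqld]
# Allow larger client packets so the big rewrite_rules UPDATE is accepted
# instead of the server closing the connection ("server has gone away").
max_allowed_packet=256M
```

After restarting MySQL, you can verify the setting took effect with SHOW VARIABLES LIKE 'max_allowed_packet';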

Related

Limiting MySQL use per process

I have a Debian VPS configured with a standard LAMP stack.
On this server there is only one site (a shop) which has a few cron jobs, mostly PHP scripts. One of them is an update script executed by the Lynx browser, which sends tons of queries.
When this script runs (it takes 3-4 minutes to complete), it consumes all MySQL resources, and the site almost stops working (a page generates in 30-60 seconds instead of 1-2s).
How can I limit this script (i.e. extend its execution time by limiting its available resources) to allow other services to run properly? I believe there is a simple solution to the problem but can't find it. It seems my Google superpowers have been limited the last two days.
You don't have access to modify the offending script, so fixing this requires database administrator work, not programming work. Your task is called tuning the MySQL database.
(I guess you already asked your vendor for help with this, and they said no.)
Run top or htop while the script runs. Is the CPU pinned at 100%? Is RAM exhausted?
1) Just live with it, and run the update script at a time of day when your web site doesn't have many visitors. Fairly easy, but not a real solution.
2) As an experiment, add RAM to your VPS instance. It may let MySQL do things all-in-RAM that it's presently putting on the hard drive in temporary tables. If it helps, that may be a way to solve your problem with a small amount of work, and a larger server rental fee.
3) Add some indexes to speed up the queries in your script, so each query gets done faster. The question is, what indexes will help? (Just adding indexes randomly generally doesn't help much.)
First, figure out which queries are slow. Give the command SHOW FULL PROCESSLIST repeatedly while your script runs. The Info column in that result shows all the running queries. Copy them into a text file to keep them. (Or you can use MySQL's slow query log, about which you can read online.)
Second, analyze the worst offending queries to see whether there's an obvious index to add. Telling you how to do that generally is beyond the scope of a Stack Overflow answer. You might ask another question about a specific query. Before you do, please
read this note about asking good SQL questions, and pay attention to the section on query performance.
4) It's possible your script is SELECTing many rows, or using SELECT to summarize many rows, from tables that also need to be updated when users visit your web site. In that case your visitors may be waiting for those SELECTs to finish. If you could change the script, you could put this statement right before the long-running SELECTs.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
This allows the SELECT statement after it to do a "dirty read", in which it might get an earlier version of an updated row. See here.
Or, if you can figure out how to insert one statement into your obscured script, put this one right after it opens a database session.
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
Without access to the source code, though, you only have one way to see if this is the problem. That is, access the MySQL server from a privileged account, right before your script runs, and give these SQL commands.
SHOW VARIABLES LIKE 'tx_isolation';
SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
then see if the performance problem improves. Set it back after your script finishes, probably like this (depending on the tx_isolation value retrieved above):
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;
Warning: a permanent global change to the isolation level might foul up your application if it relies on transactional consistency. This is just an experiment.
5) Harass the script's author to fix this problem.
Slow queries? High CPU? High I/O? Then you must look at the queries. You cannot "tune your way out of a performance problem". Tuning might give you a few percent improvement; fixing the indexes and queries is likely to give you a lot more improvement.
See this for finding the 'worst' queries; then come back with SELECTs, EXPLAINs, and SHOW CREATE TABLEs for help.
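As a sketch of that workflow, assuming a hypothetical slow query from the update script (the table and column names here are made up for illustration):

```sql
-- Ask MySQL how it executes the suspect query.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- If the key column is NULL (no index used), an index on the
-- filtered column may help:
CREATE INDEX idx_orders_customer ON orders (customer_id);

-- Include the table definition when asking for help:
SHOW CREATE TABLE orders;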

Optimize loading time of php page

I have a simple PHP page requesting a list of addresses from a MySQL database. The database table has 1257 entries. I also include a dynamically loaded side menu for browsing to other pages.
In total, I make 5 MySQL queries:
The addresses
Pagination
Check whether the user has permission to browse
get all the groups for side menu
get all sub entries for side menu
The site takes about 5 seconds to load.
I Googled for ways to improve site load time and found the Google Developer tools with PageSpeed. I did all the improvements it suggested, like enabling deflate, changing the banner size, and so on, but the loading time is still nearly the same. I would like to know if this is common, or if there is anything I can do to improve the loading time.
EDIT: I have also indexed the columns and enabled the MySQL query cache. I also use foreign keys in the sub-entries table, which reference the menu group table.
EDIT2: I found the solution. The problem was that I used localhost to connect to my DB, but since I'm using Windows 7, it tried to connect via IPv6 first. I changed all localhost references to 127.0.0.1, and now it takes only about 126 ms to load the page.
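A minimal sketch of that fix, assuming a mysqli connection (the credentials here are placeholders):

```php
<?php
// Use the IPv4 loopback address explicitly. On Windows, the hostname
// "localhost" may resolve to the IPv6 ::1 first, and the fallback to
// IPv4 can add seconds to every connection attempt.
$db = mysqli_connect('127.0.0.1', 'db_user', 'db_password', 'db_name');
```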
First, find out what's taking the page so long to load, using the browser's developer console. If the cause of the delay is on the server side, e.g. the HTML itself takes a long time to generate, then check the following:
Try to log slow MySQL queries and make sure that you have none:
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
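Enabling the slow query log is a server-side setting; this is roughly what it looks like on MySQL 5.1+ (on 5.0 the option was log-slow-queries; the one-second threshold and file path here are assumptions):

```ini
# my.cnf
[mysqld]
# Log any query that takes longer than 1 second.
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1
```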
If you really have some expensive calculations going on (which is not likely in your case), try to cache them.
Don't forget about the benefits of PHP code accelerators like APC and MySQL optimizations (query cache etc.).
... there are many other ways to speed things up, but you've got to profile the app itself and see what's going on.
Have you indexed the columns used in your WHERE conditions? If not, please index those columns and check again.
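As a sketch, assuming a hypothetical addresses table filtered by a city column (the names are made up for illustration):

```sql
-- Index the column that appears in the WHERE clause...
CREATE INDEX idx_addresses_city ON addresses (city);

-- ...so that this kind of query no longer scans the whole table:
SELECT * FROM addresses WHERE city = 'Berlin';
```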

Multiple select list on same page -> error

I'm going crazy trying to solve a VERY weird error in a PHP script (Joomla). I have a page that displays multiple select dropdown lists, all of them showing the same list of items (the selected item changes from one list to another, but the listed items are the same). This list has around 35-40 items. This works fine up to a certain number of selects, but when I put more than 20 or 25 selects on the same page, it doesn't work and shows only a white page. No errors: there is no text displayed, no errors in the PHP logs, nothing, just a white page. If, using THE SAME CODE, I display 11 dropdown select lists... it works.
I'm guessing that this problem is related to memory or something like that, but I can't be sure because, as I've said, there are no errors displayed. Does anyone know about a similar issue? Can anyone give me a tip on how to address this problem? I don't know what to do, and I've tried many things, but it still doesn't work. Any help will be very much appreciated and welcomed...
NOTE: The select lists are filled with values from a DB table, and each select list has a different selected item based on contents from another table. It's not very complex code and, as I've said, it works fine when I use fewer select lists on the same page. The problem appears when I reach a certain number of select lists on the same page (I think it's around 20 or 25 input selects). I think the amount of data is not very exaggerated, so I can't understand why it doesn't work?!
A quick google for your issue turns this up:
for jos_session, which is the only table I suggested you empty, any logged on users will be logged off...any work in progress (forum posts/articles would be lost)...
I also empty recaptcha...
Please remember to always back up your db first.
I empty these two for a higher-volume Joomla 1.5 system once a week... we also set the session lifetime (no activity) to 60-90 minutes; it's a 6-7k/day volume site... this also helps our Akeeba backup, as the two aforementioned tables can get very large without proper maintenance.
Just some general ramblings...
You should also review your MySQL site report via phpMyAdmin's "Show MySQL runtime information". Look for things that are in red.
As for your overall question about performance: remember that there are many ways to improve a website's performance. Yes, it's another job required of site administrators, but at least it's an interesting process.
The Joomla performance forum is a great place to have your site reviewed and get good help tuning your site, including the minimum base server you need (shared/VPS/dedicated).
IMHO... the first objective is to turn off Joomla's cache and Joomla's gzip, and instead enable standard server modules like mod_deflate and mod_expires (mod_expires is one of the best fixes for returning visitors). Make sure the MySQL configuration options enabling query_cache are, or can be, set. You will need a minimum of a VPS. And there's more!... jajajaja
A little note about running on a shared server and not having certain server modules available:
check this out: http://www.webogroup.com/ It's really one heck of a product. I used it on the aforementioned site until I could implement the changes on the server. As I implemented each new server module, I turned off the corresponding Webo feature... the site is now blazing fast.
have fun

Optimizing and compressing mysql db with drupal

I have developed a news website in a local language (UTF-8) which serves an average of 28k users a day. The site has recently started showing many errors and slowing down. I got a call from the host saying that the DB is using almost 150GB of space. I believe that's way too much for the DB and think there is something critically wrong, but I cannot understand what it could be. The site is in Drupal and the DB is MySQL (InnoDB). Can anyone give directions as to what I should do?
UPDATE: It seems the InnoDB dump is using the space. What can be done about it? What's the standard procedure for dealing with this issue?
The question does not have enough info for a specific answer: maybe your code is writing the same data to the DB multiple times, maybe you are logging to a table and the logs have become very big, or maybe somebody managed to get access to your site/DB and is misusing it.
You need to log in to your database and check which table is taking the most space. Use SHOW TABLE STATUS (link), which will tell you the size of each table. Then manually check the data in that table to figure out what is wrong.
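A sketch of how to find the biggest tables across all databases (this uses information_schema, available in MySQL 5.0+; sizes are reported in MB here):

```sql
-- List tables ordered by total size (data + indexes), largest first.
SELECT table_schema,
       table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
ORDER BY (data_length + index_length) DESC
LIMIT 20;
```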

What can cause "too many database connections"

My WordPress website got an "error establishing a database connection" message.
My host informed me it was because my "user" had too many database connections open at once. This caused an error when making additional connections, and thus the message.
This has been corrected by killing deadlocked database connections. There were a number of connections copying data to temporary tables, but the deadlock was caused by a large set of lookups waiting for one update.
Can someone explain to me how this might have happened, and how to avoid it?
(p.s: that WP installation has over 2000 posts)
One thing I've seen help a great deal with WP and database speed is to clean your database of post and page revisions. WP keeps a full copy of each edit revision, and with 2000 posts, your database could be huge. Run this as an SQL query in phpMyAdmin to clear revisions. I've seen databases drop 75% in size and run much faster after clearing revisions. Change the table prefix if you changed it when you installed WP, and run a backup beforehand.
DELETE a,b,c
FROM wp_posts a
LEFT JOIN wp_term_relationships b ON (a.ID = b.object_id)
LEFT JOIN wp_postmeta c ON (a.ID = c.post_id)
WHERE a.post_type = 'revision';
Then optimize tables after you run that query to finish clearing the revisions, either from the dropdown menu in phpmyadmin to optimize the whole database, or by another query just for the posts table:
OPTIMIZE TABLE wp_posts;
Then you can prevent post/page revisions from accumulating again by adding this line to wp-config.php to stop revisions:
define ('WP_POST_REVISIONS', FALSE);
Or this line to select the number of revisions to keep:
define('WP_POST_REVISIONS', 3);
If you have access to your MySQL config file, look into tuning MySQL for better performance with a utility like GitHub - major/MySQLTuner-perl.
In a shared hosting environment this behavior will occur sooner or later as your blog starts seeing more traffic. The specifics you mentioned sound like they may be related to poorly-written WordPress plugins (for performance's sake, make sure all your plugins are updated along with the WordPress core).
You might also want to consider WP Super Cache if you haven't already.
There are two options you may want to look at,
PHP Persistent connections
MySQL max_connections setting
Persistent connections try to reuse the same connection to the MySQL server when one is available (the connection is not closed between PHP requests).
The MySQL max_connections setting lets you increase the number of simultaneous connections the server accepts.
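For reference, this is roughly how you would inspect and raise that limit; the value 200 is an arbitrary example, not a recommendation:

```sql
-- See the current limit and the high-water mark of connections used:
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Max_used_connections';

-- Raise it until the next server restart (make it permanent in my.cnf):
SET GLOBAL max_connections = 200;
```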
I'm not familiar with wordpress specifically, so here is my take from a DB perf tuning perspective:
First, I would dig in to understand why the update is taking so long. Perhaps there is a bad query plan which requires tuning your DB's indexing strategy.
If the update cannot be sped up, and you are willing for your lookups to potentially read data that isn't fully committed (which might be OK for a blog, but not for an accounting application, for example), then you can change the SELECTs to avoid blocking on the update; in SQL Server that means NOLOCK hints, and in MySQL the equivalent is the READ UNCOMMITTED isolation level.
See this SO question for more info
"Too many database connections" occurs when you have more database connections in use than the maximum allowed (obviously), that is, more people running queries on your site at a time than max_connections permits. How many connections does your MySQL server allow?
Are you using mysql_pconnect() or just mysql_connect()? With the former, the connection stays open longer and you cannot force it to close.
