Upgraded server, now dealing with MySQL sleeping queries - php

The Problem
MySQL is generating a lot of sleeping connections, at times so many that they cause a denial of service of sorts and really slow down (or shut down) our site. We created a temporary workaround by limiting how long an idle connection stays open before it is killed (which has additional negative side effects for legitimate connections), but this doesn't address the root of the problem, which I have been tasked to discover.
What Changed
This was not happening until recently, when we upgraded our server. We have (and have always had) a LAMP setup. Additionally, we updated the version of MySQL we're using, going from 5.6.19 to 5.6.26. We are also now using the latest stable version of PHP.
Research I've Done
My primary problem is that DBA is a weak part of my knowledge base. But the following articles have helped (1) improve my understanding of what may be going on and (2) show me that there are a lot of reasons this could be happening.
https://www.percona.com/blog/2007/02/08/debugging-sleeping-connections-with-mysql/
http://board.phpbuilder.com/showthread.php?10375101-How-to-Stop-MySQL-Sleep%28%29-Processes
wpapi.com/mysql-sleep-processes-issue-solved/
serverfault.com/questions/24191/mysql-sleeper-queries
major.io/2007/05/20/mysql-connections-in-sleep-state/
My Question
Essentially, because of my weak DBA skills, I'm having a hard time diagnosing our specific problem. Is it MySQL variables/settings, Apache settings, a PDO problem? I could really use some how-to direction, as I'm really just learning at the moment. Sleeping connections seem to appear in two forms: duplicates (for example, if I run SHOW PROCESSLIST, my own command shows up as one of the current processes and an additional sleeping connection is spawned alongside it), and random bursts where we have tons of sleeping connections unrelated to any duplicate process.
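For reference, a couple of diagnostic queries that make the sleepers visible; the `information_schema` variant works on MySQL 5.6:

```sql
-- Show every connection, including sleepers (Info is NULL for Sleep)
SHOW FULL PROCESSLIST;

-- Count sleepers grouped by connecting user/host, with the longest idle time
SELECT user, host, COUNT(*) AS sleepers, MAX(time) AS longest_idle_sec
FROM information_schema.processlist
WHERE command = 'Sleep'
GROUP BY user, host;
```

Running the second query a few times during a burst shows whether the sleepers come from one client host or script, which narrows down whether the leak is in PHP or elsewhere.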
As a side note, we don't use persistent connections. We did recently split out our database and all our clients have their own database now, but this all seemed much more aligned (time-frame wise) with the new server.
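One common PHP-side source of lingering Sleep connections is a script that finishes its queries early but holds its PDO object until the request ends. A minimal sketch, with placeholder credentials, of releasing the handle explicitly:

```php
<?php
// Hypothetical connection details for illustration only.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', [
    PDO::ATTR_PERSISTENT => false,         // matches the no-persistent-connections setup
    PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
]);

$rows = $pdo->query('SELECT id FROM customers LIMIT 10')->fetchAll();

// The connection stays open (showing as "Sleep" in SHOW PROCESSLIST)
// until the PDO object and any statements referencing it are destroyed.
$pdo = null; // drop the connection explicitly once the queries are done
```

If the script then does slow, non-database work (image processing, API calls), the connection would otherwise sit in Sleep for that whole time.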
Any thoughts or insights in helping me diagnose and fix our issue would be very helpful. Thanks!

Related

How to reduce Time till first byte - Wordpress

I've got a huge TTFB of around 6.5 seconds, when the whole load time of my site is around 7 seconds.
I've done the basics to try to reduce this: updating to the latest version of PHP, switching to HTTPS so I can enable HTTP/2 where possible, and enabling caches where possible, such as OPcache, which I've checked is up and running correctly via my phpinfo, which you can see here
https://www.camp-site-finder.com/phpinfo.php
This, along with setting up caching plugins such as W3 Total Cache, has reduced the issue, but on search queries there still seems to be a large wait.
As you can see here for example if you check the Network tab of developer tools
https://www.camp-site-finder.com/?sfid=48&_sf_s=england
So my question really is: how can I debug this? Are there tools out there to test what is taking so long, or is this a non-issue? Is that wait period really acceptable? Any advice, or pointers toward some research I could do, would be hugely appreciated.
If a search is slow, the bottleneck is almost always the database.
Which DB server do you use? I see MySQL and SQLite extensions active, but I guess it is the former. But do you use MySQL or MariaDB? You could try MariaDB or some other drop-in replacement for MySQL (like Percona Server), which should increase DB performance.
You should also log slow queries on the DB server, so you can check which queries are that slow. I'd guess you might have too many joins; in that case you may need to restructure the database.
Additionally, you could try to follow some basic DB performance tips like these:
Indexing
Assigned memory
etc.
Just google for 'increase MySQL performance' and you should find plenty of advice.
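For the slow-query logging suggested above, the log can be enabled at runtime without a restart; the path and threshold here are illustrative:

```sql
-- Runtime settings (lost on restart; put the equivalents in my.cnf to persist)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL long_query_time = 1;   -- log anything slower than 1 second
```

Once a day's worth of search traffic has been logged, the entries point directly at the queries behind the TTFB wait.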

Making a PHP website scale a lot more

My server's load average shoots up to 150 (and the server is actually quite powerful: 8 CPUs, plenty of RAM, etc.), and the cause is MySQL eating all of my CPUs, up to 700%!
I'm aware of Apache/MySQL tuning to meet better performance, I've done some, it worked a little bit but nowhere near the results I need.
All my problems come from this scenario: when the website's file-based cache invalidates, PHP scripts run to rebuild those cached areas, generating MySQL queries (quite heavy queries; I did some optimization on them too, but they're still taxing on MySQL). That's quite normal. The problem is when 100 people hit the website at the precise moment the cache is invalid, so they generate the same query 100 times, which drags MySQL, and my server with it, all the way down.
Are there any MySQL solutions to prevent duplication of the same query? Or is there any other technique to fix this particular scenario?
In my opinion you need an extra marker indicating that a cache rebuild is already in progress, so that only one update query runs and the old cache is served until it finishes.
That would trigger the expensive queries just once.
For a better answer please explain how you invalidate your cache.
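One way to implement the marker described above is a rebuild lock around the cache read. This is a sketch, not a drop-in: `cache_get`, `cache_put`, `try_lock`, `release_lock` and `run_expensive_queries` are placeholders for whatever backend (files, APC, memcached) is actually in use.

```php
<?php
// Stampede protection: only one request rebuilds an expired entry;
// everyone else keeps serving the stale copy until the rebuild lands.
function get_page_fragment($key)
{
    $cached = cache_get($key);   // ['data' => ..., 'expires' => ts] or null
    $fresh  = $cached !== null && $cached['expires'] > time();

    if ($fresh) {
        return $cached['data'];
    }

    if (try_lock($key)) {                      // only ONE request wins the lock
        $data = run_expensive_queries($key);   // the heavy MySQL work runs once
        cache_put($key, $data, 300);
        release_lock($key);
        return $data;
    }

    if ($cached !== null) {                    // stale but usable
        return $cached['data'];
    }

    usleep(100000);                            // cold cache: brief wait, retry
    return get_page_fragment($key);
}
```

The key design choice is serving stale data while one process rebuilds; that turns 100 simultaneous heavy queries into one.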

How to prevent corrupt database - MySQL

Currently running an ecommerce store. In the last couple of months we have been experiencing lots of very odd / random issues that have been affecting the store. In almost all cases we have been unable to replicate the faults and it turned out to be a corrupt database.
What can cause this to happen?
How can it be prevented?
EDIT:
Sorry, this is pretty vague. Basically I'm looking for things that could potentially cause database corruption. It's a MySQL 4 database.
What essentially causes database corruption, and how can you detect and prevent it?
Just generally.
The question is very broad so I will try to answer with broad suggestions.
MySQL, while not what I would call an enterprise-level DBMS, should not have random corruption problems.
Check your hardware. I don't know your platform, but assuming it's Linux, you can try prime95 to stress-test your processors and smartmontools to do some basic disk tests. Memtest86 can diagnose memory errors, but you probably don't want to reboot your server just for that.
Some specific kernel versions have, in my experience, possibly led to unexplained problems. Can you upgrade?
Test importing the data on a newer version of MySQL, or on a different system. This may not be useful because "no errors yet" doesn't mean the problem is resolved.
If you are using MyISAM tables, you may want to try an ACID-compliant engine such as InnoDB, which has since become MySQL's default storage engine.
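Whatever engine is in use, MySQL can also check tables for corruption directly; the table name here is just an example:

```sql
-- Check a table for corruption (works for both MyISAM and InnoDB)
CHECK TABLE orders EXTENDED;

-- REPAIR TABLE only works on MyISAM (and ARCHIVE/CSV) tables;
-- corrupt InnoDB tables generally need a restore from backup instead.
REPAIR TABLE orders;
```

Running `CHECK TABLE` on a schedule catches corruption early, before it surfaces as the kind of random store faults described above.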
This is a fairly general question, so I will just give you what I have seen personally.
Most of the MySQL database failures that I have seen have been caused by a hard drive becoming corrupted somehow, or by power failures when a server is hard powered-off instead of shut down properly.
The main thing you can do to limit the damage in these two cases is to back up your database often (and store the backups somewhere that will not be tampered with); this way you will always have something recent to revert to. Storing your data on a RAID array also helps because, depending on your setup, you can survive a disk or two crashing. Having backup power supplies in case the power goes out is good too.
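A backup along those lines can be as simple as a nightly `mysqldump` cron job. Credentials and paths below are placeholders; `--single-transaction` assumes InnoDB tables (use `--lock-tables` for MyISAM):

```shell
#!/bin/sh
# Nightly dump of one database, compressed and dated.
# --single-transaction gives a consistent snapshot of InnoDB tables
# without locking them for the duration of the dump.
mysqldump --single-transaction \
    -u backup_user -p'secret' shop_db \
    | gzip > /var/backups/mysql/shop_db_$(date +%F).sql.gz
```

Copying the resulting file off the server (so a disk failure can't take out both the database and its backups) is the other half of the job.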
Also, try to use a robust table engine that can recover if problems arise. I used to use MyISAM tables, but whenever I ran into a problem I would lose the data and have to start over from the latest backup. So I switched to InnoDB, and after that I was actually able to recover from most crashes. That was a while ago, though, so check what the current recommendation is.
Anyway, good luck solving your issues, and if you have any more information, hopefully I can help more.

How to optimize this website. Need general advices

This is my first question here, regarding a specific website's optimization.
A few months ago, we launched [site] for one of our clients; it is a kind of community website.
Everything works great, but the website is getting bigger now and shows some slowness when pages load.
The server specs:
PHP 5.2.1 (I think we need to upgrade to 5.3 to make use of the new garbage collector)
Apache 2.2
Quad-core Xeon processor @ 2.8 GHz and 4 GB DDR3 RAM
XCACHE 1.3 (we added this a few months ago)
MySQL 5.1 (we are using InnoDB as the engine)
Codeigniter framework
Here is what we did so far and what we intend to do further :
Besides XCache, we don't really use a caching mechanism, because most of the content is live, and beyond that, we didn't want to optimize prematurely since we didn't know what to expect in terms of traffic.
On the other hand, we have installed memcached and we want to implement a cache system based on memcached.
Regarding the database structure, we have reached 3NF with most of our tables, and yes, we have some slow queries (which we plan to optimize), but I think that's because the tables producing the slow queries are the ones for blog comments (~44,408 rows), user log tracking (~725,837 rows), user comments (~698,964 rows), etc., which are quite big. The entire database is 697.4 MB in size for now.
Also, here are some stats for January 2011:
Monthly unique visitors: 127,124
Monthly unique views: 4,829,252
Monthly unique visits: 242,708
Daily average:
Unique new visitors: 7,533
Unique new views: 179,680
Just let me know if you need more details.
Any advice is highly appreciated.
Thank you.
When it comes to performance issues, there is no golden rule saying the problem must be the database. What I'd suggest is performance profiling; there are many free and paid tools on the Internet that let you do this.
First, start off with the web server layer and make sure everything is configured correctly and as optimized as possible.
Then move on to the next layer (which I assume is your database). Whenever someone mentions InnoDB MySQL, we tend to assume indexes have been created to speed up search operations. Using indexes well is quite important, because indexing the wrong thing can make matters worse. My advice is to get a DBA (or equivalent) to troubleshoot this in a staging environment.
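As a concrete illustration of the indexing point, assuming a hypothetical `comments` table that is filtered by `post_id`:

```sql
-- Index the column(s) your WHERE/JOIN clauses actually filter on
CREATE INDEX idx_comments_post_id ON comments (post_id);

-- Then confirm the optimizer uses it
EXPLAIN SELECT * FROM comments WHERE post_id = 42;
```

If `EXPLAIN` still reports a full scan (`type: ALL`), the index doesn't match how the query filters, which is the "indexing something wrong" trap mentioned above.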
Another trick you could possibly look at is the content, from web page content to database data: make sure you show/keep data only where it is needed, don't store unnecessary information in the database, and use a smart layout on the web page. Cutting a second or two can make a big difference in usability and response time.
It is very hard to explain the detail here unless we have in-depth information about your application, its architecture and your environment, but above are some commonly used direction people use to troubleshoot such incident.
Good luck!
This site has excellent resources http://www.websiteoptimization.com/
The books that are mentioned are excellent. There are just too many techniques to list here and we do not know what you have tried so far.
Sorry for the delay guys, I have been very busy, but I tracked down the issue and fixed it.
Well, the problem was mostly Apache: I had an access log of almost 300 GB, which was parsed at midnight to generate Webalizer stats. While this was happening the website was very, very slow. I disabled Webalizer for the domain, cleared the logs, and lo and behold, it is very fast again, no matter what hour you access it.
I now have only a few slow queries left, which I intend to fix today.
I also updated to CI 2.0 Reactor as suggested and started to use the memcached driver.
Who would have known that Apache logs can be so problematic...
Based on the stats, I don't think you are hitting load problems... on a hunch, I would look to the database first. Database partitioning might be a good place to start.
But you should really do some profiling of your application first. How much time is spent in the application versus database. Are there application methods that are using lots of time and just need some tweaking? Are database queries not written efficiently? Do you need more or better database indices?
Everything looks pretty good. If upgrading CodeIgniter is an option, the new CodeIgniter 2.0 (Reactor) adds a new Cache driver with file system, APC and memcached support. Granted, you're already using XCache, but these new additions may be worth looking at.
When cache objects weren't enough for our multi-domain platform that saw huge traffic, we went the route of throwing more hardware at it: RAM, servers, databases. Then we moved to database clustering to handle forecasted heavy load on single accounts, and now we're switching from Apache to Nginx. It's a never-ending battle, but what worked for us was being smart about what we cached, increasing server memory, and then distributing the load across servers.
Cache as many database calls as you can. In my CI application I have a settings table that rarely changes, so I cache all calls made to it as I am constantly querying the settings table.
Cache your views and even your controllers as well. I tend to cache basically as much as I can in my CI applications and then refresh the cache when a file changes.
Only autoload important libraries, models and helpers. I've seen people autoload up to 10 libraries, and on top of that a few helpers and then a model. You only really need to autoload the database and session libraries if you are using them.
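The first tip above, caching a rarely-changing settings table, might look like this with the CodeIgniter 2.x cache driver; the key name and TTL are arbitrary:

```php
<?php
// CodeIgniter 2.x cache driver sketch: serve the settings table from
// memcached (falling back to files) instead of querying it every request.
$this->load->driver('cache', array('adapter' => 'memcached', 'backup' => 'file'));

$settings = $this->cache->get('site_settings');
if ($settings === FALSE) {
    $settings = $this->db->get('settings')->result_array();
    $this->cache->save('site_settings', $settings, 3600); // cache for an hour
}
```

When the settings table changes, deleting the `site_settings` key forces a refresh on the next request.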
Regarding point number 3: are you autoloading many things in your config/autoload.php file, by any chance? It might help speed things up to load only what you need in your controllers, as you need it, with the exception of course of the session and database libraries.

What's a good way to troubleshoot a script in terms of performance (PHP/MySQL)?

I've written a site CMS from scratch, and now that the site is slowly starting to get traffic (30-40k/day) I'm seeing the server load much higher than it should be. It hovers around 6-9 all the time, on a quad-core machine with 8 GB of RAM. I've written scripts that performed beautifully on 400-500k/day sites, so I'd like to think I'm not totally incompetent.
I've reduced the number of queries run on every page by nearly 60% by combining queries, eliminating some MySQL calls completely, and replacing some sections of the site with static TXT files that are updated with PHP when necessary. All these changes improved the page execution time (the index loads in 0.3 s, instead of 1.7 s as before).
There is virtually no IO wait, and the MySQL DB is just 30 MB. The site runs lighttpd, PHP 5.2.9, MySQL 5.0.77.
What can I do to get to the bottom of what exactly is causing the high load? I really want to localize the problem, since "top" just tells me it's mysql, which hovers between 50-95% CPU usage at all times.
Use EXPLAIN to help you optimize/troubleshoot your queries. It will show you how tables are referenced and how many rows are being read. It's very useful.
Also if you've made any modifications to your MySQL configuration, you may want to revisit that.
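A sketch of what that looks like in practice, on hypothetical `orders`/`customers` tables:

```sql
EXPLAIN SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at > '2009-01-01';
-- Watch the "type" and "rows" columns of the output: type=ALL with a
-- large row estimate means a full table scan that an index could avoid.
```

Run it on the queries `top` catches MySQL chewing on; the worst offenders usually stand out immediately.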
The best thing you can do is to profile your application code. Find out which calls are consuming so much of your resources. Here are some options (the first three Google hits for "php profiler"):
Xdebug
NuSphere PhpED
DBG
You might have some SQL queries that are very slow, but if they are run infrequently, they probably aren't a major cause of your performance problems. It may be that you have SQL queries that are more speedy, but they are run so often that their net impact to performance is greater. Profiling the application will help identify these.
The most general-purpose advice for improving application performance with respect to database usage is to identify data that changes infrequently, and put that data in a cache for speedier retrieval. It's up to you to identify what data would benefit from this the most, since it's very dependent on your application usage patterns.
As far as technology for caching, APC and memcached are options with good support in PHP.
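A minimal read-through cache with APC, for example; the key, query and TTL are made up, and `$db` stands for an existing PDO handle:

```php
<?php
// Read-through cache: serve from APC, fall back to the database on a miss.
$key  = 'top_articles';
$data = apc_fetch($key, $hit);          // $hit is set to true on a cache hit

if (!$hit) {
    $stmt = $db->query('SELECT id, title FROM articles ORDER BY views DESC LIMIT 10');
    $data = $stmt->fetchAll(PDO::FETCH_ASSOC);
    apc_store($key, $data, 600);        // keep the result for 10 minutes
}
```

This suits data that changes infrequently, per the advice above; anything that must always be fresh should keep hitting the database.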
You can also read through the MySQL optimization chapter carefully to identify any improvements that are relevant to your application.
Other good resources are MySQL Performance Blog, and the book "High Performance MySQL." If you're serious about running a MySQL-based website, you should be consulting these resources frequently.
mytop is a good place to start. It's basically top for MySQL, and will give you a window into what exactly your DB is doing:
http://jeremy.zawodny.com/mysql/mytop/
Noah
It could be any number of things, so it could take a lot of prodding. A good first step would be to turn on the slow query log and go over it by hand or with a parser. You can then pick specific heavily used, slow queries to optimize (perhaps ones that hit something unindexed).
