Can one find out which MySQL queries were slow? - PHP

I use the PHP function mysql_stat() to obtain information about my MySQL database.
There is an entry Slow queries, which in my case is 94 (of 301729 queries). Is there a log file that contains more information about those queries (time, execution time, the query itself, ...)?
14 hours ago I set up a new server with 4 times more RAM, but I still have 0.031154% slow queries, which is basically the same as before; I think that's very high. I would really like to find out which queries are slow and how to optimize them. What is an acceptable ratio of slow queries?

You can enable log-slow-queries in my.cnf. That writes the slow queries to a log file:
log_slow_queries = /var/log/mysql/mysql-slow.log
I think you should never have queries slower than 0.2 seconds when a user has to wait for the result. For cron jobs where no users are involved, it matters less. But if the cron uses the same database / table, its queries can slow down the normal ones (locking / I/O).
You can optimize your database by setting the right indexes, and use EXPLAIN to compare different queries.
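For example, a minimal sketch of running EXPLAIN from PHP, here using mysqli (the connection details and the orders query are made up for illustration):
$db = new mysqli('localhost', 'user', 'password', 'mydb');

// Prefix the suspect query with EXPLAIN to get its execution plan.
$result = $db->query('EXPLAIN SELECT * FROM orders WHERE customer_id = 42');

while ($row = $result->fetch_assoc()) {
    // type=ALL means a full table scan; key shows the index used (if any).
    printf("table=%s type=%s key=%s rows=%s\n",
        $row['table'], $row['type'], $row['key'], $row['rows']);
}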

Related

Large amount of inserts per second causing massive CPU load

I have a PHP script that, on every run, inserts a new row into a MySQL db (with a relatively small amount of data).
I have more than 20 requests per second, and this is causing my CPU to scream for help.
I'm using the SQL INSERT DELAYED method with a MyISAM engine (although I just noticed that INSERT DELAYED is not working with MyISAM).
My main concern is my CPU load, and I have started to look for ways to store this data with more CPU-friendly solutions.
My first idea was to write this data to hourly log files and, once an hour, retrieve the data from the logs and insert it into the DB at once.
Maybe a better idea is to use a NoSQL DB instead of log files, and then once an hour insert the data from the NoSQL store into MySQL.
I haven't tested any of these ideas yet, so I don't really know if they will manage to decrease my CPU load or not. I wanted to ask if someone can help me find the right solution, the one with the lowest impact on my CPU.
I recently had a very similar problem, and my solution was simply to batch the requests. This sped things up about 50 times, because of the reduced overhead of MySQL connections and the greatly decreased amount of reindexing. Storing them to a file and then doing one larger statement (100-300 individual inserts) at once is probably a good idea. To speed things up even more, turn off indexing for the duration of the insert with:
ALTER TABLE tablename DISABLE KEYS;
-- insert statement(s) here
ALTER TABLE tablename ENABLE KEYS;
Doing the batch insert will reduce the number of instances of the PHP script running, it will reduce the number of currently open MySQL handles (a large improvement), and it will decrease the amount of indexing.
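Roughly, the batching could look like this in PHP (a sketch using mysqli; the table, columns, and sample rows are all made up):
$db = new mysqli('localhost', 'user', 'password', 'mydb');

// $rows would be the 100-300 records collected since the last flush.
$rows = array(
    array('2012-01-01 00:00:00', 'event_a'),
    array('2012-01-01 00:00:01', 'event_b'),
);

$values = array();
foreach ($rows as $r) {
    $values[] = sprintf("('%s', '%s')",
        $db->real_escape_string($r[0]),
        $db->real_escape_string($r[1]));
}

// Disable index maintenance, run one multi-row INSERT, re-enable keys.
$db->query('ALTER TABLE log_table DISABLE KEYS');
$db->query('INSERT INTO log_table (created_at, event) VALUES ' . implode(',', $values));
$db->query('ALTER TABLE log_table ENABLE KEYS');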
OK guys, I managed to lower the CPU load dramatically with APC cache.
I'm doing it like so:
storing the data in memory with APC cache, with a TTL of 70 seconds:
apc_store('prfx_SOME_UNIQUE_STRING', $data, 70);
once a minute I loop over all the records in the cache:
$apc_list = apc_cache_info('user');
foreach ($apc_list['cache_list'] as $apc) {
    // Only pick up our own 'prfx_' entries, then remove them from the cache.
    if ((substr($apc['info'], 0, 5) == 'prfx_') && ($val = apc_fetch($apc['info']))) {
        $values[] = $val;
        apc_delete($apc['info']);
    }
}
inserting the $values into the DB:
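That last step could look roughly like this (a sketch assuming a mysqli connection and that each cached $val is an array of plain column values; the stats table and its columns are made up):
$db = new mysqli('localhost', 'user', 'password', 'mydb');

$tuples = array();
foreach ($values as $val) {
    // Escape each column value, then build one "(...)" tuple per record.
    $escaped = array_map(array($db, 'real_escape_string'), $val);
    $tuples[] = "('" . implode("','", $escaped) . "')";
}

if ($tuples) {
    // One multi-row INSERT per minute instead of one INSERT per request.
    $db->query('INSERT INTO stats (col_a, col_b) VALUES ' . implode(',', $tuples));
}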
and the CPU continues to smile..
enjoy
I would insert a sleep(1); call at the top of the loop in your PHP script, before every insert, where 1 = 1 second. This only allows the loop to cycle once per second.
That way it will regulate a bit just how much load the CPU is getting; this would be ideal, assuming you're only writing a small number of records in each run.
You can read more about the sleep function here: http://php.net/manual/en/function.sleep.php
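A minimal sketch of the throttling idea ($records is placeholder data and the INSERT itself is elided):
$records = array('a', 'b', 'c');  // placeholder data

foreach ($records as $record) {
    // ... run the single INSERT for $record here ...
    sleep(1);  // throttle: at most one loop iteration per second
}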
It's hard to tell without profiling both methods. If you write to a log file first, you could end up just making it worse, as you're turning your operation count from N into N*2. You gain a slight edge by writing it all to a file and doing a batch insert, but bear in mind that as the log file fills up, its load/write time increases.
To reduce database load, look at using memcache for database reads if you're not already.
All in all though, you're probably best off just trying both and seeing which is faster.
Since you are trying INSERT DELAYED, I assume you don't need up-to-the-second data. If you want to stick with MySQL, you can try using replication and the BLACKHOLE table type. By declaring a table as type BLACKHOLE on one server, then replicating it to a MyISAM or other table type on another server, you can smooth out CPU and I/O spikes. BLACKHOLE is really just a replication log file, so "inserts" into it are very fast and light on the system.
I do not know your table size or your server's capabilities, but I guess you need to make a lot of inserts into a single table. In such a situation I would recommend looking into vertical partitioning, which reduces the physical size of each partition and can significantly reduce the insertion time into the table.

What is an alternative way to profile your web app without going through a profiler program?

I have a website that uses PHP and MySQL. I want to determine the DB queries that take the most time. Instead of using a profiler, what other methods can I use to pinpoint the query bottlenecks?
You can enable logging of slow queries in MySQL:
http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
The slow query log consists of all SQL statements that took more than long_query_time seconds to execute and (as of MySQL 5.1.21) required at least min_examined_row_limit rows to be examined. The time to acquire the initial table locks is not counted as execution time. mysqld writes a statement to the slow query log after it has been executed and after all locks have been released, so log order might be different from execution order. The default value of long_query_time is 10.
This is a large subject and I can only suggest a couple of pointers, but I am sure there is good coverage of it elsewhere on SO.
Firstly, profiling DB queries: every RDBMS keeps a long-running-query list; in MySQL it is turned on with a single config flag. This will catch queries that run a long time (which could be on the order of seconds, a long time for a modern RDBMS).
Also, every RDBMS returns a time to execute with its recordset. I strongly suggest you pull all the calls to the database through one common function, say "executequery", then write the SQL and execution times to a file for later analysis.
In general, slow queries come from poor table design and the lack of good indexes. Run an EXPLAIN over any query that worries you; the DB will tell you how it will run that query. Any "table scans" indicate that the RDBMS cannot find an index on the table that meets the query's needs.
Next, "profiling" is a term most often used for measuring the time spent executing parts of a program, i.e. which for loops are used all the time, or the time spent establishing a DB connection.
What you seem to want is performance testing.
Just read the time before/after executing each query. This is easy if you use any database abstraction class/function.
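A minimal sketch of such a wrapper, here using mysqli (the function name, log path, and connection details are illustrative):
// Route every query through one function and log its SQL and duration.
function execute_query(mysqli $db, $sql)
{
    $start = microtime(true);
    $result = $db->query($sql);
    $elapsed = microtime(true) - $start;

    // Append the timing to a file for later analysis.
    file_put_contents('/tmp/query-times.log',
        sprintf("%.6f\t%s\n", $elapsed, $sql), FILE_APPEND);

    return $result;
}

$db = new mysqli('localhost', 'user', 'password', 'mydb');
$result = execute_query($db, 'SELECT 1');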

multiple MySQL queries vs. multiple PHP Sessions

I originally had a page that ran 60 MySQL queries, which was obviously flawed; the page took a couple of seconds to load. So I changed the code to one MySQL query and used PHP sessions/arrays to arrange the 60 results. The page now loads much faster, almost instantly, but I'm wondering: is this way better than the MySQL approach, design-wise? I have an incrementing session variable that is set in a while loop (60 loops); each session entry holds an array, which I then sort.
Both are bad, as Yoda did say.
You have to move in a completely different direction:
Sensibly reduce the number of queries. There is nothing inherently bad in having 60 queries; a page could run that many and still load in a fraction of a second. But it would be wise to remove the unnecessary ones.
Optimize query runtime. Determine which query runs slowly and optimize it, by prefixing it with DESC (or rather EXPLAIN EXTENDED followed by SHOW WARNINGS), by adding indexes, and so on.
It's impossible to say more for such a vague question without a single example query.
Reduce the number of queries.
Optimize queries for performance.
Use memcache or APC instead of sessions; sessions are not made for this purpose. A sketch of the APC approach follows below.
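A sketch of point 3, caching the single query's results in APC instead of sessions (the cache key, TTL, and items query are made up):
$key = 'page_results_v1';

if (($results = apc_fetch($key)) === false) {
    // Cache miss: run the one query and store the 60 rows for 5 minutes.
    $db = new mysqli('localhost', 'user', 'password', 'mydb');
    $res = $db->query('SELECT * FROM items ORDER BY score DESC LIMIT 60');

    $results = array();
    while ($row = $res->fetch_assoc()) {
        $results[] = $row;
    }
    apc_store($key, $results, 300);
}

// $results now holds the 60 rows, from cache or from the database.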

How can I speed up InnoDB queries to get performance comparable to MyISAM?

I have recently switched my database tables from MyISAM to InnoDB and am experiencing bad timeouts with queries, mostly inserts. One function I use previously took <2 seconds to insert, delete and update a large collection of records across ~30 MyISAM tables, but now that they are InnoDB, the function causes a PHP timeout.
The timeout was set to 60 seconds. I have optimised my script enough that now, even though there are still many queries, they are combined together (multiple inserts, multiple deletes, etc.) and the script takes ~25 seconds, which is a substantial improvement over what appeared to be at least 60 seconds before.
This is still over 10x slower than it was with MyISAM. Are there any mistakes I could be making in the way I process these queries, or are there any settings that could assist with performance? Currently MySQL is using the default installation settings.
The queries are nothing special: DELETE ... WHERE ... simple logic, same with the INSERT and UPDATE queries.
Hard to say without knowing more about your environment, but this might be more of a database-tuning problem. InnoDB can be VERY slow on budget hardware where every write forces a true flush. (This affects writes, not reads.)
For instance, you may want to read up on options like:
innodb_flush_log_at_trx_commit=2
sync_binlog=0
By avoiding the flushes you may be able to speed up your application considerably, but at the cost of potential data loss if the server crashes.
If data loss is something you absolutely cannot live with, then the other option is to use better hardware.
Run explain for each query. That is, if the slow query is select foo from bar;, run explain select foo from bar;.
Examine the plan, and add indices as necessary. Re-run the explain, and make sure the indices are being used.
InnoDB builds adaptive hash indexes, which can speed up index lookups by bypassing the B-tree and using the hash instead, which is faster.

Performance tuning a MySQL database

How can I log which queries are executed and how many seconds a MySQL query takes to execute, in order to increase the performance of the database?
I am using PHP
Use the slow query log for catching queries which run longer than a specified time limit (long_query_time, 10 seconds by default). This will show you the worst offenders; probably many will be low-hanging fruit, fixable by proper indexes.
It won't catch a different type of code smell: a relatively fast query running many times in a tight loop.
If you wish, you may set the slow query limit to 0 and it will log all queries; note that this in itself will slow down the server.
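If you cannot edit my.cnf, MySQL 5.1+ also lets you switch the slow query log on at runtime (this needs the SUPER privilege); a minimal sketch from PHP, with made-up credentials:
$db = new mysqli('localhost', 'root', 'password', 'mydb');

$db->query("SET GLOBAL slow_query_log = 'ON'");  // available from MySQL 5.1
$db->query('SET GLOBAL long_query_time = 2');    // log anything over 2 seconds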
