Performance tuning a MySQL database - PHP

How can I log which queries are executed and how many seconds each MySQL query takes, so that I can improve database performance?
I am using PHP

Use the slow query log to catch queries which run longer than a specified time limit (10 seconds by default). This will show you the worst offenders; many will probably be low-hanging fruit, fixable with proper indexes.
It won't catch a different type of code smell: a relatively fast query running many times in a tight loop.
If you wish, you may set the slow query limit to 0 and it will log all queries - note that this in itself will slow down the server.
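For reference, a minimal my.cnf fragment enabling the slow query log (option names as in MySQL 5.1+; adjust the path and threshold for your setup):

```ini
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
# Log statements running longer than this many seconds; 0 logs everything
long_query_time     = 2
```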

Related

Parallel queries to databases

I've got about 30 databases (on different machines) with the same structure, and I want to run the very same query against all of them.
Normally I prepare the connections and then loop over them with foreach, connecting to every database, sending the query, and waiting for the result.
I was thinking about running those queries in parallel processes, so that instead of waiting for the sum of all the individual times (i.e. 1 second per query per server), the total would be the time of the longest-running query.
First I thought about mysqli::poll / MYSQLI_ASYNC, but it depends heavily on mysqli.
I've found a similar question, PHP asynchronous mysql-query, but it's over 3 years old. Maybe somebody has found another way?
The only independent solution I can think of right now is using pcntl_fork to split the queries into parallel processes and then collecting the data using shared memory.
Is there any other method in PHP to work around this and achieve the desired result?
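For illustration, a sketch of the mysqli::poll / MYSQLI_ASYNC approach mentioned above. Hostnames, credentials, and the query are placeholders; this assumes all servers accept the same credentials:

```php
<?php
// One connection per server; fire the same query asynchronously on each.
$hosts = ['db1.example.com', 'db2.example.com']; // ...up to 30 hosts
$sql   = 'SELECT COUNT(*) AS cnt FROM orders';

$links = [];
foreach ($hosts as $host) {
    $link = mysqli_connect($host, 'user', 'password', 'mydb');
    $link->query($sql, MYSQLI_ASYNC);   // returns immediately
    $links[] = $link;
}

// Poll until every connection has delivered its result.
$pending = $links;
$results = [];
while ($pending) {
    $read = $error = $reject = $pending;
    if (mysqli_poll($read, $error, $reject, 1) < 1) {
        continue; // nothing ready yet, poll again
    }
    foreach ($read as $link) {
        if ($result = $link->reap_async_query()) {
            $results[] = $result->fetch_assoc();
            $result->free();
        }
        // Remove the finished connection from the pending set.
        $key = array_search($link, $pending, true);
        unset($pending[$key]);
    }
}
// Total wall time is roughly that of the slowest server, not the sum.
```

Note this still depends on mysqli, as the question points out; the pcntl_fork alternative trades that dependency for process-management complexity.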

In what order does MySQL process queries from 2 different connections?

Let's say I have two files file1.php and file2.php.
file1.php has the following queries:
-query 1
-query 2
-query 3
file2.php has the following queries:
-query 4
-query 5
-query 6
Let's say one visitor runs the first script and another visitor runs the second one exactly at the same time.
My question is: does MySQL receive one connection and keep the second connection in queue while executing all queries of the first script, and then moves on to the second connection?
Will the order of queries processed by MySQL be 1,2,3,4,5,6 (or 4,5,6,1,2,3) or can it be in any order?
What can I do to make sure MySQL executes all queries of one connection before moving on to another connection?
I'm concerned with data integrity. For example, consider an account balance shared by two users: they might both read the same value, and if they both send queries at the same time, this could lead to an unexpected outcome.
The database can accept queries from multiple connections in parallel. It can execute the queries in arbitrary order, even at the same time. The isolation level defines how much the parallel execution may affect the results:
If you don't use transactions, the queries can be executed in parallel, and even the strongest isolation level only guarantees that each query returns the same result as if it were not executed in parallel; the queries can still run in any order (they are only ordered within each connection)
If you use transactions, the database can guarantee more:
The strongest isolation level is serializable, which means the results will be as if no two transactions ran in parallel, but the performance will suffer.
The weakest isolation level is the same as not using transactions at all; anything could happen.
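For reference, the isolation level can be chosen per session before starting a transaction:

```sql
-- Strongest: serializes conflicting transactions (more blocking/deadlocks)
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- Others: READ COMMITTED, REPEATABLE READ (InnoDB default), READ UNCOMMITTED
```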
If you want to ensure data consistency, use transactions:
START TRANSACTION;
...
COMMIT;
The default isolation level in MySQL (InnoDB) is REPEATABLE READ, which behaves roughly like SERIALIZABLE except that ordinary SELECTs are non-locking reads. If you use SELECT ... FOR UPDATE for every SELECT within the transaction, you get behavior close to SERIALIZABLE.
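A sketch of the balance scenario from the question using SELECT ... FOR UPDATE, so the row stays locked until COMMIT (table and column names are illustrative):

```sql
START TRANSACTION;
-- Locks the row; a second connection doing the same blocks here until COMMIT
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
```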
See: http://dev.mysql.com/doc/refman/5.0/en/set-transaction.html#isolevel_repeatable-read
In general, you cannot predict or control order of execution - it can be 1,2,3,4,5,6 or 4,5,6,1,2,3 or 1,4,2,5,3,6 or any combination of those.
MySQL executes queries from multiple connections in parallel, and server performance is shared across all clients (2 in your case).
I don't think you have a reason to worry or change this - MySQL was created with this in mind, to be able to serve multiple connections.
If you have performance problems, they typically can be solved by adding indexes or changing your database schema - like normalizing or denormalizing your tables.
You could limit max_connections to 1, but then other connections will get a "too many connections" error. Limiting concurrent execution makes no sense.
Make the operation a transaction and set autocommit to false.
Access all of your tables in the same order, as this will prevent deadlocks.
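A minimal sketch of that advice with mysqli (host, credentials, and the accounts/transfers tables are placeholders):

```php
<?php
// Make mysqli throw exceptions on errors so the catch block below works.
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);

$db = new mysqli('localhost', 'user', 'password', 'mydb');
$db->autocommit(false); // nothing is saved until commit()

try {
    // Touch tables in the same order in every script to avoid deadlocks.
    $db->query("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
    $db->query("INSERT INTO transfers (account_id, amount) VALUES (1, -100)");
    $db->commit();
} catch (mysqli_sql_exception $e) {
    $db->rollback(); // undo both statements on any error
    throw $e;
}
```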

Can one find out which MySQL-queries were slow?

I use the PHP function mysql_stat() to obtain information about my MySQL database.
There is an entry Slow queries, which in my case is 94 (out of 301729 queries). Is there a log file that contains more information about these queries (time, execution time, the query itself, ...)?
14 hours ago I set up a new server with 4 times more RAM, but I still have 0.031154% slow queries, which is basically the same as before; I think that's very high. I would really like to find out which queries are slow and how to optimize them. What is an acceptable ratio of slow queries?
You can enable log-slow-queries in my.cnf. That writes the slow queries to a log.
log_slow_queries = /var/log/mysql/mysql-slow.log
I think a query should never take more than 0.2 seconds if a user has to wait for it. When running cron jobs where no users are involved, it matters less. But if the cron uses the same database/tables, its queries can slow down the normal queries (locking, I/O).
You can optimize your database by setting the right indexes, and use EXPLAIN to try different queries.
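For example (the orders table and index names are made up):

```sql
-- "type: ALL" in the output means a full table scan - a missing index
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Add an index on the filtered column and re-run the EXPLAIN;
-- it should then show "type: ref" using the new index
CREATE INDEX idx_orders_customer ON orders (customer_id);
```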

What is an alternative way to profile your web app without going through a profiler program?

I have a website that uses PHP and MySQL. I want to determine the DB queries that take the most time. Instead of using a profiler, what other methods can I use to pinpoint the query bottlenecks?
You can enable logging of slow queries in MySql:
http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
The slow query log consists of all SQL statements that took more than long_query_time seconds to execute and (as of MySQL 5.1.21) required at least min_examined_row_limit rows to be examined. The time to acquire the initial table locks is not counted as execution time. mysqld writes a statement to the slow query log after it has been executed and after all locks have been released, so log order might be different from execution order. The default value of long_query_time is 10.
This is a large subject and I can only suggest a couple of pointers - but I am sure these subjects are well covered elsewhere on SO.
First, profiling DB queries: every RDBMS can log long-running queries; in MySQL this is turned on with a single config flag. This will catch queries that run a long time (which could be on the order of seconds - a long time for a modern RDBMS).
Also, every RDBMS returns an execution time with its recordset. I strongly suggest you pull all the calls to the database through one common function, say "executequery", and write the SQL and execution times to a file for later analysis.
In general, slow queries come from poor table design and the lack of good indexes. Run an EXPLAIN over any query that worries you - the database will tell you how it plans to run that query - any "table scans" indicate the RDBMS cannot find an index on the table that meets the query's needs.
Next, profiling is a term most often used for measuring the time spent in parts of a program, i.e. which loops run all the time, or time spent establishing a DB connection.
What you seem to want is performance testing.
Just read the time before/after executing each query. This is easy if you use any database abstraction class/function.
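A sketch of that idea: route every query through one wrapper that logs the SQL and the elapsed time (the helper name and the 0.2 s threshold are made up for illustration):

```php
<?php
// Hypothetical wrapper: runs a query via the given callable and logs slow ones.
function timed_query(callable $run, string $sql, float $slowThreshold = 0.2)
{
    $start   = microtime(true);
    $result  = $run($sql);
    $elapsed = microtime(true) - $start;

    if ($elapsed > $slowThreshold) {
        error_log(sprintf("[slow %.4fs] %s", $elapsed, $sql));
    }
    return $result;
}

// Usage with mysqli:
// $result = timed_query(fn($sql) => $db->query($sql), "SELECT ...");
```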

Reducing execution time of individual PHP files that are not mandatory to the system, like Ajax JSON requests

I want to make sure AJAX responses from dynamic JSON pages do not slow down the server when the SQL queries take too long. I'm using PHP and MySQL with Apache2. I had the idea of using ini_set() or set_time_limit() inline to reduce the execution time of these pages. Is this effective? Are there any alternatives, or a MySQL syntax equivalent for limiting query time?
These are used, for example, with jQuery UI autosuggestions and similar features, which are better left not working than slowing down the server.
If it makes sense for your application, then go ahead and set_time_limit with the desired max execution time. However, it most likely makes more sense to tweak your queries and introduce caching of query results.
The memory_get_usage() function can be used to find out how much memory your PHP script is using.
As you said, you can set a time limit, but how will this improve your code?
If your MySQL query is going to take 5 minutes and you set the time limit to 2 minutes, what will happen?
The main thing is optimizing the MySQL query itself. If you are going to fetch huge amounts of data:
- fetch it in blocks: set a LIMIT to fetch 1000 rows, then the next 1000;
- use indexing;
- use optimized joins if you are joining tables.
You can also use stored procedures if they work for your application; MySQL 5 has them.
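The block-fetching advice can be sketched like this (big_table is a placeholder; seeking on the primary key avoids the growing cost of a large OFFSET):

```sql
-- First block
SELECT id, data FROM big_table ORDER BY id LIMIT 1000;

-- Next block: replace 1000 with the largest id returned by the previous block
SELECT id, data FROM big_table WHERE id > 1000 ORDER BY id LIMIT 1000;
```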
