In what order does MySQL process queries from 2 different connections? - php

Let's say I have two files file1.php and file2.php.
file1.php has the following queries:
-query 1
-query 2
-query 3
file2.php has the following queries:
-query 4
-query 5
-query 6
Let's say one visitor runs the first script and another visitor runs the second one exactly at the same time.
My question is: does MySQL receive one connection and keep the second connection in a queue while it executes all the queries of the first script, and then move on to the second connection?
Will the order of queries processed by MySQL be 1,2,3,4,5,6 (or 4,5,6,1,2,3) or can it be in any order?
What can I do to make sure MySQL executes all queries of one connection before moving on to another connection?
I'm concerned about data integrity. For example, consider an account balance read by two users who share the same account: they might both see the same value, but if they both send update queries at the same time, this could lead to an unexpected outcome.

The database can accept queries from multiple connections in parallel. It can execute the queries in arbitrary order, even at the same time. The isolation level defines how much the parallel execution may affect the results:
If you don't use transactions, the queries can be executed in parallel, and even the strongest isolation level only guarantees that each query returns the same result as if the queries had not run in parallel; they can still be interleaved in any order (queries are only ordered within each connection).
If you use transactions, the database can guarantee more:
The strongest isolation level is serializable, which means the results will be as if no two transactions ran in parallel, but the performance will suffer.
The weakest isolation level is the same as not using transactions at all; anything could happen.
If you want to ensure data consistency, use transactions:
START TRANSACTION;
...
COMMIT;
The default isolation level in MySQL (InnoDB) is REPEATABLE READ, which is roughly equivalent to SERIALIZABLE except that ordinary SELECTs behave as non-locking snapshot reads. If you use SELECT ... FOR UPDATE for every SELECT within the transaction, you get behavior close to SERIALIZABLE.
See: http://dev.mysql.com/doc/refman/5.0/en/set-transaction.html#isolevel_repeatable-read
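As a minimal sketch of what that might look like from PHP with mysqli, applied to the shared-account-balance concern from the question (the accounts table, column names and amounts are made up for illustration; error handling omitted):

$db = new mysqli('localhost', 'user', 'pass', 'shop');
$db->begin_transaction();                          // sends START TRANSACTION
// lock the row so a concurrent transaction has to wait for us
$res = $db->query("SELECT balance FROM accounts WHERE id = 1 FOR UPDATE");
$row = $res->fetch_assoc();
if ($row['balance'] >= 50) {
    $db->query("UPDATE accounts SET balance = balance - 50 WHERE id = 1");
    $db->commit();                                 // make the change permanent
} else {
    $db->rollback();                               // not enough money, undo everything
}

Because of the FOR UPDATE lock, the second connection cannot read a stale balance while the first one is still deciding what to do with it.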

In general, you cannot predict or control order of execution - it can be 1,2,3,4,5,6 or 4,5,6,1,2,3 or 1,4,2,5,3,6 or any combination of those.
MySQL executes queries from multiple connections in parallel, and server performance is shared across all clients (2 in your case).
I don't think you have a reason to worry or change this - MySQL was created with this in mind, to be able to serve multiple connections.
If you have performance problems, they typically can be solved by adding indexes or changing your database schema - like normalizing or denormalizing your tables.

You could limit max_connections to 1, but then other connections will simply get a "Too many connections" error. Limiting concurrent execution makes no sense.

Run the operation as a transaction and set autocommit to false.
Access all of your tables in the same order, as this helps prevent deadlocks.
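A rough sketch with mysqli (the table names are illustrative, not from the question): if every script updates the tables in the same order, for example always accounts before audit_log, two transactions cannot end up waiting on each other's locks in a cycle.

$db->autocommit(false);                 // equivalent to SET autocommit = 0
// every script touches accounts first, then audit_log, never the reverse
$db->query("UPDATE accounts SET balance = balance - 50 WHERE id = 1");
$db->query("INSERT INTO audit_log (account_id, delta) VALUES (1, -50)");
$db->commit();                          // both changes become visible together
$db->autocommit(true);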


Disable BEGIN/COMMIT when using mysqli

I have an app which takes heartbeats (a simple HTTP request) from hosts; typically each host generates one request every x minutes. This results in a large number of completely independent PHP page runs, each of which does a read query and then (possibly) generates a single-row insert to the RDS database. It doesn't really matter if an individual insert succeeds (one missed beat isn't a reason for alarm, several are).
However, with mysqli I have significant overhead in IOPS: it sends a BEGIN, my single-row insert, then a COMMIT, and therefore appears to use three IOPS where I only need one.
Is there any way to avoid the transactions entirely? I could change auto_commit, but it's useless as each run of the handler is separate, so there is no other insert to group with this one. Even turning auto_commit off still runs a transaction, but only ends it when the connection closes (which happens after one insert anyway.)
Or should I switch to raw mysql handling for efficiency (lots of work)? The old mysql php library (deprecated)? Something else?
If you really don't need transactions, you can use the MyISAM storage engine for your tables; it doesn't support transactions at all.
I always run
SET AUTOCOMMIT=1
just after connecting to the database, for my InnoDB databases where I don't need transactions.
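In PHP that could look like the sketch below (host, table and column names are made up; whether it actually removes the extra round trips depends on what your driver or framework sends on its own):

$db = new mysqli('rds-host', 'user', 'pass', 'heartbeats');
$db->autocommit(true);      // same effect as SET AUTOCOMMIT=1
// a single-statement INSERT is now committed implicitly, no explicit BEGIN/COMMIT
$db->query("INSERT INTO heartbeat (host_id, seen_at) VALUES (42, NOW())");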

Does MySQL queue queries?

What happens if there are two people sending the same query at the same time to the database and one makes the other query return something different?
I have a shop where there is one item left. Two or more people buy the item and the queries arrive at exactly the same time on the MySQL server. My guess is that it will just queue, but if so, how does MySQL pick which one to execute first, and can I influence this?
sending the same query at the same time
QUERIES DO NOT ALWAYS RUN IN PARALLEL
It depends on the database engine. With MyISAM, nearly every query acquires a table level lock meaning that the queries are run sequentially as a queue. With most of the other engines they may run in parallel.
echo_me says nothing happens at the exact same time and a CPU does not do everything at once
That's not exactly true. It's possible that a DBMS could run on a machine with more than one CPU and with more than one network interface. It's very improbable that two queries could arrive at the same time, but not impossible, hence there is a mutex to ensure that the parsing/execution transition only runs as a single thread (of execution, not necessarily the same lightweight process).
There are two approaches to handling concurrent DML. One is to use transactions (where each user effectively gets a clone of the database): when the queries have completed, the DBMS tries to reconcile any changes, and if the reconciliation fails it rolls back one of the transactions and reports it as failed. The other approach is row-level locking: the DBMS identifies the rows which will be updated by a query and marks them as reserved for update (other users can read the original version of each row, but any attempt to update the data will be blocked until the row is available again).
Your problem is that you have two MySQL clients, each of which has retrieved the fact that there is one item of stock left. This is further complicated by the fact that (since you mention PHP) the stock level may have been retrieved in a different DBMS session than the subsequent stock adjustment, and you cannot have a transaction spanning more than one HTTP request. Hence you need to revalidate, within a single transaction, any fact maintained outside the DBMS.
Optimistic locking can create a pseudo-transaction control mechanism: you flag a record you are about to modify with a timestamp and a user identifier (with PHP, the PHP session ID is a good choice); if, when you come to modify it, something else has changed it, your code knows that the data it retrieved previously is no longer valid. However, this can lead to other complications.
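A sketch of that optimistic check with mysqli (the version column on items is made up for illustration; in a real app the version value would travel with the form or session between requests):

// read the row together with its current version number
$row = $db->query("SELECT quantity, version FROM items WHERE id = 100")->fetch_assoc();

// ... the user thinks about it, possibly in a later HTTP request ...

// only decrement if nobody changed the row in the meantime
$version = $row['version'];
$stmt = $db->prepare("UPDATE items SET quantity = quantity - 1, version = version + 1 WHERE id = 100 AND version = ?");
$stmt->bind_param('i', $version);
$stmt->execute();
if ($stmt->affected_rows === 0) {
    // somebody else modified the row first: reload the data and tell the user
}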
Queries are executed as soon as the user requests them, so if there are 10 users sending queries at the exact same time, then 10 queries will be executed at the exact same time.
nothing happens at the exact same time and a CPU does not do everything at once. It does things one at a time (per core and/or thread). If 10 users are accessing pages which run queries they will "hit" the server in a specific order and be processed in that order (although that order may be in milliseconds). However, if there are multiple queries on a page you can't be sure that all the queries on one user's page will complete before the queries on another user's page are started. This can lead to concurrency problems.
edit:
Run SHOW PROCESSLIST to find the id of the connection you want to kill; SHOW PROCESSLIST will give you a list of all currently running queries.
From the MySQL manual: MySQL will perform well with fast CPUs because each query runs in a single thread and can't be parallelized across CPUs. See also the manual section on how MySQL uses memory.
Consider a query similar to:
UPDATE items SET quantity = quantity - 1 WHERE id = 100
However many queries the MySQL server runs in parallel, if 2 such queries run and the row with id 100 has quantity 1, then something like this will happen by default:
The first query locks the row in items where id is 100
The second query tries to do the same, but the row is locked, so it waits
The first query changes the quantity from 1 to 0 and unlocks the row
The second query tries again and now sees the row is unlocked
The second query locks the row in items where id is 100
The second query changes the quantity from 0 to -1 and unlocks the row
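One common way to avoid ending up at -1 is to make the decrement conditional and check how many rows were actually changed. A sketch in PHP/mysqli (error handling omitted):

$db->query("UPDATE items SET quantity = quantity - 1 WHERE id = 100 AND quantity > 0");
if ($db->affected_rows === 1) {
    // we got the last item: proceed with the order
} else {
    // someone else bought it first: show "out of stock"
}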
This is essentially a concurrency question. There are ways to handle concurrency in MySQL by using transactions. This means that in your e-shop you can ensure that race conditions like the one you describe won't be an issue. See the links below about transactions in MySQL.
http://dev.mysql.com/doc/refman/5.0/en/sql-syntax-transactions.html
http://zetcode.com/databases/mysqltutorial/transactions/
Depending on your isolation level different outcomes will be returned from two concurrent queries.
Queries in MySQL are handled in parallel. You can read more about the implementation here.

what is an alternative way to profile your web app without going through a profiler program?

I have a website that uses PHP and MySQL. I want to determine the DB queries that take the most time. Instead of using a profiler, what other methods can I use to pinpoint the query bottlenecks?
You can enable logging of slow queries in MySql:
http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
The slow query log consists of all SQL statements that took more than long_query_time seconds to execute and (as of MySQL 5.1.21) required at least min_examined_row_limit rows to be examined. The time to acquire the initial table locks is not counted as execution time. mysqld writes a statement to the slow query log after it has been executed and after all locks have been released, so log order might be different from execution order. The default value of long_query_time is 10.
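As a rough example, enabling the slow query log via my.cnf might look like this (option names as in MySQL 5.1 and later; the file path and the 2-second threshold are just illustrative choices):

[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log    # example path
long_query_time     = 2                          # seconds; statements slower than this are logged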
This is a large subject and I can only suggest a couple of pointers, but I am sure there is good coverage elsewhere on SO.
Firstly, profiling DB queries: every RDBMS has a long-running query log; in MySQL it is turned on with a single config flag. This will catch queries that take a long time to run (which could be on the order of seconds, a long time for a modern RDBMS).
Also, every RDBMS returns an execution time along with its result set. I strongly suggest you pull all the calls to the database through one common function, say "executequery", and write the SQL and execution times to a file for later analysis.
In general, slow queries come from poor table design and the lack of good indexes. Run an EXPLAIN over any query that worries you; the database will tell you how it plans to run that query, and any "table scans" indicate the RDBMS cannot find an index on the table that meets the query's needs.
Next, profiling is a term most often used for seeing where time is spent executing parts of a program, i.e. which loops run all the time, or how long establishing a DB connection takes.
What you seem to want is performance testing.
Just read the time before and after executing each query. This is easy if you use any database abstraction class/function.
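A sketch of the kind of wrapper described above (the function name, log path and log format are arbitrary choices; assumes a mysqli handle):

function execute_query(mysqli $db, $sql)
{
    $start  = microtime(true);
    $result = $db->query($sql);
    $took   = microtime(true) - $start;
    // append "<seconds><tab><sql>" so the file can later be sorted with e.g. sort -rn
    file_put_contents('/tmp/query-times.log', sprintf("%.4f\t%s\n", $took, $sql), FILE_APPEND);
    return $result;
}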

How do I schedule SQL to execute later, at the database level?

---------Specification---------
Database: PostgreSQL
Language: PHP
---------Description---------
I want to create a table to store a transaction log of the database. I just want to store brief information.
I think that during heavy concurrent execution, adding data (transaction log rows from all tables) to a single log table will be a performance bottleneck.
So I thought of a solution: why not add the SQL for the transaction log to a queue which is executed automatically when there is NO heavy pressure on the database.
---------Question---------
Is any such facility available in PostgreSQL? Or how can I achieve similar functionality using a PHP cron job or any other method? Note: execution during LOW pressure on the DB is necessary.
---------Thanx in advance---------
EDIT:
Definition
Heavy pressure / heavy concurrent execution: about 500 or more queries per second on more than 10 tables concurrently.
NO heavy pressure: about 50 or fewer queries per second on fewer than 5 tables concurrently.
Transaction log table: if anything is edited/inserted/deleted in any table, its details must be INSERTED into the transaction log table.
I think that during heavy concurrent execution, adding data (transaction log rows from all tables) to a single log table will be a performance bottleneck.
Don't assume. Test.
Especially when it comes to performance. Doing premature optimization is a bad thing.
Please also define "heavy usage". How many inserts per second do you expect?
So I thought of a solution: why not add the SQL for the transaction log to a queue which is executed automatically when there is NO heavy pressure on the database
Define "no heavy pressure"? How do you find out?
All in all I would recommend to simply insert the data and tune PostgreSQL so that it can cope with the load.
You could move the data to a separate hard disk so that IO for the regular operations is not affected by this. In general, insert speed is limited by IO, so get yourself a fast RAID 10 system.
You will probably also need to tune the checkpoint segments and WAL writer.
But if you are not talking about something like 1000 inserts per second, you probably won't have to do much to make this work (a fast hard disk / RAID system assumed).
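The answer above does not mention it, but one related PostgreSQL knob worth knowing: if losing the last few log rows in a crash is acceptable (as it usually is for an audit-style table), a session can commit without waiting for the WAL flush by turning off synchronous_commit. A sketch from PHP (connection string and table definition are made up for illustration):

$pg = pg_connect('host=localhost dbname=app user=app');
// don't wait for the WAL flush on commit in this session; a crash may lose
// the last few rows, which is acceptable for a log table
pg_query($pg, "SET synchronous_commit = off");
pg_query($pg, "INSERT INTO transaction_log (table_name, action, happened_at) VALUES ('items', 'update', now())");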

Performance tuning MYSQL Database

How can I log which queries are executed and how many seconds each MySQL query takes to execute, in order to improve database performance?
I am using PHP
Use the slow query log for catching queries which run longer than a specified time limit (10 seconds by default). This will show you the worst offenders; probably many will be low-hanging fruit, fixable with proper indexes.
It won't catch a different type of code smell: a relatively fast query running many times in a tight loop.
If you wish, you may set long_query_time to 0 and it will log all queries; note that this in itself will slow down the server.
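On MySQL 5.1 and later these settings can be toggled at runtime without a server restart, for example as below (note that a changed global long_query_time only applies to connections opened after the change):

SET GLOBAL slow_query_log = 1;     -- start logging
SET GLOBAL long_query_time = 0;    -- log every statement (heavy; only do this briefly)
-- ... let the application run for a while, then:
SET GLOBAL slow_query_log = 0;     -- stop logging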
