MySQL 'locked' processes when copying tmp table - php

I have a query which takes a very long time to run but produces a new table in the end. The joins themselves are not all that slow; the query spends almost all of its time in 'Copying to tmp table', and during this time the status of all other queries (which should be going to unrelated tables) is 'Locked'. I am in the process of optimizing the long query, but it is OK for it to take a while since it is an offline process. It is NOT OK, however, for it to stop all other queries that should not be related to it anyway. Does anyone know why all the other unrelated queries would come back as 'Locked', and how to prevent this behavior?

You are right in that "unrelated tables" shouldn't be affected. They shouldn't, and to my knowledge they aren't.
There is a lot of information over at MySQL regarding locks, storage engines and ways of dealing with them.
To limit locks I would suggest that you write an application that reads all the data needed to build the new table and simply has your application insert the values into the new table. This may take longer, but it will work in smaller chunks and hold fewer or no locks; a rough sketch is below.
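A minimal sketch of that chunked approach, assuming hypothetical table and column names (source_data, new_table, a, b and the batch size are placeholders, none of them come from the question):

<?php
// Copy rows into the new table in small batches so each INSERT
// holds its locks only briefly instead of for the whole run.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

$offset = 0;
$batch  = 1000;   // rows per chunk; placeholder value

while (true) {
    // Read one chunk of the data the big join would have produced.
    $rows = $pdo->query("SELECT a, b FROM source_data ORDER BY id LIMIT $batch OFFSET $offset")
                ->fetchAll(PDO::FETCH_NUM);
    if (!$rows) {
        break;
    }

    // Insert the chunk with one multi-row statement, then move on.
    $placeholders = implode(',', array_fill(0, count($rows), '(?, ?)'));
    $stmt = $pdo->prepare("INSERT INTO new_table (a, b) VALUES $placeholders");
    $stmt->execute(array_merge(...$rows));

    $offset += $batch;
}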
Good luck!

What is your MySQL Version?
Do you use MyISAM? MyISAM has big LOCK problems on large SELECT statements, because it takes table-level locks.
Do you have a dedicated server? What is your maximum size for in-memory tables (look in my.cnf)?
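If you are unsure what those limits are, you can check them from the MySQL client; these are standard server variables:

-- In-memory tmp tables are capped by the smaller of these two values;
-- anything larger is written to disk, which is when 'Copying to tmp table' gets really slow.
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';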

Related

large number of inserts per second causing massive CPU load

I have a PHP script that, on every run, inserts a new row into a MySQL db (with a relatively small amount of data...)
I have more than 20 requests per second, and this is causing my CPU to scream for help...
I'm using the SQL INSERT DELAYED method with a MyISAM engine (although I just noticed that INSERT DELAYED does not seem to be working with MyISAM).
My main concern is the CPU load, and I have started to look for ways to store this data with more CPU-friendly solutions.
My first idea was to write this data to hourly log files and, once an hour, retrieve the data from the logs and insert it into the DB all at once.
Maybe a better idea is to use a NoSQL DB instead of log files, and then once an hour insert the data from the NoSQL store into MySQL...
I haven't tested any of these ideas yet, so I don't really know if they will manage to decrease my CPU load or not. I wanted to ask if someone can help me find the right solution that will have the lowest impact on my CPU.
I recently had a very similar problem and my solution was to simply batch the requests. This sped things up about 50 times because of the reduced overhead of MySQL connections and also the greatly decreased amount of reindexing. Storing them in a file and then doing one larger statement (100-300 individual inserts) at once is probably a good idea. To speed things up even more, turn off indexing for the duration of the insert with:
ALTER TABLE tablename DISABLE KEYS;
-- your batched insert statement(s) here
ALTER TABLE tablename ENABLE KEYS;
Doing the batch insert will reduce the number of instances of the PHP script running, it will reduce the number of currently open MySQL handles (a large improvement), and it will decrease the amount of indexing. A rough sketch of the whole approach follows.
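A sketch of the log-file-then-batch idea; the file path, table and columns are made up for illustration, and the format of each buffered line is assumed to be tab-separated:

<?php
// Hypothetical hourly cron: read rows buffered to a flat file and insert them in one statement.
$pdo = new PDO('mysql:host=localhost;dbname=stats', 'user', 'pass');

$lines = @file('/tmp/pending_rows.log', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
if ($lines) {
    $placeholders = [];
    $params = [];
    foreach ($lines as $line) {
        list($userId, $action) = explode("\t", $line);   // whatever format the logger wrote
        $placeholders[] = '(?, ?)';
        $params[] = $userId;
        $params[] = $action;
    }

    $pdo->exec('ALTER TABLE stats DISABLE KEYS');
    $stmt = $pdo->prepare('INSERT INTO stats (user_id, action) VALUES ' . implode(',', $placeholders));
    $stmt->execute($params);
    $pdo->exec('ALTER TABLE stats ENABLE KEYS');

    // Clear the buffer only after the batch is safely in the database.
    file_put_contents('/tmp/pending_rows.log', '');
}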
OK guys, I managed to lower the CPU load dramatically with APC cache.
I'm doing it like so:
I store the data in memory with APC cache, with a TTL of 70 seconds:
apc_store('prfx_SOME_UNIQUE_STRING', $data, 70);
Once a minute I loop over all the records in the cache:
$apc_list = apc_cache_info('user');
foreach ($apc_list['cache_list'] as $apc) {
    // Only pick up our own 'prfx_' entries, and skip anything that has already expired.
    if ((substr($apc['info'], 0, 5) == 'prfx_') && ($val = apc_fetch($apc['info']))) {
        $values[] = $val;
        apc_delete($apc['info']);
    }
}
Then I insert the $values into the DB in one batch,
and the CPU continues to smile..
enjoy
I would insert a sleep(1); call at the top of your loop, before every insert, where 1 = 1 second. This only allows the loop to cycle once per second.
That way it will regulate a bit how much load the CPU is getting; this would be ideal assuming you're only writing a small number of records in each run.
You can read more about the sleep function here : http://php.net/manual/en/function.sleep.php
It's hard to tell without profiling both methods. If you write to a log file first you could end up just making it worse, as you're turning your operation count from N into N*2. You gain a slight edge by writing it all to a file and doing a batch insert, but bear in mind that as the log file fills up its load/write time increases.
To reduce database load, look at using memcache for database reads if you're not already.
All in all though, you're probably best off just trying both and seeing which is faster.
Since you are trying INSERT DELAYED, I assume you don't need up-to-the-second data. If you want to stick with MySQL, you can try using replication and the BLACKHOLE table type. By declaring a table as type BLACKHOLE on one server, then replicating it to a MyISAM or other table type on another server, you can smooth out CPU and IO spikes. BLACKHOLE is really just a replication log file, so "inserts" into it are very fast and light on the system.
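Roughly what that looks like on the server that receives the writes; the table definition below is invented for illustration (the replica would define the same table with a real engine such as MyISAM or InnoDB):

-- Nothing is stored locally; inserts only go into the binary log and get replicated.
CREATE TABLE stats (
    user_id INT NOT NULL,
    action  VARCHAR(32) NOT NULL,
    created DATETIME NOT NULL
) ENGINE=BLACKHOLE;

-- Inserts return almost immediately because no table data is written.
INSERT INTO stats (user_id, action, created) VALUES (42, 'page_view', NOW());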
I do not know your table size or your server capabilities, but I guess you need to make a lot of inserts into a single table. In such a situation I would recommend looking into vertical partitioning, which will reduce the physical size of each partition and significantly reduce the insertion time for the table.

php mysql, queries get locked and never returned

My system creates a lot of transactions, as it has many users and a lot of data which is checked on a daily basis and renewed.
Somehow, at a certain moment (I am not sure if it is the backup which did it), queries end up with a status of LOCKED, and somehow they never return. Is this a deadlock?
The database is not returning anything to the code either, so I can't check if it's locked or not. Also, this causes other queries to be stopped and pile up, and my server runs out of connections...
Any ideas on this?
It may be caused by several issues. The most popular is MyISAM table locking. Just run this query: SHOW STATUS LIKE 'Table%'; and post the result here. If Table_locks_waited is big (e.g. more than 0.5% of Table_locks_immediate) and you are using MyISAM, switch to the InnoDB table engine.
If your database is not very big, changing the engine is pretty fast and transparent.
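For instance (the table name below is just a placeholder; check the lock counters first, then convert the busiest MyISAM table off-peak, since the ALTER rewrites it):

-- How often did queries have to wait for a table lock?
SHOW STATUS LIKE 'Table_locks%';

-- Convert a hot MyISAM table to InnoDB.
ALTER TABLE orders ENGINE=InnoDB;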
Note that all your locked queries are "write" queries. That's because MyISAM's long-running SELECTs lock the tables. Moreover, such selects can cause a kind of deadlock. Quotation from the docs:
MySQL grants table write locks as follows:
If there are no locks on the table, put a write lock on it.
Otherwise, put the lock request in the write lock queue.
MySQL grants table read locks as follows:
If there are no write locks on the table, put a read lock on it.
Otherwise, put the lock request in the read lock queue.
Don't forget to tune innodb_* params!
If you don't want to switch to InnoDB (why not?!), you can tune the concurrent_insert parameter (try "2") in your my.cnf.
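A minimal my.cnf sketch of both suggestions; the values are illustrative starting points, not tuned for this workload:

[mysqld]
# InnoDB: give the buffer pool a generous share of the RAM available to MySQL.
innodb_buffer_pool_size = 1G
innodb_flush_log_at_trx_commit = 2   # fewer flushes, at the cost of up to a second of durability

# MyISAM only: allow concurrent inserts even into tables with deleted rows.
concurrent_insert = 2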
Btw, I see a lot of sleeping connections. Do you have persistent connections? If "yes", do you close them properly?

How do I schedule an SQL statement to execute later #database level

---------Specification---------
Database: PostgreSQL
Language: PHP
---------Description---------
I want to create a table to store a transaction log of the database. I just want to store brief information.
I think that during heavy concurrent execution, adding data (the transaction log from all tables) to a single log table will bottleneck performance.
So I thought of a solution: why not add the SQL for the transaction log to a queue which will execute automatically when there is NO heavy pressure on the database.
---------Question---------
Is there any similar facility available in PostgreSQL? Or how can I achieve similar functionality using a PHP cron job or any other method? Note: execution during LOW pressure on the DB is necessary.
---------Thanks in advance---------
EDIT:
Definition
Heavy pressure/heavy concurrent execution: about 500 or more queries per second on more than 10 tables concurrently.
No heavy pressure: about 50 or fewer queries per second on fewer than 5 tables concurrently.
Transaction log table: if anything is edited/inserted/deleted in any table, its details must be INSERTED into the transaction log table.
I think that during heavy concurrent execution, adding data (the transaction log from all tables) to a single log table will bottleneck performance.
Don't assume. Test.
Especially when it comes to performance. Doing premature optimization is a bad thing.
Please also define "heavy usage". How many inserts per second do you expect?
So I thought of a solution: why not add the SQL for the transaction log to a queue which will execute automatically when there is NO heavy pressure on the database
Define "no heavy pressure"? How do you find out?
All in all I would recommend to simply insert the data and tune PostgreSQL so that it can cope with the load.
You could move the data to a separate hard disk so that IO for the regular operations is not affected by this. In general, insert speed is limited by IO, so get yourself a fast RAID 10 system.
You will probably also need to tune the checkpoint segments and WAL writer.
But if you are not talking about something like 1000 inserts per second, you probably won't have to do much to make this work (a fast hard disk/RAID system assumed).
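For reference, these are the kinds of knobs the tuning advice above is talking about in postgresql.conf; the values are only illustrative and depend on the hardware and PostgreSQL version:

# postgresql.conf - smooth out write bursts from the log table
shared_buffers = 2GB                 # server cache; a fraction of total RAM
wal_buffers = 16MB                   # buffer WAL writes before they hit disk
checkpoint_segments = 32             # spread checkpoints out (older versions; newer ones use max_wal_size)
checkpoint_completion_target = 0.9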

How can I speed up InnoDB queries to be comparable to MyISAM performance?

I have recently switched my database tables from MyISAM to InnoDB and am experiencing bad timeouts with queries, mostly inserts. One function I use previously took < 2 seconds to insert, delete and update a large collection of records across ~30 MyISAM tables, but now that they are InnoDB, the function causes a PHP timeout.
The timeout was set to 60 seconds. I have optimised my script enough that now, even though there are still many queries, they are combined together (multiple inserts, multiple deletes, etc.) and the script takes ~25 seconds, which is a substantial improvement over what previously appeared to be at least 60 seconds.
This is still over 10x slower than it was with MyISAM. Are there any mistakes I could be making in the way I process these queries? Or are there any settings that could assist with performance? Currently MySQL is using the default installation settings.
The queries are nothing special: DELETE ... WHERE ... with simple logic, and the same goes for the INSERT and UPDATE queries.
Hard to say without knowing too much about your environment, but this might be more of a database tuning problem. InnoDB can be VERY slow on budget hardware where every write forces a true flush. (This affects writes, not reads.)
For instance, you may want to read up on options like:
innodb_flush_log_at_trx_commit=2
sync_binlog=0
By avoiding the flushes you may be able to speed up your application considerably, but at the cost of potential data loss if the server crashes.
If data loss is something you absolutely cannot live with, then the other option is to use better hardware.
Run explain for each query. That is, if the slow query is select foo from bar;, run explain select foo from bar;.
Examine the plan, and add indices as necessary. Re-run the explain, and make sure the indices are being used.
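An example of that cycle, with a made-up filter column and index; the real query and index depend on your schema:

-- See how MySQL plans to execute the slow statement.
EXPLAIN SELECT foo FROM bar WHERE customer_id = 42;

-- If the plan shows a full table scan (type = ALL), add an index on the filter column...
ALTER TABLE bar ADD INDEX idx_customer_id (customer_id);

-- ...then re-run EXPLAIN and confirm the index shows up under possible_keys/key.
EXPLAIN SELECT foo FROM bar WHERE customer_id = 42;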
InnoDB also builds adaptive hash indexes, which help speed up lookups by bypassing the B-tree index and using a hash instead, which is faster.

MySQL preventing dual loading

Right guys,
I have a MySQL database using InnoDB for its tables. Every so often I have to perform a big cron job that does a large batch of queries and inserts. When I run this cron job, for the 5 minutes or so that it is running, no other page is able to load. As soon as it is done, the queries are executed and the pages load.
The table that is actually having all this data added to it isn't even being queried by the main site. It is simply that when MySQL is under a lot of work, the rest of the site is untouchable. This surely must not be right; what could be causing this to happen? CPU usage for mysqld rockets to huge figures like 120% (!!!!!) and all MySQL queries are locked.
What could cause/fix this?
No, that's obviously wrong. This is probably related to bad configuration. Take a look at the size of the InnoDB buffer pool and see if it can be increased. This sounds like a typical case of RAM shortage. Healthy setups are almost never CPU-bound, and certainly not when doing bulk inserts.
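To see where that setting currently sits and what raising it might look like (the 4G figure is only an example; size it to your RAM and data):

-- Current buffer pool size in bytes; the stock default is very small.
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

Then in my.cnf:

[mysqld]
innodb_buffer_pool_size = 4G   # example only; leave room for the OS and other processes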
With InnoDB, other things should still be able to access the database. Are you prepared to show the schema (or relevant part of it) and the relevant parts of the application?
Maybe it's contention in hardware.
How big are the transactions which your "cron" job is using? Using tiny transactions will create a massive amount of IO needlessly (see the sketch after these questions).
Do your database servers have battery-backed RAID controllers (assuming your servers use hard drives, not SSDs)? If not, commits will be quite slow.
How much RAM is in your database server? If possible, ensure that it is a bit bigger than your database and set innodb_buffer_pool_size to more than the data size - this will mean that read workloads come out of RAM anyway, which should make them fast.
Can you reproduce the problem in a test system on production-grade hardware?
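On the transaction-size point, a rough PHP sketch of grouping the cron job's inserts into one transaction instead of autocommitting every row; the table, columns and $batchRows variable are placeholders for whatever the real job works with:

<?php
// Wrap the whole batch in a single transaction so InnoDB flushes its log
// once per batch instead of once per row.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->beginTransaction();

$stmt = $pdo->prepare('INSERT INTO import_data (ref, amount) VALUES (?, ?)');
foreach ($batchRows as $row) {   // $batchRows: whatever the cron job has read from its source
    $stmt->execute([$row['ref'], $row['amount']]);
}

$pdo->commit();   // one commit, one log flush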
I think you might need to re-think how you are building up your queries. InnoDB has row-level locks, but with massive updates you can still lock down quite a bit of your data and block other queries.
Post your actual queries and try again... I don't think there is a generic solution for a generic question like this, so look into optimizing what you're doing today.
You could have the script delay for 1/10 of a second or so between each query. It will take longer but allow activity in the background. Note that PHP's sleep() only accepts whole seconds, so use usleep() for a sub-second pause:
usleep( 100000 );   // pause for 0.1 second
You will probably only need to do this for the writes; reads are very cheap.
