In a PHP script working with a MySQL database, I recently needed to use a transaction at a point that happened to be inside another transaction. All my tests seem to indicate this is working out fine, but I can't find any documentation on this usage.
I want to be sure - are transactions within transactions valid in MySQL? If so, is there a way to find out how many levels deep you are in nested transactions? (i.e. how many rollbacks it would take to return to normal)
Thanks in advance,
Brian
Contrary to everyone else's answer, you can effectively create transactions within transactions and it's really easy. You just create SAVEPOINT locations and use ROLLBACK TO savepoint to roll back part of the transaction, where savepoint is whatever name you give the savepoint.
Link to MySQL documentation: http://dev.mysql.com/doc/refman/5.0/en/savepoint.html
And of course, none of the queries anywhere in the transaction should be of the type that implicitly commit, or the whole transaction will be committed.
Examples:
START TRANSACTION;
# queries that don't implicitly commit
SAVEPOINT savepoint1;
# queries that don't implicitly commit
# now you can either ROLLBACK TO savepoint1, or just ROLLBACK to reverse the entire transaction.
SAVEPOINT savepoint2;
# queries that don't implicitly commit
# now you can ROLLBACK TO savepoint1 OR savepoint2, or ROLLBACK all the way.
# e.g.
ROLLBACK TO savepoint1;
COMMIT; # results in committing only the part of the transaction up to savepoint1
In PHP I have written code like this, and it works perfectly:
foreach ($some_data as $key => $sub_array) {
    $result = mysql_query('START TRANSACTION'); // note mysql_query is deprecated in favor of PDO
    $rollback_all = false; // set to true to undo whole transaction
    for ($i = 0; $i < count($sub_array); $i++) {
        if ($sub_array[$i]['set_save'] === true) {
            $savepoint = 'savepoint' . $i;
            $result = mysql_query("SAVEPOINT $savepoint");
        }
        $sql = 'UPDATE `my_table` SET `x` = `y` WHERE `z` < `n`'; // some query/queries
        $result = mysql_query($sql); // run the update query/queries
        $more_sql = 'SELECT `x` FROM `my_table`'; // get data for checking
        $result = mysql_query($more_sql);
        $rollback_to_save = false; // set to true to undo to last savepoint
        while ($row = mysql_fetch_array($result)) {
            // run some checks on the data
            if (false /* some check says to go back to the savepoint */) {
                $rollback_to_save = true; // or just do the rollback here.
            }
            if (false /* some check says to roll back the entire transaction */) {
                $rollback_all = true;
            }
        }
        if ($rollback_all === true) {
            mysql_query('ROLLBACK'); // roll back entire transaction
            break; // break out of for loop, into next foreach
        }
        if ($rollback_to_save === true) {
            mysql_query("ROLLBACK TO $savepoint"); // undo just this part of the for loop
        }
    } // end of for loop
    mysql_query('COMMIT'); // if you don't do this, the whole transaction will roll back
}
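Since mysql_query is deprecated (as the comment above notes), here is a minimal sketch of the same savepoint pattern using PDO instead; the DSN, credentials, table, and the $ok check are placeholders of mine, not part of the original answer.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); // throw on SQL errors

$pdo->beginTransaction();                  // START TRANSACTION
$pdo->exec('UPDATE `my_table` SET `x` = `y` WHERE `z` < `n`');

$pdo->exec('SAVEPOINT savepoint1');        // mark a point we can roll back to
$pdo->exec('UPDATE `my_table` SET `x` = `x` + 1');

$ok = true; // placeholder for whatever check you run on the data
if (!$ok) {
    $pdo->exec('ROLLBACK TO savepoint1');  // undo only the work done after the savepoint
}

$pdo->commit();                            // commits everything that was not rolled back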
This page of the manual might interest you: 12.3.3. Statements That Cause an Implicit Commit; quoting a few sentences:
The statements listed in this section (and any synonyms for them) implicitly end a transaction, as if you had done a COMMIT before executing the statement.
And, a bit farther down the page:
Transaction-control and locking statements. BEGIN, LOCK TABLES, SET autocommit = 1 (if the value is not already 1), START TRANSACTION, UNLOCK TABLES.
See also this paragraph:
Transactions cannot be nested. This is a consequence of the implicit commit performed for any current transaction when you issue a START TRANSACTION statement or one of its synonyms.
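To see the quoted behaviour in practice, here is a minimal sketch, assuming $db is an open mysqli connection and t is a hypothetical InnoDB table, showing how a second START TRANSACTION implicitly commits the first:
$db->query('START TRANSACTION');
$db->query('INSERT INTO t (val) VALUES (1)');

$db->query('START TRANSACTION'); // implicit COMMIT: the first INSERT is now permanent
$db->query('INSERT INTO t (val) VALUES (2)');

$db->query('ROLLBACK'); // only the second INSERT is undone; val = 1 stays committed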
I want to be sure - are transactions within transactions valid in MySQL?
No.
MySQL doesn't support nested transactions. There are a few ways that you can emulate it though. First, you can use savepoints as a form of transaction, which gives you two levels of transactions; I've used this for testing, but I'm not sure about the limitations of using it in production code. A simpler solution is to ignore the second begin-transaction call and instead increment a counter; for each commit you decrement it, and once you hit zero you do an actual commit (see the sketch below). There are obvious limitations to this, e.g. a rollback will roll all transactions back, but for a case where you only use transactions for error handling, that may be acceptable.
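A minimal sketch of that counter approach, assuming a small hand-rolled wrapper around mysqli (the class and method names are made up for illustration):
class NestableTransaction
{
    private $db;        // mysqli connection
    private $level = 0; // how many begin() calls are currently open

    public function __construct(mysqli $db)
    {
        $this->db = $db;
    }

    public function begin()
    {
        if ($this->level === 0) {
            $this->db->query('START TRANSACTION'); // only the outermost begin is real
        }
        $this->level++;
    }

    public function commit()
    {
        $this->level--;
        if ($this->level === 0) {
            $this->db->query('COMMIT'); // only the outermost commit is real
        }
    }

    public function rollback()
    {
        // Limitation noted above: this throws away all levels, not just the inner one.
        $this->level = 0;
        $this->db->query('ROLLBACK');
    }
}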
There are some great answers in this thread. However, if you use InnoDB as your MySQL storage engine and are running MySQL 5.0.3 or higher, you get nested transactions right out of the box, without any extra work on your part or any of the fancy techniques described by others in this thread.
From the MySQL docs on XA Transactions:
MySQL 5.0.3 and up provides server-side support for XA transactions. Currently, this support is available for the InnoDB storage engine. The MySQL XA implementation is based on the X/Open CAE document Distributed Transaction Processing: The XA Specification. This document is published by The Open Group and available at http://www.opengroup.org/public/pubs/catalog/c193.htm. Limitations of the current XA implementation are described in Section E.5, “Restrictions on XA Transactions”.
My XA Transaction Example Just For You:
# Start a new XA transaction (every XA statement needs an xid; 'outer' and 'inner' are just labels)
XA START 'outer';
# update my bank account balance, they will never know!
UPDATE `bank_accounts` SET `balance` = 100000 WHERE `id` = 'mine';
# $100,000.00 is a bit low, I'm going to consider adding more, but I'm not sure, so
# I will start a NESTED transaction and debate it...
XA START 'inner';
# max int money! woo hoo!
UPDATE `bank_accounts` SET `balance` = 2147483647 WHERE `id` = 'mine';
# maybe that's too conspicuous, better roll back
XA END 'inner';
XA ROLLBACK 'inner';
# The $100,000 UPDATE still applies here, but the max int money does not, going for it!
XA END 'outer';
XA COMMIT 'outer' ONE PHASE;
# Oh no! Sirens! It's the popo's!!! Run!!
# What the hell are they using ints for money columns anyway! Ahhhh!
MySQL Documentation For XA Transactions:
13.3.7. XA Transactions
13.3.7.1. XA Transaction SQL Syntax
13.3.7.2. XA Transaction States
E.6. Restrictions on XA Transactions
I <3 XA Transactions 4 Eva!
You might want to check your testing methodology. Outside of MaxDB, MySQL doesn't support anything remotely like nested transactions.
It does:
http://dev.mysql.com/doc/refman/5.0/en/xa.html
My original error was
Error No: 1213 - Deadlock found when trying to get lock; try restarting transaction
Okay, so I wrote a loop with max retries and a wait in between to try and get through the deadlocks.
$Try = 0;
while (!($Result = $dbs->query($MySQL))) {
    $Try++;
    if ($Try === MYSQL_MAX_RETRIES) {
        HandleMySQLError($dbs->error, $MySQL, false, $Test, $Trace);
    } else {
        sleep(MYSQL_RETRY_WAIT);
    }
}
But now I'm still constantly getting some of the original error, plus a new error:
Got error 35 "Resource deadlock avoided" during COMMIT
But I can't really seem to find out what this means or how to fix it?
EDIT
I left out a ton of information when I first wrote this, but the server is a Red Hat 7 AWS EC2 instance (well, 3 of them) in a Galera & MariaDB cluster.
The query I am running is a call to a stored procedure
call `getchatmessages`('<ChatID>', '<UserID>', from_unixtime('<Some Timestamp>'));
And the stored procedure is as follows
CREATE DEFINER=`root`@`%` PROCEDURE `getchatmessages`(IN `__ChatID` CHAR(36), IN `__UserID` CHAR(36), IN `__Timestamp` TIMESTAMP(6))
BEGIN
DECLARE `__NewChatMessages` TINYINT(1) DEFAULT 0;
DECLARE `__i` INT(11) DEFAULT 0;
DECLARE `__Interval` INT(11) DEFAULT 100; -- ms
DECLARE `__Timeout` INT(11) DEFAULT 15000; -- ms
while `__NewChatMessages`=0 and `__i`<`__Timeout`/`__Interval` do
    select 1 into `__NewChatMessages` from `chatmessages` where `ChatID`=`__ChatID` and `DateTimeAdded`>ifnull(`__Timestamp`,0) limit 1;
    update `chatusers` set `DateTimeRead`=now(6) where `ChatID`=`__ChatID` and `UserID`=`__UserID`;
    do sleep(`__Interval`/1000);
    set `__i`=`__i`+1;
end while;
select `chatmessages`.`Body`, `chatmessages`.`ChatID`, `chatmessages`.`UserID`,
`chatmessages`.`ChatMessageID`, `chatmessages`.`DateTimeAdded`, UNIX_TIMESTAMP(`chatmessages`.`DateTimeAdded`) `Timestamp`, `users`.`FirstName`,
`users`.`LastName`
from `chatmessages`
join `users` using (`UserID`)
where `chatmessages`.`ChatID`=`__ChatID`
and `chatmessages`.`DateTimeAdded`>ifnull(`__Timestamp`,0)
order by `chatmessages`.`DateTimeAdded` desc
limit 100;
END
Deadlock in Galera Cluster (MariaDB Galera Cluster, 3 nodes) is not a typical deadlock, but the cluster's way of communicating multi-master conflicts:
http://galeracluster.com/documentation-webpages/dealingwithmultimasterconflicts.html
The easiest way to avoid deadlocks is to write to one node at a time, i.e. configure HAProxy to write to one node only. In your case you will run the stored procedure on node 1 (it does not matter which node, but it should always be the same one, a sort of "sticky session").
More information here: https://severalnines.com/blog/avoiding-deadlocks-galera-set-haproxy-single-node-writes-and-multi-node-reads
Is this proc being called inside a transaction? If so, I'd argue strongly against its design. You have a loop with a sleep holding the transaction open.
Instead, have the UPDATE be a transaction by itself.
This may virtually eliminate the deadlocks. However, you should still handle deadlocks, as discussed in the other answer(s).
Edit: Since there are no BEGINs and autocommit=ON, the OP is already following this advice. Alas.
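For the "handle deadlocks" part, a minimal retry sketch might look like the following; the function name is hypothetical, and MYSQL_MAX_RETRIES / MYSQL_RETRY_WAIT are the question's own constants:
// Retry only on deadlock (1213) or lock wait timeout (1205), up to a fixed limit.
function query_with_retry(mysqli $dbs, $sql)
{
    for ($try = 1; $try <= MYSQL_MAX_RETRIES; $try++) {
        $result = $dbs->query($sql);
        if ($result !== false) {
            return $result;                            // success
        }
        if (!in_array($dbs->errno, array(1213, 1205), true)) {
            break;                                     // not a deadlock/timeout: don't retry
        }
        sleep(MYSQL_RETRY_WAIT);                       // back off before the next attempt
    }
    throw new RuntimeException('Query failed after retries: ' . $dbs->error);
}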
I have a fairly large, complex chunk of SQL that creates a report for a particular page. This SQL has several variable declarations at the top, which I suspect is giving me problems.
When I get the ODBC result, I have to use odbc_next_result() to get past the empty result sets that seem to be returned with the variables. That seems to be no problem. When I finally get to the "real" result, odbc_num_rows() tells me it has over 12 thousand rows, when in actuality it has 6.
Here is an example of what I'm doing, to give you an idea, without going into details on the class definitions:
$report = $db->execute(Reports::create_sql('sales-report-full'));
while (odbc_num_rows($report) <= 1) {
    odbc_next_result($report);
}
echo odbc_num_rows($report);
The SQL looks something like this:
DECLARE @actualYear int = YEAR(GETDATE());
DECLARE @curYear int = YEAR(GETDATE());
IF MONTH(GETDATE()) = 1
    SELECT @curYear = @curYear - 1;
DECLARE @lastYear int = @curYear - 1;
DECLARE @actualLastYear int = @actualYear - 1;
DECLARE @tomorrow datetime = DATEADD(dd, 1, GETDATE());
SELECT * FROM really_big_query
Generally speaking, it's always a good idea to start every stored procedure, or batch of commands to be executed, with the set nocount on instruction - which tells SQL Server to suppress sending "rows affected" messages to a client application over ODBC (or ADO, etc.). These messages affect performance and cause empty record sets to be created in the result set - which causes additional effort for application developers.
I've also used ODBC drivers that actually error if you forget to suppress these messages - so it has become instinctive for me to type set nocount on as soon as I start writing any new stored procedure.
There are various question/answers relating to this subject, for example What are the advantages and disadvantages of turning NOCOUNT off in SQL Server queries? and SET NOCOUNT ON usage which cover many other aspects of this command.
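Applied to the question's report, that advice amounts to prepending SET NOCOUNT ON to the batch; a rough sketch from the PHP side, reusing the question's own Reports helper and $db connection:
// SET NOCOUNT ON stops the DECLARE/SELECT statements from emitting "rows affected"
// messages, which otherwise surface as extra empty result sets over ODBC.
$sql = "SET NOCOUNT ON;\n" . Reports::create_sql('sales-report-full');
$report = $db->execute($sql);

// With NOCOUNT on, the first result set should already be the "real" one,
// so the odbc_next_result() skipping loop should no longer be needed.
echo odbc_num_rows($report);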
function generateRandomData() {
    $db = new mysqli('localhost', 'XXX', 'XXX', 'scores');
    if (mysqli_connect_errno()) {
        echo 'Failed to connect to database. Please try again later.';
        exit;
    }
    $query = "insert into scoretable values(?,?,?)";
    for ($a = 0; $a < 1000000; $a++)
    {
        $stmt = $db->prepare($query);
        $id = rand(1, 75000);
        $score = rand(1, 100000);
        $time = rand(1367038800, 1369630800);
        $stmt->bind_param("iii", $id, $score, $time);
        $stmt->execute();
    }
}
I am trying to populate a data table in mysql with a million rows of data. However, this process is extremely slow. Is there anything obvious I'm doing wrong that I could fix in order to make it run faster?
As hinted in the comments, you need to reduce the number of queries by concatenating as many inserts as possible together. In PHP, it is easy to achieve that:
$query = "insert into scoretable values";
for($a = 0; $a < 1000000; $a++) {
$id = rand(1,75000);
$score = rand(1,100000);
$time = rand(1367038800 ,1369630800);
$query .= "($id, $score, $time),";
}
$query[strlen($query)-1]= ' ';
There is a limit on the maximum size of queries you can execute, which is directly related to the max_allowed_packet server setting (this page of the MySQL documentation describes how to tune that setting to your advantage).
Therefore, you will have to reduce the loop count above to reach an appropriate query size, and repeat the process to reach the total number you want to insert by wrapping that code in another loop, as sketched below.
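A minimal sketch of that outer loop, assuming $db is an open mysqli connection and an arbitrary chunk size of 10,000 rows per statement (pick a size that keeps each statement under max_allowed_packet):
$total     = 1000000; // rows to insert in total
$chunkSize = 10000;   // rows per INSERT statement (illustrative value)

for ($done = 0; $done < $total; $done += $chunkSize) {
    $rows = array();
    for ($a = 0; $a < $chunkSize; $a++) {
        $id    = rand(1, 75000);
        $score = rand(1, 100000);
        $time  = rand(1367038800, 1369630800);
        $rows[] = "($id, $score, $time)";
    }
    $query = 'insert into scoretable values ' . implode(',', $rows);
    if (!$db->query($query)) {
        die('Insert failed: ' . $db->error);
    }
}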
Another practice is to disable key and foreign-key checks on the table into which you wish to bulk insert:
ALTER TABLE yourtablename DISABLE KEYS;
SET FOREIGN_KEY_CHECKS=0;
-- bulk insert comes here
SET FOREIGN_KEY_CHECKS=1;
ALTER TABLE yourtablename ENABLE KEYS;
This practice, however, must be applied carefully, especially in your case since you generate the values randomly. If you have a unique key on any of the columns you generate, you cannot use that technique with your query as it is, as it may produce duplicate key inserts. You probably want to add an IGNORE clause to it:
$query = "insert IGNORE into scoretable values";
This will cause the server to silently ignore duplicate entries on unique keys. To reach the total number of required inserts, just loop as many times as needed to fill in the remaining missing rows.
I suppose that the only place where you could have a unique key constraint is on the id column. In that case, you will never be able to reach the number of rows you wish to have, since it is way above the range of random values you generate for that field. Consider raising that limit, or better yet, generate your ids differently (perhaps simply by using a counter, which will make sure every record uses a different key; a tiny sketch follows).
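A tiny sketch of the counter idea, reusing the question's rand() ranges for the other columns:
for ($a = 0; $a < 1000000; $a++) {
    $id    = $a + 1;                       // unique by construction, unlike rand(1, 75000)
    $score = rand(1, 100000);
    $time  = rand(1367038800, 1369630800);
    // ... append "($id, $score, $time)" to the bulk INSERT as shown above
}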
You are doing several things wrong. The first thing you have to take into account is which MySQL engine you're using.
The default one is InnoDB; previously the default engine was MyISAM.
I'll write this answer under the assumption you're using InnoDB, which you should be using for a plethora of reasons.
InnoDB operates in something called autocommit mode. That means that every query you make is wrapped in a transaction.
To translate that into a language us mere mortals can understand - every query you run without an explicit BEGIN WORK; block is its own transaction - ergo, MySQL will wait until the hard drive confirms the data has been written.
Knowing that hard drives are slow (mechanical ones are still the most widely used), that means your inserts will only be as fast as the hard drive is. Usually, mechanical hard drives can perform about 300 input/output operations per second; ergo, assuming you can do 300 inserts a second - yes, you'll wait quite a bit to insert 1 million records.
So, knowing how things work - you can use them to your advantage.
The amount of data that the HDD will write per transaction will be generally very small (4KB or even less), and knowing today's HDDs can write over 100MB/sec - that indicates that we should wrap several queries into a single transaction.
That way MySQL will send quite a bit of data and wait for the HDD to confirm it wrote everything and that the whole world is fine and dandy.
So, assuming you have 1M rows you want to populate - you'll execute 1M queries. If your transactions commit 1000 queries at a time, you should perform only about 1000 write operations.
That way, your code becomes something like this:
(I am not familiar with the mysqli interface so the function names might be wrong, and since I'm typing without actually running the code, the example might not work - use it at your own risk.)
function generateRandomData()
{
    $db = new mysqli('localhost', 'XXX', 'XXX', 'scores');
    if (mysqli_connect_errno()) {
        echo 'Failed to connect to database. Please try again later.';
        exit;
    }
    $query = "insert into scoretable values(?,?,?)";
    // We prepare ONCE, that's the point of prepared statements
    $stmt = $db->prepare($query);
    $start = 0;
    $top = 1000000;
    for ($a = $start; $a < $top; $a++)
    {
        // If this is the very first iteration, start the transaction
        if ($a == 0)
        {
            $db->begin_transaction();
        }
        $id = rand(1, 75000);
        $score = rand(1, 100000);
        $time = rand(1367038800, 1369630800);
        $stmt->bind_param("iii", $id, $score, $time);
        $stmt->execute();
        // Commit on every thousandth query
        if (($a % 1000) == 0 && $a != ($top - 1))
        {
            $db->commit();
            $db->begin_transaction();
        }
        // If this is the very last query, then we just need to commit and end
        if ($a == ($top - 1))
        {
            $db->commit();
        }
    }
}
DB querying involves many interrelated tasks. As a result it is an 'expensive' process. It is even more 'expensive' when it comes to insertion/update.
Running the query once is the best way to enhance performance.
You can build the statement in the loop and run it once.
e.g.
$query = "insert into scoretable values ";
for($a = 0; $a < 1000000; $a++)
{
$values = " ('".$?."','".$?."','".$?."'), ";
$query.=$values;
...
}
...
//remove the last comma
...
$stmt = $db->prepare($query);
...
$stmt->execute();
Have a look at this gist I've created. It takes about 5 minutes to insert a million rows on my laptop.
I'm trying to update 100,000 rows in my database; the following code should do that, but I always get an error:
Error: Commands out of sync; you can't run this command now
Because it is an update, I don't need the results and just want to get rid of them. The $count variable is used so that my database gets chunks of updates instead of one big update. (One big update does not work because of some limitations of the database.)
I tried a lot of different things like mysqli_free_result and so on... nothing worked.
global $mysqliObject;
$count = 0;
$statement = "";
foreach ($songsArray as $song) {
    $id = $song->getId();
    $treepath = $song->getTreepath();
    $statement = $statement . "UPDATE songs SET treepath='" . $treepath . "' WHERE id=" . $id . "; ";
    $count++;
    if ($count > 10000) {
        $result = mysqli_multi_query($mysqliObject, $statement);
        if (!$result) {
            die('<br/><br/>Error1: ' . mysqli_error($mysqliObject));
        }
        $count = 0;
        $statement = "";
    }
}
Using a prepared query will reduce the CPU load in the mysqld process, as DaveRandom and StevenVI suggest. However, in this case I doubt that using prepared queries will materially impact your runtime. The challenge you have is that you are attempting to update 100K rows in the songs table, and this is going to involve a lot of physical I/O on your disk subsystem. It is these physical delays (say ~10 msec per PIO) that will dominate runtimes. Factors such as what is contained in each row and how many indexes you are using on the table (especially those that involve treepath) will all blend into this mix.
The actual CPU costs of preparing a simple statement like
UPDATE songs SET treepath="some treepath" WHERE id=12345;
will be lost in this overall physical I/O delay, and the relative size of this will materially depend on the nature of the physical subsystem where you are storing your data: a single SATA disk; SSD; some NAS with large caches and SSD support ...
You need to rethink your overall strategy here, especially if you are also using the songs table at the same time as a resource for interactive requests through a web front-end. Updating 100K rows is going to take some time -- less if you are updating 100K out of 100K in storage order, since this will be more aligned to the MYD organisation and the write-through caching will be better; more if you are updating 100K rows in random order out of 1M rows, where the number of PIOs will be a lot higher.
When you are doing this, the overall performance of your D/B is going to degrade badly.
Do you want to minimise impact on parallel use of your DB, or are you just trying to do this as a dedicated batch operation with other services offline?
Is your goal to minimise the total elapsed time, to keep it reasonably short subject to some overall impact constraint, or even just to complete without dying?
I suggest that you've got two sensible approaches: (i) do this as a proper batch activity with the D/B offline to other services. In this case you probably want to take out a lock on the table, and bracket the updates with ALTER TABLE ... DISABLE/ENABLE KEYS. (ii) do this as a trickle update with far smaller update sets and a delay between each set to allow the D/B to flush to disk.
Whatever you choose, I'd drop the batch size. The multi_query essentially optimises the RPC overheads involved in calling the out-of-process mysqld. A batch of 10, say, cuts this by 90%. You've got diminishing returns after this -- especially since the updates will be physical I/O intensive.
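A rough sketch of the trickle-update option with the much smaller batch size suggested above; the batch size, pause, and run_batch helper are illustrative choices of mine, not code from the question:
$batchSize = 10; // per the suggestion above; tune to taste
$pauseSecs = 1;  // breathing room between batches so the D/B can flush and serve other traffic

$batch = array();
foreach ($songsArray as $song) {
    $batch[] = sprintf(
        "UPDATE songs SET treepath='%s' WHERE id=%d",
        mysqli_real_escape_string($mysqliObject, $song->getTreepath()),
        $song->getId()
    );
    if (count($batch) >= $batchSize) {
        run_batch($mysqliObject, $batch);
        $batch = array();
        sleep($pauseSecs);
    }
}
if ($batch) {
    run_batch($mysqliObject, $batch); // flush the remainder
}

function run_batch(mysqli $db, array $statements)
{
    if (!$db->multi_query(implode(';', $statements))) {
        die('Batch failed: ' . $db->error);
    }
    // Drain every result set, otherwise the next call fails with "Commands out of sync"
    while ($db->more_results() && $db->next_result()) {
        // UPDATEs return no result sets to fetch
    }
}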
Try this code using prepared statements:
// Create a prepared statement
$query = "
    UPDATE `songs`
    SET `treepath` = ?
    WHERE `id` = ?
";
$stmt = $GLOBALS['mysqliObject']->prepare($query); // Global variables = bad

// Loop over the array
foreach ($songsArray as $key => $song) {
    // Get data about this song
    $id = $song->getId();
    $treepath = $song->getTreepath();

    // Bind data to the statement
    $stmt->bind_param('si', $treepath, $id);

    // Execute the statement
    $stmt->execute();

    // Check for errors
    if ($stmt->errno) {
        echo '<br/><br/>Error: Key ' . $key . ': ' . $stmt->error;
        break;
    } else if ($stmt->affected_rows < 1) {
        echo '<br/><br/>Warning: No rows affected by object at key ' . $key;
    }

    // Reset the statement
    $stmt->reset();
}

// We're done, close the statement
$stmt->close();
I'd do something like this:
$link = mysqli_connect('host');
if ($stmt = mysqli_prepare($link, "UPDATE songs SET treepath=? WHERE id=?")) {
    foreach ($songsArray as $song) {
        $id = $song->getId();
        $treepath = $song->getTreepath();
        mysqli_stmt_bind_param($stmt, 'si', $treepath, $id); // bind both parameters in one call (string, then int)
        mysqli_stmt_execute($stmt);
    }
    mysqli_stmt_close($stmt);
}
mysqli_close($link);
Or, of course, you can use normal mysql_query calls, but enclosed in a transaction, as sketched below.
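A minimal sketch of that variant, using the same $link and $songsArray as above (the escaping and error handling are just precautions I've added):
mysqli_query($link, 'START TRANSACTION');

foreach ($songsArray as $song) {
    $id = (int) $song->getId();
    $treepath = mysqli_real_escape_string($link, $song->getTreepath());
    if (!mysqli_query($link, "UPDATE songs SET treepath='$treepath' WHERE id=$id")) {
        mysqli_query($link, 'ROLLBACK'); // undo everything on the first failure
        die('Update failed: ' . mysqli_error($link));
    }
}

mysqli_query($link, 'COMMIT'); // one disk flush for the whole batch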
I found another way...
Since this is not a production server, the fastest way to update 100k rows is by deleting all of them and inserting 100k from scratch with the new calculated values. It seems a little bit odd to delete everything and insert everything instead of updating, but it is WAYYY faster.
Before: hours. Now: seconds!
I would suggest locking the table and disabling the keys before executing multiple updates.
This prevents the database engine from stalling (at least in my case of a 300,000-row update).
LOCK TABLES `TBL_RAW_DATA` WRITE;
/*!40000 ALTER TABLE `TBL_RAW_DATA` DISABLE KEYS */;
UPDATE TBL_RAW_DATA SET CREATION_DATE = ADDTIME(CREATION_DATE,'01:00:00') WHERE ID_DATA >= 1359711;
/*!40000 ALTER TABLE `TBL_RAW_DATA` ENABLE KEYS */;
UNLOCK TABLES;
In MySQL I have to check whether a select query has returned any records; if not, I insert a record. I am afraid, though, that the whole if-else operation in the PHP script is NOT as atomic as I would like, i.e. it will break in some scenarios, for example if another instance of the script is called that needs to work with the same record:
if (select returns at least one record)
{
    update record;
}
else
{
    insert record;
}
I did not use transactions here, and autocommit is on. I am using MySQL 5.1 with PHP 5.3. The table is InnoDB. I would like to know if the code above is suboptimal and indeed will break. I mean, if the same script is re-entered by two instances and the following query sequence occurs:
instance 1 attempts to select the record, finds none, enters the block for insert query
instance 2 attempts to select the record, finds none, enters the block for insert query
instance 1 attempts to insert the record, succeeds
instance 2 attempts to insert the record, fails, aborts the script automatically
Meaning that instance 2 will abort and return an error, skipping anything following the insert query statement. I could make the error not fatal, but I don't like ignoring errors, I would much rather know if my fears are real here.
Update: What I ended up doing (is this ok for SO?)
The table in question assists in a throttling (allow/deny, really) amount of messages the application sends to each recipient. The system should not send more than X messages to a recipient Y within a period Z. The table is [conceptually] as follows:
create table throttle
(
    recipient_id integer unsigned unique not null,
    send_count integer unsigned not null default 1,
    period_ts timestamp default current_timestamp,
    primary key (recipient_id)
) engine=InnoDB;
And the block of [somewhat simplified/conceptual] PHP code that is supposed to do an atomic transaction that maintains the right data in the table, and allows/denies sending a message depending on the throttle state:
function send_message_throttled($recipient_id) /// The 'Y' variable
{
    query('begin');
    query("select send_count, unix_timestamp(period_ts) from throttle where recipient_id = $recipient_id for update");
    $r = query_result_row();
    if($r)
    {
        if(time() >= $r[1] + 60 * 60 * 24) /// The numeric offset is the length of the period, the 'Z' variable
        {/// new period
            query("update throttle set send_count = 1, period_ts = current_timestamp where recipient_id = $recipient_id");
        }
        else
        {
            if($r[0] < 5) /// Amount of messages allowed per period, the 'X' variable
            {
                query("update throttle set send_count = send_count + 1 where recipient_id = $recipient_id");
            }
            else
            {
                trigger_error('Will not send message, throttled down.', E_USER_WARNING);
                query('rollback');
                return 1;
            }
        }
    }
    else
    {
        query("insert into throttle(recipient_id) values($recipient_id)");
    }
    if(failed(send_message($recipient_id)))
    {
        query('rollback');
        return 2;
    }
    query('commit');
}
Well, disregarding the fact that InnoDB deadlocks occur, this is pretty good, no? I am not pounding my chest or anything, but this is simply the best mix of performance/stability I can do, short of going with MyISAM and locking the entire table, which I don't want to do because updates/inserts are more frequent than selects.
It seems like you already know the answer to the question and how to solve your problem. It is a real problem, and you can use one of the following to solve it (a sketch of the second option follows the list):
SELECT ... FOR UPDATE
INSERT ... ON DUPLICATE KEY UPDATE
transactions (don't use MyIsam)
table locks
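A minimal sketch of the INSERT ... ON DUPLICATE KEY UPDATE option against the throttle table above, using the question's own query() / query_result_row() helpers; the period-reset expressions are an illustration of the idea, not tested production code:
// One atomic statement either creates the row or bumps the counter,
// resetting it when the period (here 1 day, the 'Z' variable) has elapsed.
query("insert into throttle (recipient_id) values ($recipient_id)
       on duplicate key update
           send_count = if(period_ts < now() - interval 1 day, 1, send_count + 1),
           period_ts  = if(period_ts < now() - interval 1 day, current_timestamp, period_ts)");

// Then read the counter back to decide whether to send (the 'X' limit of 5).
// Note: in this sketch the counter also counts denied attempts.
query("select send_count from throttle where recipient_id = $recipient_id");
$r = query_result_row();
if ($r[0] > 5) {
    trigger_error('Will not send message, throttled down.', E_USER_WARNING);
    return 1; // deny, as in the original send_message_throttled()
}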
This can happen, depending on how often this page is executed.
The safe bet would be to use transactions. The same thing you wrote would still happen, except that you could safely check for the error inside the transaction (in case the insertion involves several queries and only the last insert breaks), allowing you to roll back the one that became invalid.
So (pseudo):
#start transaction
if (select returns at least one record)
{
    update record;
}
else
{
    insert record;
}
if (no constraint errors)
{
    commit; //ends transaction
}
else
{
    rollback; //ends transaction
}
You could lock the table as well, but depending on the work you're doing you'd have to get an exclusive lock on the entire table (you cannot SELECT ... FOR UPDATE non-existing rows, sorry), and that would also block reads from your table until you are finished.