Doctrine/PostgreSQL not running every update on my script - php

I'm running a dozen UPDATE statements in a script, and some of the updates just don't happen. I need to know why, and how to fix it.
I have this piece of code:
foreach ($clis as $c => $cli) {
    // ... do some stuff
    foreach ($emails as $e) {
        $Mailqueue = $this->em->getRepository('Sendsys\Entity\Mailqueue')
            ->findBy(array("email" => $e, "status" => "P"));
        foreach ($Mailqueue as $queue) {
            $queue->setStatus('S');
            $this->em->persist($queue);
        }
    }
    $this->em->flush(); // entity manager is retrieved before this
}
This basically runs through a (somewhat large) table to update the emails with status 'P' to 'S'; the emails are on a list in the $emails array.
I have tried using Doctrine's executeUpdate with a prepared statement instead of retrieving and persisting an entity each time, but it results in the exact same issue.
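For reference, that attempt presumably looked something like the following sketch, using Doctrine DBAL 2's executeUpdate on the ORM's underlying connection (the exact WHERE clause is an assumption):
$conn = $this->em->getConnection();
foreach ($emails as $e) {
    // executeUpdate returns the number of affected rows
    $affected = $conn->executeUpdate(
        "UPDATE mailqueue SET status = 'S' WHERE status = ? AND email = ?",
        array('P', $e)
    );
    echo $affected;
}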
Some more details:
The issue happens even with the mail sending function commented out.
I know that all of the emails are being sent (when uncommented) and that the script is generating all of the update statements; I've output them when testing executeUpdate.
I know that the updates are reaching Postgres, because some rows do update, and if I keep refreshing, rows keep randomly updating until none are left.
I can't just update everything afterwards; I need to know where the script stopped in case of failure.
EDIT
It's not Doctrine related. I've connected directly to the database and run the following statements inside a loop:
$dbh->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);
$stmt = $dbh->prepare("UPDATE mailqueue SET status = 'S' WHERE status = :status AND email LIKE :mail");
$stmt->execute(array(':status' => 'P', ':mail' => $e));
echo $stmt->rowCount();

The UPDATE statements are only run at the end, by $this->em->flush().
This is the only call that tells Doctrine to begin the SQL transaction; if you need updates to be executed before that, just call the flush method earlier.
Until you flush, the changes live only in the entities tracked by the entity manager.
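A minimal sketch of that advice applied to the question's loop, flushing one batch per email so progress is written incrementally (note that entities returned by findBy are already managed, so persist() is not strictly required):
foreach ($emails as $e) {
    $Mailqueue = $this->em->getRepository('Sendsys\Entity\Mailqueue')
        ->findBy(array('email' => $e, 'status' => 'P'));
    foreach ($Mailqueue as $queue) {
        $queue->setStatus('S'); // tracked by the entity manager
    }
    $this->em->flush(); // write this email's batch immediately
}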

Related

How to lock database for Laravel's `firstOrCreate`?

We currently encounter a Duplicate entry QueryException when executing the following code:
Slug::firstOrCreate([
Slug::ENTITY_TYPE => $this->getEntityType(),
Slug::SLUG => $slug
], [
Slug::ENTITY_ID => $this->getKey()
]);
Since Laravel's firstOrCreate method first checks whether an entry with those attributes exists before inserting it, this exception should never occur. However, our application has millions of visitors and millions of actions every day, and we therefore use a master DB connection with two slaves for reading. So it is possible that race conditions occur.
We currently tried to separate the query and force the master connection for reading:
$slugModel = Slug::onWriteConnection()->where([
    Slug::SLUG => $slug,
    Slug::ENTITY_TYPE => $this->getEntityType(),
])->first();

if ($slugModel && $slugModel->entity_id !== $this->getKey()) {
    $class = get_class($this);
    throw new \RuntimeException("The slug [{$slug}] already exists for a model of type [{$class}].");
}

if (!$slugModel) {
    return $this->slugs()->create([
        Slug::SLUG => $slug,
        Slug::ENTITY_TYPE => $this->getEntityType(),
    ]);
}
However the exception still occurs sometimes.
Our next approach would be to lock the table before the read check and release the lock after the write, to prevent any inserts with the same slug from other database actions between our read and our write. Does anyone know how to solve this? I don't really understand how Laravel's pessimistic locking can help solve the issue. We use MySQL for our database.
I would not recommend locking the table, especially if you have millions of visitors.
Most race conditions can be fixed by locks, but this one is not, because you cannot lock a row that does not exist (there is such a thing as gap locking, but it won't help here).
Laravel does not handle race conditions by itself. If you call firstOrCreate, it runs two queries:
SELECT the item where slug=X and entity_type=Y
if it does not exist, create it
Because there are two queries, a race condition is possible: two users in parallel reach step 1, then both try to create the entry in step 2, and your system crashes.
Since you already get a Duplicate Key error, it means you already have a unique constraint on the two columns that identify your row, which is good.
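For reference, such a constraint could be declared in a migration roughly like this (a sketch; the slugs table name is an assumption):
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

Schema::table('slugs', function (Blueprint $table) {
    // The unique tuple is entity_type + slug, matching the firstOrCreate lookup.
    $table->unique(['entity_type', 'slug']);
});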
What you can do now is catch the duplicate key error, like this:
try {
    $slug = Slug::firstOrCreate([
        Slug::ENTITY_TYPE => $this->getEntityType(),
        Slug::SLUG => $slug,
    ], [
        Slug::ENTITY_ID => $this->getKey(),
    ]);
} catch (Illuminate\Database\QueryException $e) {
    $errorCode = $e->errorInfo[1];
    if ($errorCode == 1062) { // 1062 = MySQL duplicate-entry error
        $slug = Slug::where('slug', '=', $slug)
            ->where('entity_type', '=', $this->getEntityType())
            ->first();
    }
}
One solution for this is to use a Laravel queue and make sure it runs one job at a time; that way you will never have two identical queries running at the same time.
Of course, this will not work if you need to return the result within the same request.
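A rough sketch of that approach (the job class name and the queue name are made up):
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;

class CreateSlugJob implements ShouldQueue
{
    use Dispatchable, Queueable;

    private $entityType;
    private $slug;
    private $entityId;

    public function __construct($entityType, $slug, $entityId)
    {
        $this->entityType = $entityType;
        $this->slug = $slug;
        $this->entityId = $entityId;
    }

    public function handle()
    {
        Slug::firstOrCreate(
            [Slug::ENTITY_TYPE => $this->entityType, Slug::SLUG => $this->slug],
            [Slug::ENTITY_ID => $this->entityId]
        );
    }
}

// Dispatch to a dedicated queue served by exactly one worker:
//   php artisan queue:work --queue=slugs
CreateSlugJob::dispatch($entityType, $slug, $entityId)->onQueue('slugs');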

How to check a condition and, if it is true, insert something, under a race condition

We have an API function which checks a condition in the database with a SELECT query, and if it is true we want to insert something into the database exactly once (for example, recording that the insertion was done). The problem is that when we call this API function multiple times concurrently, a race condition happens. In other words: say we call the function twice; the first request checks the condition and it's true, then the second request checks it and it's still true, so both do the insert. We want that while one request is checking the condition, no other request can check it until the insertion is done.
We use PHP/Laravel and know about some approaches like INSERT INTO ... SELECT, or something like REPLACE INTO ..., and so on.
$order = Order::find($orderId);
$logRefer = $order->user->logrefer;

if (!is_null($logRefer) && is_null($logRefer->user_turnover_id)) {
    $userTurnover = new UserTurnover();
    $userTurnover->user_id = $logRefer->referrer_id;
    $userTurnover->order_id = $order->id;
    $userTurnover->save();

    $logRefer->order_id = $order->id;
    $logRefer->user_turnover_id = $userTurnover->id;
    $logRefer->save();
}
If the logrefer row hasn't been linked yet, we set it and create the corresponding user-turnover, just once. We expect to see only one user-turnover related to this order, but after running this multiple times concurrently we see that multiple user-turnovers have been inserted.
I usually take advantage of transactions when operations need to be sequential, but I think that in your case the situation is a bit more complex, because the condition itself has to be re-evaluated while the function is running. So my suggestion is to keep a variable in the database (in another table) used as a semaphore, which allows or forbids performing actions on the table (the condition being whether the semaphore value is set or unset). Semaphores are useful in a lot of concurrent scenarios.
The database should have a unique key on the columns expected to be unique, even if some mechanism in the code prevents duplicates.
Wrap the connected queries in a transaction, which will fail and roll back in the event of a race condition:
try {
    DB::transaction(function () use ($orderId) {
        $order = Order::find($orderId);
        ...
        $logRefer->save();
    });
} catch (\Illuminate\Database\QueryException $ex) {
    Log::error("failed to write to database");
}
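The transaction alone won't serialize the check itself; under the default isolation level, two requests can both pass the is_null() test before either commits, and only the unique key turns that into a caught exception. If you also want to avoid hitting the error in the first place, a pessimistic lock can serialize the check (a sketch; it assumes a logrefer() relation method matching the question's models):
use Illuminate\Support\Facades\DB;

DB::transaction(function () use ($orderId) {
    $order = Order::find($orderId);
    // Re-read the logrefer row with a row lock, so concurrent requests
    // queue up here instead of both passing the is_null() check.
    $logRefer = $order->user->logrefer()->lockForUpdate()->first();

    if (!is_null($logRefer) && is_null($logRefer->user_turnover_id)) {
        // ... create the UserTurnover and update $logRefer as in the question
    }
});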

Foreign key and transaction

I'm trying to use a transaction when creating a group row and the related user-group row.
It works fine when I don't use a transaction, so the naming of the attributes is correct. Here is the code:
$db = Yii::app()->db;
$transaction = $db->beginTransaction();
try {
    $model->attributes = $_POST['MyGroup'];
    $model->save();
    $model->refresh();
    $userMyGroup = new UserMyGroup();
    $userMyGroup->IDMyGroup = $model->IDMyGroup;
    $userMyGroup->IDUser = Yii::app()->user->id;
    $userMyGroup->save();
    $transaction->commit();
} catch (CDbException $ex) {
    Yii::log("Couldn't create group: " . $ex->errorInfo[1], CLogger::LEVEL_ERROR);
    $transaction->rollback();
}
The error is:
The INSERT statement conflicted with the FOREIGN KEY constraint "FK_UserMyGroup_MyGroup". The conflict occurred in database "MyDatabase", table "dbo.MyGroup", column 'IDMyGroup'.. The SQL statement executed was: INSERT INTO [dbo].[UserMyGroup] ([IDMyGroup], [IDUser]) VALUES (:yp0, :yp1). Bound with :yp0=4022, :yp1=1
The problem is probably that the saved model might not yet be in the database while the second model (userMyGroup) with the foreign key is being saved. How do I do the transaction correctly?
EDIT:
I've found out that the problem is caused by an audit module: it is trying to log the query but can't, as the row is inside the transaction and not actually saved in the database yet. I'm trying to figure out how to use this transaction along with the module...
The refresh method repopulates the active record with the latest data.
While the transaction is not committed, the latest data is the existing data in the table.
Move $model->refresh(); after $transaction->commit(); as sketched below.
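A minimal rearrangement of the question's code along those lines:
$transaction = $db->beginTransaction();
try {
    $model->attributes = $_POST['MyGroup'];
    $model->save();          // no refresh() here yet
    $userMyGroup = new UserMyGroup();
    $userMyGroup->IDMyGroup = $model->IDMyGroup;
    $userMyGroup->IDUser = Yii::app()->user->id;
    $userMyGroup->save();
    $transaction->commit();
    $model->refresh();       // the committed row is now visible
} catch (CDbException $ex) {
    $transaction->rollback();
}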
I've found out that the problem is caused by the audit module I'm using: it tries to log the query but can't, as the row is inside the transaction and not actually saved in the database yet. Unfortunately, I didn't figure out how to use this transaction along with the module, so I ended up disabling the audit module on the classes used in the transaction.

In CodeIgniter, Can we use Database Transactions like this?

In CodeIgniter, are we also allowed to use the Active Record class like this?
<?php
$data_balance = array(
    'balance' => $newbalance,
);
$data_transaction = array(
    'amount' => $amount,
    'user' => $user,
);

$this->db->trans_start();

// This is an "Insert" query for one table
$this->db->set('date', 'NOW()', FALSE);
$this->db->insert('financial_transactions', $data_transaction);

// This is an "Update" query for another table
$this->db->where('user_id', $user_id);
$this->db->update('financial_balances', $data_balance);

$this->db->trans_complete();

if ($this->db->trans_status() === FALSE) {
    // Do stuff if failed
}
?>
Note that I use $this->db-> for both queries, so I don't know whether the result of the first one is actually cleared before the second one is checked.
Is this going to work? Can I trust that this will make either both queries succeed or neither of them (I don't want one to succeed and one to fail)?
From the documentation:
By default CodeIgniter runs all transactions in Strict Mode. When strict mode is enabled, if you are running multiple groups of transactions, if one group fails all groups will be rolled back. If strict mode is disabled, each group is treated independently, meaning a failure of one group will not affect any others.
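So with a single group as in the question, any failing query between trans_start() and trans_complete() rolls both queries back. Strict mode only matters when you run several transaction groups, and can be toggled with the documented trans_strict() method, roughly like this:
$this->db->trans_strict(FALSE); // treat each transaction group independently

$this->db->trans_start();
// ... the insert and update from the question ...
$this->db->trans_complete();

if ($this->db->trans_status() === FALSE) {
    // the whole group (both queries) was rolled back
}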

PHP PDO Transaction with nested PHP check

I have to run two MySQL queries:
SELECT id FROM X WHERE [...]
INSERT [...]
The second query should only be executed if the first query returns a valid id.
Is it possible to mix PHP conditions in between the two queries?
E.g.:
try {
    $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $dbh->beginTransaction();
    $stmt = $dbh->prepare("SELECT id FROM [...]");
    $stmt->bindParam(1, [...]);
    if ($stmt->execute()) {
        if ($row = $stmt->fetch()) {
            $matchID = $row['id'];
            $checkD = $this->checkId($matchID);
            if ($checkD) {
                return '-1';
            } else {
                $stmt = $dbh->prepare("INSERT INTO [...]");
                $stmt->bindParam(1, [...]);
                $stmt->execute();
                $stmt = $dbh->prepare("DELETE [...]");
                $stmt->bindParam(1, [...]);
                $stmt->execute();
                $dbh->commit();
                return $matchID;
            }
        } else {
            return '-1';
        }
    } else {
        return '-1';
    }
} catch (Exception $e) {
    $dbh->rollBack();
    return '-1';
}
Is this correct? (I get zero errors.)
If not: how can I realize it?
I want to be sure that no other user can reach the INSERT query while another is performing the first query.
Transactions are isolated from the current data. How exactly they behave depends on the isolation level they use. For example, a transaction with the SERIALIZABLE isolation level effectively lives in the past: it knows nothing of the data changes made since the beginning of the transaction.
If you want to prevent anybody from changing the database while your script is working on something, then you have to lock your database, tables, or rows. This is usually not necessary with properly written code.
In your case you can use the READ COMMITTED transaction isolation level (the default), call the DELETE after the SELECT, and check whether the DELETE affected any rows before running the INSERT.
If you don't want to change the order of your queries, then you can:
simply throw an exception if the DELETE does not affect any row, so the INSERT will be rolled back
add a constraint which prevents multiple INSERTs with the same data, so the duplicated INSERT will violate the constraint and be rolled back
lock the rows with a SELECT ... FOR UPDATE (sketched below)
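A sketch of the last option with PDO (table and column names are placeholders):
$dbh->beginTransaction();
// The matched row stays locked until commit, so a concurrent request
// blocks on the same SELECT ... FOR UPDATE until this one finishes.
$stmt = $dbh->prepare("SELECT id FROM matches WHERE token = ? FOR UPDATE");
$stmt->execute(array($token));
if ($row = $stmt->fetch()) {
    $ins = $dbh->prepare("INSERT INTO entries (match_id) VALUES (?)");
    $ins->execute(array($row['id']));
}
$dbh->commit();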
This rather makes no sense; there is no use for transactions here at all. What do you want to roll back, a SELECT query result?
I want to be sure that no other user can reach the INSERT query while another is performing the first query.
This is achieved via table locking, not transactions, or rather by simple key uniqueness. Without knowing your business logic it's impossible to say more. Better to ask another question explaining the user experience, avoiding technical terms like "transaction" altogether.
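For completeness, explicit table locking in MySQL would look roughly like this (table names are placeholders; every table touched while the lock is held has to be listed):
// A WRITE lock blocks reads and writes from other sessions
// until UNLOCK TABLES runs (or the connection closes).
$dbh->exec("LOCK TABLES matches WRITE, entries WRITE");
// ... run the SELECT and the conditional INSERT here ...
$dbh->exec("UNLOCK TABLES");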
