I have the following code. If the transaction fails, it is NOT rolled back. If I remove the LOCK TABLES statements, it rolls back. Is there anything special I need to do in order to use locks and transactions together?
function save ($items,$customer_id,$employee_id,$comment,$show_comment_on_receipt,$payments,$sale_id=false, $suspended = 0, $cc_ref_no = '', $auth_code = '', $change_sale_date=false,$balance=0, $store_account_payment = 0)
{
if(count($items)==0)
return -1;
$sales_data = array(
'customer_id'=> $this->Customer->exists($customer_id) ? $customer_id : null,
'employee_id'=>$employee_id,
'payment_type'=>$payments,
'comment'=>$comment,
'show_comment_on_receipt'=> $show_comment_on_receipt ? $show_comment_on_receipt : 0,
'suspended'=>$suspended,
'deleted' => 0,
'deleted_by' => NULL,
'cc_ref_no' => $cc_ref_no,
'auth_code' => $auth_code,
'location_id' => $this->Employee->get_logged_in_employee_current_location_id(),
'store_account_payment' => $store_account_payment,
);
$this->db->trans_start();
//Lock tables involved in the sale transaction so we don't get deadlocks
$write_tables = array('customers', 'sales', 'store_accounts', 'sales_payments', 'sales_items',
	'giftcards', 'location_items', 'inventory', 'sales_items_taxes', 'sales_item_kits',
	'sales_item_kits_taxes');
$read_tables = array('people', 'items', 'employees_locations', 'locations', 'items_tier_prices',
	'location_items_tier_prices', 'items_taxes', 'item_kits', 'location_item_kits',
	'item_kit_items', 'employees', 'item_kits_tier_prices', 'location_item_kits_tier_prices',
	'location_items_taxes', 'location_item_kits_taxes', 'item_kits_taxes');
$locks = array();
foreach ($write_tables as $table)
	$locks[] = $this->db->dbprefix($table).' WRITE';
foreach ($read_tables as $table)
	$locks[] = $this->db->dbprefix($table).' READ';
$this->db->query('LOCK TABLES '.implode(', ', $locks));
$this->db->insert('sales',$sales_data);
$sale_id = $this->db->insert_id();
//A bunch of mysql other queries to save a sale
$this->db->query('UNLOCK TABLES');
$this->db->trans_complete();
if ($this->db->trans_status() === FALSE)
{
return -1;
}
return $sale_id;
}
I believe it's mainly because MySQL's LOCK TABLES implicitly commits any active transaction before attempting to lock the tables.
Actually, the interaction between the two mechanisms (locking and transactions) is quite tricky in MySQL, and it is clearly and completely outlined in the MySQL manual.
The proposed solution is to turn OFF the autocommit flag (which is ON by default, so normally every query is automatically committed after execution) by issuing the SET autocommit = 0 command:
The correct way to use LOCK TABLES and UNLOCK TABLES with
transactional tables [..] is to begin a transaction with SET
autocommit = 0 (not START TRANSACTION) followed by LOCK TABLES, and to
not call UNLOCK TABLES until you commit the transaction explicitly.
Your code could look like this:
$this->db->query("SET autocommit = 0");
$this->db->query("LOCK TABLES ...."); // your LOCK TABLES query here
// ... your inserts/updates for the sale go here ...
$this->db->query("COMMIT");
$this->db->query("UNLOCK TABLES");
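Applied to the original save() method, the fix could look like the sketch below. This is an assumption-laden outline, not the definitive rewrite: CodeIgniter's trans_start()/trans_complete() are replaced with raw queries (because LOCK TABLES would implicitly commit the transaction they open), and $everything_ok is a hypothetical flag standing in for your own error checks.

```php
// SET autocommit = 0 starts an implicit transaction that LOCK TABLES
// will NOT commit (unlike START TRANSACTION).
$this->db->query('SET autocommit = 0');
$this->db->query('LOCK TABLES ...'); // the full LOCK TABLES list from above

$this->db->insert('sales', $sales_data);
$sale_id = $this->db->insert_id();
// ... the other queries that save a sale ...

if ($everything_ok) // hypothetical success flag for your own error checks
{
	$this->db->query('COMMIT');
}
else
{
	$this->db->query('ROLLBACK');
}
// Unlock only AFTER the explicit COMMIT/ROLLBACK, per the manual.
$this->db->query('UNLOCK TABLES');
$this->db->query('SET autocommit = 1');
```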
I currently have the code below; however, when adding around 2000 rows it runs too slowly because of the foreach loop.
foreach($tables as $key => $table) {
$class_name = explode("\\", get_class($table[0]));
$class_name = end($class_name);
$moved = 'moved_' . $class_name;
${$moved} = [];
foreach($table[0]->where('website_id', $website->id)->get() as $value) {
$value->website_id = $live_website_id;
$value->setAppends([]);
$table[0]::on('live')->updateOrCreate([
'id' => $value->id,
'website_id' => $value->website_id
], $value->toArray());
${$moved}[] = $value->id;
}
// Remove deleted rows
if ($table[1]) {
$table[0]::on('live')->where([ 'website_id' => $live_website_id ])
->whereNotIn('id', ${$moved})
->delete();
}
}
What is happening is basically that users add/update/delete data on a development server; then, when they hit a button, this data needs to be pushed into the live table while retaining the IDs, since auto-incrementing IDs on the live server won't work due to look-up tables and multiple users pushing data live at the same time.
What is the best way to do this? Should I simply remove all the data in that table (there is a unique identifier for chunks of data) and then just insert?
I think you can do the following:
1. Create a temp table, fill it just like on your development server, and then rename it to the published table.
2. Use a job and a background service (an async queue).
3. Set your DB connection up as a singleton, because connecting multiple times wastes time.
4. Use database-level tools: for example, on PostgreSQL you can use a Foreign Data Wrapper (FDW) to connect the databases to each other and update and create rows much faster.
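If option 1 fits your case, here is a hedged sketch of the temp-table-and-rename approach, assuming MySQL, Laravel's DB facade, and a hypothetical Item model backed by an items table (all names are illustrative, not from the question):

```php
use Illuminate\Support\Facades\DB;

// Build the new copy in a scratch table, then swap it in atomically.
DB::connection('live')->statement('DROP TABLE IF EXISTS items_staging');
DB::connection('live')->statement('CREATE TABLE items_staging LIKE items');

// Bulk-insert in chunks instead of calling updateOrCreate() per row.
Item::where('website_id', $website->id)
    ->orderBy('id')
    ->chunk(500, function ($rows) use ($live_website_id) {
        $insert = $rows->map(function ($row) use ($live_website_id) {
            $data = $row->setAppends([])->toArray();
            $data['website_id'] = $live_website_id;
            return $data;
        })->all();
        DB::connection('live')->table('items_staging')->insert($insert);
    });

// RENAME TABLE is atomic in MySQL, so readers never see a half-copied table.
DB::connection('live')->statement(
    'RENAME TABLE items TO items_old, items_staging TO items'
);
DB::connection('live')->statement('DROP TABLE items_old');
```

Note that this replaces the whole table for one website at a time; if several websites share the table, you would copy only the matching rows into the staging table first.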
I have the following model
Inventory [product_name, quantity, reserved_quantity]
with data
[Shirt, 1, 0]
[Shorts, 10, 0]
What happens if following code is executed in multiple threads at the same time?
$changes = [
['name' => 'Shirt', 'qty' => 1],
['name' => 'Shorts', 'qty' => 1],
];
$db->startTransaction();
foreach($changes as $change){
    $rowsUpdated = $db->exec("UPDATE inventory
        SET reserved_quantity = reserved_quantity + {$change['qty']}
        WHERE product_name = '{$change['name']}'
        AND quantity >= reserved_quantity + {$change['qty']}");
    if($rowsUpdated !== 1){
        $db->rollback();
        exit;
    }
}
$db->commit();
Is it possible that the result will be this?
[Shirt, 1, 2]
[Shorts, 10, 2]
It's not.
Let's see what would happen in the following scenario:
The first transaction starts
UPDATE Shirt => an exclusive row lock will be set on the record
The second transaction starts
The second transaction tries to UPDATE Shirt. As it would need to obtain a record lock it would wait as this record has already been locked by the first transaction
The first transaction commits, the second one would resume execution and see the updated record
Of course, this is only relevant to InnoDB and similar MySQL engines.
Note that you're lucky that both transactions traverse the records in the same order. If that were not the case, you might run into a deadlock.
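To see why the ordering matters, here is a sketch of the opposite-order interleaving (the session labels are illustrative):

```sql
-- Session A:
START TRANSACTION;
UPDATE inventory SET reserved_quantity = reserved_quantity + 1
 WHERE product_name = 'Shirt';   -- A now holds the Shirt row lock

-- Session B:
START TRANSACTION;
UPDATE inventory SET reserved_quantity = reserved_quantity + 1
 WHERE product_name = 'Shorts';  -- B now holds the Shorts row lock

-- Session A:
UPDATE inventory SET reserved_quantity = reserved_quantity + 1
 WHERE product_name = 'Shorts';  -- blocks, waiting for B

-- Session B:
UPDATE inventory SET reserved_quantity = reserved_quantity + 1
 WHERE product_name = 'Shirt';   -- blocks, waiting for A: deadlock;
                                 -- InnoDB detects it and rolls one transaction back
```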
I am a little confused. I actively use the PHP RedBean ORM within my direct-mail service, and I have run into a curious situation: I have a table with a unique key constraint (on subscriber_id, delivery_id) and two scripts that write data into this table.
This is the code that inserts into or updates the table:
public static function addOpenPrecedent($nSubscriberId, $nDeliveryId)
{
$oOpenStatBean = \R::findOrDispense('open_stat', 'delivery_id = :did AND subscriber_id = :sid', array(':did' => $nDeliveryId, ':sid' => $nSubscriberId));
$oOpenStatBean = array_values($oOpenStatBean);
if (1 !== count($oOpenStatBean)) {
throw new ModelOpenStatException(
"Error updating open statistics: the pair (delivery_id,
subscriber_id) is not unique: ($nDeliveryId, $nSubscriberId).");
}
$oOpenStatBean = $oOpenStatBean[0];
if (!empty($oOpenStatBean->last_add_dt)) {
$oOpenStatBean->precedent++;
} else {
$oOpenStatBean->delivery_id = $nDeliveryId;
$oOpenStatBean->subscriber_id = $nSubscriberId;
}
$oOpenStatBean->last_add_dt = date('Y-m-d H:i:s');
\R::store($oOpenStatBean);
}
It is called from both scripts. And I periodically get unique-constraint violations on this table because race conditions occur. I know about MySQL's INSERT ... ON DUPLICATE KEY UPDATE feature, but how can I obtain the same result purely through my ORM?
Currently, as far as I know, RedBean will not issue an
INSERT ... ON DUPLICATE KEY UPDATE
as the discussion cited in the comments above indicates that RedBean's developer considers upsert to be business logic that would pollute the ORM's interface. That said, it is most likely achievable if you extend RedBean with a custom Query Writer or plugin per the documentation. I haven't tried this, because the method below easily achieves the same behavior without messing with the internals or plugins of the ORM; however, it does require that you use transactions and models, plus a couple of extra queries.
Basically, start your transaction with either R::transaction() or R::begin() before your call to R::store(). Then in your "FUSE"d model, use the "update" FUSE method to run a query that checks for duplication and retrieves the existing id while locking the necessary rows (i.e. SELECT FOR UPDATE). If no id is returned, you are good and just let your regular model validation (or lack thereof) continue as usual and return. If an id is found, simply set $this->bean->id to the returned value and Redbean will UPDATE rather than INSERT. So, with a model like this:
class Model_OpenStat extends RedBean_SimpleModel{
function update(){
$sql = 'SELECT * FROM `open_stat` WHERE `delivery_id`=? AND `subscriber_id`=? LIMIT 1 FOR UPDATE';
$args = array( $this->bean->delivery_id, $this->bean->subscriber_id );
$dupRow = R::getRow( $sql, $args );
if( is_array( $dupRow ) && isset( $dupRow['id'] ) ){
foreach( $this->bean->getProperties() as $property => $value ){
#set your criteria here for which fields
#should be from the one in the database and which should come from this copy
#this version simply takes all unset values in the current and sets them
#from the one in the database
if( !isset( $value ) && isset( $dupRow[$property] ) )
$this->bean->$property = $dupRow[$property];
}
$this->bean->id = $dupRow['id']; #set id to the duplicate's id
}
return true;
}
}
You would then modify the R::store() call like so:
\R::begin();
\R::store($oOpenStatBean);
\R::commit();
or
\R::transaction( function() use ( $oOpenStatBean ){ R::store( $oOpenStatBean ); } );
The transaction will cause the "FOR UPDATE" clause to lock the found row or, in the event that no row was found, to lock the places in the index where your new row will go so that you don't have concurrency issues.
Now this will not solve one user's update of the record clobbering another, but that is a whole different topic.
I want to empty a table in HBase, e.g. user. Is there any command or function to empty the table without deleting it?
My table structure is :
$mutations = array(
new Mutation( array(
'column' => 'username:1',
'value' =>$name
) ),
new Mutation( array(
'column' => 'email:1',
'value' =>$email
) )
);
$hbase->mutateRow("user",$key,$mutations);
Can someone help me?
If you execute this in HBase shell:
> truncate 'yourTableName'
Then HBase will execute these operations for 'yourTableName':
> disable 'yourTableName'
> drop 'yourTableName'
> create 'yourTableName', 'f1', 'f2', 'f3'
Another efficient option is to actually delete the table and then recreate it with the same settings as before.
I don't know how to do this in PHP, but I do know how to do it in Java. The corresponding actions in PHP should be similar; you just need to check what the API looks like.
In Java using HBase 0.90.4:
// Remember the "schema" of your table
HBaseAdmin admin = new HBaseAdmin(yourConfiguration);
HTableDescriptor td = admin.getTableDescriptor(Bytes.toBytes("yourTableName"));
// Delete your table
admin.disableTable("yourTableName");
admin.deleteTable("yourTableName");
// Recreate your table
admin.createTable(td);
Using the HBase shell, truncate <table_name> will do the task, e.g. truncate 'customer_details', where customer_details is the table name.
The truncate command in the HBase shell will do the job for you.
The HBase Thrift API (which is what PHP uses) doesn't provide a truncate command, only deleteTable and createTable functionality (what's the difference from your point of view?).
Otherwise you have to scan to get all the keys and call deleteAllRow for each key, which isn't a very efficient option.
For this purpose you can use HAdmin, a UI tool for Apache HBase administration. There are "Truncate table" and even "Delete table" buttons on the alter-table page.
Using the alter command:
alter '<table_name>', NAME => 'column_family', TTL => <number_of_seconds>
where number_of_seconds is the duration after which data will be automatically deleted.
There's no single command to clear an HBase table, but you can use two workarounds: disable, delete and recreate the table, or scan all records and delete each one.
Disabling, deleting and creating the table again takes about 4 seconds.
// get Hbase client
$client = <Your code here>;
$t = "table_name";
$tables = $client->getTableNames();
if (in_array($t, $tables)) {
if ($client->isTableEnabled($t))
$client->disableTable($t);
$client->deleteTable($t);
}
$descriptors = array(
new ColumnDescriptor(array("name" => "c1", "maxVersions" => 1)),
new ColumnDescriptor(array("name" => "c2", "maxVersions" => 1))
);
$client->createTable($t, $descriptors);
If there's not a lot of data in the table, scanning all rows and deleting each one is much faster.
$client = <Your code here>;
$t = "table_name";
// i don't remember, if list of column families is actually needed here
$columns = array("c1", "c2");
$scanner = $client->scannerOpen($t, "", $columns);
while ($result = $client->scannerGet($scanner)) {
$client->deleteAllRow($t, $result[0]->row);
}
In this case the data is not deleted physically; it is only marked as deleted and stays in the table until the next major compaction.
Perhaps using one of these two commands:
DELETE FROM your_table WHERE 1;
Or
TRUNCATE your_table;
Using Zend Framework, I need to (1) read a record from a MySQL database, and (2) immediately write back to that record to indicate that it has been read. I don't want other processes or queries to be able to read from or write to the same record in between steps (1) and (2).
I was considering using a transaction for these steps. If I use the following methods, will that fulfil my requirements?
Zend_Db_Adapter_Abstract::beginTransaction()
Zend_Db_Adapter_Abstract::commit()
Zend_Db_Adapter_Abstract::rollBack()
Presupposing you are using the InnoDB engine for tables that you will issue transactions on:
If the requirement is that you first need to read the row and exclusively lock it, before you are going to update it, you should issue a SELECT ... FOR UPDATE query. Something like:
$db->beginTransaction();
try
{
$select = $db->select()
->forUpdate() // <-- here's the magic
->from(
array( 'a' => 'yourTable' ),
array( 'your', 'column', 'names' )
)
->where( 'someColumn = ?', $whatever );
$result = $db->fetchRow( $select );
/*
alter data in $result
and update if necessary:
*/
$db->update( 'yourTable', $result, array( 'someColumn = ?' => $whatever ) );
$db->commit();
}
catch( Exception $e )
{
$db->rollBack();
}
Or simply issue 'raw' SELECT ... FOR UPDATE and UPDATE SQL statements on $db of course.
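That raw variant could look like the sketch below. It uses the same placeholder table and column names as above, plus a hypothetical is_read flag column to mark the record as read; Zend_Db_Adapter's fetchRow() and query() both accept a SQL string with bound parameters.

```php
$db->beginTransaction();
try
{
    // FOR UPDATE takes an exclusive lock on the matched row
    // until the transaction commits or rolls back.
    $row = $db->fetchRow(
        'SELECT * FROM yourTable WHERE someColumn = ? FOR UPDATE',
        array( $whatever )
    );

    // Mark the record as read (is_read is a hypothetical column),
    // still inside the same transaction, so no one can read or
    // write the row between the SELECT and this UPDATE.
    $db->query(
        'UPDATE yourTable SET is_read = 1 WHERE someColumn = ?',
        array( $whatever )
    );

    $db->commit();
}
catch( Exception $e )
{
    $db->rollBack();
}
```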