Laravel Insert/Update/Delete - php

I currently have the code below, however when adding around 2,000 rows it runs too slowly because everything happens inside a foreach loop.
foreach ($tables as $key => $table) {
    $class_name = explode("\\", get_class($table[0]));
    $class_name = end($class_name);
    $moved = 'moved_' . $class_name;
    ${$moved} = [];
    foreach ($table[0]->where('website_id', $website->id)->get() as $value) {
        $value->website_id = $live_website_id;
        $value->setAppends([]);
        $table[0]::on('live')->updateOrCreate([
            'id' => $value->id,
            'website_id' => $value->website_id
        ], $value->toArray());
        ${$moved}[] = $value->id;
    }
    // Remove deleted rows
    if ($table[1]) {
        $table[0]::on('live')->where([ 'website_id' => $live_website_id ])
            ->whereNotIn('id', ${$moved})
            ->delete();
    }
}
What is happening is that users add, update, and delete data on a development server; when they hit a button, this data needs to be pushed into the live table. The IDs must be retained, because relying on auto-incrementing IDs on the live side won't work due to look-up tables and multiple users pushing data live at the same time.
What is the best way to do this? Should I simply remove all the data in that table (there is a unique identifier for chunks of data) and then just insert?

I think you can do the following:
You can create a temp table, fill it just like on your development server, and then rename the temp table to the published table.
You can use a job and a background service (an async queue).
You can set your DB connection up as a singleton, because connecting multiple times wastes time.
You can use database-specific tools; for example, on PostgreSQL you can use a Foreign Data Wrapper (FDW) to connect the databases to each other and update and create rows much faster.
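Whichever route you take, the biggest win is usually replacing the per-row updateOrCreate() with a bulk write. Here is a minimal sketch, assuming Laravel 8+ where the query builder's upsert() is available and the live table has a unique index covering the id (model and column names are taken from the question):

$rows = $table[0]->where('website_id', $website->id)->get()
    ->map(function ($value) use ($live_website_id) {
        $value->website_id = $live_website_id;
        $value->setAppends([]);
        return $value->toArray();
    })->all();

// One statement per chunk of 500 rows instead of one query per row
foreach (array_chunk($rows, 500) as $chunk) {
    $table[0]::on('live')->upsert(
        $chunk,
        ['id', 'website_id'],    // "unique by" columns
        array_keys($chunk[0])    // columns to update on conflict
    );
}

The delete step for removed rows can stay exactly as in the question.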

Related

laravel chunkById stuck in the middle, did I miss something?

I want to move records from my audits table, which has millions of rows, to another table using a console command and chunkById, but it stops in the middle. For example, I want to move the audit data where MONTH(created_at) = 02 and YEAR(created_at) = 2021, but it does not run through all the records matching that condition. As I checked in MySQL, there should be about 5 million records, but it only processes a few hundred thousand. My code in the console command is below:
Audit::query()
    ->whereRaw("MONTH(created_at) = '$month'")
    ->whereRaw("YEAR(created_at) = '$year'")
    ->chunkById(1, function ($audits) use ($table_name) {
        foreach ($audits as $audit) {
            dump($audit->id);
            $newRecord = $audit->replicate()->fill([
                'audit_id' => $audit->id,
                'created_at' => $audit->created_at,
                'updated_at' => $audit->updated_at,
            ]);
            $newRecord->setTable($table_name);
            $newRecord->save();
            if (str_contains($audit->auditable_type, 'User') || str_contains($audit->auditable_type, 'Trans') || str_contains($audit->auditable_type, 'Step') || str_contains($audit->auditable_type, 'Team')) {
                $audit->delete();
            }
        }
    }, $column = 'id');
I have already followed many solutions I found on many sites, but it is still not working. Is there anything I missed?
In the Laravel documentation (https://laravel.com/docs/9.x/queries)
there is a note that says: "When updating or deleting records inside the chunk callback, any changes to the primary key or foreign keys could affect the chunk query. This could potentially result in records not being included in the chunked results."
And in your code you are deleting audits in some cases.
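A common workaround in line with that note (a sketch that assumes the mid-chunk deletes are what skip your records): collect the ids inside the callback and delete only after the chunking has finished, so the chunk query never runs against a shifting table:

$idsToDelete = [];

Audit::query()
    ->whereRaw("MONTH(created_at) = '$month'")
    ->whereRaw("YEAR(created_at) = '$year'")
    ->chunkById(1000, function ($audits) use ($table_name, &$idsToDelete) {
        foreach ($audits as $audit) {
            $newRecord = $audit->replicate()->fill([
                'audit_id' => $audit->id,
                'created_at' => $audit->created_at,
                'updated_at' => $audit->updated_at,
            ]);
            $newRecord->setTable($table_name);
            $newRecord->save();
            // same auditable_type checks as in the question
            if (str_contains($audit->auditable_type, 'User') || str_contains($audit->auditable_type, 'Trans') || str_contains($audit->auditable_type, 'Step') || str_contains($audit->auditable_type, 'Team')) {
                $idsToDelete[] = $audit->id;
            }
        }
    });

// Delete in batches once iteration is complete
foreach (array_chunk($idsToDelete, 1000) as $chunk) {
    Audit::whereIn('id', $chunk)->delete();
}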

Predicting future IDs used before saving to the DB

I am saving a complex dataset in Laravel 4.2 and I am looking for ways to improve this.
$bits has several $bobs. A single $bob can be one of several different classes. I am trying to duplicate a singular $bit and all its associated $bobs and save all of this to the DB with as few calls as possible.
$newBit = $this->replicate();
$newBit->save();
$bobsPivotData = [];
foreach ($this->bobs as $index => $bob) {
    $newBob = $bob->replicate();
    $newBobs[] = $newBob->toArray();
    $bobsPivotData[] = [
        'bit_id' => $newBit->id,
        'bob_type' => get_class($newBob),
        'bob_id' => $newBob->id,
        'order' => $index
    ];
}
// I now want to save all the $bobs kept in $newBobs[]
DB::table('bobs')->insert($newBobs);
// Saving all the pivot data in one go
DB::table('bobs_pivot')->insert($bobsPivotData);
My problem is that I can't access $newBob->id before I have inserted the $newBob, which only happens after the loop.
I am looking for how best to reduce saves to the DB. My best guess is that if I can predict the ids that are going to be used, I can do all of this in one loop. Is there a way I can predict these ids?
Or is there a better approach?
You could insert the bobs first and then use the generated ids to insert the bits. This isn't a great solution in a multi-user environment as there could be new bobs inserted in the interim which could mess things up, but it could suffice for your application.
$newBit = $this->replicate();
$newBit->save();

$newBobs = [];
$bobTypes = [];
foreach ($this->bobs as $bob) {
    $newBob = $bob->replicate();
    $newBobs[] = $newBob->toArray();
    $bobTypes[] = get_class($newBob); // rows fetched back later are plain stdClass objects, so remember the class now
}
// On MySQL, insertGetId() with a multi-row insert returns the id of the FIRST inserted row
$insertId = DB::table('bobs')->insertGetId($newBobs);
$insertedBobs = DB::table('bobs')->where('id', '>=', $insertId)->orderBy('id')->get();

$bobsPivotData = [];
foreach ($insertedBobs as $index => $newBob) {
    $bobsPivotData[] = [
        'bit_id' => $newBit->id,
        'bob_type' => $bobTypes[$index],
        'bob_id' => $newBob->id,
        'order' => $index
    ];
}
// Saving all the pivot data in one go
DB::table('bobs_pivot')->insert($bobsPivotData);
I have not tested this, so expect some pseudo-code.
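If you are on MySQL/InnoDB there is a further shortcut, though it rests on an assumption you should verify for your auto-increment lock mode: a single multi-row INSERT whose row count is known up front (a "simple insert") is allocated one consecutive block of auto-increment ids. Under that assumption the ids can be predicted arithmetically and the re-select dropped:

// Sketch: derive the new ids from the first one, assuming the multi-row
// insert received one consecutive auto-increment block (MySQL/InnoDB).
$insertId = DB::table('bobs')->insertGetId($newBobs); // id of the first inserted row

$bobsPivotData = [];
foreach ($newBobs as $index => $newBob) {
    $bobsPivotData[] = [
        'bit_id' => $newBit->id,
        'bob_type' => $bobTypes[$index],
        'bob_id' => $insertId + $index, // predicted id of the $index-th inserted row
        'order' => $index
    ];
}
DB::table('bobs_pivot')->insert($bobsPivotData);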

MongoDB not inserting in order with PHP Driver

There is a problem with inserting into a MongoDB database: the documents are not inserted in the right order.
We have read about write concerns in MongoDB:
http://www.php.net/manual/en/mongo.writeconcerns.php
We use MongoDB 2.6 and PHP driver 1.6 with the following sample code:
set_message_sample('1');
set_message_sample('2');

function set_message_sample($id) {
    // fill in the right connection settings
    $connecting_string = sprintf('mongodb://%s:%d/%s', $hosts, $port, $database);
    // fill in the proper authentication settings
    $connection = new Mongo($connecting_string, array('username' => $username, 'password' => $password));
    $dbname = $connection->selectDB('dbname');
    $collection = $dbname->selectCollection('collection');
    $post = array(
        'title' => $id,
        'content' => 'test ' . $id,
    );
    $collection->insert($post, array("w" => 1));
}
Sometimes the result is that "2" is inserted into the database before "1". We want the inserts to happen in the right order (first "1", then "2") every time. I also noticed that we order the collection by the Mongo ID, which is set automatically by MongoDB.
We have checked many options but the problem is not solved. How can we solve it? (How could we disable something like a cache, or isolate the insert queue to MongoDB?)
So I think you should insert the second document only after confirmation of the first one. Since the insert is asynchronous, you can't be sure which one goes first, so you must issue the second insert only after the first one is acknowledged.
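A minimal sketch of that idea with the legacy driver, reusing one connection ($collection as in the question; $post1 and $post2 are hypothetical documents): with an acknowledged write concern, insert() does not return until the server has confirmed the write, so the second insert cannot reach the server first.

// w=1 makes insert() block until the server acknowledges the write
$result = $collection->insert($post1, array('w' => 1));
if (isset($result['ok']) && $result['ok'] == 1) {
    $collection->insert($post2, array('w' => 1)); // issued only after "1" is confirmed
}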

PHP RedBean: store a bean if one does not exist

I am a little confused. I actively use PHP RedBean as the ORM within my direct-mail service, and I have run into a curious situation: I have a table with a unique key constraint (on subscriber_id, delivery_id) and two scripts that write data into this table.
Here is the source code that inserts into or updates the table:
public static function addOpenPrecedent($nSubscriberId, $nDeliveryId)
{
    $oOpenStatBean = \R::findOrDispense('open_stat', 'delivery_id = :did AND subscriber_id = :sid', array(':did' => $nDeliveryId, ':sid' => $nSubscriberId));
    $oOpenStatBean = array_values($oOpenStatBean);
    if (1 !== count($oOpenStatBean)) {
        throw new ModelOpenStatException(
            "Error updating open statistics: the pair (delivery_id,
            subscriber_id) is not unique: ($nDeliveryId, $nSubscriberId).");
    }
    $oOpenStatBean = $oOpenStatBean[0];
    if (!empty($oOpenStatBean->last_add_dt)) {
        $oOpenStatBean->precedent++;
    } else {
        $oOpenStatBean->delivery_id = $nDeliveryId;
        $oOpenStatBean->subscriber_id = $nSubscriberId;
    }
    $oOpenStatBean->last_add_dt = date('Y-m-d H:i:s');
    \R::store($oOpenStatBean);
}
It is called from both scripts, and I periodically get unique-constraint violations on this table because race conditions occur. I know about SQL's "INSERT ... ON DUPLICATE KEY UPDATE" feature, but how can I obtain the same result purely through my ORM?
Currently, as far as I know, RedBean will not issue an
INSERT ... ON DUPLICATE KEY UPDATE
as the discussion cited in the comments above indicates that RedBean's developer considers upsert to be a business-logic concern that would pollute the ORM's interface. That said, it is most likely achievable by extending RedBean with a custom Query Writer or plugin per the documentation. I haven't tried this, because the method below easily achieves the behavior without messing with the ORM's internals and plugins; however, it does require transactions, models, and a couple of extra queries.
Basically, start your transaction with either R::transaction() or R::begin() before your call to R::store(). Then, in your "FUSE"d model, use the "update" FUSE method to run a query that checks for duplication and retrieves the existing id while locking the necessary rows (i.e. SELECT ... FOR UPDATE). If no id is returned, you are good: just let your regular model validation (or lack thereof) continue as usual and return. If an id is found, simply set $this->bean->id to the returned value, and RedBean will UPDATE rather than INSERT. So, with a model like this:
class Model_OpenStat extends RedBean_SimpleModel{
    function update(){
        $sql = 'SELECT * FROM `open_stat` WHERE `delivery_id`=? AND `subscriber_id`=? LIMIT 1 FOR UPDATE';
        $args = array( $this->bean->delivery_id, $this->bean->subscriber_id );
        $dupRow = R::getRow( $sql, $args );
        if( is_array( $dupRow ) && isset( $dupRow['id'] ) ){
            foreach( $this->bean->getProperties() as $property => $value ){
                #set your criteria here for which fields
                #should come from the one in the database and which should come from this copy;
                #this version simply takes all unset values in the current bean and sets them
                #from the one in the database
                if( !isset( $value ) && isset( $dupRow[$property] ) )
                    $this->bean->$property = $dupRow[$property];
            }
            $this->bean->id = $dupRow['id']; #set id to the duplicate's id
        }
        return true;
    }
}
You would then modify the R::store() call like so:
\R::begin();
\R::store($oOpenStatBean);
\R::commit();
or
\R::transaction( function() use ( $oOpenStatBean ){ R::store( $oOpenStatBean ); } );
The transaction will cause the "FOR UPDATE" clause to lock the found row or, in the event that no row was found, to lock the places in the index where your new row will go so that you don't have concurrency issues.
Now this will not solve one user's update of the record clobbering another, but that is a whole different topic.
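For completeness, a hedged alternative sketch: skip FUSE entirely and issue the upsert as raw SQL through RedBean's R::exec(), letting MySQL resolve the race atomically (column names are taken from the question; adjust to your schema):

// Single atomic statement; the unique key on (delivery_id, subscriber_id)
// decides between INSERT and UPDATE on the server side.
\R::exec(
    'INSERT INTO open_stat (delivery_id, subscriber_id, precedent, last_add_dt)
     VALUES (?, ?, 1, NOW())
     ON DUPLICATE KEY UPDATE precedent = precedent + 1, last_add_dt = NOW()',
    array($nDeliveryId, $nSubscriberId)
);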

How to clear a table in hbase?

I want to empty a table in HBase, e.g. user. Is there any command or function to empty the table without deleting it?
My table structure is :
$mutations = array(
    new Mutation( array(
        'column' => 'username:1',
        'value' => $name
    ) ),
    new Mutation( array(
        'column' => 'email:1',
        'value' => $email
    ) )
);
$hbase->mutateRow("user", $key, $mutations);
Can someone help me?
If you execute this in HBase shell:
> truncate 'yourTableName'
Then HBase will execute these operations for 'yourTableName':
> disable 'yourTableName'
> drop 'yourTableName'
> create 'yourTableName', 'f1', 'f2', 'f3'
Another efficient option is to actually delete the table and then recreate it with all the same settings as before.
I don't know how to do this in PHP, but I do know how to do it in Java. The corresponding actions in PHP should be similar; you just need to check what the API looks like.
In Java using HBase 0.90.4:
// Remember the "schema" of your table
HBaseAdmin admin = new HBaseAdmin(yourConfiguration);
HTableDescriptor td = admin.getTableDescriptor(Bytes.toBytes("yourTableName"));
// Delete your table
admin.disableTable("yourTableName");
admin.deleteTable("yourTableName");
// Recreate your table
admin.createTable(td);
Using hbase shell, truncate <table_name> will do the task.
The HBase Thrift API (which is what PHP uses) doesn't provide a truncate command, only deleteTable and createTable functionality (what's the difference from your point of view?).
Otherwise you have to scan to get all the keys and call deleteAllRow for each key, which isn't a very efficient option.
For this purpose you can use HAdmin, a UI tool for Apache HBase administration. There are "Truncate table" and even "Delete table" buttons on the alter-table page.
Using the alter command:
alter '<table_name>', NAME => 'column_family', TTL => <number_of_seconds>
Here number_of_seconds is the duration after which data will be automatically deleted.
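For example (hypothetical table and column-family names), to have rows expire one day after they are written:
alter 'user', NAME => 'username', TTL => 86400
Note that a TTL expires cells continuously based on their timestamps; it does not clear the table once.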
There's no single command to clear an HBase table, but you can use two workarounds: disable, delete and create the table, or scan all records and delete each one.
Actually, disabling, deleting and creating the table again takes about 4 seconds.
// get Hbase client
$client = <Your code here>;
$t = "table_name";
$tables = $client->getTableNames();
if (in_array($t, $tables)) {
    if ($client->isTableEnabled($t))
        $client->disableTable($t);
    $client->deleteTable($t);
}
$descriptors = array(
    new ColumnDescriptor(array("name" => "c1", "maxVersions" => 1)),
    new ColumnDescriptor(array("name" => "c2", "maxVersions" => 1))
);
$client->createTable($t, $descriptors);
If there's not a lot of data in the table, scanning all rows and deleting each one is much faster.
$client = <Your code here>;
$t = "table_name";
// I don't remember if the list of column families is actually needed here
$columns = array("c1", "c2");
$scanner = $client->scannerOpen($t, "", $columns);
while ($result = $client->scannerGet($scanner)) {
    $client->deleteAllRow($t, $result[0]->row);
}
In this case the data is not physically deleted; it is actually just "marked as deleted" and stays in the table until the next major compaction.
Perhaps using one of these two commands:
DELETE FROM your_table WHERE 1;
Or
TRUNCATE your_table;
Regards!
