Laravel 5.2 - Updating multiple rows with one query - php

So I do a lot of calculations and at the end I have rates that need to be saved to existing rows in a table.
The array I have will be similar to the following:
[
    <model_id> => [
        'rate' => <some rate>
    ]
    <model_id_2> => [
        'rate' => <some other rate>
    ]
    .....
]
Now obviously I could foreach through this array and do an update for each and every item in it, but I could end up with 100 update calls. Is there a way (through Laravel's Eloquent, or even a raw SQL query) to do all these updates in one call?

You may try Eloquent's update() for updating multiple records to the same values. Here is some code I use to update multiple records in my table:
\App\Notification::where('to_id', '=', 0)
->update(['is_read' => 1]);

If you are worried about the request time, you can handle this by firing an event and then queueing your listener/job, which will save your models so they can be processed asynchronously. For examples, see the Laravel docs for queues.
As far as I know, you cannot update multiple rows to different values in a single Eloquent call.
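That said, the question also allows for a raw SQL query, and one common workaround for different values per row is a single UPDATE with a CASE expression built from the array. A minimal sketch, assuming the table is called models with an integer primary key id and a rate column (all three names are placeholders):
use Illuminate\Support\Facades\DB;

$rates = [
    7  => ['rate' => 1.25],
    12 => ['rate' => 3.50],
];

$cases    = '';
$bindings = [];

foreach ($rates as $id => $row) {
    $cases     .= 'WHEN ? THEN ? ';
    $bindings[] = $id;
    $bindings[] = $row['rate'];
}

$ids          = array_keys($rates);
$placeholders = implode(',', array_fill(0, count($ids), '?'));

// One query: UPDATE models SET rate = CASE id WHEN ? THEN ? ... END WHERE id IN (...)
DB::update(
    "UPDATE models SET rate = CASE id {$cases}END WHERE id IN ({$placeholders})",
    array_merge($bindings, $ids)
);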

Should I use Laravel Eloquent in this situation or Query Builder?

Should I use Query Builder for this in Laravel 9?
DB::table('galleries')->insert(
['book_id' => $book->id, 'photo' => $name, 'cover' => 1],
);
or Laravel Eloquent?
Gallery::create(
['book_id' => $book->id, 'photo' => $name, 'cover' => 1],
);
And what is the difference?
They are both basically the same. The only difference comes when you want to insert a huge amount of data.
The performance of query builder is much faster than that of the eloquent ORM when handling VERY LARGE amounts of data.
Query Builder insert() method:
The difference is that Query Builder's insert() runs a raw MySQL INSERT under the hood, and just like a raw query it can create multiple rows in a single statement (note that the rows go inside one outer array):
DB::table('galleries')->insert([
    ['book_id' => $book->id, 'photo' => <photo A path>, 'cover' => 1],
    ['book_id' => $book->id, 'photo' => <photo B path>, 'cover' => 0],
]);
The above will insert two rows in a single query.
Second, it is very fast and optimized for speed compared to the Eloquent create() method.
Third, it does not emit model events (creating, created, etc.).
It does not return the created models in the result.
Eloquent create() method:
It inserts only a single row at a time. Whenever you want to insert multiple rows/models, you need to run create() in a foreach loop or use the createMany() method.
Both approaches work the same way under the hood: if you want to insert 100 rows into a database table, both will run 100 insert queries.
It emits model events (creating, created, etc.).
It returns the inserted model in the result.
It's slower than the Query Builder insert() method because it handles a lot of computation and event pipelines during insertion.
Suggestion:
If you want to catch model events, use Eloquent's create(). If you want query optimization and fast insertion and don't care about model events, use the insert() method.
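For completeness, a small sketch of the createMany() route mentioned above, assuming a Book model with a galleries() hasMany relation and the columns mass-assignable on Gallery (those names follow the question's example). Note that this still runs one INSERT per row and fires events for each model:
$galleries = $book->galleries()->createMany([
    ['photo' => $pathA, 'cover' => 1],   // $pathA / $pathB are placeholder paths
    ['photo' => $pathB, 'cover' => 0],
]);

// $galleries is a Collection of Gallery models; book_id is filled in by the relation.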

Paginate and search associated query with dynamic AJAX loading in CakePHP 3

I am working on an application in CakePHP 3.7.
We have 3 database tables with the following hierarchy. These tables are associated correctly according to the Table classes:
regulations
groups
filters
(The associations are shown in a previous question: CakePHP 3 - association is not defined - even though it appears to be)
I can get all of the data from all three tables as follows:
$regulations = TableRegistry::getTableLocator()->get('Regulations');
$data = $regulations->find('all', ['contain' => ['Groups' => ['Filters']]]);
$data = $data->toArray();
The filters table contains >1300 records. I'm therefore trying to build a feature which loads the data progressively via AJAX calls as the user scrolls down the page similar to what's described here: https://makitweb.com/load-content-on-page-scroll-with-jquery-and-ajax/
The problem is that I need to be able to count the total number of rows returned. However, the data for this exists in 3 tables.
If I do debug($data->count()); it will output 8 because there are 8 rows in the regulations table. Ideally I need a count of the rows it's returning in the filters table (~1300 rows) which is where most of the data exists in terms of the initial load.
The problem is further complicated because this feature allows a user to perform a search. It might be the case that a given search term exists in all 3 tables, 1 - 2 of the tables, or not at all. I don't know whether the correct way to do this is to try and count the rows returned in each table, or the rows overall?
I've read How to paginate associated records? and Filtering By Associated Data in the Cake docs.
Additional information
The issue seems to come down to how to write a query using the ORM syntax Cake provides. The following (plain MySQL) query will actually do what I want. Assuming the user has searched for "Asbestos":
SELECT
regulations.label AS regulations_label,
groups.label AS groups_label,
filters.label AS filters_label
FROM
groups
JOIN regulations
ON groups.regulation_id = regulations.id
JOIN filters
ON filters.group_id = groups.id
WHERE regulations.label LIKE '%Asbestos%'
OR groups.label LIKE '%Asbestos%'
OR filters.label LIKE '%Asbestos%'
ORDER BY
regulations.id ASC,
groups_label ASC,
filters_label ASC
LIMIT 0,50
Let's say there are 203 rows returned. The LIMIT condition means I am getting the first 50. Some JS on the frontend (which isn't really relevant in terms of how it works) will make an AJAX call as the user scrolls to the bottom of the page to re-run this query with a different offset (e.g. LIMIT 50, 50 for the next 50 results).
The problem seems to be twofold:
1. If I write the query using Cake's ORM syntax, the output is a nested array. Conversely, if I write it in plain SQL it returns a result set with just 3 columns containing the labels I need to display. This structure is much easier to work with when outputting the required data on the page.
2. The second, and perhaps more important, issue is that I can't see how to write the LIMIT condition in the ORM syntax to make this work, due to the nested structure described in point 1. If I add $data->limit(50), for example, it only imposes the limit on the regulations table. So the search essentially works on any associated data for the first 50 rows in regulations, as opposed to the LIMIT condition I've written in MySQL, which takes into consideration the entire result set, including the columns from all 3 tables.
To further elaborate point 2, assume the tables contain the following numbers of rows:
regulations: 150
groups: 1000
filters: 5000
If I use $data->limit(50) it would only apply to 50 rows in the regulations table. I need to apply the LIMIT to the result set after searching all rows in all 3 tables for a given term.
Creating that query using the query builder is pretty simple. If you want joins for non-1:1 associations, use the *JoinWith() methods instead of contain(), in this case innerJoinWith(), something like:
$query = $GroupsTable
    ->find()
    ->select([
        'regulations_label' => 'Regulations.label',
        'groups_label' => 'Groups.label',
        'filters_label' => 'Filters.label',
    ])
    ->innerJoinWith('Regulations')
    ->innerJoinWith('Filters')
    ->where([
        'OR' => [
            'Regulations.label LIKE' => '%Asbestos%',
            'Groups.label LIKE' => '%Asbestos%',
            'Filters.label LIKE' => '%Asbestos%',
        ],
    ])
    ->order([
        'Regulations.id' => 'ASC',
        'Groups.label' => 'ASC',
        'Filters.label' => 'ASC',
    ]);
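Because innerJoinWith() produces one flat row per Regulations/Groups/Filters combination, limit(), offset() and count() then apply to the joined result set rather than just the root table. A minimal sketch of the paging side, with count() taken before the limit so it reflects the whole result set ($offset would come from the AJAX request):
// Total number of matching joined rows, so the frontend knows when to stop loading.
$total = $query->count();

// One "page" of 50 flat rows; later AJAX calls pass offsets of 50, 100, ...
$rows = $query
    ->limit(50)
    ->offset($offset)
    ->all();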
See also
Cookbook > Database Access & ORM > Query Builder > Using innerJoinWith
Cookbook > Database Access & ORM > Query Builder > Adding Joins

Fastest way to insert/update a million rows in Laravel 5.7

I'm using Laravel 5.7 to fetch large amounts of data (around 500k rows) from an API server and insert it into a table (call it Table A) quite frequently (at least every six hours, 24/7) - however, it's enough to insert only the changes the next time we insert (but at least 60-70% of the items will change). So this table will quickly have tens of millions of rows.
I came up with the idea to make a helper table (call it Table B) to store all the new data into it. Before inserting everything into Table A, I want to compare it to the previous data (with Laravel, PHP) from Table B - so I will only insert the records that need to be updated. Again it will usually be around 60-70% of the records.
My first question is whether the above-mentioned way is the preferred way of doing it in this situation (obviously I want to make it happen as fast as possible). I assume that searching for and updating the records in the table would take a lot more time and would keep the table busy / lock it. Is there a better way to achieve the same thing (meaning to update the records in the DB)?
The second issue I'm facing is the slow insert times. Right now I'm using a local environment (16GB RAM, I7-6920HQ CPU) and MySQL is inserting the rows very slowly (about 30-40 records at a time). The size of one row is around 50 bytes.
I know it can be made a lot faster by fiddling around with InnoDB's settings. However, I'd also like to think that I can do something on Laravel's side to improve performance.
Right now my Laravel code looks like this (only inserting 1 record at a time):
foreach ($response as $key => $value)
{
    DB::table('table_a')
        ->insert([
            'test1' => $value['test1'],
            'test2' => $value['test2'],
            'test3' => $value['test3'],
            'test4' => $value['test4'],
            'test5' => $value['test5'],
        ]);
}
$response is an array.
So my second question: is there any way to increase the insert rate to something like 50k records/second, both at the Laravel application layer (by doing batch inserts) and at the MySQL InnoDB level (by changing the config)?
Current InnoDB settings:
innodb_buffer_pool_size = 256M
innodb_log_file_size = 256M
innodb_thread_concurrency = 16
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = normal
innodb_use_native_aio = true
MySQL version is 5.7.21.
If I forgot to tell/add anything, please let me know in a comment and I will do it quickly.
Edit 1:
The server that I'm planning to use will have SSD on it - if that makes any difference. I assume MySQL inserts will still count as I/O.
Disable autocommit and manually commit at end of insertion
According to the MySQL 8.0 docs (8.5.5 Bulk Data Loading for InnoDB Tables), you can increase the INSERT speed by turning off autocommit:
When importing data into InnoDB, turn off autocommit mode, because it performs a log flush to disk for every insert. To disable autocommit during your import operation, surround it with SET autocommit and COMMIT statements:
SET autocommit=0;
... SQL import statements ...
COMMIT;
Another way to do it in Laravel is to use database transactions:
DB::beginTransaction();
// Your inserts here
DB::commit();
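In practice the closure-based DB::transaction() helper is often more convenient, since it commits on success and rolls back automatically if the closure throws. A minimal sketch, assuming $rows is already an array of column => value arrays for the question's table:
use Illuminate\Support\Facades\DB;

DB::transaction(function () use ($rows) {
    // Chunking keeps each multi-row INSERT under the DB's packet/placeholder limits.
    foreach (array_chunk($rows, 1000) as $chunk) {
        DB::table('table_a')->insert($chunk);
    }
});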
Use INSERT with multiple VALUES
Also according to the MySQL 8.0 docs (8.2.5.1 Optimizing INSERT Statements), you can optimize INSERT speed by using multiple VALUES lists in a single INSERT statement.
To do it with Laravel, you can just pass an array of values to the insert() method:
DB::table('your_table')->insert([
    [
        'column_a' => 'value',
        'column_b' => 'value',
    ],
    [
        'column_a' => 'value',
        'column_b' => 'value',
    ],
    [
        'column_a' => 'value',
        'column_b' => 'value',
    ],
]);
According to the docs, it can be many times faster.
Read the docs
Both MySQL docs links in this post have tons of tips on increasing INSERT speed.
Avoid using Laravel/PHP for the insert
If your data source is (or can be) a CSV file, you can import it a lot faster using mysqlimport.
Using PHP and Laravel to import data from a CSV file adds overhead, unless you need to do some data processing before inserting.
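If you do stay in PHP, one hedged option is to write the rows to a temporary CSV and let MySQL load it in a single statement via LOAD DATA LOCAL INFILE. Rough sketch only: the file path is a placeholder, and it assumes local_infile is enabled on the server and PDO::MYSQL_ATTR_LOCAL_INFILE => true is set in the connection options of config/database.php:
$sql = "LOAD DATA LOCAL INFILE '/tmp/table_a.csv'
        INTO TABLE table_a
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\\n'
        (test1, test2, test3, test4, test5)";

// Bypasses the query builder entirely; the raw PDO connection runs the load.
DB::connection()->getPdo()->exec($sql);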
Thanks @Namoshek, I had the same problem. The solution is like this:
$users = array_chunk($data, 500, true);
foreach ($users as $key => $user) {
    Model::insert($user);
}
Depending on your data, you can also make use of array_push() and then insert.
Don't call insert() inside a foreach(), because it will execute n queries against the database when you have n items of data.
First create an array of rows whose keys match the database column names, and then pass that array to the insert() function.
This will execute only one query against the database, regardless of how many items you have.
This is way, way faster.
$data_to_insert = [];
foreach ($response as $key => $value)
{
    array_push($data_to_insert, [
        'test1' => $value['test1'],
        'test2' => $value['test2'],
        'test3' => $value['test3'],
        'test4' => $value['test4'],
        'test5' => $value['test5'],
    ]);
}
DB::table('table_a')->insert($data_to_insert);
You need to do a multi-row insert, but you also need to chunk your inserts so you don't exceed your DB limits.
You can do this by chunking your array:
foreach (array_chunk($response, 1000) as $responseChunk)
{
    $insertableArray = [];
    foreach ($responseChunk as $value) {
        $insertableArray[] = [
            'test1' => $value['test1'],
            'test2' => $value['test2'],
            'test3' => $value['test3'],
            'test4' => $value['test4'],
            'test5' => $value['test5'],
        ];
    }
    DB::table('table_a')->insert($insertableArray);
}
You can increase the chunk size from 1,000 until you approach your DB configuration limit. Make sure to leave some safety margin (around 0.6 times your DB limit).
You can't go any faster than this using Laravel.
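As a rough way to find that limit from Laravel, the main server-side setting to look at is usually max_allowed_packet (prepared statements are additionally capped at 65,535 bound placeholders). A quick sketch of the check, with the chunk-size arithmetic only as a heuristic:
// Returns something like [{"Variable_name": "max_allowed_packet", "Value": "4194304"}].
$row = DB::select("SHOW VARIABLES LIKE 'max_allowed_packet'");

$bytesPerRow  = 50; // approximate row size from the question
$maxChunkSize = (int) floor(((int) $row[0]->Value * 0.6) / $bytesPerRow);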

get record ids after bulk insert in laravel

I'm using Laravel 5.4 and I want to get the record IDs after I insert them into the table. Here's the data I want to insert, stored in an array:
$data = [
    [
        "count" => "100",
        "start_date" => 1515628800
    ],
    [
        "count" => "102",
        "start_date" => 1515715200
    ]
];
Here I insert the array of items at once:
\Auth::user()->schedule()->insert($data);
but this method returns a boolean, and I want to get the IDs (or all columns) of these new items after they have been inserted. How should I do this? It doesn't matter whether it's done with Eloquent or the query builder. I already tried the insertGetId method, but it doesn't seem to work with a multidimensional array. If it's not possible with Laravel, what would be the best way to implement this?
Workaround: select the highest PK value after the insert and you should know what IDs they got. You might need locking to prevent a race with other users, though.
Edit:
LOCK TABLES schedule WRITE;
-- select the last ID here
UNLOCK TABLES;
The list of IDs then runs from (last ID - insert count) + 1 up to the last ID.
My approach is to generate the IDs server-side any time I run into this issue, but you have to make sure the IDs are unique. Also, you can do it for exactly the tables you need (not the entire database). Hope this helps.
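If one INSERT per row is acceptable, a simpler route is the relation's createMany(), which returns the created models with their auto-increment IDs populated. A small sketch, assuming count and start_date are mass-assignable on the related model:
$created = \Auth::user()->schedule()->createMany($data);

// A Collection of models, each with ->id set by the database.
$ids = $created->pluck('id');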

Optimize UpdateOrCreate in Laravel

I download XML from an external URL and parse it into MySQL.
Rate::updateOrCreate([
    'exchanger_id' => $exchangerId,
    'signature_from_id' => $signatureFromId,
    'signature_to_id' => $signatureToId
], [
    'in' => $item->in,
    'out' => $item->out,
    'amount' => $item->amount
]);
The thing is, the XML contains many items, and I parse many sites, so it results in about 20K queries for 20-25 URLs. Later on I'll parse about 300 URLs and the number of queries will rise.
How could I optimize this process? I mean the updateOrCreate part. If a row with exchanger_id, signature_from_id and signature_to_id exists I need to update it, otherwise create a new row. And repeat it for every xml item.
As far as I can tell, Laravel makes at least 2 queries: first a SELECT that checks whether the row exists, then the create/update.
Couldn't think about any batch examples :(
Update
I made a unique composite key for first three columns (exchanger_id, signature_from_id, signature_to_id) and downloaded this trait https://github.com/yadakhov/insert-on-duplicate-key
The number of queries dropped to 26 (from about 20,000), but the amount of time required to handle all of this didn't change. What am I missing?
Why not do this instead, if your business case allows it:
(1) Store all the XML in bulk in some folder in your app.
(2) Create a cron job that will do the processing for you and fire an event that you can capture when the processing is complete, so you can take the next step. Take a look at scheduling jobs here. Also take a look at queues and events in Laravel here for some more advanced ideas.
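On the batching side, what the insert-on-duplicate-key trait does boils down to a multi-row INSERT ... ON DUPLICATE KEY UPDATE, which can also be hand-rolled and chunked. A rough sketch only: the rates table name, the $rows variable and the chunk size are assumptions, the columns follow the updateOrCreate() call above, and the unique composite key from the update is assumed to be in place:
// $rows: each element is [exchanger_id, signature_from_id, signature_to_id, in, out, amount]
foreach (array_chunk($rows, 500) as $chunk) {
    $placeholders = implode(', ', array_fill(0, count($chunk), '(?, ?, ?, ?, ?, ?)'));
    $bindings     = array_merge(...$chunk);

    // One statement per chunk; rows matching the unique composite key are updated in place.
    DB::statement(
        'INSERT INTO rates (exchanger_id, signature_from_id, signature_to_id, `in`, `out`, amount)
         VALUES ' . $placeholders . '
         ON DUPLICATE KEY UPDATE `in` = VALUES(`in`), `out` = VALUES(`out`), amount = VALUES(amount)',
        $bindings
    );
}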
