I'm trying to calculate the number of unique records based on a mobile column via Laravel's collect and unique methods. The table has 200,000 rows and a column called optout_csv_schedule_id that is indexed, along with mobile. Right now the query has been running for over 15 minutes. How can I improve the performance? I need to count the unique numbers out of the 200,000. My current code is:
/**
 * Get valid lead count
 */
protected function getValidLeadCount($schedule_id)
{
    $optoutConnectionLogs = OptoutConnectionLog::where('optout_csv_schedule_id', $schedule_id)
        ->get();

    // no leads
    if (!$optoutConnectionLogs) {
        return 0;
    }

    // count total unique leads
    $uniqueLeads = collect($optoutConnectionLogs)->unique('mobile')->count();

    return $uniqueLeads;
}
Counting the unique numbers across 200,000 rows in PHP is always going to be slow; let the database do the work instead. Try changing it as follows:
protected function getValidLeadCount($schedule_id)
{
    $uniqueLeads = OptoutConnectionLog::where('optout_csv_schedule_id', $schedule_id)
        ->distinct('mobile')
        ->count('mobile');

    return $uniqueLeads;
}
You are not using the database to do the deduplication: you already fetched all the records with ->get() and are using PHP/Laravel to do it, which is much slower than letting the database handle it. Use distinct() to fetch only the unique records, e.g.:
$optoutConnectionLogs = OptoutConnectionLog::where('optout_csv_schedule_id', $schedule_id)
    ->select('mobile')
    ->distinct()
    ->get();
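From there, $optoutConnectionLogs->count() gives the number of unique mobiles, though letting the database do the counting as well (as in the previous suggestion) avoids transferring the rows at all.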
You read all the data into memory, convert it into PHP objects, and then iterate over them to count the numbers; the index you created is never used for the deduplication itself. Your query can be simplified to the following:
return OptoutConnectionLog::where('optout_csv_schedule_id', $schedule_id)
    ->distinct('mobile')
    ->count();
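The distinct()/count() suggestions compile down to a single aggregate query. If you want to make the generated SQL explicit, an equivalent raw form is sketched below (selectRaw() and value() are standard query-builder methods; the table is whatever the model maps to):

// a sketch: the same count expressed as an explicit COUNT(DISTINCT ...)
$uniqueLeads = OptoutConnectionLog::where('optout_csv_schedule_id', $schedule_id)
    ->selectRaw('COUNT(DISTINCT mobile) as aggregate')
    ->value('aggregate');

With a composite index on (optout_csv_schedule_id, mobile), MySQL can typically satisfy this count from the index alone, without touching the table rows.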
I have a tasks table which tracks car odometer values for tasks. A task relates to itself and has one previous task, defined by a hasOne relationship. I'm trying to create a scope that pulls a complete_at_odometer value into the query results.
This value equals the completed_at_odometer value of the previous task plus the frequency at which the current task needs to be repeated. For example, the current task, Task A, needs to be completed every 10 miles, and the previous task, Task B, was completed at 5 miles; the scope should add complete_at_odometer to the results, which should equal 15.
Here's what I have so far:
public function scopeWithCompleteByOdometer($query)
{
    return $query
        ->join('tasks as oldTask', 'oldTask.current_task_id', '=', 'tasks.id')
        ->select(
            '*',
            'oldTask.completed_at_odometer as last_odometer',
            'tasks.distance_interval as interval'
        );
}
I'm not sure how I can add the values last_odometer and interval together while staying in the database layer of my application.
I'm using Laravel 7 and MySql 5.7.
Thanks for any help!
Use a raw expression with DB::raw():
return $query
    ->join('tasks as oldTask', 'oldTask.current_task_id', '=', 'tasks.id')
    ->select(DB::raw('(oldTask.completed_at_odometer + tasks.distance_interval) AS total'));
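The select above replaces all columns with the computed total. If you still need the task's other columns alongside it, selectRaw() can be stacked on a normal select (a sketch reusing the question's aliases, naming the computed column complete_at_odometer as the question intended):

return $query
    ->join('tasks as oldTask', 'oldTask.current_task_id', '=', 'tasks.id')
    ->select('tasks.*')
    ->selectRaw('(oldTask.completed_at_odometer + tasks.distance_interval) as complete_at_odometer');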
Or compute the sum in PHP from the rows you retrieved:
$a = $row->completed_at_odometer;
$b = $row->distance_interval;
$total = $a + $b;
I have a column which shows the total number of referrals a user has.
Below is my code. I want to paginate, ordered by the user with the highest number of referrals. I tried max('totalref'), but then paginate() doesn't work.
So I am confused about how to arrange this.
public function leaders()
{
    $accounts = Account::where('totalref', '>', 5)->orderBy('id', 'ASC')->paginate(200);

    return view('admin.leaders', compact('accounts'));
}
When you are working with paginated data, you want to do the ordering, searching, and so on in the query itself. Based on your comment, it seems totalref is stored as a string in your database; convert it to an integer column and the following will work:
$accounts = Account::where('totalref', '>', 5)->orderBy('totalref', 'desc')->paginate(200);
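If converting the column is not immediately possible, casting inside the query is a workable stopgap (a sketch; CAST(... AS UNSIGNED) is MySQL syntax, and note that casting in the WHERE clause prevents index use on totalref):

$accounts = Account::whereRaw('CAST(totalref AS UNSIGNED) > 5')
    ->orderByRaw('CAST(totalref AS UNSIGNED) DESC')
    ->paginate(200);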
There are two ways of getting data from the database in Laravel, and I want to know which is more efficient.
1. get with count
$cnt = \App\Models\Res_Times::where(...)
    ->count();

if ($cnt > 0) {
    $result = \App\Models\Res_Times::where(...)
        ->get();
}
2. get directly
$result = \App\Models\Res_Times::where(...)
    ->get();
I don't know whether the MySQL COUNT query is fast enough to be worth the extra call.
Please let me know.
The second one is better. In the first case you send two queries to the database. And if there are eager-loaded relationships, Laravel will not try to load them when there are no results anyway.
A more efficient way to see whether matching rows exist is to use exists(), because it determines whether rows for the query exist before anything is loaded into a collection, for example:
$result = \App\Models\Res_Times::where(...);

if ($result->exists()) {
    return $result->get();
}
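If you need the rows themselves whenever they exist, a single get() plus a collection check avoids the extra round trip entirely:

$result = \App\Models\Res_Times::where(...)->get();

if ($result->isNotEmpty()) {
    return $result;
}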
I have a problem with Laravel's ORM Eloquent chunk() method.
It misses some results.
Here is a test query:
$destinataires = Destinataire::where('statut', '<', 3)
    ->where('tokenized_at', '<', $date_active)
    ->chunk($this->chunk, function ($destinataires) {
        foreach ($destinataires as $destinataire) {
            $this->i++;
        }
    });

echo $this->i;
It gives 124838 results.
But:
$num_dest = Destinataire::where('statut', '<', 3)
    ->where('tokenized_at', '<', $date_active)
    ->count();

echo $num_dest;
gives 249676, exactly TWICE the first code example.
My script is supposed to edit all matching records in the database. If I launch it multiple times, it only processes half of the remaining records each time.
I tried with DB::table() instead of the Model.
I tried to add a ->take(20000) but it doesn't seem to be taken into account.
I echoed the query with ->toSql() and everything seems to be fine (the LIMIT clause is added when I add the ->take() parameter).
Any suggestions?
Imagine you are using the chunk method to delete all of the records. The table has 2,000,000 records and you are going to delete all of them in 1,000-row chunks.
$query->orderBy('id')->chunk(1000, function ($items) {
    foreach ($items as $item) {
        $item->delete();
    }
});
It deletes the first 1,000 records after fetching them with a query like this:
SELECT * FROM table ORDER BY id LIMIT 0, 1000
The next query from the chunk method is then:
SELECT * FROM table ORDER BY id LIMIT 1000, 1000
Here is our problem: we deleted 1,000 records, and then fetch rows 1,000 to 2,000 of what remains. The 1,000 rows that shifted into the first positions are skipped entirely, so the second step does not delete the records it should. The same scenario repeats at every subsequent step: each one misses 1,000 records, which is why chunk() does not give the expected result in these situations.
I made the example about deletion because it shows the exact behavior of the chunk method.
UPDATE:
You can use chunkById() to delete safely.
Read more here:
http://laravel.at.jeffsbox.eu/laravel-5-eloquent-builder-chunk-chunkbyid
https://laravel.com/api/5.4/Illuminate/Database/Eloquent/Builder.html#method_chunkById
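A minimal sketch of the safe deletion (chunkById() orders and pages on the primary key itself, so rows deleted in earlier chunks never shift the window):

$query->chunkById(1000, function ($items) {
    foreach ($items as $item) {
        $item->delete();
    }
});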
Quick answer: Use chunkById() instead of chunk().
When updating or deleting records while iterating over them, any change that affects which rows match the query (or their order) shifts the offsets of the subsequent chunk queries. This can result in records never being included in the results.
The explanation can be found in the Laravel documentation:
If you are updating database records while chunking results, your chunk results could change in unexpected ways. If you plan to update the retrieved records while chunking, it is always best to use the chunkById method instead. This method will automatically paginate the results based on the record's primary key.
Example usage of chunkById():
DB::table('users')->where('active', false)
->chunkById(100, function ($users) {
foreach ($users as $user) {
DB::table('users')
->where('id', $user->id)
->update(['active' => true]);
}
});
(end of the update)
Below is the original answer which used the cursor() method instead of the chunk() method to solve the problem:
I had the same problem - only half of the total results were passed to the callback function of the chunk() method.
Here is the code which had the same problem - half of the transactions were not processed:
Transaction::whereNull('processed')->chunk(100, function ($transactions) {
$transactions->each(function($transaction){
$transaction->process();
});
});
I used Laravel 5.4 and managed to solve the problem by replacing the chunk() method with the cursor() method and changing the code accordingly:
foreach (Transaction::whereNull('processed')->cursor() as $transaction) {
$transaction->process();
}
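cursor() works here because it executes a single query and streams the results one model at a time, so there are no follow-up OFFSET queries whose windows can drift as rows are updated.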
Even though this answer doesn't address the root of the problem itself, it provides a workable solution.
For anyone looking for a bit of code that solves this, here you go:
while (Model::where('x', '>', 'y')->count() > 0) {
    Model::where('x', '>', 'y')->chunk(10, function ($models) {
        foreach ($models as $model) {
            $model->delete();
        }
    });
}
The problem is the deletion/removal of models while chunking through the total. Wrapping the chunking in a while loop makes sure you get them all. This example works when deleting models; change the while condition to suit your needs.
When you fetch data using chunk(), the same SQL query is executed each time; only the offset differs, increasing by the amount given as the chunk size. For example:
SELECT * FROM users WHERE status = 0;
Let's say there are 200 records (suppose that is a lot, so we want to retrieve the data in chunks of 50). The queries look like:
SELECT * FROM users WHERE status = 0 LIMIT 50 OFFSET 0
where the offset is dynamic: 50 on the next run, then 100, then 150.
The problem with using Laravel's chunk while updating is that only the offset changes, while the number of matching rows shrinks each time we retrieve a chunk of data. The first time, 200 records match the WHERE condition. But if we update the status of the first 50 to 1 (status = 1), the next run still executes the same query:
SELECT * FROM users WHERE status = 0 LIMIT 50 OFFSET 50
Only 150 records still match, because we set status = 1 on 50 rows, yet the offset is now 50. So we skip the first 50 of those 150 rows and apply the same update to the next 50. That means rows 50 to 100 of the remaining 150 get status = 1.
The third time we run the query:
SELECT * FROM users WHERE status = 0 LIMIT 50 OFFSET 100
But only 100 users still have status = 0, so with an offset of 100 the query returns no rows and the chunking stops.
This is not what you would expect at first thought. But this is how it works, and it is why only half of the data gets updated while the other half is skipped.
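This is also why chunkById() is immune to the problem: it anchors each page to the primary key instead of a row offset, so the generated queries look roughly like this (a sketch):
SELECT * FROM users WHERE status = 0 AND id > [last id of previous chunk] ORDER BY id ASC LIMIT 50
Rows updated by earlier chunks can never shift this window, because the window follows the id column rather than a position in the result set.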
I'm using Laravel 4, and I need to insert several rows into a MySQL table and get their inserted IDs back.
For a single row I can use ->insertGetId(), but it has no support for multiple rows. If I could at least retrieve the ID of the first row, as plain MySQL does, that would be enough to figure out the others.
This is MySQL's documented behavior for LAST_INSERT_ID():
Important
If you insert multiple rows using a single INSERT statement, LAST_INSERT_ID() returns the value generated for the first inserted row only. The reason for this is to make it possible to reproduce easily the same INSERT statement against some other server.
You can try using multiple single-row inserts and collecting their IDs, or, after save(), $data->id should be the last inserted ID.
If you are using InnoDB, which supports transactions, then you can easily solve this problem.
There are multiple ways you can solve it.
Let's say there's a users table with two columns, id and name, backed by a User model.
Solution 1
Your data looks like
$data = [['name' => 'John'], ['name' => 'Sam'], ['name' => 'Robert']]; // this will insert 3 rows
Let's say the last id in the table was 600. You can insert multiple rows into the table like this:
DB::beginTransaction();

User::insert($data); // remember: $data is an array of associative arrays, not just a single assoc array

$startID = DB::select('select last_insert_id() as id'); // returns an array with a single item in it
$startID = $startID[0]->id; // this will return 601

$lastID = $startID + count($data) - 1; // this will return 603

DB::commit();
Now you know the inserted rows have IDs in the range 601 to 603.
Make sure to import the DB facade at the top:
use Illuminate\Support\Facades\DB;
Solution 2
This solution requires that you have a varchar or some other text column.
$randomstring = Str::random(8); // requires: use Illuminate\Support\Str;

$data = [['name' => "John$randomstring"], ['name' => "Sam$randomstring"]];
You get the idea here. You add that random string to a varchar or text field.
Now insert the rows like this
DB::beginTransaction();
User::insert($data);
// this will return the last inserted ids
$lastInsertedIds = User::where('name', 'like', '%' . $randomstring)
->select('id')
->get()
->pluck('id')
->toArray();
// now you can update that row to the original value that you actually wanted
User::whereIn('id', $lastInsertedIds)
->update(['name' => DB::raw("replace(name, '$randomstring', '')")]);
DB::commit();
Now you know exactly which rows were inserted.
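A side benefit of Solution 2 is that it does not rely on the inserted IDs being consecutive, so it stays correct even if other clients insert rows into the same table concurrently.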
As user Xrymz suggested, DB::raw('LAST_INSERT_ID();') returns the ID of the first inserted row.
According to the query builder API, insertGetId() accepts an array:
public int insertGetId(array $values, string $sequence = null)
So you should be able to do:
DB::table('table')->insertGetId($arrayValues);
That is to say, if you are using MySQL, you could retrieve the first ID this way and calculate the rest. There is also a DB::getPdo()->lastInsertId() function that could help.
Or if one of these methods returned the last ID instead, you could calculate back to the first inserted ID from it too.
EDIT
According to the comments, my suggestions may be wrong.
Regarding the question of what happens if a row is inserted by another user in between: it depends on the storage engine. If an engine with table-level locking (MyISAM, MEMORY, MERGE) is used, the question is irrelevant, since there cannot be two simultaneous writes to the table.
If a row-level locking engine (InnoDB) is used, another possibility might be to just insert the data and then retrieve all the rows by some known field with the whereIn() method, or to arrange table-level locking yourself.
$result = Invoice::create($data);

if ($result) {
    $id = $result->id;
}

It worked for me.
Note: Laravel version 9.