I would like to retrieve the last file inserted into my table. I know that the method first() exists and gives you the first file in the table, but I don't know how to get the last inserted record.
You'll need to order by the same field you're ordering by now, but descending.
As an example, if you have a timestamp column recording when the upload was done, called upload_time, you'd do something like this:
For Pre-Laravel 4
return DB::table('files')->order_by('upload_time', 'desc')->first();
For Laravel 4 and onwards
return DB::table('files')->orderBy('upload_time', 'desc')->first();
For Laravel 5.7 and onwards
return DB::table('files')->latest('upload_time')->first();
This will order the rows in the files table by upload time in descending order and take the first one. This will be the latest uploaded file.
Use the latest scope provided by Laravel out of the box.
Model::latest()->first();
That way you're not retrieving all the records. It's a nicer shortcut for orderBy.
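For reference, latest() with no argument orders by the created_at column, so the shortcut above expands to the explicit orderBy form:
Model::latest()->first(); // equivalent to the line below
Model::orderBy('created_at', 'desc')->first();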
You never mentioned whether you are using Eloquent, Laravel's default ORM, or not. In case you are, and you want to get the latest entry of a User table by created_at, you could do the following:
User::orderBy('created_at', 'desc')->first();
First it orders users by the created_at field in descending order, and then it takes the first record of the result.
That will return an instance of the User object, not a collection. Of course, to make use of this alternative, you need a User model extending the Eloquent class. This may sound a bit confusing, but it's really easy to get started, and the ORM can be really helpful.
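A minimal sketch of such a model (Laravel 4 era shown; in newer versions you would extend Illuminate\Database\Eloquent\Model instead):
class User extends Eloquent
{
    protected $table = 'users'; // optional: Laravel infers the table name from the class
}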
For more information, check out the official documentation which is pretty rich and well detailed.
To get the last record's details:
Model::all()->last(); or
Model::orderBy('id', 'desc')->first();
To get the last record's id:
Model::all()->last()->id; or
Model::orderBy('id', 'desc')->first()->id;
There are many answers here, some of which I don't quite agree with, so I will summarise again with my comments.
In case you have just created a new object.
By default, when you create a new object, Laravel returns the new object.
$lastCreatedModel = $model->create($dataArray);
dd($lastCreatedModel); // will dump the newly created model
echo $lastCreatedModel->key; // will output the value of that attribute on the last created object
Then there is the approach of combining the method all() or get() with last() and first(), without a condition.
Very bad! Don't do that!
Model::get()->last(); // the most recent entry
Model::all()->last(); // the most recent entry
Model::get()->first(); // the oldest entry
Model::all()->first(); // the oldest entry
Which is basically the wrong approach! You get()/all() every record, which in some cases can be 200,000 rows or more, and then pick out just one. Not good! Imagine your site gets traffic from Facebook and then runs a query like that; in one month that would probably mean the CO₂ emissions of a city like Paris in a year, because the servers have to work unnecessarily hard. So forget this approach, and if you find it in your code, replace or rewrite it. You may not notice it with 100 rows, but with 1,000 or more it becomes noticeable.
Very good would be:
Model::latest('id')->first(); // the most recent entry
Model::latest('id')->limit(1)->get(); // the most recent entry, as a one-item collection
Model::orderBy('id', 'desc')->first(); // the most recent entry
Model::orderBy('id', 'desc')->limit(1)->get(); // the most recent entry, as a one-item collection
Model::orderBy('id', 'asc')->first(); // the oldest entry
Model::orderBy('id', 'asc')->limit(1)->get(); // the oldest entry, as a one-item collection
If orderBy is used in this context, the primary key should always be used as the basis, not created_at.
Laravel collections have a last method:
Model::all()->last(); // last element
Model::all()->last()->name; // extract the value of the name field from that element
This is a simple way to do it (though note the caveat above about loading every record with all()).
You can use the latest scope provided by Laravel with the field you would like to order by; let's say you want it ordered by id, then:
Model::latest('id')->first();
This way you avoid Laravel's default of ordering by the created_at field.
Try this:
Model::latest()->get();
Don't use Model::latest()->first(), because if your collection has multiple rows created at the same timestamp (which will happen when you use database transactions, DB::beginTransaction() and DB::commit()), the first row of the collection will be returned, and that is obviously not the last row.
Suppose rows with ids 11, 12 and 13 are created inside one transaction. All of them will have the same timestamp, so what you get from Model::latest()->first(); is the row with id 11.
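One possible workaround (a sketch, not from the original answer): add the primary key as a tiebreaker, so ties on created_at resolve deterministically:
Model::orderBy('created_at', 'desc')->orderBy('id', 'desc')->first();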
To get the last record details, use the code below:
Model::where('field', 'value')->get()->last();
Another fancy way to do it in Laravel 6.x (unsure, but it should work for 5.x as well):
DB::table('your_table')->get()->last();
You can access fields too:
DB::table('your_table')->get()->last()->id;
Honestly, this was so frustrating that I had to go through almost the entire collection of answers here to find out that most of them weren't doing what I wanted. In fact, I only wanted to display the following in the browser:
The last row ever created on my table
Just 1 resource
I wasn't looking to order a set of resources and run through that list in descending fashion; the line of code below is what worked for me on a Laravel 8 project.
Model::latest()->limit(1)->get();
Use Model::where('user_id', $user_id)->latest()->first();
It will return only one record; if none is found, it will return null.
Hope this will help.
Model::where($where)->get()->last()->id;
For Laravel 8:
Model::orderBy('id', 'desc')->withTrashed()->take(1)->first()->id
The resulting SQL query:
Model::orderBy('id', 'desc')->withTrashed()->take(1)->toSql()
select * from "timetables" order by "id" desc limit 1
If you are looking for the actual row that you just inserted: with Laravel 3 and 4, when you perform a save or create action on a new model like:
$user->save();
-or-
$user = User::create(array('email' => 'example@gmail.com'));
then the inserted model instance will be returned and can be used for further action such as redirecting to the profile page of the user just created.
Looking up the last inserted record works almost all of the time on a low-volume system, but if you ever have two inserts go in at the same time, you can end up finding the wrong record when you query for it. This can really become a problem in a transactional system where multiple tables need to be updated.
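For instance (a sketch; the users table and email column are just illustrative), you can rely on what the insert itself hands back instead of re-querying:
$user = User::create(array('email' => 'example@gmail.com')); // returns the fresh model
$id = $user->id; // primary key of the row just inserted on this connection
$id = DB::table('users')->insertGetId(array('email' => 'example@gmail.com')); // query-builder equivalent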
Somehow none of the above seemed to work for me in Laravel 5.3, so I solved my own problem using:
Model::where('user_id', '=', $user_id)->orderBy('created_at', 'desc')->get();
Hope I am able to bail someone out.
Be aware that last() and latest() are not deterministic if you are looking for a sequential or event-ordered record. The most recent records can have the exact same created_at timestamp, and which one you get back is not deterministic. So do orderBy(id|foo)->first(). Other ideas/suggestions on how to be deterministic are welcome.
You just need to retrieve the data in reverse order and take the first result, and you will get the record you want. Let me explain with code for Laravel 9:
return DB::table('files')->orderBy('upload_time', 'desc')->first();
And if you want the last x results:
return DB::table('files')->orderBy('upload_time', 'desc')->limit(x)->get();
If the table has a date field, User::orderBy('created_at', 'desc')->first(); is the best solution, I think.
But if there is no date field, Model::orderBy('id', 'desc')->first()->id; is the best solution, I am sure.
You can use this function with Eloquent:
Model::latest()->take(1)->get();
With PDO we can get the last inserted id, as described in the docs for PDO::lastInsertId():
use Illuminate\Support\Facades\DB;
// ...
$pdo = DB::getPdo();
$id = $pdo->lastInsertId();
echo $id;
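A usage sketch (the files table and name column are hypothetical): lastInsertId() is per-connection and only meaningful right after an INSERT on that same connection:
DB::insert('insert into files (name) values (?)', ['report.pdf']);
$id = DB::getPdo()->lastInsertId(); // id of the row inserted above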
I have more than 200k rows in a table, and I need to get all the rows and perform some operations on them. I tried the Laravel paginator, but then I faced issues iterating over each page and getting the data in my Laravel backend API code.
There is a function called chunk() which splits all the data into separate selects, like pagination:
User::chunk(100, function ($users) {
    foreach ($users as $user) {
        // code logic here to perform on each user
    }
});
What it actually does is run a loop: select 100 entries, do the updates with them, then select another 100 entries, do another round of updates, and so on. This means that at no point is a huge amount of data pulled from the database; you are working with a chunk of entries, not the whole table.
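One caveat worth noting (my addition, not from the original answer): if the callback updates the very rows being chunked over, plain chunk() can skip records because the result set shifts between pages. On versions that ship it (roughly Laravel 5.2 onwards), chunkById() pages by the primary key instead; a sketch, with the processed column being hypothetical:
User::where('processed', false)->chunkById(100, function ($users) {
    foreach ($users as $user) {
        $user->update(['processed' => true]); // safe: paging is keyed on id
    }
});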
I'm trying to understand the Zend Paginator and would mostly like to make sure it doesn't break my scripts.
For example, I have the following snippet which successfully loads some contacts one at a time:
$offset = 1;
// returns a paginator instance using a dbSelect
$contacts = $ContactsMapper->fetchAll($fetchObj);
$contacts->setCurrentPageNumber($offset);
$contacts->setItemCountPerPage(1);

$allContacts = count($contacts);
while ($allContacts >= $offset) {
    foreach ($contacts as $contact) {
        // do something
    }
    $offset++;
    $contacts->setCurrentPageNumber($offset);
    $contacts->setItemCountPerPage(1);
}
However, I can have hundreds of thousands of contacts in the database matched by the SELECT I send to the paginator. Can I be sure it only loads one at a time in this example? And how does it do it? Does it run a modified query with LIMIT and OFFSET?
From the official documentation: Zend Paginator Usage
Note
Instead of selecting every matching row of a given query, the DbSelect adapter retrieves only the smallest amount of data necessary for displaying the current page. Because of this, a second query is dynamically generated to determine the total number of matching rows.
If you're using Zend\Paginator\Adapter\DbSelect, it will apply limit and offset to the query you're passing it and fetch only the wanted records. This is done in the getItems() function of DbSelect; you can see these lines in the source code.
You can also read this in the documentation:
This adapter does not fetch all records from the database in order to count them. Instead, the adapter manipulates the original query to produce a corresponding COUNT query. Paginator then executes that COUNT query to get the number of rows. This does require an extra round-trip to the database, but it is many times faster than fetching an entire result set and using count(), especially with large collections of data.
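So, for the snippet in the question, a page request boils down to two small queries rather than one huge fetch. Roughly (the exact SQL and column aliases vary by Zend version and database platform):
// With one item per page, requesting page 3 makes DbSelect run, in effect:
// 1) a COUNT wrapper around the original select, to size the paginator
// 2) the original select with LIMIT 1 OFFSET 2, to fetch just that row
$contacts->setItemCountPerPage(1);
$contacts->setCurrentPageNumber(3); // only the third contact is loaded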
Is there a way for Eloquent/raw queries to execute a function before a query is fired? It would also be nice if I could extend the functionality with a parameter saying whether the function should run or not. Depending on the outcome of the function (true/false), the query should not be executed.
It would be nice to use the principle of DB::listen, but I'm not sure if I can stop or run the query from within that function.
The reason for this is that I would like to build a little data warehouse myself for permanently saving results to a warehouse (db) instead of querying a huge database all the time.
The method I would like to use is to create a hash of a query and check whether the hash exists in the warehouse. If it exists, the stored value is returned. If not, the query is executed and the output is saved together with the hash into the warehouse.
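A minimal sketch of that hashing idea (the warehouse table and its hash/value columns are hypothetical, and $query is assumed to be a query builder instance):
$hash = md5($query->toSql() . serialize($query->getBindings()));
$row = DB::table('warehouse')->where('hash', $hash)->first();
if ($row !== null) {
    $result = unserialize($row->value); // cache hit: skip the expensive query
} else {
    $result = $query->get(); // cache miss: run it once and store it
    DB::table('warehouse')->insert(array('hash' => $hash, 'value' => serialize($result)));
}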
Any ideas?
///// EDIT /////
I should clarify that I would like to access the queries and update the value whenever the calculated value needs updating. E.g. the number of cars in December: while it is still December, I need to keep updating the value every so often. So I store the executed query in the db, retrieve it, run it, and then update the stored value.
//// EDIT 2 /////
Github: https://github.com/khwerhahn/datawarehouselibrary/blob/master/DataWareHouseLib.php
What I would like to achieve is to hook into Laravel's query/Eloquent logic and run the data warehouse logic in the background.
Maybe something like this:
$invalid_until = '2014-12-31 23:59:59'; // date until query needs to be updated every ten minutes
$cars = Cars::where('sales_month', '=', 12)->dw($invalid_until)->get();
If dw($date_parameter) is added, I would like Laravel to execute the data warehouse logic in the background and, if the result is found in the db, not execute the query again.
You don't need to use events to accomplish this. From the 4.2 docs:
Caching Queries
You may easily cache the results of a query using the remember method:
$users = DB::table('users')->remember(10)->get();
In this example, the results of the query will be cached for ten minutes. While the results are cached, the query will not be run against the database, and the results will be loaded from the default cache driver specified for your application.
http://laravel.com/docs/4.2/queries#caching-queries
You can also use this for Eloquent objects, e.g.:
User::where($condition)->remember(60)->get()
I get what you're trying to do, but as I view it (I might still not be getting it right, though) you can still get away with using rememberForever() (if you don't want a specific time limit).
So, let's pretend you have a Cars table.
$cars = Cars::where('sales_month', '=', 12)->rememberForever()->get();
To work around the problem of deleting the cache, you can assign a key to the caching method and then retrieve it by that key. Now the above query becomes:
$cars = Cars::where('sales_month', '=', 12)->rememberForever('cars')->get();
Every time you run that query you will be getting the same results, first time from the DB, all the others from the cache.
Now you say you're going to update the table, and you want to reset the cache, right?
So, run your update, then forget() the cache entry with the cars key:
// Update query
Cache::forget('cars');
Your next call to the Cars query will produce a fresh result set, and it will be cached. In case you're wondering, remember() and rememberForever() are methods of the QueryBuilder class that use the same Cache class you can see in the docs in its own section.
Alternatively, in fact, you could also use the Cache class directly (it gives you a better control):
if (null === ($cars = Cache::get('cars'))) {
    $cars = Cars::where('sales_month', '=', 12)->get();
    Cache::forever('cars', $cars);
}
By overriding the runSelect method in Illuminate\Database\Query\Builder, which runs on every select query in Laravel.
See this package:
https://github.com/TheGeekyM/caching-queries
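A rough sketch of that override, assuming a Laravel version where the protected Builder::runSelect() is what executes every select (the cache key scheme here is just illustrative):
use Illuminate\Database\Query\Builder;
use Illuminate\Support\Facades\Cache;

class CachingBuilder extends Builder
{
    protected function runSelect()
    {
        // key the cache on the exact SQL plus its bindings
        $key = 'query:' . md5($this->toSql() . serialize($this->getBindings()));

        return Cache::rememberForever($key, function () {
            return parent::runSelect(); // fall through to the database once
        });
    }
}
You would still need to make the connection hand out this builder (e.g. by overriding Connection::query()), which is what the linked package takes care of.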
I've got a server with a live traffic-log DB that holds a big stats table. Now I need to create a smaller table from it, say the last 30 days.
This server also has a slave server that copies the data and runs 5 seconds behind the master.
I created this slave in order to reduce the load from SELECT queries, so the master only handles insert/update for the traffic log.
Now I need to copy the last day into the smaller table, still without querying the "real" DB, so I need to select from the slave and insert into the real smaller table (the slave only allows read operations).
I am working with PHP, and I can't solve this with one query, since I can't use two different databases in a single query... If it is possible, please let me know how.
When using two queries, I need to hold the whole last day in a PHP MySQL result object. For 300K-650K rows, that starts to be a memory problem. I would select in chunks by id (setting the ids in the WHERE clause), but I don't have an auto-increment id field and there is no id for the rows (when storing traffic data, an id would take a lot of space).
So I am trying this idea and I would like to get a second opinion.
If I take the last day at once (300K rows), it will overload PHP's memory.
I can use LIMIT chunks, or a new idea: selecting one column at a time and copying it to the new real table. But I don't know if the second method is possible. Does INSERT look at the first open space at the column level or at the row level?
The main idea is reducing the size of the SELECT, so is it possible to build a SELECT by columns and then insert them as columns in MySQL?
If this is simply a memory problem in PHP, you could try using PDO and fetching one result row at a time instead of all of them at once.
From PHP.net for PDO:
<?php
function getFruit($conn) {
    $sql = 'SELECT name, color, calories FROM fruit ORDER BY name';
    foreach ($conn->query($sql) as $row) {
        print $row['name'] . "\t";
        print $row['color'] . "\t";
        print $row['calories'] . "\n";
    }
}
?>
Well, here is where PHP starts to get weird. I took your advice and started to use chunks for the data: I used a loop advancing a LIMIT in 2000-row jumps. What was interesting is that when I started using PHP's memory-usage and memory-peak functions, I found out that the reason the chunk method doesn't work at large scale in a loop is that assigning a new value to a variable doesn't release the memory of what was there before. So you must use unset() or set it to null in order to keep your PHP memory under control.
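To illustrate that comment, a sketch of such a chunked copy loop with an explicit unset() (the $pdo connection and the stats table are assumed):
$offset = 0;
do {
    $stmt = $pdo->query("SELECT * FROM stats LIMIT 2000 OFFSET $offset");
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    $count = count($rows);

    // ... insert $rows into the smaller table here ...

    $offset += $count;
    unset($rows, $stmt); // release the chunk before fetching the next one
} while ($count === 2000);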