Why is the speed of Eloquent lower than the DB query builder? - php

I have two queries in Lumen, like this:
Doc::all();
DB::table('docs')->get();
The speed of the first is 946 ms and the speed of the second is 46 ms (measured in Postman).
Can you tell me why?

DB::table('docs')->get();
returns a collection of basic objects with no added logic.
Doc::all();
returns a collection of Doc models. Hydrating every Doc model in the result set with all the added logic ($appends, $casts, $with, $withCount) takes more processing time.

For cases like these you will always get better speed from the query builder. In the long term, however, for example when working with table relations and other complex queries, Eloquent lets you write far less and much simpler code; with the query builder you have to deal with complex joins and the like yourself.
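The hydration cost is easy to reproduce outside Laravel. The sketch below is a plain-PHP simulation (FakeModel is a made-up stand-in, not Eloquent): it compares casting rows to stdClass, roughly what the query builder does, against constructing one model object per row with per-attribute work.

```php
<?php
// Plain-PHP simulation of hydration overhead. FakeModel is hypothetical;
// it only mimics the per-attribute work an ORM does ($casts, mutators, ...).

class FakeModel
{
    public array $attributes = [];

    public function __construct(array $row)
    {
        foreach ($row as $key => $value) {
            // Simulated "cast": numeric strings become integers.
            $this->attributes[$key] = is_numeric($value) ? (int) $value : $value;
        }
    }
}

$rows = [];
for ($i = 0; $i < 10000; $i++) {
    $rows[] = ['id' => (string) $i, 'title' => "doc $i"];
}

// "Query builder" style: plain stdClass objects, no extra logic.
$t0 = microtime(true);
$plain = array_map(fn ($r) => (object) $r, $rows);
$plainTime = microtime(true) - $t0;

// "Eloquent" style: one hydrated model instance per row.
$t0 = microtime(true);
$models = array_map(fn ($r) => new FakeModel($r), $rows);
$modelTime = microtime(true) - $t0;

printf("plain: %.4f s, hydrated: %.4f s\n", $plainTime, $modelTime);
```

On any machine the hydrated pass takes measurably longer; real Eloquent does far more work per row than this toy cast, which is where the 946 ms vs 46 ms gap comes from.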

Related

Laravel Query efficiency difference

Could somebody please tell me which of these is the most efficient way to select in Laravel:
$car = Car::all(); ------- $car = Car::find();
$car = DB::table('car')->get(); ------ $car = DB::table('car')->first();
Your first approach:
$car = Car::all(); ------- $car = Car::find();
Makes use of Eloquent. This means that all the rows received from the query will be hydrated into Model instances, and all of those will be injected into a Collection instance (for multiple elements, of course). This is useful because you then have all the benefits that Eloquent brings. However, it comes with a small decrease in performance (understandably).
Your second one:
$car = DB::table('car')->get(); ------ $car = DB::table('car')->first();
Uses the Query Builder instead. The results (as a whole) are also cast into a Collection instance, but its items are plain stdClass objects. This means the process is faster (more performant), at the cost of not having all the cool features of Eloquent.
There's an even more performant option: using raw queries. That also has trade-offs: results are not hydrated into a Collection instance.
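For reference, the raw option in Laravel looks roughly like this (a sketch; the `cars` table and the `year` condition are made up for illustration):

```php
// DB::select() runs raw SQL with bindings and returns a plain array of
// stdClass rows: no Collection, no model features, least overhead.
$cars = DB::select('select * from cars where year > ?', [2015]);
```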
Which one should you use? It depends on your needs. Usually I go for Eloquent, and use the Query Builder directly when I need to query big databases and need speed.
For me the most efficient is selecting from the model, like Car::all(), but it is always better to use pagination, or at least not to take all of the records from the database with the all() method.
But selecting with DB is a bit faster, and in some cases it may be the better choice.
In the end, it always depends on what your problem is and how you want to solve it.
For a better understanding I recommend watching this video, and afterwards searching for some more information or just trying it out yourself.
https://www.youtube.com/watch?v=uVsY_OXRq5o&t=205s
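As a sketch of the pagination point above (assuming a `Car` model):

```php
// Instead of Car::all(), pull one page at a time. paginate() issues a
// LIMIT/OFFSET query plus a COUNT query for the page links;
// simplePaginate() skips the COUNT and only offers next/previous.
$cars = Car::paginate(15);
$cars = Car::simplePaginate(15);
```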

Will I get better performance if I use COUNT query instead of looping through entities in Symfony 4?

For example, I need to get a review count. One way of doing it is like this:
public function getActiveReviews()
{
    return $this->getReviews()->filter(function (Review $review) {
        return $review->isActive() && !$review->isDeleted();
    })->count();
}
Another way is to use Query Builder like this:
$count = $this->createQueryBuilder('r')
    ->where('r.active = true')
    ->andWhere('r.deleted = false')
    ->select('count(r)')
    ->getQuery()
    ->getSingleScalarResult();
Which way will give me better performance and why?
Of course the count query will be faster, because it results in a single SQL query that returns a single value.
Iterating over entities will require:
Running the SQL query to fetch the data rows
Actually fetching the data
Instantiating entity objects and populating them with the fetched data
Depending on the amount of data involved, the difference may be very big.
The only case when counting over entities may be fast enough is when you already have all the entities fetched and just need to count them.
It depends on Symfony's count() implementation, but you probably will. An RDBMS usually counts its rows faster internally, and it requires far fewer resources.
In the first case you request the whole row set, which can be huge; you iterate through it, apply your filter function to every row, and then just look at the filtered row set's size and throw everything away. (Of course, your framework might optimize this somehow.)
In the second case you just ask the database how many rows satisfy the criteria. The DB returns you a number, and that's all.
As others have said, the only case where the first choice might be quicker is when you already have a cached row set (no need to connect to the DB) and, at the same time, your DB connection is very slow.
I have seen databases (Oracle) that were slow on some COUNT requests over big tables, but they were still faster than PHP code over the same row set. DBs are optimized for filtering and counting data, and COUNT requests are usually very fast.
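The trade-off is easy to demonstrate end to end. The following is a self-contained illustration using PDO with an in-memory SQLite database (not Symfony/Doctrine, but the same principle; assumes the pdo_sqlite extension):

```php
<?php
// Compare "fetch everything, filter in PHP" against "let the DB count".
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE review (id INTEGER PRIMARY KEY, active INTEGER, deleted INTEGER)');

$stmt = $pdo->prepare('INSERT INTO review (active, deleted) VALUES (?, ?)');
for ($i = 0; $i < 1000; $i++) {
    $stmt->execute([$i % 2, $i % 5 === 0 ? 1 : 0]);
}

// Option 1: pull the whole row set across the wire, then filter and count.
$rows = $pdo->query('SELECT * FROM review')->fetchAll(PDO::FETCH_ASSOC);
$phpCount = count(array_filter($rows, fn ($r) => $r['active'] && !$r['deleted']));

// Option 2: a single COUNT query returning a single value.
$dbCount = (int) $pdo->query(
    'SELECT COUNT(*) FROM review WHERE active = 1 AND deleted = 0'
)->fetchColumn();

var_dump($phpCount === $dbCount); // same answer, far less data transferred
```

Both options return the same number; the difference is that option 1 transfers and materializes every row first, which is exactly the work the COUNT query avoids.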

Laravel Eloquent - Is it inefficient

I'm looking at developing a new site with Laravel. I was reading a tutorial at http://vegibit.com/laravel-eloquent-orm-tutorial/ about using Eloquent to join tables and retrieve data. In the last example they are essentially trying to do a join, but Eloquent still executes two queries instead of one. Is Eloquent inefficient, or is this simply a poor example? It seems this could have been done with a single left-join query. Can anyone provide some insight? Is there a better way to execute this query? The specific example is:
Route::get('/', function () {
    $paintings = Painting::with('painter')->get();

    foreach ($paintings as $painting) {
        echo $painting->painter->username;
        echo ' painted the ';
        echo $painting->title;
        echo '<br>';
    }
});
Results in the following queries:
string 'select * from paintings' (length=25)
string 'select * from painters where painters.id in (?, ?, ?)' (length=59)
Leonardo Da Vinci painted the Mona Lisa
Leonardo Da Vinci painted the Last Supper
Vincent Van Gogh painted the The Starry Night
Vincent Van Gogh painted the The Potato Eaters
Rembrandt painted the The Night Watch
Rembrandt painted the The Storm on the Sea of Galilee
Eloquent is extremely useful for building something quickly, and it also has query caching that is really easy to use. However, it might not be ideal for big projects that need to scale over time. My approach is to use the Repository pattern with contracts; that way I can still use the power of Eloquent while having a system that lets me easily swap the implementation if needed.
If what you want is to get all the data in a single query using joins, then you must specify it as:
Painting::join('Painter as p', 'p.IdPainter', '=', 'Painting.IdPainting')->get()
And if you need some conditional then:
Painting::join('Painter as p', 'p.IdPainter', '=', 'Painting.IdPainting')->where('cond1', '=', 'value')->get();
Personally, I find Eloquent inefficient; I prefer Doctrine in this regard.
The simple answer to your question: No. It is not inefficient.
Using Eloquent in the way described by Jorge really defeats its purpose as an ORM.
Whilst you can write joins as in the example given, an ORM really isn't designed with that in mind.
The example you've given isn't an n+1 query (where there's an additional query run for each item in the loop - the worst type of query).
Using two queries as reported by your output isn't a huge overhead, and I think you'd be fine to use that in production. Your use of Eloquent's 'with' eager loading is precisely what you should use in this context.
An example of an n+1 query (which you want to avoid) is as follows:
foreach (Book::all() as $book) {
    echo $book->author->name;
}
If 20 books are returned by Book::all(), then the loop would execute 21 queries in total. 1 to get all of the books, and 1 for each iteration of $book to get the author's name.
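The fix for that n+1 pattern is the same eager loading used in the question. A sketch (assuming a `Book` model with an `author` relation):

```php
// 2 queries total, no matter how many books there are:
//   select * from books
//   select * from authors where id in (...)
foreach (Book::with('author')->get() as $book) {
    echo $book->author->name;
}
```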
Using Eloquent with eager loading, combined with caching would be enough to minimise any performance issues.

Caching only a relation data from FuelPHP ORM result

I'm developing an app with FuelPHP & MySQL, and I'm using the provided ORM functionality. The problem is with the following tables:
Table: pdm_data
Massive table (350+ columns, many rows)
Table data is rather static (updated only once a day)
Primary key: obj_id
Table: change_request
Only a few columns
Data changes often (10-20 times/min)
References the primary key (obj_id) of pdm_data
Users can customize the datasheet that is visible to them; e.g. they can save filters (such as change_request.obj_id = 34 AND pdm_data.state = 6) on columns, which are then translated into queries in real time with the ORM.
However, querying with the ORM is really slow, as the table pdm_data is large and even ~100 rows amount to many MB of data. The largest problem seems to be in the FuelPHP ORM itself: even if the query is relatively fast, model hydration etc. takes many seconds. The ideal solution would be to cache the results from the pdm_data table, as it is rather static. However, as far as I know, FuelPHP doesn't let you cache tables through relations (you can cache the complete result of a query, thus both tables or neither).
Furthermore, using a normal SQL query with a join instead of the ORM is not an ideal solution, as I need to handle other tasks where hydrated models are awesome.
I currently have the following code:
// Initialize the query and use eager loading
$query = Model_Changerequest::query()->related('pdmdata');

foreach ($filters as $filter) {
    // The first parameter can point to either table
    $query->where($filter[0], $filter[1], $filter[2]);
}

$result = $query->get();
...
...
Does someone have a good solution for this?
Thanks for reading!
The slowness of the version 1 ORM is a known problem, which is being addressed in v2. My current benchmarks show that the v1 ORM takes 2.5 seconds (on my machine, YMMV) to hydrate 40k rows, while the current v2 alpha takes around 800 ms.
For now, I am afraid the easiest solution is to do away with the ORM for large selects and construct the queries using the DB class. I know you said you want to keep the abstraction of the ORM to ease development; one option is to use as_object('MyModel') to get populated model objects back.
On the other hand, if performance is your main concern, then the ORM is simply not suitable.
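A rough sketch of the as_object() suggestion, using the table and model names from the question (the join columns are assumptions based on the description):

```php
// Skip ORM hydration but still get Model_Changerequest instances back;
// FuelPHP's DB class assigns the selected columns straight onto the object.
$result = DB::select()
    ->from('change_request')
    ->join('pdm_data', 'LEFT')
    ->on('change_request.obj_id', '=', 'pdm_data.obj_id')
    ->as_object('Model_Changerequest')
    ->execute();
```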

OO performance question

I will keep this quick and simple.
Basically, I need to merge multiple Invoice objects quickly.
A simple idea is:
$invoice1 = new Invoice(1);
$invoice2 = new Invoice(2);
$invoice3 = new Invoice(3);
$invoice1->merge($invoice2, $invoice3);
$invoice1->save();
Since each object runs its own query for its data, the number of queries increases with the number of invoices to be merged.
However, this is a case where a single query
SELECT * FROM invoice WHERE id IN (1,2,3)
would suffice, though the implementation would not be as elegant as the above.
Initial benchmarks on sample data indicate a 2.5x-3x slowdown for the approach above, due to the sheer number of MySQL queries.
Advice, please.
Use an Invoice factory. You ask it for invoices using various methods, such as newest(n), get(id), get(array(id, id, id)) and so on, and it returns arrays of invoices or single Invoice objects.
<?php
$invoice56 = InvoiceFactory::Get(56); // Gets invoice 56
$invoices = InvoiceFactory::Newest(25); // Gets an array of the newest 25 invoices
?>
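One possible shape for such a factory, built so that a list of ids becomes a single `IN (...)` query instead of one query per invoice. This is a hypothetical sketch: the `Invoice` class and the stubbed `fetchRows()` stand in for your real model and for an actual `$pdo->prepare($sql)->execute($ids)` call.

```php
<?php
// Hypothetical factory: N requested invoices -> ONE "WHERE id IN (...)" query.

class Invoice
{
    public function __construct(public array $row) {}
}

class InvoiceFactory
{
    /** @param int|int[] $id */
    public static function get($id)
    {
        $ids = is_array($id) ? $id : [$id];

        // Build one parameterised IN (...) clause for any number of ids.
        $placeholders = implode(',', array_fill(0, count($ids), '?'));
        $sql = "SELECT * FROM invoice WHERE id IN ($placeholders)";

        $rows = self::fetchRows($sql, $ids);
        $invoices = array_map(fn ($row) => new Invoice($row), $rows);

        return is_array($id) ? $invoices : $invoices[0];
    }

    // Stub standing in for a real prepared-statement execution against the DB.
    private static function fetchRows(string $sql, array $ids): array
    {
        return array_map(fn ($id) => ['id' => $id], $ids);
    }
}

$one  = InvoiceFactory::get(56);        // single Invoice object
$many = InvoiceFactory::get([1, 2, 3]); // array of three Invoice objects
```

The same entry point handles both the single-id and the bulk case, so merge() can ask for all of its source invoices in one round trip.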
Could you make the Invoice object lazy and let merge() load everything that hasn't been loaded yet?
Make sure you use the same DB connection the whole time; check that it does not reconnect within one script execution.
I would suggest looking into an actual ORM (object-relational mapper) to create a separation between your actual queries and the objects used. Take a look at Propel or (my favorite) Doctrine (version 2 is very easy to use).
That way you could have exactly what you want in about the same amount of code.
