Laravel 5.5 optimize query - php

I have a question about reducing database round trips every time data is retrieved from the DB. Assume I have 5 tables (Notifications, Orders, Users, Products, Transactions).
In dashboard, I have to show:
Unread notifications
Order statistics
Number of products
Transactions statistics
The easiest way (pseudo-code):
$notis = Notification::where('unread')->get();
$orderStat = Order::join('user')->where('count(order.id)', 'status')->groupby('status');
$product = Product::count();
$transactions = Transaction::join('user')->where('count(trans.id)', 'status')->groupby('status');
So I have to run 4 separate queries; my mentor said this approach would reduce the speed of the server when there are many, many records in each table, or when the dashboard needs to query more tables (not for joining).
I already did:
Indexes on the foreign key columns
Eager loading for Eloquent where applicable
(or using AJAX to load the data after the UI has rendered)
I want to ask: are there any other methods to reduce processing time in the above case?
And another question about connection pooling: he said to use it to increase speed. After researching, I found that Laravel already does connection pooling, doesn't it?
Edit:
A User has many Notifications, Orders, and Transactions.
I just want to ask which methods, besides the ones I mentioned above, could improve performance.

First, correct your pseudo-code; it should look like this:
$notis = Notification::where('unread', true)->get();
$orderStat = Order::join('users', 'users.id', '=', 'orders.user_id')
    ->selectRaw('COUNT(orders.id) AS total, orders.status')->groupBy('orders.status')->get();
$product = Product::count();
$transactions = Transaction::join('users', 'users.id', '=', 'transactions.user_id')
    ->selectRaw('COUNT(transactions.id) AS total, transactions.status')->groupBy('transactions.status')->get();
I have a few additional tips you can take into consideration:
1) Do not use the query cache (it is deprecated as of MySQL 5.7.20 and removed in MySQL 8.0).
2) Use InnoDB as the storage engine for your tables and tune the InnoDB buffer pool to improve capacity - https://dev.mysql.com/doc/refman/5.5/en/innodb-buffer-pool.html
3) If possible, do not fetch all records; use LIMIT and OFFSET to narrow the data (the limit and skip methods in Laravel, or simply paginate).
4) If you do not need user data, drop the INNER JOIN with the users table (assuming every transaction and order always has a user).
5) Test requests using ab (http://httpd.apache.org/docs/2.2/programs/ab.html) to tune your server configuration.
6) Use PHP >= 7.0 with FPM and OPcache; it will raise the number of requests your server can handle.
7) Store the results of your queries with the Cache API - https://laravel.com/docs/5.5/cache - and invalidate the cache when data changes (see the sketch after this list).
8) Profile your queries using https://github.com/barryvdh/laravel-debugbar.
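For tip 7, a minimal sketch using Laravel's Cache facade (the cache key and the 10-minute TTL are arbitrary choices of mine, not anything from the question):
use Illuminate\Support\Facades\Cache;

// Cache the grouped order counts for 10 minutes (Laravel 5.5 takes minutes).
$orderStat = Cache::remember('dashboard.order_stats', 10, function () {
    return Order::selectRaw('COUNT(id) AS total, status')
        ->groupBy('status')
        ->get();
});

// Invalidate whenever an order changes, e.g. from a model observer:
Cache::forget('dashboard.order_stats');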

In this situation, you'd better use the Cache so you don't need to hit the database every time.
That reduces processing time and gives better performance.

You should use Laravel relationships; go through the link below for proper guidance.
For a many-to-many relationship:
https://laravel.com/docs/5.5/eloquent-relationships#many-to-many
Use the appropriate relationship type for your situation. Hope it helps.
After setting up the relationships you can access the related tables' data from a single object.
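For example, with the one-to-many case from your edit, the models might look like this (a sketch; I'm assuming conventional foreign keys such as user_id, and your real User class would extend Authenticatable):
namespace App;

use Illuminate\Database\Eloquent\Model;

class User extends Model
{
    public function orders()
    {
        return $this->hasMany(Order::class);
    }

    public function transactions()
    {
        return $this->hasMany(Transaction::class);
    }
}

// Eager-load both relations up front, avoiding N+1 queries:
$user = User::with(['orders', 'transactions'])->find($id);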

Related

MySQL unable to update a row when multiple selects are in process or taking too much time

I have a table called Settings which has only one row. The settings are critical throughout my program, and the row is read by 200 to 300 users every second. I haven't used any caching yet. The problem is that I cannot update the settings table to change a value such as Limit through an API.
Ex: change the product limit from 5 to 10. The update query runs forever.
From Workbench I can update the record, but from the admin panel through the API it doesn't update, or takes too long. Table - InnoDB
1. Already tried locking with read/write.
2. Transactions.
3. Made a view of the table and tried to update through it; the same issue remains.
4. The update query is fine from Workbench, but through the API it runs all day.
Is there any way I can lock the read operations on the table while updating it? I have only one row in the table.
Any help would be highly appreciated. Thanks in advance.
This sounds like a really good use case for the query cache.
The query cache stores the text of a SELECT statement together with the corresponding result that was sent to the client. If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again. The query cache is shared among sessions, so a result set generated by one client can be sent in response to the same query issued by another client.
The query cache can be useful in an environment where you have tables that do not change very often and for which the server receives many identical queries.
To enable the query cache, you can run:
SET GLOBAL query_cache_size = 1000000;
And then edit your mysql config file (typically /etc/my.cnf or /etc/mysql/my.cnf):
query_cache_size=1000000
query_cache_type=2
query_cache_limit=100000
And then for your query you can change it to:
SELECT SQL_CACHE * FROM your_table;
And that should make it so you are able to update the table (as it won't be constantly locked).
Note that after editing the config file you need to restart the MySQL server for those settings to take effect.
As an alternative, you could implement caching in your PHP application. I would use something like memcached, but as a very simplistic solution you could do something like:
$settings = json_decode(@file_get_contents("/path/to/settings.json"), true) ?: [];
$minute = intval(date('i'));
// Refresh from MySQL at most once per minute (or when the cache file is missing).
if (!isset($settings['minute']) || $settings['minute'] !== $minute) {
    $settings = get_settings_from_mysql();
    $settings['minute'] = $minute;
    file_put_contents("/path/to/settings.json", json_encode($settings), LOCK_EX);
}
Are the queries being run in the context of transactions with, say, the REPEATABLE READ isolation level? It sounds like the update isn't able to complete due to a lock on the table, in which case caching isn't likely to help you, since on a write the cache will be purged. More information on repeatable reads can be found at https://www.percona.com/blog/2012/08/28/differences-between-read-committed-and-repeatable-read-transaction-isolation-levels/.
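As a quick diagnostic, you can check which isolation level your API's connection is actually running under (a sketch; @@tx_isolation is the variable name before MySQL 8.0, and the credentials are placeholders):
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$level = $pdo->query('SELECT @@tx_isolation')->fetchColumn();
echo $level; // e.g. REPEATABLE-READ, InnoDB's default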

More efficient - multiple SQL queries or one query and process in PHP?

I have a php application showing 3 tables of data, each from the same MySQL table. Each record has an integer field named status which can have values 1, 2 or 3. Table 1 shows all records with status = 1, Table 2 showing status = 2 and table 3 showing status = 3.
To achieve this, three MySQL queries could be run, each using WHERE to filter by status, iterating through each result set once to populate the three tables.
Another approach would be to select everything from the table and iterate through the single result set once per table, using PHP to test the value of status each time.
Would one of these approaches be significantly more efficient than the other? Or would one of them be considered better practice than the other?
Generally, it's better to filter on the RDBMS side so you can reduce the amount of data you need to transfer.
Transferring data from the RDBMS server over the network to the PHP client is not free. Networks have a capacity, and you can generate so much traffic that it becomes a constraint on your application performance.
For example, recently I helped a user who was running queries many times per second, each generating 13MB of result set data. The queries execute quickly on the server, but they couldn't get the data to his app because he was simply exhausting his network bandwidth. This was a performance problem that didn't happen during his testing, because when he ran one query at a time, it was within the network capacity.
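A sketch of the filtered approach (the table and column names are made up for illustration, and the connection details are placeholders):
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');

// Three narrow queries: the database filters, and only matching rows
// cross the network.
$byStatus = [];
$stmt = $pdo->prepare('SELECT id, name FROM records WHERE status = ?');
foreach ([1, 2, 3] as $status) {
    $stmt->execute([$status]);
    $byStatus[$status] = $stmt->fetchAll(PDO::FETCH_ASSOC);
}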
If you use the second method you connect to the database only once, so it's more efficient.
And even if it weren't, it's more elegant that way IMO.
Of course there are situations where it would be better to query three times (e.g. when deriving the information from a single query would be complicated), but in most cases I would do it the second way.
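And a sketch of the single-query approach for comparison (same assumed schema, reusing the $pdo connection from the sketch above):
// One query, then bucket the rows by status in PHP.
$tables = [1 => [], 2 => [], 3 => []];
foreach ($pdo->query('SELECT id, name, status FROM records') as $row) {
    $tables[$row['status']][] = $row;
}
// $tables[1], $tables[2] and $tables[3] now feed the three HTML tables.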
I would create a stored procedure that returns all the fields you need, pre-formatted: no more, no less.
Then just loop in PHP without querying any other table.
This way you run only one query and you only get the bytes you need. Same bandwidth, fewer requests = more performance.

MySQL updating same row by multiple users like counter [duplicate]

I'm trying to increment a like counter for a user's post. A post in my MySQL table has a field likes which records how many likes the post has. Now what happens if multiple users like the same post at the same time? I think this will result in a conflict or an incorrect count. How can I avoid this? Do I need to lock the row, or what options do I have?
my query could look something like this:
UPDATE posts
SET likes = likes + 1
WHERE id = some_value
Also, when a user unlikes a post this should decrement the counter (likes - 1).
Thanks for any help!
It's a simple enough query to run:
UPDATE mytable SET mycolumn = mycolumn + 1 WHERE id = some_id;
Even if you have multiple people liking at the same time, the statements won't run at exactly the same instant, so you'll get the correct number at the end.
Queries such as these run in fractions of a second, so you don't need to worry about multiple users clicking them unless you have millions of users, and then you'll have plenty of other query problems to deal with.
Someone liking the same post at the same time will only cause issues in long and complex queries.
This should be more than sufficient for an increment; otherwise the entirety of SQL would be rendered useless:
UPDATE posts
SET likes = likes + 1
WHERE id = some_value
You could always run this query twice programmatically and see what happens. I can guarantee that it will go from 0 to 2.
someTableAdapter.LikePost(postID);
someTableAdapter.LikePost(postID);
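In plain PDO the same idea is a single prepared statement (assuming an existing connection $pdo and the posts table from the question):
// The read-modify-write happens atomically inside MySQL, per row.
$stmt = $pdo->prepare('UPDATE posts SET likes = likes + 1 WHERE id = :id');
$stmt->execute(['id' => $postId]);
// And likes = likes - 1 for an unlike.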
What you described is called a race condition (as in, a race to the finish; kind of like playing musical chairs with two people but only one chair). I believe if you research transactions you will be able to implement your code and have pseudo-concurrent likes. Here's a link to the MySQL Manual.
MySQL 5.1 Manual: Transactional and Locking Statements
YouTube: MySQL Transactions
"MySQL uses row-level locking for InnoDB tables to support simultaneous write access by multiple sessions, making them suitable for multi-user, highly concurrent, and OLTP applications.
MySQL uses table-level locking for MyISAM, MEMORY, and MERGE tables, allowing only one session to update those tables at a time, making them more suitable for read-only, read-mostly, or single-user applications."
MySQL Manual, 8.7.1: Internal Locking Methods
But if transactions are too much, make sure you are at least using the InnoDB storage engine, and you should be all right as long as you are using an ACID-compliant database with the proper isolation level.
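A sketch of the transactional route with PDO (the connection details are placeholders; on InnoDB the UPDATE takes a row-level lock until commit, so concurrent likes queue up instead of clobbering each other):
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    $stmt = $pdo->prepare('UPDATE posts SET likes = likes + 1 WHERE id = ?');
    $stmt->execute([$postId]);
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}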
There are lots of good references, but hopefully I've pointed you in the right direction.
Anthony

Problem with simultaneous clicks from different users

I've got a website that allows users to join a "team". The team has a member limit defined in $maxPlayersPerTeam.
When the user clicks the link to join a team, this code gets executed:
// query to get the current player count
if ($players < $maxPlayersPerTeam) {
    // query to insert the player
}
However, if two users click the join link at the same time, both can join the team even if $players is equal to $maxPlayersPerTeam.
What can I do to avoid this?
You should acquire a lock on the dataset (I hope you're using a database, right?), execute your check, and then update the dataset. If two people really do execute the code simultaneously, one of them has to wait for the other's lock. By the time the second one acquires the lock, the dataset has already been updated, so that person can no longer join a full team.
Luckily, some people have already worked on this kind of problem and offer you a solution: database transactions.
The best way to handle them is to use PDO and its beginTransaction, commit and rollBack methods.
Your tables will have to use a database engine that supports transactions (so InnoDB instead of MyISAM).
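A sketch of that check-then-insert inside a PDO transaction (the table and column names are my assumptions; the FOR UPDATE clause makes a concurrent join wait until the first one commits):
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    // Lock the team's member rows so a concurrent join blocks here.
    $stmt = $pdo->prepare('SELECT COUNT(*) FROM team_members WHERE team_id = ? FOR UPDATE');
    $stmt->execute([$teamId]);
    $players = (int) $stmt->fetchColumn();

    if ($players < $maxPlayersPerTeam) {
        $pdo->prepare('INSERT INTO team_members (team_id, user_id) VALUES (?, ?)')
            ->execute([$teamId, $userId]);
    }
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}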
Assuming you're using a database to store the information, your database system should have methods for transactional processing which cater for multiple transactions occurring at the same time (even if this is an incredibly rare case). There's a wiki article on this (even if I hesitate to link to it): http://en.wikipedia.org/wiki/Transaction_processing.
MySQL has methods for doing this: http://dev.mysql.com/doc/refman/5.0/en/commit.html. As does PostgreSQL: http://www.postgresql.org/docs/8.3/static/tutorial-transactions.html.

Porting SQL results into memcached

I have a few tables which are accessed frequently by users. The same kinds of queries run again and again, which causes extra load on the server.
The records are not inserted/updated frequently, so I was thinking of caching the IDs in memcached and then fetching the rows from the database; this would reduce the burden of searching/sorting etc.
Here is an example
SELECT P.product_id
FROM products P
JOIN product_category C ON P.cat_id = C.cat_id
WHERE C.cat_name = 'Fashion'
  AND P.is_product_active = true
  AND C.is_cat_active = true
ORDER BY P.product_date DESC
The above query will return all the product IDs of a particular category; these will be imported into memcached, and then the rest of the process (i.e. paging) will be simulated the same way we do with MySQL result sets.
The insert process will either expire the cache or insert the product ID at the first position of the array.
My question is: is this a practical approach? How do people deal with searches, say when a person searches for a product and gets 10,000 results (which may not be practical)? Do they hit the tables every time? Are there any good examples of memcached with MySQL that show how these tasks can be done?
You may ask yourself whether you really need to invalidate the cache upon every insert/update of a product.
Usually a 5-minute cache is acceptable for a product list.
If your invalidation scheme is time-based only (new entries will only appear after 5 minutes), there is a quick & dirty trick you can use with memcache: simply use an md5 of your SQL query string as the memcache key, and tell memcache to keep the result of the SELECT for 5 minutes.
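A rough sketch of that trick with the PHP Memcached extension (the server address, TTL and connection credentials are assumptions):
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);
$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'secret');

$sql = 'SELECT P.product_id FROM products P ...'; // the query shown above
$key = 'q_' . md5($sql);

$productIds = $mc->get($key);
if ($productIds === false) {
    // Cache miss: run the query and keep the result for 5 minutes.
    $productIds = $pdo->query($sql)->fetchAll(PDO::FETCH_COLUMN);
    $mc->set($key, $productIds, 300);
}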
I hope this will help you
