I've got a website that allows users to join a "team". The team has a member limit defined in $maxPlayersPerTeam.
When the user clicks the link to join a team, this code gets executed:
// query to get the current player count into $players
if ($players < $maxPlayersPerTeam) {
    // query to insert the player
}
However, if two users click the join link at the same time, both can join the team even if $players is equal to $maxPlayersPerTeam.
What can I do to avoid this?
You should acquire a lock on the dataset (I hope you're using a database), run your check, and then update the dataset. If two people really execute the code simultaneously, one of them has to wait for the other's lock to be released. By the time the second one acquires the lock, the dataset has already been updated, so that person can no longer join the team.
Luckily, people have already worked on this kind of problem and offer a ready-made solution: database transactions.
The best way to handle these in PHP is to use PDO and its beginTransaction, commit, and rollBack methods.
Your tables will have to use a storage engine that supports transactions (so InnoDB instead of MyISAM).
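A minimal sketch of that flow with PDO (the teams/team_members tables and column names here are assumptions, not from the question):

<?php
// Assumed schema: teams(id), team_members(team_id, user_id).
// $pdo is a connected PDO instance in exception mode.
try {
    $pdo->beginTransaction();

    // Lock the team row so concurrent joins are serialised on it.
    $stmt = $pdo->prepare('SELECT id FROM teams WHERE id = ? FOR UPDATE');
    $stmt->execute([$teamId]);

    $stmt = $pdo->prepare('SELECT COUNT(*) FROM team_members WHERE team_id = ?');
    $stmt->execute([$teamId]);
    $players = (int) $stmt->fetchColumn();

    if ($players < $maxPlayersPerTeam) {
        $pdo->prepare('INSERT INTO team_members (team_id, user_id) VALUES (?, ?)')
            ->execute([$teamId, $userId]);
    }

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}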
Assuming you're using a database to store the information, your database system should have methods for transactional processing that cater for multiple transactions occurring at the same time (even if this is an incredibly rare case). There's a wiki article on this (even if I hesitate to link to it): http://en.wikipedia.org/wiki/Transaction_processing.
MySQL has methods for doing this: http://dev.mysql.com/doc/refman/5.0/en/commit.html. As does PostgreSQL: http://www.postgresql.org/docs/8.3/static/tutorial-transactions.html.
I have a question about reducing database queries every time I retrieve data from the DB. Assume I have 5 tables (Notifications, Orders, Users, Products, Transactions).
In dashboard, I have to show:
Unread notifications
Order statistics
Number of products
Transactions statistics
The easiest way (pseudo-code):
$notis = Notification::where('unread')->get();
$orderStat = Order::join('user')->where('count(order.id)', 'status')->groupby('status');
$product = Product::count();
$transactions = Transaction::join('user')->where('count(trans.id)', 'status')->groupby('status');
So I have to run 4 separate queries. My mentor said this solution would reduce the speed of the server if there are many, many records in each table, or if the dashboard needs more tables (not for joining) to query.
I already did:
Index on foreign key columns
Eager loading for Eloquent where available
(or using AJAX to load data after the UI has rendered)
I want to ask: are there any other methods to reduce processing time in the above case?
And another question, about connection pooling: he said to use it to increase speed. After researching, I found that Laravel already does connection pooling, doesn't it?
Edit:
A user has many notifications, orders, and transactions.
I just want to ask which methods, besides those I mentioned above, would improve performance.
First, correct your pseudo-code; it should look like this:
$notis = Notification::where('unread', 1)->get();
$orderStat = Order::join('user')->selectRaw('count(order.id) as total, status')->groupBy('status')->get();
$product = Product::count();
$transactions = Transaction::join('user')->selectRaw('count(trans.id) as total, status')->groupBy('status')->get();
I have a few additional tips which you can take into consideration:
1) Do not use the query cache (it is deprecated as of MySQL 5.7.20 and removed in MySQL 8.0)
2) Use InnoDB as the storage engine for your tables and tune the InnoDB buffer pool to improve capacity - https://dev.mysql.com/doc/refman/5.5/en/innodb-buffer-pool.html
3) If possible, do not fetch all records from a query; use LIMIT and OFFSET to narrow the data (the limit and skip methods in Laravel, or simply paginate)
4) If you do not need user data, do not use an INNER JOIN with the users table (assuming each transaction and order always has a user)
5) Benchmark requests using ab (http://httpd.apache.org/docs/2.2/programs/ab.html) to configure your server correctly
6) Use PHP >= 7.0 with FPM and OPcache; it will increase the number of requests you can handle
7) Store the results of your queries with the Cache API, https://laravel.com/docs/5.5/cache, and invalidate the cache when the data changes (see the sketch after this list)
8) Profile your queries using https://github.com/barryvdh/laravel-debugbar
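For tip 7, a minimal sketch of what that caching could look like (the cache keys, the 10-minute TTL, and the join columns are assumptions):

use Illuminate\Support\Facades\Cache;

// Cache each dashboard block and rebuild it only on a cache miss.
$product = Cache::remember('dashboard.product_count', 10, function () {
    return Product::count();
});

$orderStat = Cache::remember('dashboard.order_stats', 10, function () {
    // Join/column names here are assumptions about the schema.
    return Order::join('users', 'users.id', '=', 'orders.user_id')
        ->selectRaw('count(orders.id) as total, status')
        ->groupBy('status')
        ->get();
});

// Wherever an order is created or updated, drop the stale entry:
Cache::forget('dashboard.order_stats');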
In this situation, you'd better use a cache so you don't need to hit the database every time.
It can reduce processing time and give better performance.
You should use Laravel relationships; go through the link below for proper guidance.
For many to many relationship:
https://laravel.com/docs/5.5/eloquent-relationships#many-to-many
Use the appropriate one according to your situation. Hope it will help you.
After setting up the relationship you can access the related tables' data through a single object.
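A sketch of what that could look like for the one-to-many case from the question (method names and foreign keys are assumed to follow Laravel's conventions):

// app/User.php
class User extends Authenticatable
{
    public function orders()
    {
        return $this->hasMany(Order::class);
    }

    public function notifications()
    {
        return $this->hasMany(Notification::class);
    }
}

// A single object then gives access to the related tables,
// eager-loaded to avoid extra queries:
$user = User::with(['orders', 'notifications'])->find($userId);
$unread = $user->notifications->where('unread', 1);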
I'm trying to increment a like counter for a user's post. A post in my MySQL table has a field likes which shows how many likes this specific post has. Now what will happen if multiple users like the same post at the same time? I think this will result in a conflict or not increment correctly. How can I avoid this? Do I need to lock the row, or what other options do I have?
My query could look something like this:
UPDATE posts
SET likes = likes + 1
WHERE id = some_value
Also, when a user unlikes a post this should decrement likes by 1.
Thanks for any help!
It's a simple enough query to run:
UPDATE mytable SET mycolumn = mycolumn + 1;
Even if multiple people like at the same time, the database serialises the writes to that row, so each UPDATE sees the previous value and you get the correct number at the end.
Queries like this run in fractions of a second, so you don't need to worry about multiple users clicking them unless you have millions of users, and then you'll have plenty of other query problems to deal with.
Someone liking the same post at the same time will only cause issues in long and complex queries that read the value first and write it back later.
This should be more than sufficient for an increment; otherwise the entirety of SQL would be rendered useless.
UPDATE posts
SET likes = likes + 1
WHERE id = some_value
You could always run this query twice programmatically and see what happens. I can guarantee that it will go from 0 to 2.
someTableAdapter.LikePost(postID);
someTableAdapter.LikePost(postID);
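In PHP, the equivalent with PDO might look like this (a sketch; the posts table is from the question, while $liked and the sign-flip for unlikes are my additions):

<?php
// $pdo is a connected PDO instance; $postId comes from the request.
$delta = $liked ? 1 : -1; // +1 for a like, -1 for an unlike

$stmt = $pdo->prepare('UPDATE posts SET likes = likes + ? WHERE id = ?');
$stmt->execute([$delta, $postId]);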
What you described is called a race condition (as in, a race to the finish; kind of like playing musical chairs with two people but only one chair). I believe if you research transactions you may be able to implement your code and handle concurrent likes safely. Here's a link to the MySQL manual.
MySQL 5.1 Manual: Transactional and Locking Statements
YouTube: MySQL Transactions
"MySQL uses row-level locking for InnoDB tables to support simultaneous write access by multiple sessions, making them suitable for multi-user, highly concurrent, and OLTP applications.
MySQL uses table-level locking for MyISAM, MEMORY, and MERGE tables, allowing only one session to update those tables at a time, making them more suitable for read-only, read-mostly, or single-user applications. "
MySQL Manual, 8.7.1: Internal Locking Methods
But if transactions are too much, make sure you are using at least the InnoDB storage engine; you should be alright as long as you're using an ACID-compliant database with the proper level of isolation.
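For example, if each like is also recorded per user (the post_likes table here is an assumption), a transaction keeps the two writes consistent:

<?php
// Assumed schema: post_likes(post_id, user_id) alongside the posts.likes counter.
try {
    $pdo->beginTransaction();

    $pdo->prepare('INSERT INTO post_likes (post_id, user_id) VALUES (?, ?)')
        ->execute([$postId, $userId]);

    // InnoDB's row-level lock serialises concurrent increments on this row.
    $pdo->prepare('UPDATE posts SET likes = likes + 1 WHERE id = ?')
        ->execute([$postId]);

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}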
There are lots of good references, but hopefully I've pointed you in the right direction.
Anthony
I have two tables. To make it easy, consider the following as an example:
contacts (has name and email)
messages (messages, but also has name and email which need to be synced from the contacts table)
Now please, for those who are itching to say "use relational methods", foreign keys, etc.: I know, but this situation is different. I need to have a "copy" of the name and email on the messages table itself, and only need to sync it from time to time.
As per the syncing requirement, I need to sync the names on the messages with the latest names on the contacts table.
I basically have the following UPDATE SQL in a loop over all rows in the contacts table:
UPDATE messages SET name=(
SELECT name FROM contacts WHERE email = '$cur_email')
WHERE email='$cur_email'
The above loops through all the contacts, and the query is fired once per contact.
I have several looping ideas to do this without the inner SELECT, but I thought the above would be more efficient (is it?). Still, I was wondering if there's an SQL way that's more efficient, like:
UPDATE messages SET name=(
SELECT name FROM contacts WHERE email = '$cur_email')
WHERE messages.email=contacts.email
something that looks like a join?
I think this should be more efficient:
UPDATE messages m JOIN contacts n on m.email=n.email SET m.name=n.name
OK, I figured it out now, using a JOIN on the UPDATE.
Like:
UPDATE messages JOIN contacts ON messages.email = contacts.email
SET messages.name = contacts.name
WHERE messages.name != contacts.name
it's fairly simple!
BUT... I'm not sure this is really the ANSWER TO MY POST, since my question was what the BEST WAY is in terms of efficiency.
Executing the above query on 2000 records caused a 4-second pause on my system, whereas executing a few SELECTs, a PHP loop, and a few UPDATE statements felt faster.
Hmmmm.
------ UPDATE --------
Well, I went ahead and created 2 scripts to test this.
On my quad-core i7 Ivy Bridge machine, surprisingly,
a single UPDATE query via SQL JOIN is MUCH SLOWER than a multi-query and loop approach.
On one side I have the above simple query running on 1000 records, where all records need updating...
Script execution time was 4.92 seconds! It caused my machine to hiccup for a split second, and I noticed a 100% spike on one of my cores.
Succeeding calls to the script (where no fields needed updating) took the same amount of time! Ridiculous.
On the other side, a SELECT JOIN query for all rows needing an update, plus a simple UPDATE query looped in a foreach() in PHP, took the script
3.45 seconds to do all the updates, at around a 50% single-core spike,
and
1.04 seconds on succeeding runs (where no fields needed updating).
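For reference, the loop approach I benchmarked looks roughly like this (a sketch; PDO is assumed, and the table and column names are the ones from my example above):

<?php
// Select only the rows that actually need updating...
$rows = $pdo->query(
    'SELECT DISTINCT c.email, c.name
     FROM contacts c
     JOIN messages m ON m.email = c.email AND m.name != c.name'
)->fetchAll(PDO::FETCH_ASSOC);

// ...then fire one small prepared UPDATE per stale contact.
$update = $pdo->prepare('UPDATE messages SET name = ? WHERE email = ?');
foreach ($rows as $row) {
    $update->execute([$row['name'], $row['email']]);
}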
Case closed...
hope this helps the community!
PS:
This is what I meant when debating logic with programmers who are too deep into "coding standards", whose argument is "do it on the SQL side if you can" because it's supposedly faster and more standard than the "dirty" method of evaluating and updating in loops. Sheesh.
I have a database call that I'm not sure I'm doing in the most efficient way. Basically, the call queries a table of events with zip codes and then joins a zip code table that gives the lat/lon of each event's zip. Then it joins the logged-in user to the query; that user gets a lat/lon upon logging in. So the whole query pulls events from within so many miles of the user's lat/lon.
My question: is there a better way to do it than calling this query each time the page is loaded? Would a stored procedure be faster? I don't have any experience with them. I am using MySQL.
$this->db->select('*');
$this->db->from('events');
$this->db->join('zipcodes', 'zipcodes.zipcode = courses.courseZip');
$this->db->join('eventTypes', 'eventTypes.eventTypeID = events.eventType');
$this->db->where('eventApproved', 1);
$this->db->select('(DEGREES(ACOS(SIN(RADIANS('.$this->user['userLat'].'))
* SIN(RADIANS(latitude))
+ COS(RADIANS('.$this->user['userLat'].'))
* COS(RADIANS(latitude))
* COS(RADIANS('.$this->user['userLon'].' - longitude))))) * 69.09 AS distance');
$this->db->having('distance <', 100);
Yes, it will help to have a stored procedure here.
The reasons are:
1. It makes your database layer more manageable.
2. SPs can be precompiled: when you first run one, the engine creates an execution plan and saves it; the next time it runs, the plan is reused, so you get some performance benefit. In your case you might benefit a lot if the underlying table is not changing (updated/deleted) too much after the SP is made. If it is, you can recompile the SP (in SQL Server, by running it WITH RECOMPILE) and it will create and save a new plan.
How do you do it?
Well, it is quite easy. If you are using HeidiSQL as a MySQL front end, or the query browser of MySQL Enterprise 5.0, you may be able to generate the SP graphically. But even coding it from scratch is easy:
http://dev.mysql.com/doc/refman/5.0/en/stored-routines-syntax.html
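A minimal sketch of such a procedure, reusing the distance formula from the question (the procedure and parameter names, and the events.eventZip column, are assumptions):

DELIMITER //
CREATE PROCEDURE events_near(IN p_lat DOUBLE, IN p_lon DOUBLE, IN p_miles DOUBLE)
BEGIN
    -- Same great-circle formula as in the question, 69.09 miles per degree.
    -- The eventZip column name is an assumption about the events table.
    SELECT e.*,
           (DEGREES(ACOS(SIN(RADIANS(p_lat)) * SIN(RADIANS(z.latitude))
               + COS(RADIANS(p_lat)) * COS(RADIANS(z.latitude))
               * COS(RADIANS(p_lon - z.longitude)))) * 69.09) AS distance
    FROM events e
    JOIN zipcodes z ON z.zipcode = e.eventZip
    WHERE e.eventApproved = 1
    HAVING distance < p_miles;
END //
DELIMITER ;

-- Called from the application as:
CALL events_near(40.7128, -74.0060, 100);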
SPs are also advisable from a security point of view because they can help stop SQL injection attacks.
Once you have the SP you can tune the table to make it faster:
1. Create an index (nonclustered) on the columns in the WHERE clause.
2. Include the columns you bring back in the SELECT in this index.
In Microsoft SQL Server you can do this using a covering index. I am not sure whether you can in MySQL, but if you can, you should try to create an index covering as many columns as you can, either clustered or nonclustered.
HTH
You're going to want to store as much data (such as the user's lat/long) in the session as possible. This way you're not querying, on every page load, for data that isn't really changing.
I have a few tables which are accessed frequently by users. The same kinds of queries run again and again, which causes extra load on the server.
The records are not inserted/updated frequently, so I was thinking of caching the IDs in memcached and then fetching the rows from the database by ID; this would reduce the burden of searching/sorting, etc.
Here is an example
SELECT P.product_id
FROM products P
JOIN product_category C ON P.cat_id = C.cat_id
WHERE C.cat_name = 'Fashion'
  AND P.is_product_active = true
  AND C.is_cat_active = true
ORDER BY P.product_date DESC
The above query returns all the product IDs of a particular category. These will be imported into memcached, and then the rest of the process (i.e. paging) will be simulated the same way we do it with MySQL result sets.
The insert process will either expire the cache or push the new product ID onto the front of the array.
My question: is this a practical approach? How do people deal with searches, say when a person searches for a product and gets 10000 results (which practically may not be possible)? Do they hit the tables every time? Are there any good examples of memcached with MySQL that show how these tasks can be done?
You may want to ask yourself whether you really need to invalidate the cache upon every insert/update of a product.
Usually a 5-minute cache is acceptable for a product list.
If your invalidation scheme is time-based only (new entries will only appear after 5 minutes), there is a quick & dirty trick you can use with memcache: simply use an md5 of your SQL query string as the memcache key, and tell memcache to keep the result of the SELECT for 5 minutes.
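In PHP that trick can look like this (a sketch using the Memcached extension; PDO is assumed for the database side, and 300 seconds matches the 5 minutes above):

<?php
$memcached = new Memcached();
$memcached->addServer('127.0.0.1', 11211);

$sql = "SELECT P.product_id
        FROM products P
        JOIN product_category C ON P.cat_id = C.cat_id
        WHERE C.cat_name = 'Fashion'
          AND P.is_product_active = true
          AND C.is_cat_active = true
        ORDER BY P.product_date DESC";

$key = md5($sql); // the query string itself identifies the cache entry

$productIds = $memcached->get($key);
if ($productIds === false) {
    // Cache miss: hit MySQL once, keep the ID list for 5 minutes.
    $productIds = $pdo->query($sql)->fetchAll(PDO::FETCH_COLUMN);
    $memcached->set($key, $productIds, 300);
}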
I hope this will help you