I'm trying to create an application that has the ability to sell gift cards but I'm fearing the consistency of data, so let me explain what I mean:
I have 3 tables
transactions(id, created_date),
cards(id, name) and
vouchers(id, code, card_id, transaction_id).
The database contains many cards and vouchers and each voucher belongs to one card.
The user will want to select a card to buy and choose a quantity.
So, the application will select vouchers for the chosen card with a LIMIT equal to the quantity, then create a new transaction and write its transaction_id into the selected vouchers to flag them as bought under that transaction.
So, what I am afraid of is: what if multiple users send the same buying request for the same card at the exact same time? Will some data collision occur, and what is the best approach to prevent it?
I am currently using MySQL 8 Community Edition with InnoDB engine for all tables.
What I am searching for is whether I can create some kind of queue that processes purchases one at a time without collisions, even if it means users have to wait their turn in the queue.
Thanks in advance
This is a job for MySQL transactions. With SQL you can do something like this:
START TRANSACTION;
SELECT id AS card_id FROM cards WHERE name = <<<<chosen name>>>> FOR UPDATE;
--do whatever it takes in SQL to complete the operation
COMMIT;
Multiple PHP processes may try to do something with the same row of cards concurrently. Using START TRANSACTION combined with SELECT ... FOR UPDATE prevents that by making the next process wait until the first process does COMMIT.
Internally, MySQL keeps a queue of waiting processes. As long as you COMMIT promptly after START TRANSACTION, your users won't notice this. At the application level this usually isn't called "queuing" but rather transaction consistency.
Laravel / eloquent makes this super easy. It does the START TRANSACTION and COMMIT for you, and offers lockForUpdate().
DB::transaction(function () use ($chosenName) {
    DB::table('cards')->where('name', '=', $chosenName)->lockForUpdate()->get();
    /* do whatever you need to do for your transaction */
});
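Applied to the voucher schema in the question, the whole purchase can be sketched in plain SQL like this (card id 42 and quantity 5 are example values; how you report a shortfall is up to you):

```sql
START TRANSACTION;

-- Lock the unsold vouchers we intend to claim, so no concurrent
-- purchase can grab the same rows before we COMMIT.
SELECT id
FROM vouchers
WHERE card_id = 42 AND transaction_id IS NULL
ORDER BY id
LIMIT 5
FOR UPDATE;

-- If fewer rows came back than the requested quantity,
-- ROLLBACK here and report "not enough vouchers in stock".

INSERT INTO transactions (created_date) VALUES (NOW());

-- Flag the locked vouchers as sold under the new transaction.
UPDATE vouchers
SET transaction_id = LAST_INSERT_ID()
WHERE card_id = 42 AND transaction_id IS NULL
ORDER BY id
LIMIT 5;

COMMIT;
```

Since you are on MySQL 8, you can also append SKIP LOCKED to the FOR UPDATE clause so that concurrent buyers claim different vouchers instead of queuing behind each other.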
Related
I am working on a small project and I want to flag some items in a table as "claimed".
So most items would have claim_id = NULL; when I want to claim an item I will run this command to update the next available item:
UPDATE ci_items
SET claim_id = ?
WHERE claim_id IS NULL
ORDER BY id
LIMIT 1;
I think this should always either update the next available item, or nothing if all items have been claimed.
In the context of a web application where 2 db connections could run this at the same time, is there any risk that an item gets claimed twice at the same time, one claim_id overriding the previous one?
I am thinking I could use a transaction, but the table is currently MyISAM and I don't think that engine supports transactions.
What you have shown us works well. An individual SQL UPDATE statement is atomic: it runs in an implicit transaction (this is "autocommit") for its duration.
Go for it.
And don't stop asking yourself this kind of Atomicity, Consistency, Isolation, Durability (ACID) question as you design and build your application.
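If you also need to know which item you just claimed, one option (a sketch; 123 stands in for the claimer's id) is to select it back after the atomic UPDATE:

```sql
UPDATE ci_items
SET claim_id = 123
WHERE claim_id IS NULL
ORDER BY id
LIMIT 1;

-- The UPDATE above is atomic, so no other connection got this row.
-- Because items are claimed in ascending id order, the most recent
-- claim for claimer 123 is the one with the highest id.
SELECT MAX(id) FROM ci_items WHERE claim_id = 123;
```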
Currently I am running into a problem and I am breaking my head over it (although I might be overthinking it).
I have a table in my SQL DB with some products and the amount in stock. People can visit the product page, or order it (or update it if you are an admin). But now I am afraid of race conditions.
The order process happens as following:
1) The session starts a transaction.
2) It gets the current amount of units available.
3) It checks that the amount to order is available, and it subtracts that amount.
4) It updates the product table with the new "total amount" value. Here is the code, heavily shortened (without prepared statements etc.):
BEGIN;
SELECT amount FROM products WHERE id = 100;
$available = $result->fetch_array(MYSQLI_NUM)[0];
if ($order <= $available) {
    $available -= $order;
    UPDATE products SET amount = $available WHERE id = 100;
}
// error checking, then ROLLBACK or COMMIT
My question now is:
What do I do to prevent dirty reads in step 2, and thus the writing back of wrong values in step 4?
Example: person 1 orders 10 units of product A, and while that order is at step 3, person 2 also orders 5 units of product A. In step 2 the second session still reads the "old" value and works with that, and thus stores an incorrect number in step 4.
I know I can use SELECT ... FOR UPDATE, which puts an exclusive lock on the row, but this would also make a normal user who is just checking availability on the product page wait, and I would rather have the page load quickly than show a to-the-second accurate inventory. So basically I want the lock to apply only to clients that will update the value in the same transaction.
Is what I want possible, or do I need to work with what I've got?
Thanks in advance!
There are two ways you can address the problem:
You can use a function in MySQL that updates the stock and raises an error ("Sorry, your product just went out of stock!") whenever the balance after the deduction would go below 0.
OR (preferred way)
You can use locking in MySQL. In this case it should be a write lock. The write lock makes other locking reads (the second person's SELECT ... FOR UPDATE) wait until the lock is released (by the first person). Note that plain SELECTs on the product page are not blocked: InnoDB serves them as non-locking consistent reads, which is exactly the behavior you asked for.
I hope that helps you!
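There is also a third option worth noting: collapse the check and the deduction into one atomic statement, which sidesteps the read-modify-write race entirely (a sketch using the question's table; 10 is an example order size):

```sql
-- Deduct stock only if enough remains; a single UPDATE is atomic,
-- so no explicit lock or transaction is needed.
UPDATE products
SET amount = amount - 10
WHERE id = 100 AND amount >= 10;

-- ROW_COUNT() = 1 means the order went through;
-- 0 means there was insufficient stock.
SELECT ROW_COUNT();
```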
I am writing an application which tracks financial transactions (as in a bank) and maintains the balance amount. I am denormalizing to keep the performance in check (and not have to calculate the balance at runtime), as discussed Here and Here.
Now I am facing a race condition: if two people simultaneously make a transaction related to the same entity, the balance calculation discussed above can return/set inconsistent data, as discussed Here and Here, and also as suggested in the answers.
I am going with MySQL transactions.
Now my question is:
What Happens to the other similar queries when a mysql Transaction is underway?
I wish to know whether the other transactions fail (as in an HTTP 500 error) or are queued and executed once the first transaction finishes.
I also need to know how to deal with either outcome from the PHP point of view.
And since these transactions will be one element of a larger operation in PHP with many prior INSERT queries, should I also devise a mechanism to roll back those successfully executed queries, since I want atomicity not only for individual queries but for the whole operation logic (PHP)?
Edit 1 :-
Also, if the former is the case, should I check for the error, wait a few seconds, and then retry that specific transaction?
Edit 2 :-
Also, MySQL triggers are not an option for me.
With code like this, there is no race condition. Instead, one transaction could be aborted (ROLLBACK'd).
BEGIN;
SELECT balance FROM Accounts WHERE acct_id = 123 FOR UPDATE;
if balance < 100, then ROLLBACK and exit with "insufficient funds"
UPDATE Accounts SET balance = balance - 100 WHERE acct_id = 123;
UPDATE Accounts SET balance = balance + 100 WHERE acct_id = 456;
COMMIT;
And check for errors at each step. If there is an error, ROLLBACK and rerun the transaction. The second time it will probably succeed. If not, then abort; it is probably a logic bug. Only then should you return HTTP error 500.
When two users "simultaneously" try to do similar transactions, then one of these things will happen:
The 'second' user will be stalled until the first finishes.
If that stall exceeds innodb_lock_wait_timeout, your queries are too slow or something else is wrong; you need to fix the system.
If you get a "Deadlock", there may be ways to repair the code. Meanwhile, simply restarting the transaction is likely to succeed.
But it will not mess up the data (assuming the logic is correct).
There is no need to "wait a second" -- unless you have transactions that take "a second". Such would be terribly slow for this type of code.
What I am saying works for "real money", non-real money, non-money, etc.; whatever you need to tally carefully.
I'm a bit new to coding in general and seem to be struggling to wrap my mind around how to store data effectively for my application. (I'm attempting this in Laravel, with MySQL.)
I realise this question might lean towards being opinion-specific, but I am really looking for obvious pointers on false assumptions I have made, or a nudge in the direction of best-practices.
I will be tracking a number of messages, as if they were credits in a bulk email management system. One message-batch could use an undetermined number of credits to fire off a full batch of messages.
Groups of users will be able to send messages if they have credits to do so. One solution I have thought of is a table: id, user_group_id, debit/credit, reference_code. Here user_group_id links to the group to which the user belongs, the debit/credit column holds a positive or negative number (for a message-related transaction), and reference_code tracks the type of transaction. Credit transactions would arise when the user group received new credits (purchased a block of new credits); debit transactions, when batches of messages had been sent.
All this background leads to my question: I still don't hold a single value for the number of credits a user_group has available. Before allowing a new batch, should I run a database query each time that sums all the "accounting" of positive and negative transactions to determine whether the group is "in the black" and can send further message batches, or should I keep the resulting total of available credits in an additional table?
If I do store the total-available-credits value by itself, when should this single value be updated? At the end of every message-related task my application performs? (User adds new credits, update total; user sends batch, update total; etc.)
This is an interesting question. Opinionated as you pointed out but nonetheless interesting. Database design is not my strong suit but here is how I would do it.
First, ensure integrity with InnoDB tables and foreign key constraints. I would keep the total remaining credits each user group has in the user group table. You could then create a transaction table with a transaction ID, the user group ID, and the credits used for that transaction, so that you could follow each user group's transaction history.
Just a thought. Like I said, I'm by no means an expert. It may be useful, however, to have a log of some sort so that you could verify transactions later in case of a credit discrepancy. You could always use it to recalculate the remaining credits and ensure the numbers align.
Since these transactions may be important for credit/billing purposes, you may also want to turn off MySQL's autocommit and use the COMMIT and ROLLBACK features to ensure your data stays intact in case of an error.
" should I be running a database query each time that sums all the "accounting" of positive and negative transactions to determine whether a user is "in the black" to be able to send further message batches "
YES
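The running balance is a simple aggregate. As a sketch (the table and column names here are assumptions based on the question, not a prescribed schema):

```sql
-- Purchases are stored as positive amounts, sent batches as negative,
-- so the balance is just the sum; COALESCE turns "no rows" into 0.
SELECT COALESCE(SUM(amount), 0) AS balance
FROM credit_transactions
WHERE user_group_id = 42;
```

With an index on user_group_id this stays fast for a long time, and the ledger remains the source of truth; you can always add a cached total later and use the ledger to verify it.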
I've got a website that allows users to join a "team". The team has a member limit defined in $maxPlayersPerTeam.
When the user clicks the link to join a team this code gets executed
//query to get players count
if ($players < $maxPlayersPerTeam) {
    // query to insert the player
}
However, if two users click the join link at the same time, both can join the team even if $players is equal to $maxPlayersPerTeam.
What can I do to avoid this?
You should acquire a lock on the dataset (I hope you're using a database, right?), execute your check, and eventually update the dataset. So if two people really execute the code simultaneously, one of them has to wait for the other's lock. By the time the second one acquires the lock, the dataset has already been updated, so that person can't join the full team either.
Be happy: some people have already worked on this kind of problem and offer you a possible solution: database transactions.
The best way to handle those in PHP is to use PDO and its beginTransaction, commit and rollBack methods.
Your tables will have to use a database engine which accepts transactions (so InnoDB instead of MyISAM).
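As a sketch of that pattern in SQL (the teams/team_members tables and the ids here are assumptions, not from the question):

```sql
START TRANSACTION;

-- Lock the team row so concurrent joins serialize on it.
SELECT id FROM teams WHERE id = 7 FOR UPDATE;

-- Safe to count now: no other join for team 7 can get past the
-- lock above until we COMMIT or ROLLBACK.
SELECT COUNT(*) FROM team_members WHERE team_id = 7;

-- If the count is below $maxPlayersPerTeam, insert the player;
-- otherwise ROLLBACK and tell the user the team is full.
INSERT INTO team_members (team_id, player_id) VALUES (7, 99);

COMMIT;
```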
Assuming you're using a database to store the information, your database system should have methods for transactional processing and for managing transactions, which would cater for multiple transactions occurring at the same time (even if this is an incredibly rare case). There's a wiki article on this (even if I hesitate to link to it): http://en.wikipedia.org/wiki/Transaction_processing.
MySQL has methods for doing this: http://dev.mysql.com/doc/refman/5.0/en/commit.html. As does PostgreSQL: http://www.postgresql.org/docs/8.3/static/tutorial-transactions.html.