I have a game website and I want to update the user's money. However, if I use two PCs at the exact same time, this code will execute twice and the user will be left with negative money. How can I stop this from happening? It's driving me crazy.
$db = getDB();
$sql = "UPDATE users SET money = money- :money WHERE username=:user";
$stmt = $db->prepare($sql);
$stmt->bindParam(':money', $amount, PDO::PARAM_STR);
$stmt->bindParam(':user', $user, PDO::PARAM_STR);
$stmt->execute();
Any help is appreciated.
Echoing the comment from @GarryWelding: the database update isn't an appropriate place in the code to handle the use case described. Locking a row in the user table isn't the right fix.
Back up a step. It sounds like we want some fine-grained control over user purchases. Seems like we need a place to store a record of user purchases, and then we can check that.
Without diving into a database design, I'm going to throw out some ideas here...
In addition to the "user" entity
user
username
account_balance
Seems like we are interested in some information about purchases a user has made. I'm throwing out some ideas about the information/attributes that might be of interest to us, not making any claim that all of these are needed for your use case:
user_purchase
username that made the purchase
items/services purchased
datetime the purchase was originated
money_amount of the purchase
computer/session the purchase was made from
status (completed, rejected, ...)
reason (e.g. purchase is rejected: "insufficient funds", "duplicate item")
We don't want to try to track all of that information in the "account balance" of a user, especially since there can be multiple purchases from a user.
If our use case is much simpler than that, and we only need to keep track of the most recent purchase by a user, then we could record that in the user entity.
user
username
account_balance ("money")
most_recent_purchase
_datetime
_item_service
_amount ("money")
_from_computer/session
And then with each purchase, we could record the new account_balance and overwrite the previous "most recent purchase" information.
If all we care about is preventing multiple purchases "at the same time", we need to define that... does that mean within the same exact microsecond? within 10 milliseconds?
Do we only want to prevent "duplicate" purchases from different computers/sessions? What about two duplicate requests on the same session?
This is not how I would solve the problem. But to answer the question you asked: if we go with a simple use case, "prevent two purchases within a millisecond of each other", and we want to do this in an UPDATE of the user table...
Given a table definition like this:
user
username datatype NOT NULL PRIMARY KEY
account_balance datatype NOT NULL
most_recent_purchase_dt DATETIME(6) NOT NULL COMMENT 'most recent purchase dt)
with the datetime (down to the microsecond) of the most recent purchase recorded in the user table (using the time returned by the database)
UPDATE user u
SET u.most_recent_purchase_dt = NOW(6)
, u.account_balance = u.account_balance - :money1
WHERE u.username = :user
AND u.account_balance >= :money2
AND NOT ( u.most_recent_purchase_dt >= NOW(6) + INTERVAL -1000 MICROSECOND
AND u.most_recent_purchase_dt < NOW(6) + INTERVAL +1001 MICROSECOND
)
We can then detect the number of rows affected by the statement.
If we get zero rows affected, then either :user wasn't found, or :money2 was greater than the account balance, or most_recent_purchase_dt was within a range of +/- 1 millisecond of now. We can't tell which.
If more than zero rows are affected, then we know that an update occurred.
EDIT
To emphasize some key points which might have been overlooked...
The example SQL is expecting support for fractional seconds, which requires MySQL 5.7 or later. In 5.6 and earlier, DATETIME resolution was only down to the second. (Note that the column definition in the example table and the SQL specify resolution down to the microsecond: DATETIME(6) and NOW(6).)
The example SQL statement is expecting username to be the PRIMARY KEY or a UNIQUE key in the user table. This is noted (but not highlighted) in the example table definition.
The example SQL statement prevents the update of user when two statements execute within one millisecond of each other. For testing, change that millisecond resolution to a longer interval; for example, change it to one minute.
That is, change the two occurrences of 1000 MICROSECOND to 60 SECOND.
A few other notes: use bindValue in place of bindParam (since we're providing values to the statement, not returning values from it).
Also make sure PDO is set to throw an exception when an error occurs (if we aren't going to check the return of the PDO functions in the code), so the code isn't putting its (figurative) pinky finger to the corner of its mouth, Dr. Evil style: "I just assume it will all go to plan. What?"
# enable PDO exceptions
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$sql = "
UPDATE user u
SET u.most_recent_purchase_dt = NOW(6)
, u.account_balance = u.account_balance - :money1
WHERE u.username = :user
AND u.account_balance >= :money2
AND NOT ( u.most_recent_purchase_dt >= NOW(6) + INTERVAL -60 SECOND
AND u.most_recent_purchase_dt < NOW(6) + INTERVAL +60 SECOND
)";
$sth = $dbh->prepare($sql);
$sth->bindValue(':money1', $amount, PDO::PARAM_STR);
$sth->bindValue(':money2', $amount, PDO::PARAM_STR);
$sth->bindValue(':user', $user, PDO::PARAM_STR);
$sth->execute();
# check if row was updated, and take appropriate action
$nrows = $sth->rowCount();
if( $nrows > 0 ) {
// row was updated, purchase successful
} else {
// row was not updated, purchase unsuccessful
}
And to emphasize a point I made earlier, "lock the row" is not the right approach to solving the problem. And doing the check the way I demonstrated in the example doesn't tell us the reason the purchase was unsuccessful (insufficient funds, or within the specified timeframe of the preceding purchase).
For the negative balance, change your code to:
$sql = "UPDATE users SET money = money- :money WHERE username=:user AND money >= :money";
First idea:
If you're using InnoDB, you can use transactions to provide fine-grained mutual exclusion. Example:
START TRANSACTION;
UPDATE users SET money = money- :money WHERE username=:user;
COMMIT;
If you're using MyISAM, you can use LOCK TABLE to prevent B from accessing the table until A finishes making its changes. Example:
LOCK TABLES users WRITE;
UPDATE users SET money = money- :money WHERE username=:user;
UNLOCK TABLES;
Second idea:
If the UPDATE doesn't work, you may delete the row and insert a new one (if you have an auto-increment id, there won't be duplicates).
Related
I have found two different ways to, first, get the next invoice number and, then, save the invoice in a multi-tenant database where, of course, each tenant will have his own invoices with different incremental numbers.
My first (and current) approach is this (works fine):
Add a new record to the invoices table. The invoice number doesn't matter yet (for example, 0, or empty)
I get the unique ID of THAT created record after insert
Now I do a "SELECT table where ID = $lastcreatedID **FOR UPDATE**"
Here I get the latest saved invoice number with "SELECT @A:=MAX(NUMBER)+1 FROM TABLE WHERE......"
Finally I update the previously saved record with that invoice number with an "UPDATE table SET NUMBER = $mynumber WHERE ID = $lastcreatedID"
This works fine, but I don't know if the FOR UPDATE is really needed, or if this is the correct way to do this in a multi-tenant DB with regard to performance, etc.
The second (and simpler) approach is this (it works too, but I don't know if it is a safe approach):
INSERT INTO table (NUMBER,TENANT) SELECT COALESCE(MAX(NUMBER),0)+1,$tenant FROM table WHERE....
That's it
Both methods are working, but I would like to know the differences between them regarding speed, performance, if it may create duplicates, etc.
Or... is there any better way to do this?
I'm using MySQL and PHP. The application is an invoice/sales cloud software that will be used by a lot of customers (tenants).
Thanks
Regardless of if you're using these values as database IDs or not, re-using IDs is virtually guaranteed to cause problems at some point. Even if you're not re-using IDs you're going to run into the case where two invoice creation requests run at the same time and get the same MAX()+1 result.
To get around all this you need to reimplement a simple sequence generator that locks its storage while a value is being issued. Eg:
CREATE TABLE client_invoice_serial (
-- note: also FK this back to the client record
client_id INTEGER UNSIGNED NOT NULL PRIMARY KEY,
serial INTEGER UNSIGNED NOT NULL DEFAULT 0
);
$dbh = new PDO('mysql:...');
/* this defaults to 'on', making every query an implicit transaction. it needs to
be off for this. you may or may not want to set this globally, or just turn it off
before this, and back on at the end. */
$dbh->setAttribute(PDO::ATTR_AUTOCOMMIT,0);
// simple best practice, ensures that SQL errors MUST be dealt with. is assumed to be enabled for the below try/catch.
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$dbh->beginTransaction();
try {
// the below will lock the selected row
$select = $dbh->prepare("SELECT * FROM client_invoice_serial WHERE client_id = ? FOR UPDATE;");
$select->execute([$client_id]);
if( $select->rowCount() === 0 ) {
$insert = $dbh->prepare("INSERT INTO client_invoice_serial (client_id, serial) VALUES (?, 1);");
$insert->execute([$client_id]);
$invoice_id = 1;
} else {
$invoice_id = $select->fetch(PDO::FETCH_ASSOC)['serial'] + 1;
$update = $dbh->prepare("UPDATE client_invoice_serial SET serial = serial + 1 WHERE client_id = ?");
$update->execute([$client_id]);
}
$dbh->commit();
} catch(\PDOException $e) {
// make sure that the transaction is cleaned up ASAP, then let the exception bubble up into your general error handling.
$dbh->rollback();
throw $e; // or throw a more pertinent error/exception of your choosing.
}
// both committing and rolling back will release the lock
At a very basic level, this is what MySQL is doing in the background for AUTO_INCREMENT columns.
Do not use MAX(id)+1. It will, someday, bite you. There will be two invoices with the same number, and it will take us a few paragraphs to explain why it happened.
Instead, use AUTO_INCREMENT the way it is intended.
INSERT INTO Invoices (id, ...) VALUES (NULL, ...);
SELECT LAST_INSERT_ID(); -- specific to the connection
That is safe even when multiple connections are doing the same thing. No FOR UPDATE, no BEGIN, etc is necessary. (You may want such for other purposes.)
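In PHP, the same pattern might look like this (a minimal sketch; assumes a PDO connection $dbh, and the column names are illustrative):
$dbh->query("INSERT INTO Invoices (id, tenant) VALUES (NULL, 'acme')");
$invoiceId = $dbh->lastInsertId(); # like LAST_INSERT_ID(), specific to this connection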
And, never delete rows. Instead, use the standard business practice of invalidating bad invoices. Imagine being audited.
All that said, there is still a potential problem. After a ROLLBACK or system crash, an id may be "burned". Also things like INSERT IGNORE allocate the id before checking to see whether it will be needed.
If you can live with the caveats, use AUTO_INCREMENT.
If not, then create a 1-row, 2-column table to simulate a sequence number generator: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#sequence
Or use MariaDB's SEQUENCE
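For reference, a minimal MariaDB (10.3+) sequence sketch; the sequence name here is chosen only for illustration:
CREATE SEQUENCE invoice_seq;
SELECT NEXT VALUE FOR invoice_seq; -- atomically issues the next number, safe under concurrency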
Both approaches do work, but each has its own demerits in high-traffic situations.
The first approach runs 3 queries for every invoice you create, putting extra load on your server.
The second approach can lead to duplicates in events where two invoices are generated with very little time difference (such that the SELECT query returns the same max number for both invoices).
Both the approaches may lead to problems in high traffic conditions.
Two solutions to the problems are listed below:
Use generated columns: MySQL supports generated columns, which are basically derived from other column values for each row. Refer to this.
Calculate invoice number on the fly: Since you're using the primary key as part of the invoice, let the DB handle generating unique primary keys, and then generate invoice numbers on the fly in your business logic using the id for each invoice.
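A rough sketch of that second idea (the format string and variable names are assumptions, not anything from the question):
// Let AUTO_INCREMENT produce the unique primary key, then derive a display number from it.
$invoiceNumber = sprintf('INV-%06d', $invoiceId); // e.g. id 42 => "INV-000042"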
I want to run the update query only if the row exists (and was inserted). I have tried several different things, but this could be a problem with how I am looping this. The insert works OK and creates the record, and the update should take the existing value and add to it each time (10 exists + 15 added, 25 exists + 15 added, 40 exists...). I tried this in the loop, but it ran for every item in the list and produced a huge number each time. Also, the page is run each time a link is clicked, so the user exits and comes back.
while($store = $SQL->fetch_array($res_sh))
{
$pm_row = $SQL->query("SELECT * FROM `wishlist` WHERE shopping_id='".$store['id']."'");
$myprice = $store['shprice'];
$sql1 = "insert into posted (uid,price) Select '$uid','$myprice'
FROM posted WHERE NOT EXISTS (select * from `posted` WHERE `uid` = '$namearray[id]') LIMIT 1";
$query = mysqli_query($connection,$sql1);
}
$sql2 = "UPDATE posted SET `price` = price + '$myprice', WHERE shopping_id='".$_GET['id']."'";
$query = mysqli_query($connection,$sql2);
Utilizing mysqli_affected_rows on the insert query, verifying that it managed to insert, you can create a conditional for the update query.
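For example (a sketch only, using the $connection handle and the queries from the question):
$query = mysqli_query($connection, $sql1);
if (mysqli_affected_rows($connection) > 0) {
    // the insert created a row, so it is safe to run the update
    mysqli_query($connection, $sql2);
}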
However, if you're running an update immediately after an insert, one is led to believe it could be accomplished in the same go. In this case, with no context, you could just multiply $myprice by 2 before inserting - though you may want to look into whether you can avoid doing this.
Additionally, but somewhat more complex, you could utilize SQL Transactions for this, and make sure you are exactly referencing the row you would want to update. If the insert failed, your update would not happen.
Granted, if you referenced the inserted row perfectly for your update, then the update simply won't happen when the insert fails. For example, having a primary, auto-increment key on these rows, use mysqli_insert_id to get the last inserted ID, and update the row with that ID. But this methodology can break in a high-volume system, or in a random race event, which leads us right back to single queries or transaction utilization.
I want to get the last balance and update some transactions of user xxx from the backend.
Unfortunately, at the same time, xxx is also doing a transaction from the frontend, so when I process my query, xxx is processing the same query too, and it gets the same last balance.
here is my script.
assume : xxx last balance is 10000
$transaction = 1000;
$getData = mysqli_fetch_array(mysqli_query($conn,"select balance from tableA where user='xxx'"));
$balance = $getData["balance"] - $transaction; //10000 - 1000 = 9000
mysqli_query($conn,"update tableA set balance='".$balance."' where user='xxx'");
at the same time user xxx do transaction from frontend..
$transaction = 500;
$getData = mysqli_fetch_array(mysqli_query($conn,"select balance from tableA where user='xxx'"));
$balance = $getData["balance"] - $transaction; //10000-500 it should be 9000-500
mysqli_query($conn,"update tableA set balance='".$balance."' where user='xxx'");
How can I make my query finish first, so that user xxx's query is processed afterwards?
You can lock the table TableA using the MySQL LOCK TABLES command.
Here's the logic flow :
LOCK TABLES "TableA" WRITE;
Execute your first query
Then
UNLOCK TABLES;
See:
http://dev.mysql.com/doc/refman/5.5/en/lock-tables.html
This is one of the available approaches.
You have to use the InnoDB engine for your table. InnoDB supports row locks, so you won't need to lock the whole table to UPDATE just one row related to a given user.
(A table lock will prevent other INSERT/UPDATE/DELETE operations from being executed, resulting in them having to wait for the table LOCK to be released.)
In InnoDB you can achieve a ROW LOCK when you are executing a SELECT query by using FOR UPDATE
(but for this you have to use a transaction to hold the LOCK). When you do SELECT ... FOR UPDATE
in a transaction, MySQL locks the given row you are selecting until the transaction is committed.
And let's say you make a SELECT ... FOR UPDATE query in your backend for user entry XXX, and at the same time the frontend makes the same query for the same XXX.
The first query that was executed (from the backend) will lock the entry in the DB, and the second query will wait for the first one to complete,
which may result in some delay for the frontend request.
But for this scenario to work, you have to put both the frontend and backend queries
in transactions, and both SELECT queries must have FOR UPDATE at the end.
So your code will look like this:
$transaction = 1000;
mysqli_begin_transaction($conn);
$getData = mysqli_fetch_array(mysqli_query($conn,"SELECT balance FROM tableA WHERE user='xxx' FOR UPDATE"));
$balance = $getData["balance"] - $transaction; //10000 - 1000 = 9000
mysqli_query($conn,"UPDATE tableA SET balance='".$balance."' WHERE user='xxx'");
mysqli_commit($conn);
If this is your backend code, the frontend code should look very similar - having begin/commit transaction + FOR UPDATE.
One of the best things about FOR UPDATE is this: if in a given scenario you need a query to LOCK some row and do some calculations with that data,
while at the same time other queries select the same row and do NOT need the most recent data in it,
then those other queries can simply run with no transaction and no FOR UPDATE at the end. So you will have the LOCKED row and other normal SELECTs reading from it (of course, they will read the old info stored before the LOCK started).
Use the InnoDB engine and transactions to make it ACID (https://en.wikipedia.org/wiki/ACID):
mysqli_begin_transaction($conn);
...
mysqli_commit($conn);
In addition, why don't you use a single query to apply the transaction to the balance directly:
mysqli_query($conn,"update tableA set balance = balance - '".$transaction."' where user='xxx'");
There are basically two ways you can go about this:
By locking the table.
By using transactions.
The most common one in this situation is using transactions, to make sure all of the operations you do are atomic. Meaning that if one step fails, everything gets rolled back to before the changes started.
Normally one would also do the operation itself in the query, for something as simple as this, as database engines are more than capable of doing simple calculations. In this situation you also want to check that the user actually has enough credit on his account, which means you need to read the balance back.
I'd just move the check to after you've subtracted the amount, just to be on the safe side (protection against race conditions etc.).
A quick example to get you started with:
$conn = new mysqli();
/**
* Updates the user's credit with the amount specified.
*
* Returns false if the resulting amount is less than 0.
* Exceptions are thrown in case of SQL errors.
*
* @param mysqli $conn
* @param int $userID
* @param int $amount
* @throws Exception
* @return boolean
*/
function update_credit (mysqli $conn, $userID, $amount) {
// Using transaction so that we can roll back in case of errors.
$conn->query('BEGIN');
// Update the balance of the user with the amount specified.
$stmt = $conn->prepare('UPDATE `table` SET `balance` = `balance` + ? WHERE `user` = ?');
$stmt->bind_param ('di', $amount, $userID);
// If query fails, then roll back and return/throw an error condition.
if (!$stmt->execute ()) {
$conn->query ('ROLLBACK');
throw new Exception ('Could not perform query!');
}
// We need the updated balance to check if the user has a positive credit counter now.
$stmt = $conn->prepare ('SELECT `balance` FROM `table` WHERE `user` = ?');
$stmt->bind_param ('i', $userID);
// Same as last time.
if (!$stmt->execute ()) {
$conn->query ('ROLLBACK');
throw new Exception ('Could not perform query!');
}
$stmt->bind_result($amount);
$stmt->fetch();
// We need to inform the user if he doesn't have enough credits.
if ($amount < 0) {
$conn->query ('ROLLBACK');
return false;
}
// Everything is good at this point.
$conn->query ('COMMIT');
return true;
}
Maybe your problem is just the way you store the balance. Why do you put it in a field? You lose all the history of the transactions that way.
Create a table: transactions_history. Then, for each transaction, do an INSERT query, passing the user, the transaction value, and the operation (deposit or withdrawal).
Then, to show your user his current balance, just do a SELECT over all his transaction history, applying the operations correctly; in the end he will see the actual correct balance. You also prevent the error from doing 2 UPDATE queries at the same time (although "same time" is not as common as we may think).
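A minimal sketch of that idea (table and column names are illustrative, not from the question):
CREATE TABLE transactions_history (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  user VARCHAR(64) NOT NULL,
  amount DECIMAL(10,2) NOT NULL, -- positive = deposit, negative = withdrawal
  created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
  KEY idx_user (user)
);
-- record a withdrawal of 1000 for user xxx
INSERT INTO transactions_history (user, amount) VALUES ('xxx', -1000);
-- the current balance is just the sum of the history
SELECT COALESCE(SUM(amount), 0) AS balance FROM transactions_history WHERE user = 'xxx';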
You can use a transaction like this.
$balance is the amount you want to subtract. If the query performs well, it will show the updated balance; otherwise it will be rolled back to the initial position, and the exception will show you the error of the failure.
try {
$db->beginTransaction();
$db->query("update tableA set balance = balance - " . $balance . " where user='xxx'");
$db->commit();
} catch (Exception $e) {
$db->rollback();
}
In my application I want to implement this MySQL command as a single statement, so I can get the result in one shot without any other programming outside the database (such as PHP).
The action I want to implement:
check user money
IF user has money then
decrease money from himself
AND
increase money of other user
RETURN result
ELSE
RETURN result as false
This command is my attempt, but it is not correct:
SELECT *, (case when (money >= 200)
THEN
if(
(update money_repositories set money = money-200 where userId = 1)
AND
(update money_repositories set money = money+200 where userId = 34)
) as state
ELSE
false
END)
as state from money_repositories where userId = 1;
How can I fix this command? Thank you very much.
What we have here is a financial transaction. It would be horrible if the money were deducted from the first user and not the second. Is it a coincidence, then, that MySQL has something called a transaction?
You cannot have an UPDATE inside a SELECT. You need two different UPDATE statements here: the first to deduct from user 1, the second to credit user 34's account. Transactions ensure that both operations succeed together, or the first query is rolled back, preserving user 1's money.
The other aspect of transactions ensures that another thread does not make a similar modification, changing the balance between the two update queries.
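A sketch of that transaction (money_repositories, the userIds, and the amount come from the question; the money >= 200 guard and the affected-row check are assumptions about how you'd wire it up):
START TRANSACTION;
-- deduct from user 1 only if they can afford it
UPDATE money_repositories SET money = money - 200 WHERE userId = 1 AND money >= 200;
-- in application code: if the affected-row count here is 0, ROLLBACK and report failure
UPDATE money_repositories SET money = money + 200 WHERE userId = 34;
COMMIT;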
I have a website that has user ranking as a central part, but the user count has grown to over 50,000 and it is putting a strain on the server to loop through all of those to update the rank every 5 minutes. Is there a better method that can be used to easily update the ranks at least every 5 minutes? It doesn't have to be with php, it could be something that is run like a perl script or something if something like that would be able to do the job better (though I'm not sure why that would be, just leaving my options open here).
This is what I currently do to update ranks:
$get_users = mysql_query("SELECT id FROM users WHERE status = '1' ORDER BY month_score DESC");
$i=0;
while ($a = mysql_fetch_array($get_users)) {
$i++;
mysql_query("UPDATE users SET month_rank = '$i' WHERE id = '$a[id]'");
}
UPDATE (solution):
Here is the solution code, which takes less than half a second to execute and update all 50,000 rows (make rank the primary key, as suggested by Tom Haigh).
mysql_query("TRUNCATE TABLE userRanks");
mysql_query("INSERT INTO userRanks (userid) SELECT id FROM users WHERE status = '1' ORDER BY month_score DESC");
mysql_query("UPDATE users, userRanks SET users.month_rank = userRanks.rank WHERE users.id = userRanks.id");
Make userRanks.rank an autoincrementing primary key. If you then insert userids into userRanks in descending rank order it will increment the rank column on every row. This should be extremely fast.
TRUNCATE TABLE userRanks;
INSERT INTO userRanks (userid) SELECT id FROM users WHERE status = '1' ORDER BY month_score DESC;
UPDATE users, userRanks SET users.month_rank = userRanks.rank WHERE users.id = userRanks.userid;
My first question would be: why are you doing this polling-type operation every five minutes?
Surely rank changes will be in response to some event and you can localize the changes to a few rows in the database at the time when that event occurs. I'm pretty certain the entire user base of 50,000 doesn't change rankings every five minutes.
I'm assuming the "status = '1'" indicates that a user's rank has changed so, rather than setting this when the user triggers a rank change, why don't you calculate the rank at that time?
That would seem to be a better solution as the cost of re-ranking would be amortized over all the operations.
Now I may have misunderstood what you meant by ranking in which case feel free to set me straight.
A simple alternative for bulk update might be something like:
set @rnk = 0;
update users
set month_rank = (@rnk := @rnk + 1)
order by month_score DESC
This code uses a user variable (@rnk) that is incremented on each update. Because the update is done over the ordered list of rows, the month_rank column will be set to the incremented value for each row.
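As an aside, on MySQL 8.0+ the same assignment can be done with a window function instead of a user variable; a sketch, not from the original answer (MySQL materializes the derived table, so updating users here is allowed):
UPDATE users u
JOIN (
    SELECT id, ROW_NUMBER() OVER (ORDER BY month_score DESC) AS new_rank
    FROM users
    WHERE status = '1'
) ranked ON ranked.id = u.id
SET u.month_rank = ranked.new_rank;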
Updating the users table row by row will be a time consuming task. It would be better if you could re-organise your query so that row by row updates are not required.
I'm not 100% sure of the syntax (as I've never used MySQL before) but here's a sample of the syntax used in MS SQL Server 2000
DECLARE @tmp TABLE
(
[MonthRank] [INT] IDENTITY(1,1) NOT NULL,
[UserId] [INT] NOT NULL
)
INSERT INTO @tmp ([UserId])
SELECT [id]
FROM [users]
WHERE [status] = '1'
ORDER BY [month_score] DESC
UPDATE users
SET month_rank = [tmp].[MonthRank]
FROM @tmp AS [tmp], [users]
WHERE [users].[Id] = [tmp].[UserId]
In MS SQL Server 2005/2008 you would probably use a CTE.
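For instance, a sketch of the CTE form for SQL Server 2005+ (updating through the CTE; column names taken from the example above):
WITH ranked AS
(
    SELECT [month_rank],
           ROW_NUMBER() OVER (ORDER BY [month_score] DESC) AS [NewRank]
    FROM [users]
    WHERE [status] = '1'
)
UPDATE ranked
SET [month_rank] = [NewRank];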
Any time you have a loop of any significant size that executes queries inside, you've got a very likely antipattern. We could look at the schema and processing requirement with more info, and see if we can do the whole job without a loop.
How much time does it spend calculating the scores, compared with assigning the rankings?
Your problem can be handled in a number of ways, and honestly more details from your server may point you in a totally different direction. But doing it that way, you are causing 50,000 little locks on a heavily read table. You might get better performance with a staging table and then some sort of transition. Inserts into a table no one is reading from are probably going to be better.
Consider
mysql_query("delete from month_rank_staging;");
while(bla){
mysql_query("insert into month_rank_staging values ('$id', '$i');");
}
mysql_query("update month_rank_staging src, users set users.month_rank=src.month_rank where src.id=users.id;");
That'll cause one (bigger) lock on the table, but might improve your situation. But again, that may be way off base depending on the true source of your performance problem. You should probably look deeper at your logs, MySQL config, database connections, etc.
Possibly you could use shards by time or other category. But read this carefully before...
You can split up the rank processing and the updating execution. So, run through all the data and process the query. Add each update statement to a cache. When the processing is complete, run the updates. You should have the WHERE portion of the UPDATE reference a primary key set to auto_increment, as mentioned in other posts. This will prevent the updates from interfering with the performance of the processing. It will also prevent users later in the processing queue from wrongfully taking advantage of the values from the users who were processed before them (if one user's rank affects that of another). It also prevents the database from clearing out its table caches from the SELECTS your processing code does.
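A rough sketch of that "collect the updates, then apply them in one statement" idea (assumes $ranks is an array of user id => rank built during processing; the old mysql_* API is used only to match the code above):
$cases = array();
$ids   = array();
foreach ($ranks as $id => $rank) {
    $cases[] = 'WHEN ' . (int)$id . ' THEN ' . (int)$rank;
    $ids[]   = (int)$id;
}
// one UPDATE touches every processed row instead of 50,000 separate queries
mysql_query("UPDATE users SET month_rank = CASE id " . implode(' ', $cases)
    . " END WHERE id IN (" . implode(',', $ids) . ")");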