I want to get the last balance and update a transaction for user xxx from the backend.
Unfortunately, at the same time, xxx also performs a transaction from the frontend, so while my query is being processed, xxx is running the same kind of query, and both read the same last balance.
Here is my script.
Assume xxx's last balance is 10000.
$transaction = 1000;
$getData = mysqli_fetch_array(mysqli_query($conn,"select balance from tableA where user='xxx'"));
$balance = $getData["balance"] - $transaction; //10000 - 1000 = 9000
mysqli_query($conn,"update tableA set balance='".$balance."' where user='xxx'");
At the same time, user xxx performs a transaction from the frontend:
$transaction = 500;
$getData = mysqli_fetch_array(mysqli_query($conn,"select balance from tableA where user='xxx'"));
$balance = $getData["balance"] - $transaction; // 10000 - 500, but it should be 9000 - 500
mysqli_query($conn,"update tableA set balance='".$balance."' where user='xxx'");
How can I make sure my query finishes first, and only then let user xxx's query be processed?
You can lock the table tableA using the MySQL LOCK TABLES command.
Here's the logic flow:
LOCK TABLES tableA WRITE;
Execute your first query.
Then:
UNLOCK TABLES;
See:
http://dev.mysql.com/doc/refman/5.5/en/lock-tables.html
This is one of the available approaches.
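For reference, a minimal sketch of that flow with mysqli, reusing the $conn, table and column names from the question (the details are assumptions, not a drop-in implementation):

mysqli_query($conn, "LOCK TABLES tableA WRITE");   // nobody else can read or write tableA now

$getData = mysqli_fetch_array(mysqli_query($conn, "SELECT balance FROM tableA WHERE user='xxx'"));
$balance = $getData["balance"] - $transaction;
mysqli_query($conn, "UPDATE tableA SET balance='" . $balance . "' WHERE user='xxx'");

mysqli_query($conn, "UNLOCK TABLES");              // release the lock as soon as possible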
You have to use the InnoDB engine for your table. InnoDB supports row locks, so you won't need to lock the whole table just to UPDATE one row related to a given user.
(A table lock would prevent other INSERT/UPDATE/DELETE operations from being executed; they would have to wait for the table lock to be released.)
In InnoDB you can achieve a row lock when executing a SELECT query by appending FOR UPDATE
(but you have to run it inside a transaction for the lock to be held). When you do SELECT ... FOR UPDATE
in a transaction, MySQL locks the selected row until the transaction is committed.
Let's say you issue the SELECT ... FOR UPDATE query in your backend for user entry xxx and at the same time the frontend issues the same query for the same xxx.
The first query to execute (from the backend) will lock the row in the DB and the second query will wait for the first one to complete,
which may result in some delay for the frontend request to complete.
But for this scenario to work, you have to put both the frontend and backend queries
in a transaction, and both SELECT queries must have FOR UPDATE at the end.
So your code will look like this:
$transaction = 1000;
mysqli_begin_transaction($conn);
$getData = mysqli_fetch_array(mysqli_query($conn,"SELECT balance FROM tableA WHERE user='xxx' FOR UPDATE"));
$balance = $getData["balance"] - $transaction; //10000 - 1000 = 9000
mysqli_query($conn,"UPDATE tableA SET balance='".$balance."' WHERE user='xxx'");
mysqli_commit($conn);
If this is your backend code, the frontend code should look very similar - having begin/commit transaction plus FOR UPDATE.
One of the best things about FOR UPDATE is this: if you need a query to lock some row and do calculations with its data,
while at the same time other queries select the same row and do NOT need the most recent data in it,
then those other queries can simply run with no transaction and no FOR UPDATE at the end. So you will have the locked row plus other normal SELECTs reading from it (of course they will read the old data, as stored before the lock started).
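For example (just a sketch against the same tableA), while the backend transaction still holds the FOR UPDATE lock:

-- session A (backend), inside a transaction: this locks the row
SELECT balance FROM tableA WHERE user='xxx' FOR UPDATE;

-- session B, a plain SELECT with no transaction and no FOR UPDATE:
-- it returns immediately with the last committed value instead of waiting for A's lock
SELECT balance FROM tableA WHERE user='xxx';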
Use the InnoDB engine and a transaction to make the operation ACID (https://en.wikipedia.org/wiki/ACID):
mysqli_begin_transaction($conn);
...
mysqli_commit($conn);
In addition, why don't you do the balance arithmetic directly in the query:
mysqli_query($conn,"update tableA set balance= balance + '".$transaction."' where user='xxx'");
There are basically two ways you can go about this:
By locking the table.
By using transactions.
The most common one in this situation is using transactions, to make sure all of the operations you do are atomic. Meaning that if one step fails, everything gets rolled back to before the changes started.
Normally one would also do the operation itself in the query, for something as simple as this, as database engines are more than capable of doing simple calculations. In this situation you also want to check that the user actually has enough credit on their account, which means you need to verify the resulting balance.
I'd just move that check to after you've subtracted the amount, to be on the safe side (protection against race conditions etc.).
A quick example to get you started:
$conn = new mysqli();
/**
 * Updates the user's credit with the amount specified.
 *
 * Returns false if the resulting amount is less than 0.
 * Exceptions are thrown in case of SQL errors.
 *
 * @param mysqli $conn
 * @param int $userID
 * @param int $amount
 * @throws Exception
 * @return boolean
 */
function update_credit (mysqli $conn, $userID, $amount) {
    // Using a transaction so that we can roll back in case of errors.
    $conn->query('BEGIN');

    // Update the balance of the user with the amount specified.
    $stmt = $conn->prepare('UPDATE `table` SET `balance` = `balance` + ? WHERE `user` = ?');
    $stmt->bind_param('di', $amount, $userID);

    // If the query fails, roll back and return/throw an error condition.
    if (!$stmt->execute()) {
        $conn->query('ROLLBACK');
        throw new Exception('Could not perform query!');
    }

    // We need the updated balance to check if the user has a positive credit counter now.
    $stmt = $conn->prepare('SELECT `balance` FROM `table` WHERE `user` = ?');
    $stmt->bind_param('i', $userID);

    // Same as last time.
    if (!$stmt->execute()) {
        $conn->query('ROLLBACK');
        throw new Exception('Could not perform query!');
    }

    $stmt->bind_result($amount);
    $stmt->fetch();

    // We need to inform the user if he doesn't have enough credits.
    if ($amount < 0) {
        $conn->query('ROLLBACK');
        return false;
    }

    // Everything is good at this point.
    $conn->query('COMMIT');
    return true;
}
Maybe your problem is just the way you store the balance. Why do you put it in a single field? You lose all the history of the transactions that way.
Create a table transactions_history; then for each transaction, do an INSERT query, passing the user, the transaction value and the operation (deposit or withdraw).
Then, to show your user his current balance, just do a SELECT over all his transaction history, applying the operations in order; in the end he will see the actual correct balance. You also prevent the error caused by two UPDATE queries running at the same time (although "same time" is not as common as we may think).
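A rough sketch of that idea (table and column names here are only examples):

CREATE TABLE transactions_history (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    user VARCHAR(50) NOT NULL,
    operation ENUM('deposit','withdraw') NOT NULL,
    amount DECIMAL(12,2) NOT NULL,      -- store withdrawals as negative values
    created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
    KEY idx_user (user)
);

-- every transaction is an INSERT, never an UPDATE of a running total
INSERT INTO transactions_history (user, operation, amount) VALUES ('xxx', 'withdraw', -1000);

-- the current balance is simply the sum of the user's history
SELECT COALESCE(SUM(amount), 0) AS balance FROM transactions_history WHERE user = 'xxx';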
You can use a transaction like this.
$balance is the balance you want to subtract. If the query performs well, it will show the updated balance; otherwise it will be rolled back to the initial position and the exception will show you the error of the failure.
try {
    $db->beginTransaction();
    $db->query("update tableA set balance = balance - " . $balance . " where user='xxx'");
    $db->commit();
} catch (Exception $e) {
    $db->rollback();
}
I want to only run the update query if the row exists (and was inserted). I tried several different things, but this could be a problem with how I am looping this. The insert works OK and creates the record, and the update should take the existing value and add to it each time (10 exists + 15 added, 25 exists + 15 added, 40 exists...). I tried this in the loop, but it ran for every item in a list and produced a huge number each time. Also, the page is run each time a link is clicked, so the user exits and comes back.
while($store = $SQL->fetch_array($res_sh))
{
    $pm_row = $SQL->query("SELECT * FROM `wishlist` WHERE shopping_id='".$store['id']."'");
    $myprice = $store['shprice'];
    $sql1 = "insert into posted (uid,price) Select '$uid','$myprice'
    FROM posted WHERE NOT EXISTS (select * from `posted` WHERE `uid` = '$namearray[id]') LIMIT 1";
    $query = mysqli_query($connection,$sql1);
}

$sql2 = "UPDATE posted SET `price` = price + '$myprice' WHERE shopping_id='".$_GET['id']."'";
$query = mysqli_query($connection,$sql2);
Utilizing mysqli_affected_rows on the insert query, verifying that it managed to insert, you can create a conditional for the update query.
However, if you're running an update immediately after an insert, one is led to believe it could be accomplished in the same go. In this case, with no further context, you could just multiply $myprice by 2 before inserting - although you may want to look into whether you can avoid doing this.
Additionally, but somewhat more complex, you could utilize SQL Transactions for this, and make sure you are exactly referencing the row you would want to update. If the insert failed, your update would not happen.
Granted, if you referenced the inserted row perfectly for your update then the update will not happen anyway. For example, having a primary, auto-increment key on these rows, use mysqli_insert_id to get the last inserted ID, and updating the row with that ID. But then this methodology can break in a high volume system, or just a random race event, which leads us right back to single queries or transaction utilization.
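As a rough sketch of the affected-rows idea, reusing $connection, $sql1 and the column names from the question (treat the details as assumptions):

// run the conditional insert from the question first
mysqli_query($connection, $sql1);

// mysqli_affected_rows() is 1 only when the INSERT ... SELECT actually created a row
if (mysqli_affected_rows($connection) > 0) {
    $sql2 = "UPDATE posted SET price = price + '$myprice' WHERE uid = '$uid'";
    mysqli_query($connection, $sql2);
}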
I have a game website and I want to update the users' money; however, if I use two PCs at the exact same time, this code will execute twice and the user will be left with negative money. How can I stop this from happening? It's driving me crazy.
$db = getDB();
$sql = "UPDATE users SET money = money- :money WHERE username=:user";
$stmt = $db->prepare($sql);
$stmt->bindParam(':money', $amount, PDO::PARAM_STR);
$stmt->bindParam(':user', $user, PDO::PARAM_STR);
$stmt->execute();
Any help is appreciated.
Echoing the comment from @GarryWelding: the database update isn't an appropriate place in the code to handle the use case that is described. Locking a row in the user table isn't the right fix.
Back up a step. It sounds like we want some fine-grained control over user purchases. Seems like we need a place to store a record of user purchases, and then we can check that.
Without diving into a database design, I'm going to throw out some ideas here...
In addition to the "user" entity
user
username
account_balance
Seems like we are interested in some information about purchases a user has made. I'm throwing out some ideas about the information/attributes that might be of interest to us, not making any claim that these are all needed for your use case:
user_purchase
username that made the purchase
items/services purchased
datetime the purchase was originated
money_amount of the purchase
computer/session the purchase was made from
status (completed, rejected, ...)
reason (e.g. why a purchase was rejected: "insufficient funds", "duplicate item")
We don't want to try to track all of that information in the "account balance" of a user, especially since there can be multiple purchases from a user.
If our use case is much simpler than that, and we only want to keep track of the most recent purchase by a user, then we could record that in the user entity.
user
username
account_balance ("money")
most_recent_purchase
_datetime
_item_service
_amount ("money")
_from_computer/session
And then with each purchase, we could record the new account_balance, and overwrite the previous "most recent purchase" information
If all we care about is preventing multiple purchases "at the same time", we need to define that... does that mean within the same exact microsecond? within 10 milliseconds?
Do we only want to prevent "duplicate" purchases from different computers/sessions? What about two duplicate requests on the same session?
This is not how I would solve the problem. But to answer the question you asked, let's go with a simple use case - "prevent two purchases within a millisecond of each other" - and do it in an UPDATE of the user table.
Given a table definition like this:
user
username datatype NOT NULL PRIMARY KEY
account_balance datatype NOT NULL
most_recent_purchase_dt DATETIME(6) NOT NULL COMMENT 'most recent purchase dt'
with the datetime (down to the microsecond) of the most recent purchase recorded in the user table (using the time returned by the database)
UPDATE user u
SET u.most_recent_purchase_dt = NOW(6)
, u.account_balance = u.account_balance - :money1
WHERE u.username = :user
AND u.account_balance >= :money2
AND NOT ( u.most_recent_purchase_dt >= NOW(6) + INTERVAL -1000 MICROSECOND
AND u.most_recent_purchase_dt < NOW(6) + INTERVAL +1001 MICROSECOND
)
We can then detect the number of rows affected by the statement.
If we get zero rows affected, then either :user wasn't found, or :money2 was greater than the account balance, or most_recent_purchase_dt was within a range of +/- 1 millisecond of now. We can't tell which.
If more than zero rows are affected, then we know that an update occurred.
EDIT
To emphasize some key points which might have been overlooked...
The example SQL expects support for fractional seconds, which requires MySQL 5.7 or later. In 5.6 and earlier, DATETIME resolution was only down to the second. (Note that the column definition in the example table and the SQL specify resolution down to the microsecond: DATETIME(6) and NOW(6).)
The example SQL statement is expecting username to be the PRIMARY KEY or a UNIQUE key in the user table. This is noted (but not highlighted) in the example table definition.
The example SQL statement prevents the update of user for two statements executed within one millisecond of each other. For testing, change that millisecond resolution to a longer interval, for example one minute.
That is, change the two occurrences of 1000 MICROSECOND to 60 SECOND.
A few other notes: use bindValue in place of bindParam (since we're providing values to the statement, not returning values from the statement).
Also make sure PDO is set to throw an exception when an error occurs (if we aren't going to check the return from the PDO functions in the code), so the code isn't putting its (figurative) pinky finger to the corner of our mouth, Dr. Evil style: "I just assume it will all go to plan. What?"
# enable PDO exceptions
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$sql = "
UPDATE user u
SET u.most_recent_purchase_dt = NOW(6)
, u.account_balance = u.account_balance - :money1
WHERE u.username = :user
AND u.account_balance >= :money2
AND NOT ( u.most_recent_purchase_dt >= NOW(6) + INTERVAL -60 SECOND
AND u.most_recent_purchase_dt < NOW(6) + INTERVAL +60 SECOND
)";
$sth = $dbh->prepare($sql);
$sth->bindValue(':money1', $amount, PDO::PARAM_STR);
$sth->bindValue(':money2', $amount, PDO::PARAM_STR);
$sth->bindValue(':user', $user, PDO::PARAM_STR);
$sth->execute();
# check if row was updated, and take appropriate action
$nrows = $sth->rowCount();
if( $nrows > 0 ) {
// row was updated, purchase successful
} else {
// row was not updated, purchase unsuccessful
}
And to emphasize a point I made earlier, "lock the row" is not the right approach to solving the problem. And doing the check the way I demonstrated in the example, doesn't tell us the reason the purchase was unsuccessful (insufficient funds or within specified timeframe of preceding purchase.)
For the negative balance, change your code to:
$sql = "UPDATE users SET money = money- :money WHERE username=:user AND money >= :money";
First idea:
If you're using InnoDB, you can use transactions to provide fine-grained mutual exclusion. Example:
START TRANSACTION;
UPDATE users SET money = money- :money WHERE username=:user;
COMMIT;
If you're using MyISAM, you can use LOCK TABLES to prevent B from accessing the table until A finishes making its changes. Example:
LOCK TABLES users WRITE;
UPDATE users SET money = money- :money WHERE username=:user;
UNLOCK TABLES;
Second idea:
If the update doesn't work, you may delete and insert a new row (if you have an auto-increment id, there won't be duplicates).
I would like to delete a bulk of data. This table currently has approximately 11,207,333 rows.
The data that will be deleted is approximately 300k rows. I have two methods to do this, but I'm unsure which one performs faster.
My first option:
$start_date = "2011-05-01 00:00:00";
$end_date = "2011-05-31 23:59:59";
$sql = "DELETE FROM table WHERE date>='$start_date' and date <='$end_date'";
$mysqli->query($sql);
printf("Affected rows (DELETE): %d\n", $mysqli->affected_rows);
second option:
$query = "SELECT count(*) as count FROM table WHERE date>='$start_date' and date <='$end_date'";
$result = $mysqli->query($query);
$row = $result->fetch_array(MYSQLI_ASSOC);
$total = $row['count'];
if ($total > 0) {
    $query = "SELECT * FROM table WHERE date>='$start_date' and date <='$end_date' LIMIT 0,$total";
    $result = $mysqli->query($query);
    while ($row = $result->fetch_array(MYSQLI_ASSOC)) {
        $table_id = $row['table_id']; // primary key
        $query = "DELETE FROM table where table_id = $table_id";
        $mysqli->query($query);
    }
}
This table's data is displayed for clients to see, and I'm afraid that if the deletion goes wrong it will affect my clients.
I was wondering whether there is any method better than mine.
If you need more info from me, just let me know.
Thank you.
In my opinion, the first option is faster.
The second option contains looping, which I think will be slower because it keeps looping several times looking up your table ids.
If you did not provide the wrong start and end dates, I think you're safe with either option, but option 1 is faster in my opinion.
And yeah, I don't see a bulk deletion in option 2, but I assume you have it in mind, just done row by row with the looping method.
Option one is your best bet.
If you are afraid something will "go wrong", you could protect yourself by backing up the data first, exporting the rows you plan to delete, or implementing a logical delete flag.
Assuming that there is indeed a DELETE query in it, the second method is not only slower, it may break if another connection deletes one of the rows you intend to delete in your while loop before it has had a chance to do it. For it to work, you need to wrap it in a transaction:
$mysqli->query("START TRANSACTION;");
# your series of queries...
$mysqli->query("COMMIT;");
This will allow the correct processing of your queries in isolation from the rest of the events happening in the DB.
At any rate, if you want the first query to be faster, you need to tune your table definition by adding an index on the column used for the deletion, namely `date` (however, recall that this new index may hamper other queries in your app, if there are already several indexes on that table); a sketch of such an index follows the list below.
Without that index, mysql will basically process the query more or less the same way as in method 2, but without:
PHP interpretation,
network communication and
query analysis overhead.
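For example, a sketch of such an index (assuming the column really is named date and the table really is named table):

ALTER TABLE `table` ADD INDEX idx_date (`date`);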
You don't need any SELECTS to make the delete in a loop. Just use LIMIT in your delete query and check if there are affected rows:
$start_date = "2011-05-01 00:00:00";
$end_date = "2011-05-31 23:59:59";
$deletedRecords = 0;
$sql = "DELETE FROM table WHERE date>='$start_date' and date <='$end_date' LIMIT 100";
do {
    $mysqli->query($sql);
    $deletedRecords += $mysqli->affected_rows;
} while ($mysqli->affected_rows > 0);
printf("Affected rows (DELETE): %d\n", $deletedRecords);
Which method is better depends on the storage engine you are using.
If you are using InnoDB, the chunked approach is the recommended way. The reason is that the DELETE statement runs in a transaction (even in auto-commit mode, every SQL statement runs in a transaction in order to be atomic; if it fails in the middle, the whole delete is rolled back and you won't end up with half the data deleted). With one big DELETE you get a long-running transaction and a lot of locked rows for its duration, which will block anyone who wants to update that data (it can block inserts if there are unique indexes involved), and reads will be done via the rollback log. In other words, for InnoDB, large deletes are faster if performed in chunks.
In MyISAM, however, the delete locks the entire table. If you do it in lots of small chunks, you will have too many LOCK/UNLOCK commands executed, which will actually slow the process. I would still do it in a loop for MyISAM, to give other processes a chance to use the table, but with larger chunks than for InnoDB. I would never do it row by row for a MyISAM table because of the LOCK/UNLOCK overhead.
I have a process that selects the next item to process from a MySQL InnoDB table based on some criteria. When a row has been selected as the next to process, its processing field is set to 1 while the processing happens outside the database. I do this so that many processors can run at once, and they won't process the same row.
If I use transactions to execute the following queries, are they guaranteed to be executed together (e.g. without any other MySQL connection executing queries in between)? If they are not, then multiple processors could get the same id from the SELECT query and the processing would be redundant.
Pseudo Code Example
Prepare Transaction...
$id = SELECT id
FROM companies
WHERE processing = 0
ORDER BY last_crawled ASC
LIMIT 1;
UPDATE companies
SET processing = 1
WHERE id = $id;
Execute Transaction
I've been struggling to accomplish this fast enough using a single UPDATE query (see this question). Assume that is not an option for the purposes of this question.
You still have a possibility of a race condition, even though you execute the SELECT followed by the UPDATE in a single transaction. SELECT by itself does not lock anything, so you could have two concurrent sessions both SELECT and get the same id. Then both would attempt to UPDATE, but only one would "win" - the other would have to wait.
To get around this, use the SELECT...FOR UPDATE clause, which creates a lock on the rows it returns.
Prepare Transaction...
$id = SELECT id
FROM companies
WHERE processing = 0
ORDER BY last_crawled ASC
LIMIT 1
FOR UPDATE;
This means that the lock is created as the row is selected. This is atomic, which means no other session can sneak in and get a lock on the same row. If they try, their transaction will block on the SELECT.
UPDATE companies
SET processing = 1
WHERE id = $id;
Commit Transaction
I changed your "execute transaction" pseudocode to "commit transaction." Statements within a transaction execute immediately, which means they create locks and so on. Then when you COMMIT, the locks are released and any changes are committed. Committed means they can't be rolled back, and they are visible to other transactions.
Here's a quick example of using mysqli to accomplish this:
$mysqli = new mysqli(...);
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); /* throw exceptions on errors */

$mysqli->begin_transaction();

$sql = "SELECT id
        FROM companies
        WHERE processing = 0
        ORDER BY last_crawled ASC
        LIMIT 1
        FOR UPDATE";
$result = $mysqli->query($sql);
while ($row = $result->fetch_array(MYSQLI_ASSOC)) {
    $id = $row["id"];
}

$sql = "UPDATE companies
        SET processing = 1
        WHERE id = ?";
$stmt = $mysqli->prepare($sql);
$stmt->bind_param("i", $id);
$stmt->execute();

$mysqli->commit();
Re your comment:
I tried an experiment: I created a table companies, filled it with 512 rows, then started a transaction and issued the SELECT...FOR UPDATE statement above. I did this in the mysql client; there was no need to write PHP code.
Then, before committing my transaction, I examined the locks reported:
mysql> show engine innodb status\G
=====================================
2013-12-04 16:01:28 7f6a00117700 INNODB MONITOR OUTPUT
=====================================
...
---TRANSACTION 30012, ACTIVE 2 sec
2 lock struct(s), heap size 376, 513 row lock(s)
...
Despite using LIMIT 1, this report shows the transaction appears to lock every row in the table (plus one, for some reason).
So you're right, if you have hundreds of requests per second, it's likely that the transactions are queuing up. You should be able to verify this by watching SHOW PROCESSLIST and seeing many processes stuck in a state of Locked (i.e. waiting for access to rows that another thread has locked).
If you have hundreds of requests per second, you may have outgrown the ability for an RDBMS to function as a fake message queue. This isn't what an RDBMS is good at.
There are a variety of scalable message queue frameworks with good integration with PHP, like RabbitMQ, STOMP, AMQP, Gearman, Beanstalk.
Check out http://www.slideshare.net/mwillbanks/message-queues-a-primer-international-php-conference-fall-2012
That depends. There are (in general) different isolation levels in SQL. In MySQL you can change which one to use with SET TRANSACTION ISOLATION LEVEL.
While SERIALIZABLE (which is the strictest one) still doesn't imply that no other actions are executed in between the ones from your transaction, it DOES make sure that there is no difference between simultaneous transactions being executed one after another or interleaved; if it would make a difference, one transaction is rolled back and executed later.
Note however that the stricter the isolation is, the more locking and rollbacks have to be done. So make sure you really need that before using it.
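For example, a sketch of raising the isolation level for the next transaction on a connection, reusing the companies table from the question:

-- applies only to the next transaction started on this connection
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
SELECT id FROM companies WHERE processing = 0 ORDER BY last_crawled ASC LIMIT 1;
UPDATE companies SET processing = 1 WHERE id = ...;  -- id returned by the SELECT
COMMIT;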
I need to use a batchId in one of my projects; one or more rows can share a single batchId. So when I insert a bunch of 1000 rows from a single user, I will give these 1000 rows a single batchId, which is the next auto-increment batchId.
Currently I maintain a separate table, unique_ids, for storing the last batchId.
Whenever I need to insert a batch of rows into the table, I increment the batchId in the unique_ids table by 1 and use it for the batch insert.
update unique_ids set nextId = nextId + 1 where `key` = 'batchId';
select nextId from unique_ids where `key` = 'batchId';
I call a function which fires the above two queries and returns the nextId for the batch (batchId).
Here is my PHP class and function call for the same. I am using ADODB; you can ignore the ADODB-related code.
class UniqueId
{
    static public $db;

    public function __construct()
    {
    }

    static public function getNextId()
    {
        self::$db = getDBInstance();

        $updUniqueIds = "Update unique_ids set nextId = nextId + 1 where `key` = 'batchId'";
        self::$db->EXECUTE($updUniqueIds);

        $selUniqueId = "Select nextId from unique_ids where `key` = 'batchId'";
        $resUniqueId = self::$db->EXECUTE($selUniqueId);

        return $resUniqueId->fields['nextId'];
    }
}
Now whenever I require the next batchId, I just call the line of code below.
$batchId = UniqueId::getNextId();
But the real problem is: when there are hundreds of simultaneous requests in a second, it gives the same batchId to two different batches. That is a serious issue for me, and I need to solve it.
Please suggest what I should do. Can I restrict this class to a single instance so that no simultaneous requests can call this function at the same time, and it never gives a single batchId to two different batches?
Have a look into atomic operations or transactions. They will lock the database and only allow one write query at any given instant.
This might affect your performance, since other users now have to wait for an unlocked database!
I am not sure what sort of support ADODB provides for atomicity, though.
Basic concept is:
Acquire Lock
Read from DB
Write to DB with new ID
Release Lock
If the lock is already acquired, the script will block until it is released again. But this way you are guaranteed that no data hazards occur.
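One way to get that acquire/release pattern in MySQL without locking whole tables is a named application lock via GET_LOCK()/RELEASE_LOCK(). A sketch inside the getNextId() method from the question (the lock name 'batchId_lock' and the 10-second timeout are arbitrary choices):

self::$db->EXECUTE("SELECT GET_LOCK('batchId_lock', 10)");   // acquire lock, waiting up to 10 seconds
self::$db->EXECUTE("Update unique_ids set nextId = nextId + 1 where `key` = 'batchId'");
$resUniqueId = self::$db->EXECUTE("Select nextId from unique_ids where `key` = 'batchId'");
$nextId = $resUniqueId->fields['nextId'];
self::$db->EXECUTE("SELECT RELEASE_LOCK('batchId_lock')");    // release lock
return $nextId;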
Begin tran
Update
Select
Commit
That way the update lock prevents two concurrent runs from pulling the same value.
If you select first, the shared lock will not isolate the two.
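A sketch of that order of operations against the unique_ids table from the question:

START TRANSACTION;
UPDATE unique_ids SET nextId = nextId + 1 WHERE `key` = 'batchId';  -- takes an exclusive row lock
SELECT nextId FROM unique_ids WHERE `key` = 'batchId';              -- reads the value just written
COMMIT;                                                             -- the lock is released here

A second connection running the same sequence blocks on the UPDATE until the first one commits, so the two can never read the same nextId.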