Fast refreshing a php get with multiple queries - php

Things to note before reading:
I am aware the code isn't that brilliant. Please don't comment on my old work ;)
I am aware that mysql_query is deprecated. Updating it isn't within the scope of this question.
Question background
I got an interesting bug report via an old website today which has caused me a huge amount of concern, as I never expected this bug to occur.
The page is simple. On the original load, a table is displayed after looping through a MySQL query to the database. Each of those rows displays a link of the form:
url.com/items.php?use=XXX&confirm=0
The XXX relates to the ID of the item in the items table in the database. The confirm=0 branch runs the following code:
if(isset($_GET['use'])){
    $id = @mysql_real_escape_string($_GET['use']);
    if(isset($_GET['confirm'])){
        $confirm = @mysql_real_escape_string($_GET['confirm']);
        if($confirm == 0){
            // show a confirm button of YES / NO for them
            // to click which has 1 for confirm
The user can then click on YES which transfers them to:
url.com/items.php?use=XXX&confirm=1
The code then falls through to the else branch of the code above, which performs the following checks:
if($id < 1){
    echo "<p class='error-message'>An error has occurred.</p>";
    print "<p class='center'><a href='http://www.url.com/items.php'>[Back]</a></p>";
    include("inc/ftr.php");
    exit();
}
if(empty($id)){
    echo "<p class='error-message'>An error has occurred.</p>";
    print "<p class='center'><a href='http://www.url.com/items.php'>[Back]</a></p>";
    include("inc/ftr.php");
    exit();
}
$quantity = 0;
$result = @mysql_query("SELECT * FROM inventory WHERE item_id=$id AND u_id=$user_id");
$num_rows = @mysql_num_rows($result);
$r = @mysql_fetch_array($result);
$quantity = $r['quantity'];
if($num_rows == 0){
    echo "<p class='error-message'>You do not own any of these.</p>";
    print "<p class='center'><a href='http://www.url.com/items.php'>[Back]</a></p>";
    include("inc/ftr.php");
    exit();
}
if($quantity < 1){
    echo "<p class='error-message'>You don't have any of these left!</p>";
    print "<p class='center'><a href='http://www.url.com/items.php'>[Back]</a></p>";
    include("inc/ftr.php");
    exit();
}
$result = @mysql_query("SELECT * FROM items WHERE id=$id");
$r = @mysql_fetch_array($result);
$type = $r['type'];
$item_name = $r['item_name'];
The above performs the relevant checks to make sure the ID exists, then queries the database for the current quantity in the inventory and checks it's at least 1. If it isn't, the page is blocked at that point.
The code after this point removes the quantity of the item from the database and implements the "effect" of the item. Let's just assume an update is performed.
The problem:
The actual problem I am having here is that if a user refreshes the page rapidly, the update query runs over and over while the quantity check never fails, so the check is effectively skipped. An example from today: I had 3 items in my inventory and pressed F5 about 100 times. I managed to get the update query to run 16 times without a single error message displaying. If I then waited a few seconds and pressed F5 again, it displayed an error message saying I didn't have any of those items.
The following solutions are not an option as I don't want to waste time coding:
Create an ajax call to prevent multiple submits before all queries have been processed.
Implementing an MVC structure and redirecting the user to a separate page which prevents multiple submits
If anyone could explain the reason for this bug (with relevant reading material) or even offer a solution to resolve it that would be great! Thanks!

You have a race condition due to the time between querying the database for the stock level, and a subsequent update to reduce it. If you send several requests very quickly then each will receive the same stock level (3 in this case) before the first request has had time to update the stock level.
You need to change your code such that your query & decrement is atomic - i.e. there are no gaps.
One possible solution is to attempt an update, where stock level > 0 and see how many rows are affected.
UPDATE products SET `stocklevel` = `stocklevel` - 1 WHERE `productId` = 'something' AND `stocklevel` > 0
If the number of rows affected is 0, you had no stock. If the number of rows affected is 1 then you had stock. Multiple queries will reduce stock to zero, at which point you should see some error messages.
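To make the pattern concrete, here is a minimal sketch of the atomic check-and-decrement, using PDO with an in-memory SQLite table purely for illustration; the table and column names (`products`, `stocklevel`, `productId`) follow this answer's example, not the asker's real schema:

```php
<?php
// Set up a throwaway table with 3 items in stock (illustration only).
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec("CREATE TABLE products (productId INTEGER PRIMARY KEY, stocklevel INTEGER)");
$pdo->exec("INSERT INTO products (productId, stocklevel) VALUES (1, 3)");

function useItem(PDO $pdo, int $id): bool {
    // The stock check and the decrement happen in a single statement, so two
    // concurrent requests can never both see the same pre-decrement value:
    // one of them will find stocklevel already reduced.
    $stmt = $pdo->prepare(
        "UPDATE products SET stocklevel = stocklevel - 1
         WHERE productId = ? AND stocklevel > 0"
    );
    $stmt->execute([$id]);
    return $stmt->rowCount() === 1; // 1 row affected => stock was available
}

// With 3 items in stock, three uses succeed and the fourth is refused.
for ($i = 1; $i <= 4; $i++) {
    echo $i . ': ' . (useItem($pdo, 1) ? "ok" : "out of stock") . "\n";
}
```

The same `rowCount()` (or `mysql_affected_rows()` in the asker's legacy API) check replaces the separate SELECT-then-UPDATE sequence entirely.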

The problem is likely due to having multiple concurrent threads running on your web server, responding to requests simultaneously for non-blocking / non-transactional database operations. Some of the requests may pass the inventory quantity check while other requests are still being processed.
One possible solution would be to use MySQL transactions, but this would probably require migrating to mysqli or PDO which seems to be outside the scope of your desired solution, and require InnoDB tables which you might not have.
Should you ever choose to upgrade to use mysqli, here is some useful information:
http://dev.mysql.com/doc/refman/5.0/en/commit.html
http://coders-view.blogspot.com/2012/03/how-to-use-mysql-transactions-with-php.html
Another solution would be to implement "locking" functionality.
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html
mysql_query("LOCK TABLES inventory WRITE;");
// all your other PHP/SQL here
mysql_query("UNLOCK TABLES;");
This will prevent other clients from reading the inventory table while the first client is still busy processing your PHP/MySQL code.


Issue with maintaining a MySQL WooCommerce Customer Table

Well, I'm afraid that I will not be able to post a minimum reproducible example, and for that I apologize. But, here goes nothing.
Ours is a weekly prepared meals service. I track order volume in many ways. Here is the structure of the relevant table:
So then I utilize the highlighted fields in many ways, such as indicating to delivery drivers if a customer is returning from the prior order being more than a month ago (last_order_w - prev_order_w > 4), for instance.
Lately I have been noticing that the data is not consistently updating properly. In the past 3 weeks, I would say it is an occurrence of 5%. If it were more consistent, I would be more confident in my ability to track down the issue, but I am not even sure how to provoke it, as I only really notice it after the fact.
The code that should cause the update is below:
<?php
//retrieve and iterate over IDs of orders placed since last synchronization.
$newOrders=array_map('reset',$dbh->query("select id from wp_posts where id > (select max(synced) from fitaf_weeks) and post_type='shop_order' and post_status='wc-processing'")->fetchAll(PDO::FETCH_NUM));
foreach($newOrders as $no){
//retrieve the metadata for the current order
$newMetas=array_map('reset',$dbh->query("select meta_key,meta_value from wp_postmeta where post_id=$no")->fetchAll(PDO::FETCH_GROUP|PDO::FETCH_UNIQUE));
//check if the current order is associated with an existing customer
$exist=$dbh->query("select * from fitaf_customers where id=".$newMetas['_customer_user'])->fetch();
//if not, gather the information we want to store from this post
$noExist=[$newMetas['_customer_user'],$newMetas['_shipping_first_name'],$newMetas['_shipping_last_name'],$newMetas['_shipping_address_1'],(strlen($newMetas['_shipping_address_2'])==0?NULL:$newMetas['_shipping_address_2']),$newMetas['_shipping_city'],$newMetas['_shipping_state'],$newMetas['_shipping_postcode'],$phone,$newMetas['_billing_email'],1,1,$no,$newMetas['_paid_date'],$week[3],$newMetas['_order_total']];
if($exist){
//if we found a record in the customer table, retrieve the data we want to modify
$oldO=$dbh->query("select last_order_id,last_order,last_order_w,lo,num_orders from fitaf_customers where id=".$newMetas['_customer_user'])->fetch(PDO::FETCH_GROUP|PDO::FETCH_ASSOC|PDO::FETCH_UNIQUE);
//make changes to the retrieved data, and make sure we are storing the most recently used delivery address and prepare the data points for the update command
$exists=[$phone,$newMetas['_shipping_first_name'],$newMetas['_shipping_last_name'],$newMetas['_shipping_postcode'],$newMetas['_shipping_address_1'],(strlen($newMetas['_shipping_address_2'])==0?NULL:$newMetas['_shipping_address_2']),$newMetas['_shipping_city'],$newMetas['_shipping_state'],$newMetas['_paid_date'],$no,$week[3],$oldO['last_order'],$oldO['last_order_id'],$oldO['last_order_w'],($oldO['num_orders']+1),($oldO['lo']+$newMetas['_order_total']),$newMetas['_customer_user']];
}
if(!$exist){
//if the customer did not exist, perform an insert
$dbh->prepare("insert into fitaf_customers(id,fname,lname,addr1,addr2,city,state,zip,phone,email,num_orders,num_weeks,last_order_id,last_order,last_order_w,lo) values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)")->execute($noExist);
}
else{
//if the customer did exist, update their data
$dbh->prepare("update fitaf_customers set phone=?,fname=?,lname=?,zip=?,addr1=?,addr2=?,city=?,`state`=?,last_order=?,last_order_id=?,last_order_w=?,prev_order=?,prev_order_id=?,prev_order_w=?,num_orders=?,lo=? where id=?")->execute($exists);
}
}
//finally retrieve the most recent post ID and update the field we check against when the synchronization script runs
$lastPlaced=$dbh->query('select max(id) from wp_posts where post_type="shop_order"')->fetch()[0];
$updateSync=$dbh->query("update fitaf_weeks set synced=$lastPlaced order by id desc limit 1");
?>
Unfortunately I don't have any relevant error logs to show. However, as I documented the code for this post, I realized a potential shortcoming: I should be utilizing the data retrieved from the initial query of new posts, rather than selecting the highest post ID after performing this logic. But I have timers running on my scripts, and this section hasn't taken over 3 seconds to run in a long time, so it seems unlikely that the script, which runs on a cron every 5 minutes, is experiencing this unintended overlap.
While I have made the change to pop the highest ID off of $newOrders, and hope it solves the issue, I am still curious to see if anyone has any insights on what could cause this logic to fail at such a low occurrence.
It seems likely your problem comes from race conditions between multiple operations accessing your db.
First of all, your last few lines of code do SELECT MAX(ID) and then use that value to update something. You Can't Do That™. If somebody else adds a row to that wp_posts table anytime after the entry you think is relevant, you'll use the wrong ID. I don't understand your app well enough to recommend a fix. But I do know this is a serious and notorious problem.
You have another possible race condition as well. Your logic is this:
1. SELECT something.
2. Make a decision based on what you SELECTed.
3. INSERT or UPDATE based on that decision.
If some other operation, done by some other user of the db, intervenes between step 1 and step 3, your decision might be wrong.
You fix this with a db transaction. The ->beginTransaction() operation, well, begins the transaction. The ->commit() operation concludes it. And, the SELECT operation you use for step one should say SELECT ... FOR UPDATE.
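Applied to the question's select-decide-write sequence, the transactional pattern might look like the sketch below (PDO, InnoDB tables assumed; `$customerId` stands in for `$newMetas['_customer_user']`, and only the `num_orders` column is shown, not the asker's full field list):

```php
<?php
// Hedged sketch: hold a row lock across the whole decide-and-write step.
$dbh->beginTransaction();
try {
    // FOR UPDATE keeps a lock on the matching row (or the gap where it
    // would be) until COMMIT, so a concurrent run cannot read the same
    // state and make a conflicting insert/update decision.
    $stmt = $dbh->prepare("SELECT num_orders FROM fitaf_customers WHERE id = ? FOR UPDATE");
    $stmt->execute([$customerId]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    if ($row === false) {
        $dbh->prepare("INSERT INTO fitaf_customers (id, num_orders) VALUES (?, 1)")
            ->execute([$customerId]);
    } else {
        $dbh->prepare("UPDATE fitaf_customers SET num_orders = num_orders + 1 WHERE id = ?")
            ->execute([$customerId]);
    }
    $dbh->commit();
} catch (Exception $e) {
    $dbh->rollBack();   // release the lock without applying partial changes
    throw $e;
}
```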

Mysqli/PHP prevent certain dirty reads

currently I am running into a problem and I am breaking my head over it (although I might be overthinking it).
Currently I have a table in my SQL DB with some products and the amount in stock. People can visit the product page, or order it (or update it if you are an admin). But now I am afraid of race conditions.
The order process happens as follows:
1) The session starts a transaction.
2) It gets the current amount of units available.
3) It checks that the amount to order is available, and it subtracts the amount.
4) It updates the product table with the new "total amount" value. Here is the code, kept very short (without prepared statements etc.):
BEGIN;
SELECT amount FROM products WHERE id=100;
$available = $result->fetch_array(MYSQLI_NUM)[0];
if($order <= $available){
    $available -= $order;
    UPDATE products SET amount=$available WHERE id=100;
}
// error checking and then ROLLBACK or COMMIT
My question now is:
What do I do to prevent dirty reads in step 2, and thus the write-back of wrong values in step 4?
Example: person 1 orders 10 units of product A, and while they are at step 3, person 2 orders 5 units of product A. In step 2 person 2 still gets the "old" value and works with that, and thus stores an incorrect number back in step 4.
I know I can use SELECT ... FOR UPDATE, which puts an exclusive lock on the row, but this also prevents a normal user who is just checking availability (on the product page) from loading it instantly. I would rather have the page load quickly than have a to-the-second accurate inventory. So basically I want the read lock to apply only to clients that will update the value in the same transaction.
Is what I want possible, or do I need to work with what I've got?
Thanks in advance!
There are two ways you can address the problem:
You can use a function in MySQL that updates the stock and raises the error "Sorry, your product just went out of stock!" whenever the balance after deduction would go below 0.
OR (preferred way)
You can use locking in MySQL. In this case, it shall be a write lock. The write lock shall disable other read requests (by the second person) till the lock is released (by the first person).
I hope that helps you!
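A locking-read version of steps 1–4 can be sketched like this (mysqli, InnoDB assumed; `$mysqli` is an open connection and `$order` the quantity requested). Worth noting: under InnoDB's default isolation level, plain SELECTs issued outside a transaction are non-locking consistent reads, so visitors who only view the product page are not held up by this row lock:

```php
<?php
$mysqli->begin_transaction();
$result = $mysqli->query("SELECT amount FROM products WHERE id=100 FOR UPDATE");
$available = $result->fetch_array(MYSQLI_NUM)[0];
if ($order <= $available) {
    // No other transaction can lock or update this row until we commit,
    // so the write-back below cannot be based on a stale value.
    $available -= $order;
    $mysqli->query("UPDATE products SET amount=$available WHERE id=100");
    $mysqli->commit();
} else {
    $mysqli->rollback();   // not enough stock; release the lock unchanged
}
```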

Unique Codes - Given to two users who hit script in same second

I have a bunch of unique codes in a database which should only be used once.
Two users hit a script which assigns them at the same time, and they got the same codes!
The script is in Magento, and a user can order multiple codes. The issue: if one customer orders 1000 codes, the script grabs the top 1000 codes from the DB into an array and then runs through them, setting them to "used" and assigning them to an order. If a second user hits the same script at a similar time, it grabs the top 1000 codes in the DB at that point, which overlap because the first script hasn't had a chance to finish assigning its codes.
This is unfortunate but has happened quite a few times!
My idea was to create a new table: once a user hits the script, a row is made with "order_id" and "code_type". Then in the same script a check is done, so that if a row is already in this new table and its "code_type" matches what the user is ordering, it waits 60 seconds and checks again until the previous codes are issued and the table is empty, at which point it creates its own row and off it goes.
I am not sure if this is the best way, or whether, if two users hit at the same second again, two rows will just be inserted and off we go with the same problem!
Any advice is much appreciated!
The correct answer depends on the database you use.
For example in MySQL with InnoDB the possible solution is a transaction with SELECT ... LOCK IN SHARE MODE.
Schematically it works by firing the following queries:
START TRANSACTION;
SELECT * FROM codes WHERE used = 0 LIMIT 1000 LOCK IN SHARE MODE;
// save ids
UPDATE codes SET used=1 WHERE id IN ( ...ids....);
COMMIT;
More information at http://dev.mysql.com/doc/refman/5.7/en/innodb-locking-reads.html
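Glued together in PHP with PDO, the same scheme might look like the sketch below. One caution worth hedging on: if two transactions both take shared locks on the same rows and then both try to UPDATE them, they can deadlock, so an exclusive FOR UPDATE lock is used here instead of LOCK IN SHARE MODE. Table and column names follow the answer's example, and an `order_id` column is assumed for assigning codes to the order:

```php
<?php
// Sketch: atomically reserve up to 1000 unused codes for one order.
// $dbh is a PDO handle to an InnoDB-backed database; $orderId is an int.
$dbh->beginTransaction();
try {
    // FOR UPDATE locks the selected rows, so a second request asking for
    // "the top 1000 unused codes" waits here until this transaction commits,
    // then sees a fresh, non-overlapping set.
    $ids = $dbh->query("SELECT id FROM codes WHERE used = 0 LIMIT 1000 FOR UPDATE")
               ->fetchAll(PDO::FETCH_COLUMN);
    if (count($ids) > 0) {
        $in = implode(',', array_map('intval', $ids));
        $dbh->exec("UPDATE codes SET used = 1, order_id = $orderId WHERE id IN ($in)");
    }
    $dbh->commit();
    // $ids now holds codes reserved exclusively for this order
} catch (Exception $e) {
    $dbh->rollBack();
    throw $e;
}
```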

Simple concurrency in PHP?

I have a small PHP function on my website which basically does 3 things:
check if user is logged in
if yes, check if he has the right to do this action (DB Select)
if yes, do the related action (DB Insert/Update)
If I have several users connected at the same time on my website that try to access this specific function, is there any possibility of concurrency problem, like we can have in Java for example? I've seen some examples about semaphore or native PHP synchronization, but is it relevant for this case?
My PHP code is below:
if ( user is logged ) {
    sql execution : "SELECT....."
    if (sql select gives no results) {
        sql execution : "INSERT....."
    } else if (sql select gives 1 result) {
        if (selected column from result is >= 1) {
            sql execution : "UPDATE....."
        }
    } else {
        nothing here....
    }
} else {
    nothing important here...
}
Each user who accesses your website is running a dedicated PHP process. So, you do not need semaphores or anything like that. Taking care of the simultaneous access issues is your database's problem.
Not in PHP. But you might have users inserting or updating the same content.
You have to make sure this does not happen.
So if you have them update their own user profile, which only that user can access, no collision will occur.
BUT if they are editing content like in a Content-Management System... they can overwrite each other's edits. Then you have to implement some locking mechanism.
For example (there are a lot of ways...): when a user opens the content for editing, you write an update on the content storing the current time and user. That user then has a lock on the content for maybe 10 min. You should show the (in this case) 10 min countdown in the frontend to the user, and a cancel button to unlock the content and ... you probably get the idea.
If another person tries to load the content in those 10 min, they get an error: "user xy is already... lock expires at xx:xx".
Hope this helps.
In general, it is not safe to decide whether to INSERT or UPDATE based on a SELECT result, because a concurrent PHP process can INSERT the row after you executed your SELECT and saw no row in the table.
There are two solutions. Solution number one is to use REPLACE or INSERT ... ON DUPLICATE KEY UPDATE. These two query types are "atomic" from the perspective of your script, and solve most cases. REPLACE tries to insert the row, but if it hits a duplicate key it replaces the conflicting existing row with the values you provide; INSERT ... ON DUPLICATE KEY UPDATE is a little more sophisticated, but is used in similar situations. See the documentation here:
http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html
http://dev.mysql.com/doc/refman/5.0/en/replace.html
For example, if you have a table product_descriptions, and want to insert a product with ID = 5 and a certain description, but if a product with ID 5 already exists, you want to update the description, then you can just execute the following query (assuming there's a UNIQUE or PRIMARY key on ID):
REPLACE INTO product_description (ID, description) VALUES(5, 'some description')
It will insert a new row with ID 5 if it does not exist yet, or will update the existing row with ID 5 if it already exists, which is probably exactly what you want.
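For comparison, the same upsert written with INSERT ... ON DUPLICATE KEY UPDATE touches only the listed columns, rather than deleting and re-inserting the whole row the way REPLACE does:

```sql
INSERT INTO product_description (ID, description)
VALUES (5, 'some description')
ON DUPLICATE KEY UPDATE description = VALUES(description);
```

This matters when the table has other columns (timestamps, counters) you don't want reset by the delete-and-reinsert that REPLACE performs.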
If it is not, then approach number two is to use locking, like so:
query('LOCK TABLES users WRITE');
if (num_rows('SELECT * FROM users WHERE ...')) {
    query('UPDATE users ...');
}
else {
    query('INSERT INTO users ...');
}
query('UNLOCK TABLES');

mysql 'FOR UPDATE' command not working correctly

I have two PHP pages, page1.php & page2.php
page1.php
execute_query('START TRANSACTION');
$res =execute_query('SELECT * FROM table WHERE id = 1 FOR UPDATE');
sleep(20);
print $res->first_name;
execute_query('COMMIT');
print"\n OK";
page2.php
$res =execute_query('SELECT * FROM table WHERE id = 1');
print $res->first_name;
I execute both pages at almost the same time.
According to MySQL's FOR UPDATE behaviour, the result in page2.php should display only after the execution of page1.php completes (i.e. after 'OK' is displayed on page1.php), because both pages read the same row.
But what is happening is:
page2.php displays the result immediately, even before page1.php has finished executing.
May I know what's wrong with the FOR UPDATE command?
I'm assuming that the table is InnoDB (not MyISAM or MEMORY).
You are using a SELECT within a transaction. I don't know your isolation level, but I guess that your transactions are not blocking each other.
See this page for details: http://dev.mysql.com/doc/refman/5.5/en/set-transaction.html
EDIT:
I'm going to explain better this concept, as requested. The isolation level is a session/global variable which determines the way the transactions are performed. Some isolation levels block other transactions when they try to modify the same row, but some isolation levels don't.
For example, if you used READ UNCOMMITTED, it doesn't block anything, because you access the actual version of the rows (which may become obsolete before the transaction ends). The other SELECT (page2) only reads the table, so it doesn't have to wait for the first transaction to end.
SERIALIZABLE is much safer. It is not the default because it is the slowest isolation level. If you are using it, make sure that FOR UPDATE still makes sense for you.
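If blocking really is the desired behaviour for page2, one sketch (following the manual page linked above) is to raise that session's isolation level before its read. Under SERIALIZABLE, InnoDB converts plain SELECTs into shared-lock reads, so page2 would then wait for page1's COMMIT:

```sql
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
SELECT * FROM table WHERE id = 1;  -- now a locking (shared-lock) read
COMMIT;
```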
I think your SELECT FOR UPDATE is inside BEGIN TRANSACTION, so it will not lock the record until the COMMIT statement is reached, and you delayed execution with sleep(20); so page2.php will execute.
