I have a question about concurrent requests from one user in different browsers.
Imagine the wallet balance is only enough to buy one product. If the user requests that product at the same time from two different browsers, can he buy the product twice? If it is possible, how can I prevent the second action?
example:
user A balance: $100
user A ----> Firefox ----> request ----> product A50 (price $100)
user A ----> Chrome ----> request ----> product A50 (price $100)
Both requests arrive at the same time; after some processing, the wallet balance is decreased.
You should perform these operations in SQL TRANSACTIONs having an appropriate isolation level. All of the operations performed within the transaction will be "all or nothing," which means that all of the changes take effect if you COMMIT and none of them do if you instead ROLLBACK. Furthermore, if two transactions attempt to touch the same row, one of them will either be forced to wait or will be turned away. Also, the other transaction will not see anything that hasn't yet been committed.
For instance, if you want to "deduct money from the user's account and apply it to an order," you would perform both updates in one transaction. So, "if everything worked, both updates happened instantaneously." And, "if it didn't work and the transaction was rolled back, nothing changed anywhere."
But it's important that you also test the user's balance within the same transaction! (Otherwise, there would be a "race" between testing the balance and proceeding with the sale.) Your logic might be something like this pseudocode:
BEGIN TRANSACTION with an appropriate isolation level
    Retrieve the user's account.
    If there isn't enough money:
        ROLLBACK
        exit
    Else:
        UPDATE the user's account to withdraw the money.
        UPDATE the invoice to show payment.
        INSERT a new entry into the (financial ...) transaction log table.
        COMMIT
This works as intended because the entire set of operations that occurs within the transaction is "atomic."
SQL servers vary slightly in their implementation of transactions but here is a web-page on the topic (covering MS SQL Server):
https://www.sqlserverlogexplorer.com/types-of-transaction-isolation-level/
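As a concrete sketch in MySQL (table and column names here are assumptions, not from the question), the pseudocode above could use SELECT ... FOR UPDATE to lock the balance row, so a concurrent second request must wait and then sees the already-reduced balance:

```sql
START TRANSACTION;

-- Lock the user's row; a second concurrent transaction blocks here
SELECT balance FROM accounts WHERE user_id = 1 FOR UPDATE;

-- Application code checks the returned balance; if it is less than
-- the price, issue ROLLBACK and stop. Otherwise:
UPDATE accounts SET balance = balance - 100 WHERE user_id = 1;
UPDATE invoices SET status = 'paid' WHERE invoice_id = 50;
INSERT INTO transaction_log (user_id, amount, note)
VALUES (1, -100, 'purchase of product A50');

COMMIT;
```

When the second browser's transaction finally acquires the lock, it reads a balance of 0 and rolls back, so the product cannot be bought twice.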
Related
I am building a financial web app, sort of an e-banking application. Whenever a transaction is performed, e.g. a funds transfer, a transaction charge is deducted, and the system account's earnings balance is updated by querying the current balance, adding the deducted charge, and finally updating the balance field.
Now my problem is that when many transactions, maybe 200 of them, are executed simultaneously by different users with different accounts, there is a discrepancy between the total earnings balance I expect and what is actually recorded in the system. I believe it has something to do with the asynchronous nature of the requests. How can I prevent this?
CodeIgniter supports transactions; see the CodeIgniter Transactions documentation. You can use them like this:
$this->db->trans_start();
// YOUR QUERY / QUERIES
$this->db->trans_complete();
If any of the queries fails, trans_complete() rolls the whole group back; you can check $this->db->trans_status() afterwards to see whether it succeeded.
I have a situation in a shop application: when a user wants to pay, an invoice ("factor") is created first, and then the user is redirected to the payment gateway.
During invoice creation, the quantity of the items the user selected is deducted from the stock count. So if the user closes the payment gateway's browser tab, the reserved items have already been deducted from the stock count and are never returned.
How should I manage this situation for payment?
My Solution
I thought about this a lot and came up with the following solution: create a reserved-invoices table that stores the invoice currently being paid, and when the user comes back from the payment gateway, simply delete that record.
If the user closed the browser tab, then check whether the reserved invoice is older than the payment gateway's timeout; if so, delete it from the reserved table and add the reserved quantity back to the stock count.
I added this code to my constructor (because I think that is the right place to check all reserved invoices right before showing items to the user; it lets items that were unavailable become available again). On the other hand, if the number of reserved invoices in the database grows large enough, it might have a huge effect on loading performance.
So what is the right solution for situations like this?
Can I have something like a scheduled job in MySQL to delete those records? Or in PHP?
Or any other idea...
EDIT:
I would prefer a code-based solution if one exists.
Can I have something like a scheduled job in MySQL to delete those records? Or in PHP?
Usually this sort of thing is done using your server's "cron" feature (or "Scheduled Tasks" if your server is running Windows). You can write a PHP script that clears these abandoned carts when run, and configure your server to execute that PHP script at regular intervals.
MySQL also has an "Event Scheduler" feature; I don't think that gets used very often, but it is an option.
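As a minimal sketch of the Event Scheduler approach (the table and column names are assumptions about your schema), an event can run every few minutes, return the reserved quantities to stock, and delete the expired reservations:

```sql
SET GLOBAL event_scheduler = ON;

DELIMITER //
CREATE EVENT release_abandoned_reservations
ON SCHEDULE EVERY 5 MINUTE
DO
BEGIN
  -- return reserved quantities to stock for reservations
  -- older than the gateway's 15-minute timeout
  UPDATE products p
  JOIN reserved_factors r ON r.product_id = p.product_id
  SET p.stock = p.stock + r.quantity
  WHERE r.created_at < NOW() - INTERVAL 15 MINUTE;

  -- then drop the expired reservations
  DELETE FROM reserved_factors
  WHERE created_at < NOW() - INTERVAL 15 MINUTE;
END //
DELIMITER ;
```

A cron-driven PHP script would run the same two statements; cron is usually the more common and more portable choice.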
...so if the user closes the payment gateway's browser tab, the reserved items have already been deducted from the stock count and are never returned.
This is the wrong way to look at this problem. Consider these scenarios:
User accidentally closes their browser, re-opens it, and tries to finish purchasing
User loses internet access and cannot finish purchasing... browser isn't closed
While it is possible to use the Beacon API in newer browsers to send an update when a browser is closed, the scenarios above show why relying on that is a bad idea.
You have two general options:
Option 1: Track the user/cart activity
Every time the user does something meaningful on the site, update their last active time on a record for their cart. If a user is still browsing the site and they have a product in their cart, keep it reserved there until the sale is complete or until they are no longer active for some period of time. For example, if they haven't been on the page in 15 minutes, release the reservation for the product. (Be prepared to reserve it again if they come back and the product is still available.)
Option 2: Don't reserve until purchase
Keep the item in-stock until it's actually bought. If at the time of purchase the product is no longer available, let the user know.
Which of these options you choose depends on your business conditions and the sort of stuff you're selling.
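Option 1 can be implemented with a pair of queries run on a schedule or on page load (the carts/cart_items/products schema here is an assumption for illustration):

```sql
START TRANSACTION;

-- return stock for carts that have been idle for 15+ minutes
UPDATE products p
JOIN cart_items ci ON ci.product_id = p.product_id
JOIN carts c       ON c.cart_id     = ci.cart_id
SET p.stock = p.stock + ci.quantity
WHERE c.status = 'open'
  AND c.last_active < NOW() - INTERVAL 15 MINUTE;

-- mark those carts expired so the release cannot run twice
UPDATE carts
SET status = 'expired'
WHERE status = 'open'
  AND last_active < NOW() - INTERVAL 15 MINUTE;

COMMIT;
```

Doing both updates in one transaction keeps the stock count and the cart state consistent even if the job runs concurrently with user activity.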
I am currently working on a site that allows users to rent equipment. When the user wishes to add accessories to the current rental, a pop-up window opens with the accessories available for that equipment. When the user selects an accessory, I use a JavaScript function with AJAX to validate the user's input and check that the accessory exists in my database.
After this validation the pop-up window closes, and I need to start a MySQL transaction to modify the accessories picked for the rented equipment. I need it to be a transaction because the user can cancel the rental at any moment, and I need to "return" everything to the way it was before the rental.
Is it possible to handle MySQL transactions using several PHP files with AJAX?
No, you can't do it with MySQL transactions, because you cannot serialize a reference to the transaction or connection across requests, and any open transaction is rolled back when the connection is closed at the end of script execution. This means it will not work across multiple requests.
Instead of using transactions, a possible solution would be to update your schema to add a flag for "picked" accessories. When the customer selects them, set this flag. If they cancel, unset it.
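A sketch of that flag approach (table and column names are assumptions): reserving becomes a single conditional UPDATE, which is atomic, so two users cannot pick the same accessory:

```sql
ALTER TABLE accessories ADD COLUMN picked_by INT NULL;

-- reserve: succeeds (1 row affected) only if nobody else holds it
UPDATE accessories
SET picked_by = 42            -- the current user's id
WHERE accessory_id = 7
  AND picked_by IS NULL;

-- cancel: release everything this user had picked
UPDATE accessories
SET picked_by = NULL
WHERE picked_by = 42;
```

Each AJAX request checks the affected-row count of the first UPDATE to know whether the reservation succeeded, so no long-lived transaction is needed across requests.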
I hope my question is suitable for this site and is not too broad but I am having trouble designing the following architecture:
I have two sites. Site 1 is responsible for transferring credits between users. Site 2 is responsible for providing these users with services/products which can be paid with the credits they own in Site 1.
Let's say I have 1000 credits on Site 1. There is a service/product which costs 50 credits on Site 2, and a user wants to purchase it with the credits he owns on Site 1.
Both sites communicate with REST. So for example when a user wants to purchase a service/product Site 2 prepares its request and sends it to Site 1 which makes the transaction and confirms to Site 2 that the transaction was successful (e.g. the user had enough credits for the service/product and those credits were successfully transferred to the destination)
Now here's the tricky part. In Site 1 I have the following logic:
Begin transaction
update user set credits -= 50 where id = 1
update user set credits += 50 where id = 2
REST CALL (Site 2) Success
Site 2 response - OK, commit transaction
Commit
Since REST involves a call to a different site, the transaction might take some time to complete. In the meantime, is the whole table locked against other transactions, or only the rows for user 1 and user 2? Is this the proper way to implement my logic? Am I missing something?
Thank you in advance.
This is in response to your question on Casey's answer:
Yes, as long as you do it like this:
Site 2:
Customer logs in.
Ask Site 1 for credit total & transaction history (GET request) for this user (user 1).
(Any awaiting transactions which receive 'transaction succeeded' responses are made available for download/dispatch)
Use credit total to enable "Buy" buttons for things that can be afforded.
Customer clicks a Buy button
Generate a transaction ID unique to Site 2 and store it in the database along with the details of who bought what and when, with state = pending. Tell the user the transaction has been received and that they will be notified soon whether it succeeded (HTTP 202 response).
POST purchase request to Site 1, including authentication (don't want forged requests to cause people to spend money they don't want to spend) and the transaction ID.
Site 1
validate authentication
Verify that Site 2's transaction ID has not been used by Site 2 before. If it has, return an error; if not:
Begin transaction
update user set credits -= 50 where id = 1
update user set credits += 50 where id = 2
insert into transactions remoteSiteID = 'Site2', remoteTransactionID = tID, user = 1
You would not need the remoteSiteID field if site2 is the only site using credits from site1
Commit
REST CALL (Site 2) Success
Site 2:
EITHER:
1. Receive REST success call, make purchase available for download/dispatch, display some message to user saying purchase processing complete. Update local transaction record, state=succeeded.
OR
2. Site 2 is down. Transaction success will be noted next time background polling process runs (which checks status of purchase requests awaiting responses) or next time customer logs in (in which case poll is initiated too--step 3 in first list)
If you have not received a response to a transaction, perform a GET using the transaction ID. If the response is an error, Site 1 did not receive the original request, Site 2 is free to repeat the transaction (POST) request. If the response is 'transaction failed' then the user didn't have enough credits, update transaction record on site 2 accordingly. if result is 'transaction succeeded' record that too.
if a transaction fails N number of times, or a certain period elapses since the user clicked the button (say 5 minutes) then Site 2 stops retrying the purchase.
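The duplicate check in the Site 1 steps can be enforced by the database itself with a unique key on the remote transaction ID (the schema sketched here is an assumption). The INSERT then fails inside the same transaction that moves the credits, so a retried POST can never transfer the money twice:

```sql
CREATE TABLE transactions (
  transaction_id        INT AUTO_INCREMENT PRIMARY KEY,
  remote_site_id        VARCHAR(32) NOT NULL,
  remote_transaction_id VARCHAR(64) NOT NULL,
  user_id               INT         NOT NULL,
  UNIQUE KEY uq_remote (remote_site_id, remote_transaction_id)
);

START TRANSACTION;
UPDATE user SET credits = credits - 50 WHERE id = 1;
UPDATE user SET credits = credits + 50 WHERE id = 2;
-- errors with a duplicate-key violation if this remote ID was seen
-- before; the application then issues ROLLBACK and returns the error
INSERT INTO transactions (remote_site_id, remote_transaction_id, user_id)
VALUES ('Site2', 'tID-from-site-2', 1);
COMMIT;
```

This way the "has this ID been used" check and the credit transfer cannot race, because they are one atomic unit.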
Since you're using primary keys and not ranges, it should be row-level locking. There's also the concept of a shared vs. exclusive lock. A shared lock allows other processes to still read the data, while an exclusive lock is used in update/delete scenarios and blocks other transactions from locking or modifying the row.
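In MySQL/InnoDB terms, the two lock types look like this (using the user table from the question):

```sql
-- shared (read) lock: other transactions may also read-lock the row,
-- but cannot modify it (FOR SHARE; LOCK IN SHARE MODE before MySQL 8)
SELECT credits FROM user WHERE id = 1 FOR SHARE;

-- exclusive (write) lock: taken implicitly by UPDATE/DELETE, or
-- explicitly as below; blocks all other locks on the row
SELECT credits FROM user WHERE id = 1 FOR UPDATE;
```

Because id is the primary key, only that one row is locked, not the whole table.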
On the logic in general.. if there's really only one place storing the credits and one place reading them, how important is it to sync in realtime? Would 3, 5, or 10 seconds later be sufficient? If Site 1 is completely down, do you want to allow for Site 2 to still work?
Personally, I would restructure things a bit:
A user creates an account on Site 1.
The first time a transaction is done on Site 2, it validates the account exists on S1 and gets the number of credits.. and keeps it.
Whenever a transaction is done on Site 2, you check the local credit count (cache) first and if there are enough credits you return a 202 Accepted response code. It basically means "hey, we accepted this but aren't done with it yet."
You can immediately allow the user to continue at this point.
But at the same time you did the local credit check, you made another request to S1 with the actual transaction.
Hopefully, that service can give you a success/failure message and the updated definitive/official credit count.
Using that, you mark the local transaction as successful (e.g. a 204 No Content response) and update the cached count for next time.
Since S1 is always returning the definitive current credit count, you don't have to worry about maintaining the credit count on S2 in a 100% accurate way.. you can just wait for it from S1.
If you're really nervous about it, you could have a job that runs every N hours that polls S1 requesting updates on every account updated during that N hours.
I'm setting up a simple 'buy now' transaction from a website with these major steps:
Select product from price list
Review selection (amounts, tax etc)
Process Payment on Paypal
Receipt / Thank you
At the moment, I'm storing a database record in step 2, which potentially means there will be a number of records where no payment is received, as people decide not to go ahead with their purchase after all. These records are of no real use, since I'll use Google Analytics to track how successful the checkout flow is.
I'm using Paypal IPN to verify the authenticity of the payments and log them against the records inserted at step 2 - however, could I feasibly rely solely on the data from the IPN transactions to populate the database in the first place, thus removing the need to store them at step 2 and have to do database cleanup to remove transactions that never completed?
I personally can see no reason why I wouldn't - the IPN contains all the data I need about the payment and probably more besides, and Paypal will resend IPNs for several days if they don't go through first time due to server glitchery, but am I missing anything else important?
Obviously the number one consideration is that no transactions get lost or aren't logged so that no customer unhappiness ensues!
It's important to do a two-way validation like you have.
You save the order info (total, quantity) before the user leaves your system for PayPal. When the IPN comes back, you validate the request (it must come from a PayPal IP, or whatever), and validate that it is a successful transaction; then your step 2 enters the scene. You check that the total returned in the PayPal IPN matches the total that was saved before the user left (PayPal may sometimes return partial payments, and a user could grab the POST data and submit his own POST from a modified HTML form with a lower total). Step 2 should also store the buyer's user_id, so compare that too.
Here's a sample layer (no particular programming language, just dummy code):
if request comes from paypal:
    # query the order
    if order.total == request.total && order.user_id == request.custom:
        payment may come in...
As the designer and administrator of a system that has processed over 600,000 PayPal payments in the last three years - relying exclusively on IPN will allow some errors to slip through the cracks.
Real data:
            Total transactions   No IPN   Invalid IPN   Duplicate IPN
year 1      170,000+             2        101           0
year 2      205,000+             54       15            3
year 3      230,000+             20       24            13
Fortunately, our system is structured with PDT (Payment Data Transfer) as a 'backup' so we didn't lose any transaction data or have unhappy customers. Note: PDT can't be relied upon exclusively either - in fact, early this year, there was a major problem with the reliability of PDT returns.
The most common 'invalid' IPN returns are an HTML error page or truncated results ... I can provide samples if desired.
The best choice is a combination of both IPN and PDT (with your 'cart' data stored in your DB, as you are doing). Either the IPN process or the PDT process can create the transaction (and delete the 'cart' data record in the DB). The second process to arrive will then not have a 'cart' entry from which to record a transaction.
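That "second process finds no cart entry" logic can be made race-safe with a single DELETE (the pending_carts table name is an assumption): whichever of the IPN handler or the PDT handler deletes the row first is the one that records the transaction.

```sql
-- in either the IPN or the PDT handler:
DELETE FROM pending_carts WHERE cart_id = 123;
-- if ROW_COUNT() = 1, this handler won: insert the final transaction.
-- if ROW_COUNT() = 0, the other channel already recorded it; do nothing.
```

Because a DELETE of one row is atomic, the two handlers can arrive at the same moment without ever double-recording the payment.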
NOTE: - as you noted in your final solution to use a custom field - be aware there is a length limitation to the custom field and it can be truncated upon being returned to you.
I have not relied solely on IPN for this, but PayPal logs failures to contact your server and is supposed to retry later, although I only ever saw failures in development and never verified the retry. I just trust them on this one.
For a typical e-commerce site, yes you can -- it's fairly reliable. If nuclear reactors will melt down and people will die, then no you can't -- I've seen problems with it, but very infrequently.
I have developed a number of eCommerce sites, and in practice you always want to record what you can in case of any 'accidents'. Your own data is probably more informative.
Like you said, yes you can do this, but I would suggest that it is not a great idea.