My requirement is that users can click a deal to avail it.
One deal can be availed by only one member, so I lock the table when a user tries to avail a deal and unlock it afterward. That way, if two users click and try to avail the same deal, their requests form a queue, and only one of them can avail it.
The code is like
LOCK TABLES deal WRITE;
-- MySQL queries and my PHP code go here.
UNLOCK TABLES;
The problem now is: what if something goes wrong in my PHP code between the lock and the unlock?
Will the table stay locked permanently? Is there any way I can set a maximum time for the lock?
If I were in your place, I would have created a database table named locked_deals with a column named deal_id. Whenever a user chooses a deal, its deal_id gets inserted into the locked_deals table. When the next user clicks on the same deal, the code first checks whether that deal_id is in the lock table; if it is, the user is not allowed to choose the deal. Finally, when everything goes fine, the deal_id is deleted from the lock table at the end of the process. For the ids that get stuck in the table because of an exception in the PHP code, you can create a background service that runs every n minutes and cleans out any ids that have been stuck for the last n minutes.
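The locked_deals idea can be sketched as follows. This is a Python illustration using an in-memory SQLite database for self-containment (the thread assumes PHP/MySQL, where the same logic applies); the locked_at column and the function names are my own additions so stale locks can be cleaned up.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
# deal_id is the primary key, so a second INSERT for the same deal fails atomically.
conn.execute("CREATE TABLE locked_deals (deal_id INTEGER PRIMARY KEY, locked_at REAL)")

def try_lock_deal(deal_id):
    """Return True if this user got the deal, False if someone else holds it."""
    try:
        conn.execute("INSERT INTO locked_deals (deal_id, locked_at) VALUES (?, ?)",
                     (deal_id, time.time()))
        conn.commit()
        return True
    except sqlite3.IntegrityError:   # deal_id already present: deal is taken
        return False

def release_deal(deal_id):
    conn.execute("DELETE FROM locked_deals WHERE deal_id = ?", (deal_id,))
    conn.commit()

def clean_stale_locks(max_age_seconds):
    """Background-service step: remove locks stuck longer than max_age_seconds."""
    conn.execute("DELETE FROM locked_deals WHERE locked_at < ?",
                 (time.time() - max_age_seconds,))
    conn.commit()

print(try_lock_deal(42))   # True: first user succeeds
print(try_lock_deal(42))   # False: second user is refused
```

The unique key makes the insert itself the lock acquisition, so no explicit table lock is needed.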
Hope it helps.
If the connection for a client session terminates, whether normally or abnormally, the server implicitly releases all table locks held by the session.
Source
Related
"A TEMPORARY table is visible only within the current session, and is dropped automatically when the session is closed."
This fact is causing a lot of heartache. I am using CodeIgniter 3 and MySQL on RDS. Creating TEMPORARY tables didn't work, because of the quote above: my multi-user app creates about six regular tables for each user, which get deleted (dropped in SQL) when the user presses logoff. But a large number of users will not press logoff in the app, instead pressing the X to close the tab. That leaves six extra tables per user on my RDS server, producing a large number of orphan tables (within 24 hours) that slow the database server to a crawl.
Is there any way to "catch" when someone closes the app without pressing logout? I thought that if I could keep PHP from closing sessions constantly, that might work, but that seems pretty far-fetched. I was then thinking (outside the box) that perhaps an external service like Redis could hold the temporary tables, but being on AWS I am already at the upper limit of what extra services I can afford.
I have tried turning the TEMPORARY tables into regular MySQL tables and deleting them when a user logs off. But the issue remains that many users will exit via the X on the tab.
After trying a few different approaches, I ended up creating a log where I record every small temp table I create. Then, whenever any user logs in, I check the log for tables created more than two hours ago and remove them with DROP TABLE IF EXISTS. This should work without creating a cron job. Is 2 hours enough? I sure hope so.
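The log-based cleanup can be sketched like this. A Python illustration (the app itself is PHP/CodeIgniter); the in-memory dict stands in for the log table, and the table and function names are my own:

```python
import time

table_log = {}  # table_name -> creation timestamp

def log_table(name, created_at=None):
    """Record a per-user table in the log when it is created."""
    table_log[name] = created_at if created_at is not None else time.time()

def stale_tables(max_age_seconds=2 * 3600, now=None):
    """Return the tables to DROP IF EXISTS: logged more than max_age ago."""
    now = now if now is not None else time.time()
    return [name for name, ts in table_log.items() if now - ts > max_age_seconds]

# Simulate: one table created 3 hours ago, one just now.
log_table("user42_cart", created_at=time.time() - 3 * 3600)
log_table("user43_cart")
print(stale_tables())   # ['user42_cart']
```

On each login you would loop over `stale_tables()` and issue `DROP TABLE IF EXISTS` for each name, then delete its log entry.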
So I'm making a PHP website that sometimes requires a big data input from an admin-type user. This would not be frequent and would only happen to update or add certain data to the DB.
During this upload, which will probably take a minute or two, I cannot have other users pull the data and manipulate it as they normally would, as that could cause some major errors.
One solution would be to stop the servers for a time and put the site under maintenance while an admin-type user uploads the data locally.
However, there will never be more than 1-2 users at a time on the site, and these updates are short (1-2 minutes) and infrequent. This makes the situation very improbable, but still possible.
I was thinking of making some sort of flag in the User table that an admin could toggle before and after an update, and that my code would check before every user data manipulation. Then if a user tries to save while that flag is on, they would just get a pop-up or something that tells them to wait a few minutes.
Would this be OK? Is there a better way of going about this?
There is a way of locking a table so as to be the sole user of that table.
There are READ locks and WRITE locks, but in this situation a WRITE lock is probably the solution.
A WRITE lock has the following features:
Only the session that holds the lock on a table can read and write data in it.
Other sessions cannot read from or write to the table until the WRITE lock is released.
To lock a table in MySQL, simply use:
LOCK TABLE table_name WRITE;
If needed for more than one table, simply add them with a comma:
LOCK TABLES table_name1 WRITE,
table_name2 WRITE,
... ;
When the queries are finished, you can simply use UNLOCK TABLES; to unlock the tables.
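Since an error between the lock and the unlock would otherwise leave the lock held until the session ends, it is worth structuring the code so the unlock always runs. A minimal sketch of the pattern, in Python for self-containment (the thread is PHP/MySQL); `StubConnection` is a stand-in that just records the SQL it is given, where a real driver would execute it:

```python
class StubConnection:
    """Records every statement instead of sending it to a server."""
    def __init__(self):
        self.executed = []
    def execute(self, sql):
        self.executed.append(sql)

def update_with_lock(conn, work):
    conn.execute("LOCK TABLES deal WRITE")
    try:
        work(conn)                     # the queries between lock and unlock
    finally:
        conn.execute("UNLOCK TABLES")  # always runs, even on an exception

def failing_work(conn):
    raise RuntimeError("simulated error between lock and unlock")

conn = StubConnection()
try:
    update_with_lock(conn, failing_work)
except RuntimeError:
    pass
print(conn.executed)   # ['LOCK TABLES deal WRITE', 'UNLOCK TABLES']
```

In PHP the equivalent is a try/finally around the queries. Note also that if the client connection itself drops, the server releases the session's table locks automatically.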
Visit https://www.mysqltutorial.org/mysql-table-locking/ for a more complete tutorial on mysql table locking.
I have to update a big table (products) in a MySQL database every 10 minutes with PHP. I run the PHP script with a cron job, and I get the most up-to-date products from a CSV file. The table currently has ~18,000 rows, and unfortunately I cannot tell how much it will change in a 10-minute period. The most important thing, of course, is that I do not want the users to notice the update happening in the background.
These are my ideas and fears:
Idea 1: I know there is a way to load a CSV file into a table with MySQL, so maybe I can use a transaction to truncate the table and import the CSV. But even if I use a transaction, since the table is large, I'm afraid there is a small chance that some users will see an empty table.
Idea 2: I could compare the old and the new CSV files with a library and only update/add/remove the changed rows. This way I think it's impossible for a user to see an empty table, but I'm afraid this method will cost a lot of RAM and CPU, and I'm on shared hosting.
So basically, I would like to know which method is the safest way to completely update a table without the users noticing it.
Assuming InnoDB and default isolation level, you can start a transaction, delete all rows, insert your new rows, then commit. Before the commit completes, users will see the previous state.
While the transaction is open (after the deletes), other sessions' updates will block, but SELECTs will not. Since it's a read-only table for the users, this won't be an issue: they'll still be able to SELECT while the transaction is open.
You can learn the details by reading about MVCC. The gist of it is that any time someone performs a SELECT, MySQL uses the data in the database plus the rollback segment to fetch the previous state until the transaction is committed or rolled back.
From MySQL docs:
InnoDB uses the information in the rollback segment to perform the undo operations needed in a transaction rollback. It also uses the information to build earlier versions of a row for a consistent read.
Only after the commit completes will the users see the new data instead of the old data, and they won't see the new data until their current transaction is over.
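The delete-then-insert-then-commit swap can be sketched as follows. SQLite is used here for a self-contained illustration (the answer assumes InnoDB, where readers additionally keep seeing the old rows until the commit); the table and row values are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO products VALUES (1, 'old-a'), (2, 'old-b')")
conn.commit()

# Start the refresh: delete everything, then load the rows from the new CSV.
conn.execute("DELETE FROM products")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [(1, "new-a"), (2, "new-b"), (3, "new-c")])

# If anything fails before the commit, a rollback restores the old rows.
conn.rollback()
print(conn.execute("SELECT name FROM products ORDER BY id").fetchall())
# [('old-a',), ('old-b',)]

# Redo the swap and commit: only now does the table hold the new data.
conn.execute("DELETE FROM products")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [(1, "new-a"), (2, "new-b"), (3, "new-c")])
conn.commit()
```

Note that in MySQL you must use DELETE rather than TRUNCATE here: TRUNCATE TABLE causes an implicit commit and cannot be rolled back.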
I have two tables, 'reservation' and 'spot'. During a reservation process, the 'spotStatus' column in the spot table is checked and, if free, updated. A user is allowed to reserve only one spot, so what can I do to make sure that no other user can reserve the same spot?
Referring to some answers here, I found row locking and table locking as solutions. Should I perform queries like
"select * from spot where spotId = id for update;"
and then perform the necessary update to the status, or is there a more elegant way to do it?
My concern is what happens to the locked row if:
1. the transaction does not complete successfully?
2. both users try to reserve the same row at the same time? Are both transactions cancelled?
And when is the lock released?
The problem here is race conditions, which even transactions will not prevent by default if used naively. Even if two reservations happen simultaneously (for example, originating from two different Apache processes running PHP), transactional locking will just ensure the reservations are properly serialized, and as such the second one will still overwrite the first.
Usually this situation is of no real concern: given the speed of databases and servers as a whole, compared to the load on an average reservation site, the chances of this ever causing a problem are less than winning the state lottery twice in a row. If, however, you are implementing a site that's going to sell 50k Coldplay concert tickets in 30 seconds, the odds rise sharply.
A simple solution to this is to implement a sort of 'reservation intent' by not overwriting the spot reservation directly, but by appending the intent-to-reserve to a separate timestamped table. After this insertion you can then clean up this table for duplicates, preferring the oldest, and apply that one to the real-time data.
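The intent-to-reserve idea can be sketched like this. A Python illustration; the list stands in for the timestamped intent table, and all field and function names are my own:

```python
intents = []  # list of (spot_id, user, timestamp): the intent-to-reserve table

def record_intent(spot_id, user, ts):
    """Append an intent instead of overwriting the spot directly."""
    intents.append((spot_id, user, ts))

def resolve_intents():
    """Cleanup pass: return {spot_id: winning user}, preferring the oldest intent."""
    winners = {}
    for spot_id, user, ts in sorted(intents, key=lambda i: i[2]):
        winners.setdefault(spot_id, user)   # first (oldest) intent wins
    return winners

record_intent(spot_id=5, user="alice", ts=100.0)
record_intent(spot_id=5, user="bob", ts=100.2)   # 0.2 s later: loses
print(resolve_intents())   # {5: 'alice'}
```

Only the winners returned by the cleanup pass are then applied to the real-time spot data.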
If it's not successful, the database returns to the same state it was in before the transaction (rollback), as if the transaction never happened.
The same as if they had not arrived at the same time: only one of them will acquire the lock, and the other's reservation will not be created.
If you are using Teradata, you can use its queue table concept.
How to implement pessimistic locking in a php/mysql web application?
web-user opens a page to edit one dataset (row)
web-user clicks on the button "lock", so other users are able to read but not to write this dataset
web-user makes some modifications (takes maybe 1 to 30 minutes)
web-user clicks "save" or "cancel" and the "lock" is removed
Are there standard methods in PHP/MySQL for this scenario? What happens if the web user never clicks "save"/"cancel" but just closes Internet Explorer?
You need to implement a LOCKDATE and a LOCKWHO field in your table. I've done that in many applications outside of PHP/MySQL, and it's always done the same way.
The lock is terminated when the TTL has passed, so you can compare NOW() with LOCKDATE to see if the object has been locked for more than 30 minutes or 1 hour, as you wish.
Another factor to consider is whether the current user is the one locking the object; that's why you also need a LOCKWHO. This can be a user_id from your database or a session_id from PHP, but keep it to something that identifies a user; an IP address is not a good way to do it.
Finally, always think of a mass-unlock feature that simply resets all LOCKDATEs and LOCKWHOs...
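The checks above can be sketched as follows, in Python with an in-memory dict standing in for the LOCKDATE/LOCKWHO columns. The 30-minute TTL and the mass-unlock follow the answer; the function names are my own:

```python
import time

TTL_SECONDS = 30 * 60          # a lock older than this is considered expired
locks = {}                     # row_id -> (lockwho, lockdate)

def try_lock(row_id, who, now=None):
    """Acquire the row lock unless another user holds a still-live lock."""
    now = now if now is not None else time.time()
    holder = locks.get(row_id)
    if holder is not None:
        lockwho, lockdate = holder
        if lockwho != who and now - lockdate < TTL_SECONDS:
            return False       # someone else holds a live lock
    locks[row_id] = (who, now) # acquire (or refresh our own) lock
    return True

def mass_unlock():
    """Reset all LOCKDATEs and LOCKWHOs at once."""
    locks.clear()

print(try_lock(1, "alice", now=0.0))     # True: row was free
print(try_lock(1, "bob", now=60.0))      # False: alice's lock is live
print(try_lock(1, "bob", now=1900.0))    # True: TTL (1800 s) has expired
```

On a real table the same logic becomes an UPDATE whose WHERE clause checks LOCKWHO and compares LOCKDATE against NOW().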
Cheers
I would write the locks in one centralized table instead of adding fields to all tables.
Example table structure :
tblLocks
TableName (the name of the locked table)
RowID (Primary key of locked table row)
LockDateTime (When the row was locked)
LockUser (Who locked the row)
With this approach you can find all locks that are made by a user without having to scan all tables. You could kill all locks when user logs out for example.
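The centralized table can be sketched like this, using SQLite for a self-contained illustration (the thread assumes MySQL). The column names follow the structure above; making (TableName, RowID) the primary key is my own addition, so that acquiring a lock is a single atomic insert:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tblLocks (
    TableName    TEXT NOT NULL,
    RowID        INTEGER NOT NULL,
    LockDateTime REAL NOT NULL,
    LockUser     TEXT NOT NULL,
    PRIMARY KEY (TableName, RowID))""")

def lock_row(table, row_id, user):
    """Insert a lock row; fails atomically if the row is already locked."""
    try:
        conn.execute("INSERT INTO tblLocks VALUES (?, ?, ?, ?)",
                     (table, row_id, time.time(), user))
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        return False

def unlock_all_for(user):
    """Kill all of a user's locks, e.g. when they log out."""
    conn.execute("DELETE FROM tblLocks WHERE LockUser = ?", (user,))
    conn.commit()

print(lock_row("datasets", 7, "alice"))  # True
print(lock_row("datasets", 7, "bob"))    # False: already locked
unlock_all_for("alice")
print(lock_row("datasets", 7, "bob"))    # True now
```

Finding all of a user's locks is then a single query on tblLocks instead of a scan over every table.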
Traditionally this is done with a boolean locked column on the record in the database that is flagged appropriately.
It is inherent to this sort of locking that the lock has to be released, and circumstances may prevent this happening naturally (system crashes, user stupidity, dropped network packets, etc.). This is why you would need to provide some manual unlock method and/or impose a time limit (maybe with a cron job?) on how long a record can be locked for. You could implement some kind of AJAX poll to keep the record locked while the browser is still open. At any rate, you would probably be best to verify that the data in the record is the same as it was when the lock was acquired before you modify it.
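The verify-before-modify step can be sketched as a conditional UPDATE: the value seen when the lock was acquired goes into the WHERE clause, so the write only applies if nothing changed in the meantime. A Python/SQLite illustration (table and column names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, body TEXT, locked INTEGER)")
conn.execute("INSERT INTO records VALUES (1, 'original', 1)")
conn.commit()

def save_if_unchanged(row_id, seen_body, new_body):
    """Apply the edit and clear the lock only if the row still matches seen_body."""
    cur = conn.execute(
        "UPDATE records SET body = ?, locked = 0 WHERE id = ? AND body = ?",
        (new_body, row_id, seen_body))
    conn.commit()
    return cur.rowcount == 1   # 0 rows updated: someone changed it under us

print(save_if_unchanged(1, "stale copy", "my edit"))  # False: mismatch, nothing saved
print(save_if_unchanged(1, "original", "my edit"))    # True: edit applied
```

Checking the affected-row count tells the caller whether the save succeeded or the record needs to be reloaded.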
This limitation is particularly prevalent in web applications, but it is true of anything that uses this approach. Sage Line 50, for one, is a bugger for it; I regularly have to delete lock files after machine/application crashes.