My site is hosted on shared hosting. It's a POS application (PHP, CodeIgniter) with several users, each of whom generates invoices. The invoice number is incremental: when a user submits an invoice form, the application fetches the last invoice number, increments it by one, and then creates a new row with the new invoice number. Very rarely, this process generates a duplicate invoice number when two users submit the form at almost the same time.
One possible way is to make the invoice number unique in the database. But if a duplicate happens again, the user will see an exception or a formatted error message.
I don't want to show an error to my users, because when they submit the invoice form it contains the sales information they have just typed in. If they lose it because of this warning, they will be upset. AJAX won't work here; the invoice form is submitted directly.
Can a SQL lock be applied in this situation? I have no experience with SQL locking.
If performance is not your main concern, an inefficient but simple way to do this would be something like the following: insert the invoice with a null/zero invoice number and have another query update it, like
INSERT INTO invoices (id, invoice_number) VALUES (10001, null);
UPDATE invoices SET invoice_number = id WHERE invoice_number IS NULL;
For locking, you can look into SELECT ... FOR UPDATE, which locks the rows it reads and, depending on the isolation level, also blocks inserts from other connections into the scanned range. It's best to try it on your own database, since the exact behaviour depends on your MySQL version and the isolation level in use.
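A minimal sketch of the locking approach, assuming an invoices table with an invoice_number column (the table and column names come from the question, the rest is an assumption) and InnoDB's default REPEATABLE READ isolation level:

START TRANSACTION;
-- Lock the scanned rows; under REPEATABLE READ the next-key locks taken here
-- also block other sessions from inserting a competing invoice number
-- until this transaction commits.
SELECT @next := COALESCE(MAX(invoice_number), 0) + 1
  FROM invoices
   FOR UPDATE;
INSERT INTO invoices (invoice_number) VALUES (@next);
COMMIT;

Because the read and the insert happen inside one transaction, two forms submitted at the same time can no longer both see the same "last" number; the second submission simply waits for the first to commit.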
I am setting up a new part of an application with historical data requirements for the transactions table in MySQL. Originally, in the old version, transactions were not historical, with a structure like this:
id|buyerid|prodid|price|status
Plus other fields. The id is referenced in links to the Transaction Details page, and it is also used as a foreign key in other tables across the application to reference particular transactions for various purposes.
Now the requirement is to answer reporting questions like "Show all transactions that had a particular status in Feb 2014" and "What did a transaction look like in Feb 2014?".
The new design I'm testing at the moment is below:
id|buyerid|prodid|price|status|active|start_date|end_date
Here active indicates the latest record and start_date is when it was created. Records are never modified; instead, end_date is populated and a new record is created with the same details plus the modification.
Now the question is: what to do about the transaction id field? In this new design it is more of a history id, and it cannot be used as a foreign key across the application since it changes with every update.
I can think of two options:
1. Create a separate table, transaction_ids, with just one column: a primary key auto-increment tid. Add a foreign key column tid to the main transactions table. Every time a brand new transaction is created, insert into the ids table and use the generated id as the tid to trace this particular transaction across the system.
2. Use the buyerid and prodid combination as the identifier, since it is always unique in my application; no buyer can get the same product twice.
Is the second solution better? Does anyone know of a better way to handle this?
What you are trying to achieve is called Event Sourcing.
Think in terms of events changing the status of your transaction, rather than tracing the status itself in time.
You still have your transaction with its own primary key, and you rebuild the current (or past) status applying each event.
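As a rough illustration of the idea (transaction_events and its columns are assumed names, not a prescribed schema): the transaction keeps its stable id, and every status change becomes an appended event.

CREATE TABLE transaction_events (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    transaction_id INT UNSIGNED NOT NULL,   -- the stable id other tables reference
    new_status VARCHAR(32) NOT NULL,
    created_at DATETIME NOT NULL
);

-- current status of each transaction = its latest event
SELECT e.transaction_id, e.new_status
FROM transaction_events e
JOIN (
    SELECT transaction_id, MAX(created_at) AS latest
    FROM transaction_events
    GROUP BY transaction_id
) last ON last.transaction_id = e.transaction_id
      AND last.latest = e.created_at;

Restricting the inner query to created_at < '2014-03-01' answers "what did the transaction look like in Feb 2014", and the transaction id itself never changes, so it stays usable as a foreign key.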
I would also suggest that you start by coding your business models, and only after that think about persistence and the best way to map them to a database.
The second solution looks better, although I will say that there is a lot of ambiguity in your question.
I am saying the second solution is better because the transaction_ids table you describe in solution 1 is basically redundant: it does not serve any purpose. Even if the transaction id repeats itself in the transactions table, that does not mean you need a separate table to generate the ids and turn it into a PK-FK relation. Most probably you will still be querying the data by user-id and prod-id, not by transaction-id.
Basically, what you need is some kind of audit history table where you insert a record for every operation/transaction/modification and capture some basic details such as username, date/time, old value, new value, etc. You do not need the status or start date and end date columns. Once a record is inserted into this audit history table, it is never touched again.
You will have to design your report carefully.
Taking the two previous answers into consideration, here is the solution I will go with: all of the data updates in my application come through one single function that is already set up to audit particular fields of my choosing, so I will mark the transaction status to be audited among them. The structure of the audit table is similar to this:
|id|table|table_id|column|old_val|new_val|who|when|
The only difference is that there is somewhat more advanced object mapping via object ids instead of a simple table name. I can then join this data to the main, normal (non-historical) transactions table to provide the required reporting.
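A rough sketch of how a point-in-time report could be read out of such a table (audit_log is a hypothetical name; the columns mirror the structure above):

-- last status change of transaction 42 on or before the end of Feb 2014
SELECT a.new_val AS status_at_the_time, a.`when` AS changed_at
FROM audit_log a
WHERE a.`table` = 'transactions'
  AND a.table_id = 42
  AND a.`column` = 'status'
  AND a.`when` < '2014-03-01'
ORDER BY a.`when` DESC
LIMIT 1;

Joined back to the normal transactions table on table_id = transactions.id, this gives "what the transaction looked like" at that point without duplicating whole rows.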
I created a ticketing system that in its simplest form just records a user joining the queue, and prints out a ticket with the queue number.
When the user presses for a ticket, the following happens in the database
INSERT details INTO All_Transactions_Table
SELECT COUNT(*) as ticketNum FROM All_Transactions_Table WHERE date is TODAY
This serves me well in most cases. However, I recently started to see some duplicate ticket numbers. I can't seem to replicate the issue even after running the web service multiple times myself.
My guess of how it could happen is that in some scenarios the INSERT happened only AFTER the SELECT COUNT. But this is an InnoDB table and I am not using INSERT DELAYED. Does InnoDB have any such implicit mechanisms?
I think your problem is that you have a race condition. Imagine that two people come in to get tickets. Here's person one:
INSERT details INTO All_Transactions_Table
Then, before the SELECT COUNT(*) can happen, person two comes along and does:
INSERT details INTO All_Transactions_Table
Now both users get the same ticket number. This can be very hard to replicate with your existing code because it depends on the exact scheduling of threads within MySQL, which is totally beyond your control.
The best solution to this would be to use some kind of AUTO_INCREMENT column to provide the ticket number, but failing that, you can probably use transactions to achieve what you want:
START TRANSACTION
SELECT COUNT(*) + 1 as ticketNum FROM All_Transactions_Table WHERE date is TODAY FOR UPDATE
INSERT details INTO All_Transactions_Table
COMMIT
However, whether or not this works will depend on what transaction isolation level you have set, and it will not be very efficient.
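To make the transactional version concrete, here is one way it might look, assuming ticket_date and ticket_num columns on All_Transactions_Table (the column names are made up) and the default REPEATABLE READ isolation level:

START TRANSACTION;
-- FOR UPDATE takes next-key locks on the scanned rows, so a second session
-- computing today's ticket number blocks until this transaction commits.
SELECT @ticket := COALESCE(MAX(ticket_num), 0) + 1
  FROM All_Transactions_Table
 WHERE ticket_date = CURDATE()
   FOR UPDATE;
INSERT INTO All_Transactions_Table (ticket_date, ticket_num)
VALUES (CURDATE(), @ticket);
COMMIT;

MAX() + 1 is used instead of COUNT(*) so that a deleted row cannot cause the next ticket number to collide with an existing one.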
I have a query that, on every user purchase, gets the currently highest receipt_counter number from the receipts table in order to create a new receipt. receipt_counter is not unique in the table because it resets every year.
receipt_counter is just an integer that is used to generate a receipt_label that looks like "pos_id"-"receipt_counter".
There is a possibility that people can buy a product simultaneously on the same point of sale (pos_id).
The function that gets a new receipt_counter looks like this:
SELECT (MAX(receipt_counter) + 1) as next_receipt_counter FROM receipts
The problem is that when multiple people buy a product simultaneously, which triggers generating a new receipt (along with its receipt number), a collision sometimes occurs (multiple people get the same receipt number), because there is some delay between retrieving the receipt counter and inserting the new receipt into the DB.
Is there a best practice for dealing with this kind of problem? Do I need to use some kind of lock, or is my initial idea flawed and I need to change my tactic for generating the receipt counter altogether?
EDIT: receipt_counter needs to be a sequential number without gaps.
there is some delay between retrieving receipt counter and inserting new receipt into DB
You can change your software so that, instead of retrieving the ID without creating the actual receipt, it creates the receipt (with a "pending" state or something like that) and then retrieves its ID. At the point where you currently create the receipt, you would just set its status to "active" or similar.
Doing it this way, you get rid of the time gap between getting an ID and storing the record, which, from my point of view, is the main source of your problems.
You can create a separate table for the id only and enable AUTO_INCREMENT on that id column. Then add a receipt in two steps: first add a new record to the id table to get back the generated id, then add the actual receipt using that id. When you want to reset the increment counter, just truncate the id table.
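A minimal sketch of that idea (receipt_counter_seq is a hypothetical name, and the receipts columns are assumed from the question; the pos_id value is illustrative):

CREATE TABLE receipt_counter_seq (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
);

-- step 1: generate the next counter value
INSERT INTO receipt_counter_seq VALUES (NULL);

-- step 2: create the receipt with the value just generated on this connection
INSERT INTO receipts (pos_id, receipt_counter, receipt_label)
VALUES (7, LAST_INSERT_ID(), CONCAT('7-', LAST_INSERT_ID()));

-- at the start of a new year:
TRUNCATE TABLE receipt_counter_seq;  -- also resets the AUTO_INCREMENT value

Note that this gives a single counter shared by all points of sale (one sequence table per pos_id would be needed for per-POS numbering), and gaps can still appear if a transaction rolls back after step 1.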
Does the receipt_counter need to be an increasing number without gaps?
If an increasing large number with gaps is okay, how about generating a number out of the current date/time? If you go down to milliseconds or nanoseconds, the chance of a collision is pretty low.
For example:
2013-11-13 13:08:15.012 -> 1113130815012
(I omitted the year because you said the number is reset every year anyway)
Just a quickie. I am developing a website where you can buy credits and spend them later on things on the website.
My question is: is it OK to store the amount of credits with the user (a credits column with an integer amount in the user table), or is it necessary (or just better) to have a separate table with user id and amount?
Thanks
Both actually.
Considering that you'll be dealing with monetary transactions to get those credits, you want to be able to get a log of all transactions (depending on the laws in your country, you will NEED this). Therefore you'll need a credits_transactions table.
user_id, transaction_id, transaction_details, transaction_delta
Since programmatically calculating the current credit balance would be too costly for users with a lot of transactions, you'll also need a credit_balance column in your user table for quick access. Use triggers to automatically update that column whenever a row is inserted into credits_transactions (technically, UPDATE and DELETE shouldn't be allowed on that table). Here is the code for the insert trigger.
DELIMITER ;;
CREATE TRIGGER ct_insert
AFTER INSERT ON credits_transactions
FOR EACH ROW
BEGIN
    -- keep the denormalized balance in sync with the transaction log
    UPDATE users
       SET credit_balance = credit_balance + NEW.transaction_delta
     WHERE user_id = NEW.user_id;
END;;
DELIMITER ;
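For example (the values are illustrative), crediting a purchase and then spending some credits are just two inserts, and the trigger keeps users.credit_balance in sync:

INSERT INTO credits_transactions (user_id, transaction_details, transaction_delta)
VALUES (1, 'Bought 100 credits', 100);

INSERT INTO credits_transactions (user_id, transaction_details, transaction_delta)
VALUES (1, 'Spent 30 credits on feature X', -30);

SELECT credit_balance FROM users WHERE user_id = 1;  -- 70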
I also have sites with credits and found it easiest to store them in the user table, mostly because you need access to the value on every page (when the user is logged in). It is only an integer, so it will not do much harm. I think creating a new table for this value might actually be worse performance-wise because it needs an index as well.
A good rule of thumb is to keep the info you need on every page in the user table, and normalise the data you don't need on every page (for example address information, descriptions, etc.).
Edit:
Seeing the other reactions: if you want transaction logs as well, I would store them separately, as they are mainly for logging (or for when the user wants to view them). Calculating the balance on the fly from the log is fine for smaller sites, but if you really have to squeeze out performance, just store the actual value in the user table.
If you store them in a separate table, you can keep a log of credit changes. If you store them in a column, you will only have the current amount of credits.
If you want to keep a credit history log, like:
how many credits were bought today,
how many were spent yesterday,
what was bought with credits,
then I think it is better to put this in a separate table. That way you can get these kinds of results by applying aggregate operations, as in the sketch below.
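For instance, with a credit_history table holding (user_id, delta, description, created_at) - names assumed here for illustration - those questions become simple aggregates:

-- credits bought today
SELECT SUM(delta) FROM credit_history
WHERE delta > 0 AND DATE(created_at) = CURDATE();

-- credits spent yesterday
SELECT SUM(-delta) FROM credit_history
WHERE delta < 0 AND DATE(created_at) = CURDATE() - INTERVAL 1 DAY;

-- what a user bought with credits
SELECT description, created_at FROM credit_history
WHERE user_id = 1 AND delta < 0;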
Credits are like money. If a user needs to purchase them, then they are money. Money is tracked using accounts. An account has associated transactions, deposits and withdrawals, and a balance. Search SO or Google for database account design; there are plenty of examples.
I'd have a table which stores the purchases and bought credits, with user id.
Then calculate the balance each time based on this; it should be fast if it's indexed, and this way you can easily show a purchase history as well.
I'm used to building websites with user accounts, so I can simply auto-increment the user id, then let them log in while I identify that user by user id internally. What I need to do in this case is a bit different. I need to anonymously collect a few rows of data from people, and tie those rows together so I can easily discern which data rows belong to which user.
The difficulty I'm having is in generating the id to tie the data rows together. My first thought was to poll the database for the highest user ID in existence, and write to the database with user ID +1. This will fail, however, if two submissions poll the database before either of them writes to it - they will each share the same user ID.
Another thought I had was to create a separate user ID table that would be set to auto-increment, and simply generate a new row, then poll that table for the id of the last row created. That also fails for the same reason as above - if two submissions create a row before either of them polls for the latest user ID, then they'll end up sharing an ID.
Any ideas? I get the impression I'm missing something obvious.
I think I'm understanding you right; I was having a similar issue. There's a super handy PHP function, though. After you run the query that inserts a new row and auto-increments the user ID, do:
$user_id = mysql_insert_id();
That just returns the auto-increment value generated by the previous query on the current MySQL connection. You can read more about it in the PHP manual if you need to.
You can then use this to populate the second table's data, being sure nobody will get a duplicate ID from the first one.
You need to insert the user, get the auto-generated id, and then use that id as a foreign key in the couple of rows you need to associate with the parent record. The hat rack must exist before you can hang hats on it.
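On the SQL side, the same pattern uses LAST_INSERT_ID(); roughly like this (table and column names are only placeholders for whatever your schema uses):

INSERT INTO users (created_at) VALUES (NOW());
SET @new_user_id = LAST_INSERT_ID();   -- per-connection, so it is safe under concurrency

INSERT INTO survey_answers (user_id, answer) VALUES (@new_user_id, 'first row');
INSERT INTO survey_answers (user_id, answer) VALUES (@new_user_id, 'second row');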
This is a common issue, and to solve it you would use a transaction. That gives you atomicity: being able to do more than one thing, but having it tied to either success or failure as a package. It's an advanced DB feature, and it requires awareness of some more advanced programming in order to implement it in as fault-tolerant a manner as possible.