I've been reading through several topics now and did some research about logging changes to a MySQL table. First let me explain my situation:
I have a ticket system with a table: 'ticket'
So far I have created triggers that insert a duplicate row into a table 'ticket_history', which has "action", "user" and "timestamp" as additional columns. After some weeks of testing I'm not really happy with that build, since every change creates a full copy of the row in the history table. I understand that disk space is cheap and I shouldn't worry about it, but retrieving some kind of log or a nice-looking history for the user is painful, at least for me. Also, with the trigger I've written I get a new row in the history even if nothing changed, but that is just a design flaw of my trigger!
Here is my trigger:
DELIMITER $$
CREATE TRIGGER trg_ticket_history   -- trigger name is illustrative
BEFORE UPDATE ON ticket FOR EACH ROW
BEGIN
    INSERT INTO ticket_history
    SET idticket        = NEW.idticket,
        time_arrival    = NEW.time_arrival,
        idticket_status = NEW.idticket_status,
        tmp_user        = NEW.tmp_user,
        action          = 'update',
        timestamp       = NOW();
END$$
DELIMITER ;
My new approach in order to avoid having triggers
After spending some time on this topic I came up with an approach I would like to discuss and implement. But first I have some questions about it:
My idea is to create a new table:
id  sql_fwd    sql_bwd    keys    values  user  timestamp
----------------------------------------------------------
1   UPDATE...  UPDATE...  status  5       14    12345678
2   UPDATE...  UPDATE...  status  4       7     12345678
The flow would look like this in my mind:
First I would select one or more values from the DB:
SELECT keys FROM ticket;
Then I display the data in two input fields, one visible and one hidden copy of the original value:
<input name="key" value="value" />
<input type="hidden" name="key_original" value="value" />
Hit submit and give it to my function:
I would start with a SELECT again: SELECT * FROM ticket;
and make sure that the hidden input field still equals the value from that latest SELECT. If so, I can proceed and know that no other user has changed anything in the meantime. If the hidden field does not match, I send the user back to the form and display a message.
Next I would build the SQL query for the action and also the query to undo those changes.
$sql_fwd = "UPDATE ticket
SET idticket_status = 1
WHERE idticket = '".$c_get['id']."';";
$sql_bwd = "UPDATE ticket
SET idticket_status = 0
WHERE idticket = '".$c_get['id']."';";
Having that, I run the UPDATE on ticket and insert a new entry in my new table for logging.
With that I can try to catch possible overwrites while two users are editing the same ticket at the same time, and for my history I could simply look up the keys and values and generate some kind of list. Also, having the sql_bwd I can simply undo changes.
My questions to that would be:
Would it be noticeable doing an additional SELECT every time I want to update something?
Do I lose some benefits I would have with triggers?
Are there any big disadvantages?
Are there any functions on my MySQL server or in PHP which already do something like that?
Or is there maybe a much easier way to do something like that?
Or would a slight change to the trigger I already have be enough? (See the sketch after these questions for what I mean.)
If I understand this correctly, MySQL only performs an update if a value has actually changed, but the trigger is executed anyway, right?
If I'm able to change the trigger, can I still somehow prevent the overwriting of data while two users try to edit the same ticket at the same time on the MySQL server, or would I do this with PHP anyway?
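Something like this is what I mean by a "slight change", just a sketch, guarding on the columns I actually care about (here idticket_status and tmp_user as examples):
DELIMITER $$
CREATE TRIGGER trg_ticket_history
BEFORE UPDATE ON ticket FOR EACH ROW
BEGIN
    -- only log when one of the watched values really changed (<=> is NULL-safe)
    IF NOT (OLD.idticket_status <=> NEW.idticket_status)
       OR NOT (OLD.tmp_user <=> NEW.tmp_user) THEN
        INSERT INTO ticket_history
        SET idticket        = NEW.idticket,
            time_arrival    = NEW.time_arrival,
            idticket_status = NEW.idticket_status,
            tmp_user        = NEW.tmp_user,
            action          = 'update',
            timestamp       = NOW();
    END IF;
END$$
DELIMITER ;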
Thank you for the help already
Another approach...
When a worker starts to make a change...
Store the time and worker_id in the row.
Proceed to do the tasks.
When the worker finishes, fetch the last worker_id that touched the record; if it is himself, all is well. Clear the time and worker_id.
If, on the other hand, another worker slips in, then some resolution is needed. This gets into your concept that some things can proceed in parallel.
Comments could be added to a different table, hence no conflict.
Changing the priority may not be an issue by itself.
Other things may be messier.
It may be better to have another table for the time & worker_ids (& ticket_id). This would allow for flagging that multiple workers are currently touching a single record.
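A sketch of that extra table, with made-up names:
CREATE TABLE ticket_editing (             -- hypothetical name
    idticket  INT UNSIGNED NOT NULL,
    worker_id INT UNSIGNED NOT NULL,
    started   TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (idticket, worker_id)
) ENGINE=InnoDB;

-- when a worker starts touching a ticket:
--   INSERT INTO ticket_editing (idticket, worker_id) VALUES (?, ?);
-- when he finishes, check whether someone else slipped in, then clean up:
--   SELECT worker_id FROM ticket_editing WHERE idticket = ? AND worker_id <> ?;
--   DELETE FROM ticket_editing WHERE idticket = ? AND worker_id = ?;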
As for History versus Current, I (usually) like to have 2 tables:
History -- blow-by-blow list of what changes were made, when, and by whom. This table is only INSERTed into.
Current -- the current status of the ticket. This table is mostly UPDATEd.
Also, I prefer to write the History directly from the "database layer" of the app, not via Triggers. This gives me much better control over the details of what goes into each table and when. Plus the 'transactions' are clear. This gives me confidence that I am keeping the two tables in sync:
BEGIN; INSERT INTO History...; UPDATE Current...; COMMIT;
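With the question's tables standing in for History and Current, that might look roughly like the sketch below (the values are illustrative, and both tables need to be InnoDB for the all-or-nothing guarantee):
BEGIN;

-- write the audit row first...
INSERT INTO ticket_history
       (idticket, idticket_status, tmp_user, action, `timestamp`)
VALUES (1234, 5, 14, 'update', NOW());

-- ...then update the current state; both succeed or neither does
UPDATE ticket
SET    idticket_status = 5
WHERE  idticket = 1234;

COMMIT;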
I've answered a similar question before. You'll see some good alternatives in that question.
In your case, I think you're merging several concerns - one is "storing an audit trail", and the other is "managing the case where many clients may want to update a single row".
Firstly, I don't like triggers. They are a side effect of some other action, and for non-trivial cases, they make debugging much harder. A poorly designed trigger or audit table can really slow down your application, and you have to make sure that your trigger logic is coordinated between lots of developers. I realize this is personal preference and bias.
Secondly, in my experience, the requirement is rarely "show the status of this one table over time" - it's nearly always "allow me to see what happened to the system over time", and if that requirement exists at all, it's usually fairly high priority. With a ticketing system, for instance, you probably want the name and email address of the users who created and changed the ticket status, the name of the category/classification, perhaps the name of the project, etc. All of those attributes are likely to be foreign keys to other tables. And when something does happen that requires an audit, the requirement is likely "let me see immediately", not "get a database developer to spend hours trying to piece together the picture from 8 different history tables". In a ticketing system, it's likely a requirement for the ticket detail screen to show this.
If all that is true, then I don't think history tables populated by triggers are a good idea - you have to build all the business logic into two sets of code, one to show the "regular" application, and one to show the "audit trail".
Instead, you might want to build "time" into your data model (that was the point of my answer to the other question).
Since then, a new style of data architecture has come along, known as CQRS. This requires a very different way of looking at application design, but it is explicitly designed for reactive applications; these offer much nicer ways of dealing with the "what happens if someone edits the record while the current user is completing the form" question. Stack Overflow is an example - we can see, whilst typing our comments or answers, whether the question was updated, or other answers or comments are posted. There's a reactive library for PHP.
I understand that disk space is cheap and I shouldn't worry about it, but retrieving some kind of log or a nice-looking history for the user is painful, at least for me.
A large history table is not necessarily a problem. Huge tables only use disk space, which is cheap. They slow things down only when making queries on them. Fortunately, the history is not something you'd use all the time, most likely it is only used to solve problems or for auditing.
It is useful to partition the history table, for example by month or week. This allows you to simply drop very old records, and more importantly, since the history of previous months has already been backed up, your daily backup schedule only needs to back up the current month. This means a huge history table will not slow down your backups.
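For example, monthly partitions on the question's ticket_history table might look like the sketch below (note that the `timestamp` column must be part of every unique key, including the primary key, for MySQL to accept this):
ALTER TABLE ticket_history
PARTITION BY RANGE (TO_DAYS(`timestamp`)) (
    PARTITION p2016_01 VALUES LESS THAN (TO_DAYS('2016-02-01')),
    PARTITION p2016_02 VALUES LESS THAN (TO_DAYS('2016-03-01')),
    PARTITION pmax     VALUES LESS THAN MAXVALUE
);

-- dropping a whole old month is then nearly instant:
-- ALTER TABLE ticket_history DROP PARTITION p2016_01;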
With that I can try to catch possible overwrites while two users are editing the same ticket at the same time
There is a simple solution:
Add a column "version_number".
When you select with intent to modify, you grab this version_number.
Then, when the user submits new data, you do:
UPDATE ...
SET all modified columns,
version_number=version_number+1
WHERE ticket_id=...
AND version_number = (the value you got)
If someone came in-between and modified it, then they will have incremented the version number, so the WHERE will not find the row. The query will return a row count of 0. Thus you know it was modified. You can then SELECT it, compare the values, and offer conflict resolution options to the user.
You can also add columns like who modified it last, and when, and present this information to the user.
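In PHP that check might look roughly like this (PDO, a version_number column, and the variables $pdo, $newStatus, $ticketId and $versionFromForm are assumed):
$stmt = $pdo->prepare(
    'UPDATE ticket
     SET idticket_status = :status,
         version_number  = version_number + 1
     WHERE idticket       = :id
       AND version_number = :version'
);
$stmt->execute([
    ':status'  => $newStatus,
    ':id'      => $ticketId,
    ':version' => $versionFromForm,   // the version_number loaded into the edit form
]);

if ($stmt->rowCount() === 0) {
    // someone else changed the ticket in the meantime:
    // re-SELECT the row, show the current values and offer conflict resolution
}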
If you want the user who opens the modification page to lock out other users, it can be done too, but this needs a timeout (in case they leave the window open and go home, for example). So this is more complex.
Now, about history:
You don't want to have, say, one large TEXT column called "comments" where everyone enters stuff, because it will need to be copied into the history every time someone adds even a single letter.
It is much better to view it like a forum: each ticket is like a topic, which can have a string of comments (like posts), stored in another table, with the info about who wrote it, when, etc. You can also historize that.
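For example, a hypothetical ticket_comment table along those lines:
CREATE TABLE ticket_comment (
    idcomment  INT UNSIGNED NOT NULL AUTO_INCREMENT,
    idticket   INT UNSIGNED NOT NULL,   -- which ticket (topic) the comment belongs to
    user_id    INT UNSIGNED NOT NULL,   -- who wrote it
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    body       TEXT NOT NULL,
    PRIMARY KEY (idcomment),
    KEY idx_ticket (idticket)
) ENGINE=InnoDB;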
The drawback of using a trigger is that the trigger does not know about the user who is logged in to the application, only the MySQL user. So if you want to record who did what, you will have to add a column with the user_id, as I proposed above. You can also use Rick James' solution. Both would work.
Remember though that MySQL triggers don't fire on foreign key cascade deletes... so if the row is deleted in this way, it won't work. In this case doing it in the application is better.
This one happened to me last night. I am quite familiar with the nature of the error, but still I cannot figure out what could have caused it. I might have a hunch, but I am not sure. I'll begin with some basic info about the app:
My app has 3 entities: Loan, SystemPage and TextPage. Whenever someone adds a loan, one or more system pages are added to the DB. Basically, it goes something like this:
if ($form->isValid()) {
    $this->em->getConnection()->beginTransaction();
    $this->em->persist($loan);
    $this->em->flush();

    while ($someCondition) {
        $page = new SystemPage();
        // ... fill the necessary data into $page
        $page->setObject($loan);
        $this->em->persist($page);
    }

    $this->em->flush();
    $this->em->getConnection()->commit();
}
Please ignore potential typos, I am writing this from memory.
The entity Loan is mapped to the table loans, and SystemPage is mapped (via inheritance mapping) to system_pages and base_pages. Both of the latter have an id field which is set to AUTO_INCREMENT.
My hunch: there is another table called text_pages. Given that text_pages and base_pages on one hand, and system_pages and base_pages on the other, share IDs, I am thinking that it could easily cause this:
User1: Create BasePage, acquire autoincrement ID (value = 1)
User2: Create BasePage, acquire autoincrement ID (value = 1)
User1: Create TextPage, use the ID from step 1
User2: Create SystemPage, use the ID from step 2
Two problems with this theory:
Transactions. That's why I used them in the first place.
At the time of the error there was no other activity in the app by another user.
Important: After waiting for a minute, resubmitting passed OK.
Could this be some weird MySQL transaction isolation bug? Any hint would be greatly appreciated...
Edit:
Part of DB Schema:
Please ignore the column names, which are in Serbian.
The flush() operation flushes all changes in one single transaction, so you have redundant code here...
You didn't state whether you can reproduce this bug, and it would help if you could provide the DB schema.
It seems there is no right answer to this question, only speculation, so I will provide some troubleshooting ideas based on my own experiences with a problem like this:
You mention there was no other activity on the app, but I would triple check that by looking at the query logs. There must be a duplicate query that was executed.
Maybe the form was submitted twice accidentally. The user double-clicked on the submit button, or they clicked again if the UI did not respond. You can check this idea by looking at the Apache log files for POST requests on your form around the same timestamp. You may need to implement some javascript code to prevent double-clicks on your form page submit button.
Your hunch is probably quite close to correct, in that there is some kind of race condition. Using transactions won't prevent race conditions, but they do provide the means to roll back gracefully. Wrap your code in a try/catch block so that you can catch the MySQL exception and present the user with a friendly error and the option to retry.
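A sketch of that, based on the code from the question (how the error is surfaced to the user depends on your framework, so that part is only a comment):
if ($form->isValid()) {
    $conn = $this->em->getConnection();
    $conn->beginTransaction();
    try {
        $this->em->persist($loan);
        $this->em->flush();

        while ($someCondition) {
            $page = new SystemPage();
            // ... fill the necessary data into $page
            $page->setObject($loan);
            $this->em->persist($page);
        }

        $this->em->flush();
        $conn->commit();
    } catch (\Exception $e) {
        $conn->rollBack();
        // log $e->getMessage(), then show a friendly "please try again" message
        // instead of letting the duplicate-key exception bubble up as a 500 page
    }
}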
I am working on a custom MVC application.
It's an ERP system where we need to add a restriction: if a record is opened by admin1, then another user (admin2) can view it but cannot change it.
I have read about locking tables and about transactions but didn't get a much clearer idea.
Can someone give an exact idea with some sample code?
Thanks
Whatever DB lock you acquire while the PHP script is running will be released upon script completion. A workaround is to add a column that will serve as a flag indicating that the record is being edited. Alternatively, you can use a timestamp that is updated via a trigger when the row is updated; you can then use that timestamp to check whether someone else has updated the record.
See http://www.akadia.com/services/ora_update_guide.html for examples of concurrency control.
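A minimal sketch of the flag-column idea, with illustrative table and column names (records, locked_by, locked_at) and PDO-style placeholders; the 15-minute timeout handles abandoned locks:
-- try to claim the record for the current admin
UPDATE records
SET    locked_by = :admin_id,
       locked_at = NOW()
WHERE  id = :record_id
  AND (locked_by IS NULL
       OR locked_at < NOW() - INTERVAL 15 MINUTE);

-- if the affected-row count is 0, someone else holds the lock: open the record read-only

-- when the editor saves or cancels, release the lock
UPDATE records
SET    locked_by = NULL,
       locked_at = NULL
WHERE  id = :record_id
  AND  locked_by = :admin_id;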
So I wanted to start a browser game project, just for practice.
After years of PHP programming, today I heard about transactions and InnoDB for the first time ever.
So I googled it and still have some questions.
My first encounter with it was on a website that said InnoDB would be necessary when programming a browser game, because it might be used by many people at the same time, and if two people access a database table at the same time (with one nanosecond difference, for example), it could get confusing and data might be lost, or your SELECT might not reflect the update made by the access one nanosecond earlier (because that script was still running and couldn't commit its change yet)... and so on.
And apparently transactions solve this problem by handling the first access completely before handling the second one. Is this correct?
Another feature is that if you have, for example, 2 queries in your transaction and the second one fails, it "rolls back" and "deletes" (or never applies) the changes of the first (successful) query. Right? So either everything goes as it should, or nothing changes at all. That would be great, I think.
Another question: when should I use transactions? Every time I access the database? Or is it better to use them just for some particular accesses to the database? And should I always use try {} catch() {}?
And one last question:
How does this transaction proceeds?
My understanding is the following:
You start a transaction
You do your queries and change the database or SELECT something
If everything went well, you commit the changes so they get applied to the database
If something went wrong with the queries, it cancels and jumps to the catch() {} block, where you roll back the transaction so the changes don't get applied
Is this correct? Of course, besides the question how to start, commit and rollback a transaction in your code.
Yes, this is correct. You can also create savepoints to mark your current point before running a query. I strongly recommend that you look at the MySQL reference documentation; it is explained there clearly.
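And since the last question was how starting, committing and rolling back looks in code, here is a minimal PDO sketch (DSN, credentials and the two example queries are made up; the table must use InnoDB):
$pdo = new PDO('mysql:host=localhost;dbname=game;charset=utf8', 'user', 'password', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, // throw exceptions so the catch block fires
]);

try {
    $pdo->beginTransaction();

    // two related changes that must succeed or fail together
    $stmt = $pdo->prepare('UPDATE players SET gold = gold - :amount WHERE id = :buyer');
    $stmt->execute([':amount' => 100, ':buyer' => 1]);

    $stmt = $pdo->prepare('UPDATE players SET gold = gold + :amount WHERE id = :seller');
    $stmt->execute([':amount' => 100, ':seller' => 2]);

    $pdo->commit();   // everything went well: both changes are applied
} catch (PDOException $e) {
    $pdo->rollBack(); // something failed: neither change is applied
    // log $e->getMessage() and show a friendly error to the player
}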
I need to do something like this:
I have a heavy processing method, and my customers access the part of the site that executes that heavy method many times. So I'm looking for a way not to execute this heavy method every time.
My solution: I'll save the values from the first run of this heavy method and use the already processed values as long as there are no changes that would make the method run again.
For example, I have a "broken product counter": whenever my customer accesses the main page, the system tells him "you have X broken products". When my customer changes something on a product, I'll change a flag in my database, and this flag tells me "run the heavy method again to recalculate this".
That is the case; now my doubt: is it better to use a trigger to say "when the product table has changed, update the flag to false", or to do it in PHP?
If I choose to use a trigger, what is the processing/database/memory/bandwidth cost?
I've read that triggers are bad because they have low maintainability and it is hard for new programmers to find where this logic lives. But it will never change, not soon. And I have many places in the code that change a product and would have to turn the flag off.
So I know PHP is better in many ways, but it's harder to implement.
What can you guys tell me?
I'd go with triggers. Although they might be a pain in the neck sometimes, they allow you to centralize the code and separate it from the application logic, meaning that no matter how a product is changed, the DB will flag it for processing.
That being said, a possible overall high-level processing flow might be as follows:
An AFTER UPDATE trigger flags products that have been changed by inserting product_id into a special table flagged_products.
A stored procedure (let's call it sp_process_flagged_products), on demand (when the user clicks a button) or periodically (e.g. once a day, using MySQL events or cron), processes flagged products and clears them from the flagged_products table.
Now a trigger tracking changes to products might be as simple as:
CREATE TRIGGER tg_product_update
AFTER UPDATE ON products
FOR EACH ROW
INSERT IGNORE INTO flagged_products (`product_id`) VALUES (NEW.product_id);
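For completeness, the flagged_products table the trigger assumes could be as simple as the sketch below; the primary key on product_id is what makes INSERT IGNORE skip products that are already flagged:
CREATE TABLE flagged_products (
    product_id INT UNSIGNED NOT NULL,
    flagged_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,  -- illustrative extra column
    PRIMARY KEY (product_id)
) ENGINE=InnoDB;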
...it is hard for new programmers to find where this logic lives...
To list all triggers in your database use SHOW TRIGGERS
SHOW TRIGGERS
-- LIKE '%product%'
Here is a SQLFiddle demo.