In an old legacy system we make updates to a user's inventory. The inventory contains many different items; a user has one row per item id, and each row holds the quantity of that item they own.
Now, somewhere in this rather old and behemoth-like code is a problem whereby a user can end up with a negative quantity of an item. This should never happen.
Rather than approaching the problem from the top and going through each piece of code that interacts with the inventory table, we thought we might try to create some reporting to help us find the problems.
Before I go about implementing something that I think may solve this problem, I thought I'd put it out to the community to find out how they might approach it.
Perhaps we could start by creating ON UPDATE MySQL triggers which insert the activity into another table for closer inspection, etc. Be creative.
If you add a timestamp field, then you'll know when the last operation was carried out. From that, you could find the update entry in the MySQL log and possibly reconcile it with the application logs.
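A minimal sketch of that idea, assuming the table is called inventory and using illustrative column names (the real schema may differ):
ALTER TABLE inventory
    ADD COLUMN last_modified TIMESTAMP NOT NULL
        DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;
-- Rows that have already gone negative, with the time they were last touched:
SELECT user_id, item_id, quantity, last_modified
FROM   inventory
WHERE  quantity < 0;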
Alternatively you could set a trigger on the table...
DELIMITER $$
CREATE TRIGGER no_negatives_in_yourtable
BEFORE UPDATE ON yourtable
FOR EACH ROW
BEGIN
    IF (NEW.value < 0) THEN
        /* log it (NB: this insert is rolled back along with the failing update if the statement below is enabled) */
        INSERT INTO badthings (....) VALUES (...);
        /* this forces the operation to fail */
        DROP TABLE `less than zero value in yourtable`;
    END IF;
END$$
DELIMITER ;
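On MySQL 5.5 and later, a cleaner way to abort the offending statement is SIGNAL; a rough sketch with the same placeholder table and column names:
DELIMITER $$
CREATE TRIGGER no_negatives_in_yourtable
BEFORE UPDATE ON yourtable
FOR EACH ROW
BEGIN
    IF (NEW.value < 0) THEN
        /* aborts the UPDATE with a readable error message */
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'negative quantity rejected in yourtable';
    END IF;
END$$
DELIMITER ;
Note that if you also log the attempt inside the trigger, that INSERT is rolled back together with the failing UPDATE, so the record of the bad attempt has to survive somewhere else (for example the application log).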
All,
I am using MySQL 5.7 in my application. I am trying to make my save function concurrency-safe. I will explain with an example.
Example:
I have two admin users, Admin 1 and Admin 2. We have a product table with an entry whose product code is "P1". Suppose Admin 1 and Admin 2 are both logged into the system and try to update the product entry with code "P1" at the same time.
I need to inform one of the users that the record they are trying to modify is being updated by another user, and that they should try again after some time.
I am using transactions and didn't change MySQL's default isolation level (REPEATABLE READ). I am trying to solve it using "SELECT ... FOR UPDATE" (with a WHERE condition that checks the modified time). This WHERE condition handles the concurrency issue for transactions that have already committed. But if two transactions start at the same time and the first one commits before the lock timeout, then when the second transaction executes, it overwrites the first.
Kindly share your ideas
Thanks in advance
Well there are actually 2 issues here.
First, one of the admins will get a lock on the row before the other, so assuming admin1 gets the lock first, admin2 will queue until admin1's transaction completes, then admin2's transaction will take place.
So that is all looked after for you by the DBMS.
But the second issue is of course when both admin1 and admin2 are attempting to update the same column(s). In this case admin1's update will be overwritten by admin2's update. The only way to stop this happening, if that is what you want to stop, is to make the UPDATE very specific about what it is updating. In other words, the UPDATE must be something like this:
UPDATE table SET col1 = 'NewValue'
WHERE  <usual criteria>
  AND  col1 = 'Its Original Value'
So this means that when you present the original data from this row to the user in a form, you must somehow remember what its original state was, as well as capture the new state the admin changed it to.
Of course the PHP code will also have to be written to detect that the UPDATE did not take place and return something to whichever admin's update has now failed: show the new value in the column in question, give them a notice that the update failed because someone else already changed that field, and let them either abandon their change or apply it over the top of the other admin's update.
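For illustration, with a hypothetical products table (all names and values below are made up), the conditional UPDATE and the check the application needs might look like this:
-- The WHERE clause repeats the value the admin originally saw,
-- so the UPDATE only succeeds if nobody changed it in the meantime.
UPDATE products
SET    price = 19.99
WHERE  product_code = 'P1'
  AND  price = 24.99;   -- the original value shown in the form
-- 0 means another admin got there first; the application should warn
-- the user instead of silently overwriting.
SELECT ROW_COUNT();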
There is a technique you can use to control this without locking the table and without having to make every UPDATE check the original values.
Create a field on your table that will hold a version number for each record:
ALTER TABLE someTable ADD COLUMN version INT NOT NULL DEFAULT 0;
There will be no need to change any insert code with this.
Every time a user fetches a record to update, make sure the version comes along with it in the object (or form, or whatever way you handle your entity in the system).
Then create a BEFORE UPDATE trigger for your table that checks whether the version of the current record is still the same: if so, the update goes through and the version is bumped; if not, you raise an error. Something like:
delimiter $$
create trigger trg_check_version BEFORE UPDATE
ON yourTable
for each row
begin
    if NEW.version != OLD.version then
        signal sqlstate '45000' set message_text = 'Field outdated';
    else
        set NEW.version = NEW.version + 1;
    end if;
end$$
delimiter ;
Then you handle the error in your PHP code. This SIGNAL command will only work on MySQL 5.5 or later. Check out this thread.
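For illustration, assuming a hypothetical products table that has gained the version column, the round trip might look like this:
-- The application fetches the row, including its current version ...
SELECT id, name, price, version FROM products WHERE code = 'P1';
-- ... and sends that version back unchanged with the update.
-- If another session has bumped the version in the meantime,
-- OLD.version no longer matches and the trigger raises 'Field outdated'.
UPDATE products
SET    price = 19.99,
       version = 3       -- the version value originally fetched
WHERE  code = 'P1';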
I'm doing a food delivery system for my final year project. For the database, I'm required to hide records that are no longer in use instead of deleting them permanently. For example, if the seller doesn't want to sell a particular meal, they can disable the meal, but the record of the meal is still available in the database. I need to achieve this using PHP and SQL. Can someone give me some ideas on how to achieve this? Thanks in advance.
The feature you are referring to is something called soft deletion. In soft deletion, a record is logically removed from the database table, without actually removing the record itself.
One common way to implement soft deletion is to add a column which keeps track of whether a row has been soft deleted. You can use the TINYINT(1) type for this purpose.
Your table creation statement would look something like this:
CREATE TABLE yourTable (`deleted` TINYINT(1) NOT NULL DEFAULT 0, `col1` VARCHAR(255), ...)
To query out records which have not been logically deleted, you could use:
SELECT *
FROM yourTable
WHERE deleted <> 1
And having a soft delete column also makes it easy to remove stale records if the time comes to do that.
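The "delete" itself then becomes a simple UPDATE rather than a DELETE (the meal table and column names below are only illustrative):
-- disable a meal without losing its record
UPDATE meals SET deleted = 1 WHERE meal_id = 42;
-- and re-enabling it later is just as easy
UPDATE meals SET deleted = 0 WHERE meal_id = 42;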
An extra deleted column is a great option in many cases. But you have to be very careful that you always check it, and in some cases that can be hard to enforce.
Another good choice is a "shadow table" with the same structure: change your delete process to first copy the row to the shadow table, and then delete it. This means your original table is safe to use as-is, but you cannot easily query all the data at once (although UNION can help).
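A rough sketch of the shadow-table idea, with illustrative table names:
-- one-off: create the shadow table with the same structure
CREATE TABLE meals_deleted LIKE meals;
-- the "delete" then becomes copy-then-delete inside a transaction
START TRANSACTION;
INSERT INTO meals_deleted SELECT * FROM meals WHERE meal_id = 42;
DELETE FROM meals WHERE meal_id = 42;
COMMIT;
-- querying everything, current and deleted, needs a UNION
SELECT * FROM meals
UNION ALL
SELECT * FROM meals_deleted;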
I am setting up a new part of an application with historical data requirements for the transactions table in MySQL. Originally, in the old version, transactions were not historical, with a structure like this:
id|buyerid|prodid|price|status
There are other fields as well, with the id being referenced in links to the Transaction Details page, as well as used as a foreign key in other tables across the application to reference particular transactions for various purposes.
Now the requirement is to answer reporting questions like "Show all transactions that had a particular status in Feb 2014" and "What did a transaction look like in Feb 2014?".
The new design I'm testing at the moment is below:
id|buyerid|prodid|price|status|active|start_date|end_date
Here active indicates the latest record and start_date is when it was created. Records are never modified; instead, the end_date is populated and a new record is created with the same details plus the modification.
Now the question is: what to do about the transaction id field? In this new design it is more of a history id, and it cannot be used as a foreign key across the application since it is going to change with every update.
I can think of two options:
Create a separate table, transaction_ids, with just one column, a primary key auto-increment tid, and add a foreign key column tid to the main transactions table. Every time a brand new transaction is created, insert into the ids table and use that id as the tid to trace this particular transaction across the system.
Use the buyerid and prodid combination as the key: it is always unique in my application, since no buyer can get the same product twice.
Is the second solution better? Does anyone know of a better way to handle this?
What you are trying to achieve is called Event Sourcing.
Think in terms of events changing the status of your transaction, rather than tracking the status itself over time.
You still have your transaction with its own primary key, and you rebuild the current (or past) status by applying each event in order.
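As a very rough sketch (table and column names are assumptions, not part of the question), the events could live in an append-only table and the state is rebuilt from them:
-- one immutable row per event; the transaction keeps a stable id
CREATE TABLE transaction_events (
    id             INT AUTO_INCREMENT PRIMARY KEY,
    transaction_id INT          NOT NULL,   -- safe to use as a foreign key elsewhere
    event_type     VARCHAR(50)  NOT NULL,   -- e.g. 'CREATED', 'STATUS_CHANGED'
    event_data     TEXT         NULL,       -- details of the change
    created_at     TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP
);
-- "What did transaction 42 look like in Feb 2014?" = replay its events
-- up to the end of that month, in order.
SELECT * FROM transaction_events
WHERE  transaction_id = 42
  AND  created_at <= '2014-02-28 23:59:59'
ORDER BY created_at, id;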
I would also suggest you start coding your business models first, and only after that think about persistence and the best way to map them to a database.
The second solution looks better, although I will say that there is a lot of ambiguity in your question.
I say the second solution is better because the transaction_ids table you describe in solution 1 is basically redundant; it serves no purpose. Even if the transaction id repeats itself in the transactions table, that does not mean you need a separate table to generate the ids and set up a PK-FK relation. Most probably you will still be querying the data by buyer id and product id, not by transaction id.
Basically what you need is some kind of audit history table where you insert a record for every operation/transaction/modification done, capturing basic details like username, date/time, old value, new value, etc. You do not need status, start date, or end date columns. Once a record is inserted into this audit history table, it is never touched again.
You will have to design your report carefully.
Taking the two previous answers into consideration, here is the solution I will go with: all of the data updates in my application come through one single function that is already set up to audit particular fields of my choosing, so I will mark the transaction status to be audited among the others. The table structure for the audit table is similar to this:
|id|table|table_id|column|old_val|new_val|who|when|
The only difference is that it uses slightly more advanced object mapping via object ids instead of a simple table name. I can then join this data to the main, normal (non-historical) transactions table to provide the reporting required.
We have an automatic car plate reader which records the plates of cars entering the firm. My colleague asked me if we can instantly get the plate number of an arriving car. The software uses MySQL and I only have database access; I cannot reach or edit the PHP code.
My idea is to check periodically with a query, for example every 10 seconds. But this way a car arriving, say, 5 seconds after a check is not seen until the next one, and decreasing the interval increases the request/response count, which means extra load on the server. I do not want the script to run constantly. It should run only when a new row is added, show the plate, and exit.
How can I get the last recorded row from the database right after it is inserted? I imagine there should be a trigger which runs my PHP script after the insertion, but I do not know how.
What I want is for MySQL to run my PHP script after a new record is inserted.
If your table is MyISAM, I would stick to your initial idea. Getting the row count from a MyISAM table is instant. It only takes the reading of one single value as MyISAM maintains the row count at all times.
With InnoDB, this approach can still be acceptable. Assuming car_table.id is primary key, SELECT COUNT(id) FROM car_table only requires an index scan, which is very fast. You can improve on this idea by adding another indexed boolean column to your table:
ALTER TABLE car_table ADD COLUMN checked BOOLEAN NOT NULL DEFAULT 0, ADD INDEX (checked);
The default value ensures new cars will be inserted with this flag set to 0 without modifying the inserting statement. Then:
START TRANSACTION; -- make sure nobody interferes
SELECT COUNT(checked) FROM car_table WHERE checked = FALSE FOR UPDATE; -- this gets you the number of new, unchecked cars
UPDATE car_table SET checked = TRUE WHERE checked = FALSE; -- mark these cars as checked
COMMIT;
This way, you only scan a very small number of index entries at each polling.
A more advanced approach consists in adding newly created cars' IDs into a side table through a trigger. This side table is scanned every now and then, without locking the main table and without altering its structure. Simply TRUNCATE the side table after each polling run.
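A hedged sketch of that side-table approach, assuming the main table is car_table with primary key id (adjust names as needed):
-- queue of ids the polling script has not seen yet
CREATE TABLE new_cars (car_id INT NOT NULL);
DELIMITER $$
CREATE TRIGGER trg_car_inserted AFTER INSERT ON car_table
FOR EACH ROW
BEGIN
    INSERT INTO new_cars (car_id) VALUES (NEW.id);
END$$
DELIMITER ;
-- polling script: read the queued ids, then empty the queue
SELECT car_id FROM new_cars;
TRUNCATE TABLE new_cars;
One caveat: a row inserted between the SELECT and the TRUNCATE would be lost, so in practice you may prefer to DELETE exactly the ids you just read.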
Finally, there is the option of triggering a UDF, as suggested by Panagiotis, but this seems to be overkill in most situations.
Although this is not the greatest of designs and I have not implemented it, there is a way to call an external script through the sys_exec() UDF using a trigger, as mentioned here:
B.5.11: Can triggers call an external application through a UDF?
Yes. For example, a trigger could invoke the sys_exec() UDF.
http://dev.mysql.com/doc/refman/5.1/en/faqs-triggers.html#qandaitem-B-5-1-11
Also have a look at this thread, which is similar to your needs.
Invoking a PHP script from a MySQL trigger
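A sketch of what such a trigger could look like, assuming the lib_mysqludf_sys UDF (which provides sys_exec()) has been installed separately, and using a made-up script path; be aware this runs the command as the MySQL server's OS user and blocks the INSERT while the script runs:
DELIMITER $$
CREATE TRIGGER trg_notify_new_car AFTER INSERT ON car_table
FOR EACH ROW
BEGIN
    -- hand the new row's id to a PHP script; the path is purely illustrative
    SET @unused = sys_exec(CONCAT('php /path/to/notify_plate.php ', NEW.id));
END$$
DELIMITER ;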
I have a database that has a notice_current table and a notices_archive table. As part of a user logout process, I want to move all of their associated notices from the current table to the archive.
In my PHP application code I am currently running a transaction where I copy the notices over, and then delete the rows in the notice_current table if there were no errors in the copying. However, I am wondering if MySQL has some built-in function or method for simply moving notices from one table to another. If so, it seems that this would be more efficient than my current implementation.
There's not a single built-in function for this, but if you're currently iterating over all of the rows, then something like this might be a lot more efficient:
BEGIN;
INSERT INTO notices_archive SELECT * FROM notice_current WHERE user_id=%;
DELETE FROM notice_current WHERE user_id=%;
COMMIT;
You can use phpMyAdmin's tools to do this.
Go to phpMyAdmin, select your source database, click on the Operations tab, select Copy, set the parameters, and let phpMyAdmin do the job. :D