Laravel - Is my double submission solution efficient - php

This is my first question, so please go easy on me. I have been using Laravel for a while; however, I recently came across an issue in a client application while testing it.
The issue was that if a user double-submitted a form, or simply clicked the submit button x times, the same record would be created x times in the database.
I had never faced this issue before, simply because a simple unique validation rule would prevent it.
Nevertheless, this form (or model, to be exact) must allow duplicate values (a client app requirement).
So the first thing I did was as follows:
public function store(CustomRequest $request)
{
    if ($lastEntry = Record::latest()->first()) {
        if (
            ($request->name == $lastEntry->name)
            && ($request->another == $lastEntry->another)
            // && ($request->user()->id == $lastEntry->user_id) // Current user check (need to modify $lastEntry for it to work efficiently!)
            && (now()->diffInMinutes($lastEntry->created_at) < 5) // I added this later as another way to allow duplicate records after each other if they were created 5m apart
        ) {
            return redirect()
                ->route('show.record', $lastEntry->id)
                ->with('success', 'Record has been created successfully.');
        }
    }

    $record = new Record();
    // ...
}
Now, after testing, it works great. But my question is: are there any built-in solutions for this, packages, or simply better approaches?
Also, should I go for a session-based solution for a faster response? Correct me if I am wrong, but won't this be slow on a table with > 500k records?
Edit: I thought about making a custom throttle middleware for this, but it seems like a bad idea. What do you think?
Also, as mentioned by @nice_dev, with more users a duplicate could still occur, so I thought about adding a user_id field to the records table and grabbing the last record the current user created, but I still think that is a weak solution.
A JavaScript solution won't cut it, unfortunately.
Edit: By JavaScript solutions, I mean the famous "disable the button once it is clicked" kind of solution (any client-side solution).
BTW, the client application will eventually have more than 500k records per table (at least in the first year or so).
Feel free to modify my question as you like... like I said, I am new here!
Thanks in advance.

Since my situation is kind of unique, I went with the same solution but added a user_id column (FK) to the records table, and checked whether the same record was created by the same user in the last x minutes (the client wants duplicate records, just not mistakes by the users).
Nevertheless, I added JavaScript code to disable the button just in case. The client asked for a back-end solution, not a client-side one, but I might as well add it since I (kind of) solved the issue. Also, I created an index on the FK column, as suggested by @nice_dev.
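For reference, the per-user variant of that check might look like this (a sketch only; it assumes the new user_id foreign key, ideally covered by a composite index on (user_id, created_at)):

```php
public function store(CustomRequest $request)
{
    // Only the current user's most recent record needs checking,
    // so the index keeps this fast even with 500k+ rows.
    $lastEntry = Record::where('user_id', $request->user()->id)
        ->latest()
        ->first();

    if ($lastEntry
        && $request->name == $lastEntry->name
        && $request->another == $lastEntry->another
        && now()->diffInMinutes($lastEntry->created_at) < 5
    ) {
        // Treat the resubmission as a success and show the existing record
        return redirect()
            ->route('show.record', $lastEntry->id)
            ->with('success', 'Record has been created successfully.');
    }

    // ... create the record as before
}
```

Scoping the query to the current user also answers the multi-user concern: two different users submitting identical data no longer collide with each other's "last entry".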
I had to come up with a solution fast. However, feel free to give your ideas/solutions.
Thanks!

Related

What do you think of this approach for logging changes in mysql and have some kind of audit trail

I've been reading through several topics now and did some research about logging changes to a MySQL table. First let me explain my situation:
I have a ticket system with a table: 'ticket'.
As of now I've created triggers which insert a duplicate entry into my table 'ticket_history', which has "action", "user" and "timestamp" as additional columns. After some weeks of testing I'm not happy with that design, since every change creates a full copy of the row in the history table. I understand that disk space is cheap and I should not worry about it, but retrieving some kind of log or nice-looking history for the user is painful, at least for me. Also, with the trigger I've written I get a new row in the history even if there is no change; but this is just a design flaw of my trigger!
Here is my trigger:
CREATE TRIGGER ticket_before_update -- trigger name is illustrative
BEFORE UPDATE ON ticket FOR EACH ROW
BEGIN
    INSERT INTO ticket_history
    SET idticket        = NEW.idticket,
        time_arrival    = NEW.time_arrival,
        idticket_status = NEW.idticket_status,
        tmp_user        = NEW.tmp_user,
        action          = 'update',
        timestamp       = NOW();
END
My new approach in order to avoid having triggers
After spending some time on this topic I came up with an approach I would like to discuss and implement. But first I have some questions about it:
My idea is to create a new table:
id  sql_fwd    sql_bwd    keys    values  user  timestamp
---------------------------------------------------------
1   UPDATE...  UPDATE...  status  5       14    12345678
2   UPDATE...  UPDATE...  status  4       7     12345678
The flow would look like this in my mind:
At first I would select something or more from the DB:
SELECT keys FROM ticket;
Then I display the data in 2 input fields:
<input name="key" value="value" />
<input type="hidden" name="key" value="value" />
Hit submit and give it to my function:
I would start with a SELECT again: SELECT * FROM ticket;
and make sure that the hidden input field == the value from the latest SELECT. If so, I can proceed, knowing that no other user has changed anything in the meantime. If the hidden field does not match, I bring the user back to the form and display a message.
Next I would build the SQL Queries for the action and also the query to undo those changes.
$sql_fwd = "UPDATE ticket
            SET idticket_status = 1
            WHERE idticket = '" . $c_get['id'] . "';";
$sql_bwd = "UPDATE ticket
            SET idticket_status = 0
            WHERE idticket = '" . $c_get['id'] . "';";
Having that I run the UPDATE on ticket and insert a new entry in my new table for logging.
With that I can try to catch possible overwrites while two users are editing the same ticket in the same time and for my history I could simply look up the keys and values and generate some kind of list. Also having the SQL_BWD I simply can undo changes.
My questions about that would be:
Would it be noticeable to do an additional SELECT every time I want to update something?
Do I lose some benefits I would have with triggers?
Are there any big disadvantages?
Are there any functions on my MySQL server or in PHP which already do something like that?
Or might there be a much easier way to do something like that?
Would a slight change to the trigger I have now already be enough?
If I understand this right, MySQL only performs an update if the value has changed, but the trigger is executed anyway, right?
If I'm able to change the trigger, can I still somehow prevent two users from overwriting each other's data while editing the same ticket at the same time on the MySQL server, or would I do this with PHP anyway?
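On the trigger questions: MySQL does skip the physical write when the new values equal the old ones, but a BEFORE UPDATE trigger still fires for every matched row. A slight change along these lines would make the trigger skip no-op updates (a sketch; the trigger name is illustrative, and the NULL-safe <=> comparison must list every column you care about):

```sql
CREATE TRIGGER ticket_before_update
BEFORE UPDATE ON ticket FOR EACH ROW
BEGIN
    -- Log only when at least one tracked column actually changed
    IF NOT (OLD.idticket_status <=> NEW.idticket_status
            AND OLD.time_arrival <=> NEW.time_arrival
            AND OLD.tmp_user     <=> NEW.tmp_user) THEN
        INSERT INTO ticket_history
        SET idticket        = NEW.idticket,
            time_arrival    = NEW.time_arrival,
            idticket_status = NEW.idticket_status,
            tmp_user        = NEW.tmp_user,
            action          = 'update',
            timestamp       = NOW();
    END IF;
END
```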
Thank you for the help already
Another approach...
When a worker starts to make a change...
Store the time and worker_id in the row.
Proceed to do the tasks.
When the worker finishes, fetch the last worker_id that touched the record; if it is himself, all is well. Clear the time and worker_id.
If, on the other hand, another worker slips in, then some resolution is needed. This gets into your concept that some things can proceed in parallel.
Comments could be added to a different table, hence no conflict.
Changing the priority may not be an issue by itself.
Other things may be messier.
It may be better to have another table for the time & worker_ids (& ticket_id). This would allow for flagging that multiple workers are currently touching a single record.
As for History versus Current, I (usually) like to have 2 tables:
History -- blow-by-blow list of what changes were made, when, and by whom. This table is only INSERTed into.
Current -- the current status of the ticket. This table is mostly UPDATEd.
Also, I prefer to write the History directly from the "database layer" of the app, not via Triggers. This gives me much better control over the details of what goes into each table and when. Plus the 'transactions' are clear. This gives me confidence that I am keeping the two tables in sync:
BEGIN; INSERT INTO History...; UPDATE Current...; COMMIT;
I've answered a similar question before. You'll see some good alternatives in that question.
In your case, I think you're merging several concerns - one is "storing an audit trail", and the other is "managing the case where many clients may want to update a single row".
Firstly, I don't like triggers. They are a side effect of some other action, and for non-trivial cases, they make debugging much harder. A poorly designed trigger or audit table can really slow down your application, and you have to make sure that your trigger logic is coordinated between lots of developers. I realize this is personal preference and bias.
Secondly, in my experience, the requirement is rarely "show the status of this one table over time"; it's nearly always "allow me to see what happened to the system over time", and if that requirement exists at all, it's usually fairly high priority. With a ticketing system, for instance, you probably want the name and email address of the users who created and changed the ticket status; the name of the category/classification; perhaps the name of the project, etc. All of those attributes are likely to be foreign keys onto other tables. And when something does happen that requires an audit, the requirement is likely "let me see immediately", not "get a database developer to spend hours trying to piece together the picture from 8 different history tables". In a ticketing system, it's likely a requirement for the ticket detail screen to show this.
If all that is true, then I don't think history tables populated by triggers are a good idea - you have to build all the business logic into two sets of code, one to show the "regular" application, and one to show the "audit trail".
Instead, you might want to build "time" into your data model (that was the point of my answer to the other question).
Since then, a new style of data architecture has come along, known as CQRS. This requires a very different way of looking at application design, but it is explicitly designed for reactive applications; these offer much nicer ways of dealing with the "what happens if someone edits the record while the current user is completing the form" question. Stack Overflow is an example - we can see, whilst typing our comments or answers, whether the question was updated, or other answers or comments are posted. There's a reactive library for PHP.
I do understand that disk space is cheap and I should not worry about it but in order to retrieve some kind of log or nice looking history for the user is painful, at least for me.
A large history table is not necessarily a problem. Huge tables only use disk space, which is cheap. They slow things down only when making queries on them. Fortunately, the history is not something you'd use all the time, most likely it is only used to solve problems or for auditing.
It is useful to partition the history table, for example by month or week. This allows you to simply drop very old records and, more importantly, since the history of the previous months has already been backed up, your daily backup schedule only needs to back up the current month. This means a huge history table will not slow down your backups.
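For example, monthly range partitions on the history table might be declared like this (illustrative only; MySQL requires the partitioning column to be part of every unique key on the table, and new partitions must be added ahead of time or by a scheduled job):

```sql
ALTER TABLE ticket_history
PARTITION BY RANGE (TO_DAYS(`timestamp`)) (
    PARTITION p2017_01 VALUES LESS THAN (TO_DAYS('2017-02-01')),
    PARTITION p2017_02 VALUES LESS THAN (TO_DAYS('2017-03-01')),
    PARTITION pmax     VALUES LESS THAN MAXVALUE
);

-- Dropping a whole month later is nearly instant, unlike a large DELETE:
ALTER TABLE ticket_history DROP PARTITION p2017_01;
```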
With that I can try to catch possible overwrites while two users are editing the same ticket in the same time
There is a simple solution:
Add a column "version_number".
When you select with intent to modify, you grab this version_number.
Then, when the user submits new data, you do:
UPDATE ...
SET    <all modified columns>,
       version_number = version_number + 1
WHERE  ticket_id = ...
  AND  version_number = <the value you got>
If someone came in-between and modified it, then they will have incremented the version number, so the WHERE will not find the row. The query will return a row count of 0. Thus you know it was modified. You can then SELECT it, compare the values, and offer conflict resolution options to the user.
You can also add columns like who modified it last, and when, and present this information to the user.
If you want the user who opens the modification page to lock out other users, it can be done too, but this needs a timeout (in case they leave the window open and go home, for example). So this is more complex.
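In PHP, the version check comes down to inspecting the affected-row count (a sketch using PDO; the column names follow the question's schema, and $versionFromForm is assumed to be the value that was loaded together with the edit form):

```php
$stmt = $pdo->prepare(
    'UPDATE ticket
     SET idticket_status = :status,
         version_number  = version_number + 1
     WHERE idticket = :id
       AND version_number = :version'
);
$stmt->execute([
    ':status'  => $newStatus,
    ':id'      => $ticketId,
    ':version' => $versionFromForm,
]);

if ($stmt->rowCount() === 0) {
    // Someone else saved first: re-SELECT the row, compare the values,
    // and offer the user conflict-resolution options.
}
```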
Now, about history:
You don't want to have, say, one large TEXT column called "comments" where everyone enters stuff, because it will need to be copied into the history every time someone adds even a single letter.
It is much better to view it like a forum: each ticket is like a topic, which can have a string of comments (like posts), stored in another table, with the info about who wrote it, when, etc. You can also historize that.
The drawback of using a trigger is that the trigger does not know about the user who is logged in, only the MySQL user. So if you want to record who did what, you will have to add a column with the user_id as I proposed above. You can also use Rick James' solution. Both would work.
Remember though that MySQL triggers don't fire on foreign key cascade deletes... so if the row is deleted in this way, it won't work. In this case doing it in the application is better.

Duplicate entry '...' for key 'PRIMARY' during the transaction

This one happened to me last night. I am quite familiar with the nature of the error, but I still cannot figure out what could have caused it. I might have a hunch, but I am not sure. I'll begin with some basic info about the app:
My app has 3 entities: Loan, SystemPage and TextPage. Whenever someone adds a loan, one or more system pages are added to the DB. Basically, it goes something like this:
if ($form->isValid()) {
    $this->em->getConnection()->beginTransaction();
    $this->em->persist($loan);
    $this->em->flush();
    while ($someCondition) {
        $page = new SystemPage();
        // ... fill the necessary data into the page
        $page->setObject($loan);
        $this->em->persist($page);
    }
    $this->em->flush();
    $this->em->getConnection()->commit();
}
Please ignore potential typos, I am writing this from memory.
The entity Loan is mapped to the table loans, and SystemPage is mapped (via inheritance mapping) to system_pages and base_pages. Both of the latter have an id field which is set to AUTO_INCREMENT.
My hunch: There is another table called text_pages. Given that text_pages and base_pages on one hand and system_pages and base_pages on another share IDs, I am thinking that it could easily cause this:
User1: Create BasePage, acquire autoincrement ID (value = 1)
User2: Create BasePage, acquire autoincrement ID (value = 1)
User1: Create TextPage, use the ID from step 1
User2: Create SystemPage, use the ID from step 2
Two problems with this theory:
Transactions. That's why I used them in the first place
In the time of error there was no other activity on app by another user
Important: After waiting for a minute, resubmitting passed OK.
Could this be some weird MySQL transaction isolation bug? Any hint would be greatly appreciated...
Edit:
Part of the DB schema:
Please ignore the column names, which are in Serbian.
The flush() operation flushes all changes in one single transaction, so you have redundant code here...
You didn't state whether you can reproduce this bug, and it would be convenient if you could provide the DB schema.
It seems there is no right answer to this question, only speculation, so I will provide some troubleshooting ideas based on my own experiences with a problem like this:
You mention there was no other activity on the app, but I would triple check that by looking at the query logs. There must be a duplicate query that was executed.
Maybe the form was submitted twice accidentally. The user double-clicked the submit button, or clicked again when the UI did not respond. You can check this idea by looking at the Apache log files for POST requests to your form around the same timestamp. You may need to implement some JavaScript code to prevent double-clicks on your form's submit button.
Your hunch is probably quite close to correct, in that there is some kind of race condition. Using transactions won't prevent race conditions, but they do provide the means to roll back gracefully. Wrap your code in a try/catch block so that you can catch the MySQL exception and present the user with a friendly error and the option to retry.
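That last suggestion might look like this with the Doctrine entity manager from the question (a sketch; UniqueConstraintViolationException is Doctrine DBAL's exception for duplicate-key errors):

```php
use Doctrine\DBAL\Exception\UniqueConstraintViolationException;

try {
    $this->em->getConnection()->beginTransaction();
    $this->em->persist($loan);
    // ... persist the pages as before
    $this->em->flush();
    $this->em->getConnection()->commit();
} catch (UniqueConstraintViolationException $e) {
    $this->em->getConnection()->rollBack();
    // Show a friendly error and a retry option instead of a 500 page
}
```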

Improving SQL Queries

I have recently designed a website that uses a lot of queries. While developing it I came across an issue which was very time-consuming and frustrating.
The problem was that at a certain point I wanted to add a feature to the website which affected most of my queries, and I needed to change most of them to make the feature work. Let me give an example: let's say I have a users table, and I didn't add a column to check if a user is banned. Now I added the column "banned", and the problem was that I needed to revise all the other queries to check whether the user is banned first. I hope that makes sense.
So my question is: is there a way I could minimize that work so that, instead of going through all the queries and revising them (to add the "is user banned" check), I could add that feature once and the queries would work? Basically, how can I improve?
I hope this makes sense and if not I would try my best to explain it further.
Any help would be greatly appreciated if it could help improve my coding knowledge.
PS. I am using SQL and PHP. If there is something better than SQL that would fix this problem, suggest away.
Thank you
I understand your problem. You have some column rule which is used globally across the app. For example, in my case there was a 'status' column, and there was a logical meaning called 'important', which applied if the column had one of certain values in a set.
So, everywhere, to check if status is 'important', I needed to write:
WHERE `status` IN('INCIDENT', 'ERROR')
If I needed to add for example 'FLAGGED' to list of important statuses, I needed to rewrite all SQL queries:
WHERE `status` IN('INCIDENT', 'ERROR', 'FLAGGED')
Eventually I got tired of this and decided to write a MySQL function to do this work. I called it IS_STATUS_IMPORTANT(status).
But this solution failed testing because it hurt performance: it did not allow MySQL to use indexes properly.
I finally solved this problem by creating a set of app-global conditions, let's say:
class DbHelper {
    public static function importanceCondition($column_name) {
        return $column_name . " IN ('INCIDENT', 'ERROR') ";
    }
}
And now all over app I write:
$sql = 'SELECT * FROM blah .... WHERE ... AND ' . DbHelper::importanceCondition('x.status');
If I need to change the logical condition, I do it in one place and it applies all over the application.
In your case you could add a function like:
class DbHelper {
    // ...
    public static function validUserCondition($user_alias) {
        return " ({$user_alias}.deleted = 0 AND {$user_alias}.banned = 0) ";
    }
}
Why don't you check this during the login of the user? If (s)he is banned, the login fails and all further queries cannot be executed in the first place.
Generally you should never spread the same logic across multiple code locations, because of the extra effort this causes whenever you want to adjust something.
Create reusable methods wherever you have repetition. This could even be a method that enhances a given SQL (prepared) statement with another WHERE condition, or one that performs the SQL request itself.
I understand your problem; there is an easy solution for it: use an ORM (Object-Relational Mapping) layer to build your queries. ORMs support multiple databases (SQL, MySQL, Oracle DB, NoSQL, ...).
PHP ORMs include Doctrine and Propel.
ORMs are well supported by most PHP frameworks,
give you a better way to work with queries,
reduce complexity by splitting functionality into classes,
and make it easy to manage relations between tables.
I hope this will help you modify queries in less time and with great performance.
I assume you have code included in every page to make sure that the user has logged in successfully.
If that is the case, then all you should need to do is change the login script to reject banned users. Every other page will then work as it is and reject any users that are not logged in, so none of the queries on those other pages would need to be changed.

Checking if two ids are identical

I've added a feature to a web site that shows what visitors have visited a user profile. The table representing this holds the id of the user profile and the id of user visiting the profile.
Obviously, it's pointless showing that someone has visited their own profile so I modified the PHP code to detect this. In the meantime, a bit of data was written. This isn't a problem because it represents only a handful of users and I can edit the information by hand.
My question is as follows. In the hypothetical case where I'd have to do the same thing for more data, what would be a good approach to finding rows where id1 = id2 and removing them?
DELETE
FROM table
WHERE id1 = id2;
DELETE FROM `profiletracking` WHERE `visitor_id` = `profile_id`;
If you need to delete them, harakiri's query is good, but I have a question: why add the record in the first place? In time your website could grow bigger and things might get complicated.
I would suggest not recording it in the database in the first place. You just perform extra actions and queries when there is a shorter way.
<?php
// Get the ID of the profile owner
/* do your query here */
if ($_SESSION['id'] != $profileOwner['user_id']) {
    // add the visit to your database
}
?>
I believe such an approach is more elegant and useful, considering that in the future your web site might grow bigger and you might need to go through your code again.
Please don't forget that such things can become a headache. This is a common mistake for a programmer: in the beginning, many think, "OK, for now this does the trick, why bother coding more?" Over time you add more and more code, and later you might lose yourself in it. It will be too late once your visitors/customers start to complain about slow pages and, eventually, bad coding.

multi-user application record locking - best method?

I'm developing a php / mysql application that handles multiple simultaneous users. I'm thinking of the best approach to take when it comes to locking / warning against records that are currently being viewed / edited.
The scenario to avoid is two users viewing the record, one making a change, then the other doing likewise - with the potential that one change might overwrite the previous.
In the latest versions of WordPress they use some method to detect this, but it does not seem wholly reliable - often returning false positives, at least in my experience.
I assume some form of ajax must be in place to 'ping' the application and let it know the record is still being viewed / edited (otherwise, a user might simply close their browser window, and then how would the application know that).
Another solution I could see is to check the last updated time when a record is submitted for update, to see if in the interim it has been updated elsewhere - and then offer the user a choice to proceed or discard their own changes.
Perhaps I'm barking up the wrong tree in terms of a solution - what are people's experiences of implementing this (what must be a fairly common) requirement?
I would do this: Store the time of the last modification in the edit form. Compare this time on submission with the time stored in the database. If they are the same, lock the table, update the data (along with the modification time) and unlock the table. If the times are different, notify the user about it and ask for the next step.
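A lock-free variant of the same idea folds the comparison into the UPDATE itself (a sketch; last_modified is an assumed column name):

```sql
UPDATE records
SET    -- ... the changed columns ...
       last_modified = NOW()
WHERE  id = :id
  AND  last_modified = :time_loaded_into_form;
```

If this statement affects 0 rows, someone else saved in the interim; notify the user and ask for the next step, with no explicit table lock needed.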
Good idea with the timestamp comparison. It's inexpensive to implement, and it's an inexpensive operation to run in production. You just have to write the logic to send back to the user the status message that their write/update didn't occur because someone beat them to it.
Perhaps consider storing the username on each update in a field called something like 'LastUpdateBy', and return that back to the user who had their update pre-empted. Just a little nicety for the user. Nice in the corporate sense, perhaps not in an environment where it might not be appropriate.
