So, I am submitting a form with 3 components: user -> company -> branch.
Right now, if I catch an error in any child object, I am unable to delete the parent records, and when the user tries to resubmit the form he can't use the same details because they're already registered.
My question is:
Can I delete the objects created within try{} in catch{}?
If not, what practice do you recommend for handling this type of situation?
Thanks in advance!
You can use transactions for that scenario.
Let's say you have multiple create/update/delete operations and, if one of them fails, you want to undo the others - that's where transactions come to the rescue.
DB::transaction(function () {
    User::create([...]);
    Company::create([...]);
    Branch::create([...]);
});
So here, if any one of the above statements fails, the others are rolled back.
You can read about this in detail in the documentation:
https://laravel.com/docs/7.x/database#database-transactions
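For the resubmission problem in the question, one option is to catch the exception thrown by the failed transaction and send the user back to the form. A sketch (the variable names and error message are placeholders, not from the question):

try {
    // $userData, $companyData and $branchData stand in for the validated form fields.
    DB::transaction(function () use ($userData, $companyData, $branchData) {
        User::create($userData);
        Company::create($companyData);
        Branch::create($branchData);
    });
} catch (\Throwable $e) {
    // The transaction rolled everything back, so nothing was persisted
    // and the user can resubmit the same details.
    return back()->withInput()->withErrors(['form' => 'Registration failed, please try again.']);
}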
I've got a script that fetches data from a database using Doctrine. Sometimes it needs to fetch the data for the same entity twice; the second time, however, it uses the identity map and can therefore go out of sync with the database (another process can modify the entities in the DB).
One solution we tried was to set the query hint Query::HINT_REFRESH before running the DQL query. We would like to use it with simple findBy(..) calls as well, but that doesn't seem to work. We would also like to be able to set it globally per process, so that all Doctrine SELECT queries run in that context actually fetch the entities from the DB. We tried $em->getConfiguration()->setDefaultQueryHint(Query::HINT_REFRESH, true); but again, that doesn't seem to work.
Doctrine explicitly warns you that it is not meant to be used without a cache.
However, if you want to ignore this, then Cerad's comment (also mentioned in this answer) sounds right. If you want to do it on every query, though, you might look into hooking into a Doctrine event. Unfortunately there is no preLoad event, only postLoad, but if you really don't care about performance you could create a postLoad listener which first gets the class and id of the loaded entity, clears the entity manager and finally reloads the entity. It sounds very wrong to me though; I wash my hands of it :-)
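A rough sketch of that listener idea, using $em->refresh() instead of the full clear-and-reload described above (the class name and registration details are my own assumptions; the guard is needed because refresh() can trigger postLoad again):

use Doctrine\ORM\Event\LifecycleEventArgs;

class RefreshOnLoadListener
{
    private $refreshing = false; // guard against postLoad recursion

    public function postLoad(LifecycleEventArgs $args)
    {
        if ($this->refreshing) {
            return;
        }

        $this->refreshing = true;
        // Re-read the entity's state from the database, overwriting
        // whatever the identity map was holding.
        $args->getEntityManager()->refresh($args->getEntity());
        $this->refreshing = false;
    }
}

// Registration, e.g. during bootstrapping:
// $em->getEventManager()->addEventListener(array(\Doctrine\ORM\Events::postLoad), new RefreshOnLoadListener());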
I have the following relationships:
A Student has a "one-to-one" relationship with a Person. A Person has a "many-to-many" relationship with Address.
I want to persist the data in this order: first create the Person, then the Addresses, and then the Student.
But I want to roll back the transaction if any error occurs while persisting to any of these tables. For example: if I save the Person and the Addresses, and the Student fails, I want to roll back everything.
How to handle this with Eloquent?
Thanks.
Laravel provides a very simple way to handle this kind of situation.
DB::transaction(function () {
    // all your code
});
From the Laravel documentation:
To run a set of operations within a database transaction,
you may use the transaction method on the DB facade. If an exception is thrown within the transaction Closure, the transaction will automatically be rolled back. If the Closure executes successfully, the transaction will automatically be committed. You don't need to worry about manually rolling back or committing while using the transaction method.
Also, if you want to do it manually, you can use:
DB::beginTransaction();
// your code
DB::commit();
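With the manual approach you are responsible for the rollback yourself, so it is usually wrapped in a try/catch along these lines (a sketch):

DB::beginTransaction();

try {
    // create the Person, the Addresses and then the Student
    DB::commit();
} catch (\Exception $e) {
    // undo everything done since beginTransaction()
    DB::rollBack();
    throw $e;
}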
To learn more about transactions, read the official documentation.
I'm in the process of trying to create a logging interface for our application. I need to track when a change happens in the database through the application, so when someone updates a field I need to insert a row into the log with the table, the column's original value, the column's new value, a timestamp, and the user that made the change. To me the logical way of doing this is to tie into the DB class in Laravel, so that every time an update/delete method is called it runs my new method of getting the needed info and inserting it into the log.
I believe it needs to work at the DB level, as it needs to happen for updates/deletes called from either DB or Eloquent.
How would I go about doing this?
Eloquent provides you with some nice events - they're even in the docs! Who knew you could find so much in there.
http://laravel.com/docs/eloquent#model-events
Instead of trying to attach something to the DB layer, have you considered using Model Observers (http://laravel.com/docs/eloquent#model-observers) to watch all update and delete events and put your logic in an observer?
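A minimal sketch of that observer idea (the change_log table, its columns, and the registration on User are my own assumptions, not from the question):

class AuditObserver {

    public function updated($model)
    {
        // getDirty() holds the changed attributes; getOriginal() still
        // has the pre-save values at this point in the model lifecycle.
        foreach ($model->getDirty() as $column => $newValue) {
            DB::table('change_log')->insert(array(
                'table_name'  => $model->getTable(),
                'column_name' => $column,
                'old_value'   => $model->getOriginal($column),
                'new_value'   => $newValue,
                'user_id'     => Auth::check() ? Auth::user()->id : null,
                'created_at'  => date('Y-m-d H:i:s'),
            ));
        }
    }

    public function deleted($model)
    {
        // Snapshot the whole row on delete, since every column is lost.
        DB::table('change_log')->insert(array(
            'table_name'  => $model->getTable(),
            'column_name' => null,
            'old_value'   => json_encode($model->getOriginal()),
            'new_value'   => null,
            'user_id'     => Auth::check() ? Auth::user()->id : null,
            'created_at'  => date('Y-m-d H:i:s'),
        ));
    }
}

// Registered once per model you want audited:
// User::observe(new AuditObserver);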
Upon searching and trial and error, I accomplished my goal in the following way.
I created an event listener which, for organizational purposes, I placed in /app/events/database.php, with the following:
<?php

Event::listen('illuminate.query', function($query, $bindings, $time, $name)
{
    // Code to log query goes here
});
I then placed the following line in my /app/start/global.php:
include(app_path().'/events/database.php');
This now captures all requests to query the database using either DB or Eloquent.
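For reference, the listener body might look something like this (a sketch; the write-only filter and log format are my own choices, not from the answer):

Event::listen('illuminate.query', function($query, $bindings, $time, $name)
{
    // Only log statements that change data.
    if (preg_match('/^\s*(insert|update|delete)\b/i', $query))
    {
        Log::info('db.change', array(
            'query'    => $query,
            'bindings' => $bindings,
            'time_ms'  => $time,
            'user_id'  => Auth::check() ? Auth::user()->id : null,
        ));
    }
});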
I have created a CRUD system. It has a main model and a dependent details model, so multiple rows (sometimes > 100) are entered into the details model related to the parent model. All operations are handled through a grid (inline edit), so create, update and delete operations are performed through a single request (with logical checking via the DB).
Now I am considering using a DB transaction around the whole set of operations, but I am confused about how to implement this with the same structure. I have already had the suggestion to move all my code into one model so the transaction can be applied there, but I am wondering whether another approach could retain the separation between the main and details model code.
Are you using AJAX for the changes or does it rely on manual form submissions?
You can use what's called the Unit of Work pattern.
Save any changes that the user makes to each line of the grid, but don't actually commit them to the DB. Then place a generic save button on the page that causes your server to actually commit all the changes through a transaction, as in the sketch below.
You can keep a list server-side of each row that the user changes. You don't need to track all the rows, because if they don't change anything you don't need to save anything.
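A rough sketch of that idea (the Detail model and all class/method names here are placeholders):

class GridUnitOfWork {

    protected $pending = array();

    // Called once per edited grid row; unchanged rows are never registered.
    public function registerChange($rowId, array $attributes)
    {
        $this->pending[$rowId] = $attributes;
    }

    // Called when the user hits the save button: all-or-nothing.
    public function commit()
    {
        $pending = $this->pending;

        DB::transaction(function () use ($pending)
        {
            foreach ($pending as $rowId => $attributes) {
                Detail::where('id', $rowId)->update($attributes);
            }
        });

        $this->pending = array();
    }
}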
What sort of approach are you using in your models? Do your models know about their own persistence (do you do things like $user->save()), or are you doing a more data-mapper approach ($userManager->save($userEntity))? The latter makes transaction handling a lot easier.
If you're doing the active-record type of pattern ($user->save()), your best bet is probably just to manually grab your DB connection and manage the transaction in your controller, as sketched below.
If you're doing data-mapper stuff, you have more options, up to and including a whole unit-of-work implementation.
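For the active-record case, that might look something like this (a sketch; $main and $details are stand-ins for the question's main and details models):

$connection = $main->getConnection();
$connection->beginTransaction();

try {
    $main->save();

    foreach ($details as $detail) {
        $detail->save();
    }

    $connection->commit();
} catch (\Exception $e) {
    // Any failure in the main or details rows undoes all of them.
    $connection->rollBack();
    throw $e;
}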
As per my last comment, moving the code to the parent model is my solution for now, so I think I should mark this thread answered.
The whole "when to throw an exception or return a value" question has been asked a lot (the following is just one example):
Should a retrieval method return 'null' or throw an exception when it can't produce the return value?
and I completely agree with the answers, in the main.
Now my question arises from adding a little more context to the above when applying it to a more complex system. I'll try to keep this as brief and simple as possible.
Right, we have an example MVC PHP application:
Model A has a function get_car($id) which returns a car object.
Controller A has a simple function for, say, showing a car to the user.
Controller B, however, has a complex function that, say, gets the car, modifies it (through one of Model A's set functions) and also updates other tables based on some of these new values, through other models and libraries throughout the system - very complex, eh?
We now get to the main part of my question:
For data integrity I want to use MySQL transactions. This is where I run into a "what's best practice" scenario...
We write Model A to return FALSE if the car is not found or there is an SQL error.
This is fine for Controller A, as it just wants to know whether there was an error and bom out, so we just check the return value and bom - fine.
We now get to Controller B. Controller B does some database updating before the Model A function is called, which we need to roll back on error, so we need to use a transaction. Now this is where I get to my problem: do I leave Model A returning a value and just check it, or do I change it to throw an exception - with the knock-on effect of having to rewrite Controller A too, as we now need to catch the exception? Then (not done yet ;o)) do I roll back in the catch of the model (but how do we know whether a transaction has been used or not?), or do we catch and re-throw, or let it bubble up to the controller's catch and do the rollback there?
What I'm trying to say is: if I have lots of models and controllers with database interaction, should I just make them throw exceptions and then wrap all my other code (e.g. controller functions) in try/catches in case the model or library functions ever throw? Or do I make the models "self-contained" so they tidily handle their own problems - but then what do I do about rolling back a transaction if one was open for this particular call (as per my example above, a transaction isn't opened every time)? In that case I would have to make all my functions return something and then check it in the controller, as the controller is the only place that knows whether there is an open transaction.
So to clarify: I can use a try/catch to catch and roll back in a controller - that's OK - but how do I do this from "further down", e.g. in a model or library function that could be called both during a transaction and as a plain auto-commit MySQL call?
An explained answer would be great (as I like to understand why I am doing something), but failing that, a vote for your favourite of the following solutions (well, the solutions I can see) would help:
1) Make all model and library functions always return a value; the controller then either just boms or does a try/catch to roll back where necessary - meaning I would have to check the return value of every model and library function everywhere they are used.
2) Make all model and library functions throw exceptions (say, on SQL error) and wrap every controller (which calls the model and library functions) in a try/catch, where the catch either just boms or rolls back if necessary.
Also, please note that "bom" means pushing the user somewhere else or showing a pretty error (before someone says "it's bad practice to just allow your application to die..." lol).
I hope you get where I'm coming from here, and sorry for the long, loooong question.
Thanks in advance
Ben
[There's a theoretical problem implicit in the "For data integrity I want to use MySQL transactions"... since MySQL historically hasn't been very ACID - PostgreSQL and Oracle both provide stronger support for ACID. However, back to the real question...]
Both your (1) and (2) focus on exceptions versus failure-return values, but my impression is that this isn't the key part of detangling exceptions, error returns, and open transactions (and some databases support SQL exceptions as well). Instead, I'd focus on keeping the transaction state tied to the nesting of the functions manipulating the model. Here are some thoughts along this line:
You will probably always have error returns from some library functions anyway, so having Model A return FALSE isn't really breaking the paradigm, nor is there anything particularly troublesome about a mix of error returns and exceptions. However, error returns MUST bubble up correctly - or be converted to exceptions if they go beyond what can be locally addressed.
Nested transactions are the most obvious way to have one controller start a database manipulation and still call other stuff in the app that also uses transactions. This allows a failed sub-sub-function to abort just its own part of the transaction and take either the error-return or exception approach to bubbling the error up on the non-SQL side, while the closed sub-transactions still maintain reasonable matching state. This usually needs to be simulated in code outside of the database (it is essentially what Django does); see the savepoint sketch after this list.
Your code could start a new (potentially large) transaction, and keep track of the fact that it's already open to keep the sub-sub-functions in your code from trying to reopen it.
In some databases, code can detect whether a transaction is already open based on the database session state, allowing you to check the DB session state instead of tracking it in code.
Both of the above allow one to use savepoints to simulate truly nested transactions.
One must be very careful to avoid SQL statements that cause implicit commits (CREATE TABLE, for example). MySQL probably deserves a lot more caution around this issue than, say, PostgreSQL.
One way to implement the one-big-transaction approach is to have a high-level function that initiates the transaction and then itself calls the top of whatever Controller B needs to do. This makes either bubbling errors up or having a special abort-transaction exception pretty straightforward. Only the top function would call commit (rather than abort) if no subfunction failed and no exception was caught; subfunctions wouldn't call commit.
Conversely, you could have all of your functions pay attention to a transactional depth implemented on the non-SQL side (in your code), although this is harder to set up in some languages than others (it's pretty easy using decorators in Python, for example). This would allow any of them to call commit once it finished its work with the transactional depth back at zero.
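Here's a rough PHP sketch of the depth-tracking plus savepoints idea from the list above (PDO-based; the class and method names are mine, not from the question):

class TxManager {

    protected $pdo;
    protected $depth = 0;

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    public function begin()
    {
        if ($this->depth === 0) {
            $this->pdo->beginTransaction();                    // outermost level: real transaction
        } else {
            $this->pdo->exec('SAVEPOINT sp' . $this->depth);   // nested level: savepoint
        }
        $this->depth++;
    }

    public function commit()
    {
        $this->depth--;
        if ($this->depth === 0) {
            $this->pdo->commit();                              // only the top level really commits
        } else {
            $this->pdo->exec('RELEASE SAVEPOINT sp' . $this->depth);
        }
    }

    public function rollBack()
    {
        $this->depth--;
        if ($this->depth === 0) {
            $this->pdo->rollBack();
        } else {
            // Undo only this nesting level's work; outer levels stay intact.
            $this->pdo->exec('ROLLBACK TO SAVEPOINT sp' . $this->depth);
        }
    }
}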
Hope this helps someone :-)