A while ago I completely recoded my application so that MySQL would behave in an ACID way.
At the very top level of all functions I do something like this:
try {
    $db->begin();
    dosomething($_SESSION['userid']);
    $db->commit();
} catch (advException $e) {
    $eCode = $e->getCode();
    $eMessage = $e->getMessage();
    # Success
    if ($eCode == 0) {
        $db->commit();
    } else {
        $db->rollback();
    }
}
Within the function 'dosomething' I throw exceptions to the user like this:
throw new Exception('There was a problem.',1);
or
throw new Exception('You have successfully done that!', 0);
This lets me control the flow of the program: if there's a problem, roll back everything that happened, and if everything was good, commit it. It has all worked quite well, but there's one flaw that I've come across so far.
I added exception logging so I can see the issues that users face. The problem is that if the table that logs the errors is InnoDB, it's also included in the transaction and will be rolled back if there's a problem, so no errors are stored.
To get around this I just made the error logging table MyISAM, so when a rollback happens the logged rows are still there.
Now I'm thinking of other bits I'd like to keep out of the transaction, like sending a mail from my application to the admin to alert them of problems.
Is there any sort of way for me to not include a database insert within the parent transaction? Have I taken a bad route in terms of Application/DB design and is there any other way I could have handled this?
Thanks, Dominic
It is not a good idea to throw an Exception on success.
You should do the DB insert after the rollback has been called:
catch (Exception $e) {
    $db->rollback();
    Log::insert('Error: ' . $e->getMessage());
}
Try using a logger to follow your program's flow. It is a more flexible approach.
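For example, a minimal sketch of that approach, using the wrapper from your question and a plain file via error_log() so the log entry is untouched by the rollback (the log path is just an example):
try {
    $db->begin();
    dosomething($_SESSION['userid']);
    $db->commit();
} catch (Exception $e) {
    // Roll back the transaction first...
    $db->rollback();
    // ...then log outside the database, so the entry survives the rollback.
    error_log(date('c') . ' ' . $e->getMessage() . "\n", 3, '/var/log/app_errors.log');
}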
Use return codes for successful operations, and Exceptions for genuinely exceptional situations (such as severe errors). As for your specific issue, I'd recommend having a separate database for logging if you choose to use the rollback strategy.
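A minimal sketch of that idea, assuming PDO and a hypothetical error_log table: the logging INSERT goes through a second connection, so it commits on its own and is unaffected by a rollback on the main connection.
$main  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$logDb = new PDO('mysql:host=localhost;dbname=app_logs', 'user', 'pass');

$main->beginTransaction();
try {
    // ... queries on $main ...
    $main->commit();
} catch (Exception $e) {
    $main->rollBack();
    // Separate connection: this INSERT is committed regardless of the rollback above.
    $logDb->prepare('INSERT INTO error_log (message) VALUES (?)')
          ->execute([$e->getMessage()]);
}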
Related
Let's say I have a function that modifies a database. This could be anywhere from one query to multiple queries in a transaction. In any of these modifications we want to make sure all queries are successful. Thanks to transactions, if one of these fails I have a way of making sure none of the changes are made permanent.
Now let's say I also have a log that needs to be written to record the change(s). How can I make sure that both the modifications are made and the log is written?
I could always do a try/catch on either, but say, for instance, my queries are successful but my log write is not. How could I then go back and undo all of the modifications to the database?
Would something like this be effective and/or advisable?
try {
    $db->connect();
    $db->mysqli->begin_transaction();
    // series of queries
    ...
    // log to file
    $db->mysqli->commit();
} catch (Exception $e) {
    $db->mysqli->rollback();
    $db->mysqli->close();
}
Wanted to make note of some behaviors I observed while testing this. If you place the commit() before the log, the database changes will not roll back even if an exception is caught while logging.
The data will be stored in the database once you call commit() or trigger an implicit commit. For example, this will not make any changes to the database because there is no commit:
$mysqli->begin_transaction();
$mysqli->query('INSERT INTO test1(val) VALUES(2)');
// end of script's execution
In your little example, the transaction will be rolled back as long as your logging functionality stops the execution or throws an exception.
$db->connect();
try {
    $db->mysqli->begin_transaction();
    // series of queries
    ...
    // log to file
    throw new \Exception(); // <-- This will prevent commit from executing.
    $db->mysqli->commit();
} catch (\Exception $e) {
    // An exception? That's ok, we'll try something different instead.
    $db->mysqli->rollback();
}
You need to call rollback only if you want to recover from the exception and clear the unsaved buffer. The only time you should ever catch exceptions is when you want to somehow recover and do something different instead. Otherwise, your code could be simplified by removing try-catch and rollback altogether.
However, it is a good idea to catch exceptions in transactions and roll them back explicitly as soon as possible, even when you do not want to recover. It prevents problems if you catch the exception somewhere else in your code and then try to execute other DB actions: the unsaved data is still in the buffer until you close the session or roll back! This is why you will often see code like this:
$db->connect();
try {
    $db->mysqli->begin_transaction();
    throw new \Exception(); // <-- Something throws an exception
    $db->mysqli->commit();
} catch (\Exception $e) {
    // Roll back unsaved data, but rethrow the exception as we do not know how to recover from it here.
    $db->mysqli->rollback();
    throw $e;
}
When executing a "LOCK TABLES" statement, is it wise to wrap the call in a try/catch to make sure the table gets unlocked in case of an exception?
In general it's a good idea to use try { } catch for operations that require undoing a previous operation in case of any errors; it's not limited to just database statements.
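For the LOCK TABLES case from the question, a minimal sketch (the table name and the $db->mysqli handle are simply borrowed from the earlier snippets): release the lock whether or not the work in between throws.
$db->mysqli->query('LOCK TABLES accounts WRITE');
try {
    // ... work on the locked table ...
    $db->mysqli->query('UNLOCK TABLES');
} catch (Exception $e) {
    // Unlock before rethrowing, so the table is never left locked.
    $db->mysqli->query('UNLOCK TABLES');
    throw $e;
}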
That said, when using databases it's advisable to use a more granular locking mechanism, such as the one that comes with transactional engines like InnoDB. You would still use try { } catch, but in this manner:
// start a new transaction
$db->beginTransaction();
try {
    // do stuff
    // make the changes permanent
    $db->commit();
} catch (Exception $e) {
    // roll back any changes you've made
    $db->rollback();
    throw $e;
}
The exact behaviour of conflict resolution is defined by the transaction isolation level, which can be changed to suit your needs.
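For instance, with MySQL you can set the level for the next transaction before you begin it (a sketch assuming the PDO-style $db from above):
// Applies to the next transaction started on this connection.
$db->exec('SET TRANSACTION ISOLATION LEVEL SERIALIZABLE');
$db->beginTransaction();
try {
    // ... queries that must not see concurrent changes ...
    $db->commit();
} catch (Exception $e) {
    $db->rollback();
    throw $e;
}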
When using 3rd party libraries, they tend to throw exceptions to the browser and hence kill the script.
E.g. if I'm using Doctrine and insert a duplicate record into the database, it will throw an exception.
I wonder what the best practice is for handling these exceptions.
Should I always do a try...catch?
But doesn't that mean I will have try...catch all over the script and around every single function/class I use? Or is it just for debugging?
I don't quite get the picture.
Because if a record already exists in the database, I want to tell the user "Record already exists".
And if I code a library or a function, should I always use "throw new Exception($message, $code)" when I want to create an error?
Please shed some light on how one should create and handle exceptions/errors.
Thanks
The only way to catch these exceptions is to use a try/catch block. Or, if you don't want the exception to occur in the first place, do your due diligence and check whether the record already exists before you try to insert it.
If it feels like you're doing this all over the place, then maybe you need to create a method that takes care of it for you (Don't Repeat Yourself).
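A minimal sketch of such a helper, assuming plain PDO and a made-up users table; the existence check gives the user a friendly answer instead of an uncaught exception:
function insertUserIfNew(PDO $pdo, $email)
{
    $check = $pdo->prepare('SELECT COUNT(*) FROM users WHERE email = ?');
    $check->execute([$email]);
    if ($check->fetchColumn() > 0) {
        return 'Record already exists';
    }
    $pdo->prepare('INSERT INTO users (email) VALUES (?)')->execute([$email]);
    return 'Record created';
}
Keep in mind that a concurrent request can still insert a row between the check and the INSERT, so a unique key plus a try/catch around the INSERT is still worthwhile.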
I don't know Doctrine, but for your specific usage, maybe there is a way to determine whether you are facing a duplicate entry, something like:
try {
    /* Doctrine code here */
} catch (DuplicateEntryException $e) {
    /* The record already exists */
} catch (Exception $e) {
    /* Unexpected error handling */
}
Or maybe you have to check if the Exception code equals 1062, which is the MySQL error code for duplicate entries.
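A minimal sketch of that check, assuming plain PDO underneath (1062 is MySQL's "duplicate entry" error; it shows up as the driver-specific code in errorInfo[1], while getCode() gives the SQLSTATE '23000'):
try {
    $pdo->prepare('INSERT INTO users (email) VALUES (?)')->execute([$email]);
} catch (PDOException $e) {
    if (isset($e->errorInfo[1]) && $e->errorInfo[1] == 1062) {
        echo 'Record already exists';
    } else {
        throw $e; // not a duplicate, let it bubble up
    }
}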
Any code that may throw an exception should be in a try/catch block. It is difficult in PHP because you cannot know which method throws an Exception.
You should also have a big try/catch block in your main PHP file that avoids displaying the stack trace to the user, and that logs the cause. Maybe you can use set_exception_handler for this.
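A minimal sketch of such a top-level block in the front controller (run_application() is a made-up entry point): the real cause is logged, the user only sees a friendly message.
try {
    run_application();
} catch (Exception $e) {
    // Log the full details for the developer, hide them from the user.
    error_log($e->getMessage() . "\n" . $e->getTraceAsString());
    header('HTTP/1.1 500 Internal Server Error');
    echo 'Sorry, something went wrong. Please try again later.';
}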
I have searched for this for a long time (here too) and have read a lot of PHP code, but still couldn't find a satisfying answer. It may seem too broad a topic, but it really hangs together, at least for me. Could you please help?
In a PHP website I have PDO as the DAL, which is used by BLL objects, which in turn are called from the UI. Now if something goes wrong, PDO throws a PDOException. Of course the UI layer shouldn't have to know anything about PDOExceptions, so the BLL object catches it. But now what?
I have read that
exceptions are for truly exceptional situations and
one re-throws exceptions from lower layers in order not to get low-level exceptions in upper layers.
Let me illustrate my problem (please don't pay attention to the function args):
class User
{
    function signUp()
    {
        try
        {
            //executes a PDO query
            //returns a code/flag/string hinting the status of the sign up:
            //success, username taken, etc.
        }
        catch (PDOException $e)
        {
            //take the appropriate measure, e.g. a rollback
            //DataAccessException gets all the information (e.g. message, stack
            //trace) of PDOException, and maybe adds some other information too
            //if not, it is like a "conversion" from PDOException to DAE
            throw new DataAccessException();
        }
    }
}

//and in an upper layer
$user = new User();
try
{
    $status = $user->signUp();
    //display a message regarding the outcome of the operation
    //if it was technically successful
}
catch (DataAccessException $e)
{
    //a technical problem occurred
    //log it, and display a friendly error message
    //no other exception is thrown
}
Is this a right solution?
When re-throwing the PDOException I don't think it would be appropriate to use exception chaining (as this would only make debugging information redundant; DataAccessException gets everything, including the full stack trace from PDOException).
Thanks in advance.
As I understand your post, a good resource is this:
API design
I think that you've done your homework (if I can use the phrase), but you've forgotten the reason why this is being done. In your example I'd create something like
SignUpException
which would inform the upper layers that something has gone wrong with the sign-up. What you do here is essentially mask a database exception with a differently named one that is essentially the same thing; although programmatically correct, it misses the point of why you would do this in the first place.
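If you do go that way, a small sketch (SignUpException is of course made up): the third constructor argument ($previous, available since PHP 5.3) keeps the original PDOException and its stack trace reachable through getPrevious(), so nothing is lost by the wrapping.
class SignUpException extends Exception {}

function signUp(PDO $pdo, $username)
{
    try {
        $pdo->prepare('INSERT INTO users (name) VALUES (?)')->execute([$username]);
    } catch (PDOException $e) {
        // Chain the original exception instead of discarding it.
        throw new SignUpException('Sign-up failed', 0, $e);
    }
}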
This is written in PHP, but it's really language agnostic.
try
{
    try
    {
        $issue = new DM_Issue($core->db->escape_string($_GET['issue']));
    }
    catch(DM_Exception $e)
    {
        throw new Error_Page($tpl, ERR_NOT_FOUND, $e->getMessage());
    }
}
catch(Error_Page $e)
{
    die($e);
}
Are nested try/catch blocks a good practice to follow? It seems a little bulky just for an error page - however, my Issue Datamanager throws an Exception if an error occurs, and I consider that a good way of detecting errors.
The Error_Page exception is simply an error page compiler.
I might just be being pedantic, but do you think this is a good way to report errors, and if so, can you suggest a better way to write it?
Thanks
You're using Exceptions for page logic, and I personally think that's not a good thing. Exceptions should be used to signal when bad or unexpected things happen, not to control the output of an error page. If you want to generate an error page based on Exceptions, consider using set_exception_handler. Any uncaught exceptions are run through whatever callback you specify. Keep in mind that this doesn't stop the "fatalness" of an Exception: after an exception is passed through your callback, execution will stop, as is normal for any uncaught exception.
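A minimal sketch of that handler (the template path is made up): every uncaught exception is logged and turned into an error page.
set_exception_handler(function ($e) {
    error_log('Uncaught: ' . $e->getMessage());
    include 'templates/error_page.php';
});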
I think you'd be better off not nesting. If you expect multiple exception types, have multiple catches.
try {
    Something();
}
catch (SpecificException $se) {
    blah();
}
catch (AnotherException $ae) {
    blah();
}
The ideal is for exceptions to be caught at the level which can handle them. Not before (waste of time), and not after (you lose context).
So, if $tpl and ERR_NOT_FOUND are information which is only "known" close to the new DM_Issue call, for example because there are other places where you create a DM_Issue and would want ERR_SOMETHING_ELSE, or because the value of $tpl varies, then you're catching the first exception at the right place.
How to get from that place to dying is another question. An alternative would be to die right there. But if you do that then there's no opportunity for intervening code to do anything (such as clearing something up in some way or modifying the error page) after the error but before exit. It's also good to have explicit control flow. So I think you're good.
I'm assuming that your example isn't a complete application - if it is then it's probably needlessly verbose, and you could just die in the DM_Exception catch clause. But for a real app I approve of the principle of not just dying in the middle of nowhere.
Depending on your needs this could be fine, but I am generally pretty hesitant to catch an exception, wrap the message in a new exception, and rethrow it, because you lose the stack trace (and potentially other information) from the original exception in the wrapping exception. If you're sure that you don't need that information when examining the wrapping exception, then it's probably alright.
I'm not sure about PHP, but in C#, for example, you can have more than one catch block, so there is no need for nested try/catch combinations.
Generally, I believe that error handling with try/catch/finally is always common sense, even if it's "only" for showing an error page. It's a clean way to handle errors and avoid strange behavior when something crashes.
I wouldn't throw an exception on issue not found - it's a valid state of an application, and you don't need a stack trace just to display a 404.
What you need to catch is unexpected failures, like SQL errors - that's when exception handling comes in handy. I would change your code to look more like this:
try {
    $issue = DM_Issue::fetch($core->db->escape_string($_GET['issue']));
}
catch (SQLException $e) {
    log_error('SQL Error: DM_Issue::fetch()', $e->getMessage());
}
catch (Exception $e) {
    log_error('Exception: DM_Issue::fetch()', $e->getMessage());
}

if (!$issue) {
    display_error_page($tpl, ERR_NOT_FOUND);
}
else {
    // ... do stuff with $issue object.
}
Exceptions should be used only if there is a potentially site-breaking event - such as a database query not executing properly or something being misconfigured. A good example is a cache or log directory not being writable by the Apache process.
The idea here is that exceptions are for you, the developer, to halt code that can break the entire site so you can fix it before deployment. They are also sanity checks to make sure that if the environment changes (e.g. somebody alters the permissions of the cache folder or changes the database schema) the site is halted before it can damage anything.
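A tiny sketch of such a sanity check (the path is made up):
$cacheDir = __DIR__ . '/cache';
if (!is_writable($cacheDir)) {
    // Halt early: better a clear failure now than silent damage later.
    throw new RuntimeException("Cache directory {$cacheDir} is not writable");
}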
So, no; nested catch handlers are not a good idea. In my pages, my index.php file wraps its code in a try...catch block - and if something bad happens, it checks whether it's in production or not, and either emails me and displays a generic error page, or shows the error right on the screen.
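A minimal sketch of that wrapper (IN_PRODUCTION, the bootstrap file and the admin address are all made up): email and a generic page in production, the full error on screen while developing.
try {
    require 'bootstrap.php';
} catch (Exception $e) {
    if (defined('IN_PRODUCTION') && IN_PRODUCTION) {
        mail('admin@example.com', 'Site error', $e->getMessage());
        include 'error_page.php';
    } else {
        echo '<pre>' . $e . '</pre>'; // full message and stack trace while developing
    }
}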
Remember: PHP is not C#. C# is (with the exception (hehe, no pun intended :p) of ASP.net) for applications that contain state - whereas PHP is a stateless scripting language.