I've found some limited use in chaining class methods, e.g. $class->setUser('foo')->getInfo() (a contrived example), but I'm having trouble understanding how to handle any errors that arise from one of the calls in the chain.
If setUser(), for example, hit an error and returned false, it would not return $this, so the next method in the chain could not be called and an error would be emitted.
What I have actually just realized (and correct me if this is wrong): would throwing an exception when there is an error in setUser() prevent the following getInfo() call from running and emitting an error?
This would be useful knowledge to have at least, so that I can debug code that uses chaining even if I don't use it myself.
Yes, throwing exceptions would probably be a good idea in this case. You'll be able to pinpoint where the error happened in the chain.
Here's an example of how that could work:
try
{
    $class->setUser('foo')->getInfo();
}
catch (UnknownUser $ex)
{
    // setUser failed
}
catch (CannotFetchInfo $ex)
{
    // getInfo failed
}
As with any chain: if one link breaks, the whole chain breaks.
So if you're chaining objects, on the one hand the syntax can be very accessible; on the other hand, if the calls tend to fail (e.g. remote connections, files, etc.), chaining becomes awkward.
When a method can't return the expected type of response and has to signal a failure instead, exceptions can help, but - as the chain gets broken - they are of limited use.
However, I think that throwing exceptions is the best thing you can do for error handling (in chains, and when developing your own code).
The benefit of throwing an exception is that you don't need to destroy the syntax that makes the chain accessible.
Let's say you have made an error in your code. It's better to throw an exception, as you need to fix it anyway.
Or let's say the data layer has been coded in a way that fails too often. Throw an exception, and you know which part needs more love.
In your general-use code that does the chaining, these exceptions will interrupt you (you can always catch them, and by the way, you can turn errors into exceptions as well), but they will point you to a cause of error you might not have thought enough about yet.
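As an aside on "you can turn errors into exceptions as well": this is done with the standard set_error_handler() plus ErrorException pattern. A minimal sketch:

```php
<?php
// Convert ordinary PHP errors (warnings, notices, ...) into exceptions,
// so they can be caught with the same try/catch used for the chain.
set_error_handler(function ($severity, $message, $file, $line) {
    throw new ErrorException($message, 0, $severity, $file, $line);
});

try {
    trigger_error('something went wrong', E_USER_WARNING);
} catch (ErrorException $e) {
    echo 'caught: ' . $e->getMessage();   // prints "caught: something went wrong"
}
```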
So,
Exceptions are a way of error handling for chains
Exceptions help you to locate various sources of errors while you can maintain the chain syntax for a higher level of data access.
Regardless of how well chaining is done, however, there comes a point where it no longer makes sense (should I add "as with everything"? ;)). I wouldn't call myself a real fanboy of chaining, but even as a non-fanboy I use the principle from time to time in my own code. It can make code easy to write, so make use of exceptions for the error handling; I haven't run into any "real" (tm) problems with it. It's much better than breaking your chain with a false or some crap like that.
Theoretically you can return an object that absorbs any property/method call via overloading, but even if this is fun to play with, I think it will only introduce more complexity than it helps to deal with actual, real-life errors.
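For illustration, such a swallow-everything object could be built with PHP's __call/__get/__set overloading. This is a toy sketch of the idea being warned against, not a recommendation:

```php
<?php
// A "null object" that absorbs any method call or property access and
// keeps returning itself, so a broken chain fails silently.
class NullObject
{
    public function __call($name, $args)
    {
        return $this;
    }

    public function __get($name)
    {
        return $this;
    }

    public function __set($name, $value)
    {
        // swallow writes
    }
}
```

A failing setUser() could return one of these instead of $this, and the rest of the chain would "succeed" without doing anything, which is exactly why it hides real errors.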
So if you chain into an error, you actually see the error when you use an exception. That's what exceptions were made for: interrupting the standard processing of your application because of an error (or a signal, which works as well).
Related
How can I create a generic, common system for returning application codes and response messages upwards when an API call is made?
For example, I have a User class in user.php. From the highest level of my application, let's say app.php we can call
$user = new User();
$user->register($params);
About 10 things can go wrong during that process:
user already exists
invalid parameters
etc..
I don't want to just return 0 or 1; I also want the caller to know what went wrong. Here are some ideas:
Exceptions, which I seriously hate..
Return array responses, i.e. ['result' => 0, 'reason' => 'not unique']
What is the standard way of doing this?
You asked about the standard way of handling this kind of thing. Generally the best way to handle these sorts of errors is by using exceptions, your dislike of them notwithstanding. By using exceptions, you can get away with doing less work on each side of the interface (method and caller) when you add a new kind of failure condition. You get code that's easier to maintain and more resilient to change. It's also higher performance: exceptions have a performance cost, but only when you raise them.
Your approach of returning arrays or objects is also OK. But you'll have to adjust both sides of the interface when you add new conditions.
There's a writeup on this in the PHP manual: http://php.net/manual/en/functions.returning-values.php
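To make the exception approach concrete for the register() example, here is one hypothetical sketch. The exception class names, the "taken" check, and the validation rule are all invented stand-ins for real validation and database code:

```php
<?php
// Hypothetical exception types for the register() example.
class InvalidParamsException extends Exception {}
class UserExistsException extends Exception {}

class User
{
    public function register(array $params)
    {
        if (empty($params['name'])) {
            throw new InvalidParamsException('name is required');
        }
        if ($params['name'] === 'taken') {   // stand-in for a uniqueness check
            throw new UserExistsException('user already exists');
        }
        // ... insert the user ...
        return true;
    }
}

// The caller distinguishes the failure modes by exception type:
try {
    $user = new User();
    $user->register(array('name' => 'taken'));
} catch (UserExistsException $e) {
    echo 'not unique';            // prints "not unique"
} catch (InvalidParamsException $e) {
    echo 'invalid parameters';
}
```

Adding an eleventh failure mode later only means adding one exception class and one catch block where a caller actually cares; callers that don't care need no change at all.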
As a project gets bigger and bigger, I get a bit confused as to what types of exceptions should be thrown and where they should be caught, e.g. how to organize internal exceptions vs exception messages that should be shown to the end user. To keep confusion down, is it best practice to always catch an exception in the higher-level object?
Take this example: if I have a database class that inserts a row into the database, and that class is called by another object that processes data, and in turn this data processing object is called by a controller, is it correct to catch errors from the database class in the processing class, which (possibly) rethrows the error to be caught in the controller class? Or can an error thrown in the database class be caught in the controller class?
Additionally, if one method in the processing class is called by another method in the same class, and the first throws an error, is it ok to catch the exception in the same class? Or should it be deferred to the higher-level class that's calling it?
Are there any other tips on how to structure and organize exceptions in large projects with many levels of classes?
I like to think about exceptions as expected events that may interrupt your normal program flow but in an orderly and controlled way.
The task of your exception handling is to deal with this situation and resolve it in such a way that your program state remains valid and the application can continue.
Therefore you should add the exception handling at the first place up the calling hierarchy where you are actually able to resolve the situation.
This may include cleaning up previously opened resources which are now no longer needed, logging the event or providing feedback to the user.
In your example I would probably leave the handling logic to the controller.
The database class often does not have enough context about what has just happened or how to deal with specific conditions, since those depend on the context in which the database was called.
Your controller on the other hand should have all context information and should be well aware of what the program just tried to do.
It is also probably better suited to resolve the issue for example by displaying a general error message to the user and maybe send detailed error report to the administrator.
Sometimes you will also have situations where you need to catch an exception at an intermediate level, do some cleanup (like closing streams or rolling back some actions), and then rethrow the exception because you know you only resolved part of the situation.
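That intermediate catch-clean-up-rethrow pattern might look like the sketch below. PDO is used for illustration; processData(), the table name, and the query are all made up:

```php
<?php
// Sketch: an intermediate layer resolves its own part of the situation
// (rolling back the partial write), then rethrows so a higher layer
// can still decide how to present the failure.
function processData(PDO $db, array $row)
{
    $db->beginTransaction();
    try {
        $stmt = $db->prepare('INSERT INTO items (name) VALUES (?)');
        $stmt->execute(array($row['name']));
        $db->commit();
    } catch (Exception $e) {
        $db->rollBack();   // cleanup: undo the partial write
        throw $e;          // rethrow: let the controller handle the rest
    }
}
```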
All in all, my general recommendation is to think about what actions need to be taken to resolve such an exceptional event, and then implement the error handling where those actions can be done easily.
An exception should be thrown in an exceptional event, i.e. if something is seriously wrong and the code cannot continue as is. E.g. database is down, remote service is not responding, seriously malformed input received.
If you catch an exception, you need to know what to do with it. When an exception has been raised, it means your application is in a serious error state. If you catch the exception, you should have a serious plan on how to proceed from this error state. If you don't have a plan, there's no point in catching it. If the database is down, you will probably want to stop the program. On the other hand, if a remote HTTP request yielded a 404 response and your HTTP handler is throwing that as an exception, you will probably be able to live with that error and continue (depending on what that request was supposed to be used for).
Raw exception messages should never be shown to end users. The exception needs to be logged, but the end user should only see a generic, friendly "Oops, something went wrong, maybe you want to try X instead...?" message. This is not specific to exceptions; it's just good UX design.
I want to add in some handling for a specific exception
Zend_Db_Statement_Exception' with message
'SQL ERROR: SQLSTATE[HY000]: General error: 2006 MySQL server has gone away
I'd prefer to do string comparison against the exception message somewhere fairly low-level, so that in my application logic, I can simply catch a nice friendly exception like My_Module_Exception_MysqlGoneAway, as opposed to having catch exception clauses with string comparison in them strewn about my application logic.
So in this particular case, the error is being triggered from a load() method, so I could rewrite Mage_Core_Model_Abstract, overload the load() method, and add in the exception handling. But that's not bulletproof, because this kind of thing could also be triggered from a collection load or probably other areas of the code.
So the other option would be to override lib/Varien/Db/Adapter/Pdo/Mysql.php in app/code/local, and add the exception handling there, but that seems a bit like overkill just to have a nice exception class.
Is there any easier way to do this?
As you proposed, the best solution is to override the Mysql.php adapter in local, as this is the file that any MySQL transaction will use. For this error, though, I would look more into the cause, as "MySQL server has gone away" could mean there are bigger problems.
Are you logging in MySQL to see why it is dropping its connection?
Back to your point, though: the adapter class is where all the other classes inherit their logic from, and it is the main workhorse for querying the database. There are several places where you will need to add the extra logic to catch the exception: connection, raw query, etc.
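A sketch of the wrapping idea, to live in whichever low-level spot you end up overriding. My_Module_Exception_MysqlGoneAway is the friendly class from the question; translateDbException() is a made-up name, and the string comparison against the message is the ugly part you keep in this one place:

```php
<?php
class My_Module_Exception_MysqlGoneAway extends Exception {}

// Keep the string comparison in ONE low-level place, so application
// code can catch the friendly exception type instead.
function translateDbException(Exception $e)
{
    if (strpos($e->getMessage(), 'MySQL server has gone away') !== false) {
        // Chain the original exception so no detail is lost.
        throw new My_Module_Exception_MysqlGoneAway($e->getMessage(), 0, $e);
    }
    throw $e;   // anything else passes through unchanged
}
```

Each overridden query/connect method would then wrap its body in a try/catch and route caught exceptions through this translator.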
The whole "when to throw an exception or return a value" question has been asked a lot (see the following for just one example):
Should a retrieval method return 'null' or throw an exception when it can't produce the return value?
and I completely agree with the answers, in the main.
Now my question arises from adding a little more context to the above when applying it to a more complex system. I'll try to keep this as brief and simple as possible.
Right we have an example MVC PHP application:
Model A has a function get_car($id) which returns a car object.
Controller A has a simple function for say showing a car to the user
Controller B, however, has a complex function that, say, gets the car, modifies it (say through one of Model A's set functions) and also updates other tables based on some of these new values, through other models and libraries throughout the system - very complex, eh? lol
We now get to the main part of my question:
For data integrity I want to use MySQL transactions. This is where I run into a "what's best / what's best practice" scenario...
We write Model A to return FALSE if the car is not found or there is an SQL error.
This is fine for Controller A, as it just wants to know whether there was an error and bom out, so we just check the return value and bom - fine.
We now get to Controller B. Controller B does some database updating before the Model A function is called, which we need to roll back on error, so we need to use a transaction. Now this is where I get to my problem: do I leave Model A returning a value and just check it, or do I change it to throw an exception - with the knock-on effect of then also having to rewrite Controller A, as we now need to catch the exception? Then (not done yet ;o)) do I roll back in the catch inside the model (but how do we know whether a transaction has been used or not?), or do we catch and rethrow, or let it bubble up to the controller's catch and do the rollback there?
What I'm trying to say is: if I have lots of models and controllers with database interaction, should I just make them throw exceptions and then wrap all my other code (e.g. controller functions) in try/catches in case the model or library functions ever throw? Or do I make the models "self-contained" so they tidily handle their own problems - but then what do I do about rolling back a transaction if one was open for this call (as per my example above, a transaction is not opened every time)? In that case I would have to make all my functions return something and then check it in the controller, as that is the only place that knows whether there is an open transaction or not...
So to clarify: I can use a try/catch to catch and roll back in a controller, that's OK, but how do I do this from "further down", e.g. in a model or library function that could be called both during a transaction and as a plain auto-commit MySQL call?
An explained answer would be great (as I like to understand why I am doing something), but failing that, a vote for your favourite of the following solutions (well, the solutions I can see):
1) Make all model and library functions always return a value; the controller then either just boms or does a try/catch to roll back where necessary - but this means I would have to check the return value of every model and library function everywhere they are used.
2) Make all model and library functions throw exceptions (say, on SQL error) and wrap every controller (which would call the model and library functions) in a try/catch, where the catch either just boms or rolls back if necessary...
Also, please note that "bom" means pushing the user somewhere else or showing a pretty error (before someone says "it's bad practice to just let your application die..." lol).
I hope you get where I'm coming from here, and sorry for the long, looong question.
Thanks in advance
Ben
[There's a theoretical problem implicit in the "For data integrity I want to use MySQL transactions"... since MySQL historically hasn't been very ACID - PostgreSQL and Oracle both provide stronger support for ACID. However, back to the real question...]
Both your (1) and (2) focus on exceptions versus failure-return values, but my impression is that this isn't the key part of detangling exceptions, error returns, and open transactions (and some databases support SQL exceptions as well). Instead, I'd focus on keeping the transaction state tied to the nesting of the functions manipulating the model. Here are some thoughts along this line:
You will probably always have error returns from some library functions anyway, so having Model A return FALSE isn't really breaking the paradigm, nor is there anything particularly troublesome about a mix of error returns and exceptions. However, error returns MUST bubble up correctly - or be converted to exceptions if they go beyond what can be locally addressed.
Nested transactions are the most obvious way to have one controller start a database manipulation and still call other parts of the app that also use transactions. This allows a failed sub-sub-function to abort just its own part of the transaction and take either the error-return or exception approach to bubbling the error up on the non-SQL side, while the closed sub-transactions still maintain reasonably matching state. This usually needs to be simulated in code outside of the database (this is essentially what Django does).
Your code could start a new (potentially large) transaction, and keep track of the fact that it's already open to keep the sub-sub-functions in your code from trying to reopen it.
In some databases, code can detect whether a transaction is already open based on the database session state, allowing you to check the DB session state instead of tracking it in code.
Both of the above allow one to use savepoints to simulate truly nested transactions.
One must be very careful to avoid calling SQL calls with implicit commits (CREATE TABLE, for example). MySQL probably deserves a lot more caution around this issue than, say, PostgreSQL.
One way to implement the one-big-transaction approach is to have a high-level function that initiates the transaction and then itself calls the top of whatever Controller B needs to do. This makes either bubbling up errors or having a special abort-transaction exception pretty straightforward. Only the top function would call commit rather than abort if no subfunction failed and no exception was caught. Subfunctions wouldn't call commit.
Conversely, you could have all of your functions pay attention to transactional depth implemented in the non-SQL side (your code), although this is harder to set up in some languages than others (it's pretty easy using decorators in Python, for example). This would allow any of them to call commit if they were done and the transactional depth at zero.
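A rough sketch of the depth-tracking idea from the points above (TxManager is a made-up name, and real code would also have to worry about implicit commits and dropped connections):

```php
<?php
// Sketch: nested "transactions" simulated with a depth counter and
// savepoints, so sub-functions can call begin/commit/rollback without
// knowing whether an outer transaction is already open.
class TxManager
{
    private $db;
    private $depth = 0;

    public function __construct(PDO $db)
    {
        $this->db = $db;
    }

    public function begin()
    {
        if ($this->depth === 0) {
            $this->db->beginTransaction();                   // real transaction
        } else {
            $this->db->exec('SAVEPOINT sp' . $this->depth);  // nested level
        }
        $this->depth++;
    }

    public function commit()
    {
        $this->depth--;
        if ($this->depth === 0) {
            $this->db->commit();   // only the outermost level really commits
        }
        // inner levels: nothing to do, the savepoint is subsumed
    }

    public function rollback()
    {
        $this->depth--;
        if ($this->depth === 0) {
            $this->db->rollBack();
        } else {
            $this->db->exec('ROLLBACK TO SAVEPOINT sp' . $this->depth);
        }
    }
}
```

An inner rollback then undoes only the work since its matching begin(), while the outer transaction stays open and can still commit or abort as a whole.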
Hope this helps someone :-)
I'm currently using register_shutdown_function() for multiple purposes. One use is handling fatal errors, while the other is logging resources used during execution, like time, memory usage, etc.
Currently I register two different shutdown functions, but in one test only the first ran while the other seemed to fail. Now, this could of course be caused by some error within the function itself, so I have rewritten it - but is it possible the error was caused by using several register_shutdown_function() calls? What is considered best practice here: registering two different functions, or just calling one function that handles the different tasks?
Is it also safe (and possible) to make the function load a class for error handling if a fatal error occurs, or should I keep the functionality within the function itself?
The final question I have is whether there's a better way to handle fatal errors than using shutdown functions. I tried using set_error_handler(), but it does not cover all error types, so some errors will not trigger it.
I hope these questions are well formulated and clear. My goal is to keep the code as solid as possible, and I could not find any decent answers to the questions I had.
Edit: Found the answer to my first question: registering several functions should be no problem, so the error had to be within the function itself. Leaving the question up to get answers on whether there's a better way to handle fatal errors.
IIRC, if you have multiple shutdown functions registered, they will be executed in the order in which they were registered; and you should never have an exit statement in any of them, otherwise subsequent shutdown functions will not run. That means you need to take great care if you have multiple functions rather than a single shutdown function.
However, if you're passing different arguments to the different functions, you should ensure that you have default values for them all in case the functions are called (perhaps triggered by an error) before all the appropriate variables are set.
Personally, I register multiple functions, for similar purposes to yourself; but I'm very careful about the logic within them, and the order of registration.
It's also not a good idea to use includes or similar in shutdown functions (especially where one is an exception handler), in case the include itself triggers an exception.
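On the fatal-error part of the question: since set_error_handler() never sees E_ERROR, E_PARSE and friends, the usual pattern is to inspect error_get_last() inside a shutdown function. A sketch, with the type check pulled into a small helper (isFatal() is a made-up name):

```php
<?php
// A shutdown function still runs after a fatal error, and
// error_get_last() reports what happened, which covers the error
// types that set_error_handler() cannot intercept.
function isFatal($err)
{
    return $err !== null
        && in_array($err['type'], array(E_ERROR, E_PARSE, E_CORE_ERROR, E_COMPILE_ERROR), true);
}

register_shutdown_function(function () {
    $err = error_get_last();
    if (isFatal($err)) {
        // Keep this path simple: the engine is already in a bad state.
        error_log('Fatal: ' . $err['message'] . ' in ' . $err['file'] . ':' . $err['line']);
    }
});
```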