Returning Complex Function Responses - php

How can I create a generic, common system for returning application codes and response messages up the call stack when an API call is made?
For example, I have a User class in user.php. From the highest level of my application, let's say app.php, we can call
$user = new User();
$user->register($params);
About 10 things can go wrong during that process:
user already exists
invalid parameters
etc..
I don't want to just return 0 or 1; I also want the caller to know what went wrong. Here are some ideas:
Exceptions, which I seriously hate...
Return array responses, e.g. ['result'=>0, 'reason'=>'not unique']
What is the standard way of doing this?

You asked about the standard way of handling this kind of thing. Generally, the best way to handle these sorts of errors is with exceptions, your dislike of them notwithstanding. With exceptions, you do less work on each side of the interface (method and caller) when you add a new kind of failure condition, and you get code that's easier to maintain and more resilient to change. It also performs well: exceptions have a cost, but only when you actually raise them.
Your approach of returning arrays or objects is also OK. But you'll have to adjust both sides of the interface when you add new conditions.
There's a write-up on this in the PHP manual: http://php.net/manual/en/functions.returning-values.php
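For example, a sketch of the exception-based approach might look like this (the exception class names here are made up for illustration):
class UserAlreadyExistsException extends Exception {}
class InvalidParametersException extends Exception {}

class User
{
    public function register(array $params)
    {
        if (empty($params['email'])) {
            // the caller learns exactly which check failed
            throw new InvalidParametersException('An email address is required.');
        }
        if ($this->emailExists($params['email'])) {
            throw new UserAlreadyExistsException('A user with that email already exists.');
        }
        // ... perform the actual registration ...
        return true;
    }

    private function emailExists($email)
    {
        // stubbed for the example; would normally query storage
        return false;
    }
}

// In app.php:
try {
    $user = new User();
    $user->register($params);
} catch (InvalidParametersException $e) {
    // react to the bad input
} catch (UserAlreadyExistsException $e) {
    // react to the duplicate user
}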

Related

Return values, throw exceptions and rolling back transactions

The whole "when to throw an exception versus return a value" question has been asked a lot (see the following for just one example):
Should a retrieval method return 'null' or throw an exception when it can't produce the return value?
and I completely agree with the answers in the main.
Now my question arises from adding a little more context to the above when applying it to a more complex system. I'll try to keep this as brief and simple as possible.
Right, we have an example MVC PHP application:
Model A has a function get_car($id) which returns a car object.
Controller A has a simple function for, say, showing a car to the user.
Controller B, however, has a complex function that, say, gets the car, modifies it (through one of Model A's set functions) and also updates other tables based on some of these new values through other models and libraries throughout the system - very complex, eh? lol
We now get to the main part of my question:
For data integrity I want to use MySQL transactions. This is where I run into a "what's best / what's best practice" scenario...
We write Model A to return FALSE if the car is not found or there is an SQL error.
This is fine for Controller A, as it just wants to know whether there was an error and bom out, so we just check the return value and bom - fine.
We now get to Controller B. Say Controller B does some database updating before the Model A function is called, which we need to roll back on error, so we need to use a transaction. Now this is where I get to my problem: do I leave Model A returning a value and just check it, or do I change it to throw an exception, with the knock-on effect of then also having to rewrite Controller A, as we would now need to catch the exception? Then (not done yet ;o)) do I roll back in the catch inside the model (but how do we know whether a transaction has been used or not?), or do I catch and re-throw, or let it bubble up to the controller's catch and do the rollback there?
What I'm trying to say is: if I have lots of models and controllers with database interaction, should I just make them throw exceptions and then wrap all my other code (e.g. controller functions) in try/catch blocks in case a model or library function ever throws? Or do I make the models "self contained" so that they tidy up and handle their own problems - but then what do I do about rolling back a transaction if (for this particular call) one was open (as per my example above, a transaction isn't opened every time)? In that case I would have to make all my functions return something and then check it in the controller, as that is the only place that knows whether there is an open transaction or not...
So, to clarify: I can use a try/catch to catch and roll back in a controller, that's OK, but how do I do this from "further down", e.g. in a model or library function that could be called either during a transaction or as a normal auto-commit MySQL call?
An explained answer would be great (as I like to understand why I am doing something), but failing that, a vote for your favourite of the following solutions (well, the solutions I can see) would do:
1) Make all model and library functions always return a value; the controller then either just boms or does a try/catch to roll back where necessary - but this means I would have to check the return value of every model and library function everywhere they are used.
2) Make all model and library functions throw exceptions (say, on SQL error) and wrap every controller (which calls the model and library functions) in a try/catch, where the catch either just boms or rolls back if necessary...
Also, please note that "bom" means push the user somewhere or show a pretty error (before someone says "it's bad practice to just let your application die..." lol).
I hope you get where I'm coming from here, and sorry for the long loooooong question.
Thanks in advance
Ben
[There's a theoretical problem implicit in the "For data integrity I want to use MySQL transactions"... since MySQL historically hasn't been very ACID - PostgreSQL and Oracle both provide stronger support for ACID. However, back to the real question...]
Both your (1) and (2) focus on exceptions versus failure-return values, but my impression is that this isn't the key part of detangling exceptions, error returns, and open transactions (and some databases support SQL exceptions as well). Instead, I'd focus on keeping the transaction state tied to the nesting of the functions manipulating the model. Here are some thoughts along this line:
You will probably always have error returns from some library functions anyway, so having Model A return FALSE isn't really breaking the paradigm, nor is there anything particularly troublesome about a mix of error returns and exceptions. However, error returns MUST bubble up correctly - or be converted to exceptions if they go beyond what can be addressed locally.
Nested transactions are the most obvious way to have one controller start a database manipulation and still call other parts of the app that also use transactions. This allows a failed sub-sub-function to abort just its own sub-transaction and take either the error-return or the exception approach to bubbling the error up on the non-SQL side, while the closed sub-transactions still maintain reasonable matching state. This usually needs to be simulated in code outside of the database (it is essentially what Django does).
Your code could start a new (potentially large) transaction, and keep track of the fact that it's already open to keep the sub-sub-functions in your code from trying to reopen it.
In some databases, code can detect whether a transaction is already open based on the database session state, allowing you to check the DB session state instead of tracking it in code.
Both of the above allow one to use savepoints to simulate truly nested transactions.
One must be very careful to avoid issuing SQL statements that cause implicit commits (CREATE TABLE, for example). MySQL probably deserves a lot more caution around this issue than, say, PostgreSQL.
One way to implement the one-big-transaction approach is to have a high-level function that initiates the transaction and then itself calls the top of whatever Controller B needs to do. This makes either bubbling up errors or having a special abort-transaction exception pretty straightforward. Only the top function would call commit rather than abort if no subfunction failed and no exception was caught; subfunctions wouldn't call commit.
Conversely, you could have all of your functions pay attention to a transactional depth counter kept on the non-SQL side (in your code), although this is harder to set up in some languages than others (it's pretty easy using decorators in Python, for example). This would allow any of them to call commit if they were done and the transactional depth was zero - see the sketch below.
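As a rough PHP sketch of the depth-tracking idea (assuming PDO and a storage engine that supports SAVEPOINT, such as InnoDB; the class and its name are hypothetical):
class TransactionManager
{
    private $pdo;
    private $depth = 0;

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    public function begin()
    {
        if ($this->depth === 0) {
            $this->pdo->beginTransaction();                    // real transaction
        } else {
            $this->pdo->exec("SAVEPOINT sp{$this->depth}");    // simulated nested transaction
        }
        $this->depth++;
    }

    public function commit()
    {
        $this->depth--;
        if ($this->depth === 0) {
            $this->pdo->commit();                              // only the outermost level really commits
        } else {
            $this->pdo->exec("RELEASE SAVEPOINT sp{$this->depth}");
        }
    }

    public function rollBack()
    {
        $this->depth--;
        if ($this->depth === 0) {
            $this->pdo->rollBack();
        } else {
            $this->pdo->exec("ROLLBACK TO SAVEPOINT sp{$this->depth}");
        }
    }
}
A model or library function can then call begin()/commit()/rollBack() without caring whether a controller higher up already opened a transaction.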
Hope this helps someone :-)

Is it good/common sense programming practice to make all methods return a MyResult object in PHP?

Working through several layers of a program designed with an MVC architecture, I find that I would like more information about a deeper layer's method return result, and I can't always anticipate when I'll need this information. And - for abstraction's sake - I might not want that method writing to the application-specific log (the method could be used in a different program), or having application-dependent behaviour like the layers above it.
For instance, a given utility function might run several pre-requisite checks before executing an action, and one of those checks might fail. If I return false, the caller doesn't know what happened. If I return false and log what happened to the application log, I'm binding that function to application-specific behaviour.
The question is: is it good/common practice to implement a little class called MyResult and have it carry the response status (ok/false), a message, an optional integer code, and a placeholder (array or object) where the caller could access the returned data? This MyResult class would be used throughout the whole system and would be a common "dialect" between all methods and their callers. All methods would then return an instance of MyResult, all of the time.
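Roughly, the class I have in mind would look something like this (just a rough sketch):
class MyResult
{
    public $ok;      // true/false status
    public $message; // human-readable explanation
    public $code;    // optional integer code
    public $data;    // optional payload (array or object)

    public function __construct($ok, $message = '', $code = 0, $data = null)
    {
        $this->ok      = $ok;
        $this->message = $message;
        $this->code    = $code;
        $this->data    = $data;
    }
}

// Every method in the system would then return one of these:
function doSomething()
{
    $prerequisitesMet = false; // stand-in for a real check
    if (!$prerequisitesMet) {
        return new MyResult(false, 'Pre-requisite X not met', 101);
    }
    return new MyResult(true, '', 0, ['value' => 42]);
}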
Could you give an example? It seems a bit - though I may be mistaken - as if you have methods that you are using statically (even if they are not implemented/called like that, they could have been). The basic example of the table object that can paint itself is called like so: $myTable->paint();. It can return whether it worked or not (true/false), but anything else (like logging) is the table's own business, and neither your calling method nor the return value should have anything to do with that, as far as I'm concerned.
Maybe I'm having a hard time understanding what situation you are going to use this for, but if you want to send messages around for some purpose that requires messages (or events etc.), you should define those; I don't see any merit in defining a default returnObject to pass method-call results around.
For errors you have two options: exceptions (that is, things you really don't expect to happen and which should halt execution) and errors (expected but unwanted behaviour). The first should be left alone; the second can be tricky, but I'd say the object itself should contain a state which makes clear what happened.
That's what exceptions are for. You don't have to over-do them like Java, but they exist because error codes suck.
If a framework does not offer a specific feature you need, there is no other way than to take care of it on your own - especially if you need something that runs counter to the aims of the framework and so would never make it in.
However, many frameworks offer places in which you can extend them, some more flexible than others. So, if possible, I would check whether you can still implement the feature you need as a type of add-on, plugin or helper that stays within the framework's terrain.
If that is not possible, I would say it's always valid to do whatever you want to do. Use the part of the framework that is useful for you.

Simple error checking in PHP class function chaining?

I've found some limited use for chaining class methods, say $class->setUser('foo')->getInfo() (bad example), although I am having trouble understanding how to handle any errors that arise from one of the calls in the chain.
If setUser(), for example, hit an error and returned false, it would not return $this and would not allow another function to be called, thus producing an error.
What I have actually just realised (and correct me if this is wrong) is: would throwing an exception on an error in setUser() prevent the following getInfo() call from running and emitting an error?
This would be useful knowledge to at least have, so that I can debug code that uses chaining even if I am not to use it.
Yes, throwing exceptions would probably be a good idea in this case. You'll be able to pinpoint where the error happened in the chain.
Here's an example of how that could work:
try
{
$class->setUser('foo')->getInfo();
}
catch(UnknownUser $ex)
{
// setUser failed
}
catch(CannotFetchInfo $ex)
{
// getInfo failed
}
As with any chain: if one element gets broken, the whole chain gets broken.
So if you're chaining objects, on the one hand this can be very convenient; on the other hand, if things tend to fail due to errors (e.g. remote connections, files, etc.), chaining becomes awkward.
When something does not return the expected type of response but signals a failure instead, exceptions can help, but - as the chain gets broken - they are of limited use.
However, I think that throwing exceptions is the best thing you can do for error handling (in chains and when developing your own code).
The benefit of throwing an Exception is this: you don't need to destroy the syntax that makes the chain accessible.
Let's say you have made an error in your code. It's better to throw an Exception, as you need to fix it anyway.
Or let's say the data layer has been coded in such a way that it fails too often. Throw an exception; then you know which part needs more love.
In your general-use code, the code that has the chaining, these exceptions will interrupt you (you can always catch them, and by the way you can turn errors into exceptions as well), but they will point you to a cause of error you might not yet have thought enough about.
So,
Exceptions are a way of error handling for chains
Exceptions help you to locate various sources of errors while you can maintain the chain syntax for a higher level of data access.
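(As an aside, the "turn errors into exceptions" trick mentioned above is usually done with an error handler that rethrows PHP errors as ErrorException - a minimal version:)
set_error_handler(function ($severity, $message, $file, $line) {
    // rethrow every reported PHP error as an exception
    throw new ErrorException($message, 0, $severity, $file, $line);
});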
Regardless of how well chaining is done, however, there comes a point where it no longer makes sense (should I add "as with everything"? ;)). I wouldn't say I'm a real fanboy of chaining, but even as a non-fanboy I use the principle from time to time in my own code. It can make for easy-to-write code, so make use of exceptions for the error handling; I haven't run into any "real" (tm) problems with it. It's much better than breaking your chain with a false or some crap like that.
Theoretically you can return an object that accepts any property/method set/get via overloading, but even if that is fun to play with, I think it only introduces more complexity than it helps when dealing with actual, real-life errors.
So if you chain into an error, you actually see the error when you use an exception. That's what exceptions were made for: interrupting the standard processing of your application because of an error (or a signal, which works as well).
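To tie this back to the example further up, a chainable class that throws instead of returning false might look roughly like this (the class and its internals are made up for illustration):
class UnknownUser extends Exception {}
class CannotFetchInfo extends Exception {}

class Account
{
    private $user;

    public function setUser($name)
    {
        if (!$this->userExists($name)) {
            throw new UnknownUser("No such user: $name"); // the chain stops here
        }
        $this->user = $name;
        return $this; // keep the chain alive on success
    }

    public function getInfo()
    {
        $info = $this->fetchInfo($this->user);
        if ($info === false) {
            throw new CannotFetchInfo("Could not fetch info for {$this->user}");
        }
        return $info;
    }

    private function userExists($name)
    {
        return $name === 'foo'; // stubbed for the example
    }

    private function fetchInfo($name)
    {
        return ['name' => $name]; // stubbed for the example
    }
}
With something like this, the earlier try/catch example pinpoints exactly which step in the chain failed.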

How to create write-once properties?

I'm stuck with a general OOP problem, and can't find the right way to phrase my question.
I want to create a class that gives me an object which I can write to once, persist to storage, and then be unable to change the properties of (for example, invoice information: once written to storage, it should be immutable). Not all information is available immediately; during the lifecycle of the object, information is added.
What I'd like to avoid is having exceptions flying out of setters when trying to write, because it feels like you're offering a contract you don't intend to keep.
Here are some ideas I've considered so far:
Pass in any write-information in the constructor. The constructor throws an exception if the data is already present.
Create multiple classes in an inheritance tree, with each class representing the entity at some stage of its lifecycle, with appropriate setters where needed. Add a collective interface for all the read operations.
Silently discarding any inappropriate writes.
My thoughts on these:
1. Makes the constructor highly unstable, generally a bad idea.
2. Explosion of complexity, and doesn't solve the problem completely (you can call the setter twice in a row, within the same request)
3. Easy, but same problem as with the exceptions; it's all a big deception towards your clients.
(Just FYI: I'm working in PHP5 at the moment - although I suspect this to be a generic problem)
Interesting problem. I think your best choice was #1, but I'm not sure I'd do it in the constructor. That way the client code can choose what it wants to do with the exception (suppress it, handle it, pass it up to the caller, etc.). And if you don't like exceptions, you could move the writing to a write() method that returns true if the write was successful and false otherwise.
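A rough sketch of that last idea (a hypothetical write() method instead of throwing from setters):
class Invoice
{
    private $data = [];

    public function write($field, $value)
    {
        if (array_key_exists($field, $this->data)) {
            return false; // already written once - refuse, and let the caller decide
        }
        $this->data[$field] = $value;
        return true;
    }

    public function get($field)
    {
        return isset($this->data[$field]) ? $this->data[$field] : null;
    }
}

// The caller chooses how to react to a rejected write:
$invoice = new Invoice();
$invoice->write('number', 'INV-001');
if (!$invoice->write('number', 'INV-002')) {
    // handle it here: log, raise, or ignore - whatever fits the application
}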

Error handling in the model, or in the controller?

I asked around on various IRC channels but was unable to get an answer with a definitive explanation behind it. Should errors (pertaining to the model, such as transaction failures) be handled in the model, or in the controller?
Thanks in advance for any help.
EDIT
Well, the confusing thing is that my code (in the model) already looks something like this:
try
{
// Connect to MongoDB
// Fetch a record
}
catch (MongoConnectionException $e)
{
// Handle this error
}
catch (MongoException $e)
{
// Handle this error
}
So, should I throw my own exceptions based on the exceptions MongoDB throws? Or should I let these exceptions bubble up to the controller directly?
Thanks!
The correct answer is "Both", but mostly in the Model.
The appropriate thing for your controller to do is simply catch some exception that the model throws, and handle outputting a nice "whups" message. Depending on how you're structuring your models, it might be appropriate for the controller to do some logging.
Anything other than catching the exception, maybe writing to a log (if your model infrastructure doesn't do it), and displaying a pretty error, does not belong in your controller.
Errors such as a transaction failure - and what to do in such cases - are business logic issues. Thus, they should be handled in the model and appropriate notifications passed back up to the controller.
Fat model, skinny controller.
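A rough sketch of that split (the domain-level exception and method names here are made up for illustration):
class RecordLoadException extends Exception {}

class MyModel
{
    public function getRecord($id)
    {
        try {
            // connect to MongoDB and fetch the record...
        } catch (MongoConnectionException $e) {
            // the model handles what it can (cleanup, logging), then rethrows
            // something that actually means something to the controller
            throw new RecordLoadException("Could not connect to storage", 0, $e);
        } catch (MongoException $e) {
            throw new RecordLoadException("Could not fetch record $id", 0, $e);
        }
    }
}

// In the controller:
try {
    $record = $model->getRecord($id);
} catch (RecordLoadException $e) {
    // log if appropriate, then show the user a friendly "whups" page
}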
In most cases, you should throw or pass exceptions to the caller/receiver AKA Controller or BLL.
It's the controller's job to process actions, not the model's.
It's the view's job to display a message box (or whatever), not the model's.
You can't HANDLE exceptions in the model, for real... you can only log them or throw them.
Scott Guthrie, for ASP.NET C#, suggests using the Controller as the exception handler. He also suggests setting up helper objects and handlers for the project. This in turn allows you to continue your development as normal.
Note, however, that MVC in PHP is still in its early stages of implementation, so this may not be perfect.
I do think that once you have decided how to handle it, you should stick with it and follow that pattern consistently.
The ideas behind MVC frameworks are quite simple and extremely flexible. The idea is that you have a single controller (such as index.php) that controls the launch of applications within the framework based on arguments in the request. This usually includes, at a minimum, an argument defining which model to invoke, an event, and the usual GET arguments. From there the controller validates the request (authentication, valid model, request sanitization, etc.) and runs the requested event.
At present there are only really two true frameworks... and this could be a problem for coding, support and future releases. There are, though, a lot of frameworks that extend themselves to support MVC.
By "early stages" I mean that the current frameworks, solutions and support are limited as a result of rushed deliveries and poor documentation. Additionally, I'm suggesting what works for me and what has worked for me in the past.
