Updating database on __destruct()? - php

Do you think it's a good idea?
Let's say you have an application component that other components use to retrieve and update data in the db. It's basically a class with get(), set(), and update() methods.
Would it be a good idea for that component to update (or set) data only in its own properties when called, and then update the db in __destruct()? Or should it update the db directly on each set/update call?

Updating the database on object destruction smells to me a little bit like a software side effect. That is, an action that takes place in an unexpected and somewhat non-explicit place. It would not be obvious from viewing your code that a database action is happening when __destruct() is called, even if you call it explicitly. Future code maintainers (including yourself) could be easily confused when trying to hunt down a bug involving inconsistent data, but not seeing any calls to the database or method calls resembling data interactions when viewing the code.
I would advise against it.

The PHP manual warns: "Attempting to throw an exception from a destructor (called in the time of script termination) causes a fatal error."
So what happens when an exception occurs? Anyway, I think this is not a good idea: you can't control the workflow, and it can easily lead to debugging hell.
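To make the contrast concrete, here is a minimal sketch (the CarRecord class, the cars table, and its columns are invented for illustration) of an explicit save() versus the same write hidden in __destruct():

    class CarRecord
    {
        private $db;
        private $id;
        private $color;

        public function __construct(PDO $db, $id)
        {
            $this->db = $db;
            $this->id = $id;
        }

        public function setColor($color)
        {
            $this->color = $color;   // only object state changes here, no I/O
        }

        // Explicit, visible persistence: callers see the write happen,
        // can wrap it in a transaction, and can catch its exceptions.
        public function save()
        {
            $stmt = $this->db->prepare('UPDATE cars SET color = ? WHERE id = ?');
            $stmt->execute(array($this->color, $this->id));
        }

        // The alternative under discussion: the same write hidden in the
        // destructor. It runs at an unpredictable moment, and an exception
        // thrown here during script shutdown becomes a fatal error.
        // public function __destruct() { $this->save(); }
    }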

Related

Logic inside constructor

Is it a good idea to have logic inside __construct()?
class someClass
{
    public function __construct()
    {
        // some logic here
    }
}
So far I thought it was fine; however, this reddit comment suggests the opposite.
As @Barry wrote, one of the reasons is related to unit testing, but that's just a side effect.
Let's take the worst-case scenario: you have a "class" which only has a constructor (you have probably seen such examples). So... why was it even written as a class? You cannot alter its state, you cannot ask it to perform any task, and you have no way to check that it did what you wanted. You could just as well have used a plain included file. This is just bad.
Now for a more reasonable example: let's assume you have a class which has some validation checks in its constructor and makes a new DB connection there, and then also has some public methods for performing various tasks.
The most obvious problem is the "makes a new DB connection" - there is no way to affect or prevent this operation from outside the class. And that new connection is going off to do who-knows-what (probably loading some configuration and trying to throw exceptions). It also constitutes a hidden dependency, for which you have no indication, without inspecting the class's code.
And there is a similar problem with code that does validation and/or transformation of passed parameters: it constitutes hidden logic (and thus violates PoLA, the Principle of Least Astonishment). It also makes your class harder to extend, because you will probably want to retain some of that validation functionality while replacing another part, and you don't have that option, since all of that code runs whenever you create a new instance.
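To make the hidden-dependency point concrete, here is a minimal sketch (class names and connection details are invented for illustration). The first constructor decides how to connect, with no way to prevent or replace that from outside; the second only stores a collaborator handed to it:

    // Hidden dependency: the constructor configures and opens the connection.
    class ReportBad
    {
        private $db;

        public function __construct()
        {
            $this->db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        }
    }

    // Injected dependency: construction only stores what it is given,
    // so any PDO-compatible object (including a test double) works.
    class Report
    {
        private $db;

        public function __construct(PDO $db)
        {
            $this->db = $db;
        }

        public function rowCount($table)
        {
            // $table must come from trusted code, not user input
            $stmt = $this->db->query('SELECT COUNT(*) FROM ' . $table);
            return (int) $stmt->fetchColumn();
        }
    }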
Bottom line is this: logic in a constructor is considered a "code smell". It's not a mortal sin (like using eval() on a global variable), but it's a sign of bad design.
No, it isn't a good idea, for the sake of automated testing. When testing you want to be able to "mock" objects so that you can control the logic, especially in terms of interfaces. If you place logic in the constructor, it is very hard to test, because you must use the real object.
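As a sketch of what that buys you in a test (the interface and class names are invented, building on the injection idea above), the dependency arrives through the constructor, so a test can substitute an in-memory fake and never touch a real database:

    interface CarStorage
    {
        public function find($id);
    }

    class CarService
    {
        private $storage;

        public function __construct(CarStorage $storage)
        {
            $this->storage = $storage;
        }

        public function exists($id)
        {
            return $this->storage->find($id) !== null;
        }
    }

    // A hand-rolled test double: no DB connection involved.
    class InMemoryCarStorage implements CarStorage
    {
        private $rows;

        public function __construct(array $rows)
        {
            $this->rows = $rows;
        }

        public function find($id)
        {
            return isset($this->rows[$id]) ? $this->rows[$id] : null;
        }
    }

    $service = new CarService(new InMemoryCarStorage(array(7 => array('id' => 7))));
    var_dump($service->exists(7));   // bool(true), tested without a database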
Here is a fantastic talk with much more detail on why not to put logic in a constructor (a Google Tech Talk by Miško Hevery):
https://www.youtube.com/watch?v=RlfLCWKxHJ0
I think this question is a little unclear, because I don't think __construct() is inherently a bad place for logic; the question is what kind of logic you have there. Some kinds of logic can be placed in a constructor, while other kinds must not be. For example, Symfony's Response: its constructor contains logic, but that logic is necessary for the object, and the constructor performs no implicit actions. It doesn't print content to the output or anything like that - so it is a good example (in my opinion).
It is also important to understand what your object must do; if it is to be an immutable object, the constructor can look a little different.
It's also important to follow SOLID and an appropriate design pattern.
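A sketch of the kind of constructor logic being defended here (a made-up value object, loosely in the spirit of the Response example): it validates and assigns, protecting the object's invariants, but performs no I/O and no implicit actions:

    class HttpStatus
    {
        private $code;

        public function __construct($code)
        {
            // Guard clause: validation that keeps the object valid is
            // reasonable constructor logic; side effects (output, I/O) are not.
            if ($code < 100 || $code > 599) {
                throw new InvalidArgumentException("Invalid status code: $code");
            }
            $this->code = $code;
        }

        public function getCode()
        {
            return $this->code;
        }
    }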

Should an object save to the database on setting a property (e.g. a Blog Post object)

I am beginning to get into more OO PHP and am also writing tests for those objects. My main question is: if I have a Blog_Post object and I call $post->setCategory( 'Foo' ), should doing that save to the database directly?
The reason I ask is unit testing: I often don't want to use the DB for this kind of thing, because that's not what I am testing.
I have seen people suggest doing something like
function __construct( PDO $db )
to pass in the database object to be used, and then use a mock when testing. However, I really don't like the idea of having to instantiate my database objects all the time in whatever is calling the Blog_Post class.
This is in the scope of WordPress, which fundamentally does not have an OO approach - with my current Blog_Post, the setter method would just call the DB (via a global $wpdb!! (I know!)).
Really I wanted to know: what's the general pattern, the path of least resistance, for something like this? Would Blog_Post just write itself, or would one perhaps add a "save()" method to actually push all set properties to the DB? Or perhaps set a flag on the object, "setSaveToDB(false)", before calling the setter.
Thanks, hope this makes some sense!
Short answer: No. Delegate loading and saving to another layer.
If you're looking for any kind of performance, a ->save() method on the object is preferable to saving whenever a property is set. If you write to the database every time a property is set, an object with 7 properties will make 7 writes to the db; with a save() method, a single write covers all 7 properties. Since opening the db connection is the slowest part of a db operation, you want to minimize the number of separate reads/writes. In general terms, opening a connection and writing one row takes about the same time as opening a connection and writing 100,000 rows: SQL is fast when handling large data sets, but opening the connection and finding the right place to read/write is quite slow.
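A sketch of the single-write idea (the posts table and its columns are invented for illustration): the setters only record values, and save() issues one UPDATE covering everything that changed:

    class Blog_Post
    {
        private $db;
        private $id;
        private $pending = array();   // column => value, not yet written

        public function __construct(PDO $db, $id)
        {
            $this->db = $db;
            $this->id = $id;
        }

        public function setCategory($category)
        {
            $this->pending['category'] = $category;   // no DB call here
        }

        public function setTitle($title)
        {
            $this->pending['title'] = $title;
        }

        // One UPDATE for all changed columns instead of one write per setter.
        public function save()
        {
            if (!$this->pending) {
                return;
            }
            $assignments = array();
            foreach (array_keys($this->pending) as $column) {
                $assignments[] = "`$column` = ?";   // column names are our own, not user input
            }
            $sql = 'UPDATE posts SET ' . implode(', ', $assignments) . ' WHERE id = ?';
            $params = array_values($this->pending);
            $params[] = $this->id;
            $this->db->prepare($sql)->execute($params);
            $this->pending = array();
        }
    }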
Hope this helps :-)

What should be an instance of a class in php?

In PHP 5.2.x, MySQL 5.x
I'm having a bit of an issue wrapping my head around what should and should not be an instance of a class in PHP, because objects are not persistent once the page is rendered.
Say I have a list of comments. To me, it would make sense for every comment to be its own object, because I can call actions on them and they hold properties. If I were doing this in another language (one with persistent state that can be interacted with), I would do it that way.
But it seems wasteful, because to do that I'd have a loop calling new, and it would probably mean accessing the database for each instance (also bad).
But maybe I'm missing something.
PHP just seems different in how I think about classes and objects. When should something be a class instance, and when not?
This is a subjective issue, so I'll try to gather my thoughts coherently:
Persistence in PHP has sort of a different meaning. Your thinking that each comment should be an object because comments have actions which can be performed on them seems correct. The fact that the objects won't persist across a page load isn't really relevant. It isn't uncommon in PHP to use an object in one script, which gets destroyed when the script completes, and then re-instantiate it on a subsequent page load.
Object-oriented programming provides (among other things) code organization and code reuse. Even if an object is only really used once during the execution of a script, if defining its class aids in program organization, you are right to do so.
You usually needn't worry about resource wastefulness until it starts to become a problem; if your server is constantly taxed to where it degrades your user experience or limits your expansion, then it is time to optimize.
Addendum:
Another reason for defining a class for your comments is that doing so could pay dividends later when you need to extend the class. Suppose you decide to implement something like a comment reply. The reply is itself just a comment, but holds some extra attributes about the comment to which it refers. You can extend the Comment class to add those attributes and additional functionality.
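A sketch of that extension (class and property names are invented for illustration); it also shows how a whole comment list can be hydrated from a single query, which answers the worry about one database hit per instance:

    class Comment
    {
        public $id;
        public $author;
        public $body;

        public function excerpt($length = 80)
        {
            return substr($this->body, 0, $length);
        }
    }

    // A reply is itself just a comment, plus a pointer to its parent.
    class CommentReply extends Comment
    {
        public $parentId;

        public function isReplyTo(Comment $comment)
        {
            return $this->parentId === $comment->id;
        }
    }

    // One query hydrates the whole list; object creation itself is cheap.
    $comments = $db->query('SELECT id, author, body FROM comments')
                   ->fetchAll(PDO::FETCH_CLASS, 'Comment');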

Return values, throw exceptions and rolling back transactions

The whole "when to throw an exception or return a value" question has been asked a lot (see the following for just one example):
Should a retrieval method return 'null' or throw an exception when it can't produce the return value?
and I completely agree with the answers in the main.
Now my question arises from adding a little more context to the above, applying it to a more complex system. I'll try to keep this as brief and simple as possible.
Right, we have an example MVC PHP application:
Model A has a function get_car($id) which returns a car object.
Controller A has a simple function for, say, showing a car to the user.
Controller B, however, has a complex function that, say, gets the car, modifies it (through one of Model A's set functions) and also updates other tables based on some of these new values, through other models and libraries throughout the system - very complex, eh?
Now we get to the main part of my question:
For data integrity I want to use MySQL transactions. This is where I run into a "what's best / what's best practice" scenario...
We write Model A to return FALSE if the car is not found or there is an SQL error.
This is fine for Controller A, as it just wants to know whether there was an error and bom out, so we just check the return value and bom - fine.
Now we get to Controller B. Controller B, say, does some database updating before the Model A function is called, which we need to roll back on error, so we need to use a transaction. And this is where I hit my problem: do I leave Model A returning a value and just check it, or do I change it to throw an exception - with the knock-on effect of also having to rewrite Controller A, which now needs to catch the exception? Then (not done yet ;o)) do I roll back in the catch inside the model (but how does the model know whether a transaction is in use?), or do I catch and re-throw, or let it bubble up to the controller's catch and do the rollback there?
What I'm trying to say is: if I have lots of models and controllers with database interaction, should I just make them all throw exceptions and then wrap all my other code (e.g. controller functions) in try/catches in case a model or library function ever throws? Or do I make the models "self-contained", so they tidy up and handle their own problems - but then what do I do about rolling back a transaction if one was open for this particular call (as per my example above, a transaction isn't opened every time)? In that case I would have to make all my functions return something and then check it in the controller, as the controller is the only place that knows whether there is an open transaction...
So to clarify: I can use a try/catch to catch and roll back in a controller - that's fine - but how do I do this from "further down", e.g. in a model or library function that could be called both during a transaction and as a plain auto-commit MySQL call?
An explained answer would be great (as I like to understand why I am doing something), but failing that, a vote for your favourite of the following solutions (well, the solutions I can see):
1) Make all model and library functions always return a value; the controller then either just boms or does a try/catch to roll back where necessary - meaning I would have to check the return value of every model and library function everywhere they are used.
2) Make all model and library functions throw exceptions (say, on SQL error) and wrap every controller (which calls the model and library functions) in a try/catch, where the catch either just boms or rolls back if necessary...
Also, please note that "bom" means pushing the user somewhere else or showing a pretty error (before someone says "it's bad practice to just let your application die..." lol).
I hope you get where I'm coming from here, and sorry for the long, loooooong question.
Thanks in advance
Ben
[There's a theoretical problem implicit in the "For data integrity I want to use MySQL transactions"... since MySQL historically hasn't been very ACID - PostgreSQL and Oracle both provide stronger support for ACID. However, back to the real question...]
Both your (1) and (2) focus on exceptions versus failure-return values, but my impression is that this isn't the key part of untangling exceptions, error returns, and open transactions (some databases support SQL exceptions as well). Instead, I'd focus on keeping the transaction state tied to the nesting of the functions manipulating the model. Here are some thoughts along this line:
You will probably always have error returns from some library functions anyway, so having Model A return FALSE isn't really breaking the paradigm, nor is there anything particularly troublesome about a mix of error returns and exceptions. However, error returns MUST bubble up correctly - or be converted to exceptions if they go beyond what can be locally addressed.
Nested transactions are the most obvious way to have one controller start a database manipulation and still call other parts of the app that also use transactions. This allows a failed sub-sub-function to abort just its own part of the transaction and take either the error-return or exception approach to bubbling the error up on the non-SQL side, while the closed sub-transactions still maintain a reasonably consistent state. This usually needs to be simulated in code outside of the database (and that is essentially what Django does).
Your code could start a new (potentially large) transaction, and keep track of the fact that it's already open to keep the sub-sub-functions in your code from trying to reopen it.
In some databases, code can detect whether a transaction is already open based on the database session state, allowing you to check the DB session state instead of tracking it in code.
Both of the above allow one to use savepoints to simulate truly nested transactions.
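A sketch of that simulation (a hypothetical helper, assuming PDO against a database that supports SAVEPOINT, as MySQL/InnoDB does): the depth is tracked in your code, and only the outermost level issues the real begin/commit:

    class TransactionManager
    {
        private $db;
        private $depth = 0;

        public function __construct(PDO $db)
        {
            $this->db = $db;
        }

        public function begin()
        {
            if ($this->depth === 0) {
                $this->db->beginTransaction();                  // real transaction
            } else {
                $this->db->exec('SAVEPOINT sp' . $this->depth); // simulated nesting
            }
            $this->depth++;
        }

        public function commit()
        {
            $this->depth--;
            if ($this->depth === 0) {
                $this->db->commit();
            } else {
                $this->db->exec('RELEASE SAVEPOINT sp' . $this->depth);
            }
        }

        public function rollBack()
        {
            $this->depth--;
            if ($this->depth === 0) {
                $this->db->rollBack();                          // abort everything
            } else {
                // abort only this level; the outer transaction stays open
                $this->db->exec('ROLLBACK TO SAVEPOINT sp' . $this->depth);
            }
        }
    }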
One must be very careful to avoid issuing SQL statements that cause implicit commits (CREATE TABLE, for example). MySQL probably deserves a lot more caution around this issue than, say, PostgreSQL.
One way to implement the one big transaction approach is to have high-level function that initiates the transaction and then itself calls the top of whatever Controller B needs to do. This makes either bubbling up errors or having a special abort-transaction exception pretty straightforward. Only the top function would call commit rather than abort if no subfunction failed and no exception was caught. Subfunctions wouldn't call commit.
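For instance (a hypothetical top-level helper using PDO), the controller's work is passed in as a callable, and the single commit/rollback decision lives in one place:

    function runInTransaction(PDO $db, $work)
    {
        $db->beginTransaction();
        try {
            $result = call_user_func($work);   // whatever Controller B needs to do
            $db->commit();                     // only the top level commits
            return $result;
        } catch (Exception $e) {
            $db->rollBack();                   // any uncaught failure aborts everything
            throw $e;                          // bubble up for the "bom" layer to handle
        }
    }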
Conversely, you could have all of your functions pay attention to a transactional depth counter implemented on the non-SQL side (in your code), although this is harder to set up in some languages than in others (it's pretty easy using decorators in Python, for example). This would allow any of them to call commit if it had finished its work and the transactional depth was at zero.
Hope this helps someone :-)

How to create write-once properties?

I'm stuck on a general OOP problem and can't find the right way to phrase my question.
I want to create a class that gives me an object which I can write to once, persist to storage, and then no longer change (for example, invoice information: once written to storage, it should be immutable). Not all information is available immediately; information is added during the object's lifecycle.
What I'd like to avoid is having exceptions flying out of setters when trying to write, because it feels like you're offering a contract you don't intend to keep.
Here are some ideas I've considered so far:
Pass in any write-information in the constructor. The constructor throws an exception if the data is already present.
Create multiple classes in an inheritance tree, with each class representing the entity at some stage of its lifecycle, with appropriate setters where needed. Add a collective interface for all the read operations.
Silently discard any inappropriate writes.
My thoughts on these:
1. Makes the constructor highly unstable; generally a bad idea.
2. An explosion of complexity, and it doesn't completely solve the problem (you can still call a setter twice in a row within the same request).
3. Easy, but it has the same problem as the exceptions: it's all a big deception towards your clients.
(Just FYI: I'm working in PHP 5 at the moment, although I suspect this is a generic problem.)
Interesting problem. I think your best choice was #1, but I'm not sure I'd do it in the constructor. That way the client code can choose what it wants to do with the exception (suppress them, handle them, pass them up to the caller, etc...). And if you don't like exceptions, you could move the writing to a write() method that returns true if the write was successful and false otherwise.
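A sketch of that suggestion (a hypothetical class; storage is represented by a callable to stay generic): properties can be set freely until write() succeeds once, after which the object refuses further writes without throwing:

    class Invoice
    {
        private $data = array();
        private $written = false;

        public function set($field, $value)
        {
            if ($this->written) {
                return false;          // or throw, if you prefer the exception contract
            }
            $this->data[$field] = $value;
            return true;
        }

        public function get($field)
        {
            return isset($this->data[$field]) ? $this->data[$field] : null;
        }

        // Persist once; afterwards the object is effectively frozen.
        public function write($persist)
        {
            if ($this->written) {
                return false;
            }
            if (!call_user_func($persist, $this->data)) {
                return false;          // storage failed, so the object stays writable
            }
            $this->written = true;
            return true;
        }
    }

    // Usage: $invoice->write(function ($data) { /* save $data to storage */ return true; });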
