At the risk of being flamed... what advantage is there to enforcing calls to methods rather than functions when the test context is already implicit?
Considering that PHP's syntax for calling methods is so ugly, why would PHPUnit's creators have enforced their use?
If the framework had set a global "currentTestCase" object and then transparently associated failed asserts with that object we could be writing:
assertEquals("blah", $text);
as opposed to the equivalent, but verbose:
$this->assertEquals("blah", $text);
What exactly do we get by using OO in this context?
Please enlighten me.
Because PHPUnit is derived from xUnit and that's how xUnit does it.
Why does xUnit do it that way? I'm glad you asked. The original reason, as Robert points out, is that xUnit comes from Smalltalk and was popularized by JUnit in Java. Both are OO-or-nothing languages so they had no choice.
This is not to say there are not other advantages. OO tests can be inherited. This means if you want to test a subclass you can run all the parent's tests and just override the handful of test methods for the behaviors you've changed. This gives you excellent coverage of subclasses without having to duplicate test code.
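For example, a minimal sketch of that (all class names here are invented):
class ParentThing
{
    public function label() { return 'thing'; }
    public function count() { return 0; }
}

class ChildThing extends ParentThing
{
    public function label() { return 'child thing'; }   // the one changed behavior
}

class ParentTest extends PHPUnit_Framework_TestCase
{
    protected function makeSubject() { return new ParentThing(); }

    public function testDefaultLabel()
    {
        $this->assertEquals('thing', $this->makeSubject()->label());
    }

    public function testStartsAtZero()
    {
        $this->assertEquals(0, $this->makeSubject()->count());
    }
}

class ChildTest extends ParentTest
{
    protected function makeSubject() { return new ChildThing(); }

    // testStartsAtZero() is inherited and re-run against ChildThing;
    // only the changed behavior needs an override.
    public function testDefaultLabel()
    {
        $this->assertEquals('child thing', $this->makeSubject()->label());
    }
}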
It's easy to add and override assert methods in PHPUnit. Just subclass PHPUnit_Framework_TestCase, write your own assert methods, and have your test classes inherit from your new subclass. You can also provide default setUp() and tearDown() methods.
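A rough sketch of that idea (assertValidEmail is an invented custom assertion, not part of PHPUnit): a project-wide base class adds an assertion plus default fixture methods, and concrete test classes inherit it.
abstract class My_TestCase extends PHPUnit_Framework_TestCase
{
    protected $fixtureDir;

    protected function setUp()
    {
        // Default fixture every test in the project gets for free.
        $this->fixtureDir = sys_get_temp_dir() . '/my-tests';
        if (!is_dir($this->fixtureDir)) {
            mkdir($this->fixtureDir);
        }
    }

    protected function tearDown()
    {
        $this->fixtureDir = null;
    }

    // Custom assertion available to every test class in the project.
    public function assertValidEmail($value, $message = '')
    {
        $this->assertTrue(
            filter_var($value, FILTER_VALIDATE_EMAIL) !== false,
            $message !== '' ? $message : "'$value' is not a valid e-mail address"
        );
    }
}

class SignupFormTest extends My_TestCase
{
    public function testDefaultSenderAddressIsValid()
    {
        $this->assertValidEmail('noreply@example.com');
    }
}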
Finally, it guarantees that the test framework's methods aren't going to clash with the thing that they're testing. If the test framework just dumped its functions into the test and you wanted to test something that had a setup method... well you're in trouble.
That said, I hear your pain. A big test framework can be annoying and cumbersome and brittle. Perl doesn't use an xUnit style, it uses a procedural style with short test function names. See Test::More for an example. Behind the scenes it does just what you suggested, there's a singleton test instance object which all the functions use. There's also a hybrid procedural assert functions with OO test methods module called Test::Class which does the best of both worlds.
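Roughly what that looks like behind the scenes, as a toy PHP sketch of the singleton-plus-functions idea; none of these names come from Test::More or any real library.
class TestRun
{
    private static $instance;
    private $failures = array();

    public static function current()
    {
        if (self::$instance === null) {
            self::$instance = new self();
        }
        return self::$instance;
    }

    public function recordFailure($message)
    {
        $this->failures[] = $message;
    }

    public function report()
    {
        printf("%d failure(s)\n", count($this->failures));
        foreach ($this->failures as $failure) {
            echo $failure, "\n";
        }
    }
}

// The plain function the test author calls; it reports to the singleton.
function assert_equals($expected, $actual)
{
    if ($expected != $actual) {
        TestRun::current()->recordFailure(sprintf(
            'Expected %s, got %s',
            var_export($expected, true),
            var_export($actual, true)
        ));
    }
}

// Test code stays procedural:
$text = 'blah';
assert_equals('blah', $text);
TestRun::current()->report();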
Considering that PHP's syntax is so ugly for calling methods
I guess you don't like the ->. I suggest you learn to live with it. OO PHP is so much nicer than the alternative.
One good reason is that assertXXX, as a global function name, would have a high risk of naming clashes.
Another one is that it is derived from the xUnit family, which typically deals with object-oriented languages - Smalltalk initially. This makes it easier to relate to your "siblings" in e.g. Java and Ruby.
Not a direct answer, but as of PHPUnit 3.5, you do not have to write $this-> anymore. PHPUnit 3.5 added a function library for Assertions, which you have to include
require_once 'PHPUnit/Framework/Assert/Functions.php';
Then you can do
assertEquals('foo', $bar);
See Sebastian Bergmann's Blog post about it
http://sebastian-bergmann.de/archives/896-PHPUnit-3.5-Less-this-Required.html#content
Having the test cases in class methods saves work for PHPUnit. Without extra discovery logic, PHPUnit couldn't find or handle plain test functions. Likewise, only having to recognize the ->assert*() calls saves processing logic (for PHPUnit, not for the test case author). It's all syntactic salt that saves overhead from the PHPUnit/SimpleTest point of view.
It wouldn't be a technical problem to capture error/warning messages and exceptions, or to recognize PHP's native assert() statement. It's just not done, because a difficult API looks more enterprisey.
Is it possible to avoid mocking in PHPUnit?
My concept is to:
- write unit tests to test smaller parts
- when it comes to an integration/functional test, it's possible to inject a null object class using dependency injection (so it's good to have everything interfaced). But integration tests could be kept separate from unit tests.
The reason I don't like mocking is that I see it used in every test, and it's totally unreadable. I was thinking about separating unit testing from integration testing.
Could this be an alternative to mocking in most cases, or is mocking not replaceable at all? What's the best practice?
Of course, you don't have to use mocks. You need fakes:
A Fake avoids overspecifying a test for a very invasive contract, like the order of calls and precise parameters; a real implementation is more robust to changes in the contract.
This helps you focus on the essence of your test (because you don't put long mock expectations in your tests). So yes, it is possible to replace mocks entirely.
Consider the example in this article.
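To make the distinction concrete, here's a rough sketch (MailerInterface, FakeMailer and Signup are invented names, not taken from the linked article): a hand-written fake stands in for a scripted mock, and the test asserts on the outcome instead of on expected calls.
interface MailerInterface
{
    public function send($to, $body);
}

// Hand-written fake: a real, working implementation that simply records
// what happened instead of talking to an SMTP server.
class FakeMailer implements MailerInterface
{
    public $sent = array();

    public function send($to, $body)
    {
        $this->sent[] = array('to' => $to, 'body' => $body);
    }
}

class Signup
{
    private $mailer;

    public function __construct(MailerInterface $mailer)
    {
        $this->mailer = $mailer;
    }

    public function register($email)
    {
        // ... store the user somewhere ...
        $this->mailer->send($email, 'Welcome!');
    }
}

class SignupTest extends PHPUnit_Framework_TestCase
{
    public function testRegisteringSendsAWelcomeMail()
    {
        $mailer = new FakeMailer();
        $signup = new Signup($mailer);   // plain dependency injection

        $signup->register('alice@example.com');

        // Assert on the observable outcome, not on call order or
        // parameters scripted into a mock.
        $this->assertEquals(1, count($mailer->sent));
        $this->assertEquals('alice@example.com', $mailer->sent[0]['to']);
    }
}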
When one starts searching for PHP unit tests, one usually stumbles upon:
- PHPUnit.
- SimpleTest.
- A plethora of blogs explaining how to use PHPUnit and SimpleTest.
- StackOverflow questions about PHPUnit and SimpleTest...
...and I think you get the idea.
I was wondering: How does one go about unit testing with plain ol' PHP? Is that even a wise endeavor?
I suppose I would have to build my own little framework. I'm interested because I want to have a better understanding of what's going on in my unit tests. I'm also interested because I think a lightweight, tailored solution would run my tests faster.
Bonus question: Who's the underdog? Is there a third framework worth looking into (for academic purposes)?
Unit testing is basically a set of assertions.
Consider the following PHPUnit test case:
class MyTest extends PHPUnit_Framework_TestCase {
    public function testFoo() {
        $obj = new My;
        $this->assertEquals('bar', $obj->foo());
    }
}
You can have a similar test case without using PHPUnit:
class MyTest {
    public function testFoo() {
        $obj = new My;
        assert($obj->foo() == 'bar');
    }
}
However, without a framework you have to create an instance of the test case (MyTest) yourself and call each test method by hand (MyTest::testFoo, etc.).
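A bare-bones sketch of that manual work (run_tests is a made-up helper, not part of any framework): use reflection to find every public method whose name starts with "test" and call it, counting an uncaught exception as a failure.
function run_tests($testClassName)
{
    $passed = 0;
    $failed = 0;
    $reflection = new ReflectionClass($testClassName);

    foreach ($reflection->getMethods(ReflectionMethod::IS_PUBLIC) as $method) {
        if (strpos($method->getName(), 'test') !== 0) {
            continue;   // only run test* methods
        }
        $testCase = new $testClassName();
        try {
            $method->invoke($testCase);
            $passed++;
        } catch (Exception $e) {
            $failed++;
            echo $testClassName . '::' . $method->getName()
               . ' failed: ' . $e->getMessage() . "\n";
        }
    }

    echo "$passed passed, $failed failed\n";
}

run_tests('MyTest');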
The framework (e.g. PHPUnit) is nothing more than a set of "helpers" to make this easier and faster: automatic skeleton generation, built-in mock objects, command-line scripts, etc.
You can still unit test without a framework, but in the end, you'll probably save more time using one, because after all, that's generally what frameworks are for.
My view would be: what's so wrong with the standard wheel (PHPUnit, in all probability) that it warrants developing a replacement? After all, isn't a proven solution the wiser choice in this instance?
Additionally, the output of the standard tools will be easier to use if you ever want to take advantage of a Continuous Integration server further down the line. (You could of course make your own tool output in an identical format, but that leads me back to the "why re-invent the wheel" argument.)
I used SimpleTest about a year ago and it's very lightweight. There really isn't much to it beyond a way to auto-load test packages and assert.
Essentially it included all classes starting with "test" and it would call all methods starting with "test" inside that class. Inside the methods you could assert the results are what you expected.
If you were to build your own framework, I would strongly recommend looking into SimpleTest first. It's very lightweight compared to PHPUnit (if that's what scares you).
You can write tests in any language without a framework, but the code you write that you share in your tests will end up being a kind of mini-framework anyhow.
A good unit testing framework will have the common stuff that you need to make testing easier, such as assertions, mocks, stubs, exception handling and code coverage.
Bonus question: Who's the underdog? Is there a third framework worth looking into (for academic purposes)?
There are underdogs in the PHP unit testing space, for example Enhance PHP, which is a unit-testing framework with mocks and stubs built in. It has some advantages over the big players - it has good documentation and is very quick to get started on.
I've never been able to figure this out. If your language doesn't type-check, what benefits do interfaces provide you?
Interfaces cause your program to fail earlier and more predictably when a subclass "forgets" to implement some abstract method in its parent class.
In PHP's traditional OOP, you had to rely on something like the following to issue a run-time error:
class Base_interface {
    function implement_me() { assert(false); }
}

class Child extends Base_interface {
}
With an interface, you get immediate feedback when one of your interface's subclasses doesn't implement such a method, at the time the subclass is declared rather than later during its use.
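For comparison, a short sketch with a real interface: the mistake now surfaces when Child is loaded, not when implement_me() is finally called.
interface BaseInterface
{
    public function implementMe();
}

// PHP refuses to load this class with a fatal error along the lines of
// "Class Child contains 1 abstract method and must therefore be declared
// abstract or implement the remaining methods".
class Child implements BaseInterface
{
}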
Taken from this link (sums it up nicely):
- Interfaces allow you to define/create a common structure for your classes - to set a standard for objects.
- Interfaces solve the problem of single inheritance - they allow you to inject 'qualities' from multiple sources.
- Interfaces provide a flexible base/root structure that you don't get with classes.
- Interfaces are great when you have multiple coders working on a project; you can set up a loose structure for programmers to follow and let them worry about the details.
I personally find interfaces a neat solution when building a data access layer that has to support multiple DBMSs. Each DBMS implementation must implement the global DataAccess interface, with functions like Query, FetchAssoc, FetchRow, NumRows, TransactionStart, TransactionCommit, TransactionRollback, etc. So when you're expanding your data access possibilities, you're forced to stick to a generic, predefined function schema, and your application won't break at some point because you figured the function Query should now be named execQuery.
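A sketch of that interface (the method names are the ones listed above; everything else is illustrative):
interface DataAccess
{
    public function Query($sql);
    public function FetchAssoc($result);
    public function FetchRow($result);
    public function NumRows($result);
    public function TransactionStart();
    public function TransactionCommit();
    public function TransactionRollback();
}

// Each DBMS backend (MySQL, PostgreSQL, ...) then declares
// "class MySqlDataAccess implements DataAccess" and is forced to provide
// exactly this set of methods, so the application can swap backends
// without breaking.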
Interfacing helps you develop in the bigger picture :)
Types serve three distinct functions:
- design
- documentation
- actual type checking
The first two don't require any form of type checking at all. So, even if PHP did no checking of interfaces, they would still be useful just for those two reasons.
I, for example, always think about my interfaces when I'm doing Ruby, despite the fact that Ruby doesn't have interfaces. And I often wish I could have some way of recording those design decisions in the source code.
On the other hand, I have seen plenty of Java code that used interfaces, but clearly the author never thought about them. In fact, in one case, one could see from the indentation, whitespace and some leftover comments in the interface that the author had actually just copied and pasted the class definition and deleted all method bodies.
Now to the third point: PHP actually does type check interfaces. Just because it type checks them at runtime doesn't mean it doesn't type check them at all.
And, in fact, it doesn't even check them at runtime, it checks them at load time, which happens before runtime. And isn't "type checking doesn't happen at runtime but before that" pretty much the very definition of static type checking?
You get errors if you haven't added the required methods with the exact same signature.
Interfaces are often used with unit testing (test-driven design).
Using them also gives you more stable code.
Interfaces are also used to support iterators (e.g. support for foreach on objects) and comparators.
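A small sketch of that (NumberBag is an invented class): implementing the built-in Iterator interface makes an object usable with foreach.
class NumberBag implements Iterator
{
    private $items = array(1, 2, 3);
    private $pos = 0;

    public function current() { return $this->items[$this->pos]; }
    public function key()     { return $this->pos; }
    public function next()    { $this->pos++; }
    public function rewind()  { $this->pos = 0; }
    public function valid()   { return isset($this->items[$this->pos]); }
}

foreach (new NumberBag() as $n) {
    echo $n, "\n";   // prints 1, 2, 3
}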
It may be weakly typed, but there is type hinting for methods: function myFunc(MyInterface $interface)
Also, interfaces do help with testing and decoupling code.
Type hinting in function/method signatures allows you to have much more control over the way a class interfaces with its environment.
If you just hope that users of your class will only pass the correct objects as method parameters, you'll probably run into trouble. To prevent this, you'd have to implement complicated checks and filters that would just bloat your code and lower its performance.
Type hinting gives you a tool to ensure compatibility without any bloated, hand written checks. It also allows your classes to tell the world what they can do and where they'll fit in.
Especially in complex frameworks like the Zend Framework, interfaces make your life much easier, because they tell you what to expect from a class and because you know which methods to implement to be compatible with something.
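A minimal sketch of that (Logger and FileLogger are invented names): the type hint rejects incompatible arguments up front, with no hand-written checks.
interface Logger
{
    public function log($message);
}

class FileLogger implements Logger
{
    public function log($message)
    {
        file_put_contents('app.log', $message . "\n", FILE_APPEND);
    }
}

function runJob(Logger $logger)
{
    $logger->log('job started');   // no instanceof checks needed in here
}

runJob(new FileLogger());     // fine
// runJob(new stdClass());    // rejected by PHP: argument must implement Logger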
In my opinion, there's no point, no need and no sense. Things like interfaces, visibility modifiers or type hints are designed to enforce program "correctness" (in some sense) without actually running it. Since this is not possible in a dynamic language like php, these constructs are essentially useless. The only reason why they were added to php is make it look more like java, thus making the language more attractive for the "enterprise" market.
Forgot to add: uncommented downvoting sucks. ;//
In static languages like Java you need interfaces because otherwise the type system just won't let you do certain things. But in dynamic languages like PHP and Python you just take advantage of duck typing.
PHP supports interfaces. Ruby and Python don't have them. So you can clearly live happily without them.
I've been mostly doing my work in PHP and have never really made use of the ability to define interfaces. When I need a set of classes to implement a certain common interface, I just describe it in documentation.
So, what do you think? Aren't you better off without using interfaces in dynamic languages at all?
I think of it more as a level of convenience. If you have a function which takes a "file-like" object and only calls a read() method on it, then it's inconvenient - even limiting - to force the user to implement some sort of File interface. It's just as easy to check if the object has a read method.
But if your function expects a large set of methods, it's easier to check if the object supports an interface than to check for support of each individual method.
Yes, there is a point
If you don't explicitly use interfaces, your code still uses the object as though it implemented certain methods; it's just unclear what that unspoken interface is.
If you define a function to accept an interface (in PHP say) then it'll fail earlier, and the problem will be with the caller not with the method doing the work. Generally failing earlier is a good rule of thumb to follow.
Interfaces actually add some degree of dynamic lang-like flexibility to static languages that have them, like Java. They offer a way to query an object for which contracts it implements at runtime.
That concept ports well into dynamic languages. Depending on your definition of the word "dynamic", of course, that even includes Objective-C, which makes use of Protocols pretty extensively in Cocoa.
In Ruby you can ask whether an object responds to a given method name. But that's a pretty weak guarantee that it's going to do what you want, especially given how few words get used over and over, that the full method signature isn't taken into account, etc.
In Ruby I might ask
object.respond_to? :sync
So, yeah, it has a method named "sync", whatever that means.
In Objective-C I might ask something similar, i.e. "does this look/walk/quack like something that synchronizes?":
[myObject respondsToSelector:@selector(sync)]
Even better, at the cost of some verbosity, I can ask something more specific, i.e. "does this look/walk/quack like something that synchronizes to MobileMe?":
[myObject respondsToSelector:@selector(sync:withMobileMeAccount:)]
That's duck typing down to the species level.
But to really ask an object whether it is promising to implement synchronization to MobileMe...
[receiver conformsToProtocol:@protocol(MobileMeSynchronization)]
Of course, you could implement protocols by just checking for the presence of a series of selectors that you consider the definition of a protocol/duck, provided they are specific enough. At that point the protocol is just an abbreviation for a big hunk of ugly responds_to? queries, plus some very useful syntactic sugar for the compiler/IDE to use.
Interfaces/protocols are another dimension of object metadata that can be used to implement dynamic behavior in the handling of those objects. In Java the compiler just happens to demand that sort of thing for normal method invocation. But even dynamic languages like Ruby, Python, Perl, etc. implement a notion of type that goes beyond just "what methods an object responds to". Hence the class keyword. Javascript is the only really commonly used language without that concept. If you've got classes, then interfaces make sense, too.
It's admittedly more useful for more complicated libraries or class hierarchies than in most application code, but I think the concept is useful in any language.
Also, somebody else mentioned mixins. Ruby mixins are a way to share code -- e.g., they relate to the implementation of a class. Interfaces/protocols are about the interface of a class or object. They can actually complement each other. You might have an interface which specifies a behavior, and one or more mixins which help an object to implement that behavior.
Of course, I can't think of any languages which really have both as distinct first-class language features. In those with mixins, including the mixin usually implies the interface it implements.
If you do not have high security constraints (so nobody will access your data in a way you don't want) and you have good documentation or well-trained coders (so they don't need the interpreter/compiler to tell them what to do), then no, it's useless.
For most medium size projects, duck typing is all you need.
I was under the impression that Python doesn't have interfaces. As far as I'm aware in Python you can't enforce a method to be implemented at compilation time precisely because it is a dynamic language.
There are interface libraries for Python but I haven't used any of them.
Python also has mixins, so you could create an interface class by defining a mixin and having pass for every method implementation, but that's not really giving you much value.
I think use of interfaces is determined more by how many people will be using your library. If it's just you, or a small team then documentation and convention will be fine and requiring interfaces will be an impediment. If it's a public library then interfaces are much more useful because they constrain people to provide the right methods rather than just hint. So interfaces are definitely a valuable feature for writing public libraries and I suppose that lack (or at least de-emphasis) is one of the many reasons why dynamic languages are used more for apps and strongly-typed languages are used for big libraries.
Rene, please read my answer to "Best Practices for Architecting Large Systems in a Dynamic Language" question here on StackOverflow. I discuss some benefits of giving away the freedom of dynamic languages to save development effort and to ease introducing new programmers to the project. Interfaces, when used properly, greatly contribute to writing reliable software.
In a language like PHP where a method call that doesn't exist results in a fatal error and takes the whole application down, then yes interfaces make sense.
In a language like Python where you can catch and handle invalid method calls, it doesn't.
Python 3000 will have Abstract Base Classes. Well worth a read.
One use of the Java "interface" is to allow strongly-typed mixins in Java. You mix the proper superclass, plus any additional methods implemented to support the interface.
Python has multiple inheritance, so it doesn't really need the interface contrivance to allow methods from multiple superclasses.
I, however, like some of the benefits of strong typing -- primarily, I'm a fan of early error detection. I try to use an "interface-like" abstract superclass definition.
class InterfaceLikeThing( object ):
    def __init__( self, arg ):
        self.attr= None
        self.otherAttr= arg
    def aMethod( self ):
        raise NotImplementedError
    def anotherMethod( self ):
        return NotImplemented
This formalizes the interface -- in a way. It doesn't provide absolute evidence for a subclass matching the expectations. However, if a subclass fails to implement a required method, my unit tests will fail with an obvious NotImplemented return value or NotImplementedError exception.
Well, first of all, it's true that Ruby does not have interfaces as such, but it has mixins, which somehow take the best of both interfaces and abstract classes from other languages.
The main goal of interface is to ensure that your object SHALL implement ALL the methods present in the interface itself.
Of course, interfaces are never mandatory; even in Java you could imagine working only with classes and using reflection to call methods when you don't know which kind of object you're manipulating, but that is error-prone and should be discouraged in many ways.
Well, it would certainly be easier to check whether a given object supports an entire interface, instead of merely hoping it doesn't crash when you call the one or two methods you use at first, for instance to add an object to an internal list.
Duck typing has some of the benefits of interfaces, that is, ease of use everywhere, but the detection mechanism is still missing.
It's like saying you don't need explicit types in a dynamically-typed language. Why don't you make everything a "var" and document their types elsewhere?
It's a restriction imposed on a programmer, by a programmer. It makes it harder for you to shoot yourself in the foot; gives you less room for error.
As a PHP programmer, the way I see it, an interface is basically used as a contract. It lets you say that everything which uses this interface MUST implement a given set of functions.
I dunno if that's all that useful, but I found it a bit of a stumbling block when trying to understand what Interfaces were all about.
If you felt you had to, you could implement a kind of interface with a function that compares an object's methods/attributes to a given signature. Here's a very basic example:
file_interface = ('read', 'readline', 'seek')

class InterfaceException(Exception): pass

def implements_interface(obj, interface):
    d = dir(obj)
    for item in interface:
        if item not in d: raise InterfaceException("%s not implemented." % item)
    return True
>>> import StringIO
>>> s = StringIO.StringIO()
>>> implements_interface(s, file_interface)
True
>>>
>>> fp = open('/tmp/123456.temp', 'a')
>>> implements_interface(fp, file_interface)
True
>>> fp.close()
>>>
>>> d = {}
>>> implements_interface(d, file_interface)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in implements_interface
__main__.InterfaceException: read not implemented.
Of course, that doesn't guarantee very much.
In addition to the other answers I just want to point out that Javascript has an instanceof keyword that will return true if the given instance is anywhere in a given object's prototype chain.
This means that if you use your "interface object" in the prototype chain for your "implementation objects" (both are just plain objects to JS) then you can use instanceof to determine if it "implements" it. This does not help the enforcement aspect, but it does help in the polymorphism aspect - which is one common use for interfaces.
MDN instanceof Reference
Stop trying to write Java in a dynamic language.
I am wondering if there's a design pattern where every class has public static methods that you can just use anywhere in the code.
Is that kind of pattern considered safe if followed by some kind of known standard?
Personally I use dependency injection and namespaces; however, I started a new job where all their code is a static fiesta with require, and I don't like it.
So I am looking for solid information about whether I should keep working their way or convince them to move to a different approach.
If all the code lives in static methods, and you're not using getters/setters, it's almost certainly a poor design.
The basic intention of object oriented design - as opposed to procedural design - is that you distribute both data and behaviour to objects. This means that you can always validate that a "car" object is valid before invoking methods, and the logic for doing this is consistent and in one place.
With static methods, the data is effectively managed by the calling application code, not the objects. This means that very quickly, the understanding of a valid state of the application is distributed to many different places.
The benefits are that this is much more familiar to old-school PHP programmers, and for simple applications, it can be perfectly sufficient.
The downside is that you're losing most of the benefit of object orientation, especially the "everything lives in one place" benefits of extensibility and maintainability.
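A rough sketch of the contrast (all names invented). In the static/procedural style the data is just an array owned by the caller, so its validity has to be re-checked at every call site; in the object-oriented style the data and the rules about it live together.
class CarUtil
{
    public static function drive(array $carData)
    {
        // Every static method that receives $carData has to repeat this check.
        if (empty($carData['wheels']) || $carData['wheels'] < 4) {
            throw new InvalidArgumentException('not a valid car');
        }
        // ... drive ...
    }
}

// OO style: a Car instance can only exist in a valid state.
class Car
{
    private $wheels;

    public function __construct($wheels)
    {
        if ($wheels < 4) {
            throw new InvalidArgumentException('not a valid car');
        }
        $this->wheels = $wheels;
    }

    public function drive()
    {
        // no re-validation needed here
    }
}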
It's not common, but I can imagine someone doing that with a functional paradigm in mind, where classes are only used for grouping related functions together.
However, if any of these classes is instantiated, has state, and is used globally, then it is a bad approach because of side effects, untestability, coupling, etc.