testing legacy code with phpunit - php

I have a legacy codebase and I need to test that code with PHPUnit. So I am asking for suggestions based on your experience. Which classes should I test first, or give priority to?
Should I start with the easy/small classes or with the base/super class?

My general suggestion for introducing unit testing to an existing codebase would be the following:
Start off by testing some very simple classes to get a feel for writing tests
Maybe even rewrite those tests again and make proper "this is how we should do it" examples out of them (see the sketch after this list)
Take one of the biggest and meanest classes in the whole system and get that class tested as well as you can. This is an important step to show everyone on your team (and maybe management) that unit testing your codebase CAN WORK OUT and is doable.
After that I'd suggest that you focus on three things:
Make sure new code gets tests
If you fix a bug create a test before fixing it to "prove" the bug is actually fixed
Use tests as a tool when you touch/change old code so you get better and better test coverage.
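For illustration, one of those early "this is how we should do it" example tests might look like this (Money is a made-up class, and I'm assuming an older PHPUnit where tests extend PHPUnit_Framework_TestCase):

<?php
// Hypothetical small value class used only to illustrate the testing style.
class MoneyTest extends PHPUnit_Framework_TestCase
{
    public function testAddingTwoAmountsReturnsTheSum()
    {
        $five = new Money(500, 'EUR');
        $ten  = $five->add(new Money(500, 'EUR'));

        $this->assertSame(1000, $ten->getAmount());
        $this->assertSame('EUR', $ten->getCurrency());
    }

    public function testAddingDifferentCurrenciesIsRejected()
    {
        $this->setExpectedException('InvalidArgumentException');
        $five = new Money(500, 'EUR');
        $five->add(new Money(500, 'USD'));
    }
}

Small, readable tests like these make better templates for the rest of the team than the first clumsy attempts.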
PHPUnit will provide you with a CodeCoverage Report showing you how well your codebase is tested. It can be pretty cool to see the number rise from 0.3% to 5% to 20% over the course of months, but on its own it's not a really strong motivator.
To make sure you test NEW code I'd suggest using PHP_Change_Coverage as described in this blog posting
This tool will help you a lot in generating meaningful coverage reports, as it only shows NEWLY CREATED CODE as UNTESTED and not all the old stuff you have lying around.
With that you have something at hand that makes it really easy to "get a high % very early and keep testing the new stuff" while you create tests for everything old.
(Screenshots of the coverage report before and after PHP_Change_Coverage omitted.)

There's often too much code in a system to test it all as a first step. But most of that code already works.
I'd start with methods that were modified recently. Presumably most of the rest of the software works to some degree, and testing that won't likely find as many errors as will be found in new or newly revised code.
Should you run out of work (I doubt it in the near future if you have 1 or more developers actively working near you), you can move up to methods that use the methods that were modified, to methods that have high complexity according to software metrics, and to methods that are critical for safe system operation (login with password, storage of customer charge data, etc.)
One way to help decide what to consider testing next is to use a test coverage tool. Normally one uses this to determine how well tested the software is, but if you don't have many tests you already know what it will tell you: your code isn't very well tested :-{ So there's no point in running it early in your test construction process. (As you get more tests, you and your managers will eventually want to know this.) However, test coverage tools also tend to provide complete lists of code that has been exercised or not as part of your tests, and that provides a clue as to what you should test next: code that has not been exercised.
Our SD PHP Test Coverage tool works with PHP and will provide this information, both through an interactive viewer and as a generated report. It will tell you which methods, classes, files, and subsystems (by directory hierarchy) have been tested and to what degree. If the file named "login.php" hasn't been tested, you'll be able to easily see that. And that explicit view makes it much easier to intelligently decide what to test next than simply guessing based on what you might know about the code.

Have a look at PHPure.
Their software records the inputs and outputs from a working PHP website, and then automatically writes PHPUnit tests for functions that do not access external sources (db, files, etc.).
It's not a 100% solution, but it's a great start.

Related

Unit testing legacy php application — how to prevent unexpected database calls

I'm adding unit tests to a legacy PHP application that uses a MySQL compatible database. I want to write genuine unit tests that don't touch the database.
How should I avoid accidentally using a database connection?
Lots of parts of the application use static method calls to get a reference to a database connection wrapper object. If I'm looking at one of these calls I know how to use dependency injection and test doubles to avoid hitting the database from a test, but I'm not sure what to do about all the database queries that I'm not looking at at any one time, which could be some way down the call stack from the method I'm trying to test.
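For the calls I can see directly, the kind of refactoring I have in mind looks roughly like this (UserService, FakeDb and queryRow are placeholder names, not the real classes in our code):

<?php
// The wrapper is passed in instead of being fetched via a static call,
// so a test can hand over a hand-written fake instead of the real thing.
class UserService
{
    private $db;

    public function __construct($db)
    {
        $this->db = $db;
    }

    public function fetchName($id)
    {
        $row = $this->db->queryRow('SELECT name FROM users WHERE id = ?', array($id));
        return $row['name'];
    }
}

// Test double that never touches MySQL.
class FakeDb
{
    public function queryRow($sql, array $params)
    {
        return array('name' => 'Alice');   // canned result instead of a real query
    }
}

class UserServiceTest extends PHPUnit_Framework_TestCase
{
    public function testFetchNameReadsFromTheWrapper()
    {
        $service = new UserService(new FakeDb());
        $this->assertSame('Alice', $service->fetchName(42));
    }
}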
I've considered adding a public method to the database access class that would be called from the PHPUnit bootstrap file and set a static variable to make any further database access impossible, but I'm not keen on adding a function to the application code purely for the sake of the tests that would be harmful if called in production.
Adding tests to a legacy application can be delicate, especially unit tests. The main problem you will likely have is that most tests will be hard to write and will easily become unintelligible, because they involve a massive amount of setting up and mocking. At the same time you will likely not have much freedom to refactor the classes so that they become easier to test, because that would lead to ripple effects in the code base.
That's why I usually prefer end-to-end tests. You can cover lots of ground without having to test close to the implementation, and those tests are usually more useful when you want to do large-scale refactoring or migrate the legacy code base later, because you ensure that the most important features you were using still work as expected.
For this approach you will need to test through the database, just not the live database. In the beginning it's probably easiest to just make a copy, but it's absolutely worthwhile to create a trimmed-down database with some test fixtures from scratch. You can then use something like Selenium to test your application through the web interface by describing the actions you take on the site, like "go to url x, fill out a form and submit it", and describing the expected outcome, like "I should be on url y now and there should be a new entry in the database". As you can see, these kinds of tests are written very close to what you see on the website and not so much around the implementation or the single units. This is actually intended, because in a migration you might want to rip out large chunks and rewrite them. The unit tests will become completely useless then, because the implementation might change drastically, but those end-to-end tests describing the functionality of the site will still remain valid.
There are multiple ways you can go about this. If you are familiar with PHPUnit you might want to try the Selenium extension. You should find tutorials for this online, for example this one: https://www.sitepoint.com/using-selenium-with-phpunit/
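A first test with the extension might look roughly like this (element IDs, URLs and the expected text are placeholders for your own application, and it assumes a running Selenium server plus a test copy of the site):

<?php
// Rough end-to-end sketch using the phpunit-selenium extension.
class LoginJourneyTest extends PHPUnit_Extensions_Selenium2TestCase
{
    protected function setUp()
    {
        $this->setBrowser('firefox');
        $this->setBrowserUrl('http://localhost/');   // base URL of the test copy, never production
    }

    public function testUserCanLogInAndSeeTheDashboard()
    {
        $this->url('/login.php');
        $this->byId('username')->value('testuser');
        $this->byId('password')->value('secret');
        $this->byId('submit')->click();

        // Assert on what the user sees, not on implementation details.
        $this->assertContains('Dashboard', $this->byCssSelector('body')->text());
    }
}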
Another popular option for these kinds of tests is Behat with the MinkExtension. In both cases the hardest part is setting up Selenium, but once you are able to write a simple test that, for example, goes to your front page and checks for some text snippet, and get that running, you can write tests really fast.
One big downside of these tests is that they are very slow, because they do full web requests and in some cases have to wait for JavaScript. So you should probably not test everything. Instead try to focus on the most important features. If you have some e-commerce project, maybe go through a very generic checkout procedure. Then expand on different variations that are important to you, e.g. logged-in user vs. new user, or adding vouchers to the basket. Another good way to start is to write very stupid tests that just check whether your URLs are actually accessible: go to a URL and check for the status code and some expected text snippet. Those are not really that useful in terms of making sure your application behaves correctly, but they still give you some safety as to whether some random 500 errors appear out of the blue.
This is probably the best approach for making sure your app works well and for making it easier to upgrade, refactor or migrate your application or parts of it. Additionally, whenever you add new features, try to write some actual unit tests for them. It's probably easiest if they are not too connected with the old parts of the code. In the best-case scenario you won't have to worry too much about the database, because you can replace the data you get from the database with some instances you prepare yourself in the test, and then just test whatever feature. Obviously, if it's something simple like "we want a form that adds this data to the database", you will probably still not want to write a unit test, but rather one of those bigger end-to-end tests.

retroactive phpUnit testing: is it worth it?

We have about seven different websites that we have developed in-house. They're sites that track different HR applications and help some of our people get their jobs done via scheduling. Today, the head software designer told me to start writing test cases using phpUnit for our existing code. Our main website has at least a million lines of code and the other websites are all offshoots of that, probably in the tens of thousands to hundreds of thousands of lines.
None of this code was written through any apparent design method. Is it worth it for us to actually go back through all of the code and apply phpUnit testing to it? I feel that if we wanted to do this, we probably should have been doing it from the start. Also, if we do decide to start doing this unit testing, shouldn't we adopt TDD from here on? I know for a fact that it will not be a priority.
tl;dr: I've been told to write post-rollout test cases for existing code, but I know that the existing code and future code has not been/will not be created with the principles of TDD in mind. Is it worth it? Is it feasible?
Do you still change the code? Then you will benefit from writing tests.
The harder question is: What to test first? One possible answer would be: Test the code that you are about to change, before you change it.
You cannot test all your code years after it has been written. This will likely cost too much time and money for the generated benefit your company would gain.
Also, it is very hard to "just start with PHPUnit"; the benefits of such an approach are probably not that great for the first few months or years, because you'd still be testing very small units of code without being able to also check whether the whole system works. You should also start with end-to-end tests that use a virtual browser to "click" through your pages and expect some text to be displayed. Search keywords would be "Selenium" or "Behat".
I know that the existing code and future code has not been/will not be created with the principles of TDD in mind
What happens in the future is a question of the attitude of all the developers. For existing code without tests, tests will likely never be written. New code, written by unwilling developers, will also not be tested. But willing developers can make a difference.
Note that this takes more than just the lead software designer telling the team to start testing. You'd have to be coached and would need proper professional help if you haven't done it before, and you need infrastructure that constantly runs these tests to ensure they still work. Setting this all up means quite an effort, and if the lead software designer or any higher boss is ready to spend time and money on this, you should consider yourself very happy, because you can learn something and build more reliable software.
If not, that approach will likely turn out to fail because of passive aggressive denial.
Automated tests are an extremely good idea for when you want to refactor or rewrite code. If you have a PHPUnit test for a function or class you can rewrite the code then confirm using the test that the code still functions the same as it did before. Otherwise you can end up breaking other parts of your code when you refactor or rewrite stuff.
You probably won't be able to test the code in its current form. It's not just a matter of throwing tests at some poorly written code, because that code probably is not testable.
To be able to write the tests you will probably need to refactor the code. Depending on the quality of the code, that could mean rewriting it entirely.
I'd recommend testing only the new additions to the code. So if you need a new method in a class, or a new function, or whatever, it must be tested. Personally I prefer TDD, but for people new to testing it can be too much to make them think about a test even before having refactored the code, so you can stick to testing afterwards.
For testing the new additions, you'll need to do some refactoring of the existing code. For example, if you add a method that logs info, the logger should be injected to allow stubbing it, so you should write some kind of container. That does not mean that everywhere the logger is used you have to inject it, just in the places where you're adding or changing something that must be tested. This can become quite tricky and may require someone well versed in testing to assist.
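A sketch of what that might look like (ReportGenerator, the Logger interface and NullLogger are invented names; PHPUnit's getMockBuilder() would work just as well as the hand-written stub):

<?php
// Whatever the legacy logger looks like, an interface gives us a seam.
interface Logger
{
    public function info($message);
}

// Hand-written stub used by the tests; it simply swallows log calls.
class NullLogger implements Logger
{
    public function info($message)
    {
    }
}

class ReportGenerator
{
    private $logger;

    // The logger is injected, so tests are independent of real log files.
    public function __construct(Logger $logger)
    {
        $this->logger = $logger;
    }

    public function summarize(array $rows)
    {
        $this->logger->info('Summarizing ' . count($rows) . ' rows');
        return array_sum(array_map('intval', $rows));
    }
}

class ReportGeneratorTest extends PHPUnit_Framework_TestCase
{
    public function testSummarizeAddsUpTheRows()
    {
        $generator = new ReportGenerator(new NullLogger());
        $this->assertSame(6, $generator->summarize(array('1', '2', '3')));
    }
}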
In short, my recommendation is:
test only the new additions and the new code
refactor the code to be sure that the code under test is really testable
see if the tests give confidence and are worth the effort and keep going

Refactoring code to PSR standards and making the code testable in Laravel 4

When I started making a mobile app (that uses Laravel on the server) I decided not to dig into testing or coding standards, as I thought it was better to actually get a working app first.
A few months later I have a working app, but my code doesn't follow any standards, nor have I written any tests.
The current directory structure I'm using is:
app/controllers : Contains all the controllers the app uses. The controllers aren't exactly thin, they contain most of the logic and some of them even have multiple conditional statements (if/else). All the database interactions occur in the controllers.
app/models : Define relationships with other models and contain certain functions relevant to the particular model, e.g. a validate function.
app/libraries : contain custom helper functions.
app/database : contains the migrations and seeds.
My app is currently working, and the reason for the slack is probably that I work alone on the app.
My concerns:
Should I go ahead and release the app and then see if it's even worth making the effort to refactor, or should I refactor first?
I do wish to refactor the code, but I'm unsure as to what approach I should take. Should I first get the standards right and then make my code testable? Or should I not worry about standards (and continue using classmap to autoload) and just try to make my code testable?
How should I structure my files?
Where should I place interfaces, abstract classes, etc.?
Note: I am digging into testing and coding standards from whatever resources I can find, but if you guys could point me to some resources I would appreciate it.
Oh dear. The classic definition of legacy code is code with no unit tests. So now you are in that unfortunate position where, if you ever have to amend/enhance/reuse this code base, you are going to have to absorb a significant cost. The further away from your current implementation you get, the higher that cost is going to get. That's called technical debt. Having testable code doesn't get rid of it though, it simply reduces the interest rate.
Should you do it? Well, if you release now and it achieves some popularity, you will therefore need to do more to it...
If it's going to be received with total indifference or screams of dismay, then there's no point in releasing it at all, in fact that would be counter-productive.
Should you choose to carry on with the code base, I can't see any efficient way to re-factor for a coding standard without a unit test to prove you haven't broken the app in doing so. If you find a coding standard where testable and tested aren't a key part of it, ignore it, it's drivel.
Of course you still have the problem of how you change code to make it testable without breaking it. I suspect you can iterate towards that with, say, delegation in your fat controllers without too much difficulty.
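A sketch of that delegation (OrderTotals, OrdersController, the Order model and the 20% tax rate are all invented for illustration; View::make and BaseController are the usual Laravel 4 conventions):

<?php
// New, plain class extracted from a hypothetical fat controller. It has no
// framework dependencies, so it can be unit tested in isolation.
class OrderTotals
{
    public function totalWithTax(array $lineItems, $taxRate)
    {
        $net = 0;
        foreach ($lineItems as $item) {
            $net += $item['price'] * $item['qty'];
        }
        return round($net * (1 + $taxRate), 2);
    }
}

// The controller keeps its route and just delegates, so behaviour stays
// the same while the calculation becomes testable.
class OrdersController extends BaseController
{
    public function show($id)
    {
        $order  = Order::findOrFail($id);   // existing Eloquent model, name invented
        $totals = new OrderTotals();
        $total  = $totals->totalWithTax($order->items->toArray(), 0.20);

        return View::make('orders.show', compact('order', 'total'));
    }
}

class OrderTotalsTest extends PHPUnit_Framework_TestCase
{
    public function testTotalIncludesTax()
    {
        $totals = new OrderTotals();
        $items  = array(array('price' => 1000, 'qty' => 2));   // prices in cents

        $this->assertEquals(2400, $totals->totalWithTax($items, 0.20));
    }
}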
Two key points.
Have at least some test first, even if it's a wholly manual one, say a printout of the web page before you do anything. A set of integration tests, perhaps using an automation suite would be very useful, especially if you already have a coupling problem.
The other is: don't make too many changes at once, where "too many" is measured by how many operations in the app will be affected by the code you're changing.
Michael Feathers' (no affiliation whatsoever) Working Effectively with Legacy Code would be a good purchase if you want to learn more.
Hopefully this lesson has already been learnt. Don't do this again.
Last but not least doing this sort of exercise is a great learning experience, rest assured you will run into situations (lots of them) where this perennial mistake has been made.

Testing Legacy PHP Spaghetti Code?

I inherited a fairly large, homemade, php4+MySQL, ecommerce project from developers that literally taught themselves programming and html as they wrote it. (I would shudder except that it's really impressive that they were able to do so much by starting from scratch.) My job is to maintain it and take it forward with new functionality.
Functionality of the code depends on $_SESSION data and other global state structures, which then affect the flow of the code and which parts of the site are displayed through require statements. When I took it over last year, my first task was abstracting all of the repetition into separate files which get included via require statements and also removing most of the 'logic' code from the 'display' or output code, but I couldn't remove it all. I have moved code into functions where I can, but that's still quite limited. Classes and methods are definitely out of the question right now.
All testing is being done manually/visually. I would like to start automating some testing, but I simply don't know where to start. Unit testing of functions is pretty straightforward, but very little of the code is in functions and most of that is pretty simple. I've looked at phpUnit and DbUnit, but all of the examples and discussion about them focus on classes and methods.
So, what options do I have to begin implementing unit testing on anything more than the most trivial parts of my project?
First off, PHPUnit can be used to test procedural code just fine. Don't let the fact that the PHPUnit examples only show classes deter you. It's just how PHPUnit tests are organized.
You can just write test classes and test your functions from them without any problems; that should be your smallest problem :)
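For example (calculate_shipping() and lib/shipping.php stand in for any existing procedural code; the exception in the second test is just an assumption about how such a function might behave):

<?php
require_once 'lib/shipping.php';   // hypothetical file that defines calculate_shipping()

class ShippingFunctionTest extends PHPUnit_Framework_TestCase
{
    public function testHeavyParcelsCostMoreThanLightOnes()
    {
        $this->assertGreaterThan(
            calculate_shipping(1.0),    // 1 kg
            calculate_shipping(10.0)    // 10 kg should be strictly more expensive
        );
    }

    public function testNegativeWeightIsRejected()
    {
        $this->setExpectedException('InvalidArgumentException');
        calculate_shipping(-5);
    }
}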
If the code doesn't run on PHP 5.2+ then you can't use a current PHPUnit version, which is definitely more of a concern, so my first general recommendation is to find any issues a PHP 5 upgrade might bring.
To start off, one book recommendation to save you some trouble:
Working Effectively with Legacy Code
The book will help you avoid a lot of small mistakes you'd otherwise have to make yourself, and it will get you in the right mindset. It's Java-based, but that is not really an issue as most of the advice is easily adaptable.
Testing is hard because you don't even know what the application is supposed to do in the first place
Getting unit tests up and running takes quite some time and doesn't give you an "is it still working?" status, so my first point would be to get some integration and front-end tests set up.
Tools like Selenium and the web testing parts of Behat can help you A LOT with this.
The advantage of using Behat would be that you can write nice documentation for what the product is actually supposed to do. No matter how the project goes along, these docs will always have value for you.
The tests read something like: "When I go to this url and enter that data a user should be created and when I click there I get an email containing my data export". Check it out. It might be useful.
The most important thing is to get a quick green/red indicator if the thing is still working!
If you then find out that it was broken despite your "light" being green you can expand the tests from there on out.
When you don't know when it is broken you will never be confident enough to change enough stuff around so that you can incrementally improve what needs fixing or changing.
After you have a general sense of how everything works and you trust your small tests to show you when you break "the whole thing", I'd say it's time to set up a small continuous integration server like Jenkins for PHP that allows you to track the status of your project over time. You don't need all the QA stuff at the start (maybe to get an overview of the project), but just seeing that all the "does it still work" stuff returns "yes" is very important and saves you lots of time compared to making sure of that manually.
2% Code Coverage is boring
When you are at a point where unit testing and code coverage come into play, you will be faced with a mean red 0%. It can be quite annoying to never see that number rise much.
But you want to make sure you test NEW code, so I'd suggest using PHP_Change_Coverage, as described in this blog posting, to make sure everything you touch has tests afterwards.
PHP Black Magic
function stuff() {
    if (SOME_OLD_UGLY_CONST == "SOME SETTING") {
        die("For whatever reasons");
    }
    return "useful stuff";
}
When testing, it is really annoying when your scripts die(), but what to do?
Reworking all the scripts without tests can be more harmful than not doing anything at all, so maybe you want a hack in place to get tests first.
For this and other solutions to scary things, there is the php-test-helpers extension.
<?php
set_exit_overload(function() { return FALSE; });   // while active, exit/die is ignored
exit;
print 'We did not exit.';
unset_exit_overload();
exit;
print 'We exited and this will not be printed.';
?>
I would probably start testing with a tool like Watir or Selenium. This will allow you to do black-box testing of the whole page automatically. Once you have set up these tests, you can start to refactor the PHP pages, and build up unit tests as you are doing the refactoring.
Legacy code without unit tests is always a pain. There's no real solution, because most of the time the code isn't written in a way that's unit testable at all. At work we had to handle lots of legacy code too. We wrote unit tests for newly written code (which is a pain too, because you need to be able to set up test data and the like). This way it doesn't get worse, and you will cover more and more of the old legacy code that is called within your new code. Doing this, you get closer each time to a codebase that is covered by unit tests.
You face a big task. You likely can't code all the tests that are needed in any reasonable length of time, and your managers won't care if you do, if they believe the application mostly works.
What you should do is focus on building tests for new or modified application code. There will be plenty of that, if your application has any life, to keep you busy.
You should also verify that you are actually testing that new code. By diffing new check-ins (you are using source control, right? If not, FIX THAT FIRST!) against old ones, you can see what has changed and get precise information about the location of the changes. You (well, your manager) should want proof that those changes have been tested. You can use a code coverage tool to tell what has been tested, and take the intersection of that with the location of your new changes; the intersection had better include all the modified code.
Our PHP Test Coverage tool can provide coverage information, and can compute such intersection data.
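If you want to experiment with that intersection idea yourself before reaching for a dedicated tool, a very rough sketch could look like this (it assumes clover.xml produced by phpunit --coverage-clover clover.xml and changes.diff from your version control; both file names are placeholders):

<?php
// 1. Collect covered lines per file from PHPUnit's clover XML output.
$covered = array();
$clover  = simplexml_load_file('clover.xml');
foreach ($clover->xpath('//file') as $file) {
    $name = (string) $file['name'];
    foreach ($file->line as $line) {
        if ((int) $line['count'] > 0) {
            $covered[$name][(int) $line['num']] = true;
        }
    }
}

// 2. Collect added/changed line numbers per file from a unified diff.
$changed = array();
$current = null;
$lineNo  = 0;
foreach (file('changes.diff') as $row) {
    if (preg_match('#^\+\+\+ b/(.+)#', $row, $m)) {
        $current = trim($m[1]);                      // start of a new file's diff
    } elseif (preg_match('/^@@ -\d+(?:,\d+)? \+(\d+)/', $row, $m)) {
        $lineNo = (int) $m[1];                       // hunk header: line number in the new file
    } elseif ($current !== null && isset($row[0]) && $row[0] === '+') {
        $changed[$current][] = $lineNo++;            // added or modified line
    } elseif ($current !== null && isset($row[0]) && $row[0] !== '-') {
        $lineNo++;                                   // context line, advances the counter
    }
}

// 3. Report changed lines that no test exercised. Clover records absolute
//    paths and the diff relative ones, so paths may need normalising first.
foreach ($changed as $file => $lines) {
    foreach ($lines as $line) {
        if (empty($covered[$file][$line])) {
            echo "UNTESTED: $file line $line\n";
        }
    }
}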

How do I implement a test framework in a legacy project

I've got a big project written in PHP and Javascript. The problem is that it's become so big and unmaintainable that changing some little portion of the code will upset and probably break a whole lot of other portions.
I'm really bad at testing my own code (as a matter of fact, others point this out daily), which makes it even more difficult to maintain the project.
The project itself isn't that complicated or complex, it's more the way it's built that makes it complex: we don't have predefined rules or lists to follow when doing our testing. This often results in lots of bugs and unhappy customers.
We started discussing this at the office and came up with the idea of starting to use test driven development instead of the develop like hell and maybe test later (which almost always ends up being fix bugs all the time).
After that background, the things I need help with are the following:
How to implement a test framework into an already existing project? (3 years in the making and counting)
What kind of frameworks are there for testing? I figure I'll need one framework for JavaScript and one for PHP.
What's the best approach for testing the graphical user interface?
I've never used Unit Testing before so this is really uncharted territory for me.
G'day,
Edit: I've just had a quick look through the first chapter of "The Art of Unit Testing" which is also available as a free PDF at the book's website. It'll give you a good overview of what you are trying to do with a unit test.
I'm assuming you're going to use an xUnit type framework. Some initial high-level thoughts are:
Edit: make sure that everyone is in agreement as to what constitutes a good unit test. I'd suggest using the above overview chapter as a good starting point and, if needed, take it from there. Imagine having people run off enthusiastically to create lots of unit tests while each having a different understanding of what a "good" unit test is. It'd be terrible to find out in the future that 25% of your unit tests aren't useful, repeatable, reliable, etc.
add tests to cover small chunks of code at a time. That is, don't create a single, monolithic task to add tests for the existing code base.
modify any existing processes to make sure new tests are added for any new code written. Make it a part of the review process of the code that unit tests must be provided for the new functionality.
extend any existing bugfix processes to make sure that new tests are created, first to show the presence of the bug and then to prove its absence once fixed. N.B. Don't forget to roll back your candidate fix to introduce the bug again, to verify that it is only that single patch that has corrected the problem and that it is not being fixed by a combination of factors.
Edit: as you start to build up the number of your tests, start running them as nightly regression tests to check nothing has been broken by new functionality.
make a successful run of all existing tests an entry criterion for the review process of a candidate bugfix.
Edit: start keeping a catalogue of test types, i.e. test code fragments, to make the creation of new tests easier. No sense in reinventing the wheel all the time. The unit test(s) written to test opening a file in one part of the code base is/are going to be similar to the unit test(s) written to test code that opens a different file in a different part of the code base. Catalogue these to make them easy to find.
Edit: where you are only modifying a couple of methods of an existing class, create a test suite to hold the complete set of tests for the class. Then only add the individual tests for the methods you are modifying to this test suite. This uses xUnit terminology, as I'm now assuming you'll be using an xUnit framework like PHPUnit.
use a standard convention for the naming of your test suites and tests, e.g. a suite testSuite_classA which then contains individual tests named after the function and case under test, for example test_fopen_bad_name and test_fopen_bad_perms, etc. (a small example follows this list). This helps minimise the noise when moving around the code base and looking at other people's tests. It also has the benefit of helping people when they come to name their tests in the first place, by freeing up their mind to work on the more interesting stuff like the tests themselves.
Edit: I wouldn't use TDD at this stage. By definition, TDD needs all tests present before the changes are in place, so you would have failing tests all over the place as you add new testSuites to cover classes that you are working on. Instead, add the new testSuite and then add the individual tests as required, so you don't get a lot of noise in your test results from failing tests. And, as Yishai points out, adding the task of learning TDD at this point in time will really slow you down. Put learning TDD down as a task to be done when you have some spare time. It's not that difficult.
as a corollary of this you'll need a tool to keep track of those existing classes where the testSuite exists but where tests have not yet been written to cover the other member functions in the class. This way you can keep track of where your test coverage has holes. I'm talking at a high level here, where you can generate a list of classes and specific member functions for which no tests currently exist. A standard naming convention for the tests and testSuites will greatly help you here.
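To make the naming convention point concrete, such a suite might look like this (FileLoader and its open() method are invented for illustration):

<?php
class testSuite_FileLoader extends PHPUnit_Framework_TestCase
{
    public function test_fopen_bad_name()
    {
        $loader = new FileLoader();
        $this->assertFalse($loader->open('no/such/file.txt'));
    }

    public function test_fopen_bad_perms()
    {
        $loader = new FileLoader();
        $this->assertFalse($loader->open('/etc/shadow'));   // readable only by root
    }
}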
I'll add more points as I think of them.
HTH
You should get yourself a copy of Working Effectively with Legacy Code. This will give you good guidance on how to introduce tests into code that was not written to be tested.
TDD is great, but you do need to start with just putting the existing code under test, to make sure that the changes you make don't alter existing required behavior.
However, introducing TDD now will slow you down a lot before you get back going, because retrofitting tests, even only in the area you are changing, is going to get complicated before it gets simple.
Just to add to the other excellent answers, I'd agree that going from 0% to 100% coverage in one go is unrealistic - but that you should definitely add unit tests every time you fix a bug.
You say that there are quite a lot of bugs and unhappy customers - I'd be very positive about incorporating strict TDD into the bugfixing process, which is much easier than implementing it overall. After all, if there really is a bug there that needs to be fixed, then creating a test that reproduces it serves various goals:
It's likely to be a minimal test case to demonstrate that there really is an issue
With confidence that the (currently failing) test highlights the reported problem, you'll know for sure if your changes have fixed it
It will forever stand as a regression test that will prevent this same issue recurring in future.
Introducing tests to an existing project is difficult and likely to be a long process, but writing them as you fix bugs is such an ideal opportunity (parallel to introducing tests gradually in the "normal" sense) that it would be a shame not to take that chance and make lemonade from your bug reports. :-)
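For instance, such a bug-reproducing test might look like this (DiscountCalculator and the reported behaviour are invented purely for illustration):

<?php
class DiscountCalculatorTest extends PHPUnit_Framework_TestCase
{
    /**
     * Regression test for a hypothetical bug report: "a 100% voucher
     * produces a negative total". Written before the fix it fails,
     * and after the fix it stays in the suite as a guard.
     */
    public function testFullDiscountNeverProducesNegativeTotal()
    {
        $calculator = new DiscountCalculator();
        $total = $calculator->apply(19.99, 1.0);   // price, discount rate

        $this->assertGreaterThanOrEqual(0, $total);
    }
}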
From a planning perspective, I think you have three basic choices:
take a cycle to retrofit the code with unit tests
designate part of the team to retrofit the code with unit tests
introduce unit tests gradually as you work on the code
The first approach may well last a lot longer than you anticipate, and your visible productivity will take a hit. If you use it, you will need to get buy-in from all your stakeholders. However, you might use it to kickstart the process.
The problem with the second approach is that you create a distinction between coders and test writers. The coders will not feel any ownership for test maintenance. I think this approach is worth avoiding.
The third approach is the most organic, and it gets you into test-driven development from the get go. It may take some time for a useful body of unit tests to accumulate. The slow pace of test accumulation might actually be an advantage in that it gives you time to get good at writing tests.
All things considered, I think I'd opt for a modest sprint in the spirit of approach 1, followed by a commitment to approach 3.
For the general principles of unit testing I recommend the book xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.
I've used PHPUnit with good results. PHPUnit, like other JUnit-derived projects, requires that the code to be tested be organized into classes. If your project is not object-oriented, then you'll need to start refactoring loose procedural code into functions, and functions into classes.
I've not personally used a JavaScript framework, though I would imagine that these frameworks would also require that your code be structured into (at least) callable functions, if not full-blown objects.
For testing GUI applications, you may benefit from using Selenium, though a checklist written by a programmer with good QA instincts might work just fine. I've found that using MediaWiki or your favorite Wiki engine is a good place to store checklists and related project documentation.
Implementing a framework is in most cases a complex task, because you kind of start rebuilding your old code with some new, solid framework parts. Those old parts must start to communicate with the framework. The old parts must receive callbacks and return states, the old parts must then somehow point that out to the user, and in fact you suddenly have two systems to test.
If you say that your application itself isn't that complex, but it has become so due to lack of testing, it might be a better option to rebuild the application. Put some common frameworks like Zend to the test, gather your requirements, find out if the evaluated framework suits the requirements, and decide if it's useful to start over.
I'm not very sure of unit testing, but NetBeans has a built-in unit testing suite.
If the code is really messy, it's possible that it will be very hard to do any unit testing. Only sufficiently loosely coupled and sufficiently well designed components can be unit tested easily. However, functional testing may be a lot easier to implement in your case. I would recommend taking a look at Selenium. With this framework, you will be able to test your GUI and the backend at the same time. However, most probably, it won't help you catch the bugs as well as you could with unit testing.
Maybe this list will help you and your mates to re-structure everything:
Use UML to design and handle exceptions (http://en.wikipedia.org/wiki/Unified_Modeling_Language)
Use BPMS to design your workflow so you won't struggle (http://en.wikipedia.org/wiki/Business_process_management)
Get a list of php frameworks which also support javascript backends (e.g. Zend with jQuery)
Compare these frameworks and take the one which best matches your project design and the coding structure used before
You should maybe consider using things like ezComponents and DTrace for debugging and testing
Do not be afraid of changes ;)
For GUI testing you may want to take a look at Selenium (as Ignas R pointed out already) OR you may wanna take a look at this tool as well: STIQ.
Best of luck!
In some cases, doing automatic testing may not be such a good idea, especially when the code base is dirty and the PHP mixes its behavior with JavaScript.
It could be better to start with a simple checklist (with links, to make it faster) of the tests that should be done (manually) on the thing before each delivery.
When coding in a 3-year-old minefield, better protect yourself with lots of error checking. The 15 minutes spent writing THE proper error message for each case will not be lost.
Use the bridging method: bridge the ugly, lengthy function fold() with a call to fnew(), which is a wrapper around some clean classes; call both fold() and fnew(), compare the results, log the differences, put the code into production and wait to see what the logs catch. When doing this, always use one cycle for refactoring and another cycle for changing results (do not even fix bugs in the old behavior, just bridge it).
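A sketch of that bridge (fold() is the untouched legacy function; fnew(), CleanCalculator and the logging details are invented for illustration):

<?php
function fnew($input)
{
    $calculator = new CleanCalculator();            // new, testable implementation
    return $calculator->calculate($input);
}

function fold_bridged($input)
{
    $old = fold($input);                            // legacy result stays authoritative
    $new = fnew($input);

    if ($old !== $new) {
        error_log(sprintf(
            'fold/fnew mismatch for %s: old=%s new=%s',
            var_export($input, true),
            var_export($old, true),
            var_export($new, true)
        ));
    }

    return $old;   // keep returning the old result until the log stays quiet
}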
I agree with KOHb, Selenium is a must!
Also have a look at PHPure.
Their software records the inputs and outputs from a working PHP website, and then automatically writes PHPUnit tests for functions that do not access external sources (db, files, etc.).
It's not a 100% solution, but it's a great start.
