Saturday, May 2, 2009

Test Driven Development - legacy code

Legacy code is traditionally taken to mean code written by someone else, somewhere, at some point in the past. Old code, in other words.

Departing from this definition, Michael Feathers, in the preface of his book Working Effectively with Legacy Code, offers a new one: legacy code is simply “code without tests.”

If we define legacy code as “code without tests,” then writing code without driving it with tests first is essentially writing instant legacy code.

How do we test-drive on top of a legacy code base?

Michael Feathers in his book describes a process for working with legacy code.

1 - Identify change point
2 - Identify inflection point
3 - Cover the inflection point
  • 3.a - Break external dependencies
  • 3.b - Break internal dependencies
  • 3.c - Write tests
4 - Make changes
5 - Refactor covered code

Lasse Koskela, in his book "Test Driven", splits these steps into three phases.

1 - Analyzing the change
2 - Preparing for the change
3 - Test-driving the change

Analyzing the change

When we start analyzing the change we want to make, we first identify the change points. Change points are the places in the code base where we need to edit code in order to implement the desired change. This is fundamentally no different from the analysis we carry out with any kind of code base, legacy or not. The main difference is that a legacy code base is generally more difficult to learn and understand than one with thorough unit tests documenting the intent and purpose of individual classes and methods.

When we know where the change should take place, we identify the inflection
point. The inflection point (or test point) is the point “downstream” in our code
where we can detect any relevant change in the system’s behavior after touching the code in the change points. Typical inflection points are close-proximity seams such as method invocations after or around the change points.
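As a concrete sketch (all names here are hypothetical, not from either book), consider a small legacy class in which the change point and a close-proximity inflection point sit right next to each other:

```python
# Hypothetical legacy class illustrating the two concepts: the
# discount rule is the change point we intend to edit; the value
# returned by total() is a close-proximity inflection point, because
# any change to the rule is immediately observable there.

class InvoiceCalculator:
    def total(self, amount, customer_type):
        # change point: the discount rule we plan to modify
        discount_percent = 10 if customer_type == "vip" else 0
        # inflection point: the return value reflects any edit above
        # (amount is in whole cents, so integer arithmetic suffices)
        return amount * (100 - discount_percent) // 100
```

A test that calls total() and checks its return value is a test written at the inflection point: it will detect any relevant change we make to the discount rule.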

Sometimes, however, it might make sense to find the inflection point farther away from the change point. Examples of such distant inflection points might be network connections to external systems and log output produced by the code in and around the change point. In some cases, it might even be sufficient to treat the system’s persistent data source as the inflection point.

The trade-off to make is basically between the ease of writing tests at the chosen
inflection point and the certainty provided by those tests.

Close-proximity inflection points tend to provide a more localized checkpoint
without too much noise around the signal. Distant inflection points, on the other hand, are more likely to catch side effects our analysis hadn’t found—but in exchange for potentially more effort in writing the tests because we often don’t have access to the kind of detailed information we usually have when testing close to the change point.

After having analyzed the code base looking for the change and inflection points, we know what we need to change and where we can test for the expected behavior. In the next phase, we prepare to make the change in a safe, test-driven manner.

Preparing for the change

Once we’ve spotted the change and inflection points, we proceed to cover the
inflection point with tests that nail down the current behavior before we make our change. This might involve breaking dependencies with careful edits that expose the dependency through a seam we can manipulate in our tests.

The tests we write to cover the inflection point are typically what we call characterization tests, meaning that they nail down the current functionality as is, without worrying about whether that behavior is correct. Characterization tests are often also learning tests in the sense that we use them to verify assumptions we’ve made while identifying the change points.
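A minimal sketch of a characterization test, using a made-up legacy helper: we record what the code actually does, even where its behavior looks wrong or surprising.

```python
# Suppose we inherit this (hypothetical) rounding helper whose
# edge-case behavior nobody remembers:

def legacy_round(value):
    # legacy quirk: negative values are truncated toward zero
    # rather than rounded
    if value < 0:
        return int(value)
    return int(value + 0.5)

# Characterization tests pin down the behavior as is. A common way to
# write them: guess an expectation, run the test, and when it fails,
# record the actual output as the expected value.
assert legacy_round(2.5) == 3    # documents current behavior
assert legacy_round(-2.5) == -2  # surprising, but that's what it does
```

Whether truncating negatives is a bug is a separate question; the characterization test only guarantees we will notice if that behavior changes under our feet.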

With sufficient characterization tests in place, we’re ready to move on to the third phase of our test-driven legacy development process—making the change.

Test-driving the change

After we’ve written tests around the change and inflection points to the degree
that we’re comfortable with our test coverage, we make the change by adding a
test for the desired new functionality. As we proceed with implementing the
change, our characterization tests tell us if we broke anything while editing the
legacy code, and the newly added tests tell us when we have managed to implement the desired change correctly. Finally, once we’ve successfully made the change and all tests are passing, we refactor the code as usual, enjoying the cover of our automated tests.

That’s all there is to working with legacy code in general. The main differences
between the regular test-driven development cycle and the process described are that we need to write tests for the existing behavior before adding a test for the new behavior and that we often need to make small dependency-breaking edits without our safety net in order to be able to start writing those tests. It just
requires a bit more care and thought.

References:

- Michael Feathers, Working Effectively with Legacy Code (Prentice Hall, 2004)
- Lasse Koskela, Test Driven: Practical TDD and Acceptance TDD for Java Developers (Manning, 2007)
