Test Driven Development (TDD) involves writing the tests before the code under test. This has a number of benefits for the development process, including grounding the developer in the use cases of their code and encouraging better modularity and observability in the components.
One question, though, is whether you can tell if code that comes with accompanying tests was written test first, or whether the tests were added afterwards.
I was recently given a coding challenge where the success criteria included proof of test driven development. Could the reviewers reasonably judge whether I’d written the tests or the algorithm first?
I think it’s often quite clear when tests were NOT written first. Here are a few smells that suggest we’re dealing with code-then-test development rather than test driven development:
- The tests are written at a more end-to-end level rather than the unit level
- There’s an absence of dependency injection, meaning that it’s hard to write further unit tests
- Capabilities are more tightly coupled than they would be if tests had driven the design
- There’s limited testing of error handling
- A single test case may exercise multiple use cases at once
I’m sure there are many others, of course.
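To make the dependency injection smell concrete, here’s a minimal Python sketch. The `Greeter` classes and the `clock` parameter are hypothetical examples of my own, not from any real codebase: when the dependency is hard-wired, the unit test can’t control the current time; when it’s injected, a fake is trivial.

```python
import datetime
import unittest


class HardWiredGreeter:
    """Hard to unit test: the current time is baked into the method."""

    def greet(self, name):
        hour = datetime.datetime.now().hour
        period = "morning" if hour < 12 else "afternoon"
        return f"Good {period}, {name}"


class Greeter:
    """Testable: the clock is injected as any callable returning an hour."""

    def __init__(self, clock):
        self.clock = clock

    def greet(self, name):
        period = "morning" if self.clock() < 12 else "afternoon"
        return f"Good {period}, {name}"


class GreeterTest(unittest.TestCase):
    def test_morning_greeting(self):
        # A lambda stands in for the real clock -- no mocking framework needed.
        self.assertEqual(Greeter(clock=lambda: 9).greet("Ada"), "Good morning, Ada")

    def test_afternoon_greeting(self):
        self.assertEqual(Greeter(clock=lambda: 15).greet("Ada"), "Good afternoon, Ada")
```

Writing the test first tends to force the `Greeter` shape on you, because the hard-wired version is simply painful to test.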
You’ll note that I don’t mention code coverage. High code coverage CAN be a by-product of TDD, though it’s more a tool you use during TDD to check whether the tests and code have diverged in some way.
While I can humbly confess that my tests have often come a few minutes too late (it’s seldom hours), I do try to drop into test first development as often as I can, and I encourage myself to stop writing code once I have at least a testable husk.
The silly thing is that test first code, when written with sensible techniques (i.e. without over-mocking and the like), is always so much nicer than any alternative. Why do we let ourselves do it any other way!?