I have, on a few occasions, debated with teammates whether it is better to have a little algorithm inside a unit test to work out the right answer from some raw data before asserting the outcome, or whether the right answer should be fixed at compile time along with the right question you ask of your code-under-test.
There are clearly two sides to the story. There are situations where you need to generate test data on the fly, where you cannot know at coding time what the exact answer is. For example, if you needed to generate a test object that took a UUID as its ID, you'd have to pick that ID up after it was generated randomly, rather than assume a pre-ordained UUID was the right identifier for all test scenarios.
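To make that concrete, here's a minimal sketch. The `Order` and `OrderStore` classes are hypothetical stand-ins for code under test; the point is that the test captures the randomly generated ID after construction rather than hard-coding one:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical entity whose ID is generated randomly at construction.
class Order {
    final UUID id = UUID.randomUUID();
}

// Hypothetical in-memory store standing in for the code under test.
class OrderStore {
    private final Map<UUID, Order> orders = new HashMap<>();
    void save(Order order) { orders.put(order.id, order); }
    Order findById(UUID id) { return orders.get(id); }
}

public class OrderStoreTest {
    public static void main(String[] args) {
        OrderStore store = new OrderStore();
        Order order = new Order();
        store.save(order);
        // The ID can't be known at coding time, so the test picks it up
        // from the generated object instead of asserting a pre-ordained UUID.
        UUID generatedId = order.id;
        if (store.findById(generatedId) != order) {
            throw new AssertionError("expected the saved order back");
        }
        System.out.println("ok");
    }
}
```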
Similarly, for trivial examples it can be clearer for the test input data to be added up into the expected answer, so long as composing that answer is less an algorithm and more an assembly of the initial facts in a slightly different order. In some languages, this is still, essentially, fixing the answer at compile time.
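A sketch of what I mean, with a hypothetical `add` function as the code under test. The expected value `1 + 2` is just the input facts rearranged, and the Java compiler folds that constant expression at compile time, so the answer really is fixed in the source:

```java
public class AdderTest {
    // Hypothetical code under test.
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // "1 + 2" assembles the initial facts into the expected answer;
        // the compiler folds it to the constant 3, so the expectation
        // is still effectively fixed at compile time.
        if (add(1, 2) != 1 + 2) {
            throw new AssertionError("expected 3");
        }
        System.out.println("ok");
    }
}
```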
There are two absolute no-go areas for me when it comes to test result calculation inside a unit test:
- Reimplementing the algorithm under test in order to check the answer
- Processing real-world resources, like time or values from live services, when it's entirely avoidable.
Let’s unpack these.
Reimplementing the algorithm to test the algorithm
Is it a test when you paste the code under test into the test to test the code under test?
Are we testing the implementation or the behaviour? If you can’t calculate the answer correctly in advance of writing the test, are you sure you’re testing the right thing?
How do you even keep a parallel algorithm in line with the intended true results as you refactor your code?
There’s something funny here. I’ve made this mistake myself, and I want to remember never to do it again.
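Here's the shape of the mistake, with a hypothetical discount calculation as the code under test. If the expected value reimplements the formula, a bug in the formula passes in both places; fixing the answer by hand breaks that symmetry:

```java
public class DiscountTest {
    // Hypothetical code under test: applies a 10% discount.
    static double discountedPrice(double price) {
        return price * 0.9;
    }

    public static void main(String[] args) {
        // Anti-pattern: the expectation reimplements the algorithm,
        // so a wrong formula would agree with itself and still pass:
        //   double expected = 100.0 * 0.9;

        // Better: the answer is worked out by hand and fixed in the source.
        double expected = 90.0;

        if (discountedPrice(100.0) != expected) {
            throw new AssertionError("expected 90.0");
        }
        System.out.println("ok");
    }
}
```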
Processing Real World Resources
If you have an XML file with your test data in it and you load that up, that's probably ok.
If you round-trip data to a database and then test that you can pull it back out, in a half integration/DB-layer test, that's probably ok.
Once you start having to calculate what a real-world resource ought to do, it gets harder to be objectively correct across all environments, and it leads you away from unit testing. If you're unit testing, you should be able to slice off the real-world resources.
The computer's clock is a real-world resource. LocalTime.now is not a huge friend of testing, as its value changes at test time. Every time. It's perfectly reasonable to use when you're measuring success with relative values. For example, you could pass in now and tomorrow into a test to make sure it thinks the time gap is a day. But you might still be better off sending my birthday and the day after my birthday as more fixed test data.
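Both styles side by side, with a hypothetical `dayGap` function as the code under test. The relative version works whatever today happens to be; the fixed-date version reads straight off the page:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class GapTest {
    // Hypothetical code under test: whole days between two dates.
    static long dayGap(LocalDate from, LocalDate to) {
        return ChronoUnit.DAYS.between(from, to);
    }

    public static void main(String[] args) {
        // Relative values: now and tomorrow, whatever today is.
        LocalDate now = LocalDate.now();
        assertOneDay(dayGap(now, now.plusDays(1)));

        // Fixed test data: the dates are readable in the test itself.
        assertOneDay(dayGap(LocalDate.of(2024, 3, 1), LocalDate.of(2024, 3, 2)));
        System.out.println("ok");
    }

    static void assertOneDay(long gap) {
        if (gap != 1) throw new AssertionError("expected a one-day gap");
    }
}
```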
There’s a general issue here with functions that are statically bound to now and how hard they are to test… in short, if you can avoid unit testing with that static dependency in play, you’ll be happier.
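One way out of that static dependency, sketched with a hypothetical `Greeter` class: java.time lets you inject a Clock, and Clock.fixed slices the real-world clock out of the test entirely:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalTime;
import java.time.ZoneOffset;

// Hypothetical code under test: takes a Clock instead of being
// statically bound to the ambient system clock via LocalTime.now().
class Greeter {
    private final Clock clock;
    Greeter(Clock clock) { this.clock = clock; }

    String greeting() {
        return LocalTime.now(clock).getHour() < 12
                ? "Good morning" : "Good afternoon";
    }
}

public class GreeterTest {
    public static void main(String[] args) {
        // Fixed clocks make the test's inputs as plain as any other data.
        Clock nineAm = Clock.fixed(Instant.parse("2024-03-01T09:00:00Z"), ZoneOffset.UTC);
        Clock threePm = Clock.fixed(Instant.parse("2024-03-01T15:00:00Z"), ZoneOffset.UTC);

        if (!new Greeter(nineAm).greeting().equals("Good morning")) {
            throw new AssertionError("expected a morning greeting at 09:00");
        }
        if (!new Greeter(threePm).greeting().equals("Good afternoon")) {
            throw new AssertionError("expected an afternoon greeting at 15:00");
        }
        System.out.println("ok");
    }
}
```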
If your test has to work out the test data or expected answer for itself, when you come to re-read the test, perhaps to diagnose what's going wrong, it's enormously difficult to understand. It's like going to a soothsayer and asking "What's the expected answer?" and being told "My first is in Apple and also in Pear."
Furthermore, if there's calculation within your tests, is that calculation tested? Is it reliable? Do you have to build services to DRY such calculations out so that your tests aren't full of repeated logic…? It seems like the ecosystem for this sort of approach can get out of hand.
If we think of a unit test as a piece of specification with worked examples, we shouldn't expect to have to rely on things outside of the description to see what precise inputs and outputs are in use. The moment we start calculating expected outputs in software, we're obscuring the explanation of the example and allowing bugs in calculation, independent of or common to the code-under-test and the test code, to really spoil our day.