In a couple of the coding test smells, there's a push and pull between writing DRYed-out code, where there's no repetition, and writing tests where you can see what the test is doing, but where the boilerplate overflows so much that you can no longer see what the use case is.
In general, I’m comfortable with the idea of moving the environment simulation to the setup phase of a test, and it’s usually in the setting up of mocked resources that we encounter various bits of wrangling. So maybe we just have one big old test setup and are done with it.
However, if different tests need to simulate the same activity with different outcomes, then we need to move the setup into the tests themselves, otherwise we end up with the over-sharing setup anti-pattern. And even then, huge test setup functions aren’t that great to navigate.
So We Need An Easy Way To Explain Test Moves
These days, a test is probably written in the given/when/then style, and its sections should reflect a meaningful use case – something the product owner might recognise, or something that a lower-level technology person might recognise as the feature description of a low-level component.
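To make that concrete, here’s a long-hand given/when/then test over some hypothetical domain classes (`Customer` and `PricingService` are invented for illustration – they’re not from the examples later in this post):

```java
// Hypothetical domain classes, invented purely for illustration.
class Customer {
    final int previousOrders;
    Customer(int previousOrders) { this.previousOrders = previousOrders; }
}

class PricingService {
    int priceFor(Customer customer, int listPrice) {
        // repeat customers (three or more previous orders) get 10% off
        return customer.previousOrders >= 3 ? listPrice * 9 / 10 : listPrice;
    }
}

class DiscountTest {
    // Each phase is visible in the test body – a product owner could
    // read the comments and recognise the use case.
    void repeatCustomerGetsTenPercentOff() {
        // given
        Customer customer = new Customer(3);
        PricingService pricing = new PricingService();

        // when
        int price = pricing.priceFor(customer, 100);

        // then
        if (price != 90) throw new AssertionError("expected 90 but was " + price);
    }
}
```

The test name plus the three phases read as a feature description; that’s the shape we want to preserve when we start DRYing things out.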
Let’s look at a few ways we can DRY out the code in the spirit of readability:
- Extract a few constants – let’s have test data expressed with some well named labels, referring to some objects that are constructed outside of the test suite – maybe even inside files
- Just DON’T dry it out – prefer code that’s long hand, because in your case it’s not actually that bad – explicit code is transparent
- Extract functions with names like givenTheServerWillReturnAFaultOnThirdCall or thenTheServerReceivedTheseMessages – these can really help explain the test, and can, so long as you don’t refactor it down to the atom, add a lot of purpose to the test – but they do turn the test fixture into tests and accompanying functions, which are sort of part of the test and sort of not
- Go crazy and write your own DSL helpers
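The third option – extracted, well-named fixture functions – might look something like this sketch. The `FakeServer` and `ServerFixture` classes are hypothetical stand-ins, not anything from a real library:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical fake server used by the fixture helpers below.
class FakeServer {
    private final List<String> received = new ArrayList<>();
    private int faultOnCall = -1;
    private int calls = 0;

    void failOnCall(int n) { faultOnCall = n; }

    String send(String message) {
        calls++;
        if (calls == faultOnCall) throw new IllegalStateException("simulated fault");
        received.add(message);
        return "OK";
    }

    List<String> receivedMessages() { return received; }
}

// Named helpers keep the given/then steps readable in the test body,
// at the cost of the fixture becoming "sort of part of the test".
class ServerFixture {
    final FakeServer server = new FakeServer();

    void givenTheServerWillReturnAFaultOnThirdCall() {
        server.failOnCall(3);
    }

    void thenTheServerReceivedTheseMessages(String... expected) {
        List<String> want = List.of(expected);
        if (!server.receivedMessages().equals(want)) {
            throw new AssertionError("expected " + want + " but was " + server.receivedMessages());
        }
    }
}
```

A test using these helpers reads almost like the use case itself, which is the payoff; the drawback, as noted above, is that the behaviour now lives in two places.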
In Praise of Going Crazy
Today I wrote a test DSL helper. It took a couple of minutes. It was inspired by the nice DSLs of AssertJ and Mockito:
```java
// assertJ
assertThatThrownBy(() -> doSomething())
    .isInstanceOf(MyException.class)
    .hasMessage("Boom, baby!");

// mockito
given(myService.getNext())
    .willReturn(123);

...

then(theOtherService)
    .should()
    .explode();
```
I’ll fictionalise the example, but here’s the sort of thing I coded:
```java
MockFileStore mockFileStore = new MockFileStore(); // creates a FileStore inside
MyService service = new MyService(mockFileStore.getFileStore());

mockFileStore.given("foo.txt")
    .exists();
mockFileStore.given("bar.txt")
    .hasText("This is a test");
mockFileStore.given("mydir")
    .isDirectory();

// use the service, which uses the `FileStore` object to do things
```
The exact mechanism for mocking the behaviour of the file store in the above is hidden behind the helpers. Maybe it uses Mockito. Maybe there’s some more complex mocking, or a test-specific file client that we’re using in place of the usual one. It doesn’t matter. The point is that the caller of these methods has to describe only what they’re trying to set up.
These fluent DSLs are easy enough to create, and they add to the readability while keeping the test setup relatively explicit. More complex scenarios can be modelled in the DSL class, rather than turning into long-winded functions in the test fixture.
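For the curious, here’s one minimal way such a helper could be built – a plain in-memory stub rather than Mockito, against a hypothetical `FileStore` interface (the post deliberately hides the real mechanism, so treat every name here as invented):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical interface the code under test depends on.
interface FileStore {
    boolean exists(String path);
    boolean isDirectory(String path);
    Optional<String> readText(String path);
}

// The fluent test helper: given(path) returns a stub builder that
// records the behaviour we want the FileStore to simulate for that path.
class MockFileStore {
    private final Map<String, Entry> entries = new HashMap<>();

    private static class Entry {
        boolean exists;
        boolean directory;
        String text;
    }

    static class Stub {
        private final Entry entry;
        private Stub(Entry entry) { this.entry = entry; }

        Stub exists() { entry.exists = true; return this; }
        Stub isDirectory() { entry.exists = true; entry.directory = true; return this; }
        Stub hasText(String text) { entry.exists = true; entry.text = text; return this; }
    }

    Stub given(String path) {
        return new Stub(entries.computeIfAbsent(path, p -> new Entry()));
    }

    // The FileStore handed to the code under test, backed by the recorded stubs.
    FileStore getFileStore() {
        return new FileStore() {
            @Override public boolean exists(String path) {
                Entry e = entries.get(path);
                return e != null && e.exists;
            }
            @Override public boolean isDirectory(String path) {
                Entry e = entries.get(path);
                return e != null && e.directory;
            }
            @Override public Optional<String> readText(String path) {
                Entry e = entries.get(path);
                return e == null ? Optional.empty() : Optional.ofNullable(e.text);
            }
        };
    }
}
```

The whole thing is a map of recorded behaviours plus a builder that writes into it – a couple of minutes’ work, as advertised, and swapping the backing for Mockito later wouldn’t change a single test.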