Do it if you need it – Part 1 – Testing

I have been asked to explore the idea of supplying a function with some code to use if needed. The perfect example of this is the following:

log4netLogger.Debug("This code needs to invoke " + x.ToString() + " to log");

Whether or not debug logging is enabled, x.ToString() will be called and the string concatenations will happen. That may not be an issue, unless it’s a high-availability system, x.ToString() has an expensive implementation, or the concatenation itself takes significant time.

So the question is how we pass the “method” of calculating that answer, rather than the “implementation”. For logging, this means we could avoid having to always write:

if (log4netLogger.IsDebugEnabled) 
{
   log4netLogger.Debug("This code needs to invoke " + x.ToString() + " to log");
}

It’s a lovely cause.
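To make the idea concrete before we worry about proving it, here is a minimal sketch of the general shape (LazyLogger and its Debug(Func<string>) overload are my own illustration, not part of log4net’s ILog; the real options are explored later):

    /// <summary>
    /// A sketch of the idea: the caller passes the “method” of building the
    /// message, and we only invoke it if debug logging is actually enabled.
    /// (LazyLogger is a hypothetical wrapper, not part of log4net.)
    /// </summary>
    public class LazyLogger
    {
        private readonly log4net.ILog log;

        public LazyLogger(log4net.ILog log)
        {
            this.log = log;
        }

        public void Debug(System.Func<string> messageBuilder)
        {
            if (log.IsDebugEnabled)
            {
                log.Debug(messageBuilder());
            }
        }
    }

The caller would then write lazyLogger.Debug(() => "This code needs to invoke " + x.ToString() + " to log"), and x.ToString() would only run when debug logging is switched on.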

However, the coding craftsman wants to be sure his/her code works, and so will want to prove it via a unit test. For this we need some sort of mocks that prove the invocation happens only when it’s needed.

For the test example, let’s take the following three interfaces:

    /// <summary>
    /// The source of some business logic
    /// </summary>
    public interface ILogicService
    {
        bool IsActive();
    }

    /// <summary>
    /// Retrieve some big object or other
    /// </summary>
    public interface IDataRetrievalService
    {
        string GetBigData();
    }

    /// <summary>
    /// Write the data out
    /// </summary>
    public interface IOutputService
    {
        void Output(string data);
    }

We may be building a simple bit of logic that decides, based on the state of the logic service, whether to retrieve the data from the data retrieval service and send it through the output service.

You could have an implementation like this:

        OutputIfActiveWithData(logicService, retrievalService.GetBigData(), outputService);

...

        static internal void OutputIfActiveWithData(ILogicService logic, string data, IOutputService output)
        {
            if (logic.IsActive())
            {
                output.Output(data);
            }
        }

But that violates our aim to have the data retrieved ONLY if it is necessary.

So, let’s write our first unit test. But how? We don’t have any concrete implementations of those services. Time for RHINO MOCKS!

I’m going to jump straight in here with some of the how-tos.

We’ll assume there’s an NUnit TestFixture:

    [TestFixture]
    public class ConditionalRetrieverTest
    {
        private MockRepository repository;
        private ILogicService logicService;
        private IDataRetrievalService retrievalService;
        private IOutputService outputService;

    }

This is going to have things which look like our services and the mock repository – a RhinoMocks concept.

We’re also going to assume that the implementation of our clever “look it up later when necessary” algorithm is in a class called ConditionalRetriever.

Now to some of the mechanics of NUnit. NUnit may or may not construct multiple copies of the TestFixture class as it runs the tests. Therefore, the set up of the mocks, which are stateful and only relate to a single test run, should not be in the constructor. In addition, this particular set of tests is very sensitive about the order of what happens with those mock services, as we’re going to be trying to prove that the DataRetrievalService is only used when needed and only after the LogicService has been checked. So, it’s a good idea to put the verification that everything went ok with the mocks at the end of every test.

This is where the [SetUp] and [TearDown] attributes of NUnit can be used to good effect. In the [SetUp] function we will create three mock services and in the [TearDown] we’ll release them for garbage collection (it’s useful practice) and we’ll also ensure that there was nothing “forgotten” by the last test that ran.

What Are RhinoMocks?
In short, they are dynamic mock objects. You use the library to create a temporary object which implements the interface your code-under-test needs, without having to write an implementation of that interface yourself. Instead, you tell the library how the mock object should respond in the scenario you’re testing. You can then ensure your code-under-test gets the right input, and verify that it makes the right requests.

Setting up the RhinoMocks

        [SetUp]
        public void Setup()
        {
            // create mocks
            repository = new MockRepository();
            logicService = repository.StrictMock<ILogicService>();
            retrievalService = repository.StrictMock<IDataRetrievalService>();
            outputService = repository.StrictMock<IOutputService>();
        }

The MockRepository owns the mock objects. A StrictMock is a type of mock which will only do what you’ve pre-programmed it to do. There are other, more easy-going mock objects which don’t care as much about what happens to them and can be programmed to just give a certain return every time. How strict you are depends on what your test is trying to prove. In this case, we’re trying to be sure things are evaluated in the right order, so strict is good.
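For contrast, here is a sketch of a more relaxed set-up (not used in these tests); DynamicMock and SetupResult are the RhinoMocks ways of saying “answer this however often it’s asked, and don’t worry about anything else”:

        // A relaxed alternative (sketch only): a DynamicMock ignores calls it
        // hasn't been told about, and SetupResult gives the same answer however
        // many times, and in whatever order, the method is called.
        ILogicService relaxedLogic = repository.DynamicMock<ILogicService>();
        SetupResult.For(relaxedLogic.IsActive()).Return(true);
        repository.ReplayAll();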

Checking the mocks afterwards

        [TearDown]
        public void TearDown()
        {
            repository.VerifyAll();

            // release the mock objects for garbage collection
            repository = null;
            logicService = null;
            retrievalService = null;
            outputService = null;
        }

Not rocket science. Setting the fields to null is not essential, but it is not bad practice either, especially if the test runner is going to reuse this class and any of the test objects hold a lot of memory.

The VerifyAll call is what makes these mocks so powerful: any expectation that was not satisfied will fail the test, so the mocks can be hyper-strict about proving things are called in the right order.

Defining our expectations

        private void SetUpForActive()
        {
            // set active, which expects all methods to be called
            using (repository.Ordered())
            {
                Expect.Call(logicService.IsActive()).Return(true);
                Expect.Call(retrievalService.GetBigData()).Return("My Big Data");
                Expect.Call(() => outputService.Output("My Big Data"));
            }
            repository.ReplayAll();
        }

        private void SetUpForInActive()
        {
            // in an inactive situation we expect only the logic service to be called
            using (repository.Ordered())
            {
                Expect.Call(logicService.IsActive()).Return(false);
            }
            repository.ReplayAll();
        }

Another bit of testing best practice: I have extracted the methods which tell the mock objects what to expect, because I planned to try different algorithms and didn’t want to bog down each test with the set-up of the mock objects.

Notice the Ordered() method, used in conjunction with the using syntax. This tells the repository to only allow the calls to arrive in the given sequence.

Methods which return a value use the Expect.Call(..).Return(..) pattern; void methods use the lambda overload of Expect.Call to stipulate that they will be called.

ReplayAll switches the repository into playback mode, ready for our tests.

An example unit test might be:

        [Test]
        public void RunsWithLateEvaluation()
        {
            SetUpForActive();

            ConditionalRetriever.OutputIfActiveWithDataInterface(logicService, retrievalService, outputService);
        }
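The OutputIfActiveWithDataInterface method isn’t shown above; the options for its shape are really the subject of the follow-up, but as a sketch (an assumption on my part), a variant that takes the retrieval service itself would only invoke GetBigData() once IsActive() has returned true:

        // A sketch of one possible method under test: pass the retrieval
        // service rather than the data, so the data is only fetched when
        // the logic service says it is needed.
        static internal void OutputIfActiveWithDataInterface(ILogicService logic, IDataRetrievalService retrieval, IOutputService output)
        {
            if (logic.IsActive())
            {
                output.Output(retrieval.GetBigData());
            }
        }

A companion test for the inactive path, using SetUpForInActive, might look like this; with strict, ordered mocks, any unexpected call to GetBigData() or Output() would fail the test:

        [Test]
        public void DoesNotRetrieveWhenInactive()
        {
            SetUpForInActive();

            ConditionalRetriever.OutputIfActiveWithDataInterface(logicService, retrievalService, outputService);
        }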

Summary
We’ve found a problem and decided that we want to create a unit-testable solution to it. The unit testing needs dynamic mocks which we’re creating with RhinoMocks.

The issue with RhinoMocks is that the exact combination of ordering, set-up, playback and verification is a bit of hocus pocus. If you get any of these wrong, the tests may pass no matter what the implementation does. There was a definite case here for red-green: write the test first, watch it fail, then make it pass. In addition, I ended up making my tests fail again by deliberately changing the expectations to the wrong order, just to be sure they were passing for the right reason.

Now that we can test it, we can look at the options for passing in the “do it later” request, and see what we might recommend for an ILog wrapper that would avoid that boilerplate code for Log4Net when we want to do debug logging.
