Notes on creating a DSL

The first rule of creating a DSL is DON’T invent your own language. As a wise colleague of mine said: “We create DSLs intending to make people happy and end up making them very unhappy”.

However, the second rule of creating a DSL is to treat ALL development as the creation of a DSL! Libraries like AssertJ or RestAssured in the test space show how amazingly useful and readable DSL-like forms can be in code, forcing the user to think in problem space rather than implementation space. In general, always make your libraries work like a mini DSL.
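
For a taste of that fluent, problem-space style, here's a typical AssertJ-ish assertion (a sketch only, with a hypothetical Order/Item model):

import static org.assertj.core.api.Assertions.assertThat;

// reads as a statement about the domain, not about loops and indexes
assertThat(order.getItems())
    .hasSize(3)
    .extracting(Item::getName)
    .containsExactly("tea", "milk", "sugar");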

In this discussion, however, I’m referring to the worst thing ever to do: create a new Domain Specific Language where users give it text and something turns that text into runtime behaviour.

Making a new language is hard!

Who wants to spend their time defining language grammars, or writing a compiler/parser/interpreter? Sure, ANTLR can help you with this, but that's only half of it. Once you've created your language, you have to teach people how to use it and deal with all the complex edge cases around its syntax and what that may mean to its consumers.

You only need to look at Perl to find out what happens when a well-meaning language creator unleashes options on the general public.

Using existing tools can be long-winded and distracting

That said, it’s much easier to express things in problem space than in a poorly fitting implementation space. SQL queries are a very good fit for pulling data from tables and would be more long-winded and harder to understand if they were coded in, say, Java. Every problem has its commonalities, assumptions and shortcuts, and if they can be baked into something which gives an easy framework for expressing intent, rather than uninteresting detail, then that’s a very powerful tool.
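
To make the contrast concrete, compare a query with the equivalent hand-rolled Java (a sketch, assuming a hypothetical Person type):

// SQL expresses the intent directly:
//     SELECT name FROM people WHERE age > 40 ORDER BY name

// The same thing in plain Java is mostly mechanism:
List<String> names = new ArrayList<>();
for (Person person : people) {
    if (person.getAge() > 40) {
        names.add(person.getName());
    }
}
Collections.sort(names);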

People can Google existing stuff

If you invent something, you're going to have to teach it to people. Teaching syntax and grammar is the easy part. What about all the different pitfalls? The tips for doing it more cleanly? Recipes for doing common tasks? The fact that folks can easily go onto StackOverflow for Python or TypeScript issues means that those languages keep growing and the user base stays engaged.

If you create something you’re going to have to go very public with it and provide a lot of initial training, or you’re going to find that it gets limited by people’s own understanding of what you’ve made.

Documentation is an admission of failure

Producing a load of documentation seems like a good idea. Producing none is definitely a bad idea. However, watch out for sentences that begin "Note, when you're doing this, watch out for…". It's quite likely there'll be some hidden surprises in the thing you've created: some unexpected rules you need to follow, or some things never to do.

A lot of documentation is a warning to stop people falling off the cliff you’ve accidentally left in the implementation. While you may reasonably keep the scope of your DSL lightweight and not promise the world, your users may not be so easy to convince.

Where possible, spend more time rounding off what the language will do for the user than explaining what it can't do. The more predictable and unsurprising the system is, the more the users will do with it.

Starting with the familiar

There is a strong argument for not creating something new so much as extending something commonplace. For example, in one project I worked on, we had a function evaluation syntax that was very similar to JavaScript in its form. In the end, I migrated it to use a JavaScript parser, making it agree (as far as it was implemented) with the rules of JavaScript, because it guaranteed that you could take someone with programming knowledge and get them using it with minimal explanation.

The challenge of starting with the familiar is that people get foxed when they can't do ALL of the familiar. This is where it's wise to consider extending the scope to let them have as much as they'd reasonably expect to have – in the above case, I found myself adding things like + and - to the expressions simply because it was harder to explain why they wouldn't work when most of the machinery for them was already there.

The advantage of using something familiar is that you get a lot of tooling for free. Just pasting your DSL script into an editor like Atom or Notepad++ might give you syntax highlighting, and you can easily bring things like Ace editor into an HTML front end to give your users syntax highlighting without much set up. You can do this for a completely novel DSL too, but the closer you are to something existing, the less to do.

Know why

Parsers and editors and their ilk are hard to get right. Training users in something new can be slow and being able to express yourself clearly in a new language is hard even for that language’s progenitors. But, if there is a definite way of making something easier by adopting a DSL, then focus on that and build the language around that central vision.

In summary

Modeling the problem space and cutting out distraction is a nice thing to do for the consumers of your system/framework/library. If you can make it require a minimum of learning, be as predictable as possible and have as much tooling support as possible, then this can make them happy. Always consider other options before committing.


Stream of Null

I quite like refactoring code from Java 7 (and lower) syntax into Java 8 streams. I find the streams to be more expressive of the intent of the code, and the boilerplate of the long-winded version seems to evaporate, leaving something neater in its place. For example:

for (MyPojo pojo : listOfMyPojos) {
   // find and return the first one with a long title
   if (pojo.getTitle().length() > 12) {
       return pojo.getBody();
   }
}

// not found
return null;

The above code is not too bad, but is essentially finding the first thing in a list, and would be nicer if we wrote it as a stream:

return listOfMyPojos.stream()
   .filter(pojo -> pojo.getTitle().length() > 12)
   .map(MyPojo::getBody)
   .findFirst()
   .orElse(null);

Cool. Less code. What could possibly go wrong?

Uh Oh!

Here’s what:

// try the above code with the following in the list
MyPojo naughtyPojo = new MyPojo(
    "This is a very long title but the object has no body",
    null);

The map function within the stream will, if that object is the first match, map the MyPojo to null. So what? It makes findFirst explode!

In other words, there's a hidden use-case in the original implementation: it's possible to return null both when nothing is found and when the thing found is null.
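
If that seems surprising, the failure is easy to reproduce in isolation (a minimal sketch reusing naughtyPojo; findFirst throws because an Optional cannot hold null):

// throws NullPointerException when the terminal operation runs
Stream.of(naughtyPojo)
   .filter(pojo -> pojo.getTitle().length() > 12)
   .map(MyPojo::getBody)
   .findFirst();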

The Rash Fix

With streams, findFirst expects to find a non-null value. The following code might make your NullPointerExceptions go away in the streaming implementation:

return listOfMyPojos.stream()
   .filter(pojo -> pojo.getTitle().length() > 12)
   .map(MyPojo::getBody)
   .filter(Objects::nonNull)
   .findFirst()
   .orElse(null);

But the above has different behaviour in that it no longer finds the body of the first object with a long title – it finds the first object with both a long title and a non-null body… which is not what the original code was intending (it might be better… but these sorts of edge cases, unless heavily tested, are where bugs lie).

What To Do?

Luckily, Java 8 comes to the rescue: Optional's map function is the answer. You can perform map on the Optional once you've found what's provably the first item you want, converting it into the form you want to return. So the original method above is correctly refactored thus:

return listOfMyPojos.stream()
   .filter(pojo -> pojo.getTitle().length() > 12)
   .findFirst()
   .map(MyPojo::getBody)   // this is Optional.map, not Stream.map
   .orElse(null);


Anyone fancy a curry?

For no reason other than it’s interesting, here’s a take on Currying in Java 8. The idea of currying is to convert a function that takes n parameters into one which can receive them one by one, or indeed all at once.

By using partial application to provide some of the inputs to a function, we produce a pre-loaded function that just needs the input the recipient most cares about in order to do its job. For example, when streaming and mapping, you really only care about the next object in the stream in order to transform it, not the other parameters, which will be the same on each call to the transformation function.

Here lies my attempt at a currying library:

package uk.org.webcompere;

import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.function.Supplier;

/**
 * Java implementation of currying
 */
public interface Curry {
   /**
    * Tagging interface for the {@link #curry(Curried)} function
    */
   interface Curried {
   }


   /**
    * A nullary function - takes no parameters, returns a value. Also a supplier
    * with a default for {@link Supplier#get} but requires the apply function
    * to make it consistent with partial application.
    * @param <R> type of return
    */
   @FunctionalInterface
   interface NullaryFunction<R> extends Curried, Supplier<R> {
      R apply();

      default R get() {
         return apply();
      }
   }


   /**
    * Unary function - takes one parameter
    * @param <R> return type
    * @param <T> parameter type
    */
   @FunctionalInterface
   interface UnaryFunction<R, T> extends Curried, Function<T, R> {
      /**
       * Can be converted to nullary function by full application of a parameter.
       * @param t input
       * @return a function that evaluates using the given parameter against the UnaryFunction
       */
      default NullaryFunction<R> asNullary(T t) {
         return () -> apply(t);
      }
   }


   /**
    * Binary function - takes two parameters.
    * @param <R> return type
    * @param <U> first parameter type
    * @param <T> second parameter type
    */
   @FunctionalInterface
   interface BinaryFunction<R, U, T> extends Curried, BiFunction<U, T, R> {
      /**
       * Partial application of binary function to yield unary function where the
       * first parameter of the binary function has been supplied already.
       * @param u first input parameter for partial application
       * @return unary function which takes next input parameter for full application
       */
      default UnaryFunction<R, T> apply(U u) {
         return t -> apply(u, t);
      }

      /**
       * Supply all values to return a supplier/nullary function
       * @param u first parameter
       * @param t second parameter
       * @return a nullary function that returns the equivalent of calling the binary function
       * with all its inputs
       */
      default NullaryFunction<R> asNullary(U u, T t) {
         return () -> apply(u, t);
      }
   }


   /**
    * A ternary function, which takes three inputs and returns a value.
    * @param <R> return type
    * @param <V> first input parameter type
    * @param <U> second input parameter type
    * @param <T> third input parameter type
    */
   @FunctionalInterface
   interface TernaryFunction<R, V, U, T> extends Curried {
      /**
       * The function that's being wrapped for partial application
       * @param v input 1
       * @param u input 2
       * @param t input 3
       * @return the result of applying the function
       */
      R apply(V v, U u, T t);

      /**
       * Partially apply the first two parameters to get a Unary function for the third
       * @param v first parameter
       * @param u second parameter
       * @return a function that can be called with one parameter
       */
      default UnaryFunction<R, T> apply(V v, U u) {
         return t -> apply(v, u, t);
      }

      /**
       * Partially apply the first parameter to get a Binary function for the third
       * @param v first parameter
       * @return a function that can be called with the remaining parameters
       */
      default BinaryFunction<R, U, T> apply(V v) {
         return (u, t) -> apply(v, u, t);
      }

      /**
       * Supply all values to return a supplier/nullary function
       * @param v first parameter
       * @param u second parameter
       * @param t third parameter
       * @return a nullary function that returns the equivalent of calling the function
       * with all its inputs
       */
      default NullaryFunction<R> asNullary(V v, U u, T t) {
         return () -> apply(v, u, t);
      }
   }

   /**
    * A bit of syntactic sugar to convert a plain old function into one of the above types
    * @param t a functional interface implementation - probably a method reference
    * @param <T> type of function we're going to return
    * @return a function cast as one of the partially applicable types
    */
   static <T extends Curried> T curry(T t) {
      return t;
   }
}

And here are some unit tests that give you a taste of using it.

package uk.org.webcompere;

import static uk.org.webcompere.Curry.curry;
import static java.util.stream.Collectors.toList;
import static org.assertj.core.api.Assertions.assertThat;

import java.util.stream.Stream;

import org.junit.Test;

public class CurryTest {
	public static String identity(String a) {
		return a;
	}

	public static String concat(String a, String b) {
		return a + b;
	}

	public static String concat(String a, String b, String c) {
		return a + b + c;
	}

	@Test
	public void curryTernary() {
		Curry.TernaryFunction<String, String, String, String> function = curry(CurryTest::concat);

		assertThat(function.apply("a").apply("b").apply("c")).isEqualTo("abc");

		assertThat(function.apply("a", "b").apply("c")).isEqualTo("abc");

		assertThat(function.apply("a", "b", "c")).isEqualTo("abc");

		assertThat(function.asNullary("a","b","c").apply()).isEqualTo("abc");
	}

	@Test
	public void rightHon() {
		Curry.TernaryFunction<String, String, String, String> function = curry(CurryTest::concat);

		Stream<String> names = Stream.of("Bill", "Bob", "Ben");

		assertThat(names.map(function.apply("Right", " hon "))
			.collect(toList())).containsExactly("Right hon Bill", "Right hon Bob", "Right hon Ben");

	}
}


Stop fixing it already…

Over in Fix it twice? I discussed why a second attempt at fixing an issue, using the hindsight of an actual fix as a way to improve the software, was a great idea. In this piece, I'd like to discuss the aim of bug fixing.

Why do we fix bugs?

Is it to make the error go away?

Well, kind of, but that’s only part of the story. The aim of fixing a bug is:

  • Make the software work
  • Fix the thing which caused the bug (which can be software, process, communication, etc.)
  • Make it harder to repeat the mistake

I don't want software with fixes in; I want software without issues. I don't want issue resolution; I want it to be hard to make that issue occur again.

Removing the source of a mistake and being mistake-proof are two sides of the same coin. However, it seems to be against human behaviour to fix the cause, rather than the side-effect of a bug, and it seems to be hard, sometimes, to identify the cause as something that can be mistake-proofed in future.

Here’s an example:

public class RouteSoFar {
    private final List<String> streetsVisited = new ArrayList<>();

    // adds the street to the route
    public void visitStreet(String street) {
        streetsVisited.add(street);
    }

    // get a fresh copy
    public RouteSoFar makeCopy() {
        RouteSoFar copy = new RouteSoFar();
        copy.streetsVisited.addAll(this.streetsVisited);
        return copy;
    }

    // show route
    public String getRoute() { ... }
}

In the above class we have an object that’s tracking a route by accumulating streets. Maybe the idea is to print out some routes to interesting stores radiating out from an initial street. In fact, that’s a nice algorithm to write.


    StreetMap streetMap = new StreetMap("Bordeaux");
    Street firstStreet = streetMap.get("Rue De Winston Churchill");
    RouteSoFar routeSoFar = new RouteSoFar();
    printRoutesToStores(firstStreet, routeSoFar);

    ...

void printRoutesToStores(Street currentStreet, RouteSoFar routeSoFar) {
    // we're now on this street
    routeSoFar.visitStreet(currentStreet.getName());

    // print interesting stores on this street
    currentStreet.getStores().stream()
        .filter(Store::isInteresting)
        .map(store -> store.getName() + " is at " + routeSoFar.getRoute())
        .forEach(System.out::println);

    // and recurse to all neighbouring streets
    currentStreet.getNeighbours().stream()
        .forEach(neighbour -> printRoutesToStores(neighbour, routeSoFar));
}

Ignoring for a moment the fact that the neighbouring streets could easily give us one of the streets we came from (let's pretend it's rigged not to), the above code looks like it will work, but won't. You'd find out if you ran it that the routeSoFar doesn't just contain the route to the store; it's polluted with every street the program has visited. This is because it's a mutable object shared between layers in the hierarchy of the program.

Ah. So that’s why there’s that makeCopy function, a kind of clone method, then? Do we just change one line?

    // replace this
    currentStreet.getNeighbours().stream()
        .forEach(neighbour -> printRoutesToStores(neighbour, routeSoFar));

    // with this
    currentStreet.getNeighbours().stream()
        .forEach(neighbour -> printRoutesToStores(neighbour, routeSoFar.makeCopy()));


It's probably clear from context that the answer I'm aiming for is no. If you're pedantic enough to debate that it's yes, then I applaud you. In this instance, as written, yes would be fine… but then what?

On a real-life project, I got into the habit of fixing bugs like the one above, in a class with a similar purpose, by remembering to do that thing… Yes, if you've got a mutable object and it might travel beyond the method that owns it, then it might be better to send a copy, not the real thing.

How many times do you need to apply that fix to a piece of code before you realise you need to make it mistake-proof?

The problem the above has is that it's a mutable object being used naively by multiple parts of the program. If it were immutable, then you would have to knowingly change it, and you couldn't accidentally have your copy changed. It's a reversal of the paradigm: the change you do want becomes a slightly more conscious thing to do, and the thing you don't want to happen becomes impossible to do accidentally.

Here’s the same Route class made as an immutable object. I think you can see how it might be used.

// immutable pojo - no setters, but can be transformed into a new immutable pojo
public class RouteSoFar {
    private final List<String> streetsVisited = new ArrayList<>();

    // adds the street to the route and returns a new object
    // so you store the reference to that if you want to keep it
    public RouteSoFar havingVisitedStreet(String street) {
        // rather than have a clone method, we innately create a copy 
        // of this, modify that and return IT
        RouteSoFar newRoute = new RouteSoFar();
        newRoute.streetsVisited.addAll(this.streetsVisited);
        newRoute.streetsVisited.add(street);
        return newRoute;
    }

    // no need for a copy/clone method

    // show route
    public String getRoute() { ... }
}
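
For illustration, the recursive method from earlier would then look something like this; the copy now happens inside havingVisitedStreet, so there's nothing to remember at the call site:

void printRoutesToStores(Street currentStreet, RouteSoFar routeSoFar) {
    // the transformation is explicit - we keep the reference to the new route
    RouteSoFar routeHere = routeSoFar.havingVisitedStreet(currentStreet.getName());

    // print interesting stores on this street
    currentStreet.getStores().stream()
        .filter(Store::isInteresting)
        .map(store -> store.getName() + " is at " + routeHere.getRoute())
        .forEach(System.out::println);

    // recursing can no longer pollute our route - routeHere is immutable
    currentStreet.getNeighbours().stream()
        .forEach(neighbour -> printRoutesToStores(neighbour, routeHere));
}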

Philosophically, this is what bug fixing is – finding a mistake and eradicating it. However, spotting the chance to remove the opportunity for future errors of the same sort is a ninja trick. Use it!


Fix it twice?

One of the many catchphrases I use with my team is Fix it twice. It's a close relative of the Red – Green – Refactor cycle. For clarity, that's usually used during TDD, where you go Red – a failing test. Then you do whatever it takes to make the test pass – Green. Finally, you review the code you're left with and, with the security of the unit test around you, refactor it to the simplest possible design/structure for its current feature set.

Where I say Fix it twice, I’m usually referring to the rare case of a bug-fix or similar. With TDD you don’t get bugs so much, because your tests kind of prevent them. You do, however, get some surprises at later points. Tests are never perfect, and some issues only rear their heads when you add more features. So at some point, you need to apply a fix to existing code to make it do what it ought to have done all along.

We still use TDD for bug fixes: you can't be sure you've fixed it until you've found a test that needs it to be fixed in order to pass. The problem can sometimes be that you don't know the scope or severity of the fix until you've had a crack at solving the problem, after which you can be left with code that's not at your usual standard.

Refactoring alone may not make the root cause of your problem better. But once you've fixed the code once, to get something that works again, you're usually in a better position to judge what you should have done all along. This is the Fix it twice of which the team and I speak.

Maybe you revisit a root cause of the bug. Maybe you generalise the code to avoid edge cases that need resolving individually. Maybe you add more tests to prevent a naive change causing a surprising regression. The philosophy of Fix it twice is the solution to the classic case of l'esprit de l'escalier that we encounter in life sometimes, where you wish you'd said something different in a conversation that's now over. In software, you've always got the chance for a better do-over.


So Let’s Test The POJOs

So the code coverage is mediocre and there are loads of getters and setters and equals methods out there sitting untested. Let’s impress our lords and masters by running some tests against them. Assert that when you set the value, the getter gets the new value and so on.

Shall we?

Is POJO coverage important?

Here are some thoughts.

Code Coverage is less important than Code-that’s-used

Writing placebo tests is just busy work. Asking why the methods aren’t already used by tests that matter for your business scenarios will reveal that either:

  • Not all scenarios are covered – so do that
  • Well, this method isn’t really needed – so delete it

And in a few cases, that’ll still leave some stuff behind that’s kind of needed structurally, but isn’t that important right now.

Absence of code coverage creates a fog

While working on Spectrum, we maintained a rule of 100% code coverage (until some new coverage tool came along and lost us 0.01% for some reason). The idea was that you get a tight feedback loop if your change does anything to a 100% coverage statistic. We reckoned you could test every scenario, so there shouldn't be a line of code out of place.

When you have lower code coverage, it’s hard to tell from the statistics whether this means that important or unimportant stuff is not getting tested. When it’s higher – 95%+ – you get the impression that every uncovered line represents a testing misfire and a potential bug.

So find an easy way to cover your minor lines. Use an equals/hashcode test helper to wash away doubts about those methods. EqualsVerifier is one such option.
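
For example, the whole equals/hashCode contract for a class can be covered by a one-line test (a minimal sketch for a hypothetical MyPojo):

import nl.jqno.equalsverifier.EqualsVerifier;
import org.junit.Test;

public class MyPojoEqualsTest {
    @Test
    public void equalsContract() {
        // exercises equals and hashCode against generated field combinations
        EqualsVerifier.forClass(MyPojo.class).verify();
    }
}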

Testing things in context leads to high coverage

We're not really here to make code coverage. We're here to make code that's guaranteed to do what the business process is expected to do. This means you need high scenario coverage. So maybe your getter/setter combination is only used when you serialize the object; in that case, serialize some objects in tests, perhaps in the context of the flows which need serialization to work.
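
A sketch of what that might look like with Jackson, assuming a hypothetical MyPojo with the no-args constructor and setters Jackson needs, plus a sensible equals:

@Test
public void roundTripsThroughJson() throws Exception {
    ObjectMapper mapper = new ObjectMapper();

    // exercises the getters on the way out and the setters on the way
    // back in, just as the real serialization flow would
    MyPojo original = new MyPojo("A title longer than twelve", "some body");
    String json = mapper.writeValueAsString(original);
    MyPojo restored = mapper.readValue(json, MyPojo.class);

    assertThat(restored).isEqualTo(original);
}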

Once you focus on the usage in real life of your classes, you can use code coverage to feedback that the intended code paths are tested and that all your code is actually needed.

Caveat Tester!

Despite all of this, code coverage is only a surrogate statistic. Alone it doesn’t prove anything. If it’s low it proves SOMETHING, but if it’s high, it only provides a number in response to what should be a genuine heartfelt attempt to try out the behaviour of your software and show it working as intended.


How Mocks Ruin Everything

Despite being an ardent advocate of TDD, mocking and Mockito (which is totally awesome), I’m going to argue against mocks.

It had to happen. I’ve found really clear situations where mocks have made my work harder, and it’s got to stop!

Why?

When used incorrectly, mocks do not test your code

Yep. Mocks can be the fool’s gold of TDD.

What do good tests achieve?

Back to basics then. What’s it all about?

  • Really exercise the components under test
  • Specify what the component is expected to do
  • Speed up development and fault resolution
  • Prevent waste – because developing without the right tests is probably slower and less efficient overall

Why do we mock?

  • Create boundaries in tests
  • Speed of execution
  • Simulate the hard-to-simulate, especially those pesky boundary cases and errors

What is a mock?

For the purposes of this discussion, let’s bundle in all non-real test objects:

  • Pretend test data
  • Stubs – which receive calls and do nothing much
  • Mocks – which can employ pre-programmed behaviours and responses to inputs, despite being just test objects
  • Fakes – implementations that simulate the real thing, but are made for testing – e.g. an in-memory repository (see the sketch below)
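
As an illustration of that last one, a fake repository can be nothing more than a map behind the production interface (a sketch; UserRepository is hypothetical):

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// the interface the code under test depends on
interface UserRepository {
    void save(String id, String name);
    Optional<String> findName(String id);
}

// the fake: a genuinely working implementation, backed by a map
class InMemoryUserRepository implements UserRepository {
    private final Map<String, String> store = new HashMap<>();

    @Override
    public void save(String id, String name) {
        store.put(id, name);
    }

    @Override
    public Optional<String> findName(String id) {
        return Optional.ofNullable(store.get(id));
    }
}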

How can mocks go wrong?

Your mock starts to work against you if:

  • It makes your tests implementation-focused, rather than behaviour-focused
  • It requires some obscure practices
  • It becomes a boilerplate ritual performed before testing

Implementation focused?

You’ll know if you’re doing this. If you do it before writing the code, it feels like trying to write the code backwards, via tests that predict the lines of code you need. If you do it after the fact, you’ll find yourself trying to contrive situations to exercise individual lines or statements, or you’ll find yourself pasting production code into your tests to make a mock behave the way you want.

Obscure

Mocks have the power to bypass the real code, so we may find ourselves using the mocks to generate an alternative reality where things kind of work because the mocks happen to behave in a way which gives a sort of an answer. This seems to happen when the thing you’re mocking is quite complex.

Ritual

If all tests begin with the same pasted code, then there’s something odd about your test refactoring and mocking.

So What’s The Solution?

  • You ARE allowed to use real objects in tests
    • Mock at heavy interface boundaries
    • Refactor your code so more of your real algorithms and services can be used at test time
  • You SHOULD test with real-life data
    • Your fancy date algorithm may work fine with 4th July 2006, but if that's not the sort of date your users will use, come up with more real-life ones
    • Make the test data describe real-world use cases so people can better understand the intent of the module
  • Add mocks when you must
    • Add spies to real objects to simulate hard-to-reach places, in preference to total mocking (see the sketch after this list)
  • Consider using fakes
    • Complex behaviour of simple things tests best if you can produce a fake implementation – this might even allow for changes in how the code under test uses the services it depends on
  • Test-first means that you delay thinking about the implementation
  • Test behaviour, not lines of code
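
On the spies point, here's a minimal Mockito sketch (the list and the failure are contrived for illustration):

import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.spy;

import java.util.ArrayList;
import java.util.List;

// a real ArrayList doing real work, with one hard-to-reach failure grafted on
List<String> names = spy(new ArrayList<>());
doThrow(new RuntimeException("disk full")).when(names).add("unwritable");

names.add("fine");        // real behaviour - the element really is stored
names.add("unwritable");  // simulated failure, without faking the whole list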

In Conclusion

Test what you mean to test. Write tests first about important behaviour. Try to forget any knowledge about the implementation in the tests. Within reason, be proud to weave together a handful of collaborating objects to spin up a unit test.
