The Tablecloth Trick: The Memoirs

[Animated gif: Mat Ricardo performing the tablecloth trick]

Above is the fabulous Mat Ricardo. Go and see him perform. He is an excellent, warm, joyous man, dedicated to his art.

Mat inhabits the comedy and cabaret world I also dabble in, and has entertained me with both his shows and his writings on the subject. He’s also given me a decent metaphor for something that comes up a lot in software development.

In this article, I want to share three stories, which may feel familiar. In a follow-up, I’ll describe how I think you pull this process off effectively. One of the challenges of the Tablecloth Trick is explaining its value to your stakeholders; hopefully the animated gif above is the perfect metaphor to show them.

What is the trick!?

In software development you sometimes come to a point where the way the code was written is not the way you wish to write future code. You need to do a technology/technique change underneath the existing system while preserving its functionality.

The aim of this should always be some combination of:

  • Reduce maintenance costs
  • Increase speed of delivery
  • Decrease complexity
  • Improve performance
  • Reduce technological risk
  • Increase security, or ability to certify as secure

What you will ultimately do is remove the old technology and slot the new one in its place. Pull the tablecloth out, and then pull a new one back in.

Phoretix 1D Database

Now branded as TotalLab, and most likely no longer containing any of my code, the Phoretix products are used for image analysis in the Life Science space. When I joined the company they were on their second-generation products, written in C++ rather than C, and they were starting to think about the next-generation rewrite, built around a new application framework.

The objective of the framework was to solve the annoying problems of Windows development at the time: managing colour palettes on shonky 8-bit display drivers, providing standard components for navigating the open data, providing a standard multi-window interface – a bit like an IDE – for looking at different views on the data, and providing a standardised Excel-like table library for showing tabular data.

One of my friends had been building the foundation pieces of this framework using one of the applications as its first implementation. My job was to help with the framework and then bring 1D Database into it.

As a process, we essentially:

  • Engineered the framework to be able to do the features in principle
  • Proved those features in real life on one app
  • Looked at the target app that needed to run on it
  • Engineered the framework more to be able to help the target app
  • Refactored the target app to look more like it was already using the framework – separating the concerns
  • Got to the big heave-ho: a big-bang lift and shift, followed by a lot of pain until it all worked

Elsevier Content Enrichment Framework

At Elsevier we set out to write some automated processes for running documents through text analysis and delivering the results to the systems which serve those documents to customers. We knew we were writing a framework, but we deliberately didn’t over-engineer or prematurely generalise in our first generation. As a result, we made something which had all the right sorts of technologies, but was relatively specialised to the first use case.

When the second use case came along, we refined our method from the first time, found some common components (or discovered that our first guess at common components was about right), and built a more sophisticated solution for the specifics of the second use case. This had more moving parts, and seemed to involve a lot more of the typing we hadn’t enjoyed the first time around.

The issue was that we were orchestrating the movement of documents through services on different asynchronous nodes. The central coordination hub was doing a good job of making that happen, but required quite a lot of code at each step in the process to handle race conditions, repeated messages and the logic of how the document went through the process.

The third and fourth use cases came along at about the same time, and we knew that the fourth was going to get much, much more complex than anything we’d seen before. Yet we also knew the pattern. There was some sort of BPM or high-level script driving the process, and if we could somehow extract that out of boilerplate code into configuration, we could build the orchestration capability into a thing that could be easily reused by each of the processes.
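To make “extract that out of boilerplate code into configuration” a little more concrete, here is a minimal sketch of the idea. The names (Pipeline, PipelineStep, Document) are hypothetical, the choice of Java is an assumption, and the real framework also had to cope with asynchronous hand-offs between servers, retries and persistence, which this deliberately leaves out.

```java
import java.util.List;
import java.util.function.Function;

// A minimal sketch of "orchestration as configuration": each use case declares
// its pipeline as data, and a single shared runner owns the step-to-step
// plumbing. All names here are hypothetical illustrations, not the framework
// described in the article.
public class PipelineSketch {

    // Stand-in for the document being enriched as it moves through the process.
    record Document(String id, String content) {}

    // A named step that transforms a document, e.g. by calling a remote service.
    record PipelineStep(String name, Function<Document, Document> action) {}

    // The per-use-case "configuration": just an ordered list of steps.
    record Pipeline(String useCase, List<PipelineStep> steps) {

        // The shared orchestration logic lives here once, instead of being
        // re-typed as boilerplate in every use case.
        Document run(Document input) {
            Document current = input;
            for (PipelineStep step : steps) {
                System.out.printf("[%s] running step '%s' on %s%n",
                        useCase, step.name(), current.id());
                current = step.action().apply(current);
            }
            return current;
        }
    }

    public static void main(String[] args) {
        // Declaring a new use case becomes mostly configuration, not plumbing.
        Pipeline enrichment = new Pipeline("use-case-3", List.of(
                new PipelineStep("extract-text",
                        doc -> new Document(doc.id(), doc.content().trim())),
                new PipelineStep("annotate",
                        doc -> new Document(doc.id(), doc.content() + " [annotated]")),
                new PipelineStep("deliver",
                        doc -> { System.out.println("delivered " + doc.id()); return doc; })
        ));

        enrichment.run(new Document("doc-1", "  raw content  "));
    }
}
```

The point is that a new use case is declared mostly as data; the step-to-step plumbing is written once in the shared runner rather than re-typed for every pipeline.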

There was a certain amount of problem analysis and design, along with technological experiments to see what would be the closest to what we already had, and what would get us going quickly.

Some teams would have adopted a service bus and BPM solution by this point, and ended up working for the technology rather than solving their application’s needs. Some teams would have integrated with an off-the-shelf integration/orchestration platform from the start. It was our belief that we’d mainly been solving not the integration-platform issues, but the issues around each endpoint, which you always have to get right no matter what you put in the middle.

After the week we spent analysing and theorising, a one-week prototype of an orchestration framework was made, proven out by unit tests and a faked set of external services.

One week later we had a persistence layer and some visualisations of the process as it ran – bearing in mind that the process moved control between several servers to achieve its job, coordinated by a hub. Being able to show the stakeholders what this system was doing was a huge win for us.

We had a working system, and so were able to use it for our two new use cases. The new use cases caused the system to mature and broaden as we developed them. However, we still had two (now-)legacy processes which were built the old way and the older way. Migrating those was optional. But one of our future use cases was to produce management reports on the data that had passed through the system. We wanted to build that once for everywhere, so it was time to do the tablecloth trick on the most important pipeline – use case 2.

The way it played out in the end:

  • Build use case 1
  • Build use case 2 in a more sophisticated manner
  • Start building use cases 3 and 4 the cumbersome way
  • Generalise and produce a go-forward framework
  • Make the services that were common to the old and new framework compatible with both
  • Complete use case 3 on the go-forward, allowing use case 4 to develop itself and the framework as though greenfield
  • Rebuild the pipeline for use case 2 in terms of the new framework
  • Build a migrator to shift 6 months of historic data from the old version to the new, so it looked like it had always been run on the new framework
  • Perform the migration of framework and data in live
  • Build a reporting solution
  • Run a process to pull all data into the new reporting solution
  • Watch the system working solidly
  • Build several more use cases on it
  • Celebrate

Digital Project Data Access Layer

I’m currently working on a government digital project which has undergone a lot of rapid change in its database layer. As a result, a number of the database queries in corners of the system had become a bit unusual. Similarly, there had been some pasting of boilerplate, especially around paginated queries. As a final complexity, some of the data model changes hadn’t been easy to make in the existing code, so there was a hard-to-follow combination of inserting into tables and reading the results back from views.

To simplify and speed up the production of future DAO changes, a framework was introduced and applied to the areas of the code most in need of streamlining. It’s fair to say that this added simplicity only up to a point: it made everything consistent and DRYed out a lot of the implementation, but it was innately a more complex design pattern than the hand-written SQL that had been used before it.
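As an illustration of the kind of DRYing-out involved, here is a minimal sketch of a shared pagination helper of the sort such a framework might provide. The names, the choice of Java, and the use of plain JDBC are assumptions for the example rather than a description of the project’s actual framework.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: the pagination boilerplate previously pasted into each
// DAO lives once here; each DAO supplies only its query and a row mapper.
public final class PaginatedQuery {

    // Like Function<ResultSet, T>, but allowed to throw SQLException.
    @FunctionalInterface
    public interface RowMapper<T> {
        T map(ResultSet resultSet) throws SQLException;
    }

    public record Page<T>(List<T> items, int offset, int limit) {}

    private PaginatedQuery() {}

    public static <T> Page<T> run(Connection connection,
                                  String baseSql,
                                  List<Object> parameters,
                                  int offset,
                                  int limit,
                                  RowMapper<T> rowMapper) throws SQLException {
        // The LIMIT/OFFSET handling is written once, not in every DAO.
        String sql = baseSql + " LIMIT ? OFFSET ?";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            int index = 1;
            for (Object parameter : parameters) {
                statement.setObject(index++, parameter);
            }
            statement.setInt(index++, limit);
            statement.setInt(index, offset);

            List<T> items = new ArrayList<>();
            try (ResultSet resultSet = statement.executeQuery()) {
                while (resultSet.next()) {
                    items.add(rowMapper.map(resultSet));
                }
            }
            return new Page<>(items, offset, limit);
        }
    }

    // A DAO method then shrinks to something like:
    //   Page<String> names = PaginatedQuery.run(connection,
    //           "SELECT name FROM applicant WHERE status = ?",
    //           List.of("SUBMITTED"), 0, 25, rs -> rs.getString("name"));
}
```

Each DAO then supplies only its base query and a row mapper, rather than repeating the statement, parameter and result-set handling every time.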

The pattern of adoption:

  • Build and unit test a new library covering the obvious features used in the original code
    • Note: it was chosen to build the new library in such a way as it could co-exist with the old code
    • Another option was to build the new library on incompatible dependencies, requiring the whole application to change across to it in one go
    • After a few months, there are still remnants of unmigrated code, so the decision to keep things backwards compatible was enormously wise
  • Attempt to apply that to one of the simpler existing DAOs
  • Extend the library to solve problems undiscovered first time round
  • Rinse and repeat with other DAO objects
  • Hit the much more complex DAO objects, discovering the requirements to solve complex joins, wildcard searches, and elements of the original implementation that were not self-evident by just looking at the code

The result of this migration is that there’s a consistent, powerful pattern in place across most of the codebase. It does take a fair bit more practice to master this framework than what was there before.

How does he do it?

If you asked Mat Ricardo how he performs the tablecloth trick, he would tell you one thing: practice. In truth, until you’ve done a few things like this in your career, you won’t be wise to the traps, and you may discover things getting out of control and then collapsing on top of you. However, if you pressed Mat further on how it’s done, he would probably also say that there’s a fairly straightforward technique to it; you just have to master it.
