Fun With Tuning

Tuning an algorithm using a profiler is not my favourite thing to do. It’s an admission of failure. What it means is that there’s such a fundamental waste of resources, in software so complex, that there’s no obvious single pattern I can apply to get the performance close enough without deep-diving into the internals.

On the whole, that kind of waste comes from a systemic lack of wise implementation choices, and as such it’s a mess that needs clearing up.

We should expect to use the most efficient tools for each job we do, and get approximately the right performance as a side effect of runtime optimisations. We then throw cheap hardware at it, scaling horizontally, and things go fast more cheaply than they would by agonising over low-level tweaks.

That said, there are times when some low-level, invisible thing makes you slap your head with the frustration of it all.

So, Profiling, Then

Here’s a recipe for improving software through profiling.

  • Establish a real-world performance measurement over a significant data set, so we can easily see the speed limit the software has
  • Write a unit test which constructs the problem and runs a loop over a significant data set, so we can see how slow it is over a long operation, and also have a single quick-to-start operation to profile (see the sketch after this list)
  • Look for something within the profile output that takes a singularly large percentage of the time and can be optimised
  • Optimise it – a better algorithm, removal of unnecessary work… for example, removing Exceptions used as flow of control!
  • Try it again through the unit test to see if the aggregate performance is obviously better
  • Commit the change and try the end-to-end test
  • If it doesn’t make an obvious difference, revert it; otherwise rinse and repeat
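
For the unit-test step, the shape is roughly the following. This is a minimal sketch assuming JUnit 5, with SlowService, Item and TestData as hypothetical stand-ins for the real code and data set; for serious measurement you might reach for a harness like JMH, but a plain loop is enough to give the profiler something to chew on.

```java
import org.junit.jupiter.api.Test;

import java.util.List;

// Hypothetical benchmark-style unit test: SlowService, Item and TestData are
// stand-ins for whatever code and data set you are actually tuning.
class SlowServiceProfilingTest {

    @Test
    void processSignificantDataSet() {
        SlowService service = new SlowService();
        List<Item> items = TestData.loadSignificantDataSet();

        long start = System.nanoTime();
        for (Item item : items) {
            service.process(item); // the hot path we want the profiler to light up
        }
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        // Not an assertion-driven test: it exists to give the profiler a single,
        // quick-to-start, repeatable operation, and to print an aggregate number
        // we can compare before and after each optimisation.
        System.out.println("Processed " + items.size() + " items in " + elapsedMillis + " ms");
    }
}
```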

One easy mistake to make is to swap out a poorly tested slow algorithm for a potentially faster one. Do not be tempted to do this without first retro-fitting the original algorithm with more test cases, exploring all its paths, so you can be sure the new one works. Note: if you went too far ahead before noticing this risk, you can use the unit test time machine to fix that.
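
One way to make that retro-fitting concrete is an equivalence (characterisation) test: pin down what the old, trusted algorithm does, and demand the candidate replacement agrees on the same inputs. A minimal sketch, again assuming JUnit 5 and hypothetical LegacyMatcher and FastMatcher classes:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical equivalence test: LegacyMatcher is the slow but trusted implementation,
// FastMatcher is the new one that must agree with it before it is allowed to replace it.
class MatcherEquivalenceTest {

    private final LegacyMatcher slow = new LegacyMatcher();
    private final FastMatcher fast = new FastMatcher();

    @Test
    void agreesWithLegacyImplementation() {
        // Add inputs until every path through the original algorithm is exercised:
        // empty input, single elements, duplicates, boundary sizes, nasty production cases…
        String[] inputs = { "", "a", "aa", "abcabc", "case-seen-in-production" };

        for (String input : inputs) {
            assertEquals(slow.match(input), fast.match(input),
                    "Implementations disagree for input: " + input);
        }
    }
}
```

Once the new implementation passes, the same test doubles as a regression net for any further tuning of it.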

The Most Important Thing

Stop when the chances are that further improvement won’t make a difference to the outcome, or when you’re getting single-figure percentage improvements or less… often these improvements won’t be visible in the real-world aggregate… or may be micro-de-optimisations that you don’t realise until later.
