I’ve been pondering magic numbers.
We all know that code such as this:
for(int i=2; i<8; i++)
is to be discouraged. What's 2 in this case? What's 8? What happens if they change?
So you’re supposed to take these numbers out of the code and turn them into constants with a meaningful name. You do it for a few reasons:
- Clarity of meaning of the code
- Once-only declaration of a hard-coded value
- Because the code-police tell you to
- To ensure you’re expressing intent rather than implementation.
So the above case could become
for(int i=FEBRUARY; i<AUGUST; i++)
Which is fine if the algorithm is all about February to August. Though perhaps even this misses the point. Why February? Why August? In my ideal world, I’d expect the code to read like this
int FEBRUARY = 2;
int AUGUST = 8;
int START_OF_PEAK_SEASON = FEBRUARY;
int END_OF_PEAK_SEASON = AUGUST;
...
for(int i=START_OF_PEAK_SEASON; i<END_OF_PEAK_SEASON; i++)
Now we’re motoring. Now we don’t have hard-coded implementation, and we even have the configurable concept of our PEAK_SEASON, which may well get modified in future versions of the code.
I think the above approach is the classic answer to the magic numbers problem – notwithstanding that these constants perhaps belong in a class, and might reasonably be configured from data rather than fixed at compile time.
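To make that concrete, here's a minimal Java sketch of the constants-in-a-class idea. The class and field names are my own invention, and I've used `java.time.Month` so that even `FEBRUARY = 2` stops being a hard-coded number:

```java
import java.time.Month;

// Hypothetical holder for the season boundaries. In a real system these
// values could be loaded from configuration data at runtime instead of
// being compile-time constants.
public class PeakSeason {
    public static final Month START_OF_PEAK_SEASON = Month.FEBRUARY;
    public static final Month END_OF_PEAK_SEASON = Month.AUGUST;

    public static void main(String[] args) {
        // Month.getValue() returns 1..12, so this matches the original
        // for(int i=2; i<8; i++) loop exactly.
        for (int i = START_OF_PEAK_SEASON.getValue();
             i < END_OF_PEAK_SEASON.getValue(); i++) {
            System.out.println("Peak season month: " + i);
        }
    }
}
```

The loop body is unchanged; only the boundaries now express intent rather than implementation.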
I still think there are some examples which aren’t so easy to get right.
for(int i=ZERO; i<myArray.length; i++)
Something’s wrong there. Why has “0” been turned into a constant? 0 isn’t a magic number – at least it’s not in this case.
What about this?
int lastTwoDigits = number % 100;
Should it be
int lastTwoDigits = number % ONE_HUNDRED;
I don’t think universal constants should ever be treated as magic numbers. If an algorithm is based on starting from 0 or 1, or obviously relates to numbers that are always the same – like my 100 for digit extraction – then there’s no case for turning those into symbols, unless writing the number out would confuse the reader. Example:
int twentyFourBitRepresentation = mySuperInteger % MAX_24_BIT;
is clearly better than
int twentyFourBitRepresentation = mySuperInteger % 16777214;
(Strictly, the modulus that keeps 24 bits is 2^24 = 16777216, and the 24-bit maximum is 16777215, not 16777214 – but you get the idea.)
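A quick Java sketch of the corrected version, with the constant derived from a shift rather than typed out (the variable names are mine):

```java
public class TwentyFourBit {
    // 1 << 24 == 16777216; taking a value mod this keeps its low 24 bits.
    // Deriving the constant from the shift avoids mistyping the literal,
    // which is exactly the trap the hand-typed 16777214 fell into.
    static final int TWENTY_FOUR_BIT_MODULUS = 1 << 24;

    public static void main(String[] args) {
        int mySuperInteger = 0x0AFFFFFF; // has bits set above the 24th
        int twentyFourBitRepresentation =
                mySuperInteger % TWENTY_FOUR_BIT_MODULUS;
        // Low 24 bits of 0x0AFFFFFF are 0xFFFFFF = 16777215.
        System.out.println(twentyFourBitRepresentation);
    }
}
```

For non-negative values, `x % (1 << 24)` and the bit-mask `x & 0xFFFFFF` give the same result; the mask form is the more idiomatic way to say "keep 24 bits".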
Similarly, some universal constants may be subjective in how you express them. PI is the classic example.
double PI = 3.1416;
Maybe we start with that accuracy and refine it later. So we don’t want 3.1416 littered around the code.
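In Java at least, the cleanest fix is the library constant, which sidesteps the accuracy question entirely – `Math.PI` is already the closest `double` to pi:

```java
public class Circumference {
    public static void main(String[] args) {
        double radius = 2.0;
        // Math.PI is part of the standard library; no home-grown
        // constant of debatable precision needed.
        double circumference = 2 * Math.PI * radius;
        System.out.println(circumference);
    }
}
```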
My conclusion is to extract a number only where writing it inline means something different from using a constant. So 0, 1, 100, 1000 – and even things like the maximum number of months you can have – may be clearer inline than hidden behind a poorly named symbol. Everything else, where it’s implementation that needs encapsulating, should be hived off into a list of constants.
To summarise, 3 probably is a magic number.