Humanity needs errors. We learn from them. Success cannot exist without failure, in the same way that light cannot exist without dark. A child learns how to ride a bike by falling off and reaching an understanding of why it happened. They can then correct their technique so that they don’t do it again. This is how most human knowledge has been arrived at.
A financial forecast is an estimate of a firm’s projected revenues and expenses, built from its internal and historical data together with external market factors. Businesses’ decision-making processes rely heavily on forecasts, and getting them right is necessary for a company to be successful. They must also be made in sufficient time for management to change their plans if need be.
To improve forecasting, all you have to do is eliminate bias and reduce variation to an acceptable level. Although errors may seem like poison for forecasting, they’re not. Bias is a pattern of error: consistent over- or under-forecasting. Finding and understanding errors, and then accounting for them in forecast models, is therefore central to the process of removing bias.
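The distinction between bias and variation can be made concrete with a minimal sketch. The numbers below are made up for illustration: the mean error measures bias (direction), while the mean absolute error measures the size of the errors regardless of direction.

```python
# Illustrative monthly forecasts vs. actuals (made-up numbers).
forecasts = [100, 105, 110, 108, 112, 115]
actuals = [96, 108, 104, 103, 107, 109]

errors = [f - a for f, a in zip(forecasts, actuals)]

# Mean error (bias): a persistently positive value signals over-forecasting.
bias = sum(errors) / len(errors)

# Mean absolute error (variation): how large the misses are, either way.
mae = sum(abs(e) for e in errors) / len(errors)

print(f"bias: {bias:.2f}")  # positive => tendency to over-forecast
print(f"MAE:  {mae:.2f}")
```

Note that the bias here is smaller than the MAE: one under-forecast partly cancels the over-forecasts in the mean error, but every miss still counts towards the variation.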
Despite this, the majority of businesses still do not measure forecast error at all, meaning that their forecasts end up being no better than guesswork. Eliminating bias does not require big investment, though, just a finely tuned sense of what the errors are, where they come from, and how they can be removed or mitigated.
Key to this is defining what errors are. Just as we need errors to understand success, so too do we need success to define errors. According to IBM, there are two types of error: systematic and unsystematic. Systematic error is a sequence of errors with the same sign (all positive or all negative), a pattern that can shift over time in unpredictable ways. Unsystematic error is a pattern of errors without extended sequences of the same sign; in a given set of circumstances, its level of variation is often predictable.
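A simple way to spot systematic error in practice is to look for long runs of same-signed errors. The sketch below is an illustration, not a formal test; the error series and any run-length threshold you apply are assumptions.

```python
# Sketch: a long run of same-signed errors suggests systematic error (bias);
# what counts as "long" is a judgment call for the forecaster.
def longest_same_sign_run(errors):
    longest = run = 0
    prev_sign = 0
    for e in errors:
        sign = (e > 0) - (e < 0)  # +1, -1, or 0
        run = run + 1 if sign != 0 and sign == prev_sign else (1 if sign else 0)
        prev_sign = sign
        longest = max(longest, run)
    return longest

unsystematic = [2, -3, 1, -2, 3, -1]  # signs alternate: no extended runs
systematic = [4, 5, 3, 6, 4, -1]      # five positive errors in a row

print(longest_same_sign_run(unsystematic))  # 1
print(longest_same_sign_run(systematic))    # 5
```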
Measuring errors is about collecting the data that display these patterns and finding the difference between the forecast and actual outcomes. However, be cautious about comparing an outcome with forecasts made far in advance: you risk measuring the impact of a decision made in response to the forecast rather than the forecast quality itself. Errors should therefore be calculated by comparing actuals to short-run forecasts, made recently enough that no decision taken in response to them has had time to take effect.
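One way to put this into practice is to keep forecasts for each period at several horizons and score only the shortest one. The data layout below is an assumption made for illustration, as are the numbers.

```python
# Sketch: forecasts[period][horizon] = forecast value, where horizon is
# how many months ahead the forecast was made (layout is an assumption).
forecasts = {
    "Apr": {1: 102, 3: 110},
    "May": {1: 98, 3: 108},
    "Jun": {1: 105, 3: 112},
}
actuals = {"Apr": 100, "May": 97, "Jun": 103}

# Long-horizon errors may reflect decisions taken in response to the
# forecast, so judge quality on the one-month-ahead numbers only.
short_run_errors = {
    period: forecasts[period][1] - actuals[period] for period in actuals
}
print(short_run_errors)  # {'Apr': 2, 'May': 1, 'Jun': 2}
```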
Once the errors have been found and measured, it is important to understand why they are occurring, and whether they are the result of human error or a flaw in the forecasting models themselves. The models can then be adjusted to account for the bias. Failing forecast models are easily rectified, or at least dramatically improved, by simple measurement practices.
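The simplest such adjustment is to subtract the measured bias from future forecasts. This sketch assumes the bias estimated from past errors is stable, which is itself an assumption a forecaster would need to check.

```python
# Sketch: correct a new forecast by the average of past errors
# (forecast - actual). Assumes the measured bias persists.
past_errors = [4, 5, 3, 6, 4]  # consistently positive: over-forecasting
bias = sum(past_errors) / len(past_errors)

raw_forecast = 120
adjusted_forecast = raw_forecast - bias
print(adjusted_forecast)
```

This is deliberately crude: if the diagnosis points to human error rather than a model flaw, the fix belongs in the process, not in an arithmetic correction.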
To discover how you can transform your financial practice, join the complimentary New Way to Work Tour, visiting 6 cities in the US and Canada this September 9 & 10. Registration is free, so confirm your place today.