Working In Uncertainty

How the United Kingdom Accreditation Service improved its financial forecasting

by Matthew Leitch, 26 February 2007

Better business forecasts with far less effort are possible for many organizations that currently make business forecasts by adding up the guesstimates of many managers. Furthermore, finding out if your organization could benefit from the simple statistical methods involved is not difficult or expensive.

This case study describes how the United Kingdom Accreditation Service (UKAS) took action to improve its forecasts while cutting the effort involved. By doing so, they have also created the possibility of doing more useful things with their forecasts than before.

UKAS

The United Kingdom Accreditation Service (UKAS) is the sole national accreditation body recognised by the UK government to assess, against internationally agreed standards, organisations that provide certification, testing, inspection, and calibration services. Accreditation by UKAS establishes that the evaluator is impartial, technically competent, and that their performance reaches the required standard.

UKAS is a non-profit-distributing company, limited by guarantee, and operates under a Memorandum of Understanding with the Government through the Secretary of State for Trade and Industry. Since it does not distribute profits it has to tread a narrow path between making too much profit at the expense of its stakeholders and making too little to sustain the necessary investment in its infrastructure.

Forecasting woes

In early 2006, UKAS’s financial results had been improving strongly year on year but its ability to forecast those results in advance was a cause of some frustration for Chief Executive Paul Stennett and his finance team, led by Richard Crookes, Finance Director.

Reforecasts of year end results were made 3 months into the year, 6 months in, and finally with 3 months to go. Like many organizations, UKAS made these forecasts by asking budget holders to provide forecasts to the year end for their areas of responsibility, discussing the forecasts submitted, then adding them up to get the overall forecast.

The forecasts fluctuated too much for comfort, and uncertainty about the results was blocking an investment in IT. One quarter the forecast would be pessimistic, suggesting that belts needed to be tightened painfully; the next quarter it would be over-optimistic.

With 70% repeat business, client work planned four years at a time, and usually the same assessment work to do each year for each client, UKAS seemed like a business that should be predictable, and yet it seemed not to be as predictable as everyone wanted.

In addition to forecasting to year end, UKAS also made rolling forecasts of the next three months using information in a planning database that detailed every client assignment. In theory this should have been very accurate, but in fact these forecasts too were surprisingly unreliable.

Richard Crookes carefully studied past forecast errors, line by line, to try to establish where the main inaccuracies were coming from and thought about alternative approaches to forecasting.

The team discussed this with a newly appointed non-executive director, Professor Michael Mainelli, who is an expert in (among other things) using machine learning to solve business problems. He suggested that maybe the complicated spreadsheets and detailed consideration by budget-holders were unnecessary, and that a simple statistical approach might work just as well or even better. He recommended Matthew Leitch as someone who might be able to help do this.

The new idea

Richard Crookes and Paul Stennett engaged Matthew and explained their dissatisfactions and requirements.

It emerged that having more reliable forecasts was only one improvement they would value. Cutting the amount of work involved would also be very welcome. In response to past forecasting problems the natural reaction from budget holders had been to go into even more detail than before. Somehow this had to be reversed. Another improvement would be to make it easier to review forecasts and related action plans.

One of the problems seemed to be that forecasting and budgeting were getting mixed up. To begin to counter this, Matthew suggested dividing forward-looking numbers into at least three types:

  • Target Trajectories: These would be numbers representing what would be desirable financial results.

  • Extrapolations: These would be numbers based on looking at past results and extrapolating forward statistically with no knowledge of what plans for the future might be.

  • Plan Projections: These would be numbers forecasting future results that made use of knowledge about future plans.

The budgets would provide Target Trajectories, the statistical work would provide the Extrapolations, and budget holders would provide the Plan Projections.

The value of a reasonable statistical forecast generated by time series analysis would be, at worst, as a benchmark against which to judge forecasts by budget holders. If the results seemed to be heading in one direction but the budget holder forecast differently then that would need more explanation. At best, the statistical extrapolation might prove to be more accurate than budget holders’ forecasts and simply replace them, wholly or partly.

The initial research

But first, it was essential to find out if a reasonable statistical extrapolation rule could be found. Would UKAS be one of those organizations for which this is possible? The odds were good, but nothing was certain.

The planned approach was to study past UKAS results in detail to understand just how predictable they were and then to build a “test-bed” spreadsheet to test alternative statistical forecasting rules against past results, and against past forecasts.

Almost immediately it was noticed that if UKAS had forecast its results to be the same as the previous year’s actuals, month by month and to year end, it would have had more accurate forecasts than the ones it had actually generated.

Not only that, but last year’s results were also a better prediction of the next three months than the forecast derived from the detailed planning database had been, except that a recent change in technique had made the one- and two-month-out forecasts more accurate.
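The comparison behind this observation can be sketched in a few lines. The figures below are purely illustrative, not UKAS data: a naive “same as last year” forecast is scored against a hypothetical, more volatile submitted forecast using mean absolute error.

```python
# Illustrative sketch (not UKAS data): comparing a naive "same month
# last year" forecast against a hypothetical submitted forecast, using
# mean absolute error (MAE) as the accuracy measure.

def mae(forecasts, actuals):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

# Hypothetical monthly revenue figures (thousands of pounds).
last_year_actuals  = [100, 102,  98, 105, 110, 101]
this_year_actuals  = [103, 104, 100, 108, 112, 104]
submitted_forecast = [ 95, 110,  90, 115, 100, 112]  # a volatile manager forecast

naive_forecast = last_year_actuals  # forecast = same month, previous year

print("Naive MAE:    ", mae(naive_forecast, this_year_actuals))      # 2.5
print("Submitted MAE:", mae(submitted_forecast, this_year_actuals))  # 8.5
```

When the business is stable, as UKAS largely was, the naive rule’s errors stay small while a hand-built forecast can swing much further from the eventual actuals.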

Maybe the statistical approach could work. Maybe that detailed estimation was indeed unnecessary!

Experiments with the test-bed showed that, for UKAS, forecasting rules that responded slowly to recent results and apparent trends easily outperformed rules that were less conservative. Conservative rules also produced forecasts that bounced around a lot less from one month to the next.

Using this observation Matthew devised and tested further forecasting rules, eventually presenting evidence that one of the simplest was also the most reliable and accurate tested, and would have done better than the actual forecasts made in both of the previous two years.
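The article does not reveal the rules actually tested, so the sketch below uses simple exponential smoothing purely as a stand-in to show the shape of such a test-bed experiment: a small smoothing constant responds slowly to recent results (conservative), a large one chases them, and on a noisy but stable series the conservative rule wins.

```python
# Sketch of a test-bed experiment; exponential smoothing is an assumed
# stand-in, not UKAS's actual rule. A small alpha responds slowly to
# recent results (conservative); a large alpha chases them.

def smoothed_forecasts(actuals, alpha):
    """One-step-ahead forecasts from simple exponential smoothing."""
    level = actuals[0]
    forecasts = []
    for a in actuals[1:]:
        forecasts.append(level)              # forecast next period = current level
        level = alpha * a + (1 - alpha) * level
    return forecasts

def mae(forecasts, actuals):
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(forecasts)

# Hypothetical noisy-but-stable monthly series.
history = [100, 104, 97, 103, 99, 105, 98, 102, 101, 104, 97, 103]

for alpha in (0.1, 0.5, 0.9):
    f = smoothed_forecasts(history, alpha)
    print(f"alpha={alpha}: MAE={mae(f, history[1:]):.2f}")
```

Running rules like these over several years of past actuals, and comparing their errors with the errors of the forecasts actually made at the time, is quick to do in a spreadsheet and settles the question with evidence rather than opinion.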

The end of time consuming detail

This was exciting progress. For the first time there was clear evidence that detailed forecasting work was not the key to better forecasting for UKAS and that simple extrapolations could be the basis of more accuracy.

Richard Crookes explained that in the past they had carefully reconciled the forecasts submitted by budget holders to make sure they were consistent with each other, particularly for the small amount of “trading” between budget centres. However, the amounts involved were small and it was clear that this effort had never been justified by the accuracy resulting. Now that the forecasts got their credibility from the experiments that had been done it was time to stop worrying about these details.

Another piece of detailed work that turned out to be unnecessary emerged later when it was noticed that some ratios using forecast numbers occasionally looked odd. It was quickly realized that this only happened in certain situations when the numbers involved were small. Furthermore, experiments with the test-bed had already shown that the gain in accuracy from adjusting ratios was almost imperceptible and certainly not worth the complicated mathematics involved.

Prediction intervals

No forecast, however carefully researched, will be totally accurate. To present a best-guess forecast is to imply that there is no uncertainty in the forecast while giving no information about the true level of uncertainty. From very early on UKAS wanted to make forecasts that would show a range rather than just a best guess. The technical term for such a range is a “prediction interval”.

Any sufficiently generous prediction interval is better than none because it removes the illusion of total certainty. However, in an effort to go beyond sheer guesswork, Matthew Leitch went back to the data.

There were two ways to calculate rough prediction intervals. One was to simulate the prediction rule over past actuals for every number to be forecast in future, using error percentiles to estimate a prediction interval for each line item in the Profit and Loss account individually. The other was to find a statistical rule that matched the relationship between error percentiles and some characteristic(s) of past data and that could be applied to any line item.

For reasons specific to UKAS predictions, only the second approach was feasible. Matthew analysed the forecasting errors made by the extrapolation rule when applied to past UKAS actuals, searching for a rule that would allow sensible prediction intervals to be set for every number.

After several failures using single and multiple regressions he found a remarkably strong relationship that involved just one independent variable, and made sense intuitively.
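The variable Matthew found is not disclosed, but the underlying mechanics of turning past errors into an interval can be illustrated. This sketch (with invented error figures) takes the empirical percentiles of past extrapolation errors and uses them to put a rough 80% prediction interval around a new best-guess forecast.

```python
# Illustrative sketch only: past_errors and best_guess are invented, and
# this uses raw empirical percentiles rather than the fitted regression
# relationship the article describes.

def percentile(sorted_vals, p):
    """Percentile by linear interpolation; p is between 0 and 1."""
    idx = p * (len(sorted_vals) - 1)
    lo, hi = int(idx), min(int(idx) + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

# Hypothetical past errors (forecast minus actual) of the extrapolation rule.
past_errors = sorted([-9, -6, -4, -3, -1, 0, 1, 2, 4, 5, 7, 10])

best_guess = 250  # new best-guess forecast for some line item

# error = forecast - actual, so actual = forecast - error:
lower = best_guess - percentile(past_errors, 0.9)  # large +ve error -> actual below forecast
upper = best_guess - percentile(past_errors, 0.1)
print(f"80% prediction interval: [{lower:.1f}, {upper:.1f}]")
```

A regression rule, of the kind the article describes, simply replaces the raw percentiles with a fitted function of some characteristic of the data, so that sensible intervals can be set even for line items with little history of their own.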

Being able to visualize and quantify the reliability of forecasts is a great step forward.

Integrating statistics and plans

However, what was still needed was a way to integrate these statistical discoveries into the reforecasting process, and so use them to help produce Plan Projections.

In principle what was required was a way to compensate for the fact that a statistical extrapolation does not know about unrepresentative things that happened in the past, and does not know when the business intends to start doing things differently in the future. It could also be very useful to see how different action plans might change the forecasts.

Experimenting showed that adjusting for known pay and day rate changes would give slightly better forecasts, and there were other things that people felt would be worth adjusting for.

The practical way of doing this turned out to be a surprisingly simple spreadsheet design. This spreadsheet was set up with summarized past actuals and with the summarized budget for the year. It then invited budget holders to list unrepresentative events that needed to be airbrushed out of history for a good extrapolation, and things they planned to do that would be a break from the past. These were called Variations, and each had to have its financial impact relative to the extrapolation quantified, month by month.

Once this was done a macro triggered by a click on a button processed the Variations and the results were immediately visible on graphs, month by month and cumulatively.

At a glance it was now possible to see, on one page, the budget, the pure extrapolation, and the expected financial impact of doing new things.

Compared to the spreadsheets used in past forecasting rounds this offered two massive advantages to UKAS:

  • Radically fewer numbers because the work was done at a far more summarized level.

  • A list of the specific things each budget holder thought should be different, focusing on specific actions and their impacts rather than a myriad of little adjustments to numbers hidden in pages of detail.

In effect, the statistical extrapolation took care of detail and of highly uncertain streams of events, but the spreadsheet gave managers the opportunity to over-ride or adjust the extrapolation where they could justify it.
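The arithmetic at the heart of the spreadsheet is very simple, which is part of its appeal. In this sketch (names and figures are illustrative; the actual spreadsheet layout is not published), each Variation carries a description and a quantified month-by-month impact, and the Plan Projection is just the extrapolation plus the sum of the Variations.

```python
# Sketch of the Variations idea; the data and field names here are
# invented for illustration, not taken from the UKAS spreadsheet.

months = ["Apr", "May", "Jun"]
extrapolation = {"Apr": 200, "May": 205, "Jun": 210}  # statistical extrapolation

# Each Variation: what will be different, and its monthly financial impact
# relative to the extrapolation.
variations = [
    {"what": "New client programme starts",
     "impact": {"Apr": 0, "May": 8, "Jun": 12}},
    {"what": "One-off event last year, airbrushed out of history",
     "impact": {"Apr": -5, "May": 0, "Jun": 0}},
]

plan_projection = {
    m: extrapolation[m] + sum(v["impact"][m] for v in variations)
    for m in months
}
print(plan_projection)  # {'Apr': 195, 'May': 213, 'Jun': 222}
```

Because each Variation is named and quantified separately, reviewing a forecast becomes a matter of reading a short list of claimed differences from the past, rather than hunting through pages of adjusted detail.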

Early trials

Having seen the spreadsheet design demonstrated, UKAS liked the concept and decided to try implementing it with one spreadsheet model for each of their profit/cost centres, intending to add up the predictions to get to an overall forecast.

Progress towards this was made in small but rapid stages. Initially Richard Crookes tried it with his own cost centre using his own budgets, actuals, and Variations. Next, another budget holder was asked to have a go using his numbers.

Following this, confidence was high and the Finance team decided to use the new approach at the next reforecasting round – faster progress than had initially seemed realistic. “This works. It’s the future for us,” said Richard. Spreadsheets were made for each budget holder, who was then briefed individually on how the spreadsheet worked and what they would be asked to do.

Some helpful ideas to make the new tool more user-friendly were picked up and quickly incorporated.

First live use

The first live use was an exciting step, but not a leap into the unknown. The experiments done showed that, even if budget holders were unable to produce sensible Variations, the worst outcome would be that the pure Extrapolation would be used instead, and this was known to be better than forecasts used in the past, and much easier to produce.

The first live use demonstrated the relative ease of working with reduced detail and leaving a lot of the forecasting to an automated calculation. It also showed the value of concentrating the budget holder’s adjustments into one table of Variations.

Whenever you try something new, especially something that is conceptually new, there will be some confusion and a tendency to try to stick with what is old and familiar. This first live use of the new approach, despite the trials and individual briefings, was no exception.

A few budget holders tried to do what they had done in the past, which was to adjust the forecast to be the number they wanted to show. However, it was easy to see which Variations made sense and which should be excluded.

The main learning point from this first live use was that it takes time to work out how to express ideas as Variations. Gradually, one by one, the techniques to use in each case became clearer.

Summary of improvements

By the second use of the new forecasting rule and Variation approach, UKAS was happy to be saving time and effort while producing forecasts that were more reliable, and accurate to an extent that was much better understood than before.

Their experience shows that the accuracy of statistical extrapolation can be assessed quickly and cheaply using past results as a test-bed and, if the accuracy is reasonable, then this can be exploited without specialist software.

In future they may refine their forecasting rules, make more use of the projections as an action planning tool, project further into the future, show prediction intervals more often, and use their new approach in budget setting. For now, forecasting has stopped being the biggest headache in Finance.




Acknowledgements and links: This case study is published with the kind permission of UKAS (www.ukas.com). We are also grateful to Professor Michael Mainelli for suggesting a statistical approach. Michael frequently generates good new ideas and is an executive director of the consultancy Z/Yen (www.zyen.com).




Words © 2007 Matthew Leitch