Working In Uncertainty

Matthew Leitch column: Uncertainty quantification

by Matthew Leitch, first published 2005.

(This article first appeared under the title ‘The Matthew Leitch Column: Uncertainty quantification’ in Emerald Insight's publication ‘The Journal of Risk Finance incorporating Balance Sheet’, volume 6 number 1, 2005.)

Putting numbers on uncertainty

Most readers of this journal will be more than familiar with risk quantification as it is used for making decisions about insurance, credit, and investments in securities. But what about other situations where big decisions involve risk and uncertainty?

A couple of years ago I was part of a team doing due diligence for a bank that was interested in buying a high-risk, high-technology company for around 100m. The company was awash with uncertainties, many of them ominous, and these were urgently reported and discussed in a series of tense meetings.

As part of this effort a cash flow model was built, yet despite the circumstances and the money involved there was no explicit consideration of uncertainty in the model. The model was designed to report exactly the cash flow to expect month by month, with no hint of doubt. At the time nobody commented that this level of precision and apparent certainty was at odds with reality but, of course, it was.

There are still many of these single-value models around and, as we will see, sensitivity analysis is not enough.

Simulate to illuminate

The most common answer to this is to replace single value inputs with probability distributions, use Monte Carlo simulation to crunch the numbers, and present the results as probability distributions for key output variables.

Instead of saying, for example, that the NPV of a project is 21.7m, you might show a graph of the distribution of the NPV, on which the mean and 80% certainty levels are highlighted.
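As a minimal sketch of the idea (all the distributions and figures here are invented purely for illustration), a single-value cash flow model can be re-run many thousands of times with its inputs drawn from distributions, and the mean and an 80% certainty interval read off the simulated NPVs:

```python
import random
import statistics

def npv(cashflows, rate):
    """Discount a list of annual cash flows (year 0 first) at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simulate_npv(trials=100_000, seed=42):
    """Monte Carlo: draw uncertain inputs and record the NPV of each trial."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        # Illustrative assumptions, not real figures:
        outlay = -rng.triangular(90, 120, 100)           # initial cost: 90 to 120, most likely 100
        revenues = [rng.gauss(30, 8) for _ in range(5)]  # five uncertain annual inflows
        results.append(npv([outlay] + revenues, rate=0.10))
    return results

results = sorted(simulate_npv())
mean_npv = statistics.mean(results)
# An 80% certainty interval: the 10th and 90th percentiles of the simulated NPVs.
low, high = results[len(results) // 10], results[9 * len(results) // 10]
print(f"mean NPV: {mean_npv:.1f}  80% interval: ({low:.1f}, {high:.1f})")
```

In practice the same thing is usually done with a spreadsheet add-in rather than hand-written code, but the principle is identical: distributions in, a distribution out.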

Avoid big mistakes

Straight away this avoids the main problem of not showing uncertainty, which is the bogus impression that you know exactly what the future holds and so plans can be made on that basis.

To some extent the same sense of uncertainty can be conveyed by a sensitivity analysis, but there are two other technical problems that can lead to large errors that cannot be removed by sensitivity analysis.

First, there is the Flaw of Averages, as it is called by Professor Sam Savage of Stanford University. This is the assumption that ‘average’ inputs applied to a model will give average outputs, in other words that there's no need to worry about uncertainty. This is only true if the model is linear, and how many people know whether their model is linear? Usually it is not, and the errors can be large.

For example, imagine a publisher predicts demand of 4,000 copies for a new book and prints 4,200 just in case. If actual demand is greater than 4,200, sales will be capped at 4,200. Average sales will therefore be lower than average demand, because of the risk that demand exceeds 4,200.
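The effect of the cap is easy to demonstrate by simulation. A minimal sketch, assuming (purely for illustration) that demand is roughly normal with mean 4,000 and standard deviation 500:

```python
import random

PRINT_RUN = 4_200  # copies printed

def simulate_sales(trials=100_000, seed=1):
    """Demand is uncertain around 4,000; sales are capped at the print run."""
    rng = random.Random(seed)
    total_demand = total_sales = 0.0
    for _ in range(trials):
        demand = rng.gauss(4_000, 500)         # illustrative demand distribution
        total_demand += demand
        total_sales += min(demand, PRINT_RUN)  # cannot sell more than were printed
    return total_demand / trials, total_sales / trials

avg_demand, avg_sales = simulate_sales()
# Plugging the average demand into the model suggests sales of 4,000,
# but the cap cuts off only the upside, so simulated average sales are lower.
print(f"average demand: {avg_demand:.0f}, average sales: {avg_sales:.0f}")
```

The model of sales, min(demand, 4,200), is not linear, which is exactly why the average of the outputs is not the output of the average.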

Second, there is the value of information and flexibility (often called the value of options). When I was trained as an accountant I learned to build cash flow models. While it was never discussed, there was an assumption that all the decisions we might ever make had already been taken. We would never again learn anything, or react to situations by changing our plans. We were committed.

But of course in real life we do learn and we do respond. Failing to model those decisions means our valuations are either too low (because we failed to see the value of our options) or too high (because we didn't see how the plan would reduce our existing options). Again, the error can be large.
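One way to see the size of the effect is to value the same hypothetical project twice, once assuming we are committed and once allowing ourselves to abandon it after learning how the market has turned out (every figure here is invented for illustration):

```python
import random

def project_value(can_abandon, trials=100_000, seed=7):
    """Two-stage sketch: pay 50 up front; a year later we learn whether the
    market is good or bad, then receive an uncertain payoff (unless we quit)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        good_market = rng.random() < 0.5   # what we learn after a year
        payoff = rng.gauss(120, 20) if good_market else rng.gauss(-20, 20)
        if can_abandon and not good_market:
            payoff = 0.0                   # walk away instead of funding a loss
        total += -50.0 + payoff            # initial investment plus outcome
    return total / trials

committed = project_value(can_abandon=False)
flexible = project_value(can_abandon=True)
# The same seed pairs the trials, so the difference is purely the option's value.
print(f"committed: {committed:.1f}  with abandonment option: {flexible:.1f}")
```

A model that ignores the abandonment decision undervalues this project, and by a material amount relative to its cost.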

Not enough information?

One of the most common reasons for not capturing uncertainty explicitly in quantitative models is the feeling that we don't have enough information. Oddly, we seem to find it easier to guesstimate a number exactly than to make looser, more probabilistic statements about its value. Then there's the theoretical legacy of the frequentist school of probability, under which probabilities attach only to repeatable events, not to one-off judgements.

In fact people have no difficulty putting numbers on uncertainty in the absence of hard data, if you ask the question in the right way, as can be seen from the popularity of betting on sports events. Putting numbers on our judgements helps us work out their implications and generates feedback that can help us revise them.

This can help with difficult decisions. For example, a few years ago a company had a 50% share in a joint venture that had been struggling for years. The future of the venture was so uncertain that, one year, a long and painful process of analysis was needed to convince the Board and the auditors that the venture was still a going concern.

Not long after the accounts had been signed off a new finance director arrived who decided the problem should be quantified. A model was built that captured alternative futures for the venture, and assigned probabilities. As a result people realised that, realistically, there was little hope of success and the venture was closed.
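A model of that kind can be very simple. A hypothetical sketch (the scenario names, probabilities, and values below are all invented, not the actual figures from that case):

```python
# Enumerate the plausible futures for a struggling joint venture, assign
# judged probabilities, and look at what those judgements imply.
scenarios = {
    # name: (judged probability, estimated value to the partner)
    "turnaround succeeds":        (0.05,  40.0),
    "limps on, breaks even":      (0.25,   0.0),
    "declines and is wound down": (0.70, -15.0),
}

# The probabilities must cover all the futures considered.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

expected_value = sum(p * v for p, v in scenarios.values())
p_success = scenarios["turnaround succeeds"][0]
print(f"expected value: {expected_value:.1f}, chance of success: {p_success:.0%}")
```

Even rough numbers like these make the conclusion hard to avoid: a small chance of a modest upside does not offset a large chance of continuing losses.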

Today's tools

Modelling risk in general business situations is easier than doing it for insurance decisions or calculating value at risk for a bank. We don't have to be as accurate, especially with extreme values, and don't have to work so hard at fitting models to large databases of history.

What I like about the spreadsheet add-in tools for risk modelling is that they substitute raw computing power for my limited mathematical skill. My inability to cope with advanced calculus and time series is no problem when, with the click of a button, my laptop can run a common-sense model 100,000 times and show me the results as a graph after a few seconds.

Human nature

A caveat. Explicit uncertainty modelling is a good thing and encourages people to recognise and respond to uncertainty. However, it's not always enough. In one systems company a model was developed to generate minimum bid levels to guide sales people, and the model considered risk. Unfortunately, when the model gave a minimum bid figure that the salespeople thought was too high to win the work, they would sometimes remove risk factors until the bid price fell to their liking!

Words © 2005 Matthew Leitch. First published 2005.