Working In Uncertainty

Automating open-mindedness

One reason we tend to think too narrowly and ignore or underestimate our uncertainty is that doing better is hard work. Thinking through multiple alternatives is tiring and time-consuming, so we tend not to bother. For example, a detective investigating a serious crime will usually be more effective by following several reasonable hypotheses about the crime than by working only one theory at a time. More can be learned from each interview and each fragment of forensic evidence if multiple hypotheses are considered simultaneously instead of just one. Similarly, a business chief executive making a plan for his company will do better if he considers a number of alternative forecasts for the economy, customer reactions, and competitors' tactics than if he considers just one forecast.

In the days before cheap, powerful electronic computers there was no alternative to hard mental work. If you wanted more than one cash flow forecast from your accountant then he would have to spend hours calculating each one, and checking it.

Today, if you want another forecast to see what would happen if interest rates rose, or prices had to be dropped, or if a key customer were lost, it's usually a matter of typing another number into a spreadsheet, hitting the enter key, and looking at the revised results. Your computer might do thousands of calculations to produce the new forecast but it is all done in a second or so.

This degree of automation of mental work is staggering. Now we can consider additional possibilities with ease and even program computers to consider alternatives on a massive scale and present the results back to us. Open-mindedness can be automated. In this article I provide an overview of ways that this can be done.

People sometimes say that using maths and computers in business is hard, but this is a mistaken view based on a mistaken comparison. If you tried to consider as many possibilities, as accurately, and as consistently, without maths and computers it would be much, much harder. The mistaken comparison that people often make is between doing a really thorough job by mathematics and computer power versus doing a much less thorough job without those tools. The fair comparison is when both approaches are used to achieve the same level of rigour. Maths and computers are labour saving, not harder work. No wonder employment prospects for top maths graduates are so good.

Automation is not appropriate in every case, of course. It is usually not worthwhile in the following situations:

  • Unimportant decisions and diagnoses, where the effort of setting up automation is not justified.

  • Where the main challenge is to gather relevant data, and those data are easy for a human to perceive but hard to capture for use by a computer: for example, body language, the visual style of furniture and clothing, and physical beauty. (In some cases it is worth getting humans to rate their perceptions so that a computer can process the resulting data, because the computer can then combine those ratings more consistently than unaided judgement can.)

  • The first attempt at a decision or diagnosis, because trying to do things by brain power alone helps us understand how best to automate. Human thinking is messy, limited, and unreliable, but it is flexible and driven by a vast store of knowledge, so we need to find out what is applicable before automating.

  • Where nobody available to help knows how to use maths and computers to automate parts of thinking.

Alternative predictions

When you're thinking about a plan of action (e.g. a business plan, choice of project, choice of employee, or alternative computer system design) you usually need to think about what would happen if you adopted that plan. Typically, you are not sure what would happen. You could imagine many possibilities, each of which might be evaluated. Using paper and pen you could tabulate alternatives or draw a decision tree.

Obviously, considering every possibility you can imagine is hard work, so we tend to ignore all but the most likely. We tend to regard remote possibilities as impossible, which can lead to large errors. One of the great advantages of automated evaluation is that vast numbers of very unlikely possibilities can be accounted for, often showing that, collectively, the possibilities that would otherwise have been ignored are important.

Here are some specific techniques, roughly in order of increasing sophistication.

  • Ad hoc reforecasts by spreadsheet: Imagine you have created a forecast on a spreadsheet. You wonder what difference it would make to the forecast if one of your input variables was slightly different, so you type in a different number and look at what happens. This is so simple, so natural, and so easy it hardly seems important but it is. Sixty years ago this could not be done but now we do it almost instinctively. While other techniques discussed below are more powerful, ad hoc reforecasts by spreadsheet are done so often and by so many people around the world that they are probably the single most important technique.

  • Systematic tabulation of alternative forecasts: If you realise early on that one input variable is particularly important and uncertain, and if the model is quite simple, then you can make a table where each row deals with one particular value of the important input variable. The results can be graphed against the input variable.

  • Matrices of alternative forecasts: If you realise early on that two input variables are particularly important and uncertain, and if the model is quite simple, then you can create a matrix of alternative input values and another matrix of resulting output values on your spreadsheet. Again, graphs can be created.

  • Stored scenarios: Spreadsheet software usually has features that let you name and save collections of input values as ‘scenarios’ so that you can show people the results of running each scenario more conveniently. This makes it easier to explore the effect of changing several input values at the same time.

  • Sensitivity analysis: Systematically exploring variations of input variables to see how much difference the changes make is called sensitivity analysis, and there are alternative techniques. These include testing the effect of a unit change in each variable, a fixed percentage change in each variable, or a change of equal probability for each variable. Another approach is to find out how much a variable would have to change in order to change a decision. (A minimal sketch in code follows this list.)

  • Monte Carlo simulation by table: Even a cheap laptop computer today can easily calculate the results of tens of thousands of alternative forecasts, but of course thinking of that many scenarios to try is itself rather laborious. Why not let the computer do it for you? Monte Carlo simulation is a very simple technique in which values for uncertain input variables are picked at random, but in a way that reflects your views and evidence about the true values. The forecast calculation is then run on each of thousands of alternative sets of input values.

    If your model is simple then there is an easy way to do Monte Carlo simulation that requires no special software. Create a table where each row is a forecast and each of the input variables has its value generated randomly by formula. For example, to generate independent values according to a Normal distribution with a mean of 100 and a standard deviation of 10 you would type something like ‘=NORM.INV(RAND(),100,10)’. You can then analyse the results by taking their average and variance. You can also copy and paste the values of the table and sort them by result, which lets you see the specific details of scenarios where extreme results were achieved. (A sketch of the same idea in code follows this list.)

  • Monte Carlo simulation with summarised results: To go further with Monte Carlo simulation it helps to have a tool to make it easier, such as the well known @RISK Excel add-in. There are many alternatives, some free. These summarise and graph the results conveniently, among other things. The probability distributions and tornado diagrams produced by these tools make it much easier for people to visualize and respond to their uncertainty, and appreciate the value of making robust, flexible plans.

  • Monte Carlo simulation with programmed decisions: We often change our plans when circumstances change, but when evaluating planned courses of action we tend to ignore this fact and make our forecasts as if all our actions were decided up front. Within a Monte Carlo simulation it is easy to take future decisions into account, at least to some extent. Design the model to work through a series of time periods (e.g. months, quarters, or rounds of a competition) and use conditional formulae to check results so far and alter plans. For example, you might calculate the change in sales effort with something like ‘=IF(SALES_LAST_MONTH > 1000, 10, -10)’, which means that higher sales will lead to increased sales effort while lower sales will lead to reduced sales effort. (Again, a sketch in code follows this list.)
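
To make the sensitivity analysis idea concrete, here is a minimal sketch in Python rather than a spreadsheet. The toy forecast model, its input values, and the choice of a 10% perturbation are all invented for illustration; the point is only the mechanical pattern of nudging each input in turn and watching the output.

    # Minimal sensitivity analysis sketch: perturb each input in turn
    # and see how much the forecast output changes.

    def forecast(inputs):
        # A toy cash flow model (purely illustrative).
        return (inputs["units_sold"] * inputs["price"]
                - inputs["units_sold"] * inputs["unit_cost"]
                - inputs["fixed_costs"])

    base = {"units_sold": 1000, "price": 25.0, "unit_cost": 15.0, "fixed_costs": 6000.0}
    base_result = forecast(base)

    for name in base:
        tweaked = dict(base)
        tweaked[name] = base[name] * 1.10          # a 10% increase in this input
        change = forecast(tweaked) - base_result   # effect on the forecast
        print(f"{name:12s}: output changes by {change:+.0f}")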
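
The Monte Carlo simulation by table can be sketched in the same style. Again the profit model and the distributions are invented; each pass through the loop plays the role of one row of the spreadsheet table.

    import random
    import statistics

    def profit(units_sold, price, unit_cost):
        # Toy forecast model (purely illustrative).
        return units_sold * (price - unit_cost) - 6000.0

    results = []
    for _ in range(10_000):
        # Draw each uncertain input from a distribution reflecting your views.
        units_sold = random.normalvariate(1000, 100)   # like NORM.INV(RAND(),1000,100)
        price = random.normalvariate(25, 2)
        unit_cost = random.normalvariate(15, 1)
        results.append(profit(units_sold, price, unit_cost))

    print("mean profit:", round(statistics.mean(results)))
    print("standard deviation:", round(statistics.stdev(results)))
    # 5th and 95th percentiles of the simulated outcomes.
    cuts = statistics.quantiles(results, n=20)
    print("5th-95th percentile:", round(cuts[0]), "to", round(cuts[18]))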
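
Adding programmed decisions is a small change to that pattern. The sketch below (with invented numbers throughout) works through twelve months and, each month, raises or lowers sales effort depending on the previous month's sales, just as the IF formula above does.

    import random
    import statistics

    def one_year():
        effort = 100.0        # sales effort in the first month (illustrative units)
        total_profit = 0.0
        for month in range(12):
            # Sales respond to effort, with random noise.
            sales = effort * random.normalvariate(10, 2)
            total_profit += sales * 0.3 - effort
            # Programmed decision: raise effort after a good month, cut it after a bad one.
            effort += 10 if sales > 1000 else -10
        return total_profit

    results = [one_year() for _ in range(10_000)]
    print("mean profit:", round(statistics.mean(results)))
    print("worst 5% of runs fall below:", round(statistics.quantiles(results, n=20)[0]))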

Alternative plans

Another reason we tend to underestimate the value of using maths and computers to automate open-mindedness is that we underestimate the number of times we will have to revise our thinking. We imagine coming up with a plan, producing a forecast, agreeing the plan, then following it until the actions have been finished. On that basis we will only need to make one forecast.

In reality we find ourselves having to reforecast over and over again. First, a problem with the original forecast is noticed so it has to be done again, and again for each subsequent correction or refinement. Then we find it helpful to evaluate several versions of our plan as we go along, to make a better plan. Then we find that other people have ideas for improvement and want revised evaluations. Once the plan is agreed and time has passed we find that conditions change unexpectedly, more is learned from experience, and that leads to requests for revised plans and revised forecasts to evaluate them. The idea of doing one forecast is extremely optimistic.

If our expectations about the number of forecasts that will be needed were more realistic we would start automating forecasts earlier and more often.

Here are some representative techniques:

  • Ad hoc trials: As with forecasts, the simplest and most common approach is to work with a computer spreadsheet and simply type alternative plans into the model to see what happens. For example, you could change sales effort, or imagine buying an extra vehicle, or try setting a different price.

  • Option flags: If some of the options that people are likely to want to explore can be anticipated in advance of a planning meeting then it is possible to set up cells in the spreadsheet that select options in the plan. For example, you might have three patterns of sales effort set up in the model and use a cell value to indicate which pattern is to be used for a particular forecast.

  • Systematic tabulation and matrices: If one or two variables in the plan or design are obviously important then it is easy to make a table or matrix to show the results of systematically varying those variables. A graph should make it easy to spot where the best results are found.

  • Iterative search: If more than two variables in the plan can be tweaked, or if you are too lazy to look at the results yourself and choose the best combination, you can set up your model with an overall measure of results and then let the software explore the strategy options and pick the best for you. In Excel this usually means using Solver to vary cells to achieve the best result subject to some constraints. There are many methods for searching for a good plan, and Excel's Solver offers only a few of them. (A brute-force sketch of the idea in code follows this list.)

  • Evolutionary algorithms: A particularly flexible but usually rather slow way to search for better plans/designs is to use an evolutionary algorithm. This works like evolution in biology, generating populations of varied possible plans/designs, evaluating their ‘fitness’, and then combining the fittest to produce a new generation of plans/designs. This process can go on for thousands of generations. Although the process might take hours to get reasonable results (instead of the seconds taken by Solver), evolutionary algorithms can explore wider varieties of possible plans/designs, combining elements in ways that are more elaborate and harder to characterise. (A sketch in code follows this list.)

  • Portfolios of actions: Faced with a list of proposed projects or other investments we tend to evaluate each one, put them in descending order of attractiveness, and then choose the ones at the top of the list. This is quite a good approach, but it isn't the best. What we are trying to choose between is not individual projects but sets of projects. In theory we should be considering each possible set of projects. The problem is that this gives us a lot of mental work to do. For example, with 4 proposed projects there are 16 possible sets, ranging from accepting none of the projects to accepting them all. Sometimes it is possible to set up a program that knows how to evaluate sets of projects, checking for synergies and the combination of financial impacts over time, for example. The program can then be given sets of projects to evaluate, or made to evaluate every possible combination exhaustively, or to search more intelligently through the sets most likely to be attractive. (A sketch in code follows this list.)
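
Here is a brute-force sketch of the iterative search idea. It does not use Excel's Solver; it simply tries a grid of values for two plan variables, a price and an advertising budget that are invented for illustration, and keeps the combination with the best overall result.

    # Crude iterative search: try a grid of plan settings and keep the best.
    def expected_profit(price, advertising):
        # Toy model: demand falls with price and rises, with diminishing returns,
        # with advertising (all coefficients invented).
        demand = 5000 - 150 * price + 20 * advertising ** 0.5
        return demand * (price - 10) - 100 * advertising

    best = None
    for price in range(11, 41):                  # candidate prices, 11..40
        for advertising in range(0, 201, 5):     # candidate advertising budgets
            result = expected_profit(price, advertising)
            if best is None or result > best[0]:
                best = (result, price, advertising)

    print("best profit %.0f at price %d, advertising %d" % best)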
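
An evolutionary search over the same invented model might look like the sketch below. The population size, mutation sizes, and number of generations are arbitrary choices; the essential pattern is keeping the fittest plans, breeding new ones from pairs of them with small random changes, and repeating.

    import random

    def fitness(plan):
        price, advertising = plan
        demand = 5000 - 150 * price + 20 * max(advertising, 0.0) ** 0.5
        return demand * (price - 10) - 100 * advertising

    def breed(a, b):
        # Combine two parent plans and apply a small random mutation.
        price = random.choice([a[0], b[0]]) + random.uniform(-2, 2)
        advertising = max(0.0, random.choice([a[1], b[1]]) + random.uniform(-10, 10))
        return (price, advertising)

    # Start with a random population of plans.
    population = [(random.uniform(11, 40), random.uniform(0, 200)) for _ in range(50)]

    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                                   # the fittest survive
        children = [breed(*random.sample(parents, 2)) for _ in range(40)]
        population = parents + children                             # next generation

    best = max(population, key=fitness)
    print("best plan: price %.1f, advertising %.1f, profit %.0f"
          % (best[0], best[1], fitness(best)))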
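
The portfolio idea is easy to automate exhaustively when the number of proposed projects is small. In this sketch the project names, costs, values, and the budget constraint are invented; a more serious evaluator would also model synergies between projects and the timing of cash flows.

    from itertools import combinations

    # Illustrative projects: (name, cost, value).
    projects = [("A", 40, 60), ("B", 30, 40), ("C", 50, 75), ("D", 20, 25)]
    budget = 100

    best_set, best_value = (), 0
    for r in range(len(projects) + 1):
        for subset in combinations(projects, r):    # every possible set of projects
            cost = sum(p[1] for p in subset)
            value = sum(p[2] for p in subset)
            if cost <= budget and value > best_value:
                best_set, best_value = subset, value

    print("best portfolio:", [p[0] for p in best_set], "value:", best_value)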

Multiple criteria and performance levels

Another problem we face when evaluating alternative courses of action concerns valuing their consequences. We can usually see many consequences and find it easiest to think in terms of multiple criteria. For example, when choosing a camera you might consider its price, the quality of the pictures you might get with it, the ease with which you can carry it around in situations where you might actually use it, the ease of using it to take pictures, and how people will react when you show them your new camera (anywhere between, ‘Wow, that's so cool’ and ‘You idiot, why didn't you get the Nikon?’).

If you write down all the reasonable criteria you can think of for a decision you will usually find that there are several (even though you have probably only thought of half the criteria you would regard as relevant if more were suggested to you - see Keeney 2007). Taking all these criteria into account at the same time is difficult.

Another complexity comes from the fact that we value different levels of achievement differently. You can't say, for example, that you think ease of use is more important than price. That would imply that you would pay any amount of money for a tiny improvement in ease of use, which is not the case. In fact there is a particular amount you would pay for a specific improvement in ease of use.

In decisions at work we tend to simplify the problem by using targets. A target says that a particular level of achievement is valued, anything less is not, and anything more than the target is treated as no more valuable than just meeting it. Achievement is simplified into above and below the target. This is yet another mental shortcut that can be tackled by automating open-mindedness.

Once again our thinking is narrower than it should be because of the effort of thinking. We restrict our attention to only some of the criteria and only some of the levels of achievement that we should consider. This in turn means we tend to stop thinking of the alternative consequences of our actions too soon. Automation can help us overcome these weaknesses.

Here are some techniques:

  • Objective functions: Almost none of the techniques developed for mathematical optimization involve targets. Instead the most common approach is to define a function that summarises the desirability of any alternative into one number. The function is called the objective function because the objective is to maximize or minimize it.

  • Linear additive models: Although, as noted above, it is not strictly accurate simply to weight criteria to show that some are more important than others, mathematical combinations of performance that do this are still more accurate than unaided judgement in almost all cases (Dawes 1979). And of course evaluating 100 options using a formula calculated by a computer takes almost no time, whereas doing it by judgement would be tiring and slow. (A sketch in code follows this list.)

  • Additive conjoint models: A more refined approach takes into account the specific levels of achievement on each objective. If you can't be bothered to work out the function intellectually, an alternative is to use a program that poses choices to someone and works out from their answers what their system of values is. This can then be used as an objective function.
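
As an illustration of a linear additive model, here is a sketch that scores some hypothetical camera options by a weighted sum. The criteria, weights, and ratings are invented; an additive conjoint refinement would replace each raw rating with a value function expressing how much each level of achievement is really worth.

    # Linear additive scoring of options against weighted criteria.
    # Ratings are on a 0-10 scale where higher is always better
    # (so a high 'price' rating means an attractively low price).
    weights = {"price": 0.3, "picture_quality": 0.4, "portability": 0.2, "ease_of_use": 0.1}

    options = {
        "Camera X": {"price": 7, "picture_quality": 8, "portability": 5, "ease_of_use": 6},
        "Camera Y": {"price": 5, "picture_quality": 9, "portability": 7, "ease_of_use": 8},
        "Camera Z": {"price": 9, "picture_quality": 6, "portability": 8, "ease_of_use": 7},
    }

    def score(ratings):
        # Weighted sum across all criteria.
        return sum(weights[c] * ratings[c] for c in weights)

    for name, ratings in sorted(options.items(), key=lambda kv: -score(kv[1])):
        print(f"{name}: {score(ratings):.1f}")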

Alternative hypotheses

Although humans learn from experience, we often learn more slowly than we could and that is partly because we only consider a tiny number of possible explanations for experiences at one time, prefer very simple explanations, stick with preconceptions for too long, and tend to forget experiences. Automated learning from experience is a huge area with hundreds of alternative tools for automating the work. In nearly all cases the automated approach involves a wider, more open-minded search for patterns than humans can cope with.

Here are some representative techniques:

  • Multiple regression analysis: This refers to a wide range of techniques designed to analyse many similar examples of something and learn how the variables are related. The more data available, the more variables the methods can consider. The analysis usually takes a second or less by computer, so you can try different hypotheses to see which seem to work best. Alternatively, there are automatic methods that explore alternative combinations of variables and look for the best, most convincing relationships. (A minimal sketch in code follows this list.)

  • Cluster analysis: Many techniques have been developed for putting items into groups based on their attributes. This is often done by specifying a measure of the difference between any pair of items and using that to drive the clustering. You can choose to create a few large clusters by accepting some variation within each cluster, or you can get the software to suggest a larger number of smaller clusters by being more demanding. (A sketch in code follows this list.)

  • Factor analysis: These techniques group variables on the basis of correlation. Variables that tend to correlate are grouped together. As with cluster analysis you can choose to recognize larger or smaller numbers of groups.

  • Diagnostic expert systems: Most techniques that automate open-minded thinking involve calculations using numbers. However, reasoning using other forms of rule is also well developed. An expert system is a program that uses reasoning techniques inspired by human thinking to tackle tasks. Some expert systems use no arithmetic at all, but instead use rules based on categorical variables, such as ‘IF forced vital capacity is high AND bronchoscopy results are positive AND local symptoms are present THEN surgery is probably necessary’. Other expert systems use probabilistic reasoning too. An expert system can be created that, in effect, remembers to consider a very wide range of diagnoses and does not make the mistake of seizing on one very likely diagnosis and then forgetting the other possibilities. (A toy sketch in code follows this list.)

  • Bayesian updating and model averaging: Most of the techniques in the list above suggest one answer and leave it at that, though of course they do it very quickly, allowing you to ask for alternative analyses. However, it is possible to do even better than that. Instead of picking a best guess it is possible to construct a set of alternative hypotheses, one of which must be true, and then process the evidence so that the probability of each of the hypotheses being true is calculated. The answer that comes back is not a single hypothesis but a distribution showing the probability of each hypothesis being true. This distribution can then be fed into the evaluation of alternative courses of action, with an evaluation performed for each hypothesis, then summarised in some way.

    As usual, this sounds exhausting, and it would be for an unaided brain, but done by computer it is easy and quick; the last of the sketches below shows the basic arithmetic of the updating step.
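
A minimal multiple regression sketch using numpy's least squares routine. The data are generated artificially purely to show the mechanics; in this toy example the fitted coefficients should come back close to the 'true' values used to generate the data.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    # Two explanatory variables plus noise; the 'true' coefficients are 3 and -2.
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    y = 5 + 3 * x1 - 2 * x2 + rng.normal(scale=0.5, size=n)

    # Design matrix with a constant column for the intercept.
    X = np.column_stack([np.ones(n), x1, x2])
    coefficients, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
    print("estimated intercept and coefficients:", np.round(coefficients, 2))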
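
Cluster analysis is similarly quick with standard tools. The sketch below uses k-means from scikit-learn on invented two-dimensional data; the choice of three clusters is arbitrary and would normally be explored.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    # Invented data: three loose groups of customers described by two attributes.
    data = np.vstack([
        rng.normal(loc=[0, 0], scale=0.5, size=(30, 2)),
        rng.normal(loc=[4, 1], scale=0.5, size=(30, 2)),
        rng.normal(loc=[2, 5], scale=0.5, size=(30, 2)),
    ])

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)
    for cluster in range(3):
        print(f"cluster {cluster}: {np.sum(labels == cluster)} items")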
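
A toy rule-based diagnosis in the expert-system spirit might look like the sketch below. The conditions and conclusions are simplified inventions, not medical advice; the point is only that the knowledge is held as explicit categorical rules rather than arithmetic.

    def diagnose(findings):
        # findings is a dict of categorical observations, e.g. {"fvc": "high", ...}
        conclusions = []
        if (findings.get("fvc") == "high"
                and findings.get("bronchoscopy") == "positive"
                and findings.get("local_symptoms") == "present"):
            conclusions.append("surgery probably necessary")
        if findings.get("bronchoscopy") == "negative":
            conclusions.append("consider further imaging")
        if not conclusions:
            conclusions.append("no rule fired; refer to a specialist")
        return conclusions

    print(diagnose({"fvc": "high", "bronchoscopy": "positive", "local_symptoms": "present"}))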
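
Finally, Bayesian updating over a small set of mutually exclusive hypotheses takes only a few lines. The prior probabilities and likelihoods below are invented; each piece of evidence reweights the hypotheses, and the output is a probability for every hypothesis rather than a single best guess.

    # Bayesian updating over three mutually exclusive hypotheses.
    priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}

    # Probability of observing the evidence if each hypothesis were true (invented).
    likelihood_of_evidence = {"H1": 0.1, "H2": 0.6, "H3": 0.3}

    unnormalised = {h: priors[h] * likelihood_of_evidence[h] for h in priors}
    total = sum(unnormalised.values())
    posteriors = {h: unnormalised[h] / total for h in priors}

    for h, p in posteriors.items():
        print(f"{h}: prior {priors[h]:.2f} -> posterior {p:.2f}")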

Finally

If you are new to the techniques mentioned above you are probably still feeling rather doubtful. ‘But surely...’ you are thinking. Isn't it very hard to set up this kind of automation? Don't people prefer to just go with their gut? Isn't it obvious that doing maths is harder? But remember that we are only talking about decisions and diagnoses important enough to justify the effort of automation, and that the fair comparison is between doing the thinking rigorously without a machine's help and doing it with help. You wouldn't expect to beat a computer in an arithmetic competition, so why not consider using a tool for other mental tasks too?

References

Dawes, R.M. (1979). The robust beauty of improper linear models in decision making. American Psychologist, 34(7), pp. 571-582.

Keeney, R.L. (2007). Developing objectives and attributes. In Edwards, W., Miles, R.F., and von Winterfeldt, D. (eds), Advances in Decision Analysis: From Foundations to Applications. Cambridge University Press.






Made in England

 

Words © 2014 Matthew Leitch