Working In Uncertainty

When is it OK to use a risk register? (revised)

This revised edition (in 2015) is significantly expanded in scope and reorganized from the first edition (in 2011) to reflect research in 2015[1] showing that many organizations do not use risk registers within a pure Risk Listing[2] approach (despite almost always being expected to).

Risk registers are simply lists of 'risks' with additional information associated with each one. There are differences in the nature of the 'risks', in the associated information, and in the way the register is used.

The following sections discuss preconditions for using three common styles of risk register and associated design details. The three styles of risk register covered in this article, in what I think is descending order of frequency of use in the UK, are:

  • high level risks used as a list of objectives;

  • low level risk events used in Risk Listing; and

  • variables in a decision model.

The article is concerned with the conditions in which risk registers are 'ok', which does not make them the best choice. Risk registers are almost never the best choice of technique because they tend to promote risk management that is separate from core management activities. However, risk registers are expected of some people in some organizations, so our objectives here are just to (1) limit the damage done by using risk registers when they are worse than useless, and (2) suggest the design details for risk registers that make them more appropriate for the use that is being made of them.

High level risks used as a list of objectives

General description

In this style most of the 'risks' are stated with high level phrases such as 'Patient safety', 'Financial security', or 'Cyber risk'. Only rarely are they worded as events that might happen. Typically there are no more than 20 such risks on the register, and in one UK central government department I have heard just 7 risks recommended as good practice.

These 'risks' are used as the basis for wide ranging conversations about what is being done about them and what actual performance has been. In effect, these 'risks' serve as objectives. Performance and progress are discussed periodically.

Conditions and recommendations for risk register design

This style of risk register is widely applicable but, as with any set of objectives, the scope of risk management driven by such a register is limited to decisions on those objectives only. This is a more limiting constraint than you might think because most of the important decisions we take at work involve a lot of factors, some certain and some uncertain, some related to 'risks' and some not (see Appendix A: The Scope Problem). The decisions that are just about risks (and money costs) tend to be quite small, narrow, and often remedial.

Here are some points to consider in designing this style of risk register:

  • It is not appropriate to rate the risks for their probability of occurrence and impact if they do occur. If you consider some typical examples (e.g. "Health and safety") then it is obvious that probability and impact ratings are absurd. Instead use something sensible, such as the cumulative probability of impact greater than a given threshold level, a forecast range for actual performance, or just an overall rating of 'importance'.

  • If you are confident that there will be no actions to consider that affect more than one of the risks on the register then you can use the side-by-side format of risk register, where actions are listed beside risks (see Appendix A: The Mapping Problem). Otherwise, it is better to use a matrix style[3], where risks and actions are the column and row headings of a matrix which can then be filled in to show which actions relate to which risks.

  • Do not have a common, fixed level of risk that is the trigger for taking action or for upwards escalation or reporting. The wide range of actions and the flexibility over the boundaries of 'risks' mean that a fixed threshold is meaningless (see Appendix A: The Aggregation Problem). Instead, avoid such thresholds and just take whatever decisions are worthwhile, regardless of actual performance against each 'risk'. Alternatively, have target risk levels as you might have targets for other objectives (with all the issues that targets can bring).
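The first point above suggests, as one sensible alternative rating, the cumulative probability of impact greater than a given threshold. This can be estimated by simulation; here is a minimal Monte Carlo sketch in which the impact model, the 20% monthly incident chance, the lognormal cost parameters, and the 50,000 threshold are all invented for illustration:

```python
import random

random.seed(42)

def simulate_annual_impact():
    # Hypothetical model: up to 12 chances of an incident per year, each
    # incident's cost drawn from a lognormal distribution. Illustration only.
    n_incidents = sum(1 for _ in range(12) if random.random() < 0.2)
    return sum(random.lognormvariate(8, 1) for _ in range(n_incidents))

threshold = 50_000   # impact level the organization cares about (assumed)
trials = 10_000
exceedances = sum(simulate_annual_impact() > threshold for _ in range(trials))
print(f"P(annual impact > {threshold:,}): {exceedances / trials:.3f}")
```

The output is a single cumulative probability for the chosen threshold, which avoids forcing a 'risk' like this into one probability and one impact figure.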

Alternative techniques to use if you can

The obvious alternative technique is to have those 'risks' on the same lists as other objectives, worded as objectives, and treated in exactly the same manner as other objectives. Push all objectives through management as usual. In this way, things you want to increase (e.g. sales) and things you want to reduce (e.g. injuries) are considered together, as are certain consequences and uncertain consequences (sometimes also called 'risks').

Decisions and the reasons for them can be documented for these objectives/risks as usual and, if this is done, then nothing is gained by documenting them again in a risk register.

Low level risk events used in Risk Listing

General description

The 'risks' in this style are mostly worded as events that might or might not happen. There may also be material about causes and effects and, in some variations of this style, the risks are listed against objectives. In IT related risk registers it is common to have lists of vulnerabilities, which take the place of risks or are listed next to them.

These lists of risks tend to be quite detailed and there may be many risks on the list, potentially thousands.

The risk register is used within a Risk Listing process[2], which involves meetings and documents separate from core management activities and focused on managing the risks on the list.

Conditions and recommendations for risk register design

The conditions for this style of risk register to be 'ok' are quite stringent:

  • The scope must be limited to decisions on actions seen as responses to the risks (see Appendix A: The Scope Problem). That again means a very limited set of decisions is involved and most of the important decisions in an organization are not in scope.

  • There should not be causal connections between the risks on the list. Causal connections create difficulties in deciding probabilities and estimating potential impacts that cannot be resolved within a risk register format (see Appendix A: The Multiple Counting Problem). This again is quite a stringent condition because causal connections between risks on these lists are quite common.

Here are some points to consider in designing this style of risk register:

  • Do not use ratings of probability and impact unless the outcomes are inherently black-or-white (see Appendix A: The Shades of Grey Problem), the impact of each risk is well enough understood that a single figure does not hide much spread (see Appendix A: The Impact Spread Problem), and the possible consequences are not extreme (see Appendix A: The Extremes Problem).

    Instead, use cumulative probability distributions (or a simplification of such) or just an overall rating of 'importance'.

  • Unless you are confident that no action will be a response to more than one risk, do not use the side-by-side style of risk register (see Appendix A: The Mapping Problem). Instead use a matrix style[3].

  • The importance of risks on a list like this depends on how widely each risk is defined. Consequently, it makes no sense to use a fixed, common trigger level of risk for deciding when to take action or escalate risks, unless the risks have a particular kind of equivalence (see Appendix A: The Aggregation Problem). For example, if the risks were a particular insurable event for each of a pool of motor vehicles, then it might be sensible to use a fixed threshold for some purpose. More often it is better to just take additional actions to respond to a risk when those actions are worthwhile, taking everything into account.
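The matrix style recommended above can be sketched as a simple cross-tabulation with risks as columns and actions as rows, ticking a cell when an action responds to that risk. This is only a toy illustration; all the risks, actions, and mappings are invented and not taken from [3]:

```python
# Invented examples: a matrix-style register where one action ("Cross-training")
# responds to more than one risk, which the side-by-side format handles badly.
risks = ["Data loss", "Supplier failure", "Key-person absence"]
actions = ["Daily off-site backup", "Dual sourcing", "Cross-training"]

mapping = {
    "Daily off-site backup": {"Data loss"},
    "Dual sourcing": {"Supplier failure"},
    "Cross-training": {"Key-person absence", "Supplier failure"},
}

# Print the matrix: an 'x' marks each action/risk pairing.
width = max(len(a) for a in actions)
print(" " * width, *(r[:12].ljust(12) for r in risks))
for action in actions:
    row = ("x".ljust(12) if r in mapping[action] else " ".ljust(12) for r in risks)
    print(action.ljust(width), *row)
```

Because each action appears only once, nothing has to be repeated when it maps to several risks.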

Here are some common applications of risk management classified according to whether they are suitable for risk registers used in this way or not.

Application | Risk event register OK? | Comments
Book-keeping errors | Yes | A classic application of the idea, but even here it is best to have a model of the book-keeping system, then work from the model to identify possible problems. Don't just brainstorm randomly.
Revenue assurance | No | A model helps you quantify the implications of errors that do not translate directly into lost money.
Small scale fraud/security incidents | Yes |
Routine insurance policy planning e.g. vehicles, premises | Yes | The fact that the policies available are well known and long established gives this a structure that can work. Avoid probability x impact ratings, of course.
Relatively small safety worries e.g. personal accidents on a building site | Yes | Another classic application of risk registers. The incidents need to be small enough that one incident is very unlikely to lead to another.
Safety of systems | No | There's too much interconnection.
Business planning | No | Completely unsuitable as a general approach due to interconnections.
Management of business operations | No | Again, unsuitable as a general approach due to interconnections.
Project risk management | No | Unsuitable as a general approach due to interconnections.
Investments in financial markets | No | Who would even think of using a risk register for this?

Alternative techniques to use if you can

The best way to manage risk is, usually, as an integral part of core management activities such as planning, design, and objective-setting. There are many well established ways to do this. Typically, this means getting decisions right in the first place, rather than trying to correct badly made decisions after they have been taken.

Variables in a decision model

General description

In this style the risk register is really just documenting the elements of a model, where at least some of these have associated uncertainty. The centre of attention is the model, and the register is just a view of its variables. These variables include its inputs, intermediary variables, and the variables whose values we want to predict. They also include parameters for relationships between other variables. For example, we might think that better customer service drives increased sales, but by how much and after how much delay? Such crucial parameters are often highly uncertain.

A common form of such a list is the Tornado Diagram, which is a list of variables with a coloured bar next to each whose length indicates the importance of the uncertainty around the variable to the uncertainty in the final prediction. You could think of this as a form of 'risk rating' done automatically by the computer and reflecting the logic of a model that will ideally be coherent and based on our best thinking and research.
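This automatic rating can be sketched with a one-at-a-time sensitivity sweep, a common way of producing the bars of a tornado diagram: swing each uncertain input between its low and high estimates while holding the others at base values, then rank inputs by the resulting swing in the output. The profit model, base values, and low/high estimates below are all invented for illustration:

```python
# Toy decision model (invented): profit driven by three uncertain inputs.
def profit(price, volume, unit_cost):
    return (price - unit_cost) * volume

base = {"price": 10.0, "volume": 1000.0, "unit_cost": 6.0}
ranges = {  # (low, high) estimates reflecting each input's uncertainty
    "price": (9.0, 11.0),
    "volume": (700.0, 1300.0),
    "unit_cost": (5.0, 8.0),
}

# Swing each input on its own and record the change in the output.
swings = {}
for name, (lo, hi) in ranges.items():
    swings[name] = abs(profit(**{**base, name: hi}) - profit(**{**base, name: lo}))

# Longest bar first, as on a tornado diagram.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} {'#' * int(swing / 200)}  swing = {swing:,.0f}")
```

Here the unit cost turns out to matter most, which is exactly the kind of finding that directs further research or risk-reducing action.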

Models such as this go through several iterations of development, and at each stage it can be helpful to review the major areas of uncertainty and consider further research and further actions that can be taken to manage those. These ideas could be documented on the risk register.

Conditions and recommendations for risk register design

This style of risk register is appropriate for any situation where a decision support model is appropriate, which covers most important decisions. However, it may not be necessary to document the model in the form of a risk register.

A model is required if this style is to be used, of course, but it need not be fully quantified and automated. It might be a conceptual model.

Here are some points to consider in designing this style of risk register:

  • Unless your model only has binary variables, do not use probability and impact ratings. Models like this are quite rare but do occur in models of the reliability of machinery, for example, where components are often thought of as either ok or broken.

  • For mapping to actions, use a matrix[3] style of risk register.

  • Try to avoid threshold risk levels for triggering actions or escalation. In the past this has been done with some safety models where components have been treated as completely safe once they are deemed safe enough. The residual risk has been ignored. Unfortunately, the cumulative effect of many remote chances can be too big to ignore. It is better to let the model accumulate all those tiny probabilities.
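The arithmetic behind that last point is easy to demonstrate. In this sketch the per-component probability and component count are invented, and the component failures are assumed independent:

```python
# Many remote chances, accumulated: each component is individually 'safe
# enough', yet the chance that at least one fails is far from remote.
p_each = 1e-4        # annual failure probability of one component (assumed)
n_components = 2000  # components each deemed individually safe (assumed)

# Probability that at least one fails in a year, assuming independence.
p_any = 1 - (1 - p_each) ** n_components
print(f"P(at least one failure): {p_any:.3f}")  # about 0.181
```

Ignoring each residual risk because it is below a threshold would, in this example, ignore roughly an 18% chance of a failure somewhere.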

Alternative techniques to use if you can

The more traditional way to document models is to write a paper using the language and symbols of mathematics.

References

[1] Leitch, M. (2015). Integration in future risk management guidance and standards: results from a Risk Improvement Group survey.

[2] Leitch, M. (2012). The Risk Listing school.

[3] Leitch, M. (2003). Matrix Mapping: the easiest and best way to map internal controls.

Appendix A: Problems

The Scope Problem

The Risk Listing process frames everything in terms of 'risks' that are to be responded to in some way. Consequently, it is easy to imagine it applied to decisions about actions that are primarily motivated by some worry. For example, should you get a flu jab for next winter? That's a decision prompted by your worry about getting flu, and you wouldn't get the flu jab for any other reason. In contrast, a decision such as between going to one university or another is not something driven mainly by some 'risk'. You want to go to a university and try to choose the best for you. It just doesn't make a lot of sense to consider that a 'response' to a 'risk'.

To be more precise, a Risk Listing decision is supposed to be taken on the basis of reduction to one or more risks and the money cost of the action required. This leaves out other beneficial consequences (e.g. saving money or time other than through risk reduction) and non-financial disbenefits about which there is no uncertainty (e.g. certain death).

The Mapping Problem

The usual layout of a risk register is as a table that lists risks on the left and responses to those risks (also known as controls) next to them. The idea is that this shows which controls are acting on each risk.

However, if a control acts on more than one risk then the text describing the control has to be repeated wherever it is relevant. Since people tend to be reluctant to repeat text so often, they tend to leave out relevant controls once a risk already has at least something next to it.

The Aggregation Problem

Another common technique with risk registers is to declare a level of risk to be a threshold beyond which some special action is taken. The action might be escalation or reporting. It might be that risks above a certain level must be mitigated in some way whereas risks below that level can be ignored.

Unfortunately, risk events are events, and events are sets of outcomes, and what is included in each set is up to us. Consequently, the level of each risk depends, in part, on how widely it is defined. If someone has a risk that is above the threshold and they don't want that, then all they have to do is split it into smaller risks until all the smaller risks are below the threshold. Conversely, if they have a risk that is below the threshold and they want it above then all they have to do is aggregate it with similar risks until they get there.

Completely uncontrolled aggregation is typical in risk register exercises. However, if your list of risks is more controlled (e.g. each risk is the risk of a particular customer going out of business) then aggregation is naturally controlled and the problems are reduced.
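The splitting move is easy to see with invented numbers. In this minimal sketch the threshold, risk names, and probabilities are all assumptions for illustration, and the split risks are taken to be mutually exclusive and to cover the same ground as the broad one:

```python
# Whether a 'risk' clears a fixed trigger level depends purely on how widely
# it is defined. All names and numbers are invented for illustration.
threshold = 0.25  # escalate any risk whose probability exceeds this (assumed)

# One broadly defined risk...
broad = {"Major supplier disruption": 0.30}

# ...split into narrower, mutually exclusive risks covering the same ground.
split = {
    "Supplier A insolvency": 0.10,
    "Supplier B insolvency": 0.10,
    "Logistics strike": 0.10,
}

print(any(p > threshold for p in broad.values()))   # True: escalated
print(any(p > threshold for p in split.values()))   # False: nothing escalated
print(round(sum(split.values()), 2))                # 0.3: same total exposure
```

The exposure is unchanged, yet splitting it has made the escalation trigger disappear.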

The Multiple Counting Problem

A risk register is, by definition, a list. The list format does not capture anything about possible causal links between items, such as the tendency for occurrence of one risk event to promote the occurrence of another, or for a common cause to drive two or more risks, or for the consequences of a risk to be much harder to cope with if several other risks also happen. Although some causality might be captured within a row of a risk register, through some description of consequences (even a 'bow tie' analysis), causality between items is not represented.

This is a crucial limitation because causal links, where they exist, have a huge effect on the assessed risk level. The 'impact' is supposed to be the ultimate impact of the risk. However, if a risk helps to drive another risk then its impact includes at least some of the impact for that other risk. The impact is double-counted, and may be multiple-counted. Loops present even more mind-boggling problems.

In typical risk registers for overall corporate risk management and for project risk management it is normal to find that most risks are causally linked to at least one other, and that many form part of vicious/virtuous circles. This is another reason why risk registers are not normally appropriate for overall corporate and project risk management.
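A toy calculation shows the double counting at its simplest. The impacts and the conditional probability below are invented for illustration:

```python
# When risk A helps to drive risk B, rating each register row by its
# 'ultimate impact' counts B's damage twice. Numbers invented for illustration.
impact_B = 100        # direct damage if B occurs (assumed)
direct_impact_A = 40  # A's own damage (assumed)
p_B_given_A = 0.5     # A occurring makes B this likely (assumed)

# 'Ultimate impact' of A, as a register row would have it: A's own damage
# plus the expected knock-on damage via B.
ultimate_impact_A = direct_impact_A + p_B_given_A * impact_B  # 90.0

# Adding up the rows then counts the knock-on share of B's damage twice,
# even though the worst case (both occur) only involves 140 of actual damage.
naive_total = ultimate_impact_A + impact_B  # 190.0
print(naive_total)
```

With loops (A drives B, B drives A) even defining each row's 'ultimate impact' becomes problematic, which is why a list format cannot resolve this.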

The Impact Spread Problem

The risk register style also tends to involve some pseudo-mathematical calculations involving 'probability' and 'impact'. The 'probability' concerns how likely it is that the risk event will happen. (If the risk is not worded as a risk event then this doesn't work, which may be one reason why so many risks are worded as events.) The meaning of 'impact' tends to be much less clear.

The impact is supposed to be some kind of valuation of possible outcomes within the 'risk event', reduced to one number or category (e.g. High, Medium, Low). Unfortunately, most risks have impacts that could vary over a wide range for several reasons. For example, "Loss of market share" is a risk whose impact could surely range widely depending on how much market share is lost. Other risks seem narrower, but even here there tend to be problems because the impact is uncertain. If it was represented properly, with a probability distribution over impact, the distribution would be spread because of the uncertainty.

I call this phenomenon 'impact spread' and it is almost universal. If an 'impact' number or category is used to represent a spread of impact then uncertainty is being excluded from consideration. In order for this style of risk register rating to be applicable, the consequences of a risk need to be well understood so that the distribution is narrow and reducing it to a single 'impact' figure or category does not throw away much uncertainty.

Be aware that impact spread makes it very difficult for people to give impact ratings they feel comfortable with.
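Impact spread is easy to demonstrate by simulation. Here a lognormal distribution, entirely invented, stands in for the impact of something like "Loss of market share":

```python
import random
import statistics

# One 'risk' whose impact, modelled honestly as a distribution, spans an
# order of magnitude, so any single 'impact' figure hides most of the
# uncertainty. The distribution parameters are invented for illustration.
random.seed(1)
impacts = sorted(random.lognormvariate(11, 1.2) for _ in range(10_000))

p10 = impacts[len(impacts) // 10]         # 10th percentile
p90 = impacts[(9 * len(impacts)) // 10]   # 90th percentile
median = statistics.median(impacts)

print(f"median impact: {median:,.0f}")
print(f"10th to 90th percentile: {p10:,.0f} to {p90:,.0f}")
print(f"spread ratio (p90/p10): {p90 / p10:.0f}x")
```

Any single figure, whether the median or something else, discards the roughly twenty-fold spread between the 10th and 90th percentiles.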

The Shades of Grey Problem

Risk events are written in black-or-white terms. The risk either will happen or it won't, and there are no degrees of it happening. For example, "There is a risk of failing to meet the sales target" is a typical wording, as if all levels of sales can be simplified down to 'below target' or 'not below target'. In practice many results we care about involve shades of grey. We are interested in the exact level of sales, not just whether sales meet the target.

The black-or-white style of risk event is only appropriate if the outcomes themselves have an inherently black-or-white character. For example, whether a bomb explodes or not while being defused, and whether a bid is successful or not, are outcomes with an inherently black-or-white nature that can be captured nicely with events. In contrast, the level of sales in a year or the extent of fire damage are not inherently black-or-white; they could vary over a range.

The Extremes Problem

'Impact' is usually quantified in terms of money, or lives lost, or some other proxy for the ultimate value involved. The problem here is easiest to see with money. Does losing twice as much money mean experiencing twice as much 'impact'? No, not if the extra loss means financial ruin. In that case it is much more than twice the 'impact'. Choosing a single 'impact' number or category for a 'risk' means taking some sort of representative level and for many people that is a kind of average. However, if you translate the average over the distribution of money into value this will not be the same as the average over the distribution of value, because of such things as financial ruin.

Another reason why consequences should not be extreme if probability and impact ratings are to be used is that extreme situations tend to create strong correlations between events.

Yet another problem is that the levels available for rating probability and impact tend to be rather broad and do not do a good job of capturing very small probabilities and very large impacts. Differences of 10 fold, 100 fold, and more can be lost in the one category "High".

The Two Tails Problem

The risk register style is strongly linked to the word 'risk', which most people understand to refer in some way to bad things that might happen. You could try to encourage people to think of it in a different way, but generally it is more appropriate to restrict risk registers to situations where outcomes are either 'OK' or 'bad' in some way. For example:

  • Accounting cycles, where numbers are either correct or wrong to some extent.

  • Machinery, which is either working or broken.

  • Compliance with laws, where you are either compliant or law-breaking in some way.

  • Health and safety, where people are either healthy and alive, or damaged in some way.

Typical risk registers, even where the scope is something where happy surprises are also possible (e.g. projects, corporate performance), are almost entirely populated by bad things that might happen.

In reality it is common for the outcomes of potential events to include both positive and negative consequences, and consequences that might be positive or negative. For example, I might win a lucrative new consulting project, which means I would gain some money (positive) but also have to do work (negative) and may find the people nice to be with or unpleasant (could be either positive or negative). The only one of these consequences that can be represented by a probability and an impact rating is the possible work.







Words © 2011, 2015 Matthew Leitch