Working In Uncertainty

Two studies of risk registers

by Matthew Leitch; first published 2008.

Introduction

Like them or loathe them, risk registers are an unavoidable fact of life in many organizations. Regulators demand them, auditors check for them, and even customers are asking for them.

All around the world people are being given forms and asked to fill them in. Whether this involves a group meeting or just filling in screens on a database, the work boils down to filling in boxes in a table.

The purpose of these two studies is to describe what happens in practice when people fill in those boxes, and to identify some opportunities for improvement.

Research data source

The research is based on a collection of risk registers gathered by Googling the Internet for ‘risk register’ and related terms, then downloading all the registers and instruction/explanation documents found until an adequate number had been gathered.

This is not a statistical ‘sample’ of risk registers because they were not selected by any kind of random method and risk registers for publication are unlikely to be representative of all risk registers. Almost all published risk registers are from the public sector and most are quite short.

Nevertheless, the collection of risk registers shows interesting variations, and the phrasing of risk items seems typical of unpublished risk registers I have seen.

Study 1: Summary

This is the first of a series of simple studies of real risk registers that aims to provide improved understanding of what people do when asked to work on a risk register.

The objective of this study was to examine the incidence of a phenomenon referred to here as ‘Impact Spread’. If we want to characterise risks appropriately we must understand Impact Spread.

The results show that risks that could have a range of impacts are very common for a number of reasons. In fact, they are the overwhelming majority of risks. Consequently, methods of characterising risks that ask people to give views on potential impact must make it clear how Impact Spread is to be handled. Just asking for ‘the impact’, without explanation, and expecting a single number or rating is inappropriate.

Impact Spread

What is Impact Spread? Take a risk like ‘Building Collapse’ (verbatim from a risk register in the research collection). What is the impact of ‘Building Collapse’? Clearly it depends on the building, the forewarning, whether anyone is in the building at the time, and probably other important factors too.

It could be as trivial as a garden shed falling down during a storm and chipping a garden gnome, or as horrific as a high rise building in a busy city centre toppling sideways, killing thousands.

This is what I call Impact Spread. It is a common feature of risk register items.

The risk register from which this was taken has another box for ‘impact’, and the answer must be one of the categories ‘insignificant’, ‘minor’, ‘moderate’, ‘significant’, or ‘catastrophic’. Is the impact of ‘Building Collapse’ ‘minor’? It could be. What about ‘catastrophic’? Again, it could be. This is the problem most people experience as a result of Impact Spread, and there are a number of responses to it.
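To make the problem concrete, here is a minimal sketch, in Python, of one way a form could record Impact Spread instead of forcing a single rating. The field names and the handling of the scale are purely illustrative assumptions, not taken from any register in the collection.

    # Illustrative sketch only: record the spread of possible impacts
    # rather than a single 'impact' category. Names are hypothetical.
    IMPACT_SCALE = ["insignificant", "minor", "moderate", "significant", "catastrophic"]

    class RiskItem:
        def __init__(self, description, lowest_impact, highest_impact):
            # Record the most trivial and the most severe outcomes the
            # description is meant to cover.
            assert IMPACT_SCALE.index(lowest_impact) <= IMPACT_SCALE.index(highest_impact)
            self.description = description
            self.lowest_impact = lowest_impact
            self.highest_impact = highest_impact

        def has_impact_spread(self):
            return self.lowest_impact != self.highest_impact

    # 'Building Collapse' spans almost the whole scale, so no single box
    # could capture it honestly.
    building_collapse = RiskItem("Building Collapse", "insignificant", "catastrophic")
    print(building_collapse.has_impact_spread())  # True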

Reasons for Impact Spread

There are several reasons for Impact Spread and this study looks at how prevalent each is across a range of typical published risk registers. Here are the reasons studied, starting with the easiest to spot.

  • Explicit multiple outcomes: For example, ‘The organization fails to achieve its corporate plan objectives or is perceived to be ineffective’ lists two things that might happen, connected by ‘or’: (1) failing to achieve objectives or (2) being perceived as ineffective.

  • Multiple objects potentially involved: For example, ‘There may be unforeseen problems with site safety during construction’ refers to problems, of which there may be none, one, or more. In ‘The business fails to maintain its IT systems’ there are multiple IT systems involved.

  • Multiple occasions: For example, ‘Our shop closes due to an emergency’ could happen multiple times during a period.

  • Obvious variable extent: For example, ‘Increase in the number of people qualifying for subsidy’ refers to any degree of increase from just one extra person upwards. Another example is ‘Inadequate resources at weekends.’ Here the degree of inadequacy is important. Having too few to provide the usual performance is one thing, while having no resources at all might be much worse.

  • Other variables extent: For example, ‘Building Collapse’ sounds at first like an all-or-nothing event: either a building collapses or it does not. But, as mentioned in the introduction above, the impact could vary with many variables other than the extent of the collapse itself, such as the size and location of the building and how much warning there is.

  • Uncertainty about impact: For example, ‘Losing £1m on this next bet’ seems at first to provide a very narrowly defined outcome, and yet who can say what impact such a loss would really have on them or their organization? No pure examples of this type have been found in the risk register collection as all items have had Impact Spread for other reasons too. However, in principle, even if one very specific outcome was pinned down by a risk description it could still be uncertain what impact it would have.

  • Combinations: The above reasons can occur in combinations.

Prevalence of Impact Spread for different reasons

A total of 14 risk registers from the collection were analysed, providing 384 examples of risk register items. Each item was assessed and decisions made about what reasons for Impact Spread were present. Judgement was often needed to decide what the intention of the writer had been. However, only a very small number of risk register items were so unclear that no decision could be made at all. Only 0.83% of judgements were prevented by lack of clarity.

Reason                        % of items affected*
Explicit multiple outcomes    23%
Multiple objects              71%
Multiple occasions            86%
Obvious variable extent       87%
Other variables extent        100%

* To be precise, this is the percentage of those risk register items where a decision could be reached that exhibited Impact Spread for the given reason; items too unclear to classify are excluded from the denominator.
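As I read that footnote, the calculation behind the table is straightforward. A minimal Python sketch, assuming each item's judgement is recorded as a set of applicable reasons (the real analysis was done by hand, and this data structure is hypothetical):

    # Hypothetical data structure: each item carries the set of reasons
    # judged present, or None where the item was too unclear to classify.
    items = [
        {"reasons": {"explicit multiple outcomes", "multiple occasions"}},
        {"reasons": {"other variables extent"}},
        {"reasons": None},  # too unclear to classify
        # ... one entry per risk register item
    ]

    def prevalence(items, reason):
        # Percentage of classifiable items exhibiting the given reason.
        decided = [i for i in items if i["reasons"] is not None]
        affected = [i for i in decided if reason in i["reasons"]]
        return 100.0 * len(affected) / len(decided)

    print(round(prevalence(items, "multiple occasions")))  # 50 on this toy data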

Each reason for Impact Spread, taken individually, is sufficient to require risk characterisation methods to cater properly for Impact Spread.

There were some variations between risk registers.

Risk Register  Type**  Number of items  Explicit multiple outcomes*  Multiple objects*  Multiple occasions*  Obvious variable extent*  Other variables extent*
rr001          civil   50               36%                          94%                100%                 100%                      100%
rr006          corp    15               20%                          73%                100%                 93%                       100%
rr007          proj    4                0%                           50%                50%                  50%                       50%
rr008          corp    11               18%                          64%                91%                  55%                       100%
rr009          corp    16               19%                          63%                94%                  100%                      100%
rr011          corp    9                0%                           33%                89%                  78%                       89%
rr012          corp    12               25%                          83%                83%                  75%                       100%
rr013          corp    6                83%                          67%                83%                  83%                       100%
rr014          corp    17               12%                          59%                71%                  88%                       100%
rr015          corp    35               40%                          69%                86%                  91%                       100%
rr016          corp    7                29%                          71%                71%                  100%                      100%
rr018          proj    52               15%                          63%                71%                  96%                       100%
rr019          corp    43               28%                          79%                91%                  84%                       100%
rr020          corp    107              15%                          65%                86%                  75%                       100%

* This is a percentage of the total items in the register, not just those that could be classified.

** These mean:

  • civil = a risk register of civil contingencies

  • corp = a corporate risk register

  • proj = a project risk register
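The overall prevalence figures in the first table are, to a good approximation, item-weighted averages of these per-register percentages. A rough check in Python, using the ‘explicit multiple outcomes’ column transcribed from the table above (small discrepancies come from rounding and from the different denominators noted in the two footnotes):

    # Item-weighted average of the per-register 'explicit multiple outcomes'
    # percentages, transcribed from the table above.
    registers = [  # (number of items, % with explicit multiple outcomes)
        (50, 36), (15, 20), (4, 0), (11, 18), (16, 19), (9, 0), (12, 25),
        (6, 83), (17, 12), (35, 40), (7, 29), (52, 15), (43, 28), (107, 15),
    ]
    total_items = sum(n for n, _ in registers)
    weighted_pct = sum(n * pct for n, pct in registers) / total_items
    print(total_items, round(weighted_pct))  # 384 23, matching the overall table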

Study 2: Summary

This is the second of a series of simple studies of real risk registers that aims to provide improved understanding of what people do when asked to work on a risk register.

The objective of this study was to examine the extent to which information about causality is captured in risk registers.

The results show that just under half of risk register items mention any causal links at all, and no risk register was designed to capture causal links between risk register items explicitly.

It was not possible to determine what proportion of relevant causal links were captured.

Causality within and between risk register items

When we think about how the future might unfold, causality is never far from our thoughts. The usual risk register format, which is simply a list, does not encourage thinking about causal links between risk register items, but some formats do encourage thinking about causes and effects of risk ‘events’.

The objective of this study was simply to establish the extent to which published risk registers from the research collection showed information about causality in each of three ways:

  • Between risk register items on the risk register, using a specific mechanism such as cross referencing or a dedicated column for inter-item links.

  • In other fields associated with the risk register item, such as a Consequences field.

  • Within the description/definition of the ‘risk’ itself.

The analysis was performed by examining the layout of each risk register and analysing each ‘risk’ description/definition for evidence of causality.
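For concreteness, here is a hypothetical record layout, sketched in Python, with somewhere for each of the three kinds of causal information listed above. The field names and the example risk are inventions for illustration; in particular, none of the registers studied had an explicit inter-item links field.

    # Hypothetical record layout with somewhere for each kind of causal
    # information. Field names and example content are illustrative only.
    risk_item = {
        "id": "R12",
        # Causality stated within the description itself ('due to ...')
        "description": "Flooding of the depot due to prolonged heavy rain",
        # Dedicated cause/consequence columns, as a few registers had
        "causes": "Prolonged heavy rain; blocked drainage",
        "consequences": "Service interruption; stock damage",
        # Explicit links to other items: absent from every register studied
        "caused_by_items": ["R03"],
        "leads_to_items": ["R17", "R20"],
    }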

Findings

A total of 14 risk registers from the collection were analysed, providing 384 examples of risk register items, of which 358 were risks in the accepted sense, the remainder being statements of fact or headings.

None of the risk registers had anywhere specific to record causal links between risk register items, though it was possible to describe causal links within risk descriptions/definitions and, sometimes, in other columns, and some items may have mentioned events that were in fact the subject of other risk register items.

Four of the risk registers had a column elaborating the effects of each risk, and one other had a column giving some potential causes.

Overall, 28% of the 358 risk descriptions/definitions included a causal link, and only one risk register had causal links in more than half of its descriptions/definitions. Averaged across registers rather than items, the proportion of descriptions/definitions with causality stated was 26% (the two figures differ because the overall figure gives more weight to the larger registers). The highest proportion of descriptions/definitions with causality in any one register was 58% and the lowest was 0%.

Where the risk register layout had a column for capturing cause or effect, the proportion of risk descriptions/definitions including causal links was 14%, whereas where there was no extra column the proportion was 32%.

The proportion of risk register items having causal links either within their risk description/definition or in an additional column was 48%.
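The gap between the overall 28% and the 26% average of per-register proportions is just a weighting effect: the overall figure counts every description equally, so larger registers dominate it, while the average counts every register equally. A toy Python illustration with made-up counts (not the study data):

    # Made-up counts, not the study data: one large and one small register.
    registers = [  # (descriptions with causal links, total descriptions)
        (20, 40),  # large register: 50%
        (1, 10),   # small register: 10%
    ]
    overall = 100 * sum(k for k, _ in registers) / sum(n for _, n in registers)
    per_register_average = sum(100 * k / n for k, n in registers) / len(registers)
    print(round(overall), round(per_register_average))  # 42 30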



Words © 2008 Matthew Leitch. First published 2008.