Working In Uncertainty
Two studies of risk registers
by Matthew Leitch; first published 2008.
Like them or loathe them, risk registers are an unavoidable fact of life in many organizations. Regulators demand them, auditors check for them, even customers are asking for them.
All around the world people are being given forms and asked to fill them in. Whether this involves a group meeting or just filling in screens on a database, the work boils down to filling in boxes in a table.
The purpose of these two studies is to describe what happens in practice when people fill in those boxes, and find some opportunities for improvement.
Research data source
The research is based on a collection of risk registers gathered by Googling the Internet for ‘risk register’ and related terms, then downloading all the registers and instruction/explanation documents found until an adequate number had been gathered.
This is not a statistical ‘sample’ of risk registers because they were not selected by any kind of random method and risk registers for publication are unlikely to be representative of all risk registers. Almost all published risk registers are from the public sector and most are quite short.
Nevertheless the collection of risk registers shows interesting variations and the phrasing of risk items seems typical of unpublished risk registers I have seen.
Study 1: Summary
This is the first of a series of simple studies of real risk registers that aims to provide improved understanding of what people do when asked to work on a risk register.
The objective of this study was to examine the incidence of a phenomenon referred to here as ‘Impact Spread’. If we want to characterise risks appropriately we must understand Impact Spread.
The results show that risks that could have a range of impacts are very common for a number of reasons. In fact, they are the overwhelming majority of risks. Consequently, methods of characterising risks that ask people to give views on potential impact must make it clear how Impact Spread is to be handled. Just asking for ‘the impact’, without explanation, and expecting a single number or rating is inappropriate.
What is Impact Spread? Take a risk like ‘Building Collapse’ (verbatim from a risk register in the research collection). What is the impact of ‘Building Collapse’? Clearly it depends on the building, the forewarning, whether anyone is in the building at the time, and probably other important factors too.
It could be as trivial as a garden shed falling down during a storm and chipping a garden gnome, or as horrific as a high rise building in a busy city centre toppling sideways, killing thousands.
This is what I call Impact Spread. It is a common feature of risk register items.
The risk register from which this example was taken has another box for ‘impact’, and the answer must be one of the categories ‘insignificant’, ‘minor’, ‘moderate’, ‘significant’, or ‘catastrophic’. Is the impact of ‘Building Collapse’ ‘minor’? It could be. What about ‘catastrophic’? Again, it could be. This is the problem most people experience as a result of Impact Spread, and there are a number of responses to it.
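The rating problem can be made concrete with a small sketch. The numeric scale, the category thresholds, and the two scenarios below are all invented for illustration; the point is only that a single register item with Impact Spread maps to a range of categories, not one:

```python
# Illustrative sketch only: the 0-100 impact scale and the category
# thresholds below are hypothetical, not taken from any real register.
def impact_category(score: float) -> str:
    """Map a numeric impact score to one of the five rating categories."""
    thresholds = [
        (5, "insignificant"),
        (20, "minor"),
        (50, "moderate"),
        (80, "significant"),
    ]
    for limit, label in thresholds:
        if score < limit:
            return label
    return "catastrophic"

# Two plausible scenarios for the same item, 'Building Collapse'.
scenarios = {
    "garden shed falls in a storm": 2,
    "high-rise topples in a city centre": 99,
}

categories = {impact_category(score) for score in scenarios.values()}
print(categories)  # the single item spans more than one category
```

Whatever single box is ticked, at least one realistic scenario for the item falls outside it.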
Reasons for Impact Spread
There are several reasons for Impact Spread, and this study looks at how prevalent each of them is across a range of typical published risk registers.
Prevalence of Impact Spread for different reasons
A total of 14 risk registers from the collection were analysed, providing 384 examples of risk register items. Each item was assessed, and decisions were made about which reasons for Impact Spread were present. Judgement was often needed to decide what the intention of the writer had been. However, only a very small number of risk register items were so unclear that no decision could be made at all: only 0.83% of judgements were prevented by lack of clarity.
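The counting involved can be sketched as follows. The item texts and reason labels here are invented for illustration; the arithmetic is the point: an item may exhibit more than one reason at once, items too unclear to judge are excluded from the prevalence denominator, and the unclear items are reported as a proportion of the total.

```python
# Illustrative sketch: items and reason labels are invented.
# Each item is tagged with zero or more reasons for Impact Spread,
# or marked None when too unclear to classify at all.
from collections import Counter

items = {
    "Building Collapse": {"range of events", "uncertain consequences"},
    "Loss of key staff": {"uncertain consequences"},
    "IT": None,  # too unclear to judge
    "Budget overrun": set(),
}

judged = {name: reasons for name, reasons in items.items() if reasons is not None}
unclear_pct = 100 * (len(items) - len(judged)) / len(items)

# Prevalence of each reason among the classifiable items.
reason_counts = Counter(r for reasons in judged.values() for r in reasons)
prevalence = {r: 100 * n / len(judged) for r, n in reason_counts.items()}
```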
[Table: prevalence of Impact Spread for each reason. Note: prevalence is the percentage of risk register items where a decision could be reached that exhibited Impact Spread for the given reason.]
Each reason for Impact Spread, taken individually, is sufficient to require risk characterisation methods to cater properly for Impact Spread.
There were some variations between risk registers.
[Table: variations between risk registers. Note: the percentages are of the total items in each register, not just those that could be classified.]
Study 2: Summary
This is the second of a series of simple studies of real risk registers that aims to provide improved understanding of what people do when asked to work on a risk register.
The objective of this study was to examine the extent to which information about causality is captured in risk registers.
The results show that just under half of risk register items mention any causal link at all, and that no risk register was designed to capture explicit causal links between risk register items.
It was not possible to determine what proportion of relevant causal links were captured.
Causality within and between risk register items
When we think about how the future might unfold, causality is never far from our thoughts. The usual risk register format, which is simply a list, does not encourage thinking about causal links between risk register items, but some formats do encourage thinking about causes and effects of risk ‘events’.
The objective of this study was simply to establish the extent to which published risk registers from the research collection showed information about causality in each of three ways: in the layout of the register (for example, columns for causes or effects), within the wording of individual risk descriptions/definitions, and as explicit causal links between risk register items.
The analysis was performed by examining the layout of each risk register and analysing each ‘risk’ description/definition for evidence of causality.
A total of 14 risk registers from the collection were analysed, providing 384 examples of risk register items, of which 358 were risks in the accepted sense, the remainder being statements of fact or headings.
None of the risk registers had anywhere specific to write about causal links between risk register items, though it was possible to describe causal links within risk descriptions/definitions and, sometimes, in other columns. Some items may also have referred to what were in fact other risk register items.
Four of the risk registers had a column in which text concerning the effects of the risk was elaborated, and one other had a column in which some potential causes were given.
Overall, 28% of the 358 risk descriptions/definitions included a causal link and only one risk register had causal links in more than half of its descriptions/definitions. The average proportion of risk descriptions/definitions with causality stated was 26%. The highest proportion of descriptions/definitions with causality was 58% and the lowest was 0%.
Where the risk register layout had a column for capturing cause or effect the proportion of risk descriptions/definitions including causal links was 14%, whereas if there was no extra column the proportion was 32%.
The proportion of risk register items having causal links either within their risk description/definition or in an additional column was 48%.
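The gap between the overall figure (28%) and the per-register average (26%) is worth noting: the first pools all descriptions together, the second averages each register's own proportion, so registers of different sizes weigh differently. A minimal sketch with invented counts shows how the two can diverge:

```python
# Illustrative sketch: the register sizes and counts below are
# invented, not the study's data; they show why a pooled proportion
# and an averaged proportion differ when registers vary in size.
registers = [
    {"items": 100, "with_causality": 50},  # one large register
    {"items": 10,  "with_causality": 0},   # two small ones
    {"items": 10,  "with_causality": 1},
]

pooled = 100 * sum(r["with_causality"] for r in registers) / sum(
    r["items"] for r in registers
)
averaged = sum(
    100 * r["with_causality"] / r["items"] for r in registers
) / len(registers)

print(round(pooled, 1), round(averaged, 1))  # prints: 42.5 20.0
```

Here the pooled figure is dominated by the large register, while the average treats every register equally; in the study the two happened to be close (28% vs 26%).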
Words © 2008 Matthew Leitch. First published 2008.