Working In Uncertainty
Matthew Leitch column: What's on your risk registers?
by Matthew Leitch, first published 2004.
(This article first appeared under the title ‘The Matthew Leitch Column: rethink your attitude to risk – start to think about sets of risk’ in Emerald Insight's publication ‘Balance Sheet’, volume 12 number 5, 2004.)
Risks are not like buses, so why do so many risk management guides and standards talk as if they are?
Risks are not physical objects with their own, obvious boundaries. They don't arrive in small numbers. You can't see them coming down the road, standing out from the mass of the future that is not risky.
There are lots of risks around – infinitely many. The number of risks a team of managers can list is limited only by time and imagination. If you want to list specific risks, especially if you want causes and implications to be part of each risk, book the rest of your life to do it.
The fact is that the things we are used to calling ‘risks’ and that go on risk registers are really sets of risks. Consider something specific-sounding like ‘Contravention of the Copyright, Designs & Patents Act leading to prosecution and/or adverse publicity resulting in loss of public confidence in the company.’ Even this is a set of risks, because of the many ways you could contravene the Act and the different levels of impact contraventions might have.
The simple observation that we are dealing with sets of risks, not individual risks, has many implications, some with great practical value. Here are 10:
It makes no sense to talk about ‘identifying risks.’ What we're really doing is defining risk sets and then trying to estimate their properties. Changing from talking about ‘identifying’ to talking about ‘defining’ is a helpful reminder that we need to be precise about what each item is about, not just write something like ‘technology risk’ and assume it has a meaning.
There are many alternative ways to group risks. Given the same situation, different styles of risk analysis give different risk sets. It's quite possible to end up with risk sets that are a jumble because they've been defined on different bases – because they arose from the different perspectives of workshop participants, for example.
Very often we're breaking down a total set of risks, and that means making a series of choices about how we would like to do it. Some alternatives are better than others so we need to think.
Some styles of risk analysis are quite explicit about how to break down the risks, but others are not, and then the breakdowns people produce are shaped by a mixture of influences.
It is possible, and usually desirable, to create a breakdown of risks that is complete and without overlaps between sets.
We can group risks so that they conveniently map onto components of our control framework.
Risk responses that have already been selected or are in place, that are anticipated, or whose characteristics are assumed by the risk analysis method have a profound effect on the risks people define.
Usually that's helpful. What is the point of analysing out many risks that all point to the need for a particular response we already know we will implement?
We can choose to define broad sets of risk first and then subdivide into more detail where worthwhile. That's efficient.
Those familiar ratings of ‘probability of occurrence’ and ‘impact if it does occur’ are illogical when applied to sets of risks whose impacts are not all equal – and they usually aren't. The probability rating might make sense, but the impact rating does not: it depends on which risk or risks in the set have occurred. We need to use probability distributions of impact, or approximations of them.
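To make this concrete, here is a minimal sketch in Python – with invented probabilities and impacts, not figures from any real register – of why a single impact rating is ill-defined for a risk set, and how a probability distribution of impact can be approximated instead:

```python
import random

# A hypothetical risk set: three ways a single register item could occur,
# each with its own annual probability and impact in £k (figures invented).
risks = [
    {"name": "minor breach",   "prob": 0.30, "impact": 5},
    {"name": "serious breach", "prob": 0.05, "impact": 200},
    {"name": "prosecution",    "prob": 0.01, "impact": 2000},
]

# The probability that at least one risk in the set occurs is well defined
# (here assuming the risks are independent) ...
p_any = 1.0
for r in risks:
    p_any *= 1 - r["prob"]
p_any = 1 - p_any

# ... but 'impact if it occurs' is not: it depends on which risks occur.
# So approximate the probability distribution of total annual impact instead.
def impact_distribution(risks, trials=100_000, seed=1):
    rng = random.Random(seed)
    return sorted(
        sum(r["impact"] for r in risks if rng.random() < r["prob"])
        for _ in range(trials)
    )

totals = impact_distribution(risks)
expected = sum(totals) / len(totals)
p95 = totals[int(0.95 * len(totals))]
print(f"P(any occurrence): {p_any:.3f}")
print(f"expected impact:   {expected:.1f}")
print(f"95th percentile:   {p95}")
```

With these invented figures the simulated expected impact comes out near the analytic value of 31.5, while the 95th percentile of impact is far higher – no single ‘impact’ number conveys both.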
A risk appetite line can't be trusted. I'm talking about the technique where you rate each ‘risk’ (i.e. risk set) on probability and impact, then say that ‘risks’ falling on the OK side of the line need no more action, while those on the wrong side need more action if possible.
The trouble is that risk sets vary in their level of aggregation as well as in the gravity of the risks within them, and equalising the level of aggregation seems difficult – in most cases probably impossible.
Suppose you had a risk set that was on the wrong side of the line but did not want to act on it. No problem. Just sub-divide the risk set into several smaller ones, each of which is on the OK side of the appetite line.
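A minimal sketch of the trick, assuming a toy appetite line of the form probability × impact > threshold (the threshold and all figures are invented):

```python
# A toy appetite line: a risk set needs attention when its rating,
# probability * impact, exceeds a threshold. All figures invented.
THRESHOLD = 50.0

def needs_action(prob, impact):
    return prob * impact > THRESHOLD

# One aggregated risk set: a 20% chance of a 500 (say, £k) loss.
print(needs_action(0.20, 500))                    # wrong side of the line

# The same exposure carved into five narrower, mutually exclusive
# sub-sets, each a 4% chance of the same loss:
subsets = [(0.04, 500)] * 5
print([needs_action(p, i) for p, i in subsets])   # all on the OK side

# Yet nothing real has changed - the total expected impact is identical.
print(0.20 * 500, sum(p * i for p, i in subsets))
```

Nothing about the underlying exposure moves; only the bookkeeping does, which is exactly why the line can't be trusted.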
For the same reason – aggregation – the familiar idea of ‘key risks’ needs to be rethought. The high ratings of some risk sets reflect, in part, their level of aggregation. If you don't want something to be in the top 10, all you need to do is sub-divide the set into several smaller sets that each end up outside the top 10.
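The same gaming works on rankings. A minimal sketch, with invented ratings and a top three standing in for the top 10:

```python
# Hypothetical ratings (probability x impact) for five risk sets; invented.
ratings = {"A": 900, "B": 700, "C": 650, "D": 500, "E": 450}
top3 = sorted(ratings, key=ratings.get, reverse=True)[:3]
print(top3)   # A heads the list of 'key risks'

# Sub-divide set A into three narrower sets carrying the same total rating:
del ratings["A"]
ratings.update({"A1": 300, "A2": 300, "A3": 300})
top3 = sorted(ratings, key=ratings.get, reverse=True)[:3]
print(top3)   # A's risks no longer appear 'key' at all
```

The risks formerly grouped under A are unchanged, yet none of them now registers as ‘key’.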
Most of the well known risk management standards and official guides to risk management based on them should be revised to take account of the fact that risks are not like buses. This includes AIRMIC/ALARM/IRM's ‘A risk management standard’, Turnbull's ‘Internal control: guidance for directors on the combined code’, the Australian/New Zealand Risk Management Standard AS/NZS 4360:1999, the APB's briefing paper ‘Providing assurance on the effectiveness of internal control’, the UK government's Orange Book, CIPFA's ‘Guidance on Internal Control and Risk Management in Principal Local Authorities and other Relevant Bodies to Support Compliance with the Accounts and Audit Regulations 2003’, the Charity Commission's guide ‘Charities and Risk Management’, COSO's Enterprise Risk Management Framework, and the Project Management Body of Knowledge.
Harsh? If you want to check this yourself take a look at a sample of risk register items to confirm that virtually all are sets of risks. Then examine some official documents and see if you can find any that consistently take account of all the preceding nine implications. Most trip up almost immediately by talking about ‘identifying’ risks.
Words © 2004 Matthew Leitch. First published 2004.