Working In Uncertainty

Research on risk management within performance management

by Matthew Leitch; first published 27 February 2006.

Introduction

More than a year ago I conducted an online survey of public and private sector organizations to find out what people were doing to embed risk management within performance management. The survey had a lot of questions, so reaching 30 responses was hard work, and I am very grateful to those who gave the time to respond. Since respondents were self-selected volunteers, not much can be deduced from the statistics of their answers. However, some important overall observations can be made, and the individual cases were fascinating to study. The topic is still hot; indeed, client work on the issues it raises is the reason it has taken me so long to get around to writing up these results!

The impression I got from the responses is that the issues raised in the survey had not been considered systematically by most respondent organizations. Nearly all respondents gave patchy answers; in one area they would have good management of risk and uncertainty in place, and yet be missing equally important controls in another. Responses varied greatly from one organization to another. There were also some risk managers who were very satisfied with their risk management systems despite lacking important controls in the performance management processes.

This lack of systematic design and implementation is consistent with the lack of published articles and books dealing with the subject in any depth.

The main practical messages from this research are that (1) in most organizations there are many simple steps that can be taken to ensure that risk and uncertainty are considered and managed throughout performance management activities, and that (2) this could greatly improve the impact of performance management processes.

Each group of questions in the survey is covered in a sub-section below, which highlights the controls that organizations should have in place.

Measurement uncertainty

Most organizations today measure a variety of things about themselves, in addition to their finances, and call them something like ‘Key Performance Indicators’ (KPIs). In practice any kind of measurement is fraught with difficulties such as shifting definitions, system changes, clerical errors, and system errors. Even audited financial numbers are not reliable, to an extent that few people appreciate. Accounting is an art that relies to some extent on estimates about what will happen in the future. You may think your profit is £X, but in fact this is just one of the possible numbers the auditors would have accepted as justifiable choices. Non-financial KPIs often involve a lot more uncertainty because the culture of checking and double checking numbers that is strong in accounting departments is not so strong elsewhere. Also, some KPIs, like the results of customer satisfaction surveys, rely on sampling or subjective ratings that can be affected by a person's mood from one moment to the next.
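To make the sampling point concrete, here is a minimal Python sketch (the KPI, sample size, and scores are all invented for illustration) of the kind of error band that could be shown next to a survey-based KPI:

```python
import math

# Hypothetical example: a customer satisfaction KPI based on a sample of
# survey responses scored 1-5. All figures are invented for illustration.
scores = [4, 5, 3, 4, 2, 5, 4, 3] * 19  # 152 illustrative responses

n = len(scores)
mean = sum(scores) / n
variance = sum((s - mean) ** 2 for s in scores) / (n - 1)
std_error = math.sqrt(variance / n)

# Approximate 95% error band using the normal approximation.
low, high = mean - 1.96 * std_error, mean + 1.96 * std_error
print(f"Satisfaction KPI: {mean:.2f} (95% band roughly {low:.2f} to {high:.2f}, n={n})")
```

Showing a band like this alongside the headline figure is one simple way of displaying explicit information about uncertainty near the KPI, which is what one of the survey questions below asks about.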

In my work I have frequently found KPIs to be unreliable. For example, one company had weekly ‘quality reviews’ of insurance claims processing. A claims operator would be selected and would spend a few days checking claims processed by his/her colleagues. The number of errors found was tracked for several months until an internal auditor reperformed a quality check and discovered that the quality reviews were finding only half the errors made. In other words, the numbers that had been reported, analysed, and even praised, were 100% wrong.

I cannot explain why so many companies have non-financial information on their internal reports that is wrong, but it happens. Sometimes the error is in plain view and anybody who looked at the management report critically would realise something was amiss, and yet still it is not picked up.

Measurement uncertainty is a result of incorrect collection, summarising, or presentation of data, and inherent uncertainties such as from samples and subjective ratings. Since even audited financial numbers contain some measurement uncertainty it is hard to believe that any internal report of KPIs is entirely free of measurement uncertainty.

Here are the relevant questions from the survey, and the answers given:

Question: Is there any KPI whose exact value is in any way uncertain, for example because it is based on surveys, samples, estimates, indirect measurement, or is considered easy to manipulate dishonestly?
Answer distribution:
25 :Yes
  5 :No
  0 :Don't know

Question: Is there any KPI whose exact value is in some way uncertain where the report that shows the KPI displays near it some explicit information about that uncertainty? (For example: a health warning, error bands, confidence level, caveats, notes on the less certain components of the number.)
Answer distribution:
19 :Yes
  8 :No
  3 :Don't know

The answers show that most respondents recognised there was measurement uncertainty. The 17% of respondents who saw no measurement uncertainty may have been quite right, but it seems unlikely. Incidentally, two of these five said their reports gave information about measurement uncertainty, despite having said there wasn't any uncertainty.

Also, most respondents could think of at least one KPI where uncertainty was disclosed. Of the 25 who recognised some measurement uncertainty as being present, 17 could think of at least one KPI where that uncertainty was disclosed in some way.

This leaves 8 respondents for whom there was measurement uncertainty but no information about it was given when the KPIs were reported.

There is no way to tell from this survey whether all uncertain numbers had information about their uncertainty. My observation over the years, looking at many reports, is that only a small proportion of measurement uncertainty is ever disclosed. People prefer to present their numbers as facts.

The implications of this include the risk of decisions being made on the basis of information that is less reliable than it appears, and failure to take actions that would reduce measurement uncertainty. For example, in banks it is currently common to maintain a database for reporting ‘operational risk events’ and their impact e.g. cock-ups, frauds, accidents. Of course people are busy and also reluctant to report their mistakes, so it can be very difficult to capture everything in the database. If you take the value of incidents reported in the database as being the true cost of operational risk then you are making a big mistake. In reality many banks don't even know how much they don't know.

Showing measurement uncertainty clearly is also an everyday reminder of the organization's approach to risk and uncertainty. It shows a healthy awareness, rather than a tendency to ignore or actively suppress uncertainty and do nothing about it.

Key Point: Organizations should disclose measurement uncertainty, in some way, for all numbers on internal management reports that are in some way uncertain. Make it an official policy. This encourages more care over preparing numbers and reduces the risk of unwittingly making important decisions on the basis of misleading data.

Variability and time series

It is very hard to spot trends and patterns over time unless you see the numbers over time displayed on one page.

Where performance numbers are seen without their history it is hard to judge the importance of changes. Is a 5% increase important? Perhaps ‘yes’ if the number hasn't changed that much for years, but ‘no’ if it usually changes by much more, and in an unpredictable and trendless way. Most KPIs show variability that is only partly understood, and a simple way to get a feel for it is to show the past history of the number, preferably on a graph.
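As a rough sketch of the point about variability (the KPI history and latest value are invented), comparing the newest figure with the spread of its own past values is the simplest way to decide whether a movement deserves attention:

```python
import statistics

# Hypothetical monthly KPI history; all figures are invented for illustration.
history = [102, 98, 105, 101, 97, 103, 99, 104, 100, 96, 102, 98]
latest = 110

mean = statistics.mean(history)
stdev = statistics.stdev(history)

# Crude control-chart style test: treat moves beyond roughly two standard
# deviations of past variation as worth investigating.
if abs(latest - mean) > 2 * stdev:
    print(f"{latest} is unusual given history (mean {mean:.1f}, sd {stdev:.1f})")
else:
    print(f"{latest} looks like normal variation (mean {mean:.1f}, sd {stdev:.1f})")
```

A graph of the series with bands at roughly the same distances carries the same message at a glance.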

Here is the relevant question from the survey, and a summary of the answers given:

Question: Are any KPIs presented as a time series i.e. a sequence of at least 3 numbers showing past history as well as the latest result, or a graph showing that information?
Answer distribution:
22 :Yes
  7 :No
  1 :Don't know

Encouragingly, most respondents used time series to some extent, but nearly a quarter did not.

Key Point: Organizations should show KPIs as time series, preferably with graphs. This reduces the risk of mistaking normal variations for important changes that require special action, and greatly increases the chances of understanding what is going on.

Auditing KPIs

The easiest way to spot most incorrect numbers is to scrutinise the numbers very carefully, preferably using graphs and statistical analysis. Going further than this might mean performing an audit of some kind that probes the way the data are collected, summarised, and reported.
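The first question below concerns this kind of screening; a minimal sketch of a screening pass (the KPI names, values, and the 30% tolerance are all invented) might look like this:

```python
# Hypothetical screening pass over several KPIs before the report goes out.
# KPI names and values are invented for illustration.
kpi_history = {
    "complaints":       [34, 31, 36, 33, 35, 30, 32],
    "orders_shipped":   [812, 798, 805, 820, 0, 811, 808],   # a suspicious zero
    "first_call_fix_%": [71, 73, 70, 72, 74, 95, 72],        # a suspicious spike
}

def odd_values(series, tolerance=0.3):
    """Flag values differing from the median of the other values by more than `tolerance` (30%)."""
    flagged = []
    for i, value in enumerate(series):
        rest = sorted(series[:i] + series[i + 1:])
        median = rest[len(rest) // 2]
        if median and abs(value - median) / median > tolerance:
            flagged.append((i, value))
    return flagged

for name, series in kpi_history.items():
    for position, value in odd_values(series):
        print(f"Query before reporting: {name} period {position + 1} looks odd ({value})")
```

The point is not the particular rule but that someone, or something, asks the question before the number is circulated.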

Question: Are KPI values screened by anyone with the specific goal of identifying and querying odd looking values before the KPIs are reported and used?
Answer distribution:
26 :Yes
  2 :No
  2 :Don't know

Question: Is there any non-financial KPI where the processes that provide values of that KPI have been specifically reviewed and tested by internal audit work in the last two years?
Answer distribution:
21 :Yes
  9 :No
  0 :Don't know

Respondents nearly all thought their KPIs were scrutinised for errors before use, but 30% had not done any audit of any of the KPIs in the last two years.

Key Point: All KPIs should be scrutinised effectively before use, and all should be subjected to some more rigorous audit from time to time.

Existence of secondary indicators

The Holy Grail of performance managers is a small set of indicators that truly show everything that matters. The next best thing is one that responds to everything important and so triggers managers to look at more detailed information when they see something they don't understand or expect.

Such an informative set of KPIs is extremely difficult to achieve and in reality most KPIs are selected on judgement rather than on empirically demonstrated importance. In addition, conditions change. When something is identified as a KPI people start managing it, and soon it has less importance than it did. Some other indicator correlates better with performance.

Consequently organizations need to collect and look at data that might turn out to be ‘Key’. Of course, every organization of any size is computerised and is awash with data, much of it unused. The challenge is to use it without being overwhelmed by it.
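As a sketch of one way to triage that mass of data (the indicator names and figures are invented, and statistics.correlation needs Python 3.10 or later), secondary indicators can be routinely ranked by how strongly they track an outcome of interest, as a crude first filter for candidate KPIs:

```python
from statistics import correlation  # available from Python 3.10

# Hypothetical monthly performance outcome and three secondary indicators
# already sitting in company systems. All figures are invented.
outcome = [50, 53, 49, 55, 58, 54, 60, 63, 61, 66, 68, 70]
candidates = {
    "website_visits":   [5.1, 5.3, 5.0, 5.6, 5.9, 5.5, 6.1, 6.4, 6.2, 6.7, 6.9, 7.1],
    "staff_turnover_%": [8, 9, 7, 8, 9, 8, 7, 9, 8, 7, 8, 9],
    "repeat_orders":    [120, 126, 118, 131, 138, 129, 142, 149, 145, 155, 160, 165],
}

# Rank the candidate indicators by how strongly they track the outcome.
ranked = sorted(candidates.items(),
                key=lambda item: abs(correlation(item[1], outcome)),
                reverse=True)
for name, series in ranked:
    print(f"{name}: r = {correlation(series, outcome):+.2f}")
```

Correlation is the crudest possible test, but even this is better than ignoring data that is already being collected.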

Question: Are there any performance indicators other than KPIs for which values are collected and reported regularly, at some level in the organisational unit?
Answer distribution:
25 :Yes
  4 :No
  1 :Don't know

Most respondents recognised that ‘secondary’ indicators existed in the organizational unit for which they were responding.

Key Point: Organizations should accept that secondary indicators are necessary, and know how to use them efficiently.

Use of secondary indicators

If secondary indicators involved as much work as KPIs life would be full of indicators with little time left over for anything else. And yet if a secondary indicator shows something important that has not shown up on the KPIs then ideally someone should notice and report it upwards.
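The second question below asks about automated trigger conditions; as a minimal sketch (the indicator names and thresholds are invented), the mechanism needs to be no more elaborate than comparing each secondary indicator with a defined limit and reporting breaches upwards:

```python
# Hypothetical trigger conditions on secondary indicators; names and
# thresholds are invented for illustration.
triggers = {
    "overtime_hours":     lambda v: v > 400,   # alert if overtime exceeds 400 hours
    "supplier_lead_days": lambda v: v > 21,    # alert if lead time beyond 3 weeks
    "helpdesk_backlog":   lambda v: v > 150,   # alert if backlog tops 150 tickets
}

latest_values = {"overtime_hours": 310, "supplier_lead_days": 26, "helpdesk_backlog": 90}

alerts = [name for name, breached in triggers.items() if breached(latest_values[name])]
if alerts:
    print("Report upwards:", ", ".join(alerts))
else:
    print("No secondary indicators breached their trigger conditions this period.")
```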

Question: Are there any performance indicators other than KPIs where odd or otherwise interesting values are supposed to be spotted by someone and reported upwards? (It might even be that an explicit trigger value has been set above/below which some kind of alert is expected.)
Answer distribution:
18 :Yes
  7 :No
  5 :Don't know

Question: Are there any performance indicators other than KPIs where a computer system has been set to report values that break a defined trigger condition?
Answer distribution:
  8 :Yes
16 :No
  6 :Don't know

Question: Are there any other established reporting procedures whereby changes or events that have unexpected relevance to the organisational unit's performance and strategy get reported promptly? (The point here is that KPIs are designed to track things believed to be relevant to performance, but it is possible to be wrong and to miss things that were not originally seen as important but turn out to be later.)
Answer distribution:
13 :Yes
14 :No
  3 :Don't know

Five respondents could think of no mechanisms at all for identifying changes of unexpected relevance to performance not picked up in the KPIs.

Key Point: Organizations should never assume that their selected KPIs will show everything of importance and that other indications can be ignored. Even those few organizations that empirically test the relevance of their KPIs should recognise that things change.

Involvement in the selection of KPIs

Selecting KPIs is not easy and without empirical validation any selection is no more than an educated guess. Having said that, a wise precaution is to involve the right people in selecting KPIs initially. Of course many people do not have that luxury as their KPIs are simply imposed from above.

Question: To what extent was the original selection of KPIs driven by imposed requirements such as regulation and measures used higher up in the organisation?
Answer distribution:
  8 :KPIs entirely imposed
16 :Imposed requirements were some influence
  5 :Not a factor
  1 :Don't know

Question: Were the end users of the KPI report(s) involved in selecting KPIs when the scorecard/report(s) were originally created?
Answer distribution:
22 :Yes
  7 :No
  1 :Don't know

(Excluding respondents for whom KPIs were entirely imposed:
18 :Yes
  3 :No
  0 :Don't know)

Question: Were internal performance management specialists involved in selecting KPIs when the scorecard/report(s) were originally created?
Answer distribution:
15 :Yes
14 :No
  1 :Don't know

(Excluding respondents for whom KPIs were entirely imposed:
13 :Yes
  8 :No
  0 :Don't know)

Question: Were IT specialists and others responsible for the processes that would be used to collect and report KPI data involved in selecting KPIs when the scorecard/report(s) were originally created?
Answer distribution:
  9 :Yes
18 :No
  3 :Don't know

(Excluding respondents for whom KPIs were entirely imposed:
  8 :Yes
12 :No
  1 :Don't know)

Question: Were external consultants involved in selecting KPIs when the scorecard/report(s) were originally created?
Answer distribution:
  9 :Yes
19 :No
  2 :Don't know

(Excluding respondents for whom KPIs were entirely imposed:
  5 :Yes
16 :No
  0 :Don't know)

The desire for end user involvement is particularly strong: so strong that 3 respondents indicated that end users were involved in the selection of KPIs even though the KPIs were entirely imposed.

Most organizations seem to have decided that involving IT specialists or the people who would produce KPI numbers in selecting KPIs was not important, and perhaps the thinking was that practicalities should not ‘bias’ the selection of ideal KPIs. That's a pity, since asking for numbers that are not already collected and available on a computer system is likely to lead to unreliable data being supplied. It's a compromise but if I had a choice I would involve IT specialists to help me understand what data are already available.

Recording agreement on selection of KPIs

A simple control is to record agreement, if any, when KPIs are selected.

Question: How, if at all, was agreement on the original selection of KPIs signified?
Answer distribution:
  2 :Individual signatures/e-mails of agreement
12 :Verbal agreement but formally minuted
  6 :Verbal agreement only
  7 :Not signified at all
  3 :Don't know

(Excluding respondents for whom KPIs were entirely imposed:
  1 :Individual signatures/e-mails of agreement
  9 :Verbal agreement but formally minuted
  6 :Verbal agreement only
  3 :Not signified at all
  2 :Don't know)

Clearly, selecting KPIs is not like authorising an expenditure or starting a project. Although the choice may be at least as important, initial agreement is not always treated as crucial. In some cases it may have been that a tangible product, some kind of KPI report, was needed in order to get some experience and stimulate interest in KPI use.

Use of a causal model

Leading practice in ‘balanced scorecard’ development is to use some kind of causal model showing how actions management take are thought to lead to desired outcomes. The model represents beliefs about how the world works.

Question: To what extent was the selection of KPIs based on a causal model or other causal rationale of some kind which related the strategy to the eventual outcomes and so suggested what could be measured to track progress?
Answer distribution:
  4 :Quantified model
  2 :Boxes and arrows but not quantified
  8 :Model was narrative only
  9 :No explicit rationale or model
  7 :Don't know

(Excluding respondents for whom KPIs were entirely imposed:
  4 :Quantified model
  2 :Boxes and arrows but not quantified
  7 :Model was narrative only
  4 :No explicit rationale or model
  4 :Don't know)

Most respondents who did not have their KPIs entirely imposed had some kind of causal model to justify their choice of KPIs, though it was usually narrative only. Where KPIs were imposed there was rarely a causal model to justify them, even in narrative form.

Recognition of uncertainties in beliefs about causal mechanisms

From a risk and uncertainty management perspective the use of an explicit model is important as a way to draw out beliefs and, potentially, uncertainty. The amount of uncertainty surrounding a causal model can easily be underestimated. Subjectively, we think we ‘know the business’. In reality there are severe limits to this knowledge unless we have taken steps to gather and use data. We may have a strong belief that one factor drives another, but it is much harder to be certain how strongly it does so at different starting levels, how to combine the effects of more than one driver, whether drivers not yet identified have a significant effect, or whether an indirect feedback loop cancels out the effect we predict and perhaps even reverses it.
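To show why an explicit model makes these uncertainties easier to handle, here is a toy sketch (the drivers, parameter ranges, and model form are all invented) in which uncertain link strengths are expressed as distributions and combined by Monte Carlo simulation, one of the options described in the questions below:

```python
import random

random.seed(1)

# Toy causal model with uncertain link strengths, run as a Monte Carlo
# simulation. The model and all its parameters are invented for illustration:
# outcome = baseline + strength_a * training_days + strength_b * marketing_spend
results = []
for _ in range(10_000):
    strength_a = random.uniform(0.5, 2.0)          # uncertain effect of training
    strength_b = random.normalvariate(0.8, 0.3)    # uncertain effect of marketing
    baseline = random.normalvariate(100, 5)
    outcome = baseline + strength_a * 10 + strength_b * 20
    results.append(outcome)

results.sort()
print(f"Median outcome: {results[len(results) // 2]:.0f}")
print(f"10th to 90th percentile range: {results[1000]:.0f} to {results[9000]:.0f}")
```

Even a toy model like this forces the uncertain link strengths into the open, where they can be discussed, challenged, and researched.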

Question: Were specific steps taken to identify uncertainties in the selection of KPIs and any rationale/model used? (For example: explicit consideration of the possible impact of things that are hard to know, like the impact of indirect causal links, human behaviour problems, the strength of causal links, the length of delays in causal links, and things that seem random.)
Answer distribution:
  7 :Yes
17 :No
  6 :Don't know

(Excluding respondents for whom KPIs were entirely imposed:
  6 :Yes
10 :No
  5 :Don't know)

Question: To what extent was uncertainty explicitly recognised in the model?

Explanation of options:

  • Not recognised specifically: i.e. documentation is as if everything in the model was certain. No acknowledgement of uncertainty except perhaps for some general caveats.

  • Uncertain bits highlighted: i.e. documentation acknowledges some uncertainty and records where it is, for example by marking it, or using notes.

  • Highlighted and alternatives shown: i.e. uncertainties flagged or noted and alternative theories shown.

  • Numbers estimated as distributions: i.e. uncertain model inputs and/or parameters have been represented in the model using probability distributions rather than single 'best guess' figures, but this has not been used to calculate the combined impact of uncertainty.

  • Monte Carlo simulation: i.e. uncertain numbers have been estimated as probability distributions and then Monte Carlo simulation has been used to calculate their combined impact on outcomes.

  • Other quantitative combination: i.e. uncertain numbers estimated as probability distributions and combined by some mathematical method other than Monte Carlo simulation.

Answer distribution:
  0 :Monte Carlo simulation
  2 :Other quantitative combination
  0 :Numbers estimated as distributions
  2 :Highlighted and alternatives shown
  8 :Uncertain bits highlighted
  9 :Not recognised specifically
  5 :No model
  4 :Don't know

(Excluding respondents for whom KPIs were entirely imposed:
  0 :Monte Carlo simulation
  1 :Other quantitative combination
  0 :Numbers estimated as distributions
  2 :Highlighted and alternatives shown
  6 :Uncertain bits highlighted
  6 :Not recognised specifically
  4 :No model
  2 :Don't know)

A little under half those who had some choice about their KPIs did something specific to look at uncertainties connected with their choice. None of those with imposed KPIs did so.

Of the 15 that had some choice as to their KPIs, and used a model, 9 represented uncertainty in it in some way.

This suggests that having a model did make it more likely that uncertainties would be considered. In fact, taking just those with some choice over their KPIs, of 13 who had some kind of explicit model, 6 looked for uncertainties, while of the 4 with no explicit model none looked for uncertainties.

The tendency not to consider uncertainties affecting the choice of KPIs is a considerable missed opportunity. Highlighting uncertainties at the start prompts actions to reduce them and makes adapting and improving the KPIs much easier. It is human nature to avoid changing our minds in public, but if we said at the start that there were uncertainties and that we were going to learn more and keep on improving, then changing the KPIs later is only doing what we said we would do.

Key point: When selecting KPIs, carefully identify and document the uncertainties related to the choice and the rationale supporting it. This may be easier and more effective if you have an explicit causal model underpinning the choice of KPIs.

Research to reduce uncertainties relating to selection of KPIs

Research (in the broadest sense) can be carried out to reduce uncertainties related to the selection of KPIs. This research might have been planned before the analysis started, or be planned in response to uncertainties identified.

Question: Was any kind of research carried out during selection of KPIs or development of the underlying model to reduce uncertainties? (This includes data gathering, modelling, a programme of interviews, etc but excludes research into methodologies.)
Answer distribution:
13 :Yes
13 :No
  4 :Don't know

(Excluding respondents for whom KPIs were entirely imposed:
  9 :Yes
10 :No
  2 :Don't know)

Around half had not bothered with research.

Key point: Finding out more is usually essential, whether it is done only after the initial selection has been made, or both for the initial selection and subsequently.

Evolution of KPIs

It makes sense to plan to review and adjust the selection of KPIs from time to time and to plan activities that will reduce uncertainties about KPI selection. Kaplan and Norton stress the value of empirically testing imagined causal relationships between variables.

Question: When the KPIs were first selected how often, if at all, was it planned to review the selection of KPIs or underlying rationale/model (if there was one)?
Answer distribution:
  8 :First review after a quarter
12 :First review after a year
  3 :First review after more than a year
  5 :Not planned
  2 :Don't know

(Excluding respondents for whom KPIs were entirely imposed:
  7 :First review after a quarter
  6 :First review after a year
  2 :First review after more than a year
  5 :Not planned
  1 :Don't know)

Question: How often, if at all, has the selection of KPIs actually been reviewed to identify improvements/adaptations?
Answer distribution:
  5 :Quarterly on average
12 :Annually on average
  3 :Less than once a year on average
  6 :Not done specifically with that in mind
  4 :Don't know

(Excluding respondents for whom KPIs were entirely imposed:
  4 :Quarterly on average
  7 :Annually on average
  2 :Less than once a year on average
  6 :Not done specifically with that in mind
  2 :Don't know)

Question: How often, if at all, has the selection of KPIs actually been changed?
Answer distribution:
  2 :Almost constantly
  2 :About quarterly on average
  3 :About half yearly on average
14 :About annually on average
  6 :No changes in the last 2 years
  3 :Don't know

(Excluding respondents for whom KPIs were entirely imposed:
  2 :Almost constantly
  2 :About quarterly on average
  2 :About half yearly on average
  9 :About annually on average
  5 :No changes in the last 2 years
  1 :Don't know)

Question: If there were doubts about the selection of KPIs or the rationale supporting them when the KPIs were originally selected, were specific plans made to gather information, e.g. by research or by trial and error, that would reduce those uncertainties over time?
Answer distribution:
  8 :Yes
15 :No
  7 :Don't know

(Excluding respondents for whom KPIs were entirely imposed:
  5 :Yes
10 :No
  6 :Don't know)

Although more than half of organizations planned to review their selection of KPIs within a year, the actual reviews were less frequent. Imposed KPIs tended to be reviewed annually. This lack of review may be linked to the fact that fewer than half of the organizations who had a choice in their KPIs had made any plans to find out information that would have helped them review their KPIs and select better or more up to date ones.

It may be that many of these organizations were reporting their KPIs monthly and thought that at this frequency it would be a long time before enough history had built up to reveal the presence or absence of links between indicators. That is true. It would take literally years for patterns to be revealed, which is why KPIs need to be gathered in a much more detailed way to support rapid learning and empirical analysis.
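As a sketch of the kind of empirical test that more granular data makes possible (all figures are invented, and deliberately constructed so that the link shows up clearly; statistics.correlation needs Python 3.10 or later), with enough data points a supposed causal link can be checked rather than assumed:

```python
from statistics import correlation  # available from Python 3.10

# Why granularity matters: monthly, whole-unit totals give few points to learn from.
months_of_history = 24
weekly_points_per_unit = 52 * 2
units = 8
print(f"Monthly whole-unit series: {months_of_history} data points in two years")
print(f"Weekly series kept per unit: {weekly_points_per_unit * units} data points in two years")

# With enough points, a supposed causal link can be tested empirically,
# e.g. does last period's training predict this period's error rate?
# (Figures invented and constructed so the relationship is visible.)
training_hours = [5, 9, 2, 7, 8, 3, 6, 10, 4, 7, 9, 2, 5, 8]
error_rate = [3.9, 4.0, 3.2, 4.6, 3.6, 3.4, 4.4, 3.8, 3.0, 4.2, 3.6, 3.2, 4.6, 4.0]
lagged = correlation(training_hours[:-1], error_rate[1:])
print(f"Correlation of training with next period's error rate: {lagged:+.2f}")
```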

Key points: The selection of KPIs should be reviewed more than annually and plans made to find out information that will help with an effective review. Regardless of how often the KPIs are reported, they should not be summarised so far that learning from them in a reasonable period of time is prevented. Monthly totals or averages for the whole organizational unit are far too summarised.

Adjustments to targets

A common problem with target setting is that targets quickly become obsolete. The future just isn't predictable enough to set targets that stay exactly correct for a long period, except for certain kinds of relative targets. The longer the time between revisions of the targets the more of a problem this is. Ideally, an organization will revise its internal management targets as often as possible to take into account the latest information.

Question: How often can targets be set/adjusted?
Answer distribution:
  6 :At any time
  1 :Monthly
  6 :Quarterly
  2 :Half yearly
12 :Annually
  1 :Less than once a year
  2 :Don't know

Question: On how many occasions in the last year have targets actually been set/adjusted?
Answer distribution:
  1 :Five or more times
  0 :Four times
  0 :Three times
  3 :Twice
15 : Once
11 :Don't know

Organizations actually change their targets less frequently than their official policy allows for, though the finding is clouded by the large number of respondents who did not know how many times targets had actually been changed. The reasons for this are not visible in this survey but obviously if the process is time consuming and emotionally bruising (which it often is in my personal experience) then people will be reluctant to do it any more than strictly necessary. That is only one possible explanation.

Key Point: Organizations should revise their targets as often as they can cope with, and increase their ability to work with rapidly changing targets. At a higher frequency of revision, most targets will need only small adjustments on each occasion.

Reliance on variance analysis as a control method

Analysis of variances between targets and actual results is a long established management technique, but it is reactive and a limited way to manage risk and uncertainty. One might expect that the greater an organization's belief in management by negative feedback loops, the less it will see the need to think about the future and all its alternatives.

Question: What is the perceived importance to management control of explaining differences between targets and actuals, and motivating people to reduce those differences?
Answer distribution:
  4 :Not done at all
  5 :Done but not important
12 :Done and important
  5 :The main mechanism
  4 :Don't know

When the perceived importance of variance reduction was compared with scores for risk management within performance management there was no correlation at all. Perhaps the tendency for control oriented organizations to be strong in all areas counteracts a tendency to rely on variances at the expense of looking ahead at the future and what it might bring. Perhaps both my theories are wrong!

Key Point: Organizations should not rely heavily on variance reduction as their main control mechanism. Think of it more as a safety net.

Forecasting

If relying on feedback from variances is too reactive and slow for many purposes, the obvious thing to do is to start looking forward and one part of that is making forecasts. (How forecasts are used is important, but not covered in this survey.)

Question: Are there forecasts (at any frequency) of financial KPIs?
Answer distribution:
23 :Yes
  6 :No
  1 :Don't know

Question: Are financial KPIs forecast or reforecast more than once a year?
Answer distribution:
18 :Yes
11 :No
  1 :Don't know

Question: Are any non-financial KPIs forecast, at any frequency?
Answer distribution:
21 :Yes
  8 :No
  1 :Don't know

Question: Are any non-financial KPIs forecast or reforecast more than once a year?
Answer distribution:
15 :Yes
12 :No
  3 :Don't know

Forecasting financial KPIs is slightly more common than forecasting non-financial KPIs but notice that the survey does not reveal how many of the KPIs have forecasts. It might be just some of them.

Most organizations do forecasts more than once a year but a sizeable minority do not. This suggests that in this minority the forecasts do not have a significant role in management control during the year.

Key Points: Most KPIs should be forecast and reforecast more than once during the year, and these forecasts should be derived from planned actions and expectations about the environment. Forecasts should never be aspirations. Actions should be chosen with some idea of the likely impact on KPIs.

Expressing and analysing confidence of achieving outcomes

A common step in performance management systems is that people are asked to agree to performance targets. Obviously it makes no sense to accept agreement to a target if the person agreeing to it also says it will not be achieved, or even that it is impossible to achieve, and here lies an important problem. In the real world our future achievements are uncertain to some degree. If a target has some stretch in it then our achievement of that objective is almost certain to be uncertain to a degree we cannot ignore. Will our plans give the results we desire?

A rational response to this problem is to keep that uncertainty in mind and use it as a spur to action planning. For example, what would we do if a particular action proved less effective, or more effective, than expected? What could we do to research further the likely impact of a new idea?
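A minimal sketch of what keeping that uncertainty in mind can look like in practice (the ratings and probabilities are invented) is to record confidence against several levels of achievement rather than a bare yes/no on the target, in the spirit of the ‘confidence distribution’ option explained below:

```python
# Hypothetical confidence judgements against several achievement levels.
# All ratings and probabilities are invented for illustration.
confidence = {          # "How confident are we of reaching at least this rating?"
    "1 star": 0.95,
    "2 stars": 0.70,
    "3 stars": 0.40,
    "4 stars": 0.10,
}

target = "3 stars"
print(f"Confidence of reaching at least the {target} target: {confidence[target]:.0%}")

# The same judgements also give the chance of landing on each rating exactly,
# which is far more informative than a yes/no on the target.
levels = list(confidence.items())
print(f"Chance of falling below 1 star: {1 - levels[0][1]:.0%}")
for (level, p_at_least), (_, p_next) in zip(levels, levels[1:] + [("", 0.0)]):
    print(f"Chance of finishing on exactly {level}: {p_at_least - p_next:.0%}")
```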

Question: Is the degree of confidence in achieving outcomes assessed, even subjectively, in a systematic way at any stage in planning for performance or performance improvements as measured on the KPIs? (This could be anything from people saying if they are/are not confident of achieving a target, to performing calculations to assess the likelihood of achieving various levels of outcome.)
Answer distribution:
17 :Yes
  9 :No
  4 :Don't know

Question: How, if at all, is confidence in outcomes expressed? (If more than one technique is used please select the most sophisticated technique that is widely used in the organisational unit.)
Answer distribution:
  0 :Numerically e.g. probability
  9 :With a scale of words e.g. ‘very confident’
16 :Not expressed systematically
  5 :Don't know

Question: Against what outcomes is confidence expressed, if at all? (If more than one technique is used please select the most sophisticated technique that is widely used in the organisational unit.)

Explanation of options:

  • Against a single level of achievement: i.e. for a given measure of performance, confidence of reaching a single, specific level of performance is expressed e.g. ‘I'm 70% sure we'll sell more than 2 million units.’

  • As a confidence distribution: i.e. for a given measure of performance, a number of potential performance levels are considered and a confidence is attached to each. e.g. ‘I'm 20% sure we'll be rated 1 star, 30% sure of 2 stars, and 50% sure of a 3 star rating.’ Could even be a probability density function.

Answer distribution:
  3 :As a confidence distribution
  7 :Against a single level of achievement
14 :Not expressed systematically
  6 :Don't know

Question: Is the credibility of confidence judgements challenged, for example by comparing them with indications from objective risk factors relating to the difficulty of the outcome and past history of success?
Answer distribution:
  7 :Yes
15 :No
  8 :Don't know

The wording of these questions clearly left respondents wondering what ‘expressed systematically’ might mean. While most respondents (17) said that confidence was expressed systematically at some point in their performance management process, and only 9 said it was not, these numbers were almost exactly reversed when respondents were asked which of the alternative techniques were used for this.

No organization used numbers to express confidence, even though this is the clearest and simplest way to do it. Nine used a scale of words, but presumably the remainder who considered their approach systematic in the previous question were using some non-standardised verbal means to express confidence. I would consider that non-systematic.

Ten organizations expressed confidence against either a single level of achievement or several. Most of these preferred the single level of achievement, and most likely this is people being asked how confident they are of achieving a target. Covering multiple levels of achievement is more informative and useful, and can be used to help set targets or to discuss the relationship between resource consumption and potential results.

A minority of respondents challenged the confidence levels expressed, and challenge was more common where the ratings were more formal. Seven out of the 17 organizations where confidence was expressed (according to the first question in this group) challenged ratings. Of those seven, 4 came from the 5 respondents able to say both what technique was used and against what outcomes the ratings were made. Two more came from the 7 respondents able to answer only one of the two technique questions. The remaining example of challenge came from someone unable to answer either technique question using the options available. In summary, the more formal detail a respondent could provide, the more likely challenge was.

Key Points: Confidence in outcomes should be expressed using a numeric scale of probability against a small set of potential achievement levels, and such ratings should be evaluated against objective evidence such as broad indicators of the difficulty of the task and past results achieved. This can be used in a variety of ways. The greatest mistake is to demand that people either are confident of achieving a target or are not. Reality doesn't work that way.

Analysis of risk and uncertainty for project/action planning

Most performance management systems involve some kind of action planning. A common problem is to plan for one possible future only and find that as events unfold the plan rapidly becomes obsolete. One practical approach to this problem is to use some form of scenario planning to ensure that plans cover the most important likely variations in future conditions.

A less potent, but still vital, method is to analyse areas of risk and uncertainty affecting the planned actions.

Question: Is scenario planning used at any stage in developing plans for performance improvements?
Answer distribution:
13 :Yes
12 :No
  5 :Don't know

Question: In developing plans for performance improvements is any attempt made to analyse and document, systematically, areas of risk and uncertainty affecting the plans?
Answer distribution:
20 :Yes
  6 :No
  4 :Don't know

An impressive 13 respondents used scenario planning in some way, most of these being in UK central or local government, though there were 4 private sector examples too.

The more typical approach of analysing risk and uncertainty affecting plans was followed in 20 organizations, leaving 6 doing nothing in this area.

Key Point: Risk and uncertainty affecting action plans should be considered systematically as part of performance management.

Planning actions to address specific areas of risk and uncertainty

Just analysing risk and uncertainty will not be valuable unless actions flow from that analysis.

Question: In developing plans for performance improvements is there an explicit process of planning actions specifically to manage areas of risk/uncertainty? (Many actions on a project would be needed even in a world without uncertainty. Planning those doesn't count for the purposes of this question!)
Answer distribution:
16 :Yes
  9 :No
  5 :Don't know

A large minority had nothing within action planning specifically to deal with risk and uncertainty, though it is possible that some respondents legitimately considered it rolled into other action planning.

Key Point: Whether the process is explicit or not, action plans should address areas of risk and uncertainty, and some work is needed to confirm that they actually do.

Prioritisation of actions

In action planning we are usually uncertain to some degree about the resources required for planned actions (compared with the resources that might be available) and about the results that our actions will bring. Many organizations sensibly set some priorities on actions so that if they find they cannot do everything on their action list it is easier to see what should be cut back.

However, if we follow the logic of the uncertainty faced it is clear that priorities on actions will themselves be educated guesses and so it is sensible to revise priorities from time to time. Ideally, monitoring will provide increasingly clear information about the impact of various actions and their actual resource consumption, so priorities can be guided by this information.
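A sketch of what that can mean in practice (the initiatives and figures are invented): priorities can be recomputed from the latest estimates of benefit and remaining resource consumption whenever monitoring updates them:

```python
# Hypothetical re-prioritisation of improvement initiatives as monitoring data
# updates the estimates of benefit and resource consumption. All figures invented.
initiatives = [
    # name, estimated annual benefit (thousands), estimated remaining cost (thousands)
    ("Self-service portal", 400, 250),
    ("Claims process redesign", 300, 90),
    ("New CRM rollout", 550, 600),
]

def priority(initiative):
    _, benefit, cost = initiative
    return benefit / cost   # crude benefit-per-unit-of-cost ranking

for name, benefit, cost in sorted(initiatives, key=priority, reverse=True):
    print(f"{name}: {benefit / cost:.1f} of benefit per unit of remaining cost")

# When monitoring shows an initiative is delivering less, or costing more,
# than expected, the estimates change and so may the ranking.
```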

Question: Where multiple initiatives are planned to improve performance, are they prioritised?
Answer distribution:
16 :Yes
  7 :No
  7 :Don't know

Question: Where multiple initiatives are planned to improve performance, when is it planned (explicitly) to review and possibly revise the priorities?
Answer distribution:
  0 :Every 1 – 7 days
  4 :Every 8 days – a month
  4 :Every 32 days – 3 months
  5 :Every 91 days – a year
  1 :Less than once a year
  2 :Not planning to revise priorities
  4 :Not prioritised
10 :Don't know

These questions were hard for respondents to answer. Many did not know the answers, and some may have been confused by the questions, as evidenced by the fact that 7 respondents said priorities were not set when answering the first question but only 4 said this in answer to the second question. It should have been the same each time.

Key Point: Priorities, where set, should be revised frequently in the light of evidence about effectiveness and resource consumption.

Rapid deliveries

Consider two projects identical in every respect except that one is planned in a waterfall style with much preparatory work eventually leading to a big bang implementation of beneficial change at the end, while the other delivers the same changes in much smaller pieces, something every week or two. It is not hard to see that the second project, the incremental one, has a much better risk profile than the waterfall project. At any point in time our commitment of resources without seeing a benefit is much smaller for the incremental project.

What is less widely known is that the incremental project will usually be less costly as well as less risky. How can that be when surely there is an overhead caused by introducing, for example, 30 small software changes instead of just one big one? The explanation is that the work involved in a change rises rapidly, and non-linearly, with the size of the change. For example, a change that is twice as complicated requires much more than twice the resources. Thirty small deliveries of software really can be less effort than one big one that amounts to the same overall change. The advantage of smaller deliveries is larger for projects that face a lot of uncertainty. It is true that there is an overhead with multiple deliveries, but it is outweighed by the advantages of piecemeal delivery for most real life projects.
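A toy calculation makes the point; the exponent and overhead figures below are assumptions chosen purely for illustration, not measurements:

```python
# Illustrative only: assume the effort of a delivery grows with roughly the
# 1.5th power of its size. The exponent is an assumption for the sketch,
# not a measured fact.
def effort(size, exponent=1.5):
    return size ** exponent

big_bang = effort(30)                     # one delivery containing 30 units of change
overhead_per_release = 1.0                # assumed fixed cost of each small release
incremental = 30 * (effort(1) + overhead_per_release)   # 30 deliveries of 1 unit each

print(f"One big delivery:        about {big_bang:.0f} effort units")
print(f"Thirty small deliveries: about {incremental:.0f} effort units")
```

Under any cost curve that rises faster than linearly with change size, the per-release overhead has to be very large before the big bang option wins.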

Another advantage with incremental deliveries is that you learn so much faster from delivering something tangible that can be used. As soon as something is delivered you can study its performance and learn. Months of talk and theory amount to little more than speculation compared to the valuable lessons of real experience, carefully measured.

A simple first step organizations can take towards having projects stick to low risk, high learning action plans is to forbid long periods without at least one delivery of value to a stakeholder.

Question: Are there any policies or procedures applying to the organisational unit that discourage plans for projects that involve a long period of work (e.g. 6 months) without any useful delivery to a stakeholder and, instead, encourage incremental delivery of usable improvements?
Answer distribution:
  9 :Yes
15 :No
  6 :Don't know

Only a minority had any kind of procedure or policy in place to discourage high risk project designs. There is a considerable opportunity to improve execution by introducing such policies or procedures and making sure people are able to design action plans in the required structure.

Key Point: Organizations should have policies and procedures to discourage long periods without useful deliveries, and instead encourage frequent, incremental delivery of usable improvements. Managers do not always plan in this way when it is sensible to do so (which is almost always) and need the encouragement.

Rapid feedback

In choosing measures of performance for use in a performance management system there are many important factors to consider. One with particular relevance to risk and uncertainty is the speed with which the measures will be available. The sooner we can get feedback the better because our actions may have more or less impact than we expect and we need to find out quickly. For example, if the goal is to improve academic performance in a school the ultimate measures are likely to be annual exam results, but getting one measurement a year (and one affected by so many other factors) is not going to be very helpful on its own. What we also need is measures that provide feedback from week to week. We need feedback that will quickly show the impact, or lack of it, when we deliver what we hope will be improvements.

Question: Are there any policies or procedures applying to the organisational unit that encourage use of short term, rapid feedback measures of progress in addition to longer term measures, even though short term measures may not be of the outcome ultimately desired? (The point is that, without rapid feedback, learning is likely to be slow.)
Answer distribution:
16 :Yes
  7 :No
  7 :Don't know

It is interesting to see that more organizations recognise the need for frequent, rapid feedback on progress than encourage the frequent, rapid deliveries that would make that feedback really valuable.

Key Point: Measures of progress should include items that give frequent, rapid information.

Other mechanisms

The survey also invited respondents to describe any other ways that risk was managed within performance management. Although the responses given were interesting no striking patterns emerged.

Personal opinions of respondents

The final section of the survey gave respondents a chance to give their views on some key questions.

Question: Are there any areas of uncertainty whose (mis)management has a material impact on performance results for the organisational unit? (Important: Please exclude the risks of incorrect/false accounting and business continuity issues like fires and major computer failure. However, include things like uncertainty about customer reactions to new products, doubts about funding, regulatory uncertainties, etc, etc.)
Answer distribution:
24 :Yes
  3 :No
  3 :Don't know

Question: How important is it to manage risk/uncertainty as it relates to managing performance in the organisational unit?
Answer distribution:
  1 :The most important aspect
15 :Very important
12 :Too important to ignore
  2 :Not important
  0 :Don't know

Question: How well do you think the organisational unit is managing risk/uncertainty as it relates to managing its performance?
Answer distribution:
  3 :Very good – little scope for improvement
13 :Good but could be better
10 :Poorly
  4 :Very poorly
  0 :Don't know

Question: In your ideal approach to managing uncertainty and risk in performance management, what is the role of uncertainty/risk management? (Explanation of the alternatives:

  • Achieve original objectives: i.e. the role of risk/uncertainty management is to help the organisation achieve the objectives/targets it set at the start of a year or a project. Uncertainty/risk management has no role in setting or revising those objectives.

  • Achieve given objectives: i.e. the role of risk/uncertainty management is to help the organisation achieve its objectives/targets, though these may change during a year or project. Uncertainty/risk management has no role in setting or revising those objectives/targets.

  • Perform well: i.e. the role of risk/uncertainty management is to help the organisation perform well, and that includes helping to set and revise objectives/targets.)

Answer distribution:
20 :Perform well
  5 :Achieve given objectives
  2 :Achieve original objectives
  3 :Other
  0 :Don't know

Two of the ‘Other’ respondents gave their alternative: One of these had misread the question and commented on their current lack of risk management. The other respondent gave the answer ‘Perform well and learn faster.’

A large majority signed up to the most progressive notion of risk management, that of ‘Performing well’, with only a handful preferring the more limited and traditional alternatives. There is a slight tendency for those choosing the more traditional alternatives to be more satisfied with their existing risk management arrangements while having lower scores for risk management on this questionnaire, but the differences are slight and the number of traditionalists who responded is low.

About half of respondents thought that their organizational unit managed risk and uncertainty in performance management poorly or very poorly.

Scope for improvement

The scope for improvement is enormous. Although there are examples of good practice on nearly all the questions of the survey, no organization excelled in all areas.

To analyse this point I constructed a risk management score by awarding a point each time one of the more risk aware answers was given to the factual questions. This score was broken into two parts: (1) the Obvious controls, and (2) the Progressive, less obvious controls. For example, analysing risks of an action plan is Obvious, but using scenario planning is Progressive.

There was a correlation of r = 0.67 between Obvious and Progressive scores, but this does not amount to systematic coverage by some organizations.

Out of a possible 20 points for Obvious controls, the average score was 9.4 (47% of the maximum possible), the highest was 17, and the lowest was 5. There was a slight tendency for high scores to be achieved by larger organizations, but the correlation is weak.

By comparison, out of a possible 21 points for Progressive controls, the average score was 7.4 (35% of the maximum possible), the highest was 16, and the lowest just 1. There was no tendency for better scores to be obtained by larger organizations.

This not only illustrates the extent of the opportunity for improvement, but is consistent with the idea that lack of awareness generally is the reason for the patchy scores. In time, as ‘embedded’ risk management comes to mean something practical to more people, we will see a change for the better.

Respondent profile

There were 30 respondents in all, each one answering for a different ‘organisational unit’, perhaps a department, a company, a division of a charity, and so on. Most respondents were in the UK (20) with others from the USA (4), Australia (2), and 1 each from Tanzania, Malta, Thailand, and the Czech Republic.

The respondents represented various sectors: Private sector (11), Central government (9), Local government (7), and Other public sector (3).

Organizational unit size also varied greatly:

Number of respondents by organizational unit size:
0 – 5 people: 2
6 – 20 people: 2
21 – 100 people: 2
101 – 1,000 people: 4
1,001 – 10,000 people: 15
10,001 – 100,000 people: 4
100,001 or more people: 1

Respondents had various roles: Risk Manager (8), Performance Manager (8), Internal auditor (5), external consultant (2), and Other internal (7).

Finally

This survey has asked questions about a number of important ways that risk management should be embedded in performance management. It was a long survey (considering that respondents were volunteers), so does that exhaust the scope for embedding? Not at all.

Soon after the survey was completed I came across two encouraging examples of work by large organizations that seek to go further. One example was a central government department that was experimenting with merging its risk management and performance management into one analysis and monitoring system. The other example was a project aiming to use statistical analysis to select KPIs, and continue selecting and adjusting them over time.

A little more on these ideas is described in a recent publication ‘Seven frontiers of internal control and risk management.’



Words © 2006 Matthew Leitch. First published 27 February 2006.