Research on risk management within performance management

by Matthew Leitch; first published 27 February 2006.

Introduction

More than a year ago now I conducted an online survey of public and private sector organizations to find out what people were doing to embed risk management within performance management. My survey had a lot of questions, so getting to the 30 responses was correspondingly hard work, and I am very grateful to those who gave the time to respond. Since respondents were also self-selected volunteers not much can be deduced from the statistics of their answers. However, some important overall observations can be made, and the individual cases were fascinating to study. The topic is still hot and indeed client work on the issues it involves is the reason it has taken me so long to get around to writing up these results!

The impression I got from the responses is that the issues raised in the survey had not been considered systematically by most respondent organizations. Nearly all respondents gave patchy answers; in one area they would have good management of risk and uncertainty in place, and yet be missing equally important controls in another. Responses varied greatly from one organization to another. There were also some risk managers who were very satisfied with their risk management systems despite lacking important controls in the performance management processes. This lack of systematic design and implementation is consistent with the lack of published articles and books dealing with the subject in any depth.

The main practical messages from this research are that (1) in most organizations there are many simple steps that can be taken to ensure that risk and uncertainty are considered and managed throughout performance management activities, and that (2) this could greatly improve the impact of performance management processes. Each group of questions in the survey is covered in a sub-section below, which highlights the controls that organizations should have in place.

Measurement uncertainty

Most organizations today measure a variety of things about themselves, in addition to their finances, and call them something like ‘Key Performance Indicators’ (KPIs). In practice any kind of measurement is fraught with difficulties such as shifting definitions, system changes, clerical errors, and system errors. Even audited financial numbers are not reliable, to an extent that few people appreciate. Accounting is an art that relies to some extent on estimates about what will happen in the future. You may think your profit is £X, but in fact this is just one of the possible numbers the auditors would have accepted as justifiable choices. Non-financial KPIs often involve a lot more uncertainty because the culture of checking and double checking numbers that is strong in accounting departments is not so strong elsewhere. Also, some KPIs, like the results of customer satisfaction surveys, rely on sampling or subjective ratings that can be affected by a person's mood from one moment to the next.

In my work I have frequently found KPIs to be unreliable. For example, one company had weekly ‘quality reviews’ of insurance claims processing. A claims operator would be selected and would spend a few days checking claims processed by his/her colleagues. The number of errors found was tracked for several months until an internal auditor reperformed a quality check and discovered that the quality reviews were finding only half the errors made.
In other words, the numbers that had been reported, analysed, and even praised, were 100% wrong: the true error counts were roughly double those reported. I cannot explain why so many companies have non-financial information on their internal reports that is wrong, but it happens. Sometimes the error is in plain view and anybody who looked at the management report critically would realise something was amiss, and yet still it is not picked up.

Measurement uncertainty is a result of incorrect collection, summarising, or presentation of data, and of inherent uncertainties such as those arising from sampling and subjective ratings. Since even audited financial numbers contain some measurement uncertainty it is hard to believe that any internal report of KPIs is entirely free of measurement uncertainty. Here are the relevant questions from the survey, and the answers given:
The answers show that most respondents recognised there was measurement uncertainty. The 17% of respondents who saw no measurement uncertainty may have been quite right, but it seems unlikely. Incidentally, two of these five said their reports gave information about measurement uncertainty, despite having said there wasn't any uncertainty. Also, most respondents could think of at least one KPI where uncertainty was disclosed. Of the 25 who recognised some measurement uncertainty as being present, 17 could think of at least one KPI where that uncertainty was disclosed in some way. This leaves 8 respondents where there was measurement uncertainty but no information about it was given when the KPIs were reported. There is no way to tell from this survey whether all uncertain numbers had information about their uncertainty. My observation over the years, looking at many reports, is that only a small proportion of measurement uncertainty is ever disclosed. People prefer to present their numbers as facts.

The implications of this include the risk of decisions being made on the basis of information that is less reliable than it appears, and failure to take actions that would reduce measurement uncertainty. For example, in banks it is currently common to maintain a database for reporting ‘operational risk events’ and their impact e.g. cock-ups, frauds, accidents. Of course people are busy and also reluctant to report their mistakes, so it can be very difficult to capture everything in the database. If you take the value of incidents reported in the database as being the true cost of operational risk then you are making a big mistake. In reality many banks don't even know how much they don't know.

Showing measurement uncertainty clearly is also an everyday reminder of the organization's approach to risk and uncertainty. It shows a healthy awareness rather than a tendency to ignore, or even actively suppress, uncertainty and do nothing about it.

Key Point: Organizations should disclose measurement uncertainty, in some way, for all numbers on internal management reports that are in some way uncertain. Make it an official policy. This encourages more care over preparing numbers and reduces the risk of unwittingly making important decisions on the basis of misleading data.

Variability and time series

It is very hard to spot trends and patterns over time unless you see the numbers over time displayed on one page. Where performance numbers are seen without their history it is hard to judge the importance of changes. Is a 5% increase important? Perhaps ‘yes’ if the number hasn't changed that much for years, but ‘no’ if it usually changes by much more, and in an unpredictable and trendless way. Variability that is only partly understood appears for most KPIs and a simple way to understand it is to show the past history of the number, preferably using a graph. Here is the relevant question from the survey, and a summary of the answers given:
Encouragingly, most respondents used time series to some extent, but 25% did not.

Key Point: Organizations should show KPIs as time series, preferably with graphs. This reduces the risk of mistaking normal variations for important changes that require special action, and greatly increases the chances of understanding what is going on.

Auditing KPIs

The easiest way to spot most incorrect numbers is to scrutinise the numbers very carefully, preferably using graphs and statistical analysis. Going further than this might mean performing an audit of some kind that probes the way the data are collected, summarised, and reported.
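To make the idea of routine scrutiny concrete, here is a minimal sketch of the kind of automated checks that could be run before figures are circulated: a reconciliation of component figures against the reported total, and a simple range check. The report structure, names, and figures are invented for illustration, not drawn from the survey.

```python
# Minimal sketch of automated scrutiny before a KPI report goes out:
# check that component figures reconcile to the reported total and that
# each value falls inside a plausible range. All names and figures are
# illustrative assumptions.

report = {
    "claims_processed_total": 1480,
    "claims_processed_by_team": {"north": 520, "south": 490, "central": 450},
    "customer_satisfaction_pct": 87.5,
}

def scrutinise(report):
    """Return a list of warnings about figures that look wrong."""
    warnings = []
    # Reconciliation: team figures should add up to the reported total.
    team_sum = sum(report["claims_processed_by_team"].values())
    if team_sum != report["claims_processed_total"]:
        warnings.append(
            f"Team figures sum to {team_sum}, "
            f"not the reported total of {report['claims_processed_total']}"
        )
    # Range check: a percentage must lie between 0 and 100.
    sat = report["customer_satisfaction_pct"]
    if not 0 <= sat <= 100:
        warnings.append(f"Satisfaction figure {sat} is outside 0-100")
    return warnings

print(scrutinise(report))  # -> one warning: teams sum to 1460, not 1480
```

Checks like these catch the crude errors; graphs and statistical analysis of each number's history are needed to catch the subtler ones.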
Respondents nearly all thought their KPIs were scrutinised for errors before use, but 30% had not done any audit of any of their KPIs in the last two years.

Key Point: All KPIs should be scrutinised effectively before use, and all should be subjected to some more rigorous audit from time to time.

Existence of secondary indicators

The Holy Grail of performance managers is a small set of indicators that truly show everything that matters. The next best thing is a set that responds to everything important and so triggers managers to look at more detailed information when they see something they don't understand or expect. Such an informative set of KPIs is extremely difficult to achieve and in reality most KPIs are selected on judgement rather than on empirically demonstrated importance. In addition, conditions change. When something is identified as a KPI people start managing it, and soon it has less importance than it did. Some other indicator correlates better with performance. Consequently organizations need to collect and look at data that might turn out to be ‘Key’. Of course, every organization of any size is computerised and is awash with data, much of it unused. The challenge is to use it without being overwhelmed by it.
Most respondents recognised that ‘secondary’ indicators existed in the organizational unit for which they were responding.

Key Point: Organizations should accept that secondary indicators are necessary, and know how to use them efficiently.

Use of secondary indicators

If secondary indicators involved as much work as KPIs, life would be full of indicators with little time left over for anything else. And yet if a secondary indicator shows something important that has not shown up on the KPIs then ideally someone should notice and report it upwards.
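One way to use secondary indicators without being overwhelmed is an exception scan: recompute them automatically but report only those showing unusual movement. The sketch below assumes invented indicator names and figures and a simple three-standard-deviation rule; it is an illustration of the approach, not a prescription.

```python
# Minimal sketch of an exception scan over secondary indicators: compare the
# latest value of each indicator with its own history and report only the
# ones showing an unusually large movement. All names and figures are
# hypothetical.
from statistics import mean, stdev

secondary_indicators = {
    "overtime_hours":     [120, 130, 125, 128, 122, 210],
    "website_error_rate": [0.8, 0.9, 0.7, 0.8, 0.9, 0.8],
    "staff_turnover_pct": [1.1, 1.0, 1.2, 1.1, 1.0, 1.1],
}

def exceptions(indicators, threshold=3.0):
    """Yield (name, latest) where the latest value is more than `threshold`
    standard deviations away from the mean of its earlier history."""
    for name, series in indicators.items():
        history, latest = series[:-1], series[-1]
        m, s = mean(history), stdev(history)
        if s > 0 and abs(latest - m) > threshold * s:
            yield name, latest

for name, value in exceptions(secondary_indicators):
    print(f"Review needed: {name} = {value}")  # flags only the overtime jump
```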
Five respondents could think of no mechanisms at all for identifying changes of unexpected relevance to performance not picked up in the KPIs.

Key Point: Organizations should never assume that their selected KPIs will show everything of importance or that other indications can be ignored. Even those few organizations that empirically test the relevance of their KPIs should recognise that things change.

Involvement in the selection of KPIs

Selecting KPIs is not easy and without empirical validation any selection is no more than an educated guess. Having said that, a wise precaution is to involve the right people in selecting KPIs initially. Of course many people do not have that luxury as their KPIs are simply imposed from above.
The desire for end user involvement is particularly strong: so strong that 3 respondents indicated that end users were involved in the selection of KPIs even though the KPIs were entirely imposed. Most organizations seem to have decided that involving IT specialists, or the people who would produce the KPI numbers, in selecting KPIs was not important; perhaps the thinking was that practicalities should not ‘bias’ the selection of ideal KPIs. That's a pity, since asking for numbers that are not already collected and available on a computer system is likely to lead to unreliable data being supplied. It's a compromise, but if I had the choice I would involve IT specialists to help me understand what data are already available.

Recording agreement on selection of KPIs

A simple control is to record agreement, if any, when KPIs are selected.
Clearly, selecting KPIs is not like authorising an expenditure or starting a project. Although the choice may be at least as important as that, initial agreement is not always crucial. In some cases it may have been that a tangible product – some kind of KPI report – was needed in order to get some experience and stimulate interest in KPI use.

Use of a causal model

Leading practice in ‘balanced scorecard’ development is to use some kind of causal model showing how the actions management takes are thought to lead to desired outcomes. The model represents beliefs about how the world works.
Most respondents who did not have their KPIs entirely imposed had some kind of causal model to justify their choice of KPIs, though it was usually narrative only. Where KPIs were imposed there was rarely a causal model to justify them, even in narrative form.

Recognition of uncertainties in beliefs about causal mechanisms

From a risk and uncertainty management perspective the use of an explicit model is important as a way to draw out beliefs and, potentially, uncertainty. The amount of uncertainty surrounding a causal model can easily be underestimated. Subjectively, we think we ‘know the business’. In reality there are some severe limits to this knowledge unless we have taken steps to gather and use data. Although we may have a strong belief that one factor drives another, it is much harder to be certain about how strongly it does so, for different starting levels, or to combine the effects of more than one driver, or to rule out the possibility of drivers not yet identified having a significant effect, or to rule out the possibility of an indirect feedback loop that cancels out the effect we predict and perhaps even reverses it.
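A small numerical illustration may help. Suppose the causal model says that extra staff training raises customer satisfaction. The sketch below uses entirely invented coefficients to show how much the predicted outcome swings once we allow for uncertainty about the strength of the link and for a possible unmodelled second driver.

```python
# Illustrative sketch: the believed causal link is "one extra training day
# per employee raises customer satisfaction by about 2 points". The
# coefficients below are invented to show how much the prediction can swing
# when the strength of the link, and an unmodelled second driver (say,
# worsening queue times), are uncertain.

extra_training_days = 3
plausible_effects = [0.5, 2.0, 3.5]   # weak, believed, and strong versions of the link
unmodelled_drag    = [-8.0, 0.0]      # possible drag from a driver we did not model

for effect in plausible_effects:
    for drag in unmodelled_drag:
        change = extra_training_days * effect + drag
        print(f"effect/day={effect:>3}, unmodelled drag={drag:>5}: "
              f"predicted satisfaction change = {change:+.1f}")
# The same plan is predicted to move satisfaction anywhere from -6.5 to +10.5
# points. That spread is the uncertainty the causal model should make visible.
```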
A little under half of those who had some choice about their KPIs did something specific to look at uncertainties connected with their choice. None of those with imposed KPIs did so. Of the 15 that had some choice as to their KPIs, and used a model, 9 represented uncertainty in it in some way. This suggests that having a model did make it more likely that uncertainties would be considered. In fact, taking just those with some choice over their KPIs, of the 13 who had some kind of explicit model, 6 looked for uncertainties, while of the 4 with no explicit model none looked for uncertainties.

The tendency not to consider uncertainties affecting the choice of KPIs is a considerable missed opportunity. If you highlight uncertainties at the start this prompts actions to reduce those uncertainties and makes adapting and improving the KPIs much easier. It is human nature to avoid changing our minds in public, but if we said at the start that there were uncertainties and we were going to learn more and keep on improving, then we are only doing what we said we would do.

Key point: When selecting KPIs, carefully identify and document the uncertainties related to the choice and the rationale supporting it. This may be easier and more effective if you have an explicit causal model underpinning the choice of KPIs.

Research to reduce uncertainties relating to selection of KPIs

Research (in the broadest sense) can be carried out to reduce uncertainties related to the selection of KPIs. This research might have been planned before the analysis started, or be planned in response to uncertainties identified.
Around half had not bothered with research.

Key point: Finding out more is usually essential; it either needs to be done once the initial selection has been made, or both to inform the initial selection and subsequently.

Evolution of KPIs

It makes sense to plan to review and adjust the selection of KPIs from time to time and to plan activities that will reduce uncertainties about KPI selection. Kaplan and Norton stress the value of empirically testing imagined causal relationships between variables.
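As an illustration of what such empirical testing might involve, the sketch below checks whether a believed driver actually moves with the outcome it is supposed to influence, using weekly figures. The data are invented; the point is that reasonably granular data make this kind of test possible within months rather than years.

```python
# Minimal sketch of empirically testing a believed causal link, e.g.
# "staff training hours drive customer satisfaction". The weekly figures
# are invented; a real test would use the organization's own detailed data
# rather than monthly totals for the whole unit.
from statistics import correlation  # available in Python 3.10+

training_hours = [5, 8, 3, 10, 7, 12, 4, 9, 11, 6]         # weekly driver
satisfaction   = [71, 75, 69, 78, 73, 80, 70, 74, 79, 72]  # weekly outcome

r = correlation(training_hours, satisfaction)
print(f"Correlation between driver and outcome: r = {r:.2f}")
# A strong correlation supports keeping the driver as a KPI; a weak one
# suggests the believed link needs rethinking. Correlation alone does not
# prove causation, so treat this as evidence, not proof.
```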
Although more than half of organizations planned to review their selection of KPIs after less than a year, the actual reviews were less frequent. Imposed KPIs tended to be reviewed annually. This lack of review may be linked to the fact that fewer than half of organizations who had a choice in their KPIs had made any plans to find out information that would have helped them review their KPIs and select better or more up to date ones. It may be that many of these organizations were reporting their KPIs monthly and thought that at this frequency it would be a long time before enough history had built up to reveal the presence or absence of links between indicators. That is true. It would take literally years for patterns to be revealed, which is why KPIs need to be gathered in a much more detailed way to support rapid learning and empirical analysis.

Key points: The selection of KPIs should be reviewed more than annually, and plans should be made to find out information that will help with an effective review. Regardless of how often the KPIs are reported, they should not be summarised so far that learning from them within a reasonable period becomes impossible. Monthly totals or averages for the whole organizational unit are far too summarised.

Adjustments to targets

A common problem with target setting is that targets quickly become obsolete. The future just isn't predictable enough to set targets that stay exactly correct for a long period, except for certain kinds of relative targets. The longer the time between revisions of the targets the more of a problem this is. Ideally, an organization will revise its internal management targets as often as possible to take into account the latest information.
Organizations actually change their targets less frequently than their official policy allows for, though the finding is clouded by the large number of respondents who did not know how many times targets had actually been changed. The reasons for this are not visible in this survey, but obviously if the process is time consuming and emotionally bruising (which it often is, in my personal experience) then people will be reluctant to do it more than strictly necessary. That is only one possible explanation.

Key Point: Organizations should revise their targets as often as they can cope with, and increase their ability to work with rapidly changing targets. When revisions are more frequent, most targets need only small adjustments on each occasion.

Reliance on variance analysis as a control method

Analysis of variances between targets and actual results is a long established management technique, but it is reactive and a limited way to manage risk and uncertainty. One might expect that the greater an organization's belief in management by negative feedback loops, the less it will see the need to think about the future and all its alternatives.
When the perceived importance of variance reduction was compared with scores for risk management within performance management, there was no correlation at all. Perhaps the tendency for control oriented organizations to be strong in all areas counteracts a tendency to rely on variances at the expense of looking ahead at the future and what it might bring. Perhaps both my theories are wrong!

Key Point: Organizations should not rely heavily on variance reduction as their main control mechanism. Think of it more as a safety net.

Forecasting

If relying on feedback from variances is too reactive and slow for many purposes, the obvious thing to do is to start looking forward, and one part of that is making forecasts. (How forecasts are used is important, but not covered in this survey.)
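The sketch below illustrates one simple way a reforecast can be built from recent actual results plus the expected effects of planned actions, rather than from aspiration (a point returned to in the key points below). The baseline method, action names, and figures are all assumptions made for the example.

```python
# Minimal sketch of a reforecast built from recent actual results plus the
# expected effects of planned actions, rather than from aspiration. All
# figures, action names, and the simple baseline method are invented for
# illustration.
from statistics import mean

actual_monthly_sales = [410, 425, 430, 445, 440, 455]   # the year so far (months 1-6)

# Expected monthly effect of each planned action, from the month it starts.
planned_actions = {
    "new product line": {"start_month": 9, "monthly_effect": 20},
    "price increase":   {"start_month": 8, "monthly_effect": -10},
}

baseline = round(mean(actual_monthly_sales[-3:]), 1)     # recent average as a baseline

forecast = {}
for month in range(7, 13):                               # reforecast months 7-12
    effect = sum(a["monthly_effect"]
                 for a in planned_actions.values()
                 if month >= a["start_month"])
    forecast[month] = round(baseline + effect, 1)

print(forecast)
# {7: 446.7, 8: 436.7, 9: 456.7, 10: 456.7, 11: 456.7, 12: 456.7}
```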
Forecasting financial KPIs is slightly more common than forecasting non-financial KPIs, but notice that the survey does not reveal how many of the KPIs have forecasts. It might be just some of them. Most organizations do forecasts more than once a year but a sizeable minority do not. This suggests that in this minority the forecasts do not have a significant role in management control during the year.

Key Points: Most KPIs should be forecast and reforecast more than once during the year, and these forecasts should be derived from planned actions and expectations about the environment. Forecasts should never be aspirations. Actions should be chosen with some idea of the likely impact on KPIs.

Expressing and analysing confidence of achieving outcomes

A common step in performance management systems is that people are asked to agree to performance targets. Obviously it makes no sense to accept agreement to a target if the person agreeing to it also says it will not be achieved, or even that it is impossible to achieve, and here lies an important problem. In the real world our future achievements are uncertain to some degree. If a target has some stretch in it then our achievement of that objective is almost certain to be uncertain to a degree we cannot ignore. Will our plans give the results we desire? A rational response to this problem is to keep that uncertainty in mind and use it as a spur to action planning. For example, what would we do if a particular action proved less effective, or more effective, than expected? What could we do to research further the likely impact of a new idea?
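One simple way to express confidence numerically against more than one level of achievement is shown in the sketch below. The outcome scenarios and probabilities are invented; in practice they might come from a forecast model or from structured judgement.

```python
# Minimal sketch of expressing confidence numerically against several levels
# of achievement, instead of a single yes/no on one target. The outcome
# scenarios and their probabilities are invented for illustration.

# Possible year-end sales outcomes (in £m) with judged probabilities.
scenarios = [
    (4.0, 0.10),
    (4.5, 0.25),
    (5.0, 0.35),
    (5.5, 0.20),
    (6.0, 0.10),
]

achievement_levels = [4.5, 5.0, 5.5]  # candidate targets to express confidence against

for level in achievement_levels:
    confidence = sum(p for outcome, p in scenarios if outcome >= level)
    print(f"Probability of achieving at least £{level}m: {confidence:.0%}")
# Probability of achieving at least £4.5m: 90%
# Probability of achieving at least £5.0m: 65%
# Probability of achieving at least £5.5m: 30%
```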
The wording of these questions clearly left respondents wondering what ‘expressed systematically’ might mean. While most respondents (17) said that confidence was expressed systematically at some point in their performance management process, and only 9 said it was not, these numbers were almost exactly reversed when respondents were asked which of the alternative techniques were used for this. No organization used numbers to express confidence, even though this is the clearest and simplest way to do it. Nine used a scale of words, but presumably the remainder who considered their approach systematic in the previous question were using some non-standardised verbal means to express confidence. I would consider that non-systematic. Ten organizations expressed confidence against either a single level of achievement or more than one level. Most of these preferred the single level of achievement, and most likely this is people being asked how confident they are of achieving a target. Covering multiple levels of achievement is more informative and useful, and can be used to help set targets or to discuss the relationship between resource consumption and potential results.

A minority of respondents challenged the confidence levels expressed, and this was more common where the ratings were more formal. Seven out of the 17 organizations where confidence was expressed (according to the first question in this group) challenged ratings. Of those, 4 challenges came from the 5 respondents able to say both what technique was used and against what the ratings were made. Two more came from the 7 respondents able to answer one of the two technique questions. The remaining example of challenge was from someone unable to answer either technique question using the options available. In summary, the more formal detail the respondent could provide, the more likely challenge was.

Key Points: Confidence in outcomes should be expressed using a numeric scale of probability against a small set of potential achievement levels, and such ratings should be evaluated against objective evidence such as broad indicators of the difficulty of the task and past results achieved. This can be used in a variety of ways. The greatest mistake is to demand that people either are confident of achieving a target or are not. Reality doesn't work that way.

Analysis of risk and uncertainty for project/action planning

Most performance management systems involve some kind of action planning. A common problem is to plan for one possible future only and find that as events unfold the plan rapidly becomes obsolete. One practical approach to this problem is to use some form of scenario planning to ensure that plans cover the most important likely variations in future conditions. A less potent, but still vital, method is to analyse areas of risk and uncertainty affecting the planned actions.
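A minimal sketch of the scenario planning idea: score each candidate plan under a few distinct futures and look at the spread of outcomes, not just the expected case. The plans, scenarios, and scores below are invented for illustration.

```python
# Minimal sketch of scenario planning for an action plan: score each
# candidate plan under a few distinct future conditions and look for the
# plan that performs acceptably in all of them, not just the expected one.
# Plans, scenarios, and payoffs are all invented for illustration.

payoffs = {
    # plan:                {scenario: outcome score}
    "expand aggressively": {"demand grows": 9, "demand flat": 2, "demand falls": -4},
    "expand cautiously":   {"demand grows": 6, "demand flat": 4, "demand falls": 1},
    "hold and watch":      {"demand grows": 2, "demand flat": 3, "demand falls": 3},
}

for plan, results in payoffs.items():
    worst = min(results.values())
    print(f"{plan:22s} worst case = {worst:+d}, outcomes = {results}")
# "Expand cautiously" never does badly; "expand aggressively" only pays off
# if demand grows. Setting the plans out this way makes the trade-off
# explicit before committing.
```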
An impressive 13 respondents used scenario planning in some way, most of these being in UK central or local government, though there were 4 private sector examples too. The more typical approach of analysing risk and uncertainty affecting plans was followed in 20 organizations, leaving 6 doing nothing in this area.

Key Point: Risk and uncertainty affecting action plans should be considered systematically as part of performance management.

Planning actions to address specific areas of risk and uncertainty

Just analysing risk and uncertainty will not be valuable unless actions flow from that analysis.
A large minority had nothing within action planning specifically to deal with risk and uncertainty, though it is possible that some respondents legitimately considered it rolled into other action planning.

Key Point: Whether explicitly or not, action plans should address areas of risk and uncertainty, and work is needed to confirm that they do.

Prioritisation of actions

In action planning we are usually uncertain to some degree about the resources required for planned actions (compared with the resources that might be available) and about the results that our actions will bring. Many organizations sensibly set some priorities on actions so that if they find they cannot do everything on their action list it is easier to see what should be cut back. However, if we follow the logic of the uncertainty faced, it is clear that priorities on actions will themselves be educated guesses, and so it is sensible to revise priorities from time to time. Ideally, monitoring will provide increasingly clear information about the impact of various actions and their actual resource consumption, so priorities can be guided by this information.
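The sketch below shows one simple way priorities might be set and then revised as monitoring updates the estimates of impact and resource consumption. The actions and figures are invented for illustration.

```python
# Minimal sketch of evidence-driven prioritisation: rank actions by estimated
# impact per unit of resource, then re-rank when monitoring updates the
# estimates. Action names and figures are invented for illustration.

def ranked(actions):
    """Return action names sorted by impact per unit of resource, best first."""
    return sorted(actions,
                  key=lambda a: actions[a]["impact"] / actions[a]["resource"],
                  reverse=True)

# Initial educated guesses.
actions = {
    "retrain claims staff": {"impact": 8, "resource": 4},
    "new claims software":  {"impact": 9, "resource": 9},
    "tighten QA checks":    {"impact": 5, "resource": 2},
}
print("Initial priority order:", ranked(actions))

# After three months, monitoring suggests the software is paying off more
# than expected and retraining less, so the estimates (and the ranking) change.
actions["new claims software"]["impact"] = 15
actions["retrain claims staff"]["impact"] = 4
print("Revised priority order:", ranked(actions))
```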
These questions were hard for respondents to answer. Many did not know the answer and some may have been confused by the questions, as evidenced by the fact that 7 respondents said priorities were not set when answering the first question but only 4 said this in answer to the second question. It should have been the same each time.

Key Point: Priorities, where set, should be revised frequently in the light of evidence about effectiveness and resource consumption.

Rapid deliveries

Consider two projects identical in every respect except that one is planned in a waterfall style with much preparatory work eventually leading to a big bang implementation of beneficial change at the end, while the other delivers the same changes in much smaller pieces, delivering something every week or two. It is not hard to see that the second project, the incremental one, has a much better risk profile than the waterfall project. At any point in time our commitment of resources without seeing a benefit is much smaller for the incremental project.

What is less widely known is that the incremental project will usually be less costly as well as less risky. How can that be when surely there is an overhead caused by introducing, for example, 30 small software changes instead of just one big one? The explanation is that the work involved in changes rises rapidly, and non-linearly, with the size of the change. For example, a change that is twice as complicated requires much more than twice the resources to do. Thirty small deliveries of software really can be less effort than one big one that amounts to the same overall change. The advantage of smaller deliveries is larger for projects that face a lot of uncertainty. It is true that there is an overhead with multiple deliveries, but it is outweighed by the advantages of piecemeal delivery for most real life projects.

Another advantage with incremental deliveries is that you learn so much faster from delivering something tangible that can be used. As soon as something is delivered you can study its performance and learn. Months of talk and theory amount to little more than speculation compared to the valuable lessons of real experience, carefully measured. A simple first step organizations can take towards having projects stick to low risk, high learning action plans is to forbid long periods without at least one delivery of value to a stakeholder.
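The cost argument above can be illustrated with some purely assumed arithmetic. The sketch below supposes that the effort of a change grows with its size raised to the power 1.5 and that each delivery carries a fixed overhead; the exponent and overhead are illustrative assumptions, not measured values.

```python
# Illustrative arithmetic only: assume the effort of a change grows faster
# than its size (here size ** 1.5) and each delivery carries a fixed
# overhead. The exponent and overhead are assumptions chosen to illustrate
# the argument, not measured values.

def effort(size_units, overhead_per_delivery=2.0, exponent=1.5):
    """Effort of delivering a change of the given size in one go."""
    return overhead_per_delivery + size_units ** exponent

total_change = 30  # units of change to deliver overall

one_big_delivery = effort(total_change)
thirty_small_deliveries = sum(effort(1) for _ in range(30))

print(f"One delivery of 30 units:    {one_big_delivery:6.1f} units of effort")
print(f"Thirty deliveries of 1 unit: {thirty_small_deliveries:6.1f} units of effort")
# One delivery of 30 units:     166.3 units of effort
# Thirty deliveries of 1 unit:    90.0 units of effort
```

Even with the per-delivery overhead included, the incremental approach costs less under this assumed effort curve, which is the point being made above.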
Only a minority had any kind of procedure or policy in place to discourage high risk project designs. There is a considerable opportunity to improve execution by introducing such policies or procedures and making sure people are able to design action plans in the required structure.

Key Point: Organizations should have policies and procedures to discourage long periods without useful deliveries, and instead encourage frequent, incremental delivery of usable improvements. Managers do not always plan in this way when it is sensible to do so (which is almost always) and need the encouragement.

Rapid feedback

In choosing measures of performance for use in a performance management system there are many important factors to consider. One with particular relevance to risk and uncertainty is the speed with which the measures will be available. The sooner we can get feedback the better, because our actions may have more or less impact than we expect and we need to find out quickly. For example, if the goal is to improve academic performance in a school the ultimate measures are likely to be annual exam results, but getting one measurement a year (and one affected by so many other factors) is not going to be very helpful on its own. What we also need is measures that provide feedback from week to week. We need feedback that will quickly show the impact, or lack of it, when we deliver what we hope will be improvements.
It is interesting to see that more organizations recognise the need for frequent, rapid feedback on progress than encourage the frequent, rapid deliveries that would make that feedback really valuable.

Key Point: Measures of progress should include items that give frequent, rapid information.

Other mechanisms

The survey also invited respondents to describe any other ways that risk was managed within performance management. Although the responses given were interesting no striking patterns emerged.

Personal opinions of respondents

The final section of the survey gave respondents a chance to give their views on some key questions.
A large majority signed up to the most progressive notion of risk management, that of ‘Performing well’, with only a handful preferring the more limited and traditional alternatives. There is a slight tendency for those signing up to the more traditional alternatives to be more satisfied with their existing risk management arrangements, yet to have lower scores for their risk management on this questionnaire, but the differences are slight and the number of traditionalists who responded is low. About half of respondents thought that their organizational unit managed risk and uncertainty in performance management poorly or very poorly.

Scope for improvement

The scope for improvement is enormous. Although there are examples of good practice on nearly all the questions of the survey, no organization excelled in all areas. To analyse this point I constructed a risk management score by awarding a point each time one of the more risk aware answers was given to the factual questions. This score was broken into two parts: (1) the Obvious controls, and (2) the Progressive, less obvious controls. For example, analysing risks of an action plan is Obvious, but using scenario planning is Progressive. There was a correlation of r = 0.67 between Obvious and Progressive scores, but this does not amount to systematic coverage by some organizations.

Out of a possible 20 points for Obvious controls, the average score was 9.4 (47% of the maximum possible), the highest was 17, and the lowest was 5. There was a slight tendency for high scores to be achieved by larger organizations, but the correlation is weak. By comparison, out of a possible 21 points for Progressive controls, the average score was 7.4 (35% of the maximum possible), the highest was 16, and the lowest just 1. There was no tendency for better scores to be obtained by larger organizations. This not only illustrates the extent of the opportunity for improvement, but is consistent with the idea that lack of awareness generally is the reason for the patchy scores. In time, as ‘embedded’ risk management comes to mean something practical to more people, we will see a change for the better.

Respondent profile

There were 30 respondents in all, each one answering for a different ‘organisational unit’, perhaps a department, a company, a division of a charity, and so on. Most respondents were in the UK (20), with others from the USA (4), Australia (2), and 1 each from Tanzania, Malta, Thailand, and the Czech Republic. The respondents represented various sectors: Private sector (11), Central government (9), Local government (7), and Other public sector (3). Organizational unit size also varied greatly:
Respondents had various roles: Risk Manager (8), Performance Manager (8), Internal auditor (5), External consultant (2), and Other internal (7).

Finally

This survey has asked questions around a number of important ways that risk management should be embedded in performance management. It was a long survey (considering that respondents were voluntary) so does that exhaust the scope for embedding? Not at all. Soon after the survey was completed I came across two encouraging examples of work by large organizations that seek to go further. One example was a central government department that was experimenting with merging its risk management and performance management into one analysis and monitoring system. The other example was a project aiming to use statistical analysis to select KPIs, and continue selecting and adjusting them over time. A little more on these ideas is described in a recent publication, ‘Seven frontiers of internal control and risk management’.

Words © 2006 Matthew Leitch. First published 27 February 2006.