Working In Uncertainty

Misuses of Risk Listing on projects


Although Risk Listing is not the main approach to managing risk in the world today, and most people never have to do it, it has become strongly established as a required practice on large projects. Pushed by governments and the Project Management Institute (PMI), among others, Risk Listing has become very common on projects. Not only is Risk Listing widely used, but elements of it have been taken and applied to project management tasks that lie outside its scope.

In particular, (1) the contents of risk registers (generated within Risk Listing) have been used to provide crude probabilistic forecasts for schedule and cost, and (2) cost and schedule allowances have been attached to 'risks' in a risk register to provide a form of contingent budget and ongoing budgetary control mechanism. This extended use of risk-lists has not been helpful, and it would be better to use techniques specifically designed for the tasks involved rather than try to reuse risk-lists.

Risk Listing on projects

Risk-listing is the approach to risk management that involves making a list of 'risks' then trying to 'manage' the 'risks' on that list[1]. The characteristic language, diagrams, logic, and techniques of Risk Listing will be very familiar to anyone who has some knowledge of project risk management. The list of 'risks' is held on a risk register, which is just a table of 'risks' with no structure connecting them other than some categorisation. Typically, each 'risk' is rated rather subjectively for its 'probability' and 'impact' and then these ratings are combined into one risk score to prioritise the 'risks'. Decisions about how to 'manage' each 'risk' then follow.

This progression from 'risks' to responses to those 'risks' means that the actions considered are just ones that are wholly or primarily responses to 'risks', and this excludes most decisions. For example, on a project to create a railway line, a decision on whether to build a bridge over a river or a tunnel under it is a major decision with huge risk implications, but it is not within the scope of Risk Listing because it is not a response to a 'risk'.

Risk-listing does not help with decisions where there are other considerations beyond cost and control of an identified 'risk'. Such decisions are very common, perhaps especially at a 'strategic' level.

Nevertheless, Risk Listing has become very common on projects due to relentless promotion by influential bodies. As mentioned earlier, content from risk-lists has been re-used for other project management tasks.

The following section points out the methods already available and used for those project management tasks, describes how Risk Listing has been used instead, and explains some obvious problems arising.

Re-use of risk-lists on projects

Probabilistic forecasting

Forecasts of key outcomes for projects, such as cost and schedule, are useful in many ways. If trustworthy and provided in good time, they can support a wide range of decisions about a project, such as those concerning its structure, choice of technology, contract form, alternative schedules, and financial planning.

Many projects, especially large ones, involve many unknowns and cannot be controlled completely. Consequently, their outcomes are unpredictable, even for experts who have carefully thought through all available information.

As a result, probabilistic forecasts are usually more informative than best-guess forecasts and can prompt improvements to forecasting too. For example, it could be very helpful to know that a particular construction project, which is planned to finish before winter begins, could drag on into winter and the experts think this is quite likely though not the most likely outcome. Awareness of this kind of uncertainty prompts people to take sensible precautions and other actions.

The conventional way to make such probabilistic forecasts is to build some kind of model of the project and its environment. Such models are often fully quantified and put onto a computer so that forecasts can easily be made and revised. In such a model the variables can be of any appropriate type and connections, such as causal links, can be shown explicitly. Using historical information from past projects is also common.

Uncertainty around such models can be explored using various forms of sensitivity analysis, and Monte Carlo simulation, a simple technique that works with any quantified model, can be used to explore the combined impact of multiple uncertainties at the same time. The result is that best guesses are replaced by probability distributions showing, usually with graphs, what the future might hold given our current knowledge and plans.
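To make this concrete, here is a minimal Monte Carlo sketch in Python. The two activities, their triangular distributions, and the causal link between ground conditions and track laying are all invented for illustration; they are not taken from any real project or from the author's own examples.

```python
import random

def simulate_project():
    """One Monte Carlo trial of a toy two-activity project model.
    All distributions and the causal link are illustrative assumptions."""
    ground_works = random.triangular(20, 45, 30)   # days: (low, high, mode)
    # Causal link: difficult ground conditions also slow track laying.
    difficult_ground = ground_works > 35
    track_laying = random.triangular(40, 80, 50)
    if difficult_ground:
        track_laying *= 1.2                        # knock-on delay
    return ground_works + track_laying

trials = sorted(simulate_project() for _ in range(10_000))
p50 = trials[len(trials) // 2]                     # median outcome
p90 = trials[int(len(trials) * 0.9)]               # 90th percentile outcome
print(f"Median duration: {p50:.0f} days; 90th percentile: {p90:.0f} days")
```

The point of the sketch is that the output is a distribution of possible durations, not a single best guess, and that the causal link between the two activities is represented explicitly rather than ignored.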

Unfortunately, on some projects the material in a risk register has been used instead of attempting to create a coherent model. The 'risks' are imagined to be separate from each other except for (perhaps) some statistical correlations, and the ratings of each 'risk' are considered reliable.

This is not an appropriate use of the risk-list. The thinking behind Risk Listing is that the 'risks' are managed one at a time (or in small groups at most) and the ratings are designed to support this use. They are not designed to support overall forecasting.

Most 'risks' on a risk register have causal links to other 'risks' such that if one thing goes wrong others become more likely. This is not usually captured on a risk register and it is normal to ignore it in Risk Listing. However, for overall forecasting purposes links are crucial. The 'impact' ratings on a risk register usually represent the ultimate impact for the project of each 'risk' considered independently. However, if one 'risk' drives another then its impact will include a slice of the impact of the 'risk' it drives. Consequently, when you add up their impacts there is multiple counting of impact.

Risk registers have so many gaps that they tend to understate total uncertainty despite the multiple-counting of impact, but it would be silly to assume that these two types of error will always cancel each other out!
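A hypothetical numeric illustration of the multiple-counting problem, with invented impact figures:

```python
# Suppose 'late steel delivery' (risk A) would delay the project by 10
# days, and 'crane idle time' (risk B) is rated at 8 days, but 6 of
# those 8 days are knock-on effects of A and so are already inside A's
# rated impact. All figures are invented for illustration.
impact_A = 10
impact_B = 8
overlap = 6   # days counted in both ratings

naive_total = impact_A + impact_B            # what adding register ratings gives
true_total = impact_A + impact_B - overlap   # each day of impact counted once

print(naive_total, true_total)   # 18 vs 12: the sum overstates by 50%
```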

More broadly, the lack of a coherent model underlying the typical risk register means that it is unlikely to be as trustworthy as a model built with a deliberate attempt to capture causality, achieve coherence, and use available data. Good practice for developing and using models has been published many times[2]. A recent review of quality assurance of business critical models used in UK government[3] (including those for projects such as rail) summarises most of the main points and includes several mentions of how uncertainty should be dealt with. Risk register material falls far short of these standards in both its structure and its development process, and applying Monte Carlo simulation to a list of 'risks' cannot be logically justified because of the multiple-counting issue.

Contingent budgets and payments

Another area where Risk Listing content has been reused is budgeting/contract pricing.

The difficulty of completing a project often depends on conditions that are not known initially. For example, the work needed to renovate a building will depend on what problems are found as the work proceeds, such as various types of rot and crack. Similarly, the work needed to compensate customers for mis-selling of financial products will depend on how many customers come forward with claims, how many of those are genuine, and how difficult it is to investigate each claim.

In those examples, the drivers of cost are outside the control of even the most competent and hard-working person. In other situations the project team has some control, and in still others they have only themselves to blame if things don't go according to plan.

A number of different arrangements can be used to share out the pain of finding that work is more than expected, or the occasional pleasure of finding it is less. The situation is analogous whether a legal contract is involved between a client and contractor (e.g. a building company), or between two people in the same organization where one allocates a budget to the other. Arrangements widely recognised today include:

  • Fixed price: The contractor is paid the same amount regardless of conditions and the amount of work done. The contractor makes more profit if the work is easier than expected but makes less (and may make a loss) if the work is more than expected. The contractor is not compensated for the amount of work done or the difficulty of the conditions.

  • Fixed price after revision of quantities: The client measures/counts the work needed and contractors bid on this basis, giving unit prices for each type of work. Before work is done the contractor can re-measure/re-count the job and challenge the client's quantities. This leads to a revised fixed price based on the new quantities and items but the originally bid unit prices.

  • Unit pricing: The initial agreement is based on an indication of the work needed, itemised as a bill of works, but it is only the unit prices of items that are agreed initially. As the job proceeds the items of work done are counted and the contractor is paid for the work units actually done, priced according to the original agreement. This compensates the contractor for the amount of work done but not for the difficulty of the conditions.

  • Price adjustment formulae: Occasionally some unit prices are made dependent on market prices, exchange rates, or other available figures. In effect, this compensates the contractor for at least some of the difficulties of the conditions.

  • Cost plus: The contractor is paid whatever the job costs plus a fee that may be fixed or a percentage of the cost incurred. The client pays regardless of circumstances. The contractor is compensated for the amount of work done, the difficulty of the conditions, and even the contractor's own incompetence and laziness. The actual work conditions are not determined or considered.

  • Incentive terms: Cost plus contracts can leave contractors with no incentive to work hard or be competent so a variety of additional terms can be used to create helpful incentives. These include putting a limit on the total overall cost (including the fee), sharing costs above an initial estimate, sharing cost savings below an initial estimate, and various payments and penalties related to other aspects of performance such as schedule and sustainability.

  • Claims: In addition to all the adjustments that can be agreed up front there is also, often, a process by which the contractor can claim more time or money from the client. This is to compensate for a range of different challenges, including changes to the work asked for by the client, difficult conditions, mistakes by other parties, problems with supplies, and so on.
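The payment arithmetic for a few of these arrangements can be sketched as follows. The functions and figures are illustrative only; the cost plus case is shown in its fixed-fee variant.

```python
def fixed_price(agreed_price, actual_cost):
    """Contractor receives the agreed price regardless of cost."""
    payment = agreed_price
    return payment, payment - actual_cost   # (payment, contractor's profit)

def unit_pricing(unit_rates, units_done, actual_cost):
    """Contractor is paid per unit of work actually done, at agreed rates."""
    payment = sum(unit_rates[item] * qty for item, qty in units_done.items())
    return payment, payment - actual_cost

def cost_plus(actual_cost, fixed_fee=10):
    """Contractor recovers actual cost plus a fixed fee."""
    payment = actual_cost + fixed_fee
    return payment, payment - actual_cost

# Work turns out harder than expected: actual cost 120 against a plan of 100.
print(fixed_price(110, 120))                                 # (110, -10): loss
print(unit_pricing({'excavation': 5}, {'excavation': 24}, 120))  # (120, 0)
print(cost_plus(120))                                        # (130, 10)
```

The comparison shows who bears the pain of extra work under each rule: the contractor under fixed price, the client under cost plus, and a mixture under unit pricing (the contractor is compensated for quantity but not difficulty).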

In summary, these various arrangements break down into:

  1. Fixed amount: Fixed price and Fixed price after revision of quantities.

  2. Adjustable amount:

    1. Contingent amount: Adjustments made according to a pre-agreed formula (i.e. Unit price, Price adjustment formulae, Cost plus, Incentives).

    2. Controlled variations: Adjustments made by following a pre-agreed process but not with a formula (i.e. Claims).

The important difference between contingent amounts and controlled variations is that contingent amounts are negotiated up front, when the contractor is (usually) in a weaker bargaining position, while the variations are negotiated later, once the client and contractor are jointly committed to the project.

In all these arrangements it is usually desirable for any party that can influence the outcome to have a healthy incentive to do so. The last thing a client wants to do is reward a contractor for being incompetent or lazy. However, the last thing the contractor wants to do is agree to complete what turns out to be an impossible task for a fixed price.

At the same time, in the construction industry, the mutual bad feeling caused by continual wrangles over whose fault problems were has at times been so counter-productive that it has been worth agreeing something that is more forgiving to the contractor in the interests of harmony and teamwork.

(Once the contract is in place both parties have the challenge of making decisions related to the project. They can support those decisions with judgement or an explicit forecasting model that reflects the rules of the contingent funding and provides estimates of costs, profits, and other results of interest.)

In designing the details of contingent amount rules it is important to:

  • focus on factors that are at least partly in the control of the contractor so that the contractor's incentives are appropriate;

  • choose measurable factors to drive contingent funds; and

  • design contingency amount formulae that avoid multiple-counting.

(Here's an example to illustrate the problem of multiple-counting. Suppose a schedule allowance was available for days with heavy rain and another for days with strong winds. Clearly, some days could have both heavy rain and strong winds but would it be appropriate to just add the allowances? If work is impossible in heavy rain, and also in strong winds, then adding the allowances would be wrong.)
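That example can be worked through with a hypothetical ten-day weather log, assuming work stops on any day with heavy rain or strong winds:

```python
# Invented weather log: one (heavy_rain, strong_wind) pair per working day.
days = [(True, False), (True, True), (False, True), (False, False),
        (True, True), (False, False), (False, True), (True, False),
        (False, False), (True, True)]

rain_days = sum(1 for rain, _ in days if rain)             # 5
wind_days = sum(1 for _, wind in days if wind)             # 5
lost_days = sum(1 for rain, wind in days if rain or wind)  # 7 days work stops

added_allowances = rain_days + wind_days   # 10: rain-and-wind days counted twice
correct_allowance = lost_days              # 7: each lost day counted once
print(added_allowances, correct_allowance)
```

Adding the two allowances gives ten days even though only seven working days are actually lost, because the three days with both heavy rain and strong winds are counted twice.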

Designing the details of contingent amount rules well should be easier if the main purpose of the analysis is clearly to design contingent funding rules, but what if the main purpose of the analysis is something else?

In the last several years some projects have used 'risks' from a risk register in the funding/budget adjustments. The main idea is to use 'risks' as items that might need extra funds/budget, track them over time, and claim those extra allowances when 'risks' materialise.

The obvious concern is that the risk register 'risks' will not usually have been designed for this use. They might be hard to measure, unrelated to the contractor's level of control, and interlinked so that multiple-counting is almost inevitable. Given a set of 'risks' generated from a typical Risk Listing workshop, a lot of work may be needed to analyse measurability, the contractor's control, and the problem of multiple-counting. Also, the priorities given to 'risks' in a typical risk register may be of limited use for contingent budgeting purposes. Many 'risks', such as those related to safety, may be irrelevant to cost and schedule, or at least their importance to the cost and schedule may be small even though the 'risk' is thought to be very important for other reasons.

Another problem is that 'risks' in Risk Listing are usually written as events that either happen or do not happen. This restricts the scope for contingent rules to just this kind of binary variable. For example, suppose that a project is started to examine all the road signs on a long road and replace those that have been damaged. The obvious way to write a unit price funding rule would be to allow an amount for each sign of each type that has to be replaced. The fee can depend on the exact number of signs of each type replaced. With binary 'risks' the best that can be done is to allow an extra lump sum if the number of signs to be replaced is more than some threshold number.
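A sketch of the two rules side by side, with invented rates, counts, and threshold:

```python
def unit_price_allowance(signs_replaced, rates):
    """Fee scales with the exact count of each sign type replaced."""
    return sum(rates[sign_type] * n for sign_type, n in signs_replaced.items())

def binary_risk_allowance(total_replaced, threshold, lump_sum):
    """Binary 'risk' rule: a lump sum only if a threshold is crossed."""
    return lump_sum if total_replaced > threshold else 0

rates = {'warning': 150, 'speed_limit': 120}   # illustrative unit rates
signs = {'warning': 30, 'speed_limit': 45}     # signs actually replaced

print(unit_price_allowance(signs, rates))      # 9900: tracks work done exactly
print(binary_risk_allowance(75, threshold=60, lump_sum=5000))  # all or nothing
```

The unit-price rule compensates for every sign replaced, whereas the binary rule pays nothing up to the threshold and a fixed lump sum beyond it, however far beyond the actual count falls.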

In addition to these problems with the main idea of using 'risks' as the basis for contingent funds, there are other problems resulting from trying to apply more of the details of Risk Listing methods.

For example, a guide to interfacing risk management with earned value management[4] produced by the Risk Special Interest Group of the Association For Project Management (APM) describes an approach to using risk register content in this way. In this method, the 'risks' include 'opportunities' too. These are sometimes alternative ways to design the project that might be better, so they are not just 'bad risks' and conceptually require different consideration.

Also, the APM's approach involves 'uncertainties' as well as 'risks'. The 'risks' are events worded so that they either happen or do not happen. The 'uncertainties' are the result of putting ranges on estimates. This highlights the challenge of separating 'uncertainties' from 'risks' and so avoiding another form of multiple-counting. For example, with the project to find and replace damaged road signs, the planner could regard the number of damaged signs as 'uncertain' and give an estimated range in some way, or could make a best guess of the number and add a 'risk' that it is greater than the best guess or something like it. Alternatively, the planner could add a 'risk' about signs damaged by recent bad weather and also show uncertainty about the number of damaged signs using a range, but a range that should now exclude the damage caused by recent bad weather. At this point, the difficulty of separating 'risks' from 'uncertainties' should be clear.

The initial inspiration for using risk register content as a means of contingent funding may well have been that it would be convenient. It is not, and just writing contingent funding rules in the conventional way, without getting tied up with 'risks', is much simpler and easier to understand.

Further reading

  1. Leitch, M. (2012). The Risk Listing school

  2. Leitch, M. (2012). Relevant authoritative guidance

  3. HM Treasury (2013). Review of quality assurance of Government analytical models: final report

  4. APM Risk Special Interest Group (2008). Interfacing Risk and Earned Value Management.



Words © 2014 Matthew Leitch