Risk registers: slicing and dicing risk

The typical use of risk registers and, especially, the risk assessment methodology raises many issues. Often just two numbers summarise uncertainty: probability and impact. They may be shown on a two-dimensional risk heatmap, or compressed into a single number (e.g. probability * impact) or even a colour.

Aren't you already just a little uneasy? This article shows, through a simple illustrative example, how confusion is hardwired into this methodology. In the example the probabilities are known, so the core issue of how to "slice up" risk is not confused with the difficulty of estimating probabilities subjectively.

First I'll recap the probability-impact risk assessment (PIRA) method used in most risk registers. Then, using numerical examples, I'll show PIRA's flaws.

How PIRA works

Many companies place significant reliance on the risk assessments made in risk registers. Usually some degree of assurance will be taken from the process and reporting via heatmaps. Audits of controls may use a "risk based" approach, often driven by the gross and net (of controls) risk assessments within risk registers.

It follows that if the risk assessments in risk registers are wrong, inconsistent, misleading or impossible to interpret we should be concerned.

If your firm uses probability-impact risk assessment (PIRA) here's the question: is PIRA simple or simplistic – can it easily mislead you?

Let's take a look at a "P-I matrix" which is often used to guide assessors using the PIRA methodology. Naturally not all companies will use all the features described.

Using the matrix

  1. The assessor thinks of a potential risk event/loss over the next year.
  2. He estimates the probability of this occurring: say 10%.
  3. Using the matrix, 10% is converted to a probability rating of 2.
  4. The likely financial impact if the event occurs is estimated: say £15m.
  5. Using the matrix, £15m is converted into an impact rating of 3.
  6. The risk assessment is (probability, impact) = (2,3).
  7. This may be compressed further to 2 * 3 = 6.
  8. A "RAG status" may be assigned – amber in this case. (A code sketch of these steps follows the list.)
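
A minimal sketch of steps 2-8 in code follows. The band boundaries and the score-based RAG rule are assumptions chosen to match the worked example (10% giving P = 2, £15m giving I = 3, score 6 amber); real matrices vary by firm and, as discussed below, often colour each (P, I) cell individually.

    import bisect

    # Assumed band boundaries, consistent with the worked example above;
    # real P-I matrices vary by firm.
    PROB_BANDS = [0.03, 0.15, 0.50, 0.80]   # upper bounds for P = 1..4; above 0.80 -> P = 5
    IMPACT_BANDS = [1, 10, 50, 100]         # £m upper bounds for I = 1..4; above 100 -> I = 5

    def rating(value, bands):
        # Boundary values stay in the lower band here; as discussed below,
        # real matrices handle end points inconsistently.
        return bisect.bisect_left(bands, value) + 1

    def pira(probability, impact_gbp_m):
        p = rating(probability, PROB_BANDS)
        i = rating(impact_gbp_m, IMPACT_BANDS)
        score = p * i
        # A score-based RAG rule is a further assumption; many matrices
        # colour each (P, I) cell individually.
        rag = "green" if score <= 4 else "amber" if score <= 9 else "red"
        return p, i, score, rag

    print(pira(0.10, 15))   # (2, 3, 6, 'amber') - the worked example above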

The probability-impact matrix (P-I matrix)

Impacts down the first two columns, probabilities in the bottom two rows.

The PIRA method is attractively simple. Add some Excel functionality and you have a seemingly winning combination. But let's look a little deeper.

Too little guidance, too much choice

In our simple example a company has a liability defined (in £m) as ten times the total score on two yet-to-be-rolled dice.

The reason for giving this example is that it illustrates how confusion can arise even where the probabilities and impacts are known.

Suppose probabilities and impacts are defined by rolling two dice. The probability is simply that of rolling the outcome. The impact is (by definition) ten times the sum of the two scores. Thus an impact of 30 = 10 * (1+2) comes from the two possibilities of scoring 3, (1,2) and (2,1), with a total probability of 2/36.

The impacts range from 20 to 120 and are summarised as follows:

Scores (row/column 1) and impact (£m, body)

Score 1 / 2     1     2     3     4     5     6
     1         20    30    40    50    60    70
     2         30    40    50    60    70    80
     3         40    50    60    70    80    90
     4         50    60    70    80    90   100
     5         60    70    80    90   100   110
     6         70    80    90   100   110   120
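
The table is quick to reproduce in code. Here is a minimal sketch, assuming only the definition above (impact = 10 * total score, each of the 36 equally likely outcomes having probability 1/36):

    # Each cell is 10 * (score 1 + score 2), in £m; each cell has probability 1/36.
    for d1 in range(1, 7):
        print(d1, [10 * (d1 + d2) for d2 in range(1, 7)])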

The probability-impact matrix (P-I matrix)

There are some minor irritations here:

  • End points muddle. An impact of £1m is "bumped up" to I = 2 in the matrix. An impact of £100m is not "bumped up" to I = 5. This is design carelessness.
  • Calibration confusion. There are three P * I values of 4. (2,2) is coloured green but (1,4) and (4,1) are coloured amber. Why?

Inconvenient truth: Although the dice table above completely characterises the "dice risk", there are several ways of summarising it. We now look at four.

Approach 1: grouping by individual impacts

Effectively there is no further summarising here, just colouring and prioritising according to the heatmap criteria.

  • The impact table above records each of the 36 possible outcomes
  • Each of the 36 results has a probability of 1/36 = 2.8%
  • Since 2.8% < 3%, each result is assigned P = 1 in the matrix
  • With P = 1, we look down the first shaded column in the P-I matrix above
  • Impacts of more than 50 are amber; smaller impacts are green
  • This makes sense, but surely no one will record 36 separate risks?

I have interpreted the "bumping up" of impacts on the edge of categories in a particular way in the above. None of this affects the central arguments.
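
For completeness, here is a sketch of Approach 1 in code, using the same assumed impact bands as earlier; each of the 36 outcomes is recorded as a separate "risk" with P = 1:

    import bisect

    IMPACT_BANDS = [1, 10, 50, 100]  # assumed £m upper bounds for I = 1..4

    # Each outcome has probability 1/36 = 2.8% < 3%, so P = 1 throughout;
    # the colour follows the impact alone (> 50 amber, otherwise green).
    for d1 in range(1, 7):
        for d2 in range(1, 7):
            impact = 10 * (d1 + d2)
            i = bisect.bisect_left(IMPACT_BANDS, impact) + 1
            colour = "amber" if impact > 50 else "green"
            print(f"({d1},{d2}): impact {impact}, P=1, I={i}, {colour}")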

Summary: At this point things look reasonable at an intuitive level; amber in the middle with red and green at the extremes. Now let's do some grouping.

Approach 2: grouping by common impacts

With this approach we note that there are several ways to obtain the same impact: for example, 30 = 10 * (1+2) arises from both (1,2) and (2,1). A full summary of each impact is:

Individual impact   Probability      Matrix probability (P)   Matrix impact (I)   P * I
 20                 1/36 =   2.8%    1                        3                    3
 30                 2/36 =   5.6%    2                        3                    6
 40                 3/36 =   8.3%    2                        3                    6
 50                 4/36 = 11.1%     2                        3                    6
 60                 5/36 = 13.9%     2                        4                    8
 70                 6/36 = 16.7%     3                        4                   12
 80                 5/36 = 13.9%     2                        4                    8
 90                 4/36 = 11.1%     2                        4                    8
100                 3/36 =   8.3%    2                        4                    8
110                 2/36 =   5.6%    2                        5                   10
120                 1/36 =   2.8%    1                        5                    5
Total              36/36 = 100%
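
A short sketch reproduces the table above. The band boundaries are again assumptions, though they match every rating in the table:

    import bisect
    from collections import Counter
    from fractions import Fraction

    PROB_BANDS = [0.03, 0.15, 0.50, 0.80]   # assumed, as before
    IMPACT_BANDS = [1, 10, 50, 100]

    # Group the 36 outcomes by common impact, then rate each impact's probability.
    impacts = Counter(10 * (d1 + d2) for d1 in range(1, 7) for d2 in range(1, 7))
    for impact, count in sorted(impacts.items()):
        prob = Fraction(count, 36)
        p = bisect.bisect_left(PROB_BANDS, float(prob)) + 1
        i = bisect.bisect_left(IMPACT_BANDS, impact) + 1
        print(f"{impact:>3}: {count}/36 = {float(prob):5.1%}, P={p}, I={i}, P*I={p*i}")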

It's getting odd: the point of greatest concern (the highest P * I, 12) is actually the most likely result, an impact of 70 with probability 16.7%. This is not what (e.g.) an insurer means by risk.

The PIRA method issues a "call to action" via heatmaps. This commonly ignores the cost-benefit of taking action, in favour of a relatively naive approach which we might call "colouring for grown-ups". The cost-benefit should be considered for all risks, but especially for those with a return element; in the dice example an insurer may have charged a premium of 80 for covering the risk. Returns after any losses then vary from 60 (80 - 20) down to -40 (80 - 120).
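
A quick check of the economics, as a sketch using the premium of 80:

    from fractions import Fraction

    premium = 80
    # Expected loss: the average total on two dice is 7, so 10 * 7 = 70.
    expected_loss = sum(Fraction(10 * (d1 + d2), 36)
                        for d1 in range(1, 7) for d2 in range(1, 7))
    print(expected_loss, premium - expected_loss)   # 70, and an expected profit of 10
    returns = sorted({premium - 10 * (d1 + d2) for d1 in range(1, 7) for d2 in range(1, 7)})
    print(returns)   # -40 up to 60, as described above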

That's a good trade. In fact insurers have few such opportunities where the probabilities are known. The "control" against the -40 result would be to write a lot more such business. Other insurers would quickly realise what was going on and profit margins would be competed away. There is a theory that says the true rewards come to companies that can assess and manage subjective uncertainty, in particular probabilities that require judgement rather than just calculation.

Approach 3: grouping all impacts in a range

This approach acknowledges the range of impacts specified in the probability-impact matrix. We want to group (and sum the probabilities of) results according to whether I = 1, I = 2 etc. This comes from grouping items in the table above to produce the table immediately below:

Impacts grouped by matrix impact rating (I)

Matrix impact (I)   Total probability   Matrix probability (P)   P * I
1                    0/36 =  0.0%       0                         0
2                    0/36 =  0.0%       0                         0
3                   10/36 = 27.8%       3                         9
4                   23/36 = 63.9%       4                        16
5                    3/36 =  8.3%       2                        10
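
A sketch of Approach 3, with the same assumed bands as before:

    import bisect
    from collections import Counter
    from fractions import Fraction

    PROB_BANDS = [0.03, 0.15, 0.50, 0.80]   # assumed, as before
    IMPACT_BANDS = [1, 10, 50, 100]

    # Sum the probabilities of all outcomes in each impact bucket I = 1..5,
    # then rate that total probability.
    buckets = Counter()
    for d1 in range(1, 7):
        for d2 in range(1, 7):
            impact = 10 * (d1 + d2)
            buckets[bisect.bisect_left(IMPACT_BANDS, impact) + 1] += 1

    for i in range(1, 6):
        prob = Fraction(buckets[i], 36)
        p = bisect.bisect_left(PROB_BANDS, float(prob)) + 1 if buckets[i] else 0
        print(f"I={i}: {float(prob):5.1%}, P={p}, P*I={p*i}")

The headline P * I is now 16, against a maximum of 12 under Approach 2: the same risk, sliced differently, issues a different call to action.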

The probability-impact matrix (P-I matrix)

Here's where it gets really tricky

  • Already we have "compressed uncertainty" by grouping the 36 outcomes into the 5 "impact buckets" – this may be acceptable.
  • Did the risk assessor ever work through this process, or did they immediately think of a single scenario?
  • Was the assessor motivated by what they've read about recently, or have seen on the news? This is the so-called availability heuristic.
  • Did the assessor immediately start thinking about the "orange impact 5" event? Should he have chosen the red "3" or "4" instead? Which one?

This may seem quite abstract; you don't hear about dice on the news. What about assessing weather damage, where the scenarios are more newsworthy?

Approach 4: reporting risk as a single (P,I) point on a heatmap

The P-I matrix above is supposed to help assess a single risk. Many risk registers then plot all the risks on a heatmap; the one discussed here is reproduced from the COSO risk assessment guide. (A plotting sketch follows the list of key features below.)

Heatmap key features:

  • Risks are plotted on a likelihood (probability) and impact scale
  • The red-amber-green approach to highlighting importance
  • Other dimensions (speed and vulnerability) – no more comment here
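
As a final sketch (assuming matplotlib, three hypothetical risks, and the same illustrative colour bands used earlier), plotting each risk as a single (P, I) point might look like this; note that each point silently discards the whole distribution behind it:

    import matplotlib.pyplot as plt

    # Hypothetical risks as single (likelihood, impact) points.
    risks = {"Dice liability": (3, 4), "Risk B": (2, 2), "Risk C": (4, 5)}

    fig, ax = plt.subplots()
    for p in range(1, 6):
        for i in range(1, 6):
            score = p * i
            # Assumed score-based colour bands, as in the earlier sketches.
            colour = "green" if score <= 4 else "orange" if score <= 9 else "red"
            ax.add_patch(plt.Rectangle((p - 0.5, i - 0.5), 1, 1, color=colour, alpha=0.3))
    for name, (p, i) in risks.items():
        ax.plot(p, i, "ko")
        ax.annotate(name, (p, i), textcoords="offset points", xytext=(5, 5))
    ax.set(xlim=(0.5, 5.5), ylim=(0.5, 5.5), xlabel="Likelihood (P)", ylabel="Impact (I)")
    plt.show()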

Verdict on this assessment process

The simple reporting approach contains the seeds of greatness. There is a range of risks, including strategic risk. The multi-dimensionality of risk is recognised. The flaw is that the event-probability-impact approach compresses risk information and may omit the most important risks.

Where next? The risk register series

User beware. Many risk experts have warned of the common flaws in risk registers. It doesn't have to be this way. The first half of the set of articles below is generally positive, starting with how five potential audiences might make better use of risk registers. The second half warns of some really dangerous flaws.

© 2014-2017: 4A Risk Management; a trading name of Transformaction Development Limited