Better risk assessment

Risk assessment is a critical component of risk management and decision making. A risk register is an important risk management tool for many organisations.

The probability-impact risk assessment (PIRA) methodology usually embedded in risk registers is a worm at the heart of risk management.

The worm probably won't kill you, but it's not doing you any good. It may provide false assurance and justify foolish decisions or just waste time.

Along with Douglas Hubbard I claim that risk assessments in risk registers are often wrong, inconsistent, misleading or impossible to interpret. This article quotes reputable sources, describes the issues with the PIRA methodology and suggests practical improvements and alternatives.

Risk assessment: the biggest challenge for risk registers

In many companies reliance is placed on the risk assessments made via risk registers. Usually some degree of assurance will be taken from the process and reporting via heatmaps. Audits of controls may use a "risk based" approach, often driven by the gross and net (of controls) risk assessments within risk registers.

If the risk assessments in risk registers are in many cases wrong, inconsistent, misleading or impossible to interpret we should be concerned. Is that your firm?

Is this just me, seeking consultant-led change? Let's hear from four reputable sources.

First a former UK government actuary:

Heatmaps don't really work at the strategic level. They try to get you to allocate a likelihood and impact to each risk. But for every risk there's a whole range of impacts. Let's take rain affecting cricket matches. The impact and likelihood of occasional drizzle is different to a thunderstorm which differs again from a summer monsoon. So which do you choose? Anyone using a heat map in this way is taking a view on which type of rain matters and very rarely are they transparent in doing so. We need to have a simpler analysis which can be quantified. Source: Trevor Llanwarne, former Government Actuary: Risk registers that work at board level

Next, a review from the UK Government Office for Science:

One key weakness of deterministic assessments is that they are not readily comparable across risks ... comparisons between deterministic scenarios will not be on a consistent basis as both the likelihood and impact for scenarios will vary. However in practice risk managers routinely compare several deterministic scenarios and make decisions on that basis. Blackett review: High impact low probability risks

New for 2015 comes US risk expert, Dave Ingram:

IT is a medieval, or possibly pre-medieval practice for evaluating risks. That is the assignment of a single Frequency and Severity pair to each risk and calling that a risk evaluation. So stop IT. Stop misinforming everyone about your risks. Stop using frequency severity pairs to represent your risks. Dave Ingram Just stop IT now, and don't do IT again

Finally, the US author, risk management consultant and inventor of Applied Information Economics, Douglas Hubbard:

The most popular risk management methodologies today are developed in complete isolation from more sophisticated risk management methods known to actuaries, engineers and financial analysts. ... the methods developed by the management consultants are the least supported by any theoretical or empirical analysis. The structured risk management methods that management consultants have developed are much more likely, no matter how elaborate and detailed the methodology, to be based on simple scoring schemes. Douglas Hubbard, author of The failure of risk management

At worst, I am not alone.

Probability-impact: the well worn path

Let's take a look at a "risk matrix" which is often used to guide assessors using the PIRA methodology. Naturally not all companies will use all the features described.

Probability-impact risk guidance matrix

How is the guidance used?

  1. In respect of some risk the assessor thinks of a potential event/loss over the next year
  2. He estimates the probability of this actually occurring: let's say it's 10%
  3. Using the table / matrix above, the 10% is converted into a probability rating of 2
  4. He estimates the likely financial impact of the event, assuming it occurs: let's say it's £15m
  5. Using the table / matrix above, the £15m is converted into an impact rating of 3
  6. The PIRA result is (probability,impact) = (2,3)
  7. This may be compressed further to 2 * 3 = 6
  8. An amber "RAG status" may be assigned
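The eight steps above can be sketched in code. Here is a minimal Python illustration; the band boundaries and RAG thresholds are hypothetical, since each firm defines its own matrix:

```python
# Hypothetical rating bands -- each firm defines its own matrix.
PROB_BANDS = [0.05, 0.25, 0.50, 0.75]    # upper bounds for ratings 1-4; above -> 5
IMPACT_BANDS = [1e6, 10e6, 25e6, 100e6]  # upper bounds in GBP for ratings 1-4

def rating(value, bands):
    """Convert a raw value into a 1-5 rating using ascending band boundaries."""
    for i, upper in enumerate(bands):
        if value <= upper:
            return i + 1
    return len(bands) + 1

def pira(probability, impact_gbp):
    """Steps 2-8: convert (probability, impact) into ratings, a score and a RAG status."""
    p = rating(probability, PROB_BANDS)
    i = rating(impact_gbp, IMPACT_BANDS)
    score = p * i
    rag = "Green" if score <= 4 else "Amber" if score <= 9 else "Red"
    return p, i, score, rag

print(pira(0.10, 15e6))  # -> (2, 3, 6, 'Amber'), matching the worked example
```

Note how much information is discarded: the 10% and £15m are gone by the time we reach the score of 6.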

Probability and impact: a winning combination?

Reasons for the seeming popularity of the PIRA method might include:

  • Common sense: Who could argue that risks have probabilities and impacts?
  • Simplicity: Easy to calculate and easy to communicate, to senior management as well as technicians.
  • Ease of implementation: Easy to calculate and display, using Excel functionality.
  • Best practice(??): Certainly the method is expected by many consultants and accountants. We'd label it pseudo-science.

Ouch. Let's look a little deeper.

Probability-impact: the worm

The PIRA methodology usually compresses uncertainty into just two numbers – probability and impact. Those numbers may be shown on a two-dimensional risk heatmap, or compressed further into a single number: probability * impact perhaps. Are you just a little uneasy with red, amber and green?

There are some specific problems: for example, it may be impossible to recapture from a single number – 6, say – what the separate ratings for probability and impact were. Despite this, there is nothing fundamentally wrong so far; better reporting could improve things, e.g. we need not multiply the two numbers together. But at the heart of the PIRA approach lie two more serious issues: risk assessment using this methodology is usually inconsistent and incomplete. We now take a closer look at this claim.


Let's suppose that we work for an insurance company based on the coast of Florida, USA. We are asked to assess the risk of ocean wave damage to our buildings.

If we ask three people (even experts!) we might get three very different answers, none of which are "wrong" within a normal PIRA framework. How is this possible?

When people do risk assessments they tend to think in terms of scenarios, perhaps motivated by what they've read about recently, or have seen on the news. This so-called availability heuristic can cause havoc with probability estimation, but that's not the point we're after here.

Consider three types of wave, which can be thought of as scenarios:

  1. Wind-driven ocean waves
  2. Earthquake-driven tsunami waves
  3. Asteroid-driven tsunami waves

Within each scenario there is a range of wave heights, as shown above. For a given scenario the smallest and highest wave heights have the smallest probability (the height of the bar graph) with more central wave heights having a greater likelihood.

The red curve is an attempt to convert wave height into financial loss. We'll just assume this can be done and not blame everything on PIRA! This single curve summarises all potential wave damage losses and is intuitively appealing; smaller losses have a greater probability while larger losses are much less likely.

We simply don't know what scenario the risk assessor has chosen

This may be drawn out at an interview or the whole assessment process may be done remotely. Even if an interview takes place and we understand the scenario, if we are not careful this gets lost by the time we move into "heatmap reporting".

Things are worse in practice

It may become obvious that different people assessing the same risk are producing inconsistent results and we may be able to resolve that. In practice different people assess different risks. It is much less likely that we will spot the implicit inconsistencies – but inconsistency is just as likely to be there.

Incomplete: missing impacts

Let's assume that impact ranges are used (the probability of a wave height of exactly 30 feet is effectively zero, as is the probability that it causes exactly $2bn of damage). Let's suppose all three assessors identified identical scenarios and all decided to assess the middle scenario.

We still have the incompleteness of two missing scenarios. This means that the red "loss distribution" curve shown on the graph would be incomplete. It would only have a middle piece and some financial losses would not be assessed.

Incomplete: missing risk types and risks

There's another way in which risk assessment can be incomplete: risks can be missed at the "identification" stage. If a series of "risk meetings" using brainstorming techniques is the primary method for "identifying risks" the focus will usually be on "bad things that might happen", sometimes known as "event-like risks".

In practice two types of uncertainty will be played down or even omitted by such a process:

  • Estimation error: We will be wrong about key parameters in our models: the future course of interest rates, next year's expenses etc. How important is this?
  • Model error: Our financial and conceptual models are limited by our knowledge, excluding the so-called Unknown unknowns.

There's more: we might also omit Known unknowns, either because we believe them to be immaterial or for other reasons (e.g. politics).

But rather than wallow in the problems let's look for some improvements.

Improving risk assessment: a prototype

Aims for the prototype solution

The prototype must be powerful enough to deliver the following:

  • Clarity on what level of risk is being assessed (through scenarios or otherwise).
  • Consistency of risk assessment for a given risk owner and between owners (and risks).
  • Completeness of risk assessment, as far as is humanly possible.
  • Aggregation i.e. the potential to aggregate all risks to calculate a "total risk" in some sense.

The triangular prototype

Instead of asking for two numbers – probability and impact – we ask for four, explained below. While this seems like twice as much work, the four numbers have an intuitive appeal that should make forming judgements easier and faster. We get consistency and (some) completeness thrown in for free. A great trade-off.

Suppose we are trying to estimate the potential "impact" of a "risk". To make things specific, consider the uncertainty around a project cost:

The four numbers we ask for are:

  1. Chance of zero cost: The (potentially zero) probability of the project not going ahead.
  2. Minimum cost: This may be zero, but is £2m in the diagram above.
  3. Most likely value: Leaving mathematical niceties aside, the peak value of £5m in the diagram above.
  4. Maximum cost: This is £8m in the diagram above.
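As a small illustration of how much these four numbers already pin down: the mean of a triangular distribution is simply (a + b + c) / 3, and a point mass at zero scales it down. A Python sketch (the 25% no-go probability is illustrative):

```python
def expected_cost(a, b, c, p_zero=0.0):
    """Mean of a triangular(a, b, c) cost with an optional point mass at zero,
    e.g. the chance the project does not go ahead at all."""
    return (1.0 - p_zero) * (a + b + c) / 3.0

print(expected_cost(2, 5, 8))        # symmetric case: mean equals the peak, 5.0
print(expected_cost(2, 5, 8, 0.25))  # with a 25% chance the project is dropped: 3.75
```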

Is it pitched right?

Given that it's only a prototype, the words of a Stanford professor are encouraging:

I am now convinced that modelling every distribution in the world as triangular, specified by a minimum, maximum, and most likely value, would be a significant improvement over the status quo. Sam Savage, author of The flaw of averages

Let's explore the prototype in more detail.

A pointer example: project management

Suppose we have a single risk assessor seeking to assess a single risk. We'll take a project management example, with an assessment expressed as a graph.

We'll model the possible project costs using a triangular distribution. This approach is used by "real" project managers.

Key features

  1. The potential project cost varies between £2m and £8m.
  2. The most likely cost is £5m: the "base cost".
  3. In this case the distribution is symmetrical around the base cost. This is often not the case.

Some small tweaks

Let's change the graph above slightly, to be a little more realistic.

The orange dot at the point (0,0.25) indicates that there is a 25% chance that the project doesn't start, and hence has zero cost. The red dot is one example of a project cost overrun. The probability of a project cost of more than around £7m is the area of the green triangle.

How we can use this distribution approach

Assuming our triangular distribution is correct, it clearly encapsulates all that we want to know about the possible project costs. It is a complete risk assessment.

Example: Many risk managers would be concerned with the right hand "tail" of the distribution i.e. the highest possible project costs. Elementary calculations show that, if the project goes ahead, there is a 5% chance of it costing more than £7.05m – that's the red dot on the right. We show below how to calculate the 7.05.

The triangular distribution: so much for so little

To specify a triangular distribution you need only set:

  • The minimum possible value: a = 2 in the example above
  • The most likely value: b = 5 in the example above
  • The maximum possible value: c = 8 in the example above

There may also be a "point mass" at zero, usually corresponding to an event not happening (there is no hurricane, the project doesn't go ahead etc). This is the orange dot at a height of 0.25 above.

From this small amount of information we can derive the x value beyond which a proportion K of the total area lies (i.e. K = 5% in our example above).

Triangular distribution: summary of benefits

The simple form of the triangular distribution makes it easy to:

  • Determine the probability that the project costs more than a specified amount
  • Conversely determine the cost which we are 95% confident the project will not exceed

Triangular distributions have a big benefit: you can get a valuable tool and insights without needing to give much in return. They are a proof of concept. More is possible – if required – by using more sophisticated distributions e.g. where the top loss or cost is not capped at a finite amount.

But the triangular prototype is very practical, yielding simple formulae for the results we need. The one given below can be implemented e.g. in Excel.

Formula for the right hand tail: xK = c - √[2 * K * (c - b) / h] = c - √[K * (c - b) * (c - a)], where h = 2 / (c - a) is the height of the triangle's peak.

  • This works so long as xK is to the right of b, i.e. K is no greater than the area to the right of b, which is h * (c - b) / 2 = (c - b) / (c - a).
  • If this is not the case, the analogous left hand tail formula with area 1 - K gives the correct result.

Putting a = 2, b = 5, c = 8 and K = 5% we get the 5% right hand tail of x5% = 8 – √[5% * (8 - 5 ) * (8 - 2)] = 8 – √0.9 = 7.05.
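The tail formula is easy to implement; a Python sketch covering both the right-hand case and the left-hand fallback:

```python
import math

def right_tail_x(a, b, c, K):
    """Value x such that P(X > x) = K for a triangular(a, b, c) distribution.
    Uses x_K = c - sqrt(K * (c - b) * (c - a)) while x_K lies right of the
    peak b, i.e. K <= (c - b) / (c - a); otherwise solves via the left tail."""
    area_right_of_b = (c - b) / (c - a)
    if K <= area_right_of_b:
        return c - math.sqrt(K * (c - b) * (c - a))
    # x lies left of the peak: the left-hand tail area is 1 - K
    return a + math.sqrt((1 - K) * (b - a) * (c - a))

print(round(right_tail_x(2, 5, 8, 0.05), 2))  # -> 7.05, as in the example
```

The same function could sit behind an Excel formula, as suggested in the text.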

Does the prototype "solve" consistency, completeness and the rest?

Consistency and completeness get full coverage in separate sections below. Let's look at clarity and aggregation.

Clarity. The values of a, b and c – plus any "point mass" probability e.g. at zero – should be clear for each risk. Scenarios may or may not be involved.

Aggregation. In principle the triangular prototype can be extended from a single assessment to aggregate all risks, although it may be better to do this in a corporate model rather than a risk register. Essentially we need two things:

  1. a simulation tool which enables us to sample from a risk's "loss distribution".
  2. Some means of allowing for the relationship (if any) between risks; do large losses tend to be associated or is there little connection?

Correlations are just one way to do (2), but further detail is beyond the scope of this article.
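As an indication of how simple the simulation tool in (1) can be, here is a Monte Carlo sketch in Python. The three risks are invented for illustration and, per the caveat above, they are assumed independent; correlations or another dependence model would be needed in practice:

```python
import random

# Illustrative risk register: (min, most likely, max, P(zero loss)) per risk.
RISKS = [
    (2.0, 5.0, 8.0, 0.25),   # e.g. the project cost above
    (0.0, 1.0, 4.0, 0.10),
    (0.0, 0.5, 10.0, 0.50),
]

def simulate_total_loss(risks, n_sims=100_000, seed=1):
    """Monte Carlo aggregation assuming the risks are independent."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        total = 0.0
        for a, b, c, p_zero in risks:
            if rng.random() >= p_zero:
                total += rng.triangular(a, c, b)  # note: the mode is the 3rd argument
            # else: the event doesn't happen and contributes zero
        totals.append(total)
    return totals

totals = sorted(simulate_total_loss(RISKS))
print("median total loss:", round(totals[len(totals) // 2], 2))
print("95th percentile  :", round(totals[int(0.95 * len(totals))], 2))
```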

Putting risk assessment right: completeness

We've got completeness cheaply

We have completeness. We noted above that even complex probabilities such as those at the tail can be calculated using relatively straightforward formulae and can be coded up (e.g.) in Excel. Our Probability: magic without mystery article covers even more.

The completeness has come rather cheaply in terms of the number of assumptions we've had to make. Two points make this even better.

  1. The assumptions are intuitive: It is not unreasonable to expect an expert to be able to give a view on a, b and c. They correspond to minimum, most likely and maximum costs respectively. We're not looking for statistical complexities such as "standard deviation" or "skewness". To be fair we may also need the probability of a zero cost / loss (the point mass).
  2. Help on these assumptions may come from elsewhere:
    • It may be obvious that a = 0 i.e. the minimum loss may be zero.
    • The value of b may have been mentioned in the business plan.
    • It may also be clear that the point mass is zero e.g. the total claims for a large life insurer.
    • For some risks e.g. financial risks such opinions may not be needed.

But was it a cheap trick?

I am now convinced that modelling every distribution in the world as triangular, specified by a minimum, maximum, and most likely value, would be a significant improvement over the status quo. Sam Savage, author of The flaw of averages

More often than not there is a technically better distribution than the triangular. But several points support our approach:

  1. As noted above, it may be better than the status quo.
  2. It may be good enough for practical purposes i.e. decision making.
  3. The approach can be enhanced with additional distributions in due course.

In respect of the third point, the central risk management team might take the triangular distributions and turn them into something more appropriate. This may include "fattening the tails" i.e. allowing for costs/losses – possibly uncapped – beyond the triangular maximum of c.

Putting risk assessment right: consistency

Consistency: with triangles

If it is appropriate to use the triangle approach, it is clear that this method delivers consistency. Some subjectivity remains, but PIRA's avoidable subjectivity (which scenario is being assessed?) has been removed; risk assessors know the level of risk they are aiming to assess.

Consistency: beyond triangles

We can generalise and give risk assessors more flexibility if that is required. Without getting into the mathematics, a more general approach than triangles is:

  • Specify a distribution type: Probably led by the central risk team.
  • Set out some scenarios: As many scenarios as there are distribution parameters.
  • Use the scenario probabilities: Solve for the distribution parameter values.

The result is a distribution capable of returning probabilities of the form prob(loss > x) = p. These can be "sense checked" by the risk assessor.

This process can be used for all assessors and risks.

Alternative routes to consistency and completeness

Point and list of alternatives

PIRA risk assessment is flawed. It has the appearance of being scientific: a simple structure let down by poor logic. Like tools in other fields (PESTLE, the five forces etc) it may be a good thinking tool for risk management, but it is fatally flawed for most risk assessment. It gives us the worst of both worlds: a lose-lose.

A range of alternatives helps us form a judgement on the strengths and weaknesses of each method and the likely amount of work involved.

  1. No assessment: perhaps a "parking place" where we simply accept that we have thus far been unable to assess some uncertainty
  2. Single H/M/L assessment: in principle a single "high/medium/low" assessment is attractive, but there are practical problems – see below
  3. Single 1-10 assessment: effectively the same as "H/M/L" above; the extra options give more flexibility, without avoiding the challenge alluded to above
  4. Triangular distributions: the proof of concept covered extensively in this document – a useful benchmark
  5. Alternative distributions: as a "second step" beyond the triangular proof of concept, for example to capture the possibility of unlimited losses – see below
  6. Multiple fixed probabilities: – a simple and flexible alternative needing (e.g.) 5 numbers – see below
  7. Fixed impact: – each risk states the probability of an impact of at least some fixed amount, constant between risks – see below
  8. Stress test at a fixed probability: as above, but keeping the probability rather than impact fixed – see below
  9. Scenarios and multiple flexible impacts: a "storytelling" approach recommended by Sim Segal – see below

Sometimes several of these techniques can be used together, for a given risk or across a range of risks. Having said this a sensible first step might be to restrict ourselves to one or two techniques initially.

Single H/M/L assessment

This approach to assessment seems particularly attractive for:

  • Strategic or other risks where assessment seems, at least initially, challenging.
  • Other risks where assessments by non-technicians (e.g. Board members) are important.

The attraction is obvious, but what are the practical challenges?

The first is the limited calibration; there are only three options, which may be insufficient to distinguish between different levels of risk, especially where there are many risks. This is not a major concern; in reality other scales (e.g. 1-10) are available, and the H/M/L classification may instead be a summary of the level of current attention being devoted to the risk – akin to a "RAG status".

A more significant challenge is how to compare two risks whose profile is very different in nature. Consider an example, simplified for illustrative purposes:

  1. High probability, low impact: a 20% chance of losing £5m
  2. Low probability, high impact: a 1% chance of losing £500m

The expected values are £1m and £5m respectively – the probability multiplied by the (single) impact. We suspect that most would conclude that (2) was the more significant risk (an exception could be if a loss of £5m would ruin your organisation). But what if we reduced the 500 to 400, without making other changes? What if we reduced it to 300, 200 or 100 – at which point the expected values are equal? Although PIRA is a poor risk assessment methodology it can be useful as a more basic thinking tool.

Probability and impact: a more nuanced approach

The simple example shows that, at the very highest level, the probability and impact idea is a useful one. The problem comes in PIRA's misleading implementation. What we need is some way of comparing risk across these two dimensions. Here's one possible approach:

  1. Long run: Prefer one risk to another if it results in better long term results. This Kelly-like criterion is facilitated by models and our prototype.
  2. Constraints: Set additional constraints which limit risk taking, especially in respect of low probability high impact risks. Risk appetite anyone?

There is clearly subjectivity associated with (2) but it's all in one place and has been made visible, rather than being hidden in risk assessment methodologies.

Alternative distributions

This is a potential improvement on the triangular prototype and therefore does not invalidate the concept. One area of application is the scenario where there is no obvious maximum loss/cost and the risk assessor or central risk function is (therefore) uncomfortable with the approach. Here's a way forward:

  1. Instead of assessing the minimum, most likely and maximum possible losses the risk assessor assesses several similar alternatives.
  2. These might be the minimum loss and the 50%, 75% and 90% confidence losses. The 90% confidence loss has only a 10% chance of being exceeded.
  3. A more technical risk assessor, perhaps in a risk department, selects a more appropriate distribution, probably depending on the risk type.
  4. He uses the information in (2) to calibrate (3) i.e. to determine the "shape" of the curve to replace the triangle, for example:
    • Operational uncertainty might be modelled by a 2-parameter lognormal distribution.
    • Strategic uncertainty might be modelled by a 2-parameter truncated normal distribution.

Despite practical benefits, making the "confidence percentages" the same across risks is not a technical requirement; the key point is that (2) should enable (4).

This is a very attractive approach; the core assessment concepts can continue to be explained in terms of triangles. Talking about fat tails is optional!
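To illustrate the calibration step, suppose the assessor has supplied the 50% and 90% confidence losses and the risk team chooses a 2-parameter lognormal. A Python sketch (the £5m and £12m figures are invented):

```python
import math

Z90 = 1.2815515655446004  # standard normal 90th percentile

def lognormal_from_quantiles(q50, q90):
    """Back out lognormal parameters from the 50% and 90% confidence losses.
    q90 is the loss with only a 10% chance of being exceeded (q90 > q50 > 0)."""
    mu = math.log(q50)                       # the median pins down mu
    sigma = (math.log(q90) - mu) / Z90       # the 90% point pins down sigma
    return mu, sigma

def prob_loss_exceeds(x, mu, sigma):
    """P(loss > x) for the calibrated lognormal, via the normal tail (erfc)."""
    z = (math.log(x) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

mu, sigma = lognormal_from_quantiles(q50=5.0, q90=12.0)
print(round(prob_loss_exceeds(12.0, mu, sigma), 3))  # -> 0.1 by construction
```

Unlike the triangle, this curve has no maximum loss: the tail is uncapped, which is exactly the scenario motivating this section.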

Multiple fixed probabilities

The risk assessor supplies the following:

  1. Special case: The probability of a zero loss
  2. Confidence levels: Losses at confidence levels of 20%, 40%, 60% and 80%.

This is again an attractive approach; the probabilities are "real" enough to be set independently of risks and the multiple assessments for a given risk help in obtaining some consistency. This can be extended to comparisons between risk owners, probably carried out by those with a good statistical understanding. Finally each risk has enough assessments to enable the "alternative distributions" approach to be implemented in the background.

Fixed impact

This is a drastic attempt to obtain consistency. The idea is that, for each risk, the assessor states the probability that the risk leads to a loss of at least a fixed amount, independent of the risk. The fixed amount might be a percentage of the organisation's assets, or something more linked to solvency.

This approach suffers from two practical issues:

  1. Depending on the fixed amount, for many risks the probability may be close to zero. This limited information does not give much practical help.
  2. There are major challenges if we want to change the fixed amount; few will want to do another round of assessments.

We now turn to a more common method, which at least solves (1).

Stress test at a fixed probability

Stress testing is used as a risk management tool, especially in insurance companies, as a means of setting solvency capital. The idea is that, for each risk, the assessor states the (minimum) loss that the risk leads to at a fixed probability, independent of the risk. A typical level for those insurers calculating capital might be 1-in-200, but higher probabilities such as 1-in-10 can be more illuminating, more "real" and easier to assess.

The stress test still suffers from the "what if we need to recalibrate?" problem; insurers tend to assess a relatively small number of high level risks in setting capital. In contrast our prototype deals with the recalibration problem, since explicit formulae follow from having selected a triangular distribution. The "alternative distributions" approach described above has similar benefits.

Scenarios and multiple flexible impacts

A "storytelling" approach recommended by risk management expert Sim Segal, who advocates asking for a full range of scenarios, summarised below:

Scenario        Description                                                 Prob   Impact range
Pessimistic     Description 1: includes business (model) parameter ranges   20%    -£50m or worse
Under-achieve   Description 2                                               20%    -£10m to -£50m
Baseline        Description 3                                               20%    -£10m to +£10m
Over-achieve    Description 4                                               20%    +£10m to +£50m
Optimistic      Description 5                                               20%    +£50m or better

All risks would be assessed using five scenarios. The labels ("pessimistic" etc) would be the same between risks for consistency. The scenarios are described in plain English, with the implications for model parameters (revenues, expenses etc) drawn out. In reality a scenario (description) would include "input impacts" which are expressed as a range for the relevant model parameters e.g.

  • Revenues between £150m and £200m
  • Expenses between £30m and £40m

The business financial model would then calculate the corresponding "output impact" in terms of value. As discussed previously, this is generally a better way of proceeding than to ask the risk assessor to quantify the value impact directly. As you would expect, the scenario approach has advantages and disadvantages:

Advantages include:

  • Assessor thinks more widely; the rest of the distribution isn't "filled in" automatically
  • Significant flexibility is afforded by the non-parametric approach
  • No need to assume a form of distribution
  • Every result can be explained in terms of a scenario

Disadvantages include:

  • More work for the risk assessor, with the possibility of inferior risk assessments
  • The implicit ("non-parametric") distribution may be difficult to justify theoretically
  • Scenarios can be mutually inconsistent i.e. inputs and probabilities don't make sense
  • A number of assumptions are made which are almost hidden from view
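Despite the lack of a parametric distribution, the scenario table above can still be sampled: pick a scenario by its probability, then a value within its impact range. A Python sketch – the uniform spread within each range and the caps placed on the open-ended ranges are assumptions for illustration only:

```python
import random

# The five scenarios above: (probability, low impact, high impact) in £m.
# Open-ended ranges ("-£50m or worse") are capped here purely for illustration.
SCENARIOS = [
    (0.20, -150.0, -50.0),  # Pessimistic (the -£150m cap is an assumption)
    (0.20, -50.0, -10.0),   # Under-achieve
    (0.20, -10.0, 10.0),    # Baseline
    (0.20, 10.0, 50.0),     # Over-achieve
    (0.20, 50.0, 150.0),    # Optimistic (the +£150m cap is an assumption)
]

def sample_impact(rng=random):
    """Draw one impact: choose a scenario by probability, then uniform in its range."""
    u = rng.random()
    cumulative = 0.0
    for prob, low, high in SCENARIOS:
        cumulative += prob
        if u <= cumulative:
            return rng.uniform(low, high)
    return rng.uniform(*SCENARIOS[-1][1:])  # guard against floating point rounding

draws = [sample_impact() for _ in range(10_000)]
print("share of draws losing £10m or more:", sum(d <= -10 for d in draws) / len(draws))
```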

Models and other helpful tools

This section looks at three useful tools:

  1. Narrow models: our terminology for technical models which work in the background, helping risk owners.
  2. Corporate models: these model the big picture; we can see what impact various scenarios have on key results (profits, level of solvency etc).
  3. People and improvement models: very brief coverage of three ways to help people get better at assessment.

[1] Probability and narrow models

The triangular distribution prototype and the "alternative distributions" method were both covered above. Specifically, I suggested that operational uncertainty could be modelled by a lognormal distribution and strategic uncertainty by a truncated normal distribution. Let's look at two areas with better developed models.

Market risks comprise a range of financial risks, including interest rates, inflation and exchange rates. Various models can be used to estimate the likelihood of various interest rate scenarios, given the interest rate today. One such model is the stochastic Vasicek model, introduced in 1977.

The PIRA method often leads to questions such as "what is the interest rate risk?" to which the risk assessor responds with a probability and impact. This seems an unnecessary burden for a risk assessor to bear. The question is also ambiguous, leading to consistency and completeness challenges.

Insurance risks comprise a number of event-like possibilities in respect of people or non-human assets such as buildings. We might be concerned that people fall ill or die, that buildings are flooded or subject to attack. The perspective can be that of the insurer or the insured. Companies with a defined benefit pension scheme are subject to longevity risk i.e. the possibility that former employees who become pensioners live for longer than projected. This is one form of insurance risk.

Again, there is a range of longevity risk models, in particular for future longevity improvement. All firms writing longevity contracts (annuities, pension buyouts) should have one. A risk assessor asked for the probability of mortality improvements being at least 1% higher than expected needs further support.

Narrow models

So what makes these models narrow, by our definition? It is because they model the impact not in ultimate terms of pounds, but in terms of parameters: interest rates or mortality rates. We might call these "input impacts", since they model the impact on parameters which are themselves inputs into bigger models.

[2] Impact and corporate models

A company value model helps make things explicit and is a key component of a value-based risk management programme. The corporate model projects the important outputs: profits, number of books delivered etc. In particular we can see how outputs respond to changing inputs.

Good risk management involves making improvements one step at a time:

  • If you don't have a base company value model build one.
  • If the model produces just one number, build functionality to do "what ifs" on (e.g.) revenues.
  • Extend this to enable key parameters (e.g. revenues and expenses) to vary consistently.
  • Start using the model (instead of the PIRA methodology) to quantify impacts.
  • To do simulations we need a model of how a parameter might vary. Consider the triangle prototype.
  • Refine this by using and calibrating alternative distributions.
  • Finally consider any relationship between variables; correlation is the simplest approach.

It's sensible to build a rolling programme of improvements.
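The stepwise programme above can be sketched in code. The sketch below is illustrative only: the valuation formula, the figures and the parameter ranges are all made up, and `random.triangular` stands in for the "triangle prototype" mentioned above.

```python
import random

def company_value(revenue, expenses, multiple=8.0):
    """Toy company value model (hypothetical): value = multiple * profit."""
    return multiple * (revenue - expenses)

# Step 1-2: a base model that supports simple "what ifs" on revenue.
base = company_value(revenue=100.0, expenses=80.0)
what_if = company_value(revenue=95.0, expenses=80.0)

# Steps 4-5: quantify impacts via the model, sampling key parameters from
# triangular distributions (minimum, maximum, most likely) rather than
# assessing probability and impact directly.
random.seed(42)
values = []
for _ in range(10_000):
    revenue = random.triangular(90.0, 110.0, 100.0)   # low, high, mode
    expenses = random.triangular(75.0, 90.0, 80.0)
    values.append(company_value(revenue, expenses))

values.sort()
# Impacts are now read off in value terms, e.g. the 5% worst outcome.
p5 = values[int(0.05 * len(values))]
print(f"base value: {base:.0f}, 5th percentile of value: {p5:.1f}")
```

The final refinements in the list, alternative calibrated distributions and correlation between revenues and expenses, would replace the two independent `random.triangular` calls.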

[3] People and improvement models

Models which project "hard" numbers and cashflows as described above are great, but they are only as good as their assumptions. What can be done to help individuals and groups make better assumptions? Here are three ideas:

  1. Helping individuals make assessments. A tool which has built some momentum is the idea of probability calibration, as set out in Risk Intelligence and deployed at Projection Point. The cut-down idea is that people are asked a question and give their confidence percentage of being right. The aim is not to have the right answer, but the right level of confidence. Using a set of questions we can total the expected number (E) and actual number (A) of correct answers. The aim is to get A/E = 100% – perfect calibration. People with A/E > 100% are under-confident. Rather more common is A/E < 100%; people tend to over-estimate their knowledge and abilities. Surprisingly, judgements improved by calibration outside our domain of expertise port across to more relevant areas.
  2. Helping groups make assessments: A key tool is the Delphi method described as a structured communication technique, with the following main features:
    • Originally developed as a systematic, interactive forecasting method (so good for risk assessment).
    • Relies on a panel of experts. But note that "expert" is subjective and could be simply a group of individuals.
    • Questions are asked / answered over a number of "rounds". This enables judgements to be refined.
    • Anonymous summaries are given of forecasts and justifications. Anonymity helps reduce bias and peer pressure.
    • It is hoped that answers converge – a limited version of the wisdom of crowds.
  3. Bringing everything and everyone to the party: This is our embryonic MODEL approach. Risk assessment is often thought to be a blend of data and expert opinion, but we can bring more than that:
    • M: Model because the way things actually work constrains impacts – buying an annuity results in a product with rules, squash is a game with rules.
    • O: Operations because people at the front line have invaluable insights beyond corporate-speak.
    • D: Data because relevant data can trump the vagaries of judgement – this is one of the ideas of "big data".
    • E: Experts because sometimes we have inadequate data and need an additional perspective (but beware!)
    • L: Leaders because we need intuition, the voice of experience, someone with responsibility and ... a decision.
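The calibration arithmetic in idea 1 above is simple enough to sketch. The quiz data below is invented for illustration; the point is the A/E ratio.

```python
def calibration_ratio(answers):
    """answers: list of (stated_confidence, was_correct) pairs.
    Returns A/E: actual correct answers over expected correct answers,
    where the expected number is the sum of the stated confidences."""
    expected = sum(conf for conf, _ in answers)
    actual = sum(1 for _, correct in answers if correct)
    return actual / expected

# Hypothetical quiz: someone states 90% confidence on every answer
# but gets only 7 of 10 right – over-confident, so A/E < 100%.
quiz = [(0.9, True)] * 7 + [(0.9, False)] * 3
ratio = calibration_ratio(quiz)
print(f"A/E = {ratio:.0%}")  # 78%
```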

Should we still use risk registers?

Since this document takes its lead from the article Risk registers, bloody risk registers with its claim that "It's the risk register that is the real worm at the heart of risk management" we should ask if the whole risk register concept is invalid. There are certainly many issues with typical risk register use:

  1. PIRA methodology: The biggest problem, covered extensively in this article.
  2. Classification: Classifications are often poor – it still seems like brainstorming.
  3. Coverage: Coverage of some areas e.g. strategic risk can be rather "light".
  4. Descriptions: These are often very poor and "unactionable".
  5. Events: When something goes wrong there is often little linkage to the risk – or learning.
  6. Controls: Many controls have little formal basis and are more like "wishlists".
  7. Actions: There is often no firm basis for setting these. They have a tendency to overrun.
  8. Metrics: Sometimes unmentioned, the metric should be value rather than (e.g.) capital.
  9. Integration: There is often little link to other business areas e.g. capital setting.
  10. Decisions: The risk register will rarely be consulted for material decision making!

Despite the list above I am still in favour of the intelligent use of risk registers, primarily as an administration tool.

Most of the issues above – even the PIRA point – are not issues for risk registers per se, but rather relate to how they are used. There's a way of using risk registers that may be "best practice", will produce colourful and reassuring charts and may be signed off by your auditors. But it's still dumb.

Many of these issues would remain if we didn't have risk registers. I suggest using both risk registers and models. Each has a role.

You can pick off the issues with typical risk register use one by one. It would be good to start with the worm of risk assessment.

The maths of triangular distributions: for those who want it

Teasing more from the triangle graph

It's a little more than a graph: one of the project costs must occur, so the area under the graph is 1 – the graph is a probability distribution. We can use this fact to obtain more information. Take the general example where the minimum value is a, the most likely is b and the maximum is c.

Suppose the height of the triangle is h. Using "area = 1/2 * base * height" from our schooldays we have: 1 = 1/2 * (c - a ) * h. So h = 2 / (c - a).

The maths of triangular distributions

You can skip the details (which would be handled by your risk department) without losing the key points on risk assessment. The simple form of the triangular distribution makes it (relatively!) easy to calculate the cost for a given probability (here 5%). Other more complex distributions need "lookups" (or Excel).

The following formulae enable us to:

  • Determine the probability that the project costs more than a specified amount
  • Conversely determine the cost which we are 95% confident the project will not exceed

Formula for the right hand tail: xK = c - √[2 * K * (c - b) / h] = c - √[K * (c - b) * (c - a)]

  • This works so long as xK is to the right of b i.e. K is less than the area to the right of b which is h / 2 * (c - b).
  • If this is not the case the 1-K left hand tail returns the correct result.

Here we go:

  1. Call the location of the red dot (x,0) and the point on the triangle above it (x,y).
  2. Using "1/2 base times height", the green triangle has area K = 0.5 * (c - x) * y.
  3. The equation of the downward sloping part of the triangle is y = h(c-x)/(c-b) – it passes through (b,h) and (c,0).
  4. Substituting (3) into (2) we get: 0.5 * h * (c - x)² / (c - b) = K.
  5. This simplifies to (x - c)² = 2 * K * (c - b) / h, or x = c ± √[2 * K * (c - b) / h].
  6. Since we know x < c we find x = c - √[2 * K * (c - b) / h].

Formula for the left hand tail: xK = a + √[2 * K * (b - a) / h] = a + √[K * (b - a) * (c - a)]

  • This works so long as xK is to the left of b i.e. K is less than the area to the left of b which is h / 2 * (b - a).
  • If this is not the case the 1-K right hand tail returns the correct result.

Putting a = 2, b = 5, c = 8 (so h = 2 / (8 - 2) = 1/3) and K = 5% we get the 5% right hand tail of x5% = 8 - √[2 * 5% * (8 - 5) / (1/3)] = 8 - √0.9 = 7.05.
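The tail formula can be checked numerically. A minimal sketch, reproducing the worked example above (the function name is our own):

```python
from math import sqrt

def right_tail_quantile(a, b, c, K):
    """Cost x such that prob(X > x) = K for a triangular(a, b, c)
    distribution. Valid while K does not exceed the area to the
    right of the mode b; otherwise use the 1-K left-hand tail."""
    h = 2.0 / (c - a)                       # height making the area 1
    assert K <= 0.5 * h * (c - b), "K too large: use the left-hand tail"
    return c - sqrt(2.0 * K * (c - b) / h)

x = right_tail_quantile(a=2, b=5, c=8, K=0.05)
print(round(x, 2))  # 7.05
```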

Cumulative distribution function for a triangular distribution

For a triangular distribution we can derive general formulae for the probability that the loss (or project cost) is between two amounts x0 and x1, using integration or formulae for triangle areas. These formulae can be used to find the tails: using case 1 below, the right hand tail we found previously is the x such that F(x,c) = 5%.

Suppose we want to calculate prob(x0 < X < x1) where X is the triangular distribution. Call this F(x0,x1). There are three cases:

  1. x0 >= b and x1 >= b: The probability is h(x1 - x0) * (2c - x1 - x0) / [2(c - b)]
  2. x0 <= b and x1 <= b: The probability is h(x1 - x0) * (x1 + x0 - 2a) / [2(b - a)]
  3. x0 <= b and x1 >= b: The probability is F(x0,b) + F(b,x1)

Where next?

In Risk modeling alternatives for risk registers (2003!) Matthew Leitch suggests 5 things that should guide our approach to risk assessment:

  1. What is it for?
  2. Is there an ultimate yardstick?
  3. Is assessment to be formal and centralised or informal and decentralised?
  4. Are "upside risks" to be considered too?
  5. Is assessment in a list or using a model?

Highly recommended.

© 2014-2017: 4A Risk Management; a trading name of Transformaction Development Limited