Risk is more than events

Risk registers often focus on "event-like" risks: things that might or might not happen in the future. But the biggest areas of uncertainty are not usually events.

This article:

  • Suggests and examines a three-way split: event-like risks, parameter (or estimation) uncertainty and model risk.
  • Speculates that risk registers commonly omit 75% of all uncertainty and that the result is "false assurance".

This article is supplied in two places, with different titles: How to miss 75% of your risks without trying and Risk is more than events.

A proposed three-way split

Risk is more than events, so here is a basic three-way split (illustrated with a short sketch after the list):

  1. Events: An event-like risk has a probability (less than one) of happening. The event's potential impact may be fixed or variable. Events can be "natural", e.g. the risk of the coffee machine failing over the next year, or "man-made" from non-event uncertainty, e.g. the risk that interest rates rise to 5% or more over the next year; the interest rate itself is a state of nature (with a probability of one of "being" at some level) rather than a natural event.
  2. Parameters: Important variables (expenses etc) have values which are subject to estimation error. UK supermarket Tesco does not know its revenue over the next year. Even the probability of revenues being under a certain amount has to be estimated; this is not the uncertainty of the coins and dice world.
  3. Models: In a model, a set of inputs combines to give a set of outputs. Models can be of various types: a mental model of how the world works, a cashflow model of how an insurance product works or a mathematical model of how the weather system works. Uncertainty here includes whether factors are modelled at all, the size of their effect and the relationships between factors. This article covers only some aspects of model risk.
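To make the split concrete, here is a minimal Python sketch of a toy one-year profit calculation in which the event, parameter and model elements can be switched on and off. All names and figures are illustrative assumptions, not taken from any real model.

```python
# A toy one-year profit simulation illustrating the three-way split.
# All names and figures are illustrative assumptions, not from the article.
import random

def simulate_profit(rate_rise_prob=0.2,       # 1. event risk: a rate rise may or may not happen
                    expense_estimate=100.0,   # 2. parameter risk: our best estimate of expenses
                    expense_error=0.10,       #    ...and how far the true level might differ
                    model_competitor=True):   # 3. model risk: a factor we may omit entirely
    revenue = 150.0
    # Event risk: a discrete thing that might happen, with a fixed impact here
    if random.random() < rate_rise_prob:
        revenue -= 10.0
    # Parameter risk: the unobservable "true" expense level differs from our estimate
    true_expenses = expense_estimate * (1 + random.uniform(-expense_error, expense_error))
    # Model risk: a competitor move that an events-only model would not contain at all
    competitor_hit = 15.0 if model_competitor and random.random() < 0.3 else 0.0
    return revenue - true_expenses - competitor_hit

# An "events-only" view switches off the other two sources and so understates the spread.
full_view = [simulate_profit() for _ in range(10_000)]
events_only = [simulate_profit(expense_error=0.0, model_competitor=False) for _ in range(10_000)]
```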

Event risks

Risk certainly includes events: "things that might happen". Such risks can have fixed or variable impacts. Consider the example of a speeding driver, sketched in code after the list:

  • Fixed penalty regime: A driver caught exceeding the speed limit receives a fixed monetary fine. There is a single probability and a single impact. This is a very rare situation in real-world risk management.
  • Variable penalty regime: This more common situation seeks to make the punishment match the crime. The driver incurs a variable fine (or points on his driving licence) depending on the degree of excess speed.
  • Drivers and accidents: An accident has a probability, but the impact of the accident varies significantly, from almost negligible to writing off the car, serious injury or death of one or more people.
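The fixed versus variable distinction can be sketched in a few lines of Python; the probabilities and fine amounts below are assumed purely for illustration.

```python
# Illustrative sketch of fixed versus variable impact (assumed figures).
import random

def fixed_penalty(p_caught=0.05, fine=100.0):
    """Single probability, single impact: a fixed monetary fine if caught."""
    return fine if random.random() < p_caught else 0.0

def variable_penalty(p_caught=0.05):
    """Same probability of the event, but the impact depends on the excess speed."""
    if random.random() >= p_caught:
        return 0.0
    excess_speed = random.uniform(1, 30)   # mph over the limit (illustrative)
    return 50.0 + 10.0 * excess_speed      # the fine grows with the offence
```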

Parameter (estimation) risks

Companies' results are always subject to random variation and bad (and good) things that happen "out of the blue". But usually a much bigger factor is that we simply don't know the average or "best estimate" level for variables or parameters that are important in our business model.

What appears to be random variation is at least in part due to getting estimates wrong.

Past data can help, but may not be the complete solution; there may not be enough of it or the world may simply have changed, making the data less relevant. More subtly, even using an "accurate" and data-informed average can be dangerous; what if the mix of contributors leading to the overall average changes? It can be important to understand in detail the contributors to overall results. For more on this see the front line decision makers case study.
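A small numerical sketch, with purely illustrative figures, shows the point: most of the gap between observed and expected results below comes from a mis-estimated parameter, not from genuine randomness.

```python
# Hypothetical figures: separating a wrong parameter estimate from random variation.
import random

estimated_claim_rate = 0.050   # the "best estimate" used in planning
true_claim_rate = 0.058        # the largely unobservable underlying rate
policies = 20_000

observed = sum(random.random() < true_claim_rate for _ in range(policies))
expected = estimated_claim_rate * policies

# Around 0.008 * 20,000 = 160 of the gap is systematic (estimation error);
# only the remainder is genuine year-to-year randomness.
print(f"observed {observed}, expected {expected:.0f}, gap {observed - expected:.0f}")
```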

In all of the above examples, over the long run results will be in line with largely unobservable underlying values that we must estimate. Our estimate may be "wrong", and the gap is often not best modelled as random variation, for example:

  • Optimism bias: Assessors often have a tendency to be optimistic. This can mean that results are on average worse than we thought they should be.
  • Exploiting weaknesses: Competitors may tighten their acceptance criteria for offering loans, leaving us with business others no longer want.
  • Winner's curse: A competitive bidding process may, by construction, tend to lead to the winner having paid too much. Reinsurers should be very aware of this.

Model risks

Black swans are the most famous examples of model risk. These are factors whose effect is, by definition, almost impossible to model; either we cannot imagine them at all or it seems practically impossible to associate with them a probability of more than zero – the latter was the case for the "original" black swan. There are other uncertainties which might be grouped under the general "model risk" heading.

Deliberately unmodelled factors can arise either because the implications are politically or practically unacceptable (e.g. I really don't want to think about the risks of moving house) or because the effect is deemed immaterial (which may or may not prove to be the case).

Aside: similar thinking extends the Rumsfeld view of the world e.g. What Rumsfeld doesn't know he knows about Abu Ghraib.

Inadvertently unmodelled factors can exist where the overall average value is used, without the modeller thinking further about, for example:

  • the effect of random variation
  • non-random effects such as the mix of contributors which lead to the overall average

The "front line decision makers" case study in Risk registers: who, what, why and how? shows this.

Applications to risk management and risk registers

Risk management

Author and risk expert James Lam sets out an a-b-c vision for risk management:

There are three major business applications of risk management:
loss reduction, uncertainty management and performance optimisation.
The combination of all three is enterprise risk management.

James Lam – the world’s first Chief Risk Officer

Risk management which focuses (only) on the downside covers only (a), loss reduction. An event-like vision will not deliver the true value of ERM. With its limited aim of avoiding losses, how could it?

This article targets part (b), uncertainty management, highlighting a variety of model and parameter (estimation) risks. Encompassing strategic uncertainty, this is a good starting point for the board.

The optimising stage, (c), is crucial to exploiting uncertainty and thereby delivering value. Optimisation is beyond the scope of this article, but you can find out more in the Own Risk and Value Assessment.

The product of a fully realized ERM programme is the optimisation of enterprise risk-adjusted return.

Professor Harry Panjer

Risk registers

Risk registers can cover (document) all three types of risk. In practice they can be dominated by operational event-like risks. This can have dire long-term consequences, giving false assurance.

How can the situation get to this stage? Quite easily – picture this.

First, the risk register is populated via a series of risk meetings attended by staff of middle seniority. There is much brainstorming of risks, but little thought about the natural sources of uncertainty (objectives, business plans and strategic uncertainty, models and their parameters).

That's the path to false assurance. Much of the uncertainty has already been missed.

Next, the risk manager is tasked with overseeing the assessment and quantification of risks, in conjunction with "risk owners", many of whom did not attend the meetings above. Having been sent the risk register template ahead of time, the hard-pressed executives are ready (if not willing) to assign probabilities and impacts, being careful not to place too many of the risks in the red zone. The spreadsheet's reporting functionality automatically produces heatmaps.

Even the risks that are captured are now summarised in a way that makes prioritisation doubtful. For a simple but detailed example see Slicing and dicing risk.

Finally, the board reporting begins. The top 10 risks are shown, heatmaps produced, actions and controls documented. Somehow it just doesn't ring true; board members note the output (towards the end of their meeting) and move on. In due course the auditors sign off the increasingly robust risk management framework. Their job is done and the risk manager is congratulated.

Within a year the company is commercially unviable and is taken over.

What happened? Little true risk management took place. Risk management wasn't owned by the CEO or board. They didn't challenge and expected too little of their risk management (and perhaps themselves). It's a story that is repeated up and down the country. But obviously not for you.

How bad could it get? False assurance and the three Is

Briefly, going down a well worn path of listing risks in a register, classifying them and "quantifying" them using probability-impact risk assessment will leave you with three problems:

  1. Inconsistency
  2. Incompleteness
  3. Irreversibility

The first of these – inconsistency – is generally the silent "killer". It leads to an inability to prioritise. Consequently there is no robust basis for risk management, action or control.

The second area – incompleteness – can be serious. It can generally be diagnosed by noting the lack of strategic risks on the risk register and that all the activity is elsewhere.

The third – irreversibility – is an irritant; not only is the scenario being assessed not evident from the risk assessment, but assessing another scenario requires much rework; probability-impact gives no means of moving from a "1-in-10" scenario to a "1-in-3" scenario, for example.
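As a hedged sketch of the contrast, suppose a risk were assessed as a full (purely illustrative) loss distribution rather than as a single probability-impact pair; any "1-in-N" scenario then becomes a lookup rather than a re-assessment.

```python
# Sketch (illustrative distribution, not the article's method): with a full loss
# distribution, any "1-in-N" scenario is a simple lookup rather than a rework.
import random

# Assess the risk once by simulating its annual loss distribution
losses = sorted(random.lognormvariate(1.0, 0.8) for _ in range(100_000))

def scenario_loss(losses, one_in_n):
    """Loss exceeded roughly once every `one_in_n` years."""
    return losses[int(len(losses) * (1 - 1 / one_in_n))]

print("1-in-10 loss:", round(scenario_loss(losses, 10), 2))
print("1-in-3 loss: ", round(scenario_loss(losses, 3), 2))
# A lone probability-impact pair (say p = 0.1, impact = 5) carries none of this
# information, so moving to a 1-in-3 view means redoing the assessment.
```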

Traditional audits of risk management can miss all this. The heatmaps are pretty and your risk registers are extensive. A lot of time is spent "doing risk management". Your auditors tell you about best practice, encouraging you to do more and offering to support your risk-based control self-assessments.

Let's be honest: if you're doing this it's all false assurance.

But how bad could this really be? A "back of the envelope" approach suggests that typical risk assessments in the average risk register might miss 75% of organisational uncertainty.

As one possibility, suppose that:

  • Only "events" are identified (i.e. parameter and model risks are not considered) and that these comprise 50% of all risks. This is incomplete risk assessment.
  • Within the events identified, the risk owners' selections only cover 50% of uncertainty. This is incomplete and inconsistent risk assessment.

You've considered only 50% of 50%, i.e. 25%, of the uncertainty. That's a 75% miss.
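For completeness, the same arithmetic as a two-line calculation (the 50% figures are, of course, speculative assumptions):

```python
# Back-of-the-envelope arithmetic behind the "75% miss" (illustrative assumptions).
event_share = 0.50             # only events are identified
coverage_within_events = 0.50  # risk owners capture half of the event uncertainty

covered = event_share * coverage_within_events   # 0.25
missed = 1 - covered                             # 0.75
print(f"covered {covered:.0%}, missed {missed:.0%}")
```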

Where next? The risk register series

User beware. Many risk experts have warned of the common flaws in risk registers. It doesn't have to be this way. The first half of the set of articles below is generally positive, starting with how five potential audiences might make better use of risk registers. The second half warns of some really dangerous flaws.

© 2014-2017: 4A Risk Management; a trading name of Transformaction Development Limited