Traditional risk classification

Various fields classify their "objects": chemical elements, books, films, etc. Thinking at the group level can be useful, even if individual actions need to be more tailored. But traditional risk classification schemes are usually of limited use in decision making and, in particular, for actually managing risks.

This article highlights one 3-way risk classification scheme: type-probability-impact. I describe the classification and consider its uses and abuses. I suggest an alternative classification, which emphasises the extent to which we expect to be able to manage the newly-classified risks. It's action-based by construction.

Classifying risks by type

How it works

The classification by type seems natural; it takes its lead from the availability of functional expertise. Regulation of financial services companies led to the highlighting of insurance, credit, liquidity, market and operational risks. This is far from an exhaustive list, leaving out e.g. strategic, reputational and group risks.

Main insight: classification by type is a good idea, but needs to be tailored by the organisation to be genuinely useful.

Let's take a look at what that might look like for one risk type, operational risk, using the Basel II banking classification and enhancements.

  1. Internal Fraud – misappropriation of assets, tax evasion, intentional mismarking of positions, bribery
  2. External Fraud – theft of information, hacking damage, third-party theft and forgery
  3. Employment Practices and Workplace Safety – discrimination, workers' compensation, employee health and safety
  4. Clients, Products, and Business Practice – market manipulation, antitrust, improper trade, product defects, fiduciary breaches, account churning
  5. Damage to Physical Assets – natural disasters, terrorism, vandalism
  6. Business Disruption and Systems Failures – utility disruptions, software failures, hardware failures
  7. Execution, Delivery, and Process Management – data entry errors, accounting errors, failed mandatory reporting, negligent loss of client assets

Insight applied: despite being devised for the banking sector, the "level 1" classification above doesn't cope well with today's biggest banking operational risk.

Conduct-related issues dominate banking risk: Chartis research (May 2015) has revealed that 98% (by value) and 82% (by frequency) of the top 50 global operational risk losses over the period March 2014 to February 2015 related to suitability or fiduciary failures and improper business or market practices.

Similarly in insurance I've seen 85% of the operational risk events falling in areas (4) and (7) above. The point is that this "level 1" classification fails to give sufficient insight regarding the nature of those loss events and, in particular, is inadequate for the management of risks.

This is not news. The challenge was recognised by 2002, in an Operational Risk Data Collection Exercise which set out:

  1. An example mapping of business lines (Annex 1)
  2. A more detailed loss event type classification (Annex 2)
  3. Decision trees to determine event categorisation (Annex 2)

The addition of two further levels of classification in (2) above is helpful; some operations are, by their nature, simply more complex and risky. Operational risk is as much about people as process, so (1) is also helpful; do some departments have a tendency to behave in certain ways?
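A multi-level event-type classification can be represented as a simple nested structure. The sketch below assumes a small, indicative set of level-2 subcategories – the official Basel II list is longer – to show how a second level supports lookups in both directions:

```python
# A minimal sketch of a two-level event-type taxonomy.
# The level-2 subcategories shown are indicative examples, not the full
# official Basel II list.
TAXONOMY = {
    "Internal Fraud": ["Unauthorised Activity", "Theft and Fraud"],
    "External Fraud": ["Theft and Fraud", "Systems Security"],
    "Execution, Delivery, and Process Management": [
        "Transaction Capture, Execution and Maintenance",
        "Monitoring and Reporting",
    ],
}

def level1_of(level2: str) -> list[str]:
    """Return every level-1 category containing a given level-2 subcategory."""
    return [l1 for l1, subs in TAXONOMY.items() if level2 in subs]
```

Note that a subcategory such as "Theft and Fraud" sits under more than one level-1 category, which is exactly why decision trees (point 3 above) are needed to categorise events consistently.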

The uses

Good classification can be used to improve and sharpen risk management in a number of ways:

  • Identification: How many risks are in each category? Does this make sense in the context of the organisation?
  • Assessment: A senior employee or Board member should easily be able to identify which risk types are material – without needing to see a list.
  • Quantification: The methods of risk quantification (including for capital calculations) and the "shape" of the risk will differ according to type.
  • Management: Some risks will be more amenable to different forms of management – see probability and impact below.
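
The identification use above – counting risks per category – can be sketched with a few lines of code. The risk register entries below are hypothetical, purely to illustrate the tally:

```python
from collections import Counter

# Hypothetical risk register: each entry is tagged with a level-1 event type.
register = [
    ("Mis-sold product X", "Clients, Products, and Business Practice"),
    ("Spreadsheet input error", "Execution, Delivery, and Process Management"),
    ("Pricing model error", "Execution, Delivery, and Process Management"),
    ("Office flood", "Damage to Physical Assets"),
]

# Tally risks per category, most populous first.
counts = Counter(event_type for _, event_type in register)
for event_type, n in counts.most_common():
    print(f"{n:2d}  {event_type}")
```

A skewed tally is itself a prompt for discussion: does the concentration reflect the organisation's real exposure, or just where identification effort has been spent?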

The problems

Good classification can bring an "edge" to risk management. It can help the memory and sharpen thinking, facilitating comparisons and cross checks. But it can go wrong:

  • Too little granularity: This is the problem identified above for a level-1-only risk classification: insufficient detail for action and, in particular, for managing risks.
  • Too much granularity: The reverse problem is an open-ended list of risk sub-types: unmemorable, unmanageable and close to individual risk descriptions.
  • Forgetting the purpose: With too much granularity and analysis, we can forget that the point is to improve the risk-reward balance.

Possible solutions

The "solution" is natural rather than artificially created: balance achieved through clarity of purpose and results.

Classifying risks by probability and impact

How it works

Probability and impact are often recommended as good classifications. COSO guidance on risk assessment in practice (2012) describes how probability-impact is used:

An initial screening of the risks and opportunities is performed using qualitative techniques followed by a more quantitative treatment of the most important risks and opportunities

Noting that COSO describe probability-impact as "an initial qualitative screening", let's see how it works in practice:

  • A pre-defined set of impact ranges is set: 1-5 down the first column.
  • Similarly for probability ranges: 1-5 along the bottom row.
  • This gives a two-dimensional classification (P,I) where P and I are 1-5.
  • Colours can be assigned – a "RAG rating" or heatmap.
  • Another approach multiplies P and I, giving a ranking 1-25.
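
The scoring and colouring steps above can be sketched as follows. The green/amber/red thresholds used here are an assumption for illustration – there is no standard cut-off, which is part of the article's later critique:

```python
# A sketch of probability-impact scoring: P and I each on a 1-5 scale,
# score = P * I (range 1-25), mapped to an illustrative RAG colour.
# The thresholds (8 and 15) are assumptions, not a standard.
def rag_rating(probability: int, impact: int) -> tuple[int, str]:
    if not (1 <= probability <= 5 and 1 <= impact <= 5):
        raise ValueError("probability and impact must each be 1-5")
    score = probability * impact
    colour = "red" if score >= 15 else "amber" if score >= 8 else "green"
    return score, colour
```

For example, rag_rating(5, 5) gives the maximum score of 25 and a "red" rating. Note how much the resulting colour depends on the arbitrary thresholds chosen.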

The resulting probability-impact table, or heatmap, can be used in various ways.

The uses

Here are three possible uses.

  1. A discussion tool. Sometimes people feel good about probability-impact because it gives them some sort of structure within which to talk about risk, typically in meetings or forums. But other structures would probably generate more useful and action-focussed discussions.
  2. Heatmap reporting. Producing these classifications and their heatmaps can make us feel we're doing something useful. But in reality Boards don't tend to make much use of probability-impact or heatmaps:
    Heatmaps don't really work at the strategic level. They try to get you to allocate a likelihood and impact to each risk. But for every risk there's a whole range of impacts. We need to have a simpler analysis which can be quantified. Source: Trevor Llanwarne, former Government Actuary: Risk registers that work at board level

    If you're using heatmap reporting, chances are you're wasting your time and your Board's time. Is anyone brave enough to say so?

  3. A tool for *management* of risk. The argument is that probability-impact should steer us towards taking action on risks above some threshold – the "red risks" say. Action sounds attractive – and is all-too-often absent in risk management – but there are obvious problems:
    • Arbitrary decision rules. Does this 1-25 / green-red scale really make sense? Or is it pseudo-science?
    • Amenability to management. Sometimes further management of a risk is not just cost-ineffective, but nigh impossible.
    • Cost-benefit analysis. You know a technique is in trouble when cost-benefit is replaced by arbitrary hurdles.
    • Did we get paid? Part of the cost-benefit analysis could be (e.g. for an insurer) that we were paid to take the risk.

    Where is any of this covered in probability-impact? I'll point to better approaches in "Solutions" below.

The problems

Probability-impact can become more than an initial screening tool, making its flaws more material.

While COSO suggests that the probability-impact classification is an initial screening tool, others recommending the approach are not so explicit. Risk practitioners know that a quantitative stage is often not reached, so "... comparisons ... will not be on a consistent basis" – see the Blackett review below (and Llanwarne above).

Probability-impact: a poor assessment tool

While initially attractive, technical and practical challenges lurk beneath the surface. A UK Government Office for Science review said:

One key weakness of deterministic assessments is that they are not readily comparable across risks ... comparisons between deterministic scenarios will not be on a consistent basis as both the likelihood and impact for scenarios will vary. However in practice risk managers routinely compare several deterministic scenarios and make decisions on that basis. Blackett review: High impact low probability risks

Possible solutions

  • Board level. One solution – simple at one level – is to get the Board to rank its perceived risks: one measure rather than probability and impact. This makes sense for strategic uncertainty such as competitive risks. In practice things are not so simple, as sensitive and effective facilitation is likely to be required.
  • Operational level. Where quantification of uncertainty really matters – e.g. for setting prices – the probability and impact method is replaced by something more useful. This can involve some sort of model e.g. for setting base prices or for estimating customer responses to changes in prices (or both).
  • Probability-impact replacements. Probability and impact make sense intuitively. But they're plural: probabilities and impacts for a "single" risk. A replacement is the internal / external classification, which makes even more intuitive sense without being bound up with a poor assessment methodology.
  • Better risk classification – which goes beyond the internal / external split – is what is really needed. Happily it's available and makes sense.
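
At the operational level, the "something more useful" is often a frequency-severity model: instead of a single (P, I) point, the whole range of outcomes is simulated. The sketch below assumes Poisson event frequency and lognormal severity – common modelling choices – with purely illustrative parameters:

```python
import math
import random

def _poisson(rng, lam):
    """Sample a Poisson-distributed event count (Knuth's method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_loss(freq_mean=3.0, sev_mu=10.0, sev_sigma=1.5,
                         n_sims=10_000, seed=42):
    """Monte Carlo frequency-severity model: Poisson frequency,
    lognormal severity. All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        n_events = _poisson(rng, freq_mean)
        totals.append(sum(rng.lognormvariate(sev_mu, sev_sigma)
                          for _ in range(n_events)))
    totals.sort()
    mean = sum(totals) / n_sims
    loss_99 = totals[int(0.99 * n_sims)]  # 99th-percentile annual loss
    return mean, loss_99
```

Unlike a single probability-impact point, this yields a full loss distribution, so the mean (for pricing) and a tail percentile (for capital) can be read off the same model – and the heavy-tailed severity makes plain why one "impact" number per risk is misleading.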

Where next?

  • Better risk classification : I replace probability and impact with source (internal / external) and "risk velocity".
  • Sinking, Fast and Slow : I examine the means by and the speed at which organisations can fail and speculate on market forgiveness.
© 2014-2017: 4A Risk Management; a trading name of Transformaction Development Limited