Risk registers: the claimed flaws

Content

This article comes in two parts:

  1. A relatively broad – though brief and not exhaustive – coverage of technical and practical charges levelled against risk registers.
  2. A critique of Risk Listing, by Matthew Leitch. Risk Listing is a common approach to populating risk registers.

For (2) I have a section "what needs to be done", while for (1) my answers are within the full suite of risk register articles.

Disclosure: my view

I'm in favour of risk registers in the same way that I'm in favour of surgical checklists, personal organisation tools, shopping lists and the periodic table.

But a personal organiser is only as good as its inputs. Knowing the periodic table won't make you a good scientist. It's similar for risk registers, but worse. It is very easy to convince ourselves that filling in a risk register is a big part of risk management, especially when we can find and filter risks so easily.

A good risk register is a start: a tool to use alongside other tools such as models. The average risk register, however, is conceptually and practically flawed.

The claimed list of flaws, across 5 areas

Typical use of risk registers gives rise to many often unappreciated challenges. These can be grouped into five areas:

Area – The 4ARM view
Unclear purpose and audience – This is the key. If there is no clear answer to this point we should consider giving up risk registers, regulation permitting.
Methodology: avoidable flaws – There are several big flaws; failure to solve them results in misleading or even dangerous results being presented.
Methodology: wrong tool for the job – Once the problem is diagnosed the solution is easy. This is, at worst, a criticism of the person who explains the risk register.
Content – This is a calibration problem; using the best classifications and descriptions will result in better and easier management of risk.
Practical usefulness – I give only two examples, which may be caused by working under time pressure (or laziness).

It is my firm belief and experience that the last four areas have robust practical solutions. Whether a risk register has value in an organisation depends on many things, including the use of other tools. It's one way of answering "how do you organise, document and review the management of uncertainty?"

Now let's turn to look at each of those five areas in more detail.

Unclear purpose

  • Lack of objectives: Rarely is the purpose of a risk register stated; what does it achieve that other tools do not or cannot?
  • Getting personal: We usually know who is responsible for populating the risk register. Less thought goes into who it is really for and why.
  • Lack of integration: There is often little link to other business areas e.g. regular or material decision making, capital setting etc.

Methodology: avoidable flaws

  • Identification of risks: Risks are often identified as a result of brainstorming in a series of risk meetings. Strategic uncertainty tends to get missed.
  • Event-like focus for risks: The emphasis is often on risk as future uncertain events, but this is probably not the most important area of uncertainty.
  • Metrics: Just some examples: Is there a time period? How do you differentiate between something you expect to happen once a year versus once a month?
  • Risk assessment methodology: Typically probability-impact: a big problem, covered extensively in this article.
  • Controls: Far too much emphasis is given to risks relative to controls, which are often simply bunched in the last column of a spreadsheet.
  • Decisions: Decisions to reduce risk are often based on arbitrary limits, which do not link to high-level risk appetite or to any payment for taking these risks (!)
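To make the "Metrics" bullet concrete: a probability is meaningless without a time period. A minimal sketch, assuming events arrive as a Poisson process (an illustrative modelling assumption, not part of the article), shows how an expected frequency converts to a probability over a chosen period:

```python
import math

def prob_at_least_one(rate_per_year: float, years: float = 1.0) -> float:
    """P(at least one event in the period), assuming a Poisson process.

    rate_per_year: expected number of events per year (frequency).
    years: length of the assessment period.
    """
    return 1.0 - math.exp(-rate_per_year * years)

# Over a one-year period, "expected once a year" and "expected once a
# month" give very different probabilities of at least one occurrence:
annual = prob_at_least_one(1.0)    # about 0.63
monthly = prob_at_least_one(12.0)  # very close to 1
```

A register that records only "High / Medium / Low likelihood" cannot distinguish these two cases without an explicit time horizon.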

As noted in the heading, all of the above are avoidable. But in fact the flaws are not normally avoided. What then happens?

The best case scenario is that nothing of value depends on the risk register. The result could be "just" wasted time and effort, although risk management in general may fall into disrepute in the eyes of senior management, who don't want to see a long list of risks with no implication for them.

The worst case scenario is that the risk register provides false assurance and something goes seriously wrong. False assurance is quite likely; the result of the avoidable flaws is that risk coverage is probably inconsistent and incomplete. Even a government department staffed by experts suffered from this:

When I started at the Government Actuary's Department a few years ago, we had a risk register running to 50 pages. I looked for the top 7 risks and 3 were missing. When I mentioned this to a group of public sector risk managers I got the answer: "You've got 4 out of 7? You're lucky to have that many." – Trevor Llanwarne, former Government Actuary, "Risk registers that work at board level"

Methodology: wrong tool for the job

  • Assessment: With extreme care a risk register can facilitate risk assessment. But usually it simply records the results of a good or bad assessment process.
  • Links between risks: Whether probabilistic dependency, linked impacts through system effects or general correlation, risk registers rarely allow for links.
  • Aggregation: Problematic where no allowance is made for interaction between risks. Complete nonsense where risks are assessed on an inconsistent basis.
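The aggregation point can be made concrete with a small Monte Carlo sketch (the distributions and figures are illustrative, not taken from the article): adding up per-risk "1-in-100" figures is not the same as the 1-in-100 outcome for the total, and the gap is driven entirely by the dependence between the risks – exactly what a linear register cannot express.

```python
import random

random.seed(0)
N = 100_000

# Two hypothetical loss distributions, each with mean loss 100:
a = [random.expovariate(1 / 100) for _ in range(N)]
b = [random.expovariate(1 / 100) for _ in range(N)]

def percentile(xs, q):
    """Empirical q-quantile (simple order-statistic estimate)."""
    return sorted(xs)[int(q * len(xs)) - 1]

# Summing standalone "1-in-100" figures overstates the independent total...
naive = percentile(a, 0.99) + percentile(b, 0.99)
indep = percentile([x + y for x, y in zip(a, b)], 0.99)

# ...while under perfect dependence (comonotonic risks, paired by rank)
# the naive sum is exactly right:
comono = percentile([x + y for x, y in zip(sorted(a), sorted(b))], 0.99)
```

Without stating a dependence assumption, any "total risk" row at the bottom of a register is, as the bullet above puts it, complete nonsense.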

Content

  • Coverage: Coverage of some areas e.g. strategic risk can be rather "light". This follows from the identification process and event-like focus above.
  • Classification: The most common risk classification does not lend itself to the management of risk.
  • Descriptions: These are often very weak; again it is practically impossible to use them to manage the risk described.

Practical usefulness

  • Learning opportunities missed: When something goes wrong there is often little link to the underlying risk – or learning.
  • Actions: There is often no firm basis for setting these. In practice actions have a tendency to overrun; target dates are put back.

Overwhelmed? Answering the first question is the key. There are solutions to all the other challenges. Other risk management tools also face challenges.

Risk Listing – a flawed but common approach to deploying risk registers

Have you been encouraged by a range of risk management standards and guidance to implement risk registers within your organisation? You may want to consider Matthew Leitch's critique of what he calls "Risk Listing", summarised below (*). Does your risk register have any or all of the six components described by Matthew?

(*) The summary table below is based on private communication of Leitch's unpublished paper "Scientific risk analysis and Risk Listing".

Ref – Technical element (Leitch) – Problematic? Comment (Howe)
T1 – An exclusive focus on managing Risks, which are abstract entities that can usually be defined in any way imaginable, but are almost always worded as events that either will happen or will not happen. – The "event focus" is very unhelpful.
T2 – A risk register, which is a list of Risks in no particular order and with no explicit links between Risks, against which Controls (i.e. responses to Risks) are written, along with assessments of the level of risk associated with each Risk, and other information. – This goes under "risk register abuse".
T3 – Assessments of Risks in terms of their probability of occurrence and the impact if they occur, usually recorded by dividing the scales for probability and impact into fixed regions (e.g. "High", "Medium", "Low", or an index number from 1 to 5). A matrix converts assessments of probability and impact into an overall score based on the idea that risk is quantified by expected value. – Potentially very damaging.
T4 – A rule for choosing Controls that uses Risk criteria, which are usually targets or limits for risk levels. Typically, a Risk whose rating is above the limit requires the most cost-effective Control available but otherwise none. Risk criteria are set regardless of available Controls and are often the same for all Risks. – Most such rules are artificial.
T5 – A process that proceeds from Risk 'identification', through Risk 'analysis and assessment', to Risk 'treatment', which is just decision-making on how to respond to each Risk. – The process is substandard (at best).
T6 – Allocation of Risks to managers as 'owners', which usually implies 'ownership' of associated Controls. – Some version of this is sensible.
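To see why T3 can be "potentially very damaging", here is a minimal sketch of an index-based matrix (the band values and function name are hypothetical, chosen for illustration): multiplying ordinal indices is not the same as multiplying the underlying quantities, so risks with wildly different expected losses – and utterly different tail behaviour – can share one rating.

```python
# Hypothetical 5-band scales (illustrative values, not from Leitch's paper):
PROB_BANDS = [0.01, 0.05, 0.20, 0.50, 0.90]   # representative probabilities
IMPACT_BANDS = [1, 10, 100, 1_000, 10_000]    # representative losses

def matrix_score(p_idx: int, i_idx: int) -> int:
    """Typical matrix rating: probability index times impact index."""
    return (p_idx + 1) * (i_idx + 1)

# Two very different risks receive the same rating...
rare_catastrophe = matrix_score(0, 4)   # ~1% chance of losing 10,000
frequent_nuisance = matrix_score(4, 0)  # ~90% chance of losing 1

# ...although their expected losses differ by two orders of magnitude:
ev_catastrophe = PROB_BANDS[0] * IMPACT_BANDS[4]   # 100.0
ev_nuisance = PROB_BANDS[4] * IMPACT_BANDS[0]      # 0.9
```

And even the expected-value comparison understates the problem: the rare catastrophe carries tail risk that no single matrix cell can represent.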

I believe the average risk register adopts the above approach, so it scores at best 2 / 6 (in practice other flaws are likely).

What needs to be done?

  1. Focus on managing risk events. The solution is threefold:
    • Where "downside risk events" is appropriate, ensure the "impact spread" is acknowledged.
    • Where "events" is appropriate, consider whether there is an upside – what's the baseline?
    • Consider whether an uncertainty focus, together with impact spread is more accurate and helpful.
  2. Risk register as a tool for Risk Listing. A range of "fixes":
    • Acknowledge that registers are often (even typically) used in this way, but can be used more positively.
    • Models can also be used to obscure the underlying risks – deliberately or otherwise.
    • Red herring: ordering risks is (of course!) easily possible, using e.g. Excel.
    • Linking: acknowledge that dealing with links in a linear list / register is not best (use models).
  3. Probability-impact risk assessment. The worm at the heart of risk registers:
    • Educate on the issue (third-party assistance is available e.g. from the former Government Actuary).
    • Show that alternatives are available, including:
      • Simpler e.g. for boards, which acknowledge their experience and the objective of better decision making under uncertainty.
      • Pragmatic e.g. use a triangular distribution for all risks, as advocated by Stanford's Sam Savage and Michael Mainelli of Z/Yen.
      • Robust where a more scientific approach is necessary (e.g. for insurer pricing or setting capital) use a tailored probability distribution.
  4. Risk limits and controls. This would benefit from a more nuanced approach:
    • Whether additional control is appropriate depends on a range of factors.
    • These include a cost-benefit analysis, whether the risk has been "paid for", ease of control etc.
    • Further, cascading risk appetite/tolerance – both amorphous concepts – down to risk limits is challenging.
  5. Risk process. Given too much emphasis, unclear and often flawed:
    • Even for those who believe in a central risk function the typical risk process diagram is shambolic.
    • Action – even "risk decisions" – gets so little attention: see Ingram's "risk management entertainment system".
    • Plus a lack of clarity over process application and frequency, over-communication and too linear an approach.
  6. Allocation of risks to "owners". The least of the problems:
    • Potential dependency between risks does not imply that one person should not have a primary responsibility.
    • Typically this would be based on functional expertise, with the "risk owner" supported by colleagues.
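The "pragmatic" alternative in point 3 can be sketched in a few lines, assuming three-point (minimum / most likely / maximum) loss estimates per risk; the figures here are hypothetical:

```python
import random

random.seed(1)

# Three-point estimate of one risk's loss: min / most likely / max
# (hypothetical figures):
low, mode, high = 10.0, 40.0, 200.0

# Sample the triangular distribution implied by the three-point estimate:
samples = [random.triangular(low, high, mode) for _ in range(100_000)]
mean_loss = sum(samples) / len(samples)

# The triangular mean, (low + mode + high) / 3 ~ 83.3, sits well above the
# "most likely" value of 40 because of the long upside tail – information a
# single probability-impact cell cannot carry.
```

Even this simple model makes the impact spread explicit and can be aggregated across risks by simulation, which a matrix rating cannot.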

Where next? The risk register series

User beware. Many risk experts have warned of the common flaws in risk registers. It doesn't have to be this way. The first half of the set of articles below is generally positive, starting with how five potential audiences might make better use of risk registers. The second half warns of some really dangerous flaws.

© 2014-2017: 4A Risk Management; a trading name of Transformaction Development Limited