A recent debate on the G31000 LinkedIn forum about the gap between inherent and residual exposure has shown a wide divergence of opinion, not only on what these terms actually stand for, but on why it is important to measure inherent risk as a value in the first place.

Inherent risk is actually not a new concept in risk management and is captured in many other risk standards, such as COSO, but ISO 31000 has uniquely excluded the term. This omission adds to the confusion over whether it should be used at all, let alone why someone would want to understand their inherent and residual risk side by side.

In this short blog, we are going to investigate these two terms, how they interrelate, and, importantly, why and how inherent risk can be estimated.

**Definition of terms**

Before leaping into any model which builds up a perspective of inherent and residual risk, it is prudent to define the terms, just so that we are all on the same page.

**INHERENT RISK**
Inherent risk is the probability of loss arising out of circumstances from an existing environment and in the absence of any control or precautionary action to modify those circumstances. This is a nice definition from Business Dictionary [ LINK ].

**RESIDUAL RISK**
Residual risk can be seen as the exposure to loss remaining after other known risks have been countered, factored in or eliminated. Another wonderful and clear definition from Business Dictionary [ LINK ].

To make it easy to understand, I have pulled together a schematic of Inherent and Residual risk below.

Controlled and Uncontrolled Operating Environment

As you can see, case 1 has no control and case 2 has a control operating at 50% effectiveness. Given its operating level, two of the fraudulent sales are filtered out of the total, but it is also probable that the control will suffer type I and type II errors.

Type I error: Leads to a False Positive, that is we see a good sale as fraudulent

Type II error: Leads to a False Negative, where we see a fraudulent sale as good

Together, type I and type II errors add to the total potential loss arising from ineffective controls, and consequently to the residual risk. Given such mathematics, the residual risk can actually be higher than the inherent exposure; however, we will leave that issue to one side for the time being, because it would complicate the fundamental idea here and open up a huge debate.
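To make the error arithmetic concrete, here is a minimal Python sketch of how type I and type II errors feed residual exposure. The function name and every rate are illustrative assumptions, not figures from the schematic.

```python
# A minimal sketch (illustrative numbers, hypothetical function name) of how
# type I and type II errors feed the residual exposure of a fraud control.

def residual_exposure(total_sales, fraud_rate, detection_rate, false_positive_rate):
    """Split sales into the confusion-matrix cells and return the exposure
    left over after the control has operated."""
    fraud = total_sales * fraud_rate            # inherently fraudulent sales
    good = total_sales - fraud                  # inherently good sales

    caught = fraud * detection_rate             # true positives: fraud filtered out
    missed = fraud - caught                     # type II errors: fraud passed as good
    blocked_good = good * false_positive_rate   # type I errors: good sales rejected

    # Residual exposure = fraud that slips through + good business lost
    # to false positives.
    return missed + blocked_good

# A control operating at 50% detection, as in case 2 of the schematic,
# with an assumed 4% fraud rate and a 2% false-positive rate:
print(residual_exposure(100, 0.04, 0.50, 0.02))
```

With a perfect control (100% detection, no false positives) the residual exposure collapses to zero; with no control at all, it equals the inherent exposure.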

**Interaction Modelling**

Please note that it is straightforward to model the interaction between inherent and residual risk without asking staff to imagine the entire world of controls blowing up.

Crazy thought: who would expect the entire control framework to fail, and who would ponder such a thing? I make this comment right up front because a lot of risk analysts reject the whole concept of inherent risk on the premise that it is too difficult to assess a world without controls. The problem is that these analysts are attempting to close the "inherent risk gap" through a soft approach of soliciting opinion from control operators. Such a method of obtaining inherent risk numbers is likely to be tumultuous to execute, and you won't have any disagreement from me there.

Just a historical note, in the 31000 working group we avoided inherent risk like the plague. We, instead, decided to evaluate the existing risk with the existing controls.

ISO working group comment | G31000

Scary stuff going on there, but there is a mathematical solution, or more precisely a canonical substitution solution, to the requirement of assessing inherent risk. What is concerning is that ISO 31000 dropped the term on the whim that it might just be too hard to solicit opinion from people. Yet the solution to inherent risk analysis is relatively straightforward and right there in front of us to grab.

Sometimes in life the answer is right in front of us; all we have to do is see the wood for the trees.

All this aside, let's take a look at the canonical substitution solution. If Z is the total number of sales, A is the number of good sales and B is the number of fraudulent ones, it follows that Z = A + B. If you know Z and you know A, you can work out B through substitution, i.e. B = Z - A. This is very simple, and I question why it is tripping up so many risk analysts in the commercial world today.
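The substitution itself needs nothing more than arithmetic; here is a two-line Python sketch with made-up figures:

```python
# The substitution in code; Z and A are illustrative figures.
Z = 1000        # all sales observed in the period
A = 985         # sales known to be good
B = Z - A       # fraudulent sales follow by substitution: B = Z - A
print(B)        # 15
```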

We could also use a Bayesian approach, along with other factors, to identify inherent risk; this is a bit more exciting.

So we have solved the problem of discovering inherent risk without having to gather anything other than the 4 things we have listed above. However, before concluding that all is fine with the world, we need to keep the following considerations in mind.

[1] The control's efficiency or standard error may change as the volume of sales is increased or decreased.

[2] The control may be made up of many sub-elements which all operate together as one unit. You can bet your bottom dollar that a good risk analyst will invariably want to know how each element contributes to the overall effectiveness of the control's performance.

[3] Then there is the issue of those type I and type II "nasties" but these concerns were already present before we commenced with our basic model, so we haven't introduced anything new here. Nonetheless, we might want to quantify the "nasties" either way.

[4] Finally, there is the financing of the control. This adds another layer of complexity because different sales might carry different revenue values. Our model, on the other hand, could be extended and merged with a Monte Carlo algorithm to generate a stochastic outcome, and this would take in the financial aspect of the control operation.
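As a sketch of consideration [4], and assuming (purely for illustration) a Beta-distributed control effectiveness and log-normal sale values, the basic model could be merged with a Monte Carlo run along these lines:

```python
# A sketch (not the author's model) of a Monte Carlo extension: the control's
# effectiveness and each sale's revenue are drawn at random, and the residual
# loss is aggregated across trials. All distribution parameters are assumptions.
import random

def simulate_residual_loss(trials=1000, n_sales=500, fraud_rate=0.04):
    losses = []
    for _ in range(trials):
        effectiveness = random.betavariate(5, 5)    # control hovers around 50%
        loss = 0.0
        for _ in range(n_sales):
            value = random.lognormvariate(4, 0.5)   # revenue varies per sale
            if random.random() < fraud_rate:        # sale is fraudulent
                if random.random() > effectiveness: # control misses it (type II)
                    loss += value
        losses.append(loss)
    losses.sort()
    return losses[int(0.99 * len(losses))]          # 99th-percentile residual loss

print(round(simulate_residual_loss(trials=200, n_sales=200), 2))
```

The design choice here is that effectiveness is itself a random variable per trial, which captures consideration [1] (efficiency drifting with conditions) as well as the financial dimension.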

Problem solved, but one question remains: why did I complicate the answer by introducing Bayes' theorem?

Quite simply, the inherent risk will change as the volume changes through time. Given this, I might want to use the success rate of an operating control to quantify its failure rate and, consequently, the change in our inherent risk through time.

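One hedged way to read that Bayesian angle: treat the control's failure rate as a Beta-distributed belief and update it as each batch of sales is observed, so the estimate tracks changing volumes through time. The prior and the batch counts below are invented for illustration.

```python
# A hedged sketch: conjugate Beta-Binomial updating of a belief about the
# control's failure rate. Prior and batch data are assumptions.

def update_failure_belief(alpha, beta, failures, successes):
    """Conjugate Beta-Binomial update of the control failure-rate belief."""
    return alpha + failures, beta + successes

alpha, beta = 1.0, 1.0                       # flat prior on the failure rate
batches = [(2, 98), (5, 195), (1, 99)]       # (failures, successes) per period
for failures, successes in batches:
    alpha, beta = update_failure_belief(alpha, beta, failures, successes)

mean_failure_rate = alpha / (alpha + beta)   # posterior mean
print(round(mean_failure_rate, 4))           # 0.0224
```

Because the belief sharpens as volume accumulates, the implied inherent exposure (failure rate applied to throughput) moves with the business rather than sitting as a one-off estimate.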
Inherent risk was taken out because it doesn't serve any purpose to measure a theoretical attribute. If you know the effectiveness of your control in the chain of controls towards meeting objectives, you are already there. Why waste time measuring inherent risk? Use Bayes to learn about your residual risk instead.

In my opinion, inherent risk was created because folks didn't know how to measure residual risk. They needed a benchmark in inherent/gross risk. Removing inherent risk will 'force' risk managers to think harder and apply purer risk management techniques rather than soliciting arbitrary opinions that wouldn't stand more than a few chains of events in a causal-impact tree.