Our recent article on the risk matrix [LINK] is fine as far as it goes, but the concept of a risk matrix itself embodies much of what is wrong with risk management today. Let me elaborate on this for a moment.
I believe there is a common misconception in the way people perceive uncertainty, one that is highly erroneous and potentially dangerous when not fully understood. "Don't be fooled by randomness" [LINK], as the author Nassim Taleb often states, and I will take a few moments here to explain the typical human errors that risk reports such as the risk matrix encourage.
Perhaps one of the biggest problems in risk management, certainly as it is sold to the world as a management discipline today, is this: when you 'state' that a risk has a specific likelihood, or magnitude for that matter, you have to ask yourself some very important questions and attach some caveat emptor boundaries to your 'risk statement'.
"Uncertainty is the state, even partial, of deficiency of information related to, understanding or knowledge of an event, its consequence, or likelihood"
ISO 31000 | Principles and Guidelines
From the outset, risks that are being monitored for aleatory uncertainty are generally distributions.
This is better explained with an example. Let's say that over a given period of seven days, the 'average' temperature is 25 degrees. The probability that the temperature will reach 26 degrees is then inevitably higher than the probability of it reaching 35 degrees over the same period, not least because to reach 35 degrees the temperature must first pass through the 26, 27, 28 ... degree barriers.
You can apply this thinking to a huge array of threats from storms to earthquakes.
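The temperature example can be sketched numerically. Assuming, purely for illustration, that the daily temperature follows a normal distribution with mean 25 and standard deviation 4 degrees (both numbers are hypothetical), the exceedance probabilities fall away exactly as described:

```python
import math

def exceedance_prob(threshold, mean, sd):
    """P(temperature >= threshold) under a Normal(mean, sd) model."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical model: daily temperature ~ Normal(mean=25, sd=4)
p_26 = exceedance_prob(26, 25, 4)  # roughly 0.40
p_35 = exceedance_prob(35, 25, 4)  # roughly 0.006
assert p_26 > p_35
```

The same shape of calculation carries over to any aleatory threat for which a distribution can reasonably be assumed.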
Beyond all of this, we have to ask ourselves what the implications of high temperatures are for our objectives, because a high temperature by itself may be pure uncertainty, but it is not a risk unless it is tied in some way to outcomes on an objective. An objective's zones of fragility should be identified alongside an assessment of the effects of randomness applied to it.
As popular as the risk matrix is, it doesn’t accommodate reporting risk in this manner and consequently it has dangerous limitations.
Secondly, when we identify a specific risk, we go about gathering information to fortify our 'belief' in the unique criteria the risk may contain. This can include the risk's causes, driving factors, potential consequences and so on. When we attempt to measure uncertainty by gathering proof, there is a threat that the observer will apply bias or framing to their observations. There are many biases the average person suffers from, from confirmation biases and informal fallacies right through the spectrum to hyperbolic discounting, and they all need to be kept in check.
Finally, as we go about gathering information to prove whether a risk is significant or not, we need to be aware that the types of data we use to describe a risk, and their sources, are a form of uncertainty in their own right!
Perhaps the best source of data that can be gathered for measuring any risk is incident data, but even so, this data needs to be modelled to turn it into information. Models nearly always have constraints and inject error or framing into what is observed. Further still, historical incident data is backward looking, and it assumes the future is likely to play out the same way the past did. We have to be honest for a moment and accept that our certainty is itself uncertain yet again.
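To make the modelling point concrete, here is a minimal sketch of the simplest turn from incident data into information. The counts and the Poisson assumption are entirely hypothetical; the point is that even the fitted rate carries a standard error of its own:

```python
import math
import statistics

# Hypothetical: incidents recorded per year over eight years
incident_counts = [3, 1, 4, 2, 3, 5, 2, 3]

# Maximum-likelihood estimate of the annual incident rate under a Poisson model
rate = statistics.mean(incident_counts)

# The estimate itself is uncertain: standard error of a Poisson rate ~ sqrt(rate / n)
std_error = math.sqrt(rate / len(incident_counts))

print(f"estimated rate: {rate:.2f} +/- {std_error:.2f} incidents per year")
```

Even before we worry about the future resembling the past, the model already tells us our 'known' rate is only known to within a margin.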
Five questions practitioners need to answer
Before we go off reporting anything in a risk matrix, a risk analyst must ask themselves some very important questions for each risk they have listed in their matrix, risk register or anywhere else for that matter:
Expanding on ISO 31000, here are the five questions that need answers.
 What do you know about this risk?
Do you know some of its causes, and which causal paths you don't know well even though you may be aware of them?
Do you know which common factors appear to be present when the risk strikes?
What type of signature data can you gather to prove any of this?
 Where did you source your data from to fortify your belief around a specific risk? Is that source timely, and is the accuracy or standard error of the data known?
 Do you know which objectives are impacted by a risk and how?
Where are these objectives fragile, and what are they fragile to in the first place?
 Do you know the time between now and when the objectives are to complete?
Some objectives are perpetuities, for sure, but many have a threat window that will pass.
 How does this risk benchmark against other uncertain threats and opportunities?
This is an important consideration because measuring and managing any single risk is not effortless: it consumes resources, and we perform better when we know where best to expend our valuable resources. Some risks will be minor, even insignificant, while others less so, and whether we like it or not, there is going to be an appetite or threshold (call it what you wish) that generally has to be breached before a risk becomes an ongoing concern for management. We need to understand where this threshold sits before we can design any suitable risk treatment for the uncertainty we are measuring.
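As a sketch of this prioritisation step, the snippet below filters a hypothetical register against an assumed appetite threshold. Every name and figure is invented for illustration; the point is simply that treatment effort goes only to risks that breach the appetite:

```python
# Hypothetical register: (risk name, expected annual loss in $), with an
# assumed management appetite; risks below it are parked, not actively treated.
APPETITE_THRESHOLD = 50_000

risk_register = [
    ("server outage", 120_000),
    ("minor data-entry errors", 8_000),
    ("supplier failure", 75_000),
    ("stationery theft", 500),
]

# Rank only the risks that breach the appetite, worst first
active = sorted(
    (r for r in risk_register if r[1] >= APPETITE_THRESHOLD),
    key=lambda r: r[1],
    reverse=True,
)
for name, loss in active:
    print(f"{name}: expected annual loss ${loss:,}")
```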
Too often I see risk managers list risks alongside spurious, arbitrary mean estimates of likelihood and magnitude, estimates gathered from nothing more than an assessment exercise. If risk managers leave their risk registers in such an incomplete state, as so many do, the task is undone, and as it stands it is not useful for the commercial practice of risk management.
If you can't answer any of the five questions I have listed above for a risk you have tallied in your risk register, you need to accept that your uncertainty is extremely high for the risk you itemise. Risk is not just frequency x magnitude; it is that, but it should also be far more than that.
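One way to see why "frequency x magnitude" as a pair of point estimates understates the problem is to treat both as distributions. The sketch below, with entirely hypothetical parameters, combines a Poisson frequency with a lognormal magnitude via Monte Carlo, yielding a loss distribution rather than a single number:

```python
import math
import random

def poisson_sample(lam, rng):
    """Sample an event count from Poisson(lam) using Knuth's method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Hypothetical model: event frequency ~ Poisson(2 per year),
# event magnitude ~ LogNormal(mu=10, sigma=1) in dollars.
rng = random.Random(7)
annual_losses = []
for _ in range(20_000):
    n_events = poisson_sample(2.0, rng)
    annual_losses.append(sum(rng.lognormvariate(10, 1) for _ in range(n_events)))

annual_losses.sort()
mean_loss = sum(annual_losses) / len(annual_losses)
p95_loss = annual_losses[int(0.95 * len(annual_losses))]
```

The mean (the "frequency x magnitude" point estimate) and the 95th percentile tell very different stories in a skewed distribution, which is exactly the information a single cell in a risk matrix throws away.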