Risk managers and consultants often facilitate "risk management workshops" in an effort to capture a registry or list of threats that their businesses may face. The way these workshops are delivered can vary substantially between practitioners but, that aside, a common problem many risk managers run into is how to propose the likelihood of occurrence for a specific incident and how to identify its unbiased impact.
Let's take a look at this.
We have talked about Scenario Analysis (SA) on this blog before [ LINK ], and more than once. We have also published a presentation that mirrors the sbAMA best-practice method for facilitating Scenario Analysis; that document will lead a risk analyst through the various detailed manoeuvres of this grand exercise [ LINK ].
Today, however, we are going to review one of the largest inhibitors found in the majority of Scenario Analysis programs. Before we pull the teeth out of these workshop tea parties, there are a few questions on the G31000 forum ( Likelihood and Consequences [ LINK ] ) that incited this post, and we should make mention of them.
Question 1 : Since the 9/11 Twin Towers attack, has the risk of airplanes being intentionally flown into buildings increased or decreased?
Question 2 : If New York businesses have become more 'resilient' as a positive outcome of 9/11, why did so many of them struggle with the Hurricane Sandy catastrophe?
The answer to both of these questions will not be found by following the route of a typical Scenario Analysis exercise, especially where risk analysts pontificate in a linear manner on the types of individual threats their organisations face. Attempting to propose the likelihood or magnitude of a specific event in such circumstances will nearly always result in a fuzzy outcome that doesn't support the purpose it was designed for.
Where does SA go wrong?
Before we can fully understand where these Scenario Analysis workshops go astray, it is worth reviewing the typical process under which they are carried out.
Three Critical Steps for Scenario Analysis | Causal Capital
Steps 1 and 2 are ideal, even recommended practice, but at step 3 we find ourselves trapped in a dilemma that will furnish us with data that isn't coherent or cognisant, and this misinformation won't help us manage the risk events we identified in step 2. In fact, there is so much wrong with the way risk analysts commonly execute step 3 that we should highlight some of these flaws before we attempt to address them.
FAILURE 1 Risk events aren't decomposed
When looking at a single catastrophe, risk analysts have a tendency to fast-forward through time to the end point rather than attempt to understand how they reached that conclusion. For example: there was an earthquake, a tsunami and a nuclear power plant meltdown. So many iterative steps are missing, and there is so much data paucity in that single sentence, that it is nearly impossible to understand the true frequency or magnitude of the end point in time.
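To make the decomposition point concrete, here is a minimal Python sketch. Every probability in it is an invented placeholder purely for illustration, and the step descriptions are assumptions, not sbAMA prescriptions; the point is only that the end-point frequency emerges from a chain of conditional steps rather than from a single gut-feel number.

```python
import random

# Hypothetical decomposition of a compound catastrophe into conditional steps.
# All probabilities below are invented placeholders for illustration only.
chain = [
    ("severe earthquake strikes the region",   0.02),  # annual probability
    ("earthquake triggers a tsunami",          0.30),  # conditional on the quake
    ("tsunami overtops the plant's sea wall",  0.10),  # conditional on the tsunami
    ("backup power fails and the core melts",  0.25),  # conditional on the flooding
]

def simulate_year():
    """Walk the causal chain for one simulated year; return how far it progressed."""
    depth = 0
    for _, p in chain:
        if random.random() > p:
            break
        depth += 1
    return depth

trials = 100_000
full_chain = sum(1 for _ in range(trials) if simulate_year() == len(chain))
print(f"Estimated annual frequency of the full end point: {full_chain / trials:.5f}")

# Compounding the conditionals directly gives the same order of magnitude:
# 0.02 * 0.30 * 0.10 * 0.25 = 0.00015, roughly a 1-in-6,700-year event,
# a figure nobody can "feel" by jumping straight to the meltdown headline.
```

Once the event is laid out this way, the workshop can debate each conditional step, where data may actually exist, instead of arguing about the headline catastrophe as a single unit.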
FAILURE 2 No event plays out twice the same way
So then, has the risk of airplanes intentionally striking buildings increased, and why would it increase? This is the wrong question; one could just as easily argue that it has decreased because society is aware of the potential calamity and prepared for it. Either position can be argued intelligently, and most rationalisations will end up taking us into a reductio ad absurdum before we find the solution. This is a Zeno paradox which we will eventually correct.
Black swans (an unknown risk from an unknown quarter, unknown unknowns) don't play out in the same way twice, and they aren't black swans if they do. More importantly, to develop a robust business model or process, you need to understand the weaknesses within your business, not the threats facing it.
To be truly accountable for failure is an endogenous problem not an exogenous excuse. | Martin Davies, Causal Capital.
FAILURE 3 Quantifying with speculation
So we have a single threat which sits out in the tail; this is the most ridiculous postulation. The earthquake could cost us fifty million, but why fifty and not forty-eight million? And forty-eight million may not be a problem if it happens a year from now, only if it happens tomorrow.
What we are failing to capture here is how we reached forty-eight million and the likelihood of each iteration occurring through time to land at that end point. The nastiest risks you could encounter probably already sit in your risk or scenario register, and you more than likely already know about them, but alas they never quite play out to the designed end point you had in mind.
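A small Python sketch makes the same argument about magnitude. All figures, function names and parameters below are assumptions invented for illustration; the idea is that "the earthquake costs us fifty million" is really a distribution shaped by which intermediate steps fire and by when the event lands.

```python
import random

# Illustrative sketch only: every figure below is a placeholder, not an estimate.

def sample_loss(years_until_event, mitigation_rate=0.15):
    """Sample one path through the scenario and return a loss in $ millions.

    mitigation_rate is an assumed annual reduction in exposure as controls
    mature, so the same event landing a year from now hurts less than tomorrow.
    """
    base = random.lognormvariate(3.6, 0.6)       # underlying severity
    escalation = 1.0
    if random.random() < 0.4:                    # does the supply chain also break?
        escalation += random.uniform(0.2, 0.8)
    if random.random() < 0.2:                    # does a regulatory penalty follow?
        escalation += random.uniform(0.1, 0.5)
    return base * escalation * (1 - mitigation_rate) ** years_until_event

tomorrow  = sorted(sample_loss(0) for _ in range(50_000))
next_year = sorted(sample_loss(1) for _ in range(50_000))

for label, losses in (("tomorrow", tomorrow), ("in one year", next_year)):
    median = losses[len(losses) // 2]
    p95 = losses[int(len(losses) * 0.95)]
    print(f"{label:>12}: median ~${median:.0f}m, 95th percentile ~${p95:.0f}m")
```

The single fifty-million figure a workshop writes down is just one draw from a distribution like this; the useful debate is about the shape of the paths through time, not the headline number at the end of them.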