Monday, August 6, 2012

ISO 31010 and Loss Modeling

One of the most concerning trends that persists in operational risk management is the lack of interest analysts show in quantifying this risk exposure coherently.

In this blog we look at operational risk from the perspective of the normal and the extreme.

Why the PDF is paramount
If an analyst were to refer to the ISO 31010 guide for quantifying operational risk, they would discover a huge list of measurement techniques in the standard to choose from. In table A.1 on page 22 alone, over thirty different methods are itemized, but loss data modelling is not one of them. Don't misunderstand me; ISO 31010 is an awesome publication, and I wish such a document had existed when I first started learning risk management, because many of the concepts it describes took me months, perhaps years, of research and reading to fully understand.

All this aside, I believe the most important of all risk measures is the Probability Distribution Function (PDF), and in the world of banking, a financial institution that is unable to generate such a thing will find itself disqualified from the Advanced Measurement Approach for operational risk management.

The PDF is a simple concept to grasp, although properly creating a fitted distribution has its difficulties. I have described how to go about this in many places on the Causal Capital blog site, but this [ LINK ] encapsulates the concept quite concisely.

Business view of the loss distribution | Martin Davies
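To make the fitting exercise concrete, here is a minimal sketch in R of fitting a parametric severity curve to loss data. The simulated lognormal losses and the choice of distribution are assumptions for illustration only, not a prescription.

library(MASS)                                   # provides fitdistr()

set.seed(42)
losses <- rlnorm(500, meanlog = 9, sdlog = 1.2) # stand-in loss amounts

fit <- fitdistr(losses, "lognormal")            # maximum likelihood fit
fit$estimate                                    # fitted meanlog and sdlog

# Overlay the fitted density on the empirical loss histogram
hist(losses, breaks = 50, freq = FALSE, main = "Fitted loss severity")
curve(dlnorm(x, fit$estimate["meanlog"], fit$estimate["sdlog"]),
      add = TRUE, col = "red")

In practice the analyst would fit several candidate distributions and compare their goodness of fit before settling on one.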

So why is the PDF so important to understand?

Well, the reasons are numerous, but if a risk analyst really wants to drive value out of their risk management efforts, they will need to apply limited resources where they will have the greatest results. That place is generally neither on the left nor the right side of the curve but around the loss threshold.

The Loss Threshold
There are several ways of calculating the threshold where the normal is no longer so, and today there are plenty of software packages that will do all the heavy lifting for you. One example is the POT (Peaks Over Threshold) package in R-Project [ LINK ].

Where is normal not normal anymore | Martin Davies
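As a rough sketch of how such a package is used, the snippet below reuses the simulated losses from the earlier example; the 95th percentile is a purely illustrative candidate threshold.

library(POT)                       # peaks-over-threshold toolkit

# The mean residual life plot is a common diagnostic for choosing the
# threshold: it should become roughly linear above a suitable value.
mrlplot(losses)

# Fit a Generalised Pareto Distribution to the exceedances above a
# candidate threshold u (the value here is illustrative only)
u <- as.numeric(quantile(losses, 0.95))
gpd.fit <- fitgpd(losses, threshold = u, est = "mle")
gpd.fit                            # reports the scale and shape estimates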

The threshold is important because it tells us many things:


[1] Where events propagate or cluster to cause a massive outage and high-impact losses.

[2] What is significant for a business to monitor but not catastrophic.

[3] Where to manage operational risk so that it is valuable to the business rather than a nanny-state rule book. There are many risk managers out there who give this endeavour a bad name; they say NO to most new business ideas, yet at the same time often miss the really important problems they are supposed to manage.

[4] Understanding how events move across the threshold and into the tail gives the risk analyst a better comprehension of which controls become barriers, why small issues become huge events, and how management should react when presented with a major problem that might result in a game-changing situation. If you want to prioritize loss event management, having a good handle on the causality of events growing across this threshold and into the tail is a fantastic place to start; the sketch after this list shows how a fitted tail can be queried along these lines.
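On that last point, the sketch below shows how a fitted tail can be queried, reusing the losses and threshold u from the earlier snippet. The GPD scale and shape values are hypothetical stand-ins for the estimates fitgpd() would report.

library(POT)

p.exceed <- mean(losses > u)       # how often events cross the threshold
sigma <- 2.5e4                     # hypothetical GPD scale estimate
xi    <- 0.30                      # hypothetical GPD shape estimate

# Unconditional probability that a loss exceeds a severe level L
L <- 5e5
p.tail <- p.exceed *
  pgpd(L, loc = u, scale = sigma, shape = xi, lower.tail = FALSE)

# A high quantile of the overall loss distribution (the 99.9th
# percentile), read off the GPD fitted above the threshold
alpha <- 0.999
q.high <- qgpd((alpha - (1 - p.exceed)) / p.exceed,
               loc = u, scale = sigma, shape = xi)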

I believe risk management truly begins once the Probability Distribution Function is plotted; before that is achieved, risk management is at best a compliance exercise with a hit-and-miss response to hazard. Certainly, tick-box question-and-answer exercises around control self-assessment are a waste of time if you have no idea of the potential loss a control is shielding the business from.
