
Monday, March 12, 2012

Frequency x Magnitude - the wrong measure

In the world of operational risk, many analysts believe they can dimension the impact of uncertainty by counting the number of events they experience over a period of time and then multiplying that count by the average loss amount across the event horizon. This approach to quantifying the impact of uncertainty is riddled with error and should be avoided. In fact, let's be clear: it is so fundamentally flawed as a measure of exposure that it isn't even a good estimate of how much operational risk may cost us in the future.

In this article we will look at why F x M = Exposure doesn't equal the true potential loss for operational risk, and what can be done to improve this measure of risk.

Why F x M is busted
Before we can dismantle the F x M = Exposure concept, we need to understand where this measuring idea originated.

The logic behind F x M = Exposure is based on a straightforward theory. In short, if a risk analyst knows how many risk events they might have in a year and they capture the average loss from these events, then it would follow that multiplying the two variables together gives them a spot estimate of how much risk events in a business unit may cost.
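
As a rough illustration of that logic (the figures below are invented, not taken from any real loss data), the naive calculation amounts to a single multiplication:

```python
# Naive F x M = Exposure: all figures are invented for illustration only.
events_per_year = 12        # observed count of loss events in a year (F)
average_loss = 25_000       # average loss per event (M)

exposure = events_per_year * average_loss
print(f"Naive exposure estimate: {exposure:,}")  # one number, not a distribution
```

The result is one number, which is exactly the "single pixel" problem discussed next.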


There are several problems with this kind of thinking but, most critically, multiplying F1 x M1 gives you only a single position on a potential loss curve, and that curve has many positions. We need to accept that this crude expression of loss is equivalent to looking at one pixel in a newspaper photograph; the rest of the picture is made up of thousands of pixels and is lost to the observer.

Now, most analysts very quickly come to the conclusion that they are seeing a single splotch of paint in a larger painting, although some never make that connection at all. Those who do try to improve this measure of potential magnitude by averaging it. This is fine, but we then have to acknowledge that the number of events per year (the count) is also a parametric measure of frequency rather than a fixed spot count in time. So the solution to the counting problem is to average the frequency of events as well.

If an analyst is averaging the numbers on both the x and y axes and multiplying them together, then they are moving in the right direction for correcting the errors in their loss measure. So why not take the process one more step forward and do it properly?

So what is that next step?

Parametric measure of loss
Building a parametric expression of loss is actually easier than one would expect, and it is statistically sound. Templates in a statistical tool, even in Microsoft Excel, can be set up once to do this calculation and then reused to model the different loss data sets that may be experienced in other parts of a business. In effect, the frequency of potential loss (normally represented on the y-axis) is modeled separately from the magnitude distribution (the x-axis) by generating curve estimates for both from the expected values of actual observations. Hypothetical frequency and magnitude curves are then generated from these estimates and passed into a Monte Carlo engine to create a final potential loss probability distribution.
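
A minimal sketch of this kind of frequency/magnitude Monte Carlo is shown below. The Poisson frequency curve, the lognormal magnitude curve and every parameter value are assumptions chosen purely for illustration; in practice both curves would be fitted to observed loss data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed parameters, purely illustrative: these would normally be
# fitted to the observed loss data of the business unit in question.
annual_event_rate = 12                  # Poisson mean for frequency (events per year)
severity_mu, severity_sigma = 9.0, 1.2  # lognormal parameters for magnitude

n_years = 100_000                       # Monte Carlo trials (simulated years)
annual_losses = np.empty(n_years)
for i in range(n_years):
    n_events = rng.poisson(annual_event_rate)                       # frequency draw
    losses = rng.lognormal(severity_mu, severity_sigma, n_events)   # magnitude draws
    annual_losses[i] = losses.sum()                                  # aggregate loss for the year

print(f"Mean annual loss: {annual_losses.mean():,.0f}")
print(f"99% OpVaR: {np.quantile(annual_losses, 0.99):,.0f}")
```

Each simulated year draws an event count from the frequency curve, draws that many loss amounts from the magnitude curve and sums them; the collection of simulated annual totals forms the potential loss distribution.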


This is of course the next evolutionary step for modelling potential loss, but it isn't the last leap, and there are hitches with this model which will eventually need to be treated.

Firstly, the frequency and magnitude distributions may not be normally distributed; they may have skews or tails which the analyst will need to understand. The hypothetical loss distributions may also not fit the observed data they are created from, so a goodness-of-fit test needs to be carried out. Finally, the right-side tail of the curve for extreme losses needs to be generated, and a process such as Extreme Value Theory should be considered, but discussion of that end of the game is best left for another article.
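
As one possible form of the goodness-of-fit step, a Kolmogorov-Smirnov test against a fitted candidate severity curve might look like the sketch below; the lognormal candidate and the synthetic "observed" data are assumptions for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical observed loss magnitudes for one business unit.
observed_losses = rng.lognormal(mean=9.0, sigma=1.2, size=250)

# Fit a candidate severity distribution (a lognormal here, only an assumption).
shape, loc, scale = stats.lognorm.fit(observed_losses, floc=0)

# Kolmogorov-Smirnov goodness-of-fit test against the fitted curve.
ks_stat, p_value = stats.kstest(observed_losses, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
# A small p-value suggests the fitted curve does not describe the data well.
```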

All in all, even with these additional concerns, we are in a much better place using this parametric approach to quantifying business risk losses than simply computing F x M = Exposure.

Value at Risk
By building these hypothetical distributions we can also answer lots of different and important risk questions:

[1] We now have an OpVaR measure which can be benchmarked
[2] We can track regime shifts over time
[3] We can ask questions such as what is the probability of losing 100 dollars
[4] What is the probability that a loss might occur greater than a million
[5] We can correlate our loss function with business variables, say transaction volume
[6] The transaction-volume-to-loss curve can be used to infer expected loss from business growth
[7] These curves can be shocked for stress testing

 .. and the list goes on. A short sketch of how a few of these numbers can be read from the simulated loss distribution follows below.
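
Here is a minimal sketch of how points [1], [3]/[4] and [7] can be read off a simulated annual loss distribution, reusing the same illustrative Poisson/lognormal assumptions as the earlier sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

# Re-simulate an aggregate annual loss distribution (illustrative
# Poisson-frequency / lognormal-magnitude assumptions, as before).
n_years = 50_000
annual_losses = np.array([
    rng.lognormal(9.0, 1.2, rng.poisson(12)).sum()
    for _ in range(n_years)
])

# [1] An OpVaR measure at a chosen confidence level, which can be benchmarked.
opvar_999 = np.quantile(annual_losses, 0.999)

# [3]/[4] Probability that the annual loss exceeds a chosen threshold.
p_over_1m = (annual_losses > 1_000_000).mean()

# [7] A crude stress test: scale every loss up 20% and recompute the VaR.
stressed_999 = np.quantile(annual_losses * 1.2, 0.999)

print(f"99.9% OpVaR: {opvar_999:,.0f}")
print(f"P(annual loss > 1,000,000): {p_over_1m:.2%}")
print(f"Stressed 99.9% OpVaR: {stressed_999:,.0f}")
```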

Don't misunderstand me, Value at Risk has plenty of caveats that need to be kept in mind when it is used, but it is a giant leap forwards from F x M = Exposure. One last thing: don't believe it when people tell you that VaR was the cause of the Global Financial Crisis. That is a gross oversimplification of the failure of banking, and hanging the crisis on VaR is like a tradesman blaming his tools for a poor job.

4 comments:

  1. I think a big reason behind the widespread use of F x M = Risk Exposure is that business units feel they can understand what is going on.

    A risk manager can try to calculate the loss distribution by risk and business line based on records of loss data, but when discussing self risk assessment with the business, the concept usually does not fly. With operational risk, every part of an organization is involved in the risk assessment, and it is going to take a very good operational risk manager to communicate the parametric distribution method to, say, a branch manager. At least that's my experience.

    1. Chris,

      I agree with you totally. I think F x M = Risk Exposure is also an outcome of needing a usable explanation. Imagine sitting in front of the average business manager and trying to explain the concept of parametric loss to them. There is often verbal push-back from the manager when you say:

      50% probability of loss is say 100,000
      75% probability of cumulative loss coverage is say 180,000
      90% probability of cumulative loss coverage is say 400,000

      The numbers don't add up in the mind of the business manager; they usually retort that the 90%, 95% or 99% figure is ridiculous, too large a potential loss and far too unlikely. Comments such as "we never operate at a 1% success rate, what is this kind of reading?" are very common.

  2. How do you generate a credit loss distribution? What should be on the y-axis, and what should be on the x-axis?

  3. Very good question. For credit risk it would go the following way: the y-axis would represent the Probability of Default, and the x-axis becomes the Loss Given Default combined with the Exposure at Default. I should write an article on this because investigating credit risk is very interesting.
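
    A minimal sketch of that structure (Probability of Default on one axis, Loss Given Default and Exposure at Default on the other), with every portfolio figure invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical portfolio: PD, LGD and EAD per obligor are invented values.
n_obligors = 500
pd_ = rng.uniform(0.005, 0.05, n_obligors)      # probability of default
lgd = rng.uniform(0.3, 0.6, n_obligors)         # loss given default
ead = rng.uniform(10_000, 250_000, n_obligors)  # exposure at default

n_trials = 50_000
portfolio_losses = np.empty(n_trials)
for i in range(n_trials):
    defaults = rng.random(n_obligors) < pd_     # Bernoulli default draws
    portfolio_losses[i] = (defaults * lgd * ead).sum()

print(f"Expected loss: {portfolio_losses.mean():,.0f}")
print(f"99% credit VaR: {np.quantile(portfolio_losses, 0.99):,.0f}")
```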
