
"Risk Dashboards should serve the stakeholder" | Advanced Risk Dashboards

Monday, August 13, 2012

ROC Control Optimization

In the world of risk, analysts and managers alike try to reduce the likelihood of an event occurring by inserting controls between the event's driving factors and its outcome. These analysts also regularly monitor specific indicators they believe will give them insight into something unwanted happening.

While the logic here is sound, not all controls are equal and, more often than not, some key risk indicators emit erroneous measures that mislead entire risk teams.

In this short post, we look at a method for weeding out erroneous control signals.

The Fuzzy Control Problem 
There are two conditions a risk manager will want to avoid when investigating a lead for the occurrence of a risk event, disease or hazard.

The type I error
The type I error is more commonly known as the false positive. A risk analyst believes the indicator alarm is real and reacts to the event only to find out that it was a false alarm.

The type II error
It follows that the reverse can occur. An alarm signal is ignored because it is deemed to be nothing more than noise, and the risk analyst fails to respond to the warning quickly enough. In such circumstances the risk analyst is likely to suffer adversely from the unwanted condition.

Both types of error are as serious as each other. The type I error results in missed opportunities and higher control costs, while the type II error is what keeps some people up at night. In fact, if the type II error is severe enough, we often learn about it from a forensic audit into why a risk analyst didn't react to a problem when the warning signs were present and obvious.
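To make the two error types concrete, here is a small Python sketch. The indicator readings and the threshold are invented for illustration only; they are not data from any real control.

```python
# Hypothetical indicator readings (e.g. body temperature, in °C).
normal_readings = [36.5, 36.8, 37.0, 36.6, 37.4, 36.9]  # no event present
event_readings  = [37.2, 38.1, 38.5, 37.9, 36.9, 38.3]  # event present

threshold = 37.1  # alarm fires when a reading exceeds this value

# Type I errors (false positives): alarms raised when no event is present.
false_positives = sum(1 for r in normal_readings if r > threshold)

# Type II errors (false negatives): events missed because the reading
# stayed at or below the threshold.
false_negatives = sum(1 for r in event_readings if r <= threshold)

print(false_positives, false_negatives)  # → 1 1
```

With this made-up data the threshold produces one false alarm and one missed event; shifting the threshold up or down trades one error type for the other, which is exactly the tension ROC analysis is designed to manage.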

Control and Indicator Effectiveness | Martin Davies [click image to enlarge]

The problem with most measures of just about anything, including temperature, wind speed, delivery times, pressure gauges, volumes produced, number of quality control failures and so on, is that these numbers are the outcome of assessing one random thing in a complex system against another. Body temperature, for example, can rise for many reasons, but a slightly higher body temperature in a patient doesn't necessarily mean they are sick.

Controls are and always will be incomplete, capturing noise and real signals of disorder together.

Controls are consequently unlikely to be a pure and accurate indication of a specific event occurring and generally work in ranges, where a true positive alarm starts when a confidence level rises above a specific breach value.

So how do we calculate this breach value?

Receiver Operating Characteristic
There is a statistical solution to this problem known as the Receiver Operating Characteristic (ROC), which is used heavily in solving strategic military problems, medical prognosis, engineering monitoring, power management and credit risk; its more frequent use in operational risk would be welcome.

ROC helps analysts discriminate between two or more conditions, such as an alarm signal and an actual fire. It does this by counting the number of times that data values from one condition exceed a cut-off when an unknown outcome may be present. It is a graphical tool that summarizes data across a horizon to reveal specific events which would not normally be visible to an observer reviewing the data points alone. In the end, however, ROC aims to locate the cut-off, or benchmark of significance, for an indicator: it tries to establish evidence that an event is starting to occur, and it does this within a confidence level.
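To show how an ROC curve is assembled, here is a minimal Python sketch using invented indicator samples. It performs the counting exercise described above: sweep the cut-off across the observed range and, at each cut-off, count how often the "event" sample exceeds it (true positive rate) versus how often the "no event" sample does (false positive rate).

```python
# Invented indicator values for the two conditions.
noise  = [0.2, 0.4, 0.5, 0.6, 0.7, 0.8]   # readings when no event is present
signal = [0.5, 0.7, 0.8, 0.9, 1.0, 1.1]   # readings when an event is present

# Sweep candidate cut-offs from strictest (highest) to loosest (lowest).
thresholds = sorted(set(noise + signal), reverse=True)

roc_points = []
for t in thresholds:
    tpr = sum(1 for x in signal if x >= t) / len(signal)  # true positive rate
    fpr = sum(1 for x in noise if x >= t) / len(noise)    # false positive rate
    roc_points.append((fpr, tpr))

# Each (FPR, TPR) pair is one point on the ROC curve; a curve that bows
# towards the top-left corner indicates a discriminating indicator.
for fpr, tpr in roc_points:
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

This is only a sketch of the underlying counting; the rocplus library discussed below does the equivalent work (and much more) in R.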

ROC Example from rocplus R-Project | Robert Wheeler [click image to enlarge]

The ROC example shown in the diagram above was taken from the rocplus library that is available in the R-Project statistical tool. The entire example took a matter of minutes to set up before executing in R-Project, and more information about this procedure can be read by following this link.

ROC analysis or Bayesian predicted values broadly offer many advantages to risk analysts, and I have summarised a few of these key benefits below:

[1] ROC can be used to show whether enough data is received to draw a conclusion about a risk event.

[2] ROC filters out noise in key risk indicators and can monitor many indicators at once.

[3] Control networks that are sensitive to ROC monitors will be more cost-effective for a business.  This is because additional follow-up controls are only engaged when confidence levels fall towards the total control reference line (see the ROC diagram by clicking on it).

[4] ROC is ideal for setting control limits for breaches, rather than relying on a typical arbitrary threshold.

[5] ROC can be used with a continuous stream of data that builds up a cleaner picture of control performance and risk potential over time. This is particularly useful as control failures often happen in a sporadic manner, where control results deteriorate over time rather than simultaneously.
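On benefit [4], one common way to turn an ROC analysis into a concrete control limit is Youden's J statistic, which picks the cut-off maximising the gap between the true and false positive rates. The post doesn't prescribe this particular method, so treat the sketch below (with invented data) as one illustrative option:

```python
# Invented indicator readings for the two conditions.
noise  = [0.2, 0.4, 0.5, 0.6, 0.7, 0.8]   # no event present
signal = [0.5, 0.7, 0.8, 0.9, 1.0, 1.1]   # event present

best_threshold, best_j = None, -1.0
for t in sorted(set(noise + signal)):
    tpr = sum(1 for x in signal if x >= t) / len(signal)
    fpr = sum(1 for x in noise if x >= t) / len(noise)
    j = tpr - fpr  # Youden's J: how far this cut-off sits above chance
    if j > best_j:
        best_threshold, best_j = t, j

print(best_threshold)  # the data-driven breach value for this indicator
```

With this sample the best cut-off lands at 0.7, rather than at whatever round number an analyst might have picked by eye; that is the sense in which ROC replaces an arbitrary threshold with a justified one.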

Personally, I am such a big fan of ROC that it would be great to see this statistical method written up and included in the ISO 31010 guide.
