"Risk Dashboards should serve the stakeholder" | Advanced Risk Dashboards

Friday, March 30, 2012

What is wrong with VaR

Value at Risk (VaR) is often criticised, especially by those who don't use it, no surprise there, and I label such propaganda as statistical xenophobia by the masses. There is even a mainstream following that claims, in some respects, that the use of Value at Risk should be scrapped. Interestingly, I have never met anyone of this thinking who is able to suggest a viable and cognitive alternative. Well, not quite yet that is.

In this post we look at the problems with VaR and what can be done to improve this measure of potential downside.

What is VaR
Before we can pull Value at Risk apart, it is probably apt that we explain what this creature actually is. In short, it is a simple and elegant way of quantifying risk and it kind of goes something like the following:
VaR is defined as a threshold value such that the probability that a mark-to-market loss on a portfolio or a single position exceeds this value, over a given time horizon, is the given probability level. For example, if a portfolio of stock has a one-day 5% VaR of 1 million, there is a 0.05 probability that the portfolio will fall in value by more than 1 million over one day.
Value at Risk | Wikipedia Definition
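That definition can be sketched with a simple historical-simulation calculation. This is a minimal illustration, not the method from the post: the return series and portfolio value below are made up, and the quantile is taken directly from the empirical loss distribution.

```python
# Minimal sketch: one-day 95% VaR by historical simulation.
# The return series and portfolio value are hypothetical.
def historical_var(returns, portfolio_value, confidence=0.95):
    """Loss threshold exceeded with probability (1 - confidence)."""
    losses = sorted(-r * portfolio_value for r in returns)  # losses, ascending
    index = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[index]

daily_returns = [0.01, -0.02, 0.005, -0.035, 0.012, -0.01,
                 0.002, -0.004, 0.018, -0.027, 0.006, -0.015,
                 0.009, -0.001, 0.004, -0.022, 0.011, -0.008,
                 0.003, -0.012]
var_95 = historical_var(daily_returns, portfolio_value=1_000_000)
print(f"One-day 95% VaR: {var_95:,.0f}")
```

With twenty observations the 95% quantile sits at the worst loss in the sample, which is exactly the backward-looking fragility discussed further down.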
Its use within the banking community has been prolific to say the least. This is partly an outcome of Basel II, which encourages loss reserves to be expressed in parametric terms across all risk disciplines: Market Risk, Credit Risk and Operational Risk.

VaR is also a transparent measure of potential risk because it summarizes and aggregates unwanted variant outcomes across all floating values into a single number, without any reference to what is actually being measured.

Positives aside, its implementation in the financial sector hasn't come without major difficulties. Huge budgets have been expended on parametric loss modelling, and in many respects what has been delivered is a cure that, in some instances, is worse than the ailment it is supposed to resolve.

Let's look at five VaR traps so many businesses seem to fall into: 

[1] What does this represent
Perhaps the biggest flaw in VaR is not the measure itself but the decisions the business unit fails to make from it. VaR needs to be thought of as more than simply a ruler of exposure against which a business unit reserves; it is a trigger for forming a strategic response to calamity.


If the VaR increases, if it decreases, or if the shape of the probability distribution function changes, the causal factors for this change not only need to be understood, they should stimulate a business action. If the business response is to stand pat, hold, or wait and see, that choice should be an active decision, not an accidental one.

Really good use of VaR will result in planned responses before catastrophe occurs.

[2] Post or Pre Trade
VaR means different things to different risk people, and the measure of loss itself is often built up from abstract event data, even more so as one moves between the risk disciplines of Market, Credit and Operational Risk.


If the modeller doesn't have a loss experience, the VaR number might be created through the inference of business factors such as Probability of Default combined with Exposure At Default and Loss Given Default. This kind of measure of risk sits on a very different loss horizon to the actualized loss experience of the market risk team. Contextually, the differences need to be clearly understood.
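For what it's worth, the standard build-up from these factors is the expected loss product EL = PD x EAD x LGD. A quick sketch, with illustrative figures that are not from the post:

```python
# Sketch of the standard expected-loss build-up from inferred credit factors.
# The PD, EAD and LGD figures below are purely illustrative.
def expected_loss(pd_, ead, lgd):
    """Expected loss = Probability of Default x Exposure At Default x Loss Given Default."""
    return pd_ * ead * lgd

el = expected_loss(pd_=0.02,        # 2% annual probability of default
                   ead=5_000_000,   # exposure at the moment of default
                   lgd=0.45)        # fraction of exposure not recovered
print(f"Expected loss: {el:,.0f}")
```

Note this is an expectation inferred from credit factors, not an observed loss distribution, which is exactly the contextual gap against a market risk team's mark-to-market history.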

[3] Incompleteness
VaR should not be thought of as the end game even though it is often treated as such. Monitoring the potential loss from a price shock in a portfolio alone won't capture the liquidity cost of exiting these positions unless that cost has been factored into the model.  


This incompleteness can include Counterparty Default, Liquidity Risk and Funding Costs, just to begin with. Incompleteness errors are substantially amplified by leverage when covered positions experience correlated write-downs in collateral lines alongside price erosion on the investment itself.

The simple question we need to ask ourselves is, what is captured in the scope of our VaR model and what is missing?
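One crude way to make that scope question concrete is to bolt an explicit exit cost onto the price-shock VaR. This is a hypothetical illustration, not a method endorsed by the post; the half-spread exit cost is the simplest possible liquidity add-on.

```python
# Sketch: widening VaR scope with a crude liquidity add-on.
# Assumes exiting the full position costs half the bid-ask spread.
# All figures are hypothetical.
def liquidity_adjusted_var(price_var, position_value, bid_ask_spread):
    exit_cost = 0.5 * bid_ask_spread * position_value
    return price_var + exit_cost

lvar = liquidity_adjusted_var(price_var=35_000,
                              position_value=1_000_000,
                              bid_ask_spread=0.004)  # a 40bp spread
print(f"Liquidity-adjusted VaR: {lvar:,.0f}")
```

In a real stress the spread itself widens, so even this add-on understates the exit cost precisely when it matters most.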

[4] Backward Looking
VaR has a tendency to be very backward looking. Monte Carlo or Panjer's recursion algorithms are fine, and they should be used, but they don't resolve cyclical modelling issues. What I am saying is that most markets have cycles of growth, decay or volatility, and VaR is at least one cycle out much of the time.


During periods of low volatility the VaR numbers will drop and so will our reserve. When volatility does occur, it can be so sudden or sharp that we end up with a VaR threshold violation, and our reserve simply doesn't provide a large enough stopgap. If we can, we increase our reserve and pull back on our risk appetite, but these mitigation controls are often enacted too late to be useful or valuable. The next issue that clobbers us is that as market conditions stabilize, the risk team is left modelling in the previous volatility window and generally reserves too much.

A method to address this basis issue of misalignment with market cycles needs to be considered by risk teams, but this necessity is rarely taken on board.
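One common partial remedy, which I should stress is my illustration rather than anything prescribed above, is to weight recent observations more heavily, for example with a RiskMetrics-style exponentially weighted moving average (the decay factor 0.94 is the classic RiskMetrics daily value):

```python
# Sketch: exponentially weighted volatility (RiskMetrics-style, lambda = 0.94).
# Recent returns dominate the estimate, so it reacts to a regime change
# faster than an equal-weight window would.
def ewma_volatility(returns, lam=0.94):
    variance = returns[0] ** 2
    for r in returns[1:]:
        variance = lam * variance + (1 - lam) * r ** 2
    return variance ** 0.5

calm = [0.001] * 50
stressed = calm + [0.03] * 5  # a volatility spike at the end of the window
print(ewma_volatility(calm), ewma_volatility(stressed))
```

This narrows the lag but does not eliminate it; the estimate still trails the cycle, it just trails by less.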

[5] Tail Dependency and Correlation
Now for the nasty modelling issue. VaR is simple; its unique quality of making risk measurement cohesive and straightforward has been the fundamental reason for its wide acceptance. This simplicity is also the main factor in its failings, and unfortunately, if we believe the models are too sophisticated today, I am sorry to say we are going to have to move to the next order of sophistication if we are ever going to solve the Value at Risk conundrum.


Let's be clear here: assuming that a variable, a price variance or a loss value is normally or log-normally distributed is a gross oversight. This is a Gaussian curse, and risk teams should look to improve their curve-fitting approaches. Sadly though, doing so will not be enough. Tail dependency and internal correlation between variables also need to be quantified, and this will require the modelling team to consider statistical methods such as copula theory - more can be found here [The Copula].
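A small simulation can show what the Gaussian assumption misses. This sketch (mine, assuming NumPy is available) compares the chance that two assets land in their worst 1% together under a Gaussian dependence structure versus a Student-t one at the same correlation; the t case co-crashes far more often, which is the tail dependence copula methods are designed to capture.

```python
import numpy as np

# Sketch: joint tail behaviour under Gaussian vs Student-t dependence,
# both with correlation 0.5. The t construction divides correlated
# normals by a shared chi-square mixing variable (the standard recipe).
rng = np.random.default_rng(42)
rho, n, dof = 0.5, 200_000, 3
cov = [[1.0, rho], [rho, 1.0]]

gauss = rng.multivariate_normal([0.0, 0.0], cov, size=n)
chi = rng.chisquare(dof, size=n)
student_t = gauss * np.sqrt(dof / chi)[:, None]

def joint_tail_prob(x):
    """Fraction of draws where BOTH margins fall below their own 1% quantile."""
    q = np.quantile(x, 0.01, axis=0)
    return np.mean((x[:, 0] < q[0]) & (x[:, 1] < q[1]))

print("Gaussian:", joint_tail_prob(gauss))
print("Student-t:", joint_tail_prob(student_t))
```

The Gaussian pair has zero asymptotic tail dependence, so fitting the margins better will not fix this; the dependence structure itself has to change.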

Original notes
The original notes for this post can be enjoyed below and list many more issues with VaR. Handwritten, I know, but sometimes inspiration and insight come to us unexpectedly when we are simply moving through life and all we have handy is a pen and a piece of paper.

The Problems with VaR | Martin Davies

I finish up with the following point for the risk community that uses VaR: we need to stop blaming our tools, accept that they are not always complete and are often laden with flaws, and let common sense prevail when making decisions based on risk metrics. Blaming, say, the Global Financial Crisis on a curve is absurd. A curve has no emotion; it is just a set of numbers, and it is our fault for choosing the wrong set of numbers to model downside reality. Worse, it is also our fault for discounting the errors in our probability tests, and we all know that management may push us to do this at times. So be it, but don't blame a series of centrally clustered points in the universe for our own human error; that is a feeble excuse for failing.

4 comments:

  1. Yes - great Martin. Perhaps add more high-level explanation around tail behavior? Perhaps a blurb about Expected Shortfall (tail loss) which is all the rage but from what I've read - suffers from many of the same challenges that VAR has.

    Regarding your VAR description, you took the Wikipedia def. I recently saw it explained a different way which gave me a laugh: VAR is the equivalent of a guy telling me that there is a 95% chance that only .02 people are expected to die on a flight. Not too bad, I'm not worried. But - I might be concerned about that "5%" - beyond the pale (VAR quantile). So the guy says, well... within that tail end of the curve you have a certain chance of the airplane blowing up. So ETL / ES answers how many on avg will die - specifically in the tail.

  2. Jason,

    Thank you for the positive comments ...

    On your point here "Perhaps add more high-level explanation around tail behavior?". Interesting you say this because I am pulling together a blog on the differences between expected loss and unexpected loss or tail behavior as it is in the next couple of days.

    I took the Wikipedia definition because I didn't want readers to debate a definition I made up but to focus more on the problems with VaR and as we can see there are so many.

  3. Hi Martin, thanks for your post which gives us more insight in possible shortcomings of VaR.

    One question please: is it conceptually OK to state that the Solvency II SCR for risk modules can be compared to VaR at a probability of 99.5%?
