The risks in statistical risk measures

A DNB paper warns that financial market risk models (such as value-at-risk or expected shortfall) are unreliable. Small variations in assumptions cause large differences in risk forecasts. At the small sample sizes commonly used in practice, forecasts are close to random noise. It would take half a century of daily data for estimates to reach their theoretical asymptotic properties.

Danielsson, Jon and Chen Zhou (2016), “Why risk is hard to measure”, DNB Working Paper no. 494, January 2016. http://www.dnb.nl/en/news/dnb-publications/dnb-working-papers-series/dnb-working-papers/working-papers-2016/dnb335907.jsp

Below are excerpts from the paper. Headings, links and italic text have been added.

On how reliance on statistical risk models can increase financial crisis risk, view post here.
On the important difference between volatility and risk, and the consequences of mixing them up, view post here.
On how risk management rules can trigger price distortions and trading opportunities, view summary page here.

The forecasting problem

“Financial [market] risk is usually forecast with sophisticated statistical methods. However, in spite of their prevalence in industry applications and financial regulations, the performance of such methods is poorly understood. This is a concern since minor variations in model assumptions can lead to vastly different risk forecasts for the same portfolio, forecasts that are all equally plausible ex–ante.”

“The results in our study indicate that it is not advisable…to rely solely on point estimates of risk forecasts. It is important to also report the confidence bounds. Furthermore, given the highly asymmetric nature of these bounds, the actual bounds should be reported, rather than reporting the standard error only.”

The small sample problem

“We study whether the estimation of risk measures is robust when considering small – and typical in practical use – sample sizes. Although the asymptotic properties of risk measures can be established using statistical theories, known asymptotic properties of the risk forecast estimators might be very different in typical sample sizes… We need half a century of daily data for the estimators to reach their asymptotic properties, with the uncertainty increasing rapidly with lower sample sizes.”

“As the sample size gets smaller, the estimation uncertainty of both value-at-risk and expected shortfall becomes extremely large. At the smallest samples, often the most commonly used in practice, the uncertainty is so large that the risk forecast is essentially indistinguishable from random noise… It is a concern that vast amounts of resources are allocated based on such flimsy evidence. This is especially problematic in the calculation of banks’ capital requirement for market risk.”
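The small-sample problem can be illustrated with a quick simulation (a sketch of the general point, not the paper's own experiment): repeatedly estimating 99% value-at-risk by historical simulation from heavy-tailed returns shows how slowly the scatter of the estimates shrinks with sample size.

```python
import numpy as np

rng = np.random.default_rng(42)

def hist_var_99(returns):
    """Historical-simulation 99% VaR: the 99th percentile of losses."""
    return np.quantile(-returns, 0.99)

# Student-t returns (3 degrees of freedom) as a stand-in for fat-tailed
# market data; sample sizes of roughly 1, 4 and 50 years of daily data.
spreads = {}
for n in (250, 1_000, 12_500):
    estimates = np.array([hist_var_99(rng.standard_t(df=3, size=n))
                          for _ in range(2_000)])
    spreads[n] = estimates.std() / estimates.mean()
    print(f"n={n:>6}: relative spread of 99% VaR estimates {spreads[n]:.1%}")
```

The relative spread falls only with roughly the square root of the sample size, so estimates from a single year of daily data scatter widely around the true value.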

The method problem

“In the latest Basel III market risk proposals, the Basel committee suggests replacing 99% value-at-risk with 97.5% expected shortfall. Our results indicate that this will lead to less accurate risk forecasts.”

N.B.: Value-at-risk (VaR) estimates a loss threshold that would not be exceeded with a specified (high) probability over a given future period. For example, if the 99% VaR of a position is USD 1 million, then in 99% of all cases the loss would not exceed USD 1 million. Expected shortfall, by contrast, is the estimated average loss if the loss exceeds a specific percentile of the loss distribution, i.e. if it falls within the x% of “worst cases”. Thus, if the 97.5% expected shortfall is USD 1 million, the loss is expected to average USD 1 million whenever it falls within the 2.5% of worst outcomes. Expected shortfall is more sensitive to tail risks.
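The two definitions can be made concrete with a few lines of code (a sketch using simulated P&L, not data from the paper): historical 99% VaR and 97.5% expected shortfall computed directly from a sample of losses.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily P&L in USD for a position; losses as positive numbers
losses = -rng.normal(0, 10_000, size=10_000)

# 99% VaR: the loss threshold exceeded on only 1% of days
var_99 = np.quantile(losses, 0.99)

# 97.5% expected shortfall: the average loss on the 2.5% worst days
threshold = np.quantile(losses, 0.975)
es_97_5 = losses[losses > threshold].mean()

print(f"99%   VaR: USD {var_99:,.0f}")
print(f"97.5% ES : USD {es_97_5:,.0f}")
```

For normally distributed losses the two numbers come out close to each other, which is one reason Basel III pairs 97.5% expected shortfall with 99% VaR; being a tail average rather than a single quantile, expected shortfall reacts more to extreme losses.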

“A common view holds that value-at-risk is…inferior to expected shortfall…First, value-at-risk is not a coherent measure…Second, as a quantile, value-at-risk is unable to capture the risk in the tails beyond the specific probability, while expected shortfall accounts for all tail events. Finally, it is easier for financial institutions to manipulate value-at-risk than expected shortfall…expected shortfall appears increasingly preferred both by practitioners and regulators, most significantly expressed in Basel III.”

“The theoretical superiority of expected shortfall over value-at-risk comes at the cost of higher estimation error…The estimation of expected shortfall requires more steps and more assumptions than the estimation of value-at-risk, giving rise to more estimation uncertainty… expected shortfall is estimated with more uncertainty than value-at-risk, both when estimated each at the same probability levels and also when using the Basel III combination, expected shortfall (97.5%) and value-at-risk (99%).”
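The extra estimation noise in expected shortfall can be seen in a small Monte Carlo sketch (my illustration, not the paper's experiment): at the same 99% probability level, averaging over a handful of tail observations makes the expected shortfall estimate scatter more than the VaR estimate when losses are heavy-tailed.

```python
import numpy as np

rng = np.random.default_rng(7)

def var_es_99(losses):
    """Historical 99% VaR and 99% expected shortfall of a loss sample."""
    q = np.quantile(losses, 0.99)
    return q, losses[losses > q].mean()

# Repeatedly estimate both measures from 500-day samples of
# heavy-tailed (Student-t, 3 df) losses and compare the scatter.
reps = np.array([var_es_99(rng.standard_t(df=3, size=500))
                 for _ in range(5_000)])
var_est, es_est = reps[:, 0], reps[:, 1]

rel = lambda x: x.std() / x.mean()
print(f"relative spread, 99% VaR: {rel(var_est):.1%}")
print(f"relative spread, 99% ES : {rel(es_est):.1%}")
```

With only about five exceedances per 500-day sample, the tail average inherits the full weight of whichever extreme losses happen to land in the sample, so its estimates are noisier than the quantile's.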

N.B.: The simple point is that, since tail risks manifest very rarely, statistical models that seek to estimate them do not have much data to go on.
