The two tables could then be used to answer questions such as:
- What is the probability of making a loss? 0.12 + 0.12 + 0.16 = 0.40
- What is the probability of making a profit of more than $3,500? 0.08 + 0.12 + 0.06 = 0.26
Value-at-Risk (VaR)
Although financial risk management has been a concern of regulators and financial executives for a long time, Value-at-Risk (VaR) did not emerge as a distinct concept until the late 1980s. The triggering event was the stock market crash of 1987, which was so unlikely, given standard statistical models, that it called the entire basis of quantitative finance into question.
VaR is a widely used measure of the risk of loss on a specific portfolio of financial assets. For a given portfolio, probability, and time horizon, VaR is defined as a threshold value such that the probability that the mark-to-market loss on the portfolio over the given time horizon exceeds this value (assuming normal markets and no trading) is the given probability level. Such information can be used to answer questions such as 'What is the maximum amount that I can expect to lose over the next month with 95%/99% probability?'
For example, large investors, interested in the risk associated with the FT100 index, may have gathered information regarding actual returns for the past 100 trading days. VaR can then be calculated in three different ways:
1. The historical method
This method simply ranks the actual historical returns in order from worst to best, and relies on the assumption that history will repeat itself. With 100 observations, the fifth worst return marks the threshold for the maximum loss at 5% probability, and the single worst return marks the threshold at 1%.
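As an illustrative sketch of the historical method (using NumPy and made-up return data, since the article does not supply the actual FT100 figures):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical daily returns for the last 100 trading days (assumed data).
returns = rng.normal(loc=0.0005, scale=0.025, size=100)

# Rank returns from worst to best; with 100 observations the 5th worst
# return marks the 5% threshold and the single worst marks the 1% threshold.
ranked = np.sort(returns)
var_95 = -ranked[4]   # 5th worst loss -> maximum loss at 5% probability
var_99 = -ranked[0]   # single worst loss -> maximum loss at 1% probability

print(f"95% VaR: {var_95:.2%}, 99% VaR: {var_99:.2%}")
```

Note that no distribution is assumed here: the method reads the thresholds straight off the ranked sample, which is both its appeal and its weakness.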
2. The variance-covariance method
This relies upon the assumption that the index returns are normally distributed, and uses historical data to estimate an expected value and a standard deviation. It is then a straightforward task to identify the worst 5% or 1% as required, using the standard deviation and known confidence intervals of the normal distribution, ie -1.65 and -2.33 standard deviations respectively.
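A minimal sketch of the variance-covariance method, again on assumed sample data; the -1.65 and -2.33 z-values are the ones quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily returns for 100 trading days (assumed data).
returns = rng.normal(loc=0.0005, scale=0.025, size=100)

# Estimate the mean and standard deviation from the historical sample.
mu = returns.mean()
sigma = returns.std(ddof=1)

# Left-tail thresholds under the normality assumption.
var_95 = -(mu - 1.65 * sigma)   # worst 5%
var_99 = -(mu - 2.33 * sigma)   # worst 1%

print(f"95% VaR: {var_95:.2%}, 99% VaR: {var_99:.2%}")
```

Only two numbers (mean and standard deviation) summarise the whole history, which is why this method is so convenient, and also why it fails if returns are not in fact normal.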
3. Monte Carlo simulation
While the historical and variance-covariance methods rely primarily upon historical data, the simulation method develops a model for future returns based on randomly generated trials.
Admittedly, historical data is utilised in identifying possible returns but hypothetical, rather than actual, returns provide the data for the confidence levels.
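A sketch of the Monte Carlo approach, with the drift and volatility parameters assumed for illustration (in practice they would be estimated from the historical data):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.0005, 0.025   # assumed drift and volatility from history
n_trials = 100_000

# Generate hypothetical one-day returns, then read VaR off the
# simulated tail rather than off the actual historical returns.
simulated = rng.normal(mu, sigma, size=n_trials)
var_95 = -np.percentile(simulated, 5)
var_99 = -np.percentile(simulated, 1)

print(f"95% VaR: {var_95:.2%}, 99% VaR: {var_99:.2%}")
```

Here a normal model is used for simplicity, but the strength of simulation is that any model of future returns can be plugged in, at the cost of extra complexity.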
Of these three methods, the variance-covariance method is probably the easiest to apply, as the historical method involves crunching large amounts of historical data and Monte Carlo simulation is the most complex to use.
VaR can also be adjusted for different time periods, since some users may be concerned about daily risk whereas others may be more interested in weekly, monthly, or even annual risk. We can rely on the idea that the standard deviation of returns tends to increase with the square root of time to convert from one time period to another. For example, if we wished to convert a daily standard deviation to a monthly equivalent then the adjustment would be:

σ monthly = σ daily x √T, where T = 20 trading days
For example, assume that after applying the variance-covariance method we estimate that the daily standard deviation of the FT100 index is 2.5%, and we wish to estimate the maximum loss at the 95% and 99% confidence levels for daily, weekly, and monthly periods, assuming five trading days each week and four trading weeks each month:
95% confidence
Daily = -1.65 x 2.5% = -4.125%
Weekly = -1.65 x 2.5% x √5 = -9.22%
Monthly = -1.65 x 2.5% x √20 = -18.45%
99% confidence
Daily = -2.33 x 2.5% = -5.825%
Weekly = -2.33 x 2.5% x √5 = -13.03%
Monthly = -2.33 x 2.5% x √20 = -26.05%
Therefore we could say with 95% confidence that we would not lose more than 9.22% per week, or with 99% confidence that we would not lose more than 26.05% per month.
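The whole table above can be reproduced from the square-root-of-time rule in a few lines (a sketch using the example's 2.5% daily standard deviation and the z-values quoted earlier):

```python
from math import sqrt

sigma_daily = 0.025                      # daily standard deviation from the example
z_scores = {"95%": 1.65, "99%": 2.33}    # left-tail z-values used above
horizons = {"Daily": 1, "Weekly": 5, "Monthly": 20}  # trading days per period

# Scale the daily figure by the square root of the number of trading days.
var_table = {
    (conf, name): z * sigma_daily * sqrt(days)
    for conf, z in z_scores.items()
    for name, days in horizons.items()
}

for (conf, name), v in var_table.items():
    print(f"{conf} {name}: -{v:.2%}")
```

This reproduces, for instance, the weekly 95% figure of -9.22% and the monthly 99% figure of -26.05%.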
On a cautionary note, New York Times reporter Joe Nocera published an extensive piece entitled Risk Mismanagement on 4 January 2009, discussing the role VaR played in the ongoing financial crisis. After interviewing risk managers, the author suggests that VaR was very useful to risk experts, but nevertheless exacerbated the crisis by giving false security to bank executives and regulators. A powerful tool for professional risk managers, VaR is portrayed as both easy to misunderstand, and dangerous when misunderstood.
Conclusion
These two articles have provided an introduction to the topic of risk present in decision making, and the available techniques used to attempt to make appropriate adjustments to the information provided. Adjustments and allowances for risk also appear elsewhere in the ACCA syllabus, such as sensitivity analysis, and risk-adjusted discount rates in investment appraisal decisions where risk is probably at its most obvious. Moreover in the current economic climate, discussion of risk management, stress testing and so on is an everyday occurrence.
Written by a member of the APM examining team
References
- Jorion, P (2006), Value at Risk: The New Benchmark for Managing Financial Risk, 3rd edition, McGraw Hill
- Nocera, J (2009), Risk Mismanagement, New York Times
- Taleb, N (2007), The Black Swan, Random House Publishing