Volatility is key to several financial applications, such as pricing stock options and designing hedging strategies to manage currency risk. And volatility is important in its own right – witness the stories about the role misapplication of the Black-Scholes model played in the financial crisis of 2008.
I won’t dwell on alternative metrics, but focus on a conventional measure of volatility – the standard deviation of day-over-day growth rates. For smaller growth rates, this is close to the standard deviation of the natural log of daily returns, the measure most frequently mentioned in the finance literature.
Squared daily growth rates or squared log returns are the raw material of the volatility calculation, so we can look at them directly to get an idea of what is happening in a series – rather than fussing with annualizing standard deviations over rolling windows.
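As a concrete starting point, here is a minimal sketch in Python of how the growth rates, log returns, and their squares can be computed from a daily price series. The file name usdcad.csv and the column name USDCAD are hypothetical placeholders for whatever data source is at hand.

```python
import numpy as np
import pandas as pd

# Hypothetical input: a daily series of USD/CAD exchange rates indexed by date.
# Replace 'usdcad.csv' and the 'USDCAD' column with your actual data source.
prices = pd.read_csv("usdcad.csv", index_col=0, parse_dates=True)["USDCAD"]

# Day-over-day growth rates and natural-log returns.
growth = prices.pct_change().dropna()        # (P_t - P_{t-1}) / P_{t-1}
log_ret = np.log(prices).diff().dropna()     # ln(P_t / P_{t-1})

# For small growth rates, ln(1 + g) is approximately g,
# so the two squared series are nearly identical.
sq_growth = growth ** 2
sq_logret = log_ret ** 2

# Conventional volatility: standard deviation of daily returns,
# annualized with the square-root-of-time rule (about 252 trading days).
daily_vol = log_ret.std()
annualized_vol = daily_vol * np.sqrt(252)
print(f"daily vol: {daily_vol:.4%}, annualized: {annualized_vol:.2%}")
```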
Looking at squared growth rates for the US dollar/Canadian dollar exchange rate, January 1, 2007 to March 29, 2013, produces the following chart, indicating periods of high volatility.
The chart supports several stylized facts about volatility. First, volatile movements of stock prices and currency rates tend to cluster. Second, periods of higher variance in these time series give rise to fat-tailed distributions.
Clustering. If the variability of the time series is larger at time t, it is likely that the variability at time t+1 will also be larger. The first-order autocorrelation coefficient provides a metric for this. For the 7666 daily observations of the US/Canadian dollar rate from April 1992 to March 2013, the first-order autocorrelation coefficient of the squared growth rates is 0.275 – roughly 24 standard deviations from the zero value expected for an independently and identically distributed series.
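The “24 standard deviations” figure follows because, for an independently and identically distributed series of length N, the sample first-order autocorrelation has a standard error of roughly 1/√N – about 0.0114 for N = 7666, and 0.275/0.0114 ≈ 24. A sketch of the calculation, assuming the squared log returns from the snippet above:

```python
import numpy as np

# sq_logret: squared daily log returns from the previous sketch (assumption).
x = sq_logret.values
n = len(x)

# First-order autocorrelation of the squared returns.
rho1 = np.corrcoef(x[:-1], x[1:])[0, 1]

# Under the null of an i.i.d. series, the sample autocorrelation has
# a standard error of approximately 1 / sqrt(N).
se_iid = 1.0 / np.sqrt(n)

print(f"rho(1) = {rho1:.3f}, i.i.d. std. error = {se_iid:.4f}, "
      f"z-score = {rho1 / se_iid:.1f}")
```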
The same type of calculation can be made for the S&P 500 stock index. For example, Rob Reider writes:
Although the stock market crash on October 19, 1987 is an extreme example, we can see anecdotally that large moves in prices lead to more large moves… Before the stock market crash [of October 19, 1987], the standard deviation of returns was about 1% per day. On October 19, the S&P 500 was down 20%, which, as a 20 standard deviation move, would not be expected to occur in over 4.5 billion years (the age of the earth) if returns were truly normally distributed. But in 4 of the 5 following days, the market moved over 4%, so it appears that volatility increased after the stock market crash, rather than remaining at 1% per day.
Fat-tailed distributions are a concomitant of volatility in returns. For example, the US/Canadian dollar daily log returns, from 1992 to the current date, are not normally distributed, even though their distribution, graphed below, looks like a bell-shaped curve.
This is easily shown by calculating the sample excess kurtosis for this period: the fourth central moment of the daily log returns divided by the square of the second central moment (the variance), minus 3. This works out to 8.31, far greater than the standard error of this statistic, implying a leptokurtic or “fat-tailed” distribution. For a normal distribution, the expected excess kurtosis is zero.
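Here is a minimal sketch of that calculation, again assuming the `log_ret` series from the first snippet; the standard error shown applies under the null hypothesis of normality.

```python
import numpy as np

# log_ret: daily log returns from the first sketch (assumption).
r = log_ret.values
n = len(r)

# Central moments of the daily log returns.
m2 = np.mean((r - r.mean()) ** 2)   # second central moment (variance)
m4 = np.mean((r - r.mean()) ** 4)   # fourth central moment

# Sample excess kurtosis: fourth moment over squared second moment, minus 3.
excess_kurtosis = m4 / m2 ** 2 - 3.0

# Under normality, the standard error of sample excess kurtosis
# is roughly sqrt(24 / N).
se_normal = np.sqrt(24.0 / n)

print(f"excess kurtosis = {excess_kurtosis:.2f}, "
      f"normal-theory std. error = {se_normal:.3f}")
```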
Fat-tailed distributions are too often ignored in financial risk calculations, incidentally. An article in The Actuary comments on the shortcomings of VaR (value-at-risk) metrics, originally developed by JP Morgan, in the context of asset price distributions. Thus, VaR …
measures the maximum loss from a given portfolio position at a certain confidence level over a certain period of time. For example, if a bank’s 10-day 99% VaR is $3m, there is considered to be only a 1% chance that losses will exceed $3m over 10 days. In this role, VaR can be used to help set risk limits for traders’ portfolios. It can also be used to set regulatory capital standards for these.

However, the simplicity of VaR has led to its ubiquitous use. But VaR has a fatal flaw: it is essentially silent about risks in the tail beyond the confidence interval. Even if a trader’s 99% VaR-based limit is $10m, there is nothing to stop them constructing a portfolio that delivers a 1% chance of a $1bn loss. VaR would be blind to that risk.

On 10 May, J P Morgan announced losses totalling $2bn on a portfolio of corporate credit exposures. This took the world, and the management of J P Morgan, by surprise. After all, the 95% VaR on this portfolio in the first quarter of 2012 had been a mere $67m. This VaR measure was revised upwards to $129m on the day of the announcement. So far, the accumulated losses on this portfolio are said to exceed $5bn.
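To make the quoted definition concrete, here is a hedged sketch of a one-day 99% VaR computed by historical simulation, together with the corresponding expected shortfall – the tail measure that VaR ignores. The portfolio value and the reuse of the `log_ret` exchange-rate series as a stand-in for portfolio returns are illustrative assumptions.

```python
import numpy as np

# Assumed inputs: log_ret (daily returns, here the FX series for illustration)
# and a hypothetical portfolio value.
portfolio_value = 10_000_000  # $10m, hypothetical
returns = log_ret.values

# One-day 99% historical-simulation VaR: the loss exceeded on only 1% of days.
var_99 = -np.percentile(returns, 1) * portfolio_value

# Expected shortfall: the average loss on the worst 1% of days,
# i.e. the tail risk that VaR says nothing about.
tail = returns[returns <= np.percentile(returns, 1)]
es_99 = -tail.mean() * portfolio_value

print(f"1-day 99% VaR: ${var_99:,.0f}  |  99% expected shortfall: ${es_99:,.0f}")
```

Under the square-root-of-time rule, a 10-day VaR would be roughly √10 times the one-day figure – but that scaling assumes independently and identically distributed returns, which is exactly what the clustering and fat tails discussed above call into question.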
Can volatility be forecast? That question is raised in a widely cited survey article on forecasting volatility by Poon and Granger.
It’s a good question to ponder, too, since solid success in forecasting the volatility of, say, stock market prices would imply an ability to forecast asset bubbles. And, as previous posts have noted, there is resistance in the conventional economics community to this idea. [Thus, if it were possible to forecast the cadence of asset bubbles, there might be calls for more coordination of financial regulation and monetary policy.]
So at the outset, one would expect that the existing methods of forecasting volatility – historical volatility measurement, moving averages, exponential smoothing, and the alphabet soup of ARCH, GARCH, TGARCH, and STGARCH models – would tend to lag behind actual developments in volatility in real time series. Otherwise, we could claim to predict the rise and fall of asset bubbles.
Note that this is a meta-prediction – a prediction about a prediction, or about the ability to predict.
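As a rough illustration of why these backward-looking methods lag, here is a minimal sketch of an exponentially smoothed volatility estimate in the RiskMetrics style (decay factor 0.94); the `log_ret` series from the first sketch is again an assumed input. Because the estimate at time t is built only from squared returns through time t-1, it can recognize a volatility spike only after the spike has occurred.

```python
import numpy as np
import pandas as pd

# log_ret: daily log returns from the first sketch (assumption).
lam = 0.94  # RiskMetrics decay factor

# EWMA variance recursion: sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2
sq = log_ret ** 2
sigma2 = np.empty(len(sq))
sigma2[0] = sq.iloc[:30].mean()  # seed with an initial sample variance
for t in range(1, len(sq)):
    sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * sq.iloc[t - 1]

ewma_vol = pd.Series(np.sqrt(sigma2), index=sq.index, name="ewma_vol")

# Annualized, for comparison with quoted volatilities.
print((ewma_vol * np.sqrt(252)).tail())
```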
Let’s continue this inquiry soon.