Category Archives: ARIMA models

Pvar Models for Forecasting Stock Prices

When I began this blog three years ago, I wanted to deepen my understanding of forecasting techniques – especially those growing up alongside Big Data and machine learning.

I was also encouraged by Malcolm Gladwell’s 10,000 hour idea, which I found credible from past study of mathematical topics. So maybe my performance as a forecaster would improve if I studied everything about the subject.

Little did I suspect I would myself stumble on a major forecasting discovery.

But, as I am wont to quote these days, even a blind pig uncovers a truffle from time to time.

Forecasting Stock Prices

My discovery pertains to forecasting stock prices.

Basically, I have stumbled on a method of developing much more accurate forecasts of high and low stock prices, given the opening price in a period. These periods can be days, groups of days, weeks, months and, based on what I present here, quarters.

Additionally, I have discovered a way to translate these results into much more accurate forecasts of closing prices over long forecast horizons.

I would share the full details, except I need some official acknowledgement for my work (in process) and, of course, my procedures lead to profits, so I hope to recover some of what I have invested in this research.

Having struggled through a maze of ways of doing this, however, I feel comfortable sharing a key feature of my approach – which is that it is based on the spreads between opening prices and the high and low of previous periods. Hence, I call these “Pvar models” for proximity variable models.
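
By way of illustration only – and emphatically not the proprietary Pvar specification described in this post – here is a minimal sketch in R of the kind of proximity-variable regression suggested above. The data frame columns (open, high, low) and the choice of dependent variable are assumptions.

```r
# Illustrative sketch only -- NOT the Pvar model developed in this post.
# It regresses the current period's high on the opening price and the spreads
# between the open and the previous period's high and low ("proximity variables").
# The data frame is assumed to have columns open, high, low, one row per period.
pvar_sketch <- function(df) {
  d <- data.frame(
    high        = df$high[-1],
    open        = df$open[-1],
    spread_high = df$open[-1] - head(df$high, -1),  # open minus previous high
    spread_low  = df$open[-1] - head(df$low,  -1)   # open minus previous low
  )
  lm(high ~ open + spread_high + spread_low, data = d)  # estimated by OLS
}
```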

There is really nothing in the literature like this, so far as I am able to determine – although the discussion of 52 week high investing captures some of the spirit.

S&P 500 Quarterly Forecasts

Let’s look at an example – forecasting quarterly closing prices for the S&P 500, shown in the chart below.

[Chart: S&P 500 quarterly closing prices]

We are all familiar with this series. And I think most of us are worried that after the current runup, there may be another major correction.

In any case, the graph below compares out-of-sample forecasts from ARIMA(1,1,0) and Pvar models. The ARIMA forecasts are estimated by the off-the-shelf automatic forecasting program Forecast Pro. The Pvar models are estimated by ordinary least squares (OLS) regression, using Matlab and Excel spreadsheets.

[Chart: out-of-sample forecasts of the S&P 500 – Pvar versus ARIMA]

The solid red line shows the movement of the S&P 500 from 2005 to just recently. Of course, the big dip in 2008 stands out.

The blue line charts the out-of-sample forecasts of the Pvar model, which are, from visual inspection, clearly superior to the ARIMA forecasts, shown in orange.

And note the meaning of “out-of-sample” here. Parameters of the Pvar and ARIMA models are estimated over historic data which do not include the prices in the period being forecast. So the results are strictly comparable with applying these models today and checking their performance over the next three months.

The following bar chart shows the forecast errors of the Pvar and ARIMA forecasts.

[Chart: forecast errors of the Pvar and ARIMA forecasts]

As the chart shows, the Pvar model forecasts are not always more accurate than the ARIMA forecasts, but they do significantly better at major turning points, like the 2008 recession.

The mean absolute percent errors (MAPE) for the two approaches are 7.6 and 10.2 percent, respectively.
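
For reference, the mean absolute percent error used in this comparison is just the average of the absolute percentage deviations of forecasts from actuals; a one-line R version:

```r
# Mean absolute percent error (MAPE), expressed in percent.
mape <- function(actual, forecast) {
  100 * mean(abs((actual - forecast) / actual))
}
```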

This comparison is intriguing, since Forecast Pro automatically selected an ARIMA(1,1,0) model in each instance of its application to this series. That model runs autoregressions on the first differences of the series, which already challenges, to some extent, the received wisdom that stock prices are random walks. But Pvar poses an even more significant challenge to versions of the efficient market hypothesis, since Pvar models pull variables from the time series to predict the time series – something you are really not supposed to be able to do if markets are, as it were, “efficient.” Furthermore, this price predictability is persistent, and not just a fluke of some special period of market history.

I will have further comments on the scalability of this approach soon. Stay tuned.

Forecasting Holiday Retail Sales

Holiday retail sales are a really “spiky” time series, as illustrated by the following graph (click to enlarge).

[Chart: monthly holiday retail sales, not seasonally adjusted]

These are monthly data from FRED and are not seasonally adjusted.

Following the National Retail Federation (NRF) convention, I define holiday retail sales to exclude retail sales by automobile dealers, gasoline stations and restaurants. The graph above includes all months of the year, but we can again follow the NRF convention and define “sales from the Holiday period” as being November and December sales.

Current Forecasts

The National Retail Federation (NRF) issues its forecast for the Holiday sales period in late October.

This year, it seems they were a tad optimistic, opting for

…sales in November and December (excluding autos, gas and restaurant sales) to increase a healthy 4.1 percent to $616.9 billion, higher than 2013’s actual 3.1 percent increase during that same time frame.

As the news release for this forecast observed, this would make Holiday Season 2014 the first in many years to see more than 4 percent growth over the previous year’s holiday period.

The NRF is still holding to its bet (see https://nrf.com/news/retail-sales-increase-06-percent-november-line-nrf-holiday-forecast), noting that November 2014 sales came in around 3.2 percent over the total for November 2013.

This means that December sales have to grow by about 4.8 percent over December 2013 to meet the overall, two-month 4.1 percent growth target.
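
A quick back-of-the-envelope check of that arithmetic. The November share of the 2013 two-month holiday total is an assumption (roughly 45 percent here); the required December growth depends on it.

```r
# If November grew 3.2 percent and the two-month total must grow 4.1 percent,
# the required December growth depends on November's share of the 2013 total.
nov_share  <- 0.45    # assumed November share of the Nov-Dec 2013 total
nov_growth <- 0.032
target     <- 0.041
dec_growth <- ((1 + target) - (1 + nov_growth) * nov_share) / (1 - nov_share) - 1
round(100 * dec_growth, 1)   # comes out to roughly 4.8 percent
```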

You don’t get to this number by applying univariate automatic forecasting software. Forecast Pro, for example, suggests overall year-over-year growth this holiday season will be more like 3.3 percent, or a little lower than the 2013 growth of 3.7 percent.

Clearly, the argument for higher growth is the extra cash in consumer pockets from lower gas prices, as well as the strengthening employment outlook.

The 4.1 percent growth, incidentally, is within the 97.5 percent confidence interval for the Forecast Pro forecast, shown in the following chart.

[Chart: Forecast Pro forecast of holiday retail sales, with confidence limits]

This forecast follows from a Box-Jenkins model with the parameters –

ARIMA(1, 1, 3)*(0, 1, 2)

In other words, Forecast Pro differences the “Holiday Sales” Retail Series and finds moving average and autoregressive terms, as well as seasonality. For a crib on ARIMA modeling and the above notation, a Duke University site is good.
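
A model with the same structure can be specified in R with the forecast package, although the exact orders Forecast Pro selects are its own. Here ‘holiday_sales’ is assumed to be a monthly (frequency 12) ts object of the NRF-style retail series.

```r
# Fit the structure reported above -- ARIMA(1,1,3) with a seasonal (0,1,2) part --
# to a monthly retail series and forecast the next two months.
library(forecast)
fit <- Arima(holiday_sales, order = c(1, 1, 3), seasonal = c(0, 1, 2))
forecast(fit, h = 2)
```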

I guess we will see which is right – the NRF or Forecast Pro forecast.

Components of US Retail Sales

The following graphic shows the composition of total US retail sales, and the relative sizes of the main components.

[Chart: composition of total US retail sales by component]

Retail and food service sales totaled around $5 trillion in 2012. Taking out motor vehicle and parts dealers, gas stations, and food services and drinking places considerably reduces the size of the relevant Holiday retail time series.

Forecasting Issues and Opportunities

I have not yet done the exercise, but it would be interesting to forecast the individual series in the above pie chart, and compare the sum of those forecasts with a forecast of the total.

For example, if some of the component series are best forecast with exponential smoothing, while others are best forecast with Box-Jenkins time series models, aggregation could be interesting.
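
Here is a rough sketch of how that exercise could be set up in R, assuming ‘components’ is a list of monthly ts objects for the pie chart categories and ‘total_sales’ is their sum.

```r
# Bottom-up versus direct forecasting: forecast each component with its own
# automatically selected exponential smoothing model, sum the results, and
# compare with a single model fit to the aggregate.
library(forecast)
h <- 12
component_fc <- sapply(components, function(y) forecast(ets(y), h = h)$mean)
bottom_up    <- rowSums(component_fc)              # sum of the component forecasts
direct       <- forecast(ets(total_sales), h = h)$mean
cbind(bottom_up = bottom_up, direct = as.numeric(direct))
```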

Of course, in 2007-09, application of univariate methods would have performed poorly. What we cry out for here is a multivariate model, perhaps based on the Kalman filter, which specifies leading indicators. That way, we could get one or two month ahead forecasts without having to forecast the drivers or explanatory variables.

In any case, barring unforeseen catastrophes, this Holiday Season should show comfortable growth for retailers, especially online retail (more on that in a subsequent post.)

Heading picture from New York Times

Cycles -1

I’d like to focus on cycles in business and economic forecasting in the next few posts.

The Business Cycle

“Cycles” – in connection with business and economic time series – evoke the so-called business cycle.

Immediately after World War II, Burns and Mitchell offered the following characterization –

Business cycles are a type of fluctuation found in the aggregate economic activity of nations that organize their work mainly in business enterprises: a cycle consists of expansions occurring at about the same time in many economic activities, followed by similarly general recessions, contractions, and revivals which merge into the expansion phase of the next cycle

Earlier, several types of business and economic cycles were hypothesized, based on their average duration. These included the 3 to 4 year Kitchin inventory investment cycle, a 7 to 11 year Juglar cycle associated with investment in machines, the 15 to 25 year Kuznets cycle, and the controversial Kondratieff cycle of from 48 to 60 years.

Industry Cycles

I have looked at industry cycles relating to movements of sales and prices in semiconductor and computer markets. While patterns may be changing, there is clear evidence of semi-regular pulses of activity in semiconductors and related markets. These stochastic cycles probably are connected with Moore’s Law and the continuing thrust of innovation and new product development.

Methods

Spectral analysis, VAR modeling, and standard autoregressive analysis are tools for developing evidence for time series cycles. STAMP, now part of the Oxmetrics suite of software, fits cycles with time-varying parameters.

Sometimes one hears of estimation moving from the time domain into the frequency domain. Time series, as normally graphed with time on the horizontal axis, are in the “time domain.” This is where VAR and autoregressive models operate. The frequency domain is where we get indications of the periodicity of cycles and semi-cycles in a time series.

Cycles as Artifacts

There is something roughly analogous to spurious correlation in regression analysis in the identification of cyclical phenomena in time series. Eugen Slutsky, a Russian mathematical economist and statistician, wrote a famous paper on how moving averages of random numbers can create the illusion of cycles. Thus, if we add or average together elements of a time series in a moving window, it is easy to generate apparently cyclical phenomena. This can be demonstrated with the digits of the irrational number π, for example, since the sequence of digits 0 through 9 in its expansion is roughly random.
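
The Slutsky effect is easy to demonstrate in a few lines of R: take pure white noise, apply a moving average, and apparently cyclical waves emerge.

```r
# Moving averages of random numbers can look convincingly cyclical (Slutsky effect).
set.seed(123)
x  <- rnorm(500)                                    # white noise
ma <- stats::filter(x, rep(1 / 12, 12), sides = 1)  # trailing 12-term moving average
plot(ma, type = "l", main = "Moving average of pure noise")
```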

Significance

Cycles in business have a sort of reassuring effect, it seems to me. And, of course, we are all very used to any number of periodic phenomena, ranging from the alternation of night and day, the phases of the moon, and the tides, to the myriad of biological cycles.

As a paradigm, however, they probably used to be more important in business and economic circles than they are today. There is perhaps one exception, and that is in rapidly changing high tech fields, of which IT (information technology) is still in many respects a subcategory.

I’m looking forward to exploring some estimations, putting together some quantitative materials on this.

Wrap on Exponential Smoothing

Here are some notes on essential features of exponential smoothing.

  1. Name. Exponential smoothing (ES) algorithms create exponentially weighted sums of past values to produce the next (and subsequent period) forecasts. So, in simple exponential smoothing, the recursion formula is L_t = αX_t + (1 − α)L_{t−1}, where α is the smoothing constant constrained to the interval [0,1], X_t is the value of the time series to be forecast in period t, and L_t is the (unobserved) level of the series at period t. Substituting the similar expression for L_{t−1}, we get L_t = αX_t + (1 − α)(αX_{t−1} + (1 − α)L_{t−2}) = αX_t + α(1 − α)X_{t−1} + (1 − α)²L_{t−2}, and so forth back to L_1. This means that more recent values of the time series X are weighted more heavily than values at more distant times in the past (see the short code sketch after this list). Incidentally, the initial level L_1 is not strongly determined, but is established by one ad hoc means or another – often by keying off the initial values of the X series in some manner. In state space formulations, the initial values of the level, trend, and seasonal effects can be included in the list of parameters to be established by maximum likelihood estimation.
  2. Types of Exponential Smoothing Models. ES pivots on a decomposition of time series into level, trend, and seasonal effects. Altogether, there are fifteen ES methods. Each model incorporates a level, with the differences coming in whether trend and seasonal components or effects exist and whether they are additive or multiplicative; also whether they are damped. In addition to simple exponential smoothing, Holt or two parameter exponential smoothing is another commonly applied model. There are two recursion equations, one for the level L_t and another for the trend T_t, as in the additive formulation, L_t = αX_t + (1 − α)(L_{t−1} + T_{t−1}) and T_t = β(L_t − L_{t−1}) + (1 − β)T_{t−1}. Here, there are now two smoothing parameters, α and β, each constrained to be in the closed interval [0,1]. Winters or three parameter exponential smoothing, which incorporates seasonal effects, is another popular ES model.
  3. Estimation of the Smoothing Parameters. The original method of estimating the smoothing parameters was to guess their values, following guidelines like “if the smoothing parameter is near 1, past values will be discounted more heavily” and so forth. Thus, if the time series to be forecast was very erratic or variable, a value of the smoothing parameter closer to zero might be selected, to achieve a longer period average. The next step was to set up the sum of the squared differences between the within-sample predictions and the actual values and minimize it. Note that the predicted value of X_{t+1} in the Holt or two parameter additive case is L_t + T_t, so this involves minimizing the expression Σ_t (X_{t+1} − (L_t + T_t))². Currently, the most advanced method of estimating the value of the smoothing parameters is to express the model equations in state space form and utilize maximum likelihood estimation. It’s interesting, in this regard, that the error correction versions of the ES recursion equations are a bridge to this approach, since the error correction formulation is found at the very beginnings of the technique. Advantages of using the state space formulation and maximum likelihood estimation include (a) the ability to estimate confidence intervals for point forecasts, and (b) the capability of extending ES methods to nonlinear models.
  4. Comparison with Box-Jenkins or ARIMA models. ES began as a purely applied method developed for the US Navy, and for a long time was considered an ad hoc procedure. It produced forecasts, but no confidence intervals. In fact, statistical considerations did not enter into the estimation of the smoothing parameters at all, it seemed. That perspective has now changed, and the question is not whether ES has statistical foundations – state space models seem to have solved that. Instead, the tricky issue is to delineate the overlap and differences between ES and ARIMA models. For example, Gardner makes the statement that all linear exponential smoothing methods have equivalent ARIMA models. Hyndman points out that the state space formulation of ES models opens the way for expressing nonlinear time series – a step that goes beyond what is possible in ARIMA modeling.
  5. The Importance of Random Walks. The random walk is a forecasting benchmark. In an early paper, Muth showed that a simple exponential smoothing model provided optimal forecasts for a random walk. The optimal forecast for a simple random walk is the current period value. Things get more complicated when there is an error associated with the latent variable (the level). In that case, the smoothing parameter determines how much of the recent past is allowed to affect the forecast for the next period value.
  6. Random Walks With Drift. A random walk with drift, for which a two parameter ES model can be optimal, is an important form insofar as many business and economic time series appear to be random walks with drift. Thus, first differencing removes the trend, leaving ideally white noise. A huge amount of ink has been spilled in econometric investigations of “unit roots” – essentially exploring whether random walks and random walks with drift are pretty much the whole story when it comes to major economic and business time series.
  7. Advantages of ES. ES is relatively robust, compared with ARIMA models, which are sensitive to mis-specification. Another advantage of ES is that ES forecasts can be up and running with only a few historic observations. This comment applies to estimation of the level and possibly the trend, but does not apply in the same degree to seasonal effects, which usually require more data to establish. There are a number of references which establish the competitive accuracy of ES forecasts in a variety of contexts.
  8. Advanced Applications. The most advanced application of ES I have seen is the research paper by Hyndman et al. relating to bagging exponential smoothing forecasts.
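
As promised in item 1, here is a minimal sketch of the simple exponential smoothing recursion, with the smoothing constant and the initialization of the level treated as assumptions; the state space version in the forecast package is noted for comparison.

```r
# Simple exponential smoothing by direct recursion:
# L_t = alpha * X_t + (1 - alpha) * L_{t-1}
ses_recursion <- function(x, alpha = 0.3) {
  level <- numeric(length(x))
  level[1] <- x[1]                     # ad hoc initialization keyed off the first value
  for (t in 2:length(x)) {
    level[t] <- alpha * x[t] + (1 - alpha) * level[t - 1]
  }
  level[length(x)]                     # the forecast for period t + 1 (and beyond)
}

# The state space equivalent, with alpha and the initial level estimated by
# maximum likelihood (ETS(A,N,N) in the Hyndman taxonomy):
# library(forecast)
# fit <- ets(x, model = "ANN")
# forecast(fit, h = 1)
```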

The bottom line is that anybody interested in, and claiming competency in, business forecasting should spend some time studying the various types of exponential smoothing and the various means of estimating their parameters.

For some reason, exponential smoothing reaches deep into the actual processes generating the data and consistently produces valuable insights into outcomes.

More Blackbox Analysis – ARIMA Modeling in R

Automatic forecasting programs are seductive. They streamline analysis, especially with ARIMA (autoregressive integrated moving average) models. You have to know some basics – such as what the notation ARIMA(2,1,1) or ARIMA(p,d,q) means. But you can more or less sidestep the elaborate algebra – the higher reaches of equations written in backward shift operators – in favor of looking at results. Does the automatic ARIMA model selection predict out-of-sample, for example?

I have been exploring the Hyndman R Forecast package – and other contributors, such as George Athanasopoulos, Slava Razbash, Drew Schmidt, Zhenyu Zhou, Yousaf Khan, Christoph Bergmeir, and Earo Wang, should be mentioned.

A 76 page document lists the routines in Forecast, which you can download as a PDF file.

This post is about the routine auto.arima(.) in the Forecast package. This makes volatility modeling – a place where Box Jenkins or ARIMA modeling is relatively unchallenged – easier. The auto.arima(.) routine also encourages experimentation, and highlights the sharp limitations of volatility modeling in a way that, to my way of thinking, is not at all apparent from the extensive and highly mathematical literature on this topic.

Daily Gold Prices

I grabbed some data from FRED – the Gold Fixing Price set at 10:30 A.M. (London time) in the London Bullion Market, quoted in U.S. dollars.

[Chart: daily Gold Fixing Price, 10:30 A.M. London time (FRED series GOLDAMGBD228NLBM)]

Now the price series shown in the graph above is a random walk, according to auto.arima(.).

In other words, the routine indicates that the optimal model is ARIMA(0,1,0), which is to say that after differencing the price series once, the program suggests the series reduces to a series of independent random values. The automatic exponential smoothing routine in Forecast is ets(.). Running this confirms that simple exponential smoothing, with a smoothing parameter close to 1, is the optimal model – again, consistent with a random walk.
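
For readers who want to reproduce the exercise, a sketch of the workflow follows; it assumes the FRED series is pulled with the quantmod package (any other import route works just as well), and trims the series only to keep the automatic routines quick.

```r
# Pull the daily London gold fixing price from FRED and run the automatic routines.
# The series ID is the one shown in the chart above; availability on FRED is assumed.
library(quantmod)
library(forecast)

getSymbols("GOLDAMGBD228NLBM", src = "FRED")
gold <- as.numeric(na.omit(GOLDAMGBD228NLBM))
gold <- tail(gold, 1500)     # trim to recent observations to keep estimation quick

auto.arima(gold)   # the post reports ARIMA(0,1,0), i.e. a random walk
ets(gold)          # simple exponential smoothing with a smoothing parameter near 1
```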

Here’s a graph of these first differences.

[Chart: first differences of the daily gold price series]

But wait, there is a clustering of volatility of these first differences, which can be accentuated if we square these values, producing the following graph.

[Chart: squared first differences of the daily gold price series]

Now in a more or less textbook example, auto.arima(.) develops the following ARIMA model for this series

[Output: ARIMA model selected by auto.arima(.) for the squared first differences]

Thus, this estimate of the volatility of the first differences of gold price is modeled as a first order autoregressive process with two moving average terms.
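
Continuing the sketch from earlier in the post, the same kind of result can be reproduced by handing the squared first differences to auto.arima(.); ‘gold’ is the daily price vector from the snippet above.

```r
# Model the volatility proxy: squared first differences of the daily gold price.
dgold <- diff(gold)
vol   <- dgold^2
fit   <- auto.arima(vol)   # the post reports one autoregressive and two moving average terms
summary(fit)
```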

Here is the plot of the fitted values.

[Chart: fitted values from the ARIMA model for the squared first differences]

Nice.

But of course, we are interested in forecasting, and the results here are somewhat more disappointing.

Basically, this type of model makes a horizontal line prediction at a certain level, which is higher when the past values have been higher.

This is what people in quantitative finance call “persistence” but of course sometimes new things happen, and then these types of models do not do well.

From my research on the volatility literature, it seems that short period forecasts are better than longer period forecasts. Ideally, you update your volatility model daily or at even higher frequencies, and your one or two period ahead forecasts (minutes, hours, a day) are likely to be more accurate.

Incidentally, exponential smoothing in this context appears to be a total fail, again suggesting this series is a simple random walk.

Recapitulation

There is more here than meets the eye.

First, the auto.arima(.) routine in the Hyndman R Forecast package does a competent job of modeling the clustering of volatility in the first differences of the gold price series here. But, at the same time, it highlights a methodological point. The gold price series really has nonlinear aspects that are not adequately captured by a purely linear model. So, as in many approximations, the assumption of linearity gets us some part of the way, but deeper analysis indicates the existence of nonlinearities. Kind of interesting.

Of course, I have not told you about the notation ARIMA(p,d,q). Well, p stands for the order of the autoregressive terms in the equation, q stands for the order of the moving average terms, and d indicates the number of times the series is differenced to reduce it to a stationary time series. Take a look at Forecasting: principles and practice – the free forecasting text of Hyndman and Athanasopoulos – in the chapter on ARIMA modeling for more details.

Incidentally, I think it is great that Hyndman and some of his collaborators are providing an open source, indeed free, forecasting package with automatic forecasting capabilities, along with a high quality and, again, free textbook on forecasting to back it up. Eventually, some of these techniques might get dispersed into the general social environment, potentially raising the level of some discussions and thinking about our common future.

And I guess also I have to say that, ultimately, you need to learn the underlying theory and struggle with the algebra some. It can improve one’s ability to model these series.

More on Automatic Forecasting Packages – Autobox Gold Price Forecasts

Yesterday, my post discussed the statistical programming language R and Rob Hyndman’s automatic forecasting package, written in R – facts about this program, how to download it, and an application to gold prices.

In passing, I said I liked Hyndman’s disclosure of his methods in his R package and “contrasted” that with leading competitors in the automatic forecasting market space – notably Forecast Pro and Autobox.

This roused Tom Reilly, currently Senior Vice-President and CEO of Automatic Forecast Systems – the company behind Autobox.

[Photo: Tom Reilly]

Reilly, shown above, wrote –

You say that Autobox doesn’t disclose its methods.  I think that this statement is unfair to Autobox.  SAS tried this (Mike Gilliland) on the cover of his book showing something purporting to a black box.  We are a white box.  I just downloaded the GOLD prices and recreated the problem and ran it. If you open details.htm it walks you through all the steps of the modeling process.  Take a look and let me know your thoughts.  Much appreciated!

AutoBox Gold Price Forecast

First, disregarding the issue of transparency for a moment, let’s look at a comparison of forecasts for this monthly gold price series (London PM fix).

A picture tells the story (click to enlarge).

[Chart: Autobox and Forecast Pro forecasts of monthly gold prices, compared with actuals]

So, for this data, 2007 to early 2011, Autobox dominates. That is, both sets of forecasts come in below the respective actual monthly average gold prices, and since the forecast paths are essentially linear, the method that is less accurate in one month is less accurate over the entire forecast horizon.

I guess this does not surprise me. Autobox has been a serious contender in the M-competitions, for example, usually running just behind or perhaps just ahead of Forecast Pro, depending on the accuracy metric and forecast horizon. (For a history of these “accuracy contests” see Makridakis and Hibon’s article on the M3 competition.)

And, of course, this is just one of many possible forecasts that can be developed with this time series, taking off from various ending points in the historic record.

The Issue of Transparency

In connection with all this, I also talked with Dave Reilly, a founding principal of Autobox, shown below.

[Photo: Dave Reilly]

Among other things, we went over the “printout” Tom Reilly sent, which details the steps in the estimation of a final time series model to predict these gold prices.

A blog post on the Autobox site is especially pertinent, called Build or Make your own ARIMA forecasting model? This discussion contains two flow charts which describe the process of building a time series model, reproduced here by kind permission.

The first provides a plain vanilla description of Box-Jenkins modeling.

[Flowchart: plain vanilla Box-Jenkins modeling process]

The second flowchart adds steps reflecting contributions by Tsay, Tiao, Bell, Reilly, and Gregory Chow (i.e., the Chow test).

[Flowchart: Box-Jenkins process with the additional steps]

Both start with plotting the time series to be analyzed and calculating the autocorrelation and partial autocorrelation functions.

But then additional boxes are added to account for and remove “deterministic” elements in the time series and to check for the constancy of parameters over the sample.

The analysis run Tom Reilly sent suggests to me that “deterministic” elements can mean outliers.

Dave Reilly made an interesting point about outliers. He suggested that the true autocorrelation structure can be masked or dampened in the presence of outliers. So the tactic of specifying an intervention variable in the various trial models can facilitate identification of autoregressive lags which otherwise might appear to be statistically insignificant.

Really, the point of Autobox model development is to “create an error process free of structure.” That’s a Dave Reilly quote.

So, bottom line, Autobox’s general methods are well-documented. There is no problem of transparency with respect to the steps in the recommended analysis in the program. True, behind the scenes, comparisons are being made and alternatives are being rejected which do not make it to the printout of results. But you can argue that any commercial software has to keep some kernel of its processes proprietary.

I expect to be writing more about Autobox. It has a good track record in various forecasting competitions and currently has a management team that actively solicits forecasting challenges.

Automatic Forecasting Programs – the Hyndman Forecast Package for R

I finally started learning R.

It’s a vector and matrix-based statistical programming language, a lot like MathWorks Matlab and GAUSS. The great thing is that it is free. I have friends and colleagues who swear by it, so it was on my to-do list.

The more immediate motivation, however, was my interest in Rob Hyndman’s automatic time series forecast package for R, described rather elegantly in an article in the Journal of Statistical Software.

This is worth looking over, even if you don’t have immediate access to R.

Hyndman and Exponential Smoothing

Hyndman, along with several others, put the final touches on a classification of exponential smoothing models, based on the state space approach. This facilitates establishing confidence intervals for exponential smoothing forecasts, for one thing, and provides further insight into the modeling options.

There are, for example, 15 widely acknowledged exponential smoothing methods, based on whether trend and seasonal components, if present, are additive or multiplicative, and also whether any trend is damped.

[Table: the 15 exponential smoothing methods, by trend and seasonal type]

When either additive or multiplicative error processes are added to these models in a state space framework, the number of modeling possibilities rises from 15 to 30.

One thing the Hyndman R package does is run all the relevant models from this superset on any time series provided by the user, picking a recommended model for forecasting with the Akaike information criterion.

Hyndman and Khandakar comment,

Forecast accuracy measures such as mean squared error (MSE) can be used for selecting a model for a given set of data, provided the errors are computed from data in a hold-out set and not from the same data as were used for model estimation. However, there are often too few out-of-sample errors to draw reliable conclusions. Consequently, a penalized method based on the in-sample fit is usually better. One such approach uses a penalized likelihood such as Akaike’s Information Criterion… We select the model that minimizes the AIC amongst all of the models that are appropriate for the data.

Interestingly,

The AIC also provides a method for selecting between the additive and multiplicative error models. The point forecasts from the two models are identical so that standard forecast accuracy measures such as the MSE or mean absolute percentage error (MAPE) are unable to select between the error types. The AIC is able to select between the error types because it is based on likelihood rather than one-step forecasts.

So the automatic forecasting algorithm involves the following steps:

1. For each series, apply all models that are appropriate, optimizing the parameters (both smoothing parameters and the initial state variable) of the model in each case.

2. Select the best of the models according to the AIC.

3. Produce point forecasts using the best model (with optimized parameters) for as many steps ahead as required.

4. Obtain prediction intervals for the best model either using the analytical results of Hyndman et al. (2005b), or by simulating future sample paths.
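
These steps are what the ets(.) routine in the package carries out. A minimal run, using R’s built-in AirPassengers series purely as an example:

```r
# Automatic exponential smoothing: fit the candidate state space models,
# select by AIC, and produce point forecasts with prediction intervals.
library(forecast)
fit <- ets(AirPassengers)     # model selection by AIC over the admissible ETS models
summary(fit)                  # reports the chosen model and smoothing parameters
fc  <- forecast(fit, h = 12)  # twelve months ahead, with prediction intervals
plot(fc)
```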

This package also includes an automatic forecast module for ARIMA time series modeling.

One thing I like about Hyndman’s approach is his disclosure of methods. This, of course, is in contrast with leading competitors in the automatic forecasting market space – notably Forecast Pro and Autobox.

Certainly, go to Rob J Hyndman’s blog and website to look over the talk (with slides) Automatic time series forecasting. Hyndman’s blog, mentioned previously in the post on bagging time series, is a must-read for statisticians and data analysts.

Quick Implementation of the Hyndman R Package and a Test

But what about using this package?

Well, first you have to install R on your computer. This is pretty straightforward, with the latest versions of the program available at the CRAN site. I downloaded it to a machine using Windows 8 as the OS. I downloaded both the 32 and 64-bit versions, just to cover my bases.

Then, it turns out that, when you launch R, a simple menu comes up with seven options, and a set of icons underneath. Below that there is the work area.

Go to the “Packages” menu option. Scroll down until you come to “forecast” and load that.

That’s the Hyndman Forecast Package for R.

So now you are ready to go, but, of course, you need to learn a little bit of R.

You can learn a lot by implementing code from the documentation for the Hyndman R package. The version corresponding to the R file that can currently be downloaded is at

http://cran.r-project.org/web/packages/forecast/forecast.pdf

Here are some general tutorials:

http://cran.r-project.org/doc/contrib/Verzani-SimpleR.pdf

http://cyclismo.org/tutorial/R/

http://cran.r-project.org/doc/manuals/R-intro.html#Simple-manipulations-numbers-and-vectors

http://www.statmethods.net/

And here is a discussion of how to import data into R and then convert it to a time series – which you will need to do for the Hyndman package.
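
The key step is getting the data into a ts object with the right start date and frequency. A minimal example, with the file name and column name assumed:

```r
# Convert imported monthly data into a time series object for the forecast package.
prices <- read.csv("gold_monthly.csv")   # assumed file with a 'price' column
gold_monthly <- ts(prices$price, start = c(2007, 1), frequency = 12)
```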

I used the exponential smoothing module to forecast monthly averages from the London gold PM fix price series, comparing the results with a Forecast Pro run. I used data from 2007 to February 2011 as a training sample, and produced forecasts for the next twelve months with both programs.

The Hyndman R package and exponential smoothing module outperformed Forecast Pro in this instance, as the following chart shows.

[Chart: R forecast package and Forecast Pro forecasts of monthly gold prices, compared with actuals]
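
A sketch of the kind of out-of-sample comparison shown in the chart, using the gold_monthly object from the import snippet above and assuming the imported series extends at least through February 2012.

```r
# Out-of-sample test: train through February 2011, forecast the next twelve months.
library(forecast)
train <- window(gold_monthly, end = c(2011, 2))
test  <- window(gold_monthly, start = c(2011, 3), end = c(2012, 2))
fit   <- ets(train)
fc    <- forecast(fit, h = 12)
accuracy(fc, test)    # RMSE, MAPE, etc. on both the training and hold-out samples
```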

Another positive about the R package is that it is possible to write code to produce a whole series of such out-of-sample forecasts, to get an idea of how the module performs with a time series under different regimes, e.g. recession and business recovery.

I’m still cobbling together the knowledge to put programs like that together and save the results appropriately.

But, my introduction to this automatic forecasting package and to R has been positive thus far.

Interest Rates – 3

Can interest rates be nonstationary?

This seems like a strange question, since interest rates are bounded, except in circumstances, perhaps, of total economic collapse.

“Standard” nonstationary processes, by contrast, can increase or decrease without limit, as can conventional random walks.

But, be careful. It’s mathematically possible to define and study random walks with reflecting barriers – which, when they reach a maximum or minimum, “bounce” back from the barrier.

This is more than esoteric, since the 30 year fixed mortgage rate monthly averages series discussed in the previous post has a curious property. It can be differenced many times and still display first order autocorrelation in the resulting series.

This contrasts with the 10 year fixed maturity Treasury bond rates (also monthly averages). After first differencing this Treasury bond series, the resulting residuals do not show statistically significant first order autocorrelation.

Here a stationary stochastic process is one in which the probability distribution of the outcomes does not shift with time, so the conditional mean and conditional variance are, in the strict case, constant. A classic example is white noise, where each element can be viewed as an independent draw from a Gaussian distribution with zero mean and constant variance.

30 Year Fixed Mortgage Monthly Averages – a Nonstationary Time Series?

Here are some autocorrelation functions (ACF’s) and partial autocorrelation functions (PACF’s) of the 30 year fixed mortgage monthly averages from April 1971 to January 2014, first differences of this series, and second differences of this series – altogether six charts produced by MATLAB’s plot routines.

Data for this and the following series are downloaded from the St. Louis Fed FRED site.

[Charts: ACF and PACF of the 30 year fixed mortgage rate levels]

Here the PACF appears to cut off after 4 periods, but maybe not quite, since values at some longer lags touch the statistical significance boundary.

[Charts: ACF and PACF of the first differences]

This seems more satisfactory, since there is only one major spike in the ACF and 2-3 initial spikes in the PACF. Again, however, values at lags far out on the horizontal axis appear to touch the boundary of statistical significance.

[Charts: ACF and PACF of the second differences]

Here are the ACF and PACF of the “difference of the first difference,” or the second difference, if you like. The spike at lag 2 in both the ACF and PACF is intriguing and, for me, difficult to interpret.

The data series includes 514 values, so we are not dealing with a small sample in conventional terms.

I also checked for seasonal variation – either additive or multiplicative seasonal components or factors. After taking steps to remove this type of variation, if it exists, the same pattern of repeated significance of autocorrelations of differences and higher order differences persists.
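
The same diagnostics can be run in R (the charts above came from MATLAB). Here ‘mort30’ is assumed to be the monthly 30 year fixed mortgage averages series downloaded from FRED, stored as a ts object.

```r
# ACF and PACF of the levels, first differences, and second differences.
library(forecast)
tsdisplay(mort30)                          # levels
tsdisplay(diff(mort30))                    # first differences
tsdisplay(diff(mort30, differences = 2))   # second differences
```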

Forecast Pro, a good business workhorse for automatic forecasting, selects ARIMA(0,1,1) as the optimal forecast model for the 30 year fixed mortgage monthly averages. In other words, Forecast Pro glosses over the fact that the residuals from an ARIMA(0,1,1) setup still contain significant autocorrelation.

Here is a sample of the output (click to enlarge)

[Figure: Forecast Pro output for the 30 year fixed mortgage monthly averages]
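
That observation is easy to check in R: fit the same ARIMA(0,1,1) to the mort30 series from the snippet above and apply a Ljung-Box test to the residuals; a significant statistic indicates autocorrelation the model has not captured.

```r
# Fit ARIMA(0,1,1) and test the residuals for leftover autocorrelation.
fit <- Arima(mort30, order = c(0, 1, 1))
Box.test(residuals(fit), lag = 12, type = "Ljung-Box")
```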

10 Year Treasury Bonds Constant Maturity

The situation is quite different for 10 year Treasury Bonds monthly averages, where the downloaded series starts April 1953 and, again, ends January 2014.

Here is the ordinary least squares (OLS) regression of the first order autocorrelation.

[Figure: OLS regression output for the 10 year Treasury constant maturity series]

Here the R² or coefficient of determination is much lower than for the 30 year fixed mortgage monthly averages, but the first order lagged rate is highly significant statistically.

On the other hand, the residuals of this regression do not exhibit a high degree of first order autocorrelation, falling below the 80 percent significance level.
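
A sketch of that regression, with ‘tb10’ assumed to be the vector of monthly 10 year constant maturity Treasury rates:

```r
# Regress the rate on its own first lag, then inspect the residual autocorrelation.
d   <- data.frame(rate = tb10[-1], rate_lag = head(tb10, -1))
reg <- lm(rate ~ rate_lag, data = d)
summary(reg)                        # lagged rate highly significant
acf(residuals(reg), lag.max = 12)   # little remaining first order autocorrelation
```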

What Does This Mean?

The closest I have come to formulating an explanation for this weird difference between these two “interest rates” is the discussion in a paper from 2002 –

On Mean Reversion in Real Interest Rates: An Application of Threshold Cointegration

The authors of this research paper from the Institute for Advanced Studies in Vienna acknowledge findings that some interest rates may be nonstationary, at least over some periods of time. Their solution is a nonlinear time series approach, but they highlight several of the more exotic statistical features of interest rates in passing – such as evidence of non-normal distributions, excess kurtosis, conditional heteroskedasticity, and long memory.

In any case, I wonder whether the 30 year fixed mortgage monthly averages might be suitable for some type of boosting model working on residuals and residuals of residuals.

I’m going to try that later on this Spring.

Forecasting the Price of Gold – 2

Searching “forecasting gold prices” on Google lands on a number of ARIMA (autoregressive integrated moving average) models of gold prices. Ideally, researchers focus on shorter term forecast horizons with this type of time series model.

I take a look at this approach here, moving onto multivariate approaches in subsequent posts.

Stylized Facts

These ARIMA models support stylized facts about gold prices such as: (1) gold prices constitute a nonstationary time series, (2) first differencing can reduce gold price time series to a stationary process, and, usually, (3) gold prices are random walks.

For example, consider daily gold prices from 1978 to the present.

[Chart: daily gold prices, London PM fix, 1978 to present]

This chart, based on World Gold Council data and the London PM fix, shows gold prices do not fluctuate about a fixed level, but can move in patterns with a marked trend over several years.

The trick is to reduce such series to a mean stationary series through appropriate differencing and, perhaps, other data transformations, such as detrending and taking out seasonal variation. Guidance in this is provided by tools such as the autocorrelation function (ACF) and partial autocorrelation function (PACF) of the time series, as well as tests for unit roots.
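
In R, the forecast package offers quick checks of this kind; ‘gold_daily’ is assumed to be the daily price series.

```r
# How many differences are needed for stationarity, and what do the correlograms
# of the differenced series look like?
library(forecast)
ndiffs(gold_daily)           # returns 1 for a random-walk-like series
tsdisplay(diff(gold_daily))  # ACF and PACF of the first differences
```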

Some Terminology

I want to talk about specific ARIMA models, such as ARIMA(0,1,1) or ARIMA(p,d,q), so it might be a good idea to review what this means.

Quickly, ARIMA models are described by three parameters: (1) the autoregressive parameter p, (2) the number of times d the time series needs to be differenced to reduce it to a mean stationary series, and (3) the moving average parameter q.

ARIMA(0,1,1) indicates a model in which the original time series y_t is differenced once (d=1), and which has one lagged moving average term.

If the original time series is y_t, t = 1, 2, …, n, the first differenced series is z_t = y_t − y_{t−1}, and an ARIMA(0,1,1) model with a drift term μ looks like,

z_t = μ + ε_t + θ_1ε_{t−1}

or, converting back into the original series y_t,

y_t = μ + y_{t−1} + ε_t + θ_1ε_{t−1}

This is essentially a random walk with drift μ, plus a first order moving average error, incidentally.
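
To get a feel for what such a process looks like, it can be simulated directly in R; the values of μ and θ_1 below are arbitrary illustrative choices.

```r
# Simulate an ARIMA(0,1,1) process with drift and plot the integrated series.
set.seed(42)
z <- 0.2 + arima.sim(model = list(ma = 0.6), n = 500)  # differenced series plus drift
y <- cumsum(c(100, z))                                 # integrate back to the level
plot(y, type = "l", xlab = "t", ylab = "y")
```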

As a note on the general case, the p and q parameters describe the span of the lags and moving average terms in the model. This is often written compactly with backshift operators L^k, where L^k y_t = y_{t−k}.

[Figure: backshift (lag) operator notation]

So a polynomial in the backshift operator of order p can operate against y_t or z_t to generate the autoregressive lags, and similarly a polynomial of order q can operate against the error terms at various times. This provides a compact way of representing the general model with p autoregressive lags and q moving average terms.

Similar terminology can indicate the nature of seasonality, when that is operative in a time series.

These parameters are determined by considering the autocorrelation function ACF and partial autocorrelation function PACF, as well as tests for unit roots.

I’ve seen this referred to as “reading the tea leaves.”

Gold Price ARIMA models

I’ve looked over several papers on ARIMA models for gold prices, and conducted my own analysis.

My research confirms that the ACF and PACF indicate gold prices (always defined, of course, relative to some data source and trading frequency) are, in fact, random walks.

So this means that we can take, for example, the recent research of Dr. M. Massarrat Ali Khan of College of Computer Science and Information System, Institute of Business Management, Korangi Creek, Karachi as representative in developing an ARIMA model to forecast gold prices.

Dr. Massarrat’s analysis uses daily London PM fix data from January 02, 2003 to March 1, 2012, concluding that an ARIMA(0,1,1) has the best forecasting performance. This research also applies unit root tests to verify that the daily gold price series is stationary, after first differencing. Significantly, an ARIMA(1,1,0) model produced roughly similar, but somewhat inferior forecasts.

I think some of the other attempts at ARIMA analysis of gold price time series illustrate various modeling problems.

For example there is the classic over-reach of research by Australian researchers in An overview of global gold market and gold price forecasting. These academics identify the nonstationarity of gold prices, but attempt a ten year forecast, based on a modeling approach that incorporates jumps as well as standard ARIMA structure.

A new model proposed a trend stationary process to solve the nonstationary problems in previous models. The advantage of this model is that it includes the jump and dip components into the model as parameters. The behaviour of historical commodities prices includes three different components: long-term reversion, diffusion and jump/dip diffusion. The proposed model was validated with historical gold prices. The model was then applied to forecast the gold price for the next 10 years. The results indicated that, assuming the current price jump initiated in 2007 behaves in the same manner as that experienced in 1978, the gold price would stay abnormally high up to the end of 2014. After that, the price would revert to the long-term trend until 2018.

As the introductory graph shows, this forecast issued in 2009 or 2010 was massively wrong, since gold prices slumped significantly after about 2012.

So much for long-term forecasts based on univariate time series.

Summing Up

I have not referenced many of the ARIMA forecasting papers on gold prices I have seen, but focused on a couple – one which “gets it right” and another which makes a heroically wrong but interesting ten year forecast.

Gold prices appear to be random walks in many frequencies – daily, monthly average, and so forth.

Attempts at superimposing long term trends or even jump patterns seem destined to failure.

However, multivariate modeling approaches, when carefully implemented, may offer some hope of disentangling longer term trends and changes in volatility. I’m working on that post now.

Forecasting the Price of Gold – 1

I’m planning posts on forecasting the price of gold this week. This is an introductory post.

The Question of Price

What is the “price” of gold, or, rather, is there a single, integrated global gold market?

This is partly an anthropological question. Clearly in some locales, perhaps in rural India, people bring their gold jewelry to some local merchant or craftsman, and get widely varying prices. Presumably, though, this merchant negotiates with a broker in a larger city of India, and trades at prices which converge to some global average. Very similar considerations apply to interest rates, which are significantly higher at pawnbrokers and so forth.

The World Gold Council uses the London PM fix, which at the time of this writing was $1,379 per troy ounce.

The Wikipedia article on gold fixing recounts the history of this twice daily price setting, dating back, with breaks for wars, to 1919.

One thing is clear, however. The “price of gold” varies with the currency unit in which it is stated. The World Gold Council, for example, supplies extensive historical data upon registering with them. Here is a chart of the monthly gold prices based on the PM or afternoon fix, dating back to 1970.

[Chart: monthly gold prices, London PM fix, 1970 to present]

Another insight from this chart is that the price of gold may be correlated with the price of oil, which also ramped up at the end of the 1970’s and again in 2007, recovering quickly from the Great Recession in 2008-09 to surge up again by 2010-11.

But that gets ahead of our story.

The Supply and Demand for Gold

Here are two valuable tables on gold supply and demand fundamentals, based on World Gold Council sources, via An overview of global gold market and gold price forecasting. I have more to say about the forecasting model in that article, but the descriptive material is helpful (click to enlarge).

[Tables 1 and 2: gold supply and demand, World Gold Council data]

These tables give an idea of the main components of gold supply and demand over several recent years.

Gold is an unusual commodity in that one of its primary demand components – jewelry – can contribute to the supply-side. Thus, gold is in some sense renewable and recyclable.

Table 1 above shows that annual supplies in this period in the last decade ran on the order of three to four thousand tonnes, where a tonne is 1,000 kilograms, or about 2,205 pounds.

Demand for jewelry accounts for a good proportion of this annual supply, with demand from ETF’s or exchange traded funds rising rapidly in this period. Industrial and dental demand is an order of magnitude lower and steady.

One of the basic distinctions is between the monetary versus nonmonetary uses or demands for gold.

In total, central banks held about 30,000 tonnes of gold as reserves in 2008.

Another estimated 30,000 tonnes was held in inventory for industrial uses, with a whopping 100,000 tonnes being held as jewelry.

India and China constitute the largest single countries in terms of consumer holdings of gold, where it clearly functions as a store of value and hedge against uncertainty.

Gold Market Activity

In addition to actual purchases of gold, there are gold futures. The CME Group hosts a website with gold future listings. The site states,

Gold futures are hedging tools for commercial producers and users of gold. They also provide global gold price discovery and opportunities for portfolio diversification. In addition, they: offer ongoing trading opportunities, since gold prices respond quickly to political and economic events; and serve as an alternative to investing in gold bullion, coins, and mining stocks.

Some of these contracts are recorded at exchanges, but it seems the bulk of them are over-the-counter.

A study by the London Bullion Market Association estimates that 10.9bn ounces of gold, worth $15,200bn, changed hands in the first quarter of 2011 just in London’s markets. That’s 125 times the annual output of the world’s gold mines – and twice the quantity of gold that has ever been mined.

The Forecasting Problem

The forecasting problem for gold prices, accordingly, is complex. Extant series for gold prices do exist and underpin a lot of the market activity at central exchanges, but the total volume of contracts and gold exchanging hands is many times the actual physical quantity of the product. And there is a definite political dimension to gold pricing, because of the monetary uses of gold and the actions of central banks increasing and decreasing their reserves.

But the standard approaches to the forecasting problem are the same as can be witnessed in any number of other markets. These include the usual time series methods, built around ARIMA or autoregressive integrated moving average models, and multivariate regression models. More up-to-date tactics revolve around tests of cointegration of time series and VAR models. And, of course, one of the fundamental questions is whether gold prices, in their many incarnations, are best considered to be random walks.