Category Archives: autoregressive model

Stock Market Price Predictability, Random Walks, and Market Efficiency

Can stock market prices be predicted? Can they be predicted with enough strength to make profits?

The current wisdom may be that market predictability is like craps. That is, you might win (correctly predict) for a while, maybe walking away with nice winnings, if you are lucky. But over the long haul, the casino is the winner.

This seems close to the view in Andrew W. Lo and Craig MacKinlay’s A Non-Random Walk Down Wall Street (NRW), a foil to Burton Malkiel’s A Random Walk Down Wall Street, perhaps.

Lo and MacKinlay (L&M) collect articles from the 1980’s and 1990’s – originally published in the “very best journals” – in a 2014 compilation with interesting introductions and discussions.

Their work more or less conclusively demonstrates that US stock market prices are not, for identified historic periods, random walks.

The opposite idea – that stock prices are basically random walks – has a long history, “recently” traceable to the likes of Paul Samuelson, as well as Burton Malkiel. Supposedly, any profit opportunity in a deeply traded market will be quickly exploited, leaving price movements largely random.

The ringer for me in this whole argument is the autocorrelation (AC) coefficient.

The first order autocorrelation coefficient of a random walk, taken in levels, is 1, while the returns implied by a random walk should show no serial correlation at all. Yet returns computed from daily or weekly stock price data show positive first order autocorrelations. In fact, L&M were amazed to discover that the first order autocorrelation coefficient of weekly stock returns, based on CRSP data, was about 30 percent and statistically highly significant. In terms of technical approach, a key part of their analysis involves deriving asymptotic distributions and confidence intervals under assumptions broad enough to encompass nonconstant (heteroskedastic) error processes.

Finding this strong autocorrelation was somewhat incidental to their initial attack on the issue of the randomness, which is based on variance ratios.
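For readers who want to tinker with this themselves, here is a minimal sketch in Python of the two statistics at issue – the first order autocorrelation of returns and a bare-bones variance ratio. It assumes a pandas Series of weekly closing prices (the name weekly_close is hypothetical) and leaves out the heteroskedasticity-robust inference that is central to L&M's actual tests.

```python
import numpy as np
import pandas as pd

def first_order_autocorrelation(returns: pd.Series) -> float:
    """Sample first order autocorrelation of a return series."""
    return returns.dropna().autocorr(lag=1)

def variance_ratio(returns: pd.Series, q: int = 4) -> float:
    """Variance of overlapping q-period returns divided by q times the variance of
    1-period returns. Under a random walk (uncorrelated returns) this should be near 1;
    positive serial correlation pushes it above 1."""
    r = returns.dropna().to_numpy()
    var_1 = np.var(r, ddof=1)
    r_q = np.convolve(r, np.ones(q), mode="valid")   # overlapping q-period returns
    var_q = np.var(r_q, ddof=1)
    return var_q / (q * var_1)

# weekly_close = pd.Series(...)              # hypothetical weekly closing prices
# rets = np.log(weekly_close).diff()         # weekly log returns
# print(first_order_autocorrelation(rets))   # L&M report roughly 0.3 for weekly CRSP returns
# print(variance_ratio(rets, q=4))
```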

L&M were really surprised to discover significant AC in stock market returns, and, indeed, several of their articles explore ways they could be wrong, or things could be different than what they appear to be.

All this is more than purely theoretical, as Morgan Stanley and D. E. Shaw’s development of “high frequency equity trading strategies” shows. These strategies exploit this autocorrelation or time dependency through “statistical arbitrage.” By now, though, according to the authors, this is a thin-margin business, because of the “proliferation of hedge funds engaged in these activities.”

Well, there are some great, geeky lines for cocktail party banter, such as “rational expectations equilibrium prices need not even form a martingale sequence, of which the random walk is a special case.”

By itself, the “efficient market hypothesis” (EMH) is rather nebulous, and additional contextualization is necessary to “test” the concept. This means testing several joint hypotheses. Accordingly, negative results can simply be attributed to failure of one or more collateral assumptions. This builds a protective barrier around the EMH, allowing it to retain its character as an article of faith among many economists.

AWL

Andrew W. Lo is a Professor of Finance at MIT and Director of the Laboratory for Financial Engineering. His site through MIT lists other recent publications, and I would like to draw readers’ attention to two:

Can Financial Engineering Cure Cancer?

Reading About the Financial Crisis: A Twenty-One-Book Review

Predicting the High Reached by the SPY ETF 30 Days in Advance – Some Results

Here are some backtests of my new stock market forecasting procedures.

Here, for example, is a chart showing the performance of what I call the “proximity variable approach” in predicting the high price of the exchange traded fund SPY over 30 day forward periods.

[Chart: forecasts of the SPY high over 30-trading-day forward periods versus actuals, with percent errors]

So let’s be clear what the chart shows.

The proximity variable approach – which so far I have been abbreviating as “PVar” – is able to identify the high prices reached by the SPY in the coming 30 trading days with forecast errors mostly under 5 percent. In fact, the mean absolute percent error (MAPE) for this approximately ten year period is 3 percent. The percent errors, of course, are charted in red with their metric on the axis to the right.

The blue line traces out the predictions, and the grey line shows the actual highs by 30 trading day period.

These results far surpass what can be produced by benchmark models, such as the workhorse No Change model, or autoregressive models.

Why not just do this month-by-month?

Well, months have varying numbers of trading days, and I have found I can boost accuracy by stabilizing the number of trading days considered in the algorithm.
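To make the 30-trading-day setup concrete, here is a hedged sketch of how the forecast target and the error metric can be computed. It is not the PVar model itself – just the bookkeeping – and the column names and the forecast series are hypothetical.

```python
import pandas as pd

def forward_high(high: pd.Series, window: int = 30) -> pd.Series:
    """The highest 'high' over the next `window` trading days - the quantity being forecast."""
    return high.rolling(window).max().shift(-window)

def mape(actual: pd.Series, forecast: pd.Series) -> float:
    """Mean absolute percent error, in percent."""
    return 100 * ((forecast - actual) / actual).abs().dropna().mean()

# spy = pd.DataFrame(...)                        # daily SPY bars with Open/High/Low/Close columns
# target = forward_high(spy["High"], window=30)  # high over the coming 30 trading days
# forecasts = ...                                # whatever model supplies the forecasts
# print(mape(target, forecasts))                 # the chart above corresponds to roughly 3 percent
```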

Comments

Realize, of course, that a prediction of the high price that a stock or ETF will reach in a coming period does not tell you when the high will be reached – so it does not immediately translate to trading profits. The high in question could come with the opening price of the period, for example, leaving you out of the money if you jump into the market on hearing a big positive growth prediction.

However, I do think that market participants react to anticipated increases or decreases in the high or low of a security.

You might explain these results as follows. Traders react to fairly simple metrics predicting the high price which will be reached in the next period – and let this concept be extensible from a day to a month in this discussion. In so reacting, these traders tend to make such predictive models self-fulfilling.

Therefore, daily prices – the opening, the high, the low, and the closing prices – encode a lot more information about trader responses than is commonly given in the literature on stock market forecasting.

Of course, increasingly, scholars and experts are chipping away at the “efficient market hypothesis” and showing various ways in which stock market prices are predictable, or embody an element of predictability.

However, combing Google Scholar and other sources, it seems almost no one has taken the path to modeling stock market prices I am developing here. The focus in the literature is on closing prices and daily returns, for example, rather than high and low prices.

I can envision a whole research program organized around this proximity variable approach, and am drawn to taking this on, reporting various results on this blog.

If any readers would like to join with me in this endeavor, or if you know of resources which would be available to support such a project – feel free to contact me via the Comments and indicate, if you wish, whether you want your communication to be private.

More on the “Efficiency” of US Stock Markets – Evidence from 1871 to 2003

In a pivotal article, Andrew Lo writes,

Many of the examples that behavioralists cite as violations of rationality that are inconsistent with market efficiency – loss aversion, overconfidence, overreaction, mental accounting, and other behavioral biases – are, in fact, consistent with an evolutionary model of individuals adapting to a changing environment via simple heuristics.

He also supplies an intriguing graph of the rolling first order autocorrelation of monthly returns of the S&P Composite Index from January 1871 to April 2003.

[Chart: rolling first order autocorrelation of monthly S&P Composite returns, 1871–2003 (Lo)]

Lo notes the Random Walk Hypothesis implies that returns are serially uncorrelated, so the serial correlation coefficient ought to be zero – or at least, converging to zero over time as markets move into equilibrium.

However, the above chart shows this does not happen, although there are points in time when the first order serial correlation coefficient is small in magnitude, or even zero.
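A rolling first order autocorrelation like the one in Lo's chart is straightforward to reproduce for any return series. A minimal sketch, assuming a pandas Series of monthly returns (the series name and the five-year window are my assumptions, not necessarily Lo's choices):

```python
import pandas as pd

def rolling_ac1(returns: pd.Series, window: int = 60) -> pd.Series:
    """Rolling first order autocorrelation of a return series (60-month window)."""
    return returns.rolling(window).apply(lambda x: x.autocorr(lag=1), raw=False)

# monthly_returns = pd.Series(...)    # e.g. monthly S&P Composite returns
# ac1 = rolling_ac1(monthly_returns)
# ac1.plot()                          # under the Random Walk Hypothesis this should hover near zero
```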

My point is that the first order serial correlation in daily returns for the S&P 500 is large enough for long enough periods to generate profits above a Buy-and-Hold strategy – that is, if one can negotiate the tricky milliseconds of trading at the end of each trading day.

The King Has No Clothes or Why There Is High Frequency Trading (HFT)

I often present at confabs where there are engineers with management or executive portfolios. Before you start the slides, prepare for the tough questions. Make sure the numbers in the tables add up and that round-off errors or simple typos do not creep in to mess things up.

To carry this on a bit, I recall a Hewlett Packard VP whose preoccupation during meetings was to fiddle with their calculator – which dates the story a little. In any case, the only thing that really interested them was to point out mistakes in the arithmetic. The idea is apparently that if you cannot do addition, why should anyone believe your more complex claims?

I’m bending this around to the theory of efficient markets and rational expectations, by the way.

And I’m playing the role of the engineer.

Rational Expectations

The theory of rational expectations dates at least to the work of Muth in the 1960’s, and is coupled with “efficient markets.”

Lim and Brooks explain market efficiency in The Evolution of Stock Market Efficiency Over Time: A Survey of the Empirical Literature:

The term ‘market efficiency’, formalized in the seminal review of Fama (1970), is generally referred to as the informational efficiency of financial markets which emphasizes the role of information in setting prices… More specifically, the efficient markets hypothesis (EMH) defines an efficient market as one in which new information is quickly and correctly reflected in its current security price… the weak-form version… asserts that security prices fully reflect all information contained in the past price history of the market.

Lim and Brooks focus, among other things, on statistical tests for random walks in financial time series, noting this type of research is giving way to approaches highlighting adaptive expectations.

Proof US Stock Markets Are Not Efficient (or Maybe That HFT Saves the Concept)

I like to read mathematically grounded research, so I have looked at a lot of the papers purporting to show that the hypothesis that stock prices are random walks cannot be rejected statistically.

But really there is a simple constructive proof that this literature is almost certainly wrong.

STEP 1: Grab the data. Download daily adjusted closing prices for the S&P 500 from some free site (e.g., Yahoo Finance). I did this again recently, collecting data back to 1990. Adjusted closing prices, of course, are based on closing prices for the trading day, adjusted for dividends and stock splits. Oh yeah, you may have to re-sort the data from oldest to newest, since a lot of sites present the newest data on top.

Here’s a graph of the data, which should be very familiar by now.

[Chart: S&P 500 daily adjusted closing prices, 1990 to present]

STEP 2: Create the relevant data structure. In the same spreadsheet, compute the trading-day-over-trading-day growth in the adjusted closing price (ACP). Then, side-by-side with this growth rate of the ACP, create another series which, except for the first value, maps the growth in ACP for the previous trading day onto the growth of the ACP for any particular day. That gives you two columns of new data.

STEP 3: Run adaptive regressions. Most spreadsheet programs include an ordinary least squares (OLS) regression routine. Certainly, Excel does. In any case, you want to set up a regression to predict the growth in the ACP, based on a one-trading-day lag of the growth of the ACP.

I did this, initially, to predict the growth in ACP for January 3, 2000, based on data extending back to January 3, 1990 – a total of 2,528 trading days. Then I estimated regressions for each later date, rolling the same 2,528-trading-day window forward.

The resulting “predictions” for the growth in ACP are out-of-sample, in the sense that each prediction stands outside the sample of historic data used to develop the regression parameters used to forecast it.
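For anyone who prefers code to spreadsheets, here is a minimal sketch of Steps 2 and 3 in Python. It assumes a pandas Series of adjusted closing prices called acp (hypothetical), rolls a fixed 2,528-trading-day window forward, and produces one out-of-sample growth prediction per day.

```python
import numpy as np
import pandas as pd

def rolling_ar1_forecasts(acp: pd.Series, window: int = 2528) -> pd.DataFrame:
    """Out-of-sample one-day-ahead forecasts of ACP growth from a rolling regression of
    growth on its own one-day lag."""
    growth = acp.pct_change()                 # trading-day-over-trading-day growth
    lagged = growth.shift(1)                  # previous day's growth - the single regressor
    df = pd.DataFrame({"g": growth, "g_lag": lagged}).dropna()

    rows = []
    for end in range(window, len(df)):
        train = df.iloc[end - window:end]                 # trailing estimation window
        b, a = np.polyfit(train["g_lag"], train["g"], 1)  # OLS slope and intercept
        rows.append((df.index[end], a + b * df["g_lag"].iloc[end]))  # forecast for the next day

    out = pd.DataFrame(rows, columns=["date", "predicted_growth"]).set_index("date")
    out["actual_growth"] = df["g"].reindex(out.index)
    return out

# forecasts = rolling_ar1_forecasts(acp)
# sign_hits = (np.sign(forecasts["predicted_growth"]) == np.sign(forecasts["actual_growth"])).mean()
# print(sign_hits)    # something in the neighborhood of 53 percent, per the text
```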

It needs to be said that these predictions for the growth of the adjusted closing price (ACP) are marginal, correctly predicting the sign of the ACP only about 53 percent of the time.

An interesting question, though, is whether these just barely predictive forecasts can be deployed in a successful trading model. Would a trading algorithm based on this autoregressive relationship beat the proverbial “buy-and-hold?”

So, for example, suppose we imagine that we can trade at the close of each trading day, at prices close enough to the actual closing prices.

Then, you get something like this, if you invest $100,000 at the beginning of 2000, and trade through last week. If the predicted growth in the ACP is positive, you buy at the previous day’s close. If not, you sell at the previous day’s close. For the Buy-and-Hold portfolio, you just invest the $100,000 January 3, 2000, and travel to Tahiti for 15 years or so.

[Chart: cumulative value of $100,000 under Buy-and-Hold versus the AR trading strategy, 2000 to 2015]
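For completeness, here is a sketch of one reading of that trading rule, continuing from the hypothetical forecasts frame in the previous snippet: hold the index when the predicted growth is positive, sit in cash otherwise, and compare with Buy-and-Hold. Transaction costs and taxes are ignored at this stage.

```python
def backtest(forecasts, initial=100_000.0):
    """Long the index when predicted growth > 0, in cash otherwise; trades at the prior close."""
    in_market = forecasts["predicted_growth"] > 0
    strategy_returns = forecasts["actual_growth"].where(in_market, 0.0)
    strategy = initial * (1 + strategy_returns).cumprod()
    buy_and_hold = initial * (1 + forecasts["actual_growth"]).cumprod()
    return strategy, buy_and_hold

# strategy, buy_and_hold = backtest(forecasts)
# print(strategy.iloc[-1], buy_and_hold.iloc[-1])
```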

So, as should be no surprise, the Buy-and-Hold strategy results in replicating the S&P 500 Index on a $100,000 base.

The trading strategy based on the simple first order autoregressive model, on the other hand, achieves more than twice these cumulative earnings.

Now I suppose you could say that all this was an accident, or that it was purely a matter of chance, distributed over more than 3,810 trading days. But I doubt it. After all, this trading interval 2000-2015 includes the worst economic crisis since before World War II.

Or you might claim that the profits from the simple AR trading strategy would be eaten up by transactions fees and taxes. On this point, there were 1,774 trades, for an average of $163 per trade. So, worst case, if trading costs $10 a transaction, and there is a tax rate of 40 percent, that leaves $156K over these 14-15 years in terms of take-away profit, or about $10,000 a year.
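One way to reconstruct that arithmetic, reading the figures above: 1,774 trades × $163 per trade ≈ $289K gross; taking off the 40 percent tax leaves about $173K; subtracting 1,774 × $10 ≈ $18K in transaction costs leaves roughly $156K, or about $10,000 a year over the roughly fifteen-year interval.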

Where This May Go Wrong

This does sound like a paean to stock market investing – even “day-trading.”

What could go wrong?

Well, I assume here, of course, that exchange traded funds (ETF’s) tracking the S&P 500 can be bought and sold with the same tactics, as outlined here.

Beyond that, I don’t have access to the data currently (although I will soon), but I suspect high frequency trading (HFT) may stand in the way of realizing this marvelous investing strategy.

So remember you have to trade some small instant before market closing to implement this trading strategy. But that means you get into the turf of the high frequency traders. And, as previous posts here observe, all kinds of unusual things can happen in a blink of an eye, faster than any human response time.

So – a conjecture. I think that the choicest situations, from the standpoint of this more or less macro, inter-day perspective, may be precisely the places where you see huge spikes in the volume of HFT. This is a proposition that can be tested.

I also think something like this has to be appealed to in order to save the efficient markets hypothesis, or rational expectations. But in this case, it is not the rational expectations of human subjects, but the presumed rationality of algorithms and robots, as it were, which may be driving the market, when push comes to shove.


Pvar Models for Forecasting Stock Prices

When I began this blog three years ago, I wanted to deepen my understanding of technique – especially stuff growing up alongside Big Data and machine learning.

I also was encouraged by Malcolm Gladwell’s 10,000 hour idea – finding it credible from past study of mathematical topics. So maybe my performance as a forecaster would improve by studying everything about the subject.

Little did I suspect I would myself stumble on a major forecasting discovery.

But, as I am wont to quote these days, even a blind pig uncovers a truffle from time to time.

Forecasting Stock Prices

My discovery pertains to forecasting stock prices.

Basically, I have stumbled on a method of developing much more accurate forecasts of high and low stock prices, given the opening price in a period. These periods can be days, groups of days, weeks, months, and, based on what I present here – quarters.

Additionally, I have discovered a way to translate these results into much more accurate forecasts of closing prices over long forecast horizons.

I would share the full details, except I need some official acknowledgement for my work (in process) and, of course, my procedures lead to profits, so I hope to recover some of what I have invested in this research.

Having struggled through a maze of ways of doing this, however, I feel comfortable sharing a key feature of my approach – which is that it is based on the spreads between opening prices and the high and low of previous periods. Hence, I call these “Pvar models” for proximity variable models.
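Without giving away the specification, the flavor of a proximity variable is easy to illustrate. Here is a hedged sketch – an illustrative stand-in, not the actual Pvar models – of the two spreads just described, and an OLS regression of the period's high on them (column names are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm

def pvar_features(bars: pd.DataFrame) -> pd.DataFrame:
    """Proximity variables: spreads between this period's open and the prior period's high and low."""
    out = pd.DataFrame(index=bars.index)
    out["open_vs_prev_high"] = bars["Open"] / bars["High"].shift(1) - 1
    out["open_vs_prev_low"] = bars["Open"] / bars["Low"].shift(1) - 1
    out["high_growth"] = bars["High"] / bars["Open"] - 1    # target: growth of the high over the open
    return out.dropna()

# bars = pd.DataFrame(...)    # Open/High/Low/Close bars for days, weeks, months, or quarters
# feats = pvar_features(bars)
# X = sm.add_constant(feats[["open_vs_prev_high", "open_vs_prev_low"]])
# print(sm.OLS(feats["high_growth"], X).fit().summary())
```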

There is really nothing in the literature like this, so far as I am able to determine – although the discussion of 52 week high investing captures some of the spirit.

S&P 500 Quarterly Forecasts

Let’s look at an example – forecasting quarterly closing prices for the S&P 500, shown in this chart.

[Chart: S&P 500 quarterly closing prices, 2005 to present]

We are all familiar with this series. And I think most of us are worried that after the current runup, there may be another major correction.

In any case, this graph compares out-of-sample forecasts of ARIMA(1,1,0) and Pvar models. The ARIMA forecasts are estimated by the off-the-shelf automatic forecast program Forecast Pro. The Pvar models are estimated by ordinary least squares (OLS) regression, using Matlab and Excel spreadsheets.

[Chart: out-of-sample Pvar and ARIMA(1,1,0) forecasts versus the actual S&P 500]

The solid red line shows the movement of the S&P 500 from 2005 to just recently. Of course, the big dip in 2008 stands out.

The blue line charts out-of-sample forecasts of the Pvar model, which are, from visual inspection, clearly superior to the ARIMA forecasts, shown in orange.

And note the meaning of “out-of-sample” here. Parameters of the Pvar and ARIMA models are estimated over historic data which do not include the prices in the period being forecast. So the results are strictly comparable with applying these models today and checking their performance over the next three months.

The following bar chart shows the forecast errors of the Pvar and ARIMA forecasts.

[Chart: forecast errors of the Pvar and ARIMA models by quarter]

Thus, the Pvar model forecasts are not always more accurate than ARIMA forecasts, but clearly do significantly better at major turning points, like the 2008 recession.

The mean absolute percent errors (MAPE) for the two approaches are 7.6 and 10.2 percent, respectively.

This comparison is intriguing, since Forecast Pro automatically selected an ARIMA(1,1,0) model in each instance of its application to this series. An ARIMA(1,1,0) model is an autoregression on the first differences of the series, so its selection already challenges, to some extent, the received wisdom that stock prices are random walks. But Pvar poses an even more significant challenge to versions of the efficient market hypothesis, since Pvar models pull variables from the price series itself to predict that series – something you are really not supposed to be able to do, if markets are, as it were, “efficient.” Furthermore, this price predictability is persistent, and not just a fluke of some special period of market history.

I will have further comments on the scalability of this approach soon. Stay tuned.

Forecasting Google’s Stock Price (GOOG) On 20-Trading-Day Horizons

Google’s stock price (GOOG) is relatively volatile, as the following chart shows.

[Chart: GOOG daily closing prices]

So it’s interesting that a stock market forecasting algorithm can produce the following 20-trading-day-ahead forecasts for GOOG for the recent period.

[Chart: 20-trading-day-ahead PVar forecasts of GOOG versus actual prices]

The forecasts in the above chart, like those mentioned subsequently, are out-of-sample predictions. That is, the parameters of the forecast model – which I call the PVar model – are estimated over one set of historic prices. Then, the forecasts from PVar are generated with values for the explanatory variables that lie “outside,” or are not the same as, this historic data.

How good are these forecasts and how are they developed?

Well, generally forecasting algorithms are compared with benchmarks, such as an autoregressive model or a “no-change” forecast.

So I constructed an autoregressive (AR) model for the Google closing prices, sampled at 20 day frequencies. This model has ten lagged versions of the closing price series, so I do not just rely here on first order autocorrelations.
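For reference, a benchmark along those lines can be set up in a few lines with statsmodels – a sketch, with the daily closing price series name being hypothetical:

```python
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg

def ar_benchmark(close: pd.Series, lags: int = 10, step: int = 20):
    """Fit an autoregression with `lags` lags on closing prices sampled every `step` trading
    days, and return the next sampled-period (i.e. 20-trading-day-ahead) forecast."""
    sampled = close.iloc[::step].reset_index(drop=True)   # every 20th trading day
    fit = AutoReg(sampled, lags=lags, trend="c").fit()
    return fit.predict(start=len(sampled), end=len(sampled))

# goog_close = pd.Series(...)       # daily GOOG closing prices (hypothetical)
# print(ar_benchmark(goog_close))   # one 20-trading-day-ahead benchmark forecast
```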

Here is a comparison of the 20 trading-day-ahead predictions of this AR model, the above “proximity variable” (PVar) model which I take credit for, and the actual closing prices.

[Chart: AR benchmark and PVar forecasts versus actual GOOG closing prices]

As you can see, the AR model performs worse than the PVar model, although the two share some values at the end of the forecast series.

The mean absolute percent error (MAPE) of the AR model, over a period more extended than shown in the graph, is 7.0 percent, compared with 5.1 percent for PVar. This comparison is calculated over data from 4/20/2011.

So how do I do it?

Well, since these models show so much promise, it makes sense to keep working on them, making improvements. However, previous posts here give broad hints, indeed pretty well laying out the framework, at least on an introductory basis.

Essentially, I move from predicting highs and lows to predicting closing prices.

To predict highs and lows, my post “further research” states

Now, the predictive models for the daily high and low stock price are formulated, as before, keying off the opening price in each trading day. One of the key relationships is the proximity of the daily opening price to the previous period high. The other key relationship is the proximity of the daily opening price to the previous period low. Ordinary least squares (OLS) regression models can be developed which do a good job of predicting the direction of change of the daily high and low, based on knowledge of the opening price for the day.

Other posts present actual regression models, although these are definitely prototypes, based on what I know now.

Why Does This Work?

I’ll bet this works because investors often follow simple rules such as “buy when the opening price is sufficiently greater than the previous period high” or “sell, if the opening price is sufficiently lower than the previous period low.”

I have assembled evidence, based on time variation in the predictive coefficients of the PVar variables, which I probably will put out here sometime.

But the point is that momentum trading is a major part of stock market activity, not only in the United States, but globally. There’s even research claiming to show that momentum traders do better than others, although that’s controversial.

This means that the daily price record for a stock, the opening, high, low, and closing prices, encode information that investors are likely to draw upon over different investing horizons.

I’m pleased these insights open up many researchable questions. I predict all this will lead to wholly new generations of models in stock market analysis. And my guess, and so far it is largely just that, is that these models may prove more durable than many insights into patterns of stock market prices – due to a sort of self-confirming aspect.

Mapping High Frequency Data Onto Aggregated Variables – Monthly and Quarterly Data

A lot of important economic data are available only in quarterly installments. The US Gross Domestic Product (GDP) is one example.

Other financial series and indexes, such as the Chicago Fed National Activity Index, are available in monthly, or even higher frequencies.

Aggregation is a common tactic in this situation. So monthly data is aggregated to quarterly data, and then mapped against quarterly GDP.

But there are alternatives.

One is what Elena Andreou, Eric Ghysels and Andros Kourtellos call a naïve specification –

$$ y^{Q}_{t} \;=\; a \;+\; \rho\, y^{Q}_{t-1} \;+\; \sum_{j=0}^{N_D - 1} b_j\, x^{D}_{N_D\, t - j} \;+\; u_t $$

With daily (D) and quarterly (Q) data, there typically is a proliferation of parameters to estimate – 66 daily coefficients per quarter, if you allow 22 trading days per month over three months. Here $N_D$ in the above equation is the number of days in the quarterly period.

The usual workaround is a weighting scheme. Thus, two-parameter exponential Almon lag polynomials are identified with MIDAS, or Mixed Data Sampling.
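For reference, the two-parameter exponential Almon weights that MIDAS typically attaches to the $j$-th high frequency lag take the form

$$ w_j(\theta_1, \theta_2) \;=\; \frac{\exp(\theta_1 j + \theta_2 j^2)}{\sum_{k=1}^{N_D} \exp(\theta_1 k + \theta_2 k^2)}, $$

so that all $N_D$ high frequency lags are governed by just two parameters, $\theta_1$ and $\theta_2$.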

However, other researchers note that with monthly and quarterly data, direct estimation of expressions such as the one above (with monthly values $x^M$ in place of the daily $x^D$) is more feasible.

The example presented here shows that such models can achieve dramatic gains in accuracy.

Quarterly and Monthly Data Example

Let’s consider forecasting releases of the US nominal Gross Domestic Product by the Bureau of Economic Analysis.

From the BEA’s 2014 News Release Schedule for the National Economic Accounts, one can see that advance estimates of GDP occur a minimum of one month after the end of the quarter being reported. So, for example, the advance estimate for the Third Quarter was released October 30 of this year.

This means the earliest quarterly updates on US GDP become available fully a month after the end of the quarter in question.

The Chicago Fed National Activity Index (CFNAI), a monthly gauge of overall economic activity, is released three weeks after the month being measured.

So, by the time the preliminary GDP for the latest quarter (analyzed or measured) is released, as many as four recent monthly CFNAI index values are available, three of which pertain to the months constituting this latest measured quarter.

Accordingly, I set up an equation with a lagged term for GDP growth and fourteen values of the monthly CFNAI index. For each case, I regress GDP growth for quarter t onto GDP growth for quarter t-1, the monthly CFNAI values for quarter t except for its final month (which is not yet released at the end of the quarter), and twelve further monthly CFNAI values covering the four preceding quarters, t-1 through t-4.

One of the keys to this data structure is that the monthly CFNAI values do not “stack,” as it were. Instead the most recent lagged CFNAI value for a case always jumps by three months. So, for the 3rd quarter GDP in, say, 2006, the CFNAI value starts with the value for August 2006 and tracks back 14 values to July 2005. Then for the 4th quarter of 2006, the CFNAI values start with November 2006, and so forth.

This somewhat intricate description supports the idea that we are estimating current quarter GDP just at the end of the current quarter before the preliminary measurements are released.

Data and Estimation

I compile BEA quarterly data for nominal US GDP dating from the first Quarter of 1981 or 1981:1 to the 4th Quarter of 2011. I also download monthly data from the Chicago Fed National Activity Index from October 1979 to December 2011.

For my dependent or target variable, I calculate year-over-year GDP growth rates by quarter, from the BEA data.

I estimate an equation, as illustrated initially in this post, by ordinary least squares (OLS). For quarters, I use the sample period 1981:2 to 2006:4. The monthly data start earlier to assure enough lagged terms for the CFNAI index, and run from 1979:10 to 2006:12.
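Here is a minimal sketch of that data structure in Python – a stand-in for the spreadsheet and OLS workflow, with variable names hypothetical and the alignment of the monthly series to quarters treated as an assumption. For each quarter it collects lagged GDP growth plus the fourteen monthly CFNAI values running backward from the second month of the current quarter, then fits OLS.

```python
import pandas as pd
import statsmodels.api as sm

def build_design(gdp_growth: pd.Series, cfnai: pd.Series, n_monthly: int = 14) -> pd.DataFrame:
    """One row per quarter t: GDP growth (target), GDP growth in t-1, and the latest
    n_monthly CFNAI values available at the end of quarter t (through its second month)."""
    rows = []
    for t in range(4, len(gdp_growth)):    # t >= 4 so that 14 monthly lags exist
        m = 3 * t + 1                      # index of the second month of quarter t,
                                           # assuming cfnai[0] is the first month of quarter 0
        if m >= len(cfnai):
            break
        block = cfnai.iloc[m - n_monthly + 1 : m + 1].to_numpy()[::-1]   # most recent first
        rows.append([gdp_growth.iloc[t], gdp_growth.iloc[t - 1], *block])
    cols = ["gdp_growth", "gdp_growth_lag1"] + [f"cfnai_lag{j}" for j in range(n_monthly)]
    return pd.DataFrame(rows, columns=cols)

# gdp_growth = pd.Series(...)   # year-over-year GDP growth by quarter
# cfnai = pd.Series(...)        # monthly CFNAI values, aligned so cfnai[0] falls in quarter 0
# design = build_design(gdp_growth, cfnai)
# X = sm.add_constant(design.drop(columns="gdp_growth"))
# print(sm.OLS(design["gdp_growth"], X).fit().summary())
```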

Results

The results are fairly impressive. The regression equation estimated over quarterly and monthly data to the end of 2006 performs much better than a simple first order autoregression during the tremendous dip in growth characterizing the Great Recession. In general, even after the stabilization of GDP growth in 2010 and 2011, the high frequency data regression produces better out-of-sample forecasts.

Here is a graph comparing the out-of-sample forecast accuracy of the high frequency regression and a simple first order autoregression.

[Chart: out-of-sample GDP growth forecasts – high frequency data regression versus the AR(1) benchmark]

What’s especially interesting is that the high frequency data regression does a good job of capturing the drop in GDP and the movement at the turning point in 2009 – the depth of the Great Recession.

I throw this chart up as a proof-of-concept. More detailed methods, using a specially-constructed Chicago Fed index, are described in a paper in the Journal of Economic Perspectives.

Video Friday – Volatility

Here are a couple of short YouTube videos from Bionic Turtle on estimating a GARCH (generalized autoregressive conditional heteroskedasticity) model and the simpler exponentially weighted moving average (EWMA) model.

GARCH models are designed to capture the clustering of volatility illustrated in the preceding post.

Forecast volatility with GARCH(1,1)

The point is that the parameters of a GARCH model are estimated over historic data, so the model can be utilized prospectively, to forecast future volatility, usually in the near term.

EWMA models, insofar as they put more weight on recent values than on values more distant back in time, also tend to capture clustering phenomena.
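The two recursions the videos walk through are compact enough to write out directly. A minimal sketch – the parameter values in the usage lines are illustrative assumptions, not estimates:

```python
import numpy as np

def ewma_variance(returns: np.ndarray, lam: float = 0.94) -> np.ndarray:
    """EWMA (RiskMetrics-style): sigma2[t] = lam * sigma2[t-1] + (1 - lam) * r[t-1]**2."""
    sigma2 = np.empty_like(returns, dtype=float)
    sigma2[0] = np.var(returns)                       # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return sigma2

def garch11_variance(returns: np.ndarray, omega: float, alpha: float, beta: float) -> np.ndarray:
    """GARCH(1,1): sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1].
    In practice omega, alpha and beta come from maximum likelihood estimation on historic data."""
    sigma2 = np.empty_like(returns, dtype=float)
    sigma2[0] = omega / (1 - alpha - beta)            # start at the unconditional variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# r = np.random.default_rng(0).normal(0.0, 0.01, 1000)    # stand-in daily return series
# ewma = ewma_variance(r)
# garch = garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.90)
```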

Here is a comparison.

EWMA versus GARCH(1,1) volatility

Several of the Bionic Turtle series on estimating financial metrics are worth checking out.

Semiconductor Cycles

I’ve been exploring cycles in the semiconductor, computer and IT industries generally for quite some time.

Here is an exhibit I prepared in 2000 for a magazine serving the printed circuit board industry.

[Chart: cycles in semiconductor shipments and computer equipment, exhibit prepared in 2000]

The data come from two sources – the Semiconductor Industry Association (SIA) World Semiconductor Trade Statistics database and the Census Bureau manufacturing series for computer equipment.

This sort of analytics spawned a spate of academic research, beginning more or less with the work of Tan and Mathews in Australia.

One of my favorites is a working paper released by DRUID – the Danish Research Unit for Industrial Dynamics – called Cyclical Dynamics in Three Industries. Tan and Mathews consider cycles in semiconductors, computers, and what they call the flat panel display industry. They start by quoting “industry experts” and, specifically, some of my work with Economic Data Resources on the computer (PC) cycle. These researchers went on to publish in the Journal of Business Research and Technological Forecasting and Social Change in 2010. A year later, in 2011, Tan published an interesting article on the sequencing of cyclical dynamics in semiconductors.

Essentially, the appearance of cycles and what I have called quasi-cycles or pseudo-cycles in the semiconductor industry and other IT categories, like computers, result from the interplay of innovation, investment, and pricing. In semiconductors, for example, Moore’s law – which everyone always predicts will fail at some imminent future point – indicates that continuing miniaturization will lead to periodic reductions in the cost of information processing. At some point in the 1980’s, this cadence was firmly established by introductions of new microprocessors by Intel roughly every 18 months. The enhanced speed and capacity of these microprocessors – the “central nervous system” of the computer – was complemented by continuing software upgrades, and, of course, by the movement to graphical interfaces with Windows and the succession of Windows releases.

Back along the supply chain, semiconductor fabs were retooling periodically to produce chips with more and more transistors per volume of silicon. These fabs were, simply put, fabulously expensive, and their investment dynamics factor into pricing in semiconductors. There were famous gluts, for example, of memory chips in 1996, and overall the whole IT industry led the recession of 2001 with massive inventory overhang, resulting from double booking and the infamous Y2K scare.

Statistical Modeling of IT Cycles

A number of papers, summarized in Aubrey, deploy VAR (vector autoregression) models to capture leading indicators of global semiconductor sales. A variant of these is the Bayesian VAR or BVAR model. Basically, VAR models sort of blindly specify all possible lags for all possible variables in a system of autoregressive models. Of course, some cutoff point has to be established, and the variables to be included in the VAR system have to be selected by one means or another. A BVAR reduces the effective number of parameters by imposing prior distributions on the coefficients, and related restrictions, such as sign constraints on key coefficients, are sometimes imposed as well.

Typical variables included in these models include:

  • WSTS monthly semiconductor shipments (now by subscription only from SIA)
  • Philadelphia semiconductor index (SOX) data
  • US data on various IT shipments, orders, inventories from M3
  • data from SEMI, the association of semiconductor equipment manufacturers

Another tactic is to filter out low and high frequency variability in a semiconductor sales series with something like the Hodrick-Prescott (HP) filter, and then conduct a spectral analysis.
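That tactic, too, is only a few lines of code. A sketch, assuming a monthly semiconductor sales series (the name and the monthly HP smoothing parameter are assumptions):

```python
import pandas as pd
from scipy.signal import periodogram
from statsmodels.tsa.filters.hp_filter import hpfilter

def cycle_spectrum(sales: pd.Series, lamb: float = 129600):
    """Detrend a monthly series with the HP filter, then take the spectrum of the cyclical part."""
    cycle, trend = hpfilter(sales.dropna(), lamb=lamb)   # 129600 is a common monthly setting
    freqs, power = periodogram(cycle)
    return freqs, power

# sales = pd.Series(...)                 # monthly semiconductor sales
# freqs, power = cycle_spectrum(sales)
# # peaks in `power` flag the dominant cycle lengths (1/frequency, in months)
```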

Does the Semiconductor/Computer/IT Cycle Still Exist?

I wonder whether academic research into IT cycles is a case of “redoubling one’s efforts when you lose sight of the goal,” or more specifically, whether new configurations of forces are blurring the formerly fairly cleanly delineated pulses in sales growth for semiconductors, computers, and other IT hardware.

“Hardware” is probably a key here, since there have been big changes since the 1990’s and early years of this brave new century.

For one thing, complementarities between software and hardware upgrades seem to be breaking down. This began in earnest with the development of virtual servers – software which enabled many virtual machines on the same hardware frame, in part because the underlying circuitry had become so massively powerful and high capacity. Significant declines in the growth of sales of these machines followed the wide deployment of this software, which was designed to achieve higher utilization of individual machines.

Another development is cloud computing. Running the data side of things is gradually being taken away from in-house IT departments in companies and moved over to cloud computing services. Of course, critical data for a company is always likely to be maintained in-house, but the need for expanding the number of big desktops with the number of employees is going away – or has indeed gone away.

At the same time, tablets – Apple products and Android machines – created a wave of creative destruction in people’s access to the Internet, and, more and more, in everyday functions like keeping calendars, taking notes, even writing and processing photos.

But note – I am not studding this discussion with numbers as of yet.

I suspect that underneath all this change it should be possible to identify some IT invariants, perhaps in usage categories, which continue to reflect a kind of pulse and cycle of activity.

Some Cycle Basics

A Fourier analysis is one of the first steps in analyzing cycles.

Take sunspots, for example.

There are extensive historic records of the annual number of sunspots, dating back to 1700. The annual series shown in the following graph is currently maintained by the Royal Observatory of Belgium.

[Chart: annual sunspot numbers, 1700 to present]

This series is relatively stationary, although there may be a slight trend if you cut this span of data off a few years before the present.

In any case, the kind of thing you get with a Fourier analysis looks like this.

[Chart: spectrum of the annual sunspot numbers]

This shows the power, or importance, of cycles at each frequency, measured in cycles per year, with a peak at around 0.09 cycles per year – which corresponds to a cycle length of roughly 11 years (1/0.09 ≈ 11).

These data can be recalibrated into the following chart, which highlights the approximately 11 year major cycle in the sunspot numbers.

[Chart: periodogram recalibrated to cycle length in years, highlighting the roughly 11 year cycle]

Now it’s possible to build a simple regression model with a lagged explanatory variable to make credible predictions. A lag of eleven years produces the following in-sample and out-of-sample fits. The regression is estimated over data to 1990, and, thus, the years 1991 through 2013 are out-of-sample.

[Chart: in-sample and out-of-sample fits of the lag-eleven regression model]
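A minimal sketch of both steps – the periodogram and the lag-eleven regression – assuming an annual sunspot series indexed by year (the variable name is hypothetical):

```python
import numpy as np
import pandas as pd
from scipy.signal import periodogram

def dominant_cycle_years(sunspots: pd.Series) -> float:
    """Periodogram of the annual series; the peak frequency gives the major cycle length."""
    freqs, power = periodogram(sunspots.to_numpy())
    peak = freqs[np.argmax(power[1:]) + 1]      # skip the zero-frequency term
    return 1.0 / peak                           # cycle length in years (about 11 here)

def lag11_forecasts(sunspots: pd.Series, train_end_year: int = 1990) -> pd.Series:
    """OLS of sunspots on their own value eleven years earlier, estimated on data through
    train_end_year, then applied to later years as out-of-sample predictions."""
    x = sunspots.shift(11)
    train = pd.DataFrame({"y": sunspots, "x": x}).dropna().loc[:train_end_year]
    b, a = np.polyfit(train["x"], train["y"], 1)
    return a + b * x.loc[train_end_year + 1:]

# sunspots = pd.Series(...)   # annual sunspot numbers indexed by year, 1700 onward
# print(dominant_cycle_years(sunspots))
# print(lag11_forecasts(sunspots).tail())
```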

It’s obvious this sort of forecasting approach is not quite ready for prime-time television, even though it performs OK on several of the out-of-sample years after 1990.

But this exercise does highlight a couple of things.

First, the annual number of sunspots is broadly cyclical in this sense. If you try the same trick with lagged values for the US “business cycle” the results will be radically worse. At least with the sunspot data, most of the fluctuations have timing that is correctly predicted, both in-sample (1990 and before) and out-of-sample (1991-2013).

Secondly, there are stochastic elements to this solar activity cycle. The variation in amplitude is dramatic, and, indeed, the latest numbers coming in on sunspot activity are moving to much lower levels, even though the cycle is supposedly at its peak.

I’ve reviewed several papers on predicting the sunspot cycle. There are models which are more profoundly inspired by the possible physics involved – dynamo dynamics for example. But for my money there are basic models which, on a one-year-ahead basis, do a credible job. More on this forthcoming.