Forecasting the Downswing in Markets – II

Because the Great Recession of 2008-2009 was closely tied to asset bubbles in the US and other housing markets, I have a category for asset bubbles in this blog.

In researching the housing and other asset bubbles, I have been surprised to discover that there are economists who deny their existence.

By one definition, an asset bubble is a sustained movement of prices in a market away from fundamental values. While there are precedents for suggesting that bubbles can form in the context of rational expectations (for example, Blanchard’s widely quoted 1982 paper), it seems more reasonable to consider that “noise” investors who are less than perfectly informed are part of the picture. Thus, there is an interesting study of the presence and importance of “out-of-town” investors in the recent run-up of US residential real estate prices, which peaked around 2006.

The “deviations from fundamentals” approach in econometrics often translates into attempts to establish, or to show breaks in, cointegrating relationships between, for example, rental rates and housing prices. Let me just say right off that the problem with this is that the whole subject of cointegration of nonstationary time series is fraught with statistical pitfalls – such as tests with weak power to reject unit roots. To hang everything on whether Granger causation can be shown is to be subject to the whims of random influences in the data, as well as to violations of distributional assumptions on the relevant error terms.

I am sorry if all that sounds kind of wonkish, but it really needs to be said.

Institutionalist approaches seem more promising – such as a recent white paper arguing that the housing bubble and bust was the result of a…

supply-side phenomenon, attributable to an excess of mispriced mortgage finance: mortgage-finance spreads declined and volume increased, even as risk increased—a confluence attributable only to an oversupply of mortgage finance.

But what about forecasting the trajectory of prices, both up and then down, in an asset bubble?

What can we make out of charts such as this, in a recent paper by Sornette and Cauwels?

[Figure: a negative bubble, from Sornette and Cauwels]

Sornette and the many researchers collaborating with him over the years are working with a paradigm of an asset bubble as a faster than exponential increase in prices. In an as yet futile effort to extend the olive branch to traditional economists (Sornette is a geophysicist by training), Sornette evokes the “bubbles following from rational expectations” meme. The idea is that it could be rational for an investor to participate in a market that is in the throes of an asset bubble, provided that the investor believes his gains in the near future adequately compensate for the increased risk of a collapse of prices. This is the “greater fool” theory to a large extent, and I always take delight in pointing out that one of the most intelligent of all human beings – Isaac Newton – was burned by exactly such a situation hundreds of years ago.

In any case, the mathematics of the Sornette et al approach is organized around the log-periodic power law, expressed in the following equation with Sornette and Cauwels’ commentary.

[Figure: the log-periodic power law (LPPL) equation, with Sornette and Cauwels’ commentary. In its standard form, the expected log-price for times t before the critical time tc follows ln p(t) = A + B(tc - t)^m + C(tc - t)^m cos(ω ln(tc - t) - φ).]

From a big picture standpoint, the first thing to observe is that there is a parameter tc in the equation which is the “critical time.”

The whole point of this mathematical apparatus, which derives in part from differential equations and some basic modeling approaches common in physics, is that faster than exponential growth is destined to reach a point at which it basically goes ballistic. That is the critical point. The purpose of forecasting in this context, then, is to predict when this will happen – when the asset bubble will reach its maximum price and then collapse.

And the Sornette framework allows for negative as well as positive price movements according to the dynamics in this equation. So, it is possible, if we can implement this, to predict how far the market will fall after the bubble pops, so to speak, and when it will turn around.

Pretty heady stuff.

The second big picture feature is to note the number of parameters to be estimated in fitting this model to real price data – minimally constants A, B, and C, an exponent m, the angular frequency ω and phase φ, plus the critical time.

For the mathematically inclined, there is a thread of criticism and response, more or less culminating in Clarifications to questions and criticisms on the Johansen–Ledoit–Sornette financial bubble model, which used to be available as a PDF download from ETH Zurich.

In brief, the issue is whether the numerical methods fitting the data to the LPPL model arrive at local, rather than global, optima. Obviously, different values for the parameters can lead to wholly different forecasts for the critical time tc.

To some extent, this issue can be dealt with by running a great number of estimations from different starting values, or by developing collateral metrics for the adequacy of the estimates.
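For concreteness, here is a minimal sketch of such a multi-start estimation in Python, using scipy’s least-squares optimizer. The functional form follows the LPPL equation above; the synthetic data, parameter bounds, and number of restarts are illustrative assumptions of mine, not the calibration procedure Sornette’s group actually uses.

```python
import numpy as np
from scipy.optimize import least_squares

def lppl(t, tc, m, omega, A, B, C, phi):
    """Log-periodic power law: expected log-price for times t before tc."""
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) - phi)

def residuals(params, t, log_price):
    return lppl(t, *params) - log_price

# Synthetic bubble data (hypothetical), standing in for a real log-price series.
rng = np.random.default_rng(42)
t = np.arange(200.0)
true_params = (220.0, 0.5, 8.0, 5.0, -0.05, 0.01, 1.0)
log_price = lppl(t, *true_params) + rng.normal(0, 0.01, t.size)

# Multi-start estimation: many random initializations, keeping the best fit,
# to reduce the chance of reporting a local rather than global optimum.
best = None
for _ in range(100):
    x0 = [
        rng.uniform(201, 300),   # tc: critical time, beyond the sample
        rng.uniform(0.1, 0.9),   # m: power-law exponent
        rng.uniform(4, 15),      # omega: angular frequency
        log_price[-1],           # A: log-price scale
        -0.1,                    # B
        0.0,                     # C
        rng.uniform(0.0, 6.0),   # phi: phase
    ]
    fit = least_squares(
        residuals, x0, args=(t, log_price),
        bounds=([200.1, 0.01, 2.0, -np.inf, -np.inf, -np.inf, 0.0],
                [400.0, 0.99, 25.0, np.inf, np.inf, np.inf, 6.3]),
    )
    if best is None or fit.cost < best.cost:
        best = fit

print("estimated critical time tc:", best.x[0])
```

Even on clean synthetic data like this, different restarts can land on quite different values of tc, which is exactly the local-optimum problem at issue.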

But the bottom line is – regardless of the extensive applications of this approach to all manner of asset bubbles internationally and in different markets – the estimation of the parameters seems more in the realm of art than science at the present time.

However, it may be that mathematical or computational breakthroughs are possible.

I feel these researchers are “very close.”

In any case, it would be great if there were a package in R or the like to gin up these estimates of the critical time, applying the log-periodic power law.

Then we could figure out “how low it can go.”

And, a final note to this post – it is ironic that as I write and post this, the stock markets have recovered from their recent swoon and are setting new records. So I guess I just want to be prepared, and am not willing to believe the run-up can go on forever.

I’m also interested in methodologies that can keep forecasters usefully at work, during the downswing.

Oil and Gas Prices II

One of the more interesting questions in applied forecasting is the relationship between oil and natural gas prices in the US market, shown below.

[Figure: US oil and natural gas prices]

Up to the early 1990’s, the interplay between oil and gas prices followed “rules of thumb” – for example, gas prices per million Btu were approximately one tenth oil prices.

There is still some suggestion of this – for example, peak oil prices recently hit nearly $140 a barrel at the same time that gas prices reached nearly $14 per million Btu.

However, generally, ratio relationships appear to break down around 2009, if not earlier, during the first decade of the century.

A Longer Term Relationship?

Perhaps oil and gas prices are in a longer term relationship, but one disturbed in many cases in short run time periods.

One way economists and econometricians think of this is in terms of “co-integrating relationships.” That’s a fancy way of saying that regressions of the form,

Gas price in time t = constant + α(oil price in time t) + (residual in time t)

are predictive. Here, α is a coefficient to be estimated.

Now this looks like a straightforward regression, so you might say – “what’s the problem?”

Well, the catch is that gas prices and oil prices might be nonstationary – that is, one or another form of a random walk.

If this is so – and results pointing in this direction on standard tests such as the augmented Dickey-Fuller (ADF) and Phillips-Perron tests are widely reported – there is a big potential problem. It’s easy to regress one completely unrelated nonstationary time series onto another, getting an apparently significant result, only to find the relationship disappears in the forecast. In other words, two random series can, by chance, track each other closely for a time, but that’s no guarantee they will continue to do so.

Here’s where the concept of a co-integrating relationship comes into play.

If you can show, by various statistical tests, that variables are cointegrated, regressions such as the one above are more likely to be predictive.
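As a sketch of how this testing sequence looks in practice (Python with statsmodels here, although the same tests are available in R and most econometrics packages), with random-walk stand-ins for the real price series:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, coint

# Hypothetical stand-in series; in practice, load e.g. monthly WTI and
# Henry Hub spot prices over the same dates.
rng = np.random.default_rng(0)
oil = 50 + np.cumsum(rng.normal(0, 1, 300))   # random walk
gas = 0.1 * oil + rng.normal(0, 0.5, 300)     # tied to oil, plus noise

# Step 1: unit root tests on each series (ADF null hypothesis: unit root).
for name, series in [("oil", oil), ("gas", gas)]:
    stat, pvalue = adfuller(series)[:2]
    print(f"ADF {name}: stat={stat:.2f}, p={pvalue:.3f}")

# Step 2: Engle-Granger cointegration test (null: no cointegration).
stat, pvalue, _ = coint(gas, oil)
print(f"Engle-Granger: stat={stat:.2f}, p={pvalue:.3f}")

# Step 3: if cointegration holds, the levels regression is meaningful.
fit = sm.OLS(gas, sm.add_constant(oil)).fit()
print("estimated alpha:", fit.params[1])
```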

Well, several econometric studies show gas and oil prices are in a cointegrated relationship, using data from the 1990’s through sometime in the first decade of the 2000’s. The more sophisticated studies specify auxiliary variables to account for weather or changes in gas storage. You might download and read, for example, a study published in 2007 under the auspices of the Dallas Federal Reserve Bank – What Drives Natural Gas Prices?

But it does not appear that this cointegrated relationship is fixed. Instead, it changes over time, perhaps exemplifying various regimes, i.e. periods of time in which the underlying parameters switch to new values, even though a determinate relationship can still be demonstrated.

Changing parameters are shown in the excellent 2012 study by Ramberg and Parsons in the Energy Journal – The Weak Tie Between Natural Gas and Oil Prices.
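For readers who want to experiment with the regime idea directly, statsmodels includes a Markov-switching regression. Here is a minimal sketch, assuming the oil and gas arrays from the previous snippet and, purely for illustration, two regimes:

```python
import statsmodels.api as sm

# Two-regime Markov-switching regression of gas prices on oil prices:
# the intercept, the oil coefficient, and the error variance may all
# take different values in the two regimes.
mod = sm.tsa.MarkovRegression(gas, k_regimes=2, exog=oil,
                              switching_variance=True)
res = mod.fit()
print(res.summary())   # per-regime coefficients and transition probabilities
```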

The Underlying Basis

Anyway, there are facts relating to production and use of oil and natural gas which encourage us to postulate a relationship in their prices, although the relationship may shift over time.

This makes sense, since oil and gas are partial or complete substitutes in various industrial processes. This used to be more compelling in electric power generation than it is today. According to the US Department of Energy, only limited amounts of electric power are still produced by generators running on oil, although natural gas turbines have grown in importance.

Still, natural gas is often found alongside oil, and is frequently dissolved in it, so oil and natural gas are usually joint products.

Recently, technology has changed the picture with respect to gas and oil.

On the demand side, the introduction of the combined-cycle combustion turbine made natural gas electricity generation more cost effective, making natural gas even more dominant in electric power generation.

On the supply side, the new technologies of extracting shale oil and natural gas – often summarized under the rubric of “fracking” or hydraulic fracturing – have totally changed the equation, resulting in dramatic increases in natural gas supplies in the US.

This leaves the interesting question of what sort of forecasting model for natural gas might be appropriate.

Semiconductor Cycles

I’ve been exploring cycles in the semiconductor, computer and IT industries generally for quite some time.

Here is an exhibit I prepared in 2000 for a magazine serving the printed circuit board industry.

[Figure: cycles in semiconductor shipments and computer equipment, from an exhibit prepared in 2000]

The data come from two sources – the Semiconductor Industry Association (SIA) World Semiconductor Trade Statistics database and the Census Bureau manufacturing series for computer equipment.

This sort of analytics spawned a spate of academic research, beginning more or less with the work of Tan and Mathews in Australia.

One of my favorites is a working paper released by DRUID – the Danish Research Unit for Industrial Dynamics – called Cyclical Dynamics in Three Industries. Tan and Mathews consider cycles in semiconductors, computers, and what they call the flat panel display industry. They start by quoting “industry experts” and, specifically, some of my work with Economic Data Resources on the computer (PC) cycle. These researchers went on to publish in the Journal of Business Research and Technological Forecasting and Social Change in 2010. A year later, in 2011, Tan published an interesting article on the sequencing of cyclical dynamics in semiconductors.

Essentially, the appearance of cycles and what I have called quasi-cycles or pseudo-cycles in the semiconductor industry and other IT categories, like computers, results from the interplay of innovation, investment, and pricing. In semiconductors, for example, Moore’s law – which everyone always predicts will fail at some imminent future point – indicates that continuing miniaturization will lead to periodic reductions in the cost of information processing. At some point in the 1980’s, this cadence was firmly established by Intel’s introductions of new microprocessors roughly every 18 months. The enhanced speed and capacity of these microprocessors – the “central nervous system” of the computer – was complemented by continuing software upgrades, and, of course, by the movement to graphical interfaces with Windows and the succession of Windows releases.

Back along the supply chain, semiconductor fabs were retooling periodically to produce chips with more and more transistors per volume of silicon. These fabs were, simply put, fabulously expensive, and these investment dynamics factor into semiconductor pricing. There were famous gluts, for example, of memory chips in 1996, and overall the whole IT industry led the recession of 2001 with massive inventory overhang, resulting from double booking and the infamous Y2K scare.

Statistical Modeling of IT Cycles

A number of papers, summarized in Aubrey, deploy VAR (vector autoregression) models to capture leading indicators of global semiconductor sales. A variant of these is the Bayesian VAR or BVAR model. Basically, VAR models sort of blindly specify all possible lags for all possible variables in a system of autoregressive models. Of course, some cutoff point has to be established, and the variables to be included in the VAR system have to be selected by one means or another. A BVAR reduces the number of possibilities by imposing, for example, sign constraints on the resulting coefficients or, more ambitiously, by employing some type of prior distribution for key variables. A toy sketch of the VAR mechanics follows the list below.

Typical variables included in these models include:

  • WSTS monthly semiconductor shipments (now by subscription only from SIA)
  • Philadelphia semiconductor index (SOX) data
  • US data on various IT shipments, orders, inventories from M3
  • data from SEMI, the association of semiconductor equipment manufacturers
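To make the mechanics concrete, here is a toy VAR in Python with statsmodels. The three series are synthetic stand-ins for indicators like those listed above; swap in the real data to use it.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic monthly stand-ins for semiconductor billings, the SOX index,
# and M3 computer equipment orders (strictly positive random walks).
rng = np.random.default_rng(1)
data = pd.DataFrame({
    "semi_billings": 100 * np.exp(np.cumsum(rng.normal(0, 0.05, 120))),
    "sox_index": 400 * np.exp(np.cumsum(rng.normal(0, 0.08, 120))),
    "m3_orders": 50 * np.exp(np.cumsum(rng.normal(0, 0.04, 120))),
})

# Work with log-differences (monthly growth rates) so the series are stationary.
growth = np.log(data).diff().dropna()

model = VAR(growth)
order = model.select_order(maxlags=6)    # information-criterion lag selection
res = model.fit(max(order.aic, 1))       # at least one lag
print(res.summary())

# Six-month-ahead forecast of the growth rates.
forecast = res.forecast(growth.values[-res.k_ar:], steps=6)
print(forecast)
```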

Another tactic is to filter out low and high frequency variability in a semiconductor sales series with something like the Hodrick-Prescott (HP) filter, and then conduct a spectral analysis.
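A minimal sketch of that tactic, again in Python, with a synthetic monthly sales series standing in for the real data:

```python
import numpy as np
from scipy.signal import periodogram
from statsmodels.tsa.filters.hp_filter import hpfilter

# Synthetic monthly series: trend + a four-year (48-month) cycle + noise.
rng = np.random.default_rng(2)
months = np.arange(240)
sales = (100 + 0.5 * months
         + 10 * np.sin(2 * np.pi * months / 48)
         + rng.normal(0, 2, months.size))

# HP filter; lamb=129600 is the conventional smoothing value for monthly data.
cycle, trend = hpfilter(sales, lamb=129600)

# Spectral analysis of the cyclical component.
freqs, power = periodogram(cycle)
dominant = freqs[np.argmax(power[1:]) + 1]   # skip the zero frequency
print(f"dominant cycle length: {1 / dominant:.1f} months")
```

On this toy data the periodogram recovers the 48-month cycle; on real semiconductor sales, the interesting question is whether any such peak persists.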

Does the Semiconductor/Computer/IT Cycle Still Exist?

I wonder whether academic research into IT cycles is a case of “redoubling one’s efforts when you lose sight of the goal,” or more specifically, whether new configurations of forces are blurring the formerly fairly cleanly delineated pulses in sales growth for semiconductors, computers, and other IT hardware.

“Hardware” is probably a key here, since there have been big changes since the 1990’s and early years of this brave new century.

For one thing, complementarities between software and hardware upgrades seem to be breaking down. This began in earnest with the development of virtual servers – software which enables many virtual machines to run on the same hardware frame, feasible in part because the underlying circuitry had become so powerful and high-capacity. Significant declines in the growth of server sales followed the wide deployment of this software, which was designed to achieve higher utilization of individual machines.

Another development is cloud computing. Running the data side of things is gradually being taken away from in-house IT departments and moved over to cloud computing services. Of course, critical data for a company is always likely to be maintained in-house, but the need to expand the number of big desktops in step with the number of employees is going away – or has indeed gone away.

At the same time, tablets – Apple products and Android machines – created a wave of creative destruction in people’s access to the Internet and, more and more, in everyday functions like keeping calendars, taking notes, even writing and processing photos.

But note – I am not studding this discussion with numbers as of yet.

I suspect that underneath all this change it should be possible to identify some IT invariants, perhaps in usage categories, which continue to reflect a kind of pulse and cycle of activity.

What is a Market Bubble?

Let’s ask what might seem to be a silly question, but which turns out to be challenging. What is an asset bubble? How can asset bubbles be identified quantitatively?

Let me highlight two definitions that are prominent in the economics and analytical literature. And remember, when working through “definitions,” that the last major asset bubbles to burst triggered the global recessions of 2008-2009, resulting in the loss of tens of trillions of dollars.

You know, a trillion here and a trillion there, and pretty soon you are talking about real money.

Bubbles as Deviations from Values Linked to Economic Fundamentals

The first is simply that –

An asset price bubble is a price acceleration that cannot be explained in terms of the underlying fundamental economic variables

This comes from Dreger and Zhang, who cite earlier work by Case and Shiller, including their historic paper – Is There a Bubble in the Housing Market? (2003)

Basically, you need a statistical or an econometric model which “explains” price movements in an asset market. While prices can deviate from forecasts produced by this model on a temporary basis, they will return to the predicted relationship with the set of fundamental variables at some time in the future – eventually, in the long run.

The sustained speculative distortions of the asset market can then be measured with reference to benchmark projections from this type of relationship and current values of the “fundamentals.”

This is the language of co-integrating relationships. The trick, then, is to identify a relationship between the asset price and its fundamental drivers whose residuals are white noise or, at least, ARMA – autoregressive moving average – residuals. Good luck with that!
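To make the “good luck” concrete: after estimating the benchmark regression, one would test whether the residuals are in fact white noise, for example with a Ljung-Box test. A minimal sketch, with hypothetical stand-ins for the price and fundamental series:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox

# Hypothetical stand-ins: 'fundamental' could be rent or income; 'price'
# is the asset price. Replace with real series.
rng = np.random.default_rng(3)
fundamental = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 200)))
price = 2.0 * fundamental + rng.normal(0, 1.0, 200)

resid = sm.OLS(price, sm.add_constant(fundamental)).fit().resid

# Ljung-Box null hypothesis: no autocorrelation up to the given lag.
# A small p-value means the residuals are not white noise, undermining
# the benchmark relationship.
print(acorr_ljungbox(resid, lags=[12]))
```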

Bubbles as Faster-Than-Exponential Growth

The second definition comes from Didier Sornette and basically is that an asset bubble exists when prices or values are accelerating at a faster-than-exponential rate.

This phenomenon is generated by behaviors of investors and traders that create positive feedback in the valuation of assets and unsustainable growth, leading to a finite-time singularity at some future time… From a technical view point, the positive feedback mechanisms include (i) option hedging, (ii) insurance portfolio strategies, (iii) market makers bid-ask spread in response to past volatility, (iv) learning of business networks and human capital build-up, (v) procyclical financing of firms by banks (boom vs contracting times), (vi) trend following investment strategies, (vii) asymmetric information on hedging strategies, (viii) the interplay of mark-to-market accounting and regulatory capital requirements. From a behavior viewpoint, positive feedbacks emerge as a result of the propensity of humans to imitate, their social gregariousness and the resulting herding.

Fundamentals still benchmark asset prices in this approach, as illustrated by this chart.

[Figure: US GDP and US stock market valuation growing at approximately the same rate over the long run]

Here GDP and U.S. stock market valuation grow at approximately the same rate, suggesting a “cointegrated relationship,” consistent with the first definition of a bubble introduced above.

However, the market has shown three multiple-year periods of excessive valuation, followed by periods of consolidation.

These periods of bubbly growth in prices are triggered by expectations of higher prices and the ability to speculate, and are given precise mathematical expression in the JLS (Johansen-Ledoit-Sornette) model.

The behavioral underpinnings are familiar and can be explained with reference to housing, as follows.

The term “bubble” refers to a situation in which excessive future expectations cause prices to rise. For instance, during a house-price bubble, buyers think that a home that they would normally consider too expensive is now an acceptable purchase because they will be compensated by significant further price increases. They will not need to save as much as they otherwise might, because they expect the increased value of their home to do the saving for them. First-time homebuyers may also worry during a bubble that if they do not buy now, they will not be able to afford a home later. Furthermore, the expectation of large price increases may have a strong impact on demand if people think that home prices are very unlikely to fall, and certainly not likely to fall for long, so that there is little perceived risk associated with an investment in a home.

The concept of “faster-than-exponential” growth also is explicated in this chart from a recent article (2011), originally from Why Stock Markets Crash, published by Princeton University Press.

[Figure: faster-than-exponential growth in prices, from Why Stock Markets Crash]

In a recent methodological piece, Sornette and co-authors cite an extensive list of applications of their approach.

..the JLS model has been used widely to detect bubbles and crashes ex-ante (i.e., with advanced documented notice in real time) in various kinds of markets such as the 2006-2008 oil bubble [5], the Chinese index bubble in 2009 [6], the real estate market in Las Vegas [7], the U.K. and U.S. real estate bubbles [8, 9], the Nikkei index anti-bubble in 1990-1998 [10] and the S&P 500 index anti-bubble in 2000-2003 [11]. Other recent ex-post studies include the Dow Jones Industrial Average historical bubbles [12], the corporate bond spreads [13], the Polish stock market bubble [14], the western stock markets [15], the Brazilian real (R$) – US dollar (USD) exchange rate [16], the 2000-2010 world major stock indices [17], the South African stock market bubble [18] and the US repurchase agreements market [19].

I refer readers to the above link for the specifics of these references. Note, in general, most citations in this post are available as PDF files from a webpage maintained by the Swiss Federal Institute of Technology.

The Psychology of Asset Bubbles

After wrestling with this literature for several months, including some advanced math and econometrics, it seems to me that it all comes down, in the heat of the moment just before the bubble crashes, to psychology.

How does that go?

A recent paper coauthored by Sornette, Cauwels, and others summarizes the group psychology behind asset bubbles.

In its microeconomic formulation, the model assumes a hierarchical organization of the market, comprised of two groups of agents: a group with rational expectations (the value investors), and a group of “noise” agents, who are boundedly rational and exhibit herding behavior (the trend followers). Herding is assumed to be self-reinforcing, corresponding to a nonlinear trend following behavior, which creates price-to-price positive feedback loops that yield an accelerated growth process. The tension and competition between the rational agents and the noise traders produces deviations around the growing prices that take the form of low-frequency oscillations, which increase in frequency due to the acceleration of the price and the nonlinear feedback mechanisms, as the time of the crash approaches.

Examples of how “irrational” agents might proceed to fuel an asset bubble are given in a selective review of the asset bubble literature developed recently by Anna Scherbina, from which I take several extracts below.

For example, there is “feedback trading” involving traders who react solely to past price movements (momentum traders?). Scherbina writes,

In response to positive news, an asset experiences a high initial return. This is noticed by a group of feedback traders who assume that the high return will continue and, therefore, buy the asset, pushing prices above fundamentals. The further price increase attracts additional feedback traders, who also buy the asset and push prices even higher, thereby attracting subsequent feedback traders, and so on. The price will keep rising as long as more capital is being invested. Once the rate of new capital inflow slows down, so does the rate of price growth; at this point, capital might start flowing out, causing the bubble to deflate.

Other mechanisms are biased self-attribution and the representativeness heuristic. In biased self-attribution,

..people to take into account signals that confirm their beliefs and dismiss as noise signals that contradict their beliefs… Investors form their initial beliefs by receiving a noisy private signal about the value of a security… for example, by researching the security. Subsequently, investors receive a noisy public signal… [can be] assumed to be almost pure noise and therefore should be ignored. However, since investors suffer from biased self-attribution, they grow overconfident in their belief after the public signal confirms their private information and further revise their valuation in the direction of their private signal. When the public signal contradicts the investors’ private information, it is appropriately ignored and the price remains unchanged. Therefore, public signals, in expectation, lead to price movements in the same direction as the initial price response to the private signal. These subsequent price moves are not justified by fundamentals and represent a bubble. The bubble starts to deflate after the accumulated public signals force investors to eventually grow less confident in their private signal.

Scherbina describes the representativeness heuristic as follows.

 The fourth model combines two behavioral phenomena, the representativeness heuristic and the conservatism bias. Both phenomena were previously documented in psychology and represent deviations from optimal Bayesian information processing. The representativeness heuristic leads investors to put too much weight on attention-grabbing (“strong”) news, which causes overreaction. In contrast, conservatism bias captures investors’ tendency to be too slow to revise their models, such that they underweight relevant but non-attention-grabbing (routine) evidence, which causes underreaction… In this setting, a positive bubble will arise purely by chance, for example, if a series of unexpected good outcomes have occurred, causing investors to over-extrapolate from the past trend. Investors make a mistake by ignoring the low unconditional probability that any company can grow or shrink for long periods of time. The mispricing will persist until an accumulation of signals forces investors to switch from the trending to the mean-reverting model of earnings.

Interestingly, several of these “irrationalities” can generate negative as well as positive bubbles.

Finally, Scherbina makes an important admission, namely that

 The behavioral view of bubbles finds support in experimental studies. These studies set up artificial markets with finitely-lived assets and observe that price bubbles arise frequently. The presence of bubbles is often attributed to the lack of common knowledge of rationality among traders. Traders expect bubbles to arise because they believe that other traders may be irrational. Consequently, optimistic media stories and analyst reports may help create bubbles not because investors believe these views but because the optimistic stories may indicate the existence of other investors who do, destroying the common knowledge of rationality.

And let me pin that down further here.

Asset Bubbles – the Evidence From Experimental Economics

Vernon Smith is a pioneer in experimental economics. One of his most famous experiments concerns the genesis of asset bubbles.

Here is a short video about this widely replicated experiment.

Stefan Palan recently surveyed these experiments, and also has a downloadable working paper (2013) which collates data from them.

This article is based on the results of 33 published articles and 25 working papers using the experimental asset market design introduced by Smith, Suchanek and Williams (1988). It discusses the design of a baseline market and goes on to present a database of close to 1600 individual bubble measure observations from experiments in the literature, which may serve as a reference resource for the quantitative comparison of existing and future findings.

A typical pattern of asset bubble formation emerges in these experiments.

[Figure: a typical price bubble and crash in experimental asset markets]

As Smith relates in the video, the experimental market is comprised of student subjects who can both buy and sell an asset which declines in value to zero over a fixed period. Students can earn real money at this, and cannot communicate with others in the experiment.
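The declining fundamental value in this design is simple arithmetic: with an expected dividend of E[d] per period and n trading periods remaining, holding a share is worth n × E[d], which falls linearly to zero. A sketch using parameter values common in the Smith, Suchanek and Williams literature (15 periods, dividends drawn from {0, 8, 28, 60} cents), which should be treated as illustrative:

```python
# Fundamental value in a Smith-Suchanek-Williams style experimental market:
# the asset pays a random dividend each period and expires worthless.
dividends = [0, 8, 28, 60]                            # equally likely, in cents
expected_dividend = sum(dividends) / len(dividends)   # 24 cents per period
periods = 15

for t in range(1, periods + 1):
    remaining = periods - t + 1
    fundamental = remaining * expected_dividend
    print(f"period {t:2d}: fundamental value = {fundamental:5.0f} cents")
```

The “bubble” in these experiments is the persistent gap between traded prices and this declining line.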

Noahpinion has further discussion of this type of bubble experiment, which, as Palan writes, is the best-documented experimental asset market design in existence and thus offers a superior base of comparison for new work.

There are convergent lines of evidence about the reality and dynamics of asset bubbles, and a growing appreciation that, empirically, asset bubbles share a number of characteristics.

That may not be enough to convince the mainstream economics profession, however, as a humorous piece by Hirshleifer (2001), quoted by a German researcher a few years back, suggests –

In the muddled days before the rise of modern finance, some otherwise-reputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices. What if the creators of asset pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately reflect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model (or DAPM), in which proxies for market misevaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies (such as book/market, earnings/price, and past returns) and mood indicators such as amount of sunlight, turned out to be strong predictors of future returns. At this point, it would seem that the deficient markets hypothesis was the best-confirmed theory in the social sciences.