# Predicting the Stock Market, Making Profits in the Stock Market

Often, working with software and electronics engineers, a question comes up – “if you are so good at forecasting (company sales, new product introductions), why don’t you forecast the stock market?” This might seem to be a variant of “if you are so smart, why aren’t you rich?” but I think it usually is asked more out of curiosity than malice.

In any case, my standard reply has been that you basically could not forecast the stock market – that the stock market was probably more or less a random walk. If it were possible to forecast the stock market, someone would have done it. And the effect of successful forecasts would be to nullify further possibility of forecasting. I own an early edition of Burton Malkiel’s A Random Walk Down Wall Street.

Today, I am in the amazing position of earnestly attempting to bring attention to the fact that, at least since 2008, a major measure of the stock market – the SPY ETF which tracks the S&P 500 Index – in fact can be forecast. Or, more precisely, a forecasting model for daily returns of the SPY can lead to sustainable, increasing returns over the past several years, despite the fact that the forecasting model is, by many criteria, a weak predictor.

I think this has to do with special features of this stock market time series which have not, heretofore, received much attention in econometric modeling.

So here are the returns from applying this model to the SPY from early 2008 to early 2014.

I begin with a \$1000 investment 1/22/2008 and trade potentially every day, based on either the Trading Program or a Buy & Hold strategy.

First, the regression model is a most unlikely candidate for making money in the stock market. The R2 or coefficient of determination is 0.0238, implying that the 60 regressors predict only 2.38 percent of the variation in the SPY rates of return. And it’s possible to go on in this vein – for example, the F-statistic indicating whether there is a relation between the regressors and the dependent variable is 1.42, just marginally above the 1 percent significance level, according to my reading of the Tables.

And the regression with 60 regressors predicts the correct sign of the next day’s SPY rate of return only 50.1 percent of the time.

This, of course, is a key fact, since the Trading Program (see below) is triggered by positive predictions of the next day’s rate of return. When the next day rate of return is predicted to be positive and above a certain minimum value, the Trading Program buys SPY with the money on hand from previous sales – or, if the investor is already holding SPY because the previous day’s prediction also was positive, the investor stands pat.

The Conventional Wisdom

Professor Jim Hamilton, one of the principals (with Menzie Chinn) of Econbrowser, had a post recently, On R-squared and economic prediction, which makes the sensible point that R2, or the coefficient of determination, in a regression is not a great guide to predictive performance. The post shows, among other things, that first differences of the daily S&P 500 index values regressed against lagged values of these first differences have low R2 – almost zero.

Hamilton writes,

Actually, there’s a well-known theory of stock prices that claims that an R-squared near zero is exactly what you should find. Specifically, the claim is that everything anybody could have known last month should already have been reflected in the value of pt-1. If you knew last month, when pt-1 was 1800, that this month it was headed to 1900, you should have bought last month. But if enough savvy investors tried to do that, their buy orders would have driven pt-1 up closer to 1900. The stock price should respond the instant somebody gets the news, not wait around a month before changing.

That’s not a bad empirical description of stock prices – nobody can really predict them. If you want a little fancier model, modern finance theory is characterized by the more general view that the product of today’s stock return with some other characteristics of today’s economy (referred to as the “pricing kernel”) should have been impossible to predict based on anything you could have known last month. In this formulation, the theory is confirmed – our understanding of what’s going on is exactly correct – only if when regressing that product on anything known at t-1 we always obtain an R-squared near zero.

Well, I’m in the position here of seeking to correct one of my intellectual mentors. Although Professor Hamilton and I have never met nor communicated directly, I did work my way through Hamilton’s seminal book on time series analysis – and was duly impressed.

I am coming to the opinion that the success of this fairly low-power regression model on the SPY must have to do with special characteristics of the underlying distribution of rates of return.

For example, it’s interesting that the correlations between the (60) regressors and the daily returns are higher when the absolute values of the dependent variable rates of return are greater. There is, in fact, a lot of meaningless buzz at very low positive and negative rates of return. This seems consistent with the odd shape of the residuals of the regression, shown below.

I’ve made this point before, most recently in an earlier 2014 post, Predicting the S&P 500 or the SPY Exchange-Traded Fund, where I actually provide coefficients for an autoregressive model estimated by Matlab’s arima procedure. That estimation, incidentally, takes more account of the non-normal characteristics of the distribution of the rates of return, employing a t-distribution in maximum likelihood estimation of the parameters. It also only uses lagged values of SPY daily returns, and does not include any contribution from the VIX.

I guess in the remote possibility that Jim Hamilton glances at either of these posts, it might seem comparable to reading claims of a perpetual motion machine, a method to square the circle, or something similar – quackery or wrongheadedness and error.

A colleague with a Harvard Ph.D. in applied mathematics, incidentally, has taken the trouble to go over my data and numbers, checking and verifying that I am computing what I say I am computing.

Further details follow on this simple ordinary least squares (OLS) regression model I am presenting here.

Data and the Model

The focus of this modeling effort is on the daily returns of the SPDR S&P 500 (SPY), calculated with daily closing prices, as -1+(today’s closing price/the previous trading day’s closing price). The data matrix includes 30 lagged values of the daily returns of the SPY (SPYRR) along with 30 lagged values of the daily returns of the VIX volatility index (VIXRR). The data span from 11/26/1993 to 1/16/2014 – a total of 5,072 daily returns.
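As a sketch of this data preparation (the function and variable names here are mine, for illustration only), the daily returns and the 60-column lagged data matrix might be built like this:

```python
import numpy as np

def daily_returns(close):
    """Daily rate of return: -1 + today's close / previous close."""
    close = np.asarray(close, dtype=float)
    return close[1:] / close[:-1] - 1.0

def lagged_matrix(spy_rr, vix_rr, n_lags=30):
    """For each day t, stack n_lags lagged SPY returns and n_lags lagged
    VIX returns (most recent lag first) as the regressor row; the target
    is the day-t SPY return."""
    rows, y = [], []
    for t in range(n_lags, len(spy_rr)):
        rows.append(np.concatenate([spy_rr[t - n_lags:t][::-1],
                                    vix_rr[t - n_lags:t][::-1]]))
        y.append(spy_rr[t])
    return np.array(rows), np.array(y)
```

With 30 lags of each series, every row of the data matrix has 60 columns, matching the 60 regressors discussed in the text; the first 30 observations are lost to lag construction.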

There is enough data to create separate training and test samples, which is good, since in-sample performance can be a very poor guide to out-of-sample predictive capabilities. The training sample extends from 11/26/1993 to 1/18/2008, for a total of 3563 observations. The test sample is the complement of this, extending from 1/22/2008 to 1/16/2014, including 1509 cases.

So the basic equation I estimate is of the form

SPYRR(t) = a0 + a1·SPYRR(t-1) + … + a30·SPYRR(t-30) + b1·VIXRR(t-1) + … + b30·VIXRR(t-30)

Thus, the equation has 61 parameters – 60 coefficients multiplying into the lagged returns for the SPY and VIX indices and a constant term.

Estimation Technique

To make this simple, I estimate the above equation with the above data by ordinary least squares, implementing the standard matrix equation b = (XᵀX)⁻¹XᵀY, where ᵀ indicates transpose. I add a leading column of 1’s to the data matrix X to allow for a constant term a0. I do not mean-center or standardize the observations on daily rates of return.
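A minimal sketch of this estimator (function names are mine; numpy’s `solve` stands in for the explicit matrix inverse, which is numerically safer but algebraically equivalent):

```python
import numpy as np

def ols_fit(X, y):
    """OLS via the normal equations b = (X'X)^(-1) X'y, with a leading
    column of ones added for the constant term."""
    Xc = np.column_stack([np.ones(len(X)), X])
    # Solving the linear system is equivalent to, but more stable than,
    # forming the inverse of X'X explicitly.
    return np.linalg.solve(Xc.T @ Xc, Xc.T @ y)

def r_squared(X, y, b):
    """Coefficient of determination of the fitted model."""
    Xc = np.column_stack([np.ones(len(X)), X])
    resid = y - Xc @ b
    return 1.0 - (resid @ resid) / (((y - y.mean()) ** 2).sum())
```

On the actual training sample this yields the low in-sample R2 reported above; the point of the post is that a low R2 does not by itself rule out a profitable trading rule.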

Rule for Trading Program and Summing Up

The Trading Program is the same one I described in earlier blog posts on this topic. Basically, I update forecasts every day and react to the forecast of the next day’s daily return. If it is positive and above a certain minimum, I either buy or hold. If it is not, I sell or do not enter the market. Oh yeah, I start out with \$1000 in all these simulations and only trade with proceeds from this initial investment.
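A toy version of this trading rule, under the stated assumptions (trades execute at the closing price, no transaction costs; the function name and threshold parameter are illustrative, not from the original code):

```python
import numpy as np

def run_trading_program(prices, predicted_next_return, threshold=0.0, stake=1000.0):
    """Hold SPY on days when the forecast of the next day's return exceeds
    the threshold; otherwise stay in cash. Only the proceeds of the initial
    stake are ever traded."""
    cash, shares = stake, 0.0
    for t in range(len(prices) - 1):
        if predicted_next_return[t] > threshold:
            if shares == 0.0:                      # buy at today's close
                shares, cash = cash / prices[t], 0.0
            # else: already holding -- stand pat
        elif shares > 0.0:                         # sell at today's close
            cash, shares = shares * prices[t], 0.0
    return cash + shares * prices[-1]              # mark to market at the end
```

For example, with closing prices [100, 110, 100] and forecasts [+0.05, -0.01], the rule buys at 100 and sells at 110, ending with \$1100 versus \$1000 for Buy & Hold.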

The only element of unrealism is that I have to predict the closing price of the SPY some short period before the close of the market to be able to enter my trade. I have not looked closely at this, but I am assuming volatility in the last few seconds is bounded, except perhaps in very unusual circumstances.

I take the trouble to present the results of an OLS regression to highlight the fact that what looks like a weak model in this context can work to achieve profits. I don’t think that point has ever been made. There are, of course, all sorts of possibilities for further optimizing this model.

I also suspect that monetary policy has some role in the success of this Trading Program over this period – so it would be interesting to look at similar models at other times and perhaps in other markets.

# Mergers and Acquisitions

Are we on the threshold of a rise in corporate mergers and acquisitions (M&A)?

According to the KPMG Mergers & Acquisitions Predictor, the answer is ‘yes.’

The world’s largest corporates are expected to show a greater appetite for deals in 2014 compared to 12 months ago, according to analyst predictions. Predicted forward P/E ratios (our measure of corporate appetite) in December 2013 were 16 percent higher than in December 2012. This reflects the last half of the year, which saw a 17 percent increase in forward P/E between June and December 2013. This was compared to a 1 percent fall in the previous 6 months, after concerns over the anticipated mid-year tapering of quantitative easing in the US. The increase in appetite is matched by an anticipated increase of capacity of 12 percent over the next year.

This prediction is based on

..tracking and projecting important indicators 12 months forward. The rise or fall of forward P/E (price/earnings) ratios offers a good guide to the overall market confidence, while net debt to EBITDA (earnings before interest, tax, depreciation and amortization) ratios helps gauge the capacity of companies to fund future acquisitions.

Waves and Patterns in M&A Activity

Mergers and acquisitions tend to occur in waves, or clusters.

It’s not exactly clear what the underlying drivers of M&A waves are, although there is a rich literature on this.

Riding the wave, for example – an Economist article – highlights four phases of merger activity, based on a recent book Masterminding the Deal: Breakthroughs in M&A Strategy and Analysis,

In the first phase, usually when the economy is in poor shape, just a handful of deals are struck, often desperation sales at bargain prices in a buyer’s market. In the second, an improving economy means that finance is more readily available and so the volume of M&A rises—but not fast, as most deals are regarded as risky, scaring away all but the most confident buyers. It is in the third phase that activity accelerates sharply, because the “merger boom is legitimised; chief executives feel it is safe to do a deal, that no one is going to criticise them for it,” says Mr Clark.

This is when the premiums that acquirers are willing to pay over the target’s pre-bid share price start to rise rapidly. In the merger waves since 1980, bid premiums in phase one have averaged just 10-18%, rising in phase two to 20-35%. In phase three, they surge past 50%, setting the stage for the catastrophically frothy fourth and final phase. This is when premiums rise above 100%, as bosses do deals so bad they are the stuff of legend. Thus, the 1980s merger wave ended soon after the disastrous debt-fuelled hostile bid for RJR Nabisco by KKR, a private-equity fund. A bestselling book branded the acquirers “Barbarians at the Gate”. The turn-of-the-century boom ended soon after Time Warner’s near-suicidal (at least for its shareholders) embrace of AOL.

This typology comes from Clark and Mills’ book Masterminding the Deal, which suggests that two-thirds of mergers fail.

In their attempt to assess why some mergers succeed while most fail, the authors offer a ranking scheme by merger type. The most successful deals are made by bottom trawlers (87%-92%). Then, in decreasing order of success, come bolt-ons, line extension equivalents, consolidation mature, multiple core related complementary, consolidation-emerging, single core related complementary, lynchpin strategic, and speculative strategic (15%-20%). Speculative strategic deals, which prompt “a collective financial market response of ‘Is this a joke?’ have included the NatWest/Gleacher deal, Coca-Cola’s purchase of film producer Columbia Pictures, AOL/Time Warner, eBay/Skype, and nearly every deal attempted by former Vivendi Universal chief executive officer Jean-Marie Messier.” (pp. 159-60)

More simply put, acquisitions fail for three key reasons. The acquirer could have selected the wrong target (Conseco/Green Tree, Quaker Oats/Snapple), paid too much for it (RBS Fortis/ABN Amro, AOL/Huffington Post), or poorly integrated it (AT&T/NCR, Terra Firma/EMI, Unum/Provident).

Be all this as it may, the signs point to a significant uptick in M&A activity in 2014. Thus, Dealogic reports that Global Technology M&A volume totals \$22.4bn in 2014 YTD, up from \$6.4bn in 2013 YTD and the highest YTD volume since 2006 (\$34.8bn).

# Predicting Financial Crisis – the Interesting Case of Nassim Taleb

Note: This is a good post from the old series, and I am re-publishing it with the new citation to Taleb’s book in progress Hidden Risk and a new video.

———————————————–

One of the biggest questions is whether financial crises can be predicted in any real sense. This is a major concern of mine. I was deep in the middle of forecasting on an applied basis during 2008-2010, and kept hoping to find proxies to indicate, for example, when we were coming out of it, or whether it would “double-dip.”

Currently, as noted in this blog, a chorus of voices (commentators, analysts, experts) says that all manner of asset bubbles are forming globally, beginning with the US stock and the Chinese real estate markets.

But does that mean that we can predict the timing of this economic and financial crisis, or are we all becoming “Chicken Littles?”

What we want is well-described by Mark Buchanan, when he writes

The challenge for economists is to find those indicators that can provide regulators with reliable early warnings of trouble. It’s a complicated task. Can we construct measures of asset bubbles, or devise ways to identify “too big to fail” or “too interconnected to fail” institutions? Can we identify the architectural features of financial networks that make them prone to cascades of distress? Can we strike the right balance between the transparency needed to make risks evident, and the privacy required for markets to function?

And, ah yes – there is light at the end of the tunnel –

Work is racing ahead. In the U.S., the newly formed Office of Financial Research has published various papers on topics such as stress tests and data gaps — including one that reviews a list of some 31 proposed systemic-risk measures. The economists John Geanakoplos and Lasse Pedersen have offered specific proposals on measuring the extent to which markets are driven by leverage, which tends to make the whole system more fragile.

The Office of Financial Research (OFR) in the Treasury Department was created by the Dodd-Frank legislation, and it is precisely here Nassim Taleb enters the picture, at a Congressional hearing on formation of the OFR.

Mr. Chairman, Ranking Member, Members of the Committee, thank you for giving me the opportunity to testify on the analytical ambitions and centralized risk-management plans of the Office of Financial Research (OFR). I am here primarily as a practitioner of risk – not as an analyst but as a decision-maker, an eyewitness of the poor, even disastrous translation of risk research into practice. I spent close to two decades as a derivatives trader before becoming a full-time scholar and researcher in the areas of risk and probability, so I travelled the road between theory and practice in the opposite direction of what is commonly done. Even when I was in full-time practice I specialized in errors linked to theories, and the blindness from the theories of risk management.

Allow me to present my conclusions upfront and in no uncertain terms: this measure, if I read it well, aims at the creation of an omniscient Soviet-style central risk manager. It makes us fall into the naive illusion of risk management that got us here – the same illusion has led in the past to the blind accumulation of Black Swan risks. Black Swans are these large, consequential, but unpredicted deviations in the eyes of a given observer – the observer does not see them coming, but, by some mental mechanism, thinks that he predicted them. Simply, there are limitations to our ability to measure the risks of extreme events and throwing government money on it will carry negative side effects.

1) Financial risks, particularly those known as Black Swan events, cannot be measured in any possible quantitative and predictive manner; they can only be dealt with in nonpredictive ways. The system needs to be made robust organically, not through centralized risk management. I will keep repeating that predicting financial risks has only worked on computers so far (not in the real world), and there is no compelling reason for that to change – as a matter of fact such class of risks is becoming more unpredictable.

A reviewer in the Harvard Business Review notes Taleb is a conservative with a small c. But this does not mean that he is a toady for the Koch brothers or other special interests. In fact, in this Congressional testimony, Taleb also recommends, as his point #3

..risks need to be handled by the entities themselves, in an organic way, paying for their mistakes as they go. It is far more effective to make bankers accountable for their mistakes than try the central risk manager version of Soviet-style central planner, putting hope ahead of empirical reality.

Taleb’s argument has a mathematical side. In an article in the International Journal of Forecasting appended to his testimony, he develops infographics to suggest that fat-tailed risks are intrinsically hard to evaluate. He also notes, correctly, that in 2008, despite manifest proof to the contrary, leading financial institutions often applied risk models based on the idea that outcomes followed a normal or Gaussian probability distribution. It’s easy to show that this is not the case for daily stock and other returns. The characteristic distributions exhibit excess kurtosis, and are hard to pin down in terms of specific distributions. As Taleb points out, the defining events that might tip the identification one way or another are rare. So mistakes are easy to make, and possibly have big effects.
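The excess-kurtosis point is easy to illustrate with simulated data: normally distributed returns show excess kurtosis near zero, while Student-t returns (one common fat-tailed stand-in, not necessarily the distribution of any particular market) do not. A sketch, with parameter choices that are mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis(x):
    """Sample excess kurtosis; zero in expectation for a normal distribution."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

normal_returns = rng.normal(0.0, 0.01, 100_000)
fat_tailed = rng.standard_t(df=10, size=100_000) * 0.01   # heavier tails than normal

print(excess_kurtosis(normal_returns))   # close to zero
print(excess_kurtosis(fat_tailed))       # clearly positive
```

Fitting a normal model to the fat-tailed series would understate the probability of extreme daily moves, which is the substance of Taleb’s complaint about pre-2008 risk models.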

But Taleb’s extraordinary talent for exposition is on full view in a recent article How To Prevent Another Financial Crisis, coauthored with George Martin. The first paragraphs give us the conclusion,

We believe that “less is more” in complex systems—that simple heuristics and protocols are necessary for complex problems as elaborate rules often lead to “multiplicative branching” of side effects that cumulatively may have first order effects. So instead of relying on thousands of meandering pages of regulation, we should enforce a basic principle of “skin in the game” when it comes to financial oversight: “The captain goes down with the ship; every captain and every ship.” In other words, nobody should be in a position to have the upside without sharing the downside, particularly when others may be harmed. While this principle seems simple, we have moved away from it in the finance world, particularly when it comes to financial organizations that have been deemed “too big to fail.”

Then, the authors drive this point home with a salient reference –

The best risk-management rule was formulated nearly 4,000 years ago. Hammurabi’s code specifies: “If a builder builds a house for a man and does not make its construction firm, and the house which he has built collapses and causes the death of the owner of the house, that builder shall be put to death.” Clearly, the Babylonians understood that the builder will always know more about the risks than the client, and can hide fragilities and improve his profitability by cutting corners—in, say, the foundation. The builder can also fool the inspector (or the regulator). The person hiding risk has a large informational advantage over the one looking for it.

My hat’s off to Taleb. A brilliant example, and the rest of the article bears reading too.

While I have not thrown in the towel when it comes to devising metrics to signal financial crisis, I have to say that thoughts like Taleb’s probability argument occurred to me recently, when considering the arguments over extreme weather events.

Here’s a recent video.

# What is a Market Bubble?

Let’s ask what might seem to be a silly question, but which turns out to be challenging. What is an asset bubble? How can asset bubbles be identified quantitatively?

Let me highlight two definitions – the major ones in the economics and analytical literature. And remember, when working through “definitions,” that the last major asset bubbles that burst triggered the global recessions of 2008-2009, resulting in the loss of tens of trillions of dollars.

You know, a trillion here and a trillion there, and pretty soon you are talking about real money.

Bubbles as Deviations from Values Linked to Economic Fundamentals

The first is simply that –

An asset price bubble is a price acceleration that cannot be explained in terms of the underlying fundamental economic variables.

This comes from Dreger and Zhang, who cite earlier work by Case and Shiller, including their historic paper – Is There A Bubble in the Housing Market (2003)

Basically, you need a statistical or an econometric model which “explains” price movements in an asset market. While prices can deviate from forecasts produced by this model on a temporary basis, they will return to the predicted relationship to the set of fundamental variables at some time in the future, or eventually, or in the long run.

The sustained speculative distortions of the asset market then can be measured with reference to benchmark projections with this type of relationship and current values of the “fundamentals.”

This is the language of co-integrating relationships. The trick, then, is to identify a relationship between the asset price and its fundamental drivers whose residuals are white noise, or at least ARMA – autoregressive moving average – residuals. Good luck with that!
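A bare-bones version of this diagnostic can be run on synthetic data, where the price is constructed to track a random-walk “fundamental” (all names and numbers below are illustrative): regress price on the fundamental and check the residual for autocorrelation.

```python
import numpy as np

def lag1_autocorr(x):
    """First-order autocorrelation of a series; near zero for white noise."""
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

rng = np.random.default_rng(0)
fundamental = np.cumsum(rng.normal(0.0, 1.0, 1000))   # a random-walk "fundamental"
price = 2.0 + 1.5 * fundamental + rng.normal(0.0, 0.5, 1000)

# Regress price on the fundamental; cointegration-style reasoning says the
# residual should be stationary -- here, white noise by construction.
X = np.column_stack([np.ones(1000), fundamental])
b, *_ = np.linalg.lstsq(X, price, rcond=None)
resid = price - X @ b
print(abs(lag1_autocorr(resid)) < 0.2)
```

With real asset prices the residual rarely looks this clean, which is exactly the difficulty the text flags; formal treatments use cointegration tests (e.g. augmented Dickey-Fuller on the residual) rather than a single autocorrelation.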

Bubbles as Faster-Than-Exponential Growth

The second definition comes from Didier Sornette and basically is that an asset bubble exists when prices or values are accelerating at a faster-than-exponential rate.

This phenomenon is generated by behaviors of investors and traders that create positive feedback in the valuation of assets and unsustainable growth, leading to a finite-time singularity at some future time… From a technical view point, the positive feedback mechanisms include (i) option hedging, (ii) insurance portfolio strategies, (iii) market makers bid-ask spread in response to past volatility, (iv) learning of business networks and human capital build-up, (v) procyclical financing of firms by banks (boom vs contracting times), (vi) trend following investment strategies, (vii) asymmetric information on hedging strategies, (viii) the interplay of mark-to-market accounting and regulatory capital requirements. From a behavior viewpoint, positive feedbacks emerge as a result of the propensity of humans to imitate, their social gregariousness and the resulting herding.
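The JLS log-periodic power law that formalizes this can be sketched numerically. The functional form below is the standard one from this literature; the parameter values are made up, chosen only to display the accelerating, oscillation-decorated growth of log price as the critical time tc approaches:

```python
import numpy as np

def lppl_log_price(t, tc, A, B, m, C, omega, phi):
    """JLS log-periodic power law: log price grows faster than exponentially,
    with log-periodic oscillations, as the critical time tc nears."""
    dt = tc - t
    return A + B * dt**m * (1 + C * np.cos(omega * np.log(dt) - phi))

t = np.linspace(0.0, 0.99, 200)               # time axis, with critical time tc = 1
log_p = lppl_log_price(t, tc=1.0, A=1.0, B=-0.5, m=0.5, C=0.05, omega=6.0, phi=0.0)

# "Faster than exponential" means the growth rate of log price itself
# accelerates as t approaches tc.
growth = np.diff(log_p)
print(growth[-1] > growth[0])
```

An ordinary exponential bubble would show a constant growth rate of log price; the hallmark of the JLS regime is that this rate blows up as (tc - t) shrinks.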

Fundamentals still benchmark asset prices in this approach, as illustrated by this chart.

Here GDP and U.S. stock market valuation grow at approximately the same rate, suggesting a “cointegrated relationship,” such as suggested with the first definition of a bubble introduced above.

However, the market has shown three multiple-year periods of excessive valuation, followed by periods of consolidation.

These periods of bubbly growth in prices are triggered by expectations of higher prices and the ability to speculate, and are given precise mathematical expression in the JLS (Johansen-Ledoit-Sornette) model.

The behavioral underpinnings are familiar and can be explained with reference to housing, as follows.

The term “bubble” refers to a situation in which excessive future expectations cause prices to rise. For instance, during a house-price bubble, buyers think that a home that they would normally consider too expensive is now an acceptable purchase because they will be compensated by significant further price increases. They will not need to save as much as they otherwise might, because they expect the increased value of their home to do the saving for them. First-time homebuyers may also worry during a bubble that if they do not buy now, they will not be able to afford a home later. Furthermore, the expectation of large price increases may have a strong impact on demand if people think that home prices are very unlikely to fall, and certainly not likely to fall for long, so that there is little perceived risk associated with an investment in a home.

The concept of “faster-than-exponential” growth also is explicated in this chart from a recent article (2011), and originally from Why Stock Markets Crash, published by Princeton.

In a recent methodological piece, Sornette and co-authors cite an extensive list of applications of their approach.

..the JLS model has been used widely to detect bubbles and crashes ex-ante (i.e., with advanced documented notice in real time) in various kinds of markets such as the 2006-2008 oil bubble [5], the Chinese index bubble in 2009 [6], the real estate market in Las Vegas [7], the U.K. and U.S. real estate bubbles [8, 9], the Nikkei index anti-bubble in 1990-1998 [10] and the S&P 500 index anti-bubble in 2000-2003 [11]. Other recent ex-post studies include the Dow Jones Industrial Average historical bubbles [12], the corporate bond spreads [13], the Polish stock market bubble [14], the western stock markets [15], the Brazilian real (R\$) – US dollar (USD) exchange rate [16], the 2000-2010 world major stock indices [17], the South African stock market bubble [18] and the US repurchase agreements market [19].

I refer readers to the above link for the specifics of these references. Note, in general, most citations in this post are available as PDF files from a webpage maintained by the Swiss Federal Institute of Technology.

The Psychology of Asset Bubbles

After wrestling with this literature for several months, including some advanced math and econometrics, it seems to me that it all comes down, in the heat of the moment just before the bubble crashes, to psychology.

How does that go?

A recent paper coauthored by Sornette, Cauwels, and others summarizes the group psychology behind asset bubbles.

In its microeconomic formulation, the model assumes a hierarchical organization of the market, comprised of two groups of agents: a group with rational expectations (the value investors), and a group of “noise” agents, who are boundedly rational and exhibit herding behavior (the trend followers). Herding is assumed to be self-reinforcing, corresponding to a nonlinear trend following behavior, which creates price-to-price positive feedback loops that yield an accelerated growth process. The tension and competition between the rational agents and the noise traders produces deviations around the growing prices that take the form of low-frequency oscillations, which increase in frequency due to the acceleration of the price and the nonlinear feedback mechanisms, as the time of the crash approaches.

Examples of how “irrational” agents might proceed to fuel an asset bubble are given in a selective review of the asset bubble literature developed recently by Anna Scherbina, from which I take several extracts below.

For example, there is “feedback trading” involving traders who react solely to past price movements (momentum traders?). Scherbina writes,

In response to positive news, an asset experiences a high initial return. This is noticed by a group of feedback traders who assume that the high return will continue and, therefore, buy the asset, pushing prices above fundamentals. The further price increase attracts additional feedback traders, who also buy the asset and push prices even higher, thereby attracting subsequent feedback traders, and so on. The price will keep rising as long as more capital is being invested. Once the rate of new capital inflow slows down, so does the rate of price growth; at this point, capital might start flowing out, causing the bubble to deflate.
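Scherbina’s feedback-trading story can be caricatured in a few lines: each day’s price change attracts buying proportional to the previous change, and the price keeps rising as long as new capital flows in. This is a toy dynamic of my own construction, not a model from her paper:

```python
import numpy as np

def feedback_market(n_days=100, inflow_gain=0.9, shock=0.05):
    """Toy feedback-trading dynamic: an initial positive shock draws in
    buyers whose demand is proportional to the previous day's price change;
    the inflow of new capital decays geometrically."""
    price = [100.0]
    change = shock * price[0]          # initial positive news
    for _ in range(n_days):
        price.append(price[-1] + change)
        change *= inflow_gain          # new capital inflow slows over time
    return np.array(price)

p = feedback_market()
# The price overshoots and then plateaus as inflows fade; modeling the
# "deflating" phase would require allowing outflows (negative changes).
print(p[-1] > p[0])
```

Even this crude mechanism reproduces the qualitative shape Scherbina describes: growth that is sustained only by continuing inflows, and that stalls the moment those inflows slow.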

Other mechanisms are biased self-attribution and the representativeness heuristic. In biased self-attribution,

..people to take into account signals that confirm their beliefs and dismiss as noise signals that contradict their beliefs…. Investors form their initial beliefs by receiving a noisy private signal about the value of a security.. for example, by researching the security. Subsequently, investors receive a noisy public signal…..[can be]  assumed to be almost pure noise and therefore should be ignored. However, since investors suffer from biased self-attribution, they grow overconfident in their belief after the public signal confirms their private information and further revise their valuation in the direction of their private signal. When the public signal contradicts the investors’ private information, it is appropriately ignored and the price remains unchanged. Therefore, public signals, in expectation, lead to price movements in the same direction as the initial price response to the private signal. These subsequent price moves are not justified by fundamentals and represent a bubble. The bubble starts to deflate after the accumulated public signals force investors to eventually grow less confident in their private signal.

Scherbina describes the representativeness heuristic as follows.

The fourth model combines two behavioral phenomena, the representativeness heuristic and the conservatism bias. Both phenomena were previously documented in psychology and represent deviations from optimal Bayesian information processing. The representativeness heuristic leads investors to put too much weight on attention-grabbing (“strong”) news, which causes overreaction. In contrast, conservatism bias captures investors’ tendency to be too slow to revise their models, such that they underweight relevant but non-attention-grabbing (routine) evidence, which causes underreaction… In this setting, a positive bubble will arise purely by chance, for example, if a series of unexpected good outcomes have occurred, causing investors to over-extrapolate from the past trend. Investors make a mistake by ignoring the low unconditional probability that any company can grow or shrink for long periods of time. The mispricing will persist until an accumulation of signals forces investors to switch from the trending to the mean-reverting model of earnings.

Interestingly, several of these “irrationalities” can generate negative, as well as positive, bubbles.

Finally, Scherbina makes an important admission, namely that

The behavioral view of bubbles finds support in experimental studies. These studies set up artificial markets with finitely-lived assets and observe that price bubbles arise frequently. The presence of bubbles is often attributed to the lack of common knowledge of rationality among traders. Traders expect bubbles to arise because they believe that other traders may be irrational. Consequently, optimistic media stories and analyst reports may help create bubbles not because investors believe these views but because the optimistic stories may indicate the existence of other investors who do, destroying the common knowledge of rationality.

And let me pin that down further here.

Asset Bubbles – the Evidence From Experimental Economics

Vernon Smith is a pioneer in experimental economics. One of his most famous experiments concerns the genesis of asset bubbles.

Stefan Palan recently surveyed these experiments, and also has a downloadable working paper (2013) which collates data from them.

This article is based on the results of 33 published articles and 25 working papers using the experimental asset market design introduced by Smith, Suchanek and Williams (1988). It discusses the design of a baseline market and goes on to present a database of close to 1600 individual bubble measure observations from experiments in the literature, which may serve as a reference resource for the quantitative comparison of existing and future findings.

A typical pattern of asset bubble formation emerges in these experiments.

As Smith relates in the video, the experimental market is comprised of student subjects who can both buy and sell an asset which declines in value to zero over a fixed period. Students can earn real money at this, and cannot communicate with others in the experiment.

Noahpinion has further discussion of this type of bubble experiment, which, as Palan writes, is “the best-documented experimental asset market design in existence and thus offers a superior base of comparison for new work.”

There are convergent lines of evidence about the reality and dynamics of asset bubbles, and a growing appreciation that, empirically, asset bubbles share a number of characteristics.

That may not be enough to convince the mainstream economics profession, however, as a humorous piece by Hirshleifer (2001), quoted by a German researcher a few years back, suggests –

In the muddled days before the rise of modern finance, some otherwise-reputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices. What if the creators of asset pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately reflect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model (or DAPM), in which proxies for market misevaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies (such as book/market, earnings/price, and past returns) and mood indicators such as amount of sunlight, turned out to be strong predictors of future returns. At this point, it would seem that the deficient markets hypothesis was the best-confirmed theory in the social sciences.