More on the “Efficiency” of US Stock Markets – Evidence from 1871 to 2003

In a pivotal article, Andrew Lo writes,

Many of the examples that behavioralists cite as violations of rationality that are inconsistent with market efficiency – loss aversion, overconfidence, overreaction, mental accounting, and other behavioral biases – are, in fact, consistent with an evolutionary model of individuals adapting to a changing environment via simple heuristics.

He also supplies an intriguing graph of the rolling first order autocorrelation of monthly returns of the S&P Composite Index from January 1871 to April 2003.

[Chart: rolling first order autocorrelation of monthly returns of the S&P Composite Index, 1871 to 2003]

Lo notes the Random Walk Hypothesis implies that returns are serially uncorrelated, so the serial correlation coefficient ought to be zero – or at least, converging to zero over time as markets move into equilibrium.

However, the above chart shows this does not happen, although there are points in time when the first order serial correlation coefficient is small in magnitude, or even zero.
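
For readers who want to reproduce this kind of chart, here is a minimal Python sketch of a rolling first order autocorrelation of a return series. The pandas-based approach and the 60-month window are illustrative assumptions, not Lo's own code.

```python
import pandas as pd

def rolling_first_order_autocorr(returns: pd.Series, window: int = 60) -> pd.Series:
    """Rolling correlation between a return series and its one-period lag."""
    return returns.rolling(window).corr(returns.shift(1))

# Usage sketch, assuming `monthly_returns` is a pandas Series of monthly
# S&P Composite returns indexed by date:
# rolling_ac = rolling_first_order_autocorr(monthly_returns, window=60)
# rolling_ac.plot(title="Rolling first order autocorrelation of monthly returns")
```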

My point is that the first order serial correlation in daily returns for the S&P 500 is large enough for long enough periods to generate profits above a Buy-and-Hold strategy – that is, if one can negotiate the tricky milliseconds of trading at the end of each trading day.

The King Has No Clothes or Why There Is High Frequency Trading (HFT)

I often present at confabs where there are engineers with management or executive portfolios in the audience. You start the slides but, beforehand, prepare for the tough questions: make sure the numbers in the tables add up, and that round-off errors or simple typos have not crept in to mess things up.

To carry this on a bit, I recall a Hewlett Packard VP whose preoccupation during meetings was fiddling with a calculator – which dates the story a little. In any case, the only thing that really interested this VP was pointing out mistakes in the arithmetic. The idea, apparently, is that if you cannot do addition, why should anyone believe your more complex claims?

I’m bending this around to the theory of efficient markets and rational expectations, by the way.

And I’m playing the role of the engineer.

Rational Expectations

The theory of rational expectations dates at least to the work of Muth in the 1960’s, and is coupled with “efficient markets.”

Lim and Brooks explain market efficiency in – The Evolution of Stock Market Efficiency Over Time: A Survey of the Empirical Literature

The term ‘market efficiency’, formalized in the seminal review of Fama (1970), is generally referred to as the informational efficiency of financial markets which emphasizes the role of information in setting prices… More specifically, the efficient markets hypothesis (EMH) defines an efficient market as one in which new information is quickly and correctly reflected in its current security price… the weak-form version… asserts that security prices fully reflect all information contained in the past price history of the market.

Lim and Brooks focus, among other things, on statistical tests for random walks in financial time series, noting this type of research is giving way to approaches highlighting adaptive expectations.

Proof US Stock Markets Are Not Efficient (or Maybe That HFT Saves the Concept)

I like to read mathematically grounded research, so I have looked at a lot of the papers purporting to show that the hypothesis that stock prices are random walks cannot be rejected statistically.

But really there is a simple constructive proof that this literature is almost certainly wrong.

STEP 1: Grab the data. Download daily adjusted closing prices for the S&P 500 from some free site (e.g., Yahoo Finance). I did this again recently, collecting data back to 1990. Adjusted closing prices, of course, are based on closing prices for the trading day, adjusted for dividends and stock splits. Oh yeah, you may have to re-sort the data from oldest to newest, since a lot of sites present the newest data on top.
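
The original exercise was done by downloading a CSV into a spreadsheet; a rough Python equivalent looks like the sketch below. The yfinance package and the ^GSPC ticker are assumptions about the data source, not part of the original workflow.

```python
import pandas as pd
import yfinance as yf  # one of several free sources of adjusted closing prices

# Daily S&P 500 index data back to 1990; auto_adjust=False keeps a separate
# "Adj Close" column (closing price adjusted for dividends and splits).
spx = yf.download("^GSPC", start="1990-01-03", auto_adjust=False)

# Make sure the data run from oldest to newest.
spx = spx.sort_index()

# Depending on the yfinance version, you may need to flatten MultiIndex columns first.
acp = spx["Adj Close"]
```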

Here’s a graph of the data, which should be very familiar by now.

[Chart: S&P 500 daily adjusted closing prices, 1990 to present]

STEP 2: Create the relevant data structure. In the same spreadsheet, compute the trading-day-over-trading-day growth in the adjusted closing price (ACP). Then, side-by-side with this growth rate of the ACP, create another series which, except for the first value, contains the previous trading day’s growth in the ACP. That gives you two columns of new data.
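
In Python rather than a spreadsheet, the same two columns can be built as follows, continuing from the `acp` series in the sketch above.

```python
import pandas as pd

# Trading-day-over-trading-day growth in the adjusted closing price (ACP),
# plus a second column holding the previous trading day's growth.
df = pd.DataFrame({"acp": acp})
df["growth"] = df["acp"].pct_change()
df["growth_lag1"] = df["growth"].shift(1)
df = df.dropna()
```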

STEP 3: Run adaptive regressions. Most spreadsheet programs include an ordinary least squares (OLS) regression routine. Certainly, Excel does. In any case, you want to set up a regression to predict the growth in the ACP, based on a one-trading-day lag in the growth of the ACP.

I did this, initially, to predict the growth in the ACP for January 3, 2000, based on data extending back to January 3, 1990 – a total of 2,528 trading days. Then, I estimated regressions for each subsequent date, rolling the same 2,528-trading-day window forward.

The resulting “predictions” for the growth in ACP are out-of-sample, in the sense that each prediction stands outside the sample of historic data used to develop the regression parameters used to forecast it.
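
Here is a rough Python equivalent of that rolling, out-of-sample exercise, continuing from the DataFrame built above. The original was done in Excel; statsmodels is my substitution, and this is a sketch rather than the original code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

WINDOW = 2528  # trading days in the estimation window, per the text

preds = {}
for t in range(WINDOW, len(df)):
    history = df.iloc[t - WINDOW:t]               # estimation sample excludes day t
    X = sm.add_constant(history["growth_lag1"])
    fit = sm.OLS(history["growth"], X).fit()
    # Out-of-sample prediction for day t uses day t-1's growth,
    # which is known at the close of day t-1.
    preds[df.index[t]] = (fit.params["const"]
                          + fit.params["growth_lag1"] * df["growth_lag1"].iloc[t])

pred = pd.Series(preds)

# Fraction of days on which the sign of the growth is predicted correctly.
hit_rate = (np.sign(pred) == np.sign(df["growth"].loc[pred.index])).mean()
```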

It needs to be said that these predictions for the growth of the adjusted closing price (ACP) are marginal, correctly predicting the sign of the growth in the ACP only about 53 percent of the time.

An interesting question, though, is whether these just barely predictive forecasts can be deployed in a successful trading model. Would a trading algorithm based on this autoregressive relationship beat the proverbial “buy-and-hold?”

So, for example, suppose we imagine that we can trade each day at, or close enough to, the actual closing price.

Then, you get something like this, if you invest $100,000 at the beginning of 2000 and trade through last week. If the predicted growth in the ACP is positive, you buy at the previous day’s close. If not, you sell at the previous day’s close. For the Buy-and-Hold portfolio, you just invest the $100,000 on January 3, 2000, and travel to Tahiti for 15 years or so.
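
A sketch of that comparison, continuing from the predictions above. Reading “sell” as going short is my assumption; going to cash instead would be the more conservative variant of the rule.

```python
import numpy as np
import pandas as pd

START_CAPITAL = 100_000

position = np.sign(pred)                          # +1 = long, -1 = short, per the rule
strategy_growth = position * df["growth"].loc[pred.index]

trading_portfolio = START_CAPITAL * (1 + strategy_growth).cumprod()
buy_and_hold = START_CAPITAL * (1 + df["growth"].loc[pred.index]).cumprod()
```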

[Chart: value of $100,000 under Buy-and-Hold versus the AR(1) trading strategy, 2000 to 2015]

So, as should be no surprise, the Buy-and-Hold strategy results in replicating the S&P 500 Index on a $100,000 base.

The trading strategy based on the simple first order autoregressive model, on the other hand, achieves more than twice these cumulative earnings.

Now I suppose you could say that all this was an accident, or that it was purely a matter of chance, distributed over more than 3,810 trading days. But I doubt it. After all, this trading interval 2000-2015 includes the worst economic crisis since before World War II.

Or you might claim that the profits from the simple AR trading strategy would be eaten up by transaction fees and taxes. On this point, there were 1,774 trades, for an average gain of about $163 per trade. So, worst case, if trading costs $10 a transaction, and there is a tax rate of 40 percent, that leaves roughly $156K over these 14-15 years in take-away profit, or about $10,000 a year.
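
A back-of-the-envelope version of that arithmetic is below; the ordering of taxes and trading costs in this “worst case” is my assumption.

```python
trades = 1774
avg_gain_per_trade = 163                       # dollars, from the backtest
gross_profit = trades * avg_gain_per_trade     # ≈ $289,000

tax_rate = 0.40
cost_per_trade = 10

after_tax = gross_profit * (1 - tax_rate)            # ≈ $173,500
take_away = after_tax - trades * cost_per_trade      # ≈ $156,000
per_year = take_away / 15                            # ≈ $10,000 a year
```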

Where This May Go Wrong

This does sound like a paean to stock market investing – even “day-trading.”

What could go wrong?

Well, I assume here, of course, that exchange traded funds (ETF’s) tracking the S&P 500 can be bought and sold with the same tactics, as outlined here.

Beyond that, I don’t have access to the data currently (although I will soon), but I suspect high frequency trading (HFT) may stand in the way of realizing this marvelous investing strategy.

So remember, you have to trade some small instant before the market close to implement this trading strategy. But that means you get into the turf of the high frequency traders. And, as previous posts here observe, all kinds of unusual things can happen in the blink of an eye, faster than any human response time.

So – a conjecture. I think the choicest situations, from the standpoint of this more or less macro, interday perspective, may be precisely the places where you see huge spikes in HFT volume. This is a proposition that can be tested.

I also think something like this has to be appealed to in order to save the efficient markets hypothesis, or rational expectations. But in this case, it is not the rational expectations of human subjects, but the presumed rationality of algorithms and robots, as it were, which may be driving the market, when push comes to shove.


Scalability of the Pvar Stock Market Forecasting Approach

Ok, I am documenting and extending a method of forecasting stock market prices based on what I call Pvar models. Here Pvar stands for “proximity variable” – or, more specifically, variables based on the spread or difference between the opening price of a stock, ETF, or index, and the high or low of the previous period. These periods can be days, groups of days, weeks, months, and so forth.
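
To make that definition concrete, here is a minimal sketch of how such proximity variables might be constructed from daily open/high/low data. The column names, and scaling the spread by the opening price, are illustrative assumptions; the exact specification is not published here.

```python
import pandas as pd

def add_pvar_features(ohlc: pd.DataFrame) -> pd.DataFrame:
    """Add proximity variables: spread between the current open and the
    previous period's high and low, scaled by the current open."""
    out = ohlc.copy()
    out["prox_high"] = (out["Open"] - out["High"].shift(1)) / out["Open"]
    out["prox_low"] = (out["Open"] - out["Low"].shift(1)) / out["Open"]
    return out.dropna()

# For weekly, monthly, or 20-day groupings, resample the OHLC data to the
# desired period first, then apply the same construction.
```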

I share features of these models and some representative output on this blog.

And, of course, I continue to have wider interests in forecasting controversies, issues, methods, as well as the global economy.

But for now, I’ve got hold of something, and since I appreciate your visits and comments, let’s talk about “scalability.”

Forecast Error and Data Frequency

Years ago, when I first heard of the M-competition (probably later than for some), I was intrigued by reports of how forecast error blows up “three or four periods in the forecast horizon,” almost no matter what the data frequency. So, if you develop a forecast model with monthly data, forecast error starts to explode three or four months into the forecast horizon. If you use quarterly data, you can push the error boundary out three or four quarters, and so forth.

I have not seen mention of this result so much recently, so my memory may be playing tricks.

But the basic concept seems sound. There is irreducible noise in data and in modeling. So whatever data frequency you are analyzing, it makes sense that forecast errors will start to balloon more or less at the same point in the forecast horizon – in terms of intervals of the data frequency you are analyzing.

Well, this concept seems to emerge in forecasts of stock market prices, when I apply the analysis based on these proximity variables.

Prediction of Highs and Lows of Microsoft (MSFT) Stock at Different Data Frequencies

What I have discovered is that in order to predict over longer forecast horizons, when it comes to stock prices, it is necessary to look back over longer historical periods.

Here are some examples of scalability in forecasts of the high and low of MSFT.

Forecasting 20 trading days ahead, you get this type of chart for recent 20-day periods.

[Chart: 20-trading-day-ahead forecasts of the MSFT high and low, with actual closing prices]

One of the important things to note is that these are out-of-sample forecasts and that, generally, the forecast highs and lows bracket the actual closing prices for these 20-trading-day periods.

Here is a comparable chart for 10 trading days.

[Chart: 10-trading-day-ahead forecasts of the MSFT high and low, with actual closing prices]

Same data, forecasts also are out-of-sample, and, of course, there are more closing prices to chart, too.

Finally, here is a very busy chart with forecasts by trading day.

[Chart: trading-day-by-trading-day forecasts of the MSFT high and low]

Now there are several key points to take away from these charts.

First, the predictions of MSFT high and low prices for these periods are developed by similar forecast models, at least with regard to the specification of explanatory variables. Also, the Pvar method works for specific stocks, as well as for stock market indexes and ETF’s that might track them.

However, and this is another key point, the definitions of these variables shift with the periods being considered.

So the high for MSFT by trading day is certainly different from the MSFT high over groups of 20 trading days, and so forth.

In any case, there is remarkable scalability with Pvar models, all of which suggests they capture some of the interplay between long and shorter term trading.

While I am handing out conjectures, here is another one.

I think it will be possible to conduct a “causal analysis” to show that the Pvar variables reflect or capture trader actions, and that these actions tend to drive the market.

Pvar Models for Forecasting Stock Prices

When I began this blog three years ago, I wanted to deepen my understanding of technique – especially stuff growing up alongside Big Data and machine learning.

I also was encouraged by Malcolm Gladwell’s 10,000 hour idea – finding it credible from past study of mathematical topics. So maybe my performance as a forecaster would improve by studying everything about the subject.

Little did I suspect I would myself stumble on a major forecasting discovery.

But, as I am wont to quote these days, even a blind pig uncovers a truffle from time to time.

Forecasting Stock Prices

My discovery pertains to forecasting stock prices.

Basically, I have stumbled on a method of developing much more accurate forecasts of high and low stock prices, given the opening price in a period. These periods can be days, groups of days, weeks, months, and, based on what I present here – quarters.

Additionally, I have discovered a way to translate these results into much more accurate forecasts of closing prices over long forecast horizons.

I would share the full details, except I need some official acknowledgement for my work (in process) and, of course, my procedures lead to profits, so I hope to recover some of what I have invested in this research.

Having struggled through a maze of ways of doing this, however, I feel comfortable sharing a key feature of my approach – which is that it is based on the spreads between opening prices and the high and low of previous periods. Hence, I call these “Pvar models” for proximity variable models.

There is really nothing in the literature like this, so far as I am able to determine – although the discussion of 52-week-high investing captures some of the spirit.

S&P 500 Quarterly Forecasts

Let’s look at an example – forecasting quarterly closing prices for the S&P 500, shown in this chart.

[Chart: S&P 500 quarterly closing prices]

We are all familiar with this series. And I think most of us are worried that after the current runup, there may be another major correction.

In any case, this graph compares out-of-sample forecasts of ARIMA(1,1,0) and Pvar models. The ARIMA forecasts are estimated by the off-the-shelf automatic forecast program Forecast Pro. The Pvar models are estimated by ordinary least squares (OLS) regression, using Matlab and Excel spreadsheets.

[Chart: out-of-sample Pvar and ARIMA(1,1,0) forecasts of S&P 500 quarterly closing prices, 2005 to present]

The solid red line shows the movement of the S&P 500 from 2005 to just recently. Of course, the big dip in 2008 stands out.

The blue line charts out-of-sample forecasts of the Pvar model, which are, from visual inspection, clearly superior to the ARIMA forecasts, in orange.

And note the meaning of “out-of-sample” here. Parameters of the Pvar and ARIMA models are estimated over historic data which do not include the prices in the period being forecast. So the results are strictly comparable with applying these models today and checking their performance over the next three months.

The following bar chart shows the forecast errors of the Pvar and ARIMA forecasts.

[Chart: forecast errors of the Pvar and ARIMA forecasts, by quarter]

Thus, the Pvar model forecasts are not always more accurate than ARIMA forecasts, but clearly do significantly better at major turning points, like the 2008 recession.

The mean absolute percent errors (MAPE) for the two approaches are 7.6 and 10.2 percent, respectively.
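
For reference, MAPE here is the standard measure, which in code is simply the following (the array names are placeholders):

```python
import numpy as np

def mape(actual, forecast) -> float:
    """Mean absolute percent error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# e.g. mape(sp500_actuals, pvar_forecasts) and mape(sp500_actuals, arima_forecasts)
```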

This comparison is intriguing, since Forecast Pro automatically selected an ARIMA(1,1,0) model in each instance of its application to this series. An ARIMA(1,1,0) model is an autoregression on the first differences of a time series, which already challenges, to some extent, the received wisdom that stock prices are random walks. But Pvar poses an even more significant challenge to versions of the efficient market hypothesis, since Pvar models pull variables from the time series to predict the time series – something you are really not supposed to be able to do, if markets are, as it were, “efficient.” Furthermore, this price predictability is persistent, and not just a fluke of some special period of market history.

I will have further comments on the scalability of this approach soon. Stay tuned.

Some Thoughts for Monday

There’s a kind of principle in invention and innovation which goes like this – often the originator of new ideas and approaches is a kind of outsider, stumbling on a discovery by pursuing avenues others thought, through training, would be fruitless. Or at least this innovator pursues a line of research outside of the mainstream – where accolades are being awarded.

You can make too much of this, but it does have wide applicability.

In science, for example, it’s the guy from the out-of-the-way school, who makes the important discovery, then gets recruited to the big time. I recall reading about the migration of young academics from lesser schools to major institutions – Berkeley and the Ivy League – after an important book or discovery.

And, really, a lot of information technology (IT) was launched by college drop-outs, such as the estimable Mr. Bill Gates, or the late Steve Jobs.

This is a happy observation in a way, because it means the drumbeat of bad news from, say, the Ukrainian or Syrian fronts, or insight such as in Satyajit Das’ The Sum of All Our Fears! The Outlook for 2015, is not the whole story. There are “sideways movements” of change which can occur, precisely because they are not obvious to mainstream observers.

Without innovation, our goose is cooked.

I’m going to write more on innovation this week, detailing some of my more recent financial and stock market research under that heading.

But for now, let me comment on the “libertarian” edge that accompanies a lot of innovation, these days.

The new peer-to-peer (P2P) “sharing” or social access services provide great examples.

Uber, Lyft, Airbnb – these companies provide access to rides, vehicles, and accommodations. They rely on bidirectional rating systems, background checks, frictionless payment systems, and platforms that encourage buyers and sellers to get to know each other face-to-face before doing business. With venture funding from Wall Street and Silicon Valley, their valuations have risen dramatically. Uber’s valuation has grown to an estimated $40 billion, making it one of the 150 biggest companies in the world – larger than Delta, FedEx or Viacom. Airbnb coordinates lodging for an estimated 425,000 persons a night, and has an estimated valuation of $13.5 billion, almost half as much as 96-year-old Hilton Worldwide.

There are increased calls for regulation of these companies, as they bite into markets dominated by the traditional hotel and hospitality sector, or taxi-cab companies. Clearly, raising hundreds of millions in venture capital can impart hubris to top management, as in the mad threats coming from an Uber executive against journalists who report, for example, sexual harassment of female customers by Uber drivers.

No one should attempt to stop the push-and-pull of regulation and disruptive technology, however. Innovations in P2P platforms, pioneered by eBay, pave the way for cultural and institutional innovation. At the same time, I feel better about accepting a ride within the Uber system if I know the driver is insured and has a safe vehicle.

The Greek Conundrum

I’ve been focused on stock price forecast models, recently, and before that, on dynamics of oil prices.

However, it’s clear that almost any global market these days can be affected by developments in Europe.

There’s an excellent backgrounder to the crisis over restructuring Greek debt. See Greece, Its International Competitors and the Euro by the Turkish financial analyst T. Sabri Öncü – a PDF from the Economic and Political Weekly, an Indian Journal.

According to Öncü, the Greeks got in trouble with loans to finance consumption and nonproductive spending, when and after they joined the Eurozone in 2001. The extent of the problem was masked by accounting smoke and mirrors, only being revealed in 2009. Since then, “bailouts” from European banking authorities have been designed to ensure steady repayment of this debt to German and French banks, among others, although some Greek financial parties have benefited also.

Still, as Öncü writes,

Fast forward to today, despite two bailouts and adjustment programmes Greece has been in depression since the beginning of 2009. Greece’s GDP is down about 25% from its peak in 2008, unemployment is at about 25%, youth unemployment is above 50%, Greece’s public debt to GDP ratio is at about a mind-boggling 175% and many Greeks are lining up for soup in front of soup kitchens reminiscent of the soup kitchens of the Great Depression of 1929.

As this post is written, negotiations between the new Syriza government and European authorities have broken down, but here is an interesting video outlining the opposing positions, to an extent, prior to Monday.

Bruegel’s Interview: Debt Restructuring & Greece

Austerity is on the line here, since it seems clear Greece can never repay its debts as currently scheduled, even with imposing further privations on the Greek population.

Forecasting Google’s Stock Price (GOOG) On 20-Trading-Day Horizons

Google’s stock price (GOOG) is relatively volatile, as the following chart shows.

[Chart: GOOG daily closing prices]

So it’s interesting that a stock market forecasting algorithm can produce the following 20-trading-day-ahead forecasts for GOOG, for the recent period.

[Chart: 20-trading-day-ahead PVar forecasts for GOOG]

The forecasts in the above chart, as are those mentioned subsequently, are out-of-sample predictions. That is, the parameters of the forecast model – which I call the PVar model – are estimated over one set of historic prices. Then, the forecasts from PVar are generated with values for the explanatory variables that are “outside” or not the same as this historic data.

How good are these forecasts and how are they developed?

Well, generally forecasting algorithms are compared with benchmarks, such as an autoregressive model or a “no-change” forecast.

So I constructed an autoregressive (AR) model for the Google closing prices, sampled at 20 day frequencies. This model has ten lagged versions of the closing price series, so I do not just rely here on first order autocorrelations.
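
A sketch of such a benchmark in Python is below. The use of statsmodels’ AutoReg, and sampling every 20th trading day, are my reading of the setup, not the code actually used.

```python
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg

# `goog_close` is assumed to be a pandas Series of daily GOOG closing prices.
sampled = goog_close.iloc[::20].reset_index(drop=True)   # every 20th trading day

ar_fit = AutoReg(sampled, lags=10).fit()    # ten lags of the sampled closing price
next_period_forecast = ar_fit.forecast(steps=1)   # one step = 20 trading days ahead
```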

Here is a comparison of the 20 trading-day-ahead predictions of this AR model, the above “proximity variable” (PVar) model which I take credit for, and the actual closing prices.

[Chart: 20-trading-day-ahead forecasts from the AR and PVar models, compared with actual GOOG closing prices]

As you can see, the AR model performs worse than the PVar model, although the two share some values toward the end of the forecast series.

The mean absolute percent error (MAPE) of the AR model, over a period more extended than shown in the graph, is 7.0 percent, compared with 5.1 percent for PVar. This comparison is calculated over data from 4/20/2011 onward.

So how do I do it?

Well, since these models show so much promise, it makes sense to keep working on them, making improvements. However, previous posts here give broad hints, indeed pretty well laying out the framework, at least on an introductory basis.

Essentially, I move from predicting highs and lows to predicting closing prices.

To predict highs and lows, my post “further research” states

Now, the predictive models for the daily high and low stock price are formulated, as before, keying off the opening price in each trading day. One of the key relationships is the proximity of the daily opening price to the previous period high. The other key relationship is the proximity of the daily opening price to the previous period low. Ordinary least squares (OLS) regression models can be developed which do a good job of predicting the direction of change of the daily high and low, based on knowledge of the opening price for the day.

Other posts present actual regression models, although these are definitely prototypes, based on what I know now.
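
In the same spirit, a prototype of such a regression might look like the following sketch, which uses the proximity variables defined in an earlier post, not the author’s actual specification.

```python
import pandas as pd
import statsmodels.api as sm

# `df` is assumed to carry daily Open, High, Low columns plus the proximity
# variables prox_high and prox_low constructed earlier.
df["high_change"] = df["High"] / df["High"].shift(1) - 1   # growth in the daily high

data = df[["prox_high", "prox_low", "high_change"]].dropna()
X = sm.add_constant(data[["prox_high", "prox_low"]])
high_model = sm.OLS(data["high_change"], X).fit()

# Predicted direction of change of the daily high, given today's opening price:
predicted_up = high_model.predict(X) > 0
```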

Why Does This Work?

I’ll bet this works because investors often follow simple rules such as “buy when the opening price is sufficiently greater than the previous period high” or “sell, if the opening price is sufficiently lower than the previous period low.”

I have assembled evidence, based on time variation in the predictive coefficients of the PVar variables, which I probably will put out here sometime.

But the point is that momentum trading is a major part of stock market activity, not only in the United States, but globally. There’s even research claiming to show that momentum traders do better than others, although that’s controversial.

This means that the daily price record for a stock, the opening, high, low, and closing prices, encode information that investors are likely to draw upon over different investing horizons.

I’m pleased these insights open up many researchable questions. I predict all this will lead to wholly new generations of models in stock market analysis. And my guess, and so far it is largely just that, is that these models may prove more durable than many insights into patterns of stock market prices – due to a sort of self-confirming aspect.

Modeling High Tech – the Demand for Communications Services

A colleague was kind enough to provide me with a copy of –

Demand for Communications Services – Insights and Perspectives, Essays in Honor of Lester D. Taylor, Alleman, NíShúilleabháin, and Rappoport, editors, Springer 2014

Some essays in this Festschrift for Lester Taylor are particularly relevant, since they deal directly with forecasting the disarray caused by disruptive technologies in IT markets and companies.

Thus, Mohsen Hamoudia in “Forecasting the Demand for Business Communications Services” observes about the telecom space that

“…convergence of IT and telecommunications market has created more complex behavior of market participants. Customers expect new product offerings to coincide with these emerging needs fostered by their growth and globalization. Enterprises require more integrated solutions for security, mobility, hosting, new added-value services, outsourcing and voice over internet protocol (VoiP). This changing landscape has led to the decline of traditional product markets for telecommunications operators.”

In this shifting landscape, it is nothing less than heroic to sort out the “demand variables” and “independent variables” and produce useful demand forecasts from three-stage least squares (3SLS) models, as Mohsen Hamoudia does in his analysis of BCS.

Here is Hamoudia’s schematic of supply and demand in the BCS space, as of a 2012 update.

[Chart: Hamoudia’s schematic of supply and demand in the BCS space, 2012 update]

Other cutting-edge contributions, dealing with shifting priorities of consumers, faced with new communications technologies and services, include, “Forecasting Video Cord-Cutting: The Bypass of Traditional Pay Television” and “Residential Demand for Wireless Telephony.”

Festschrift and Elasticities

This Springer Festschrift is distinctive inasmuch as Professor Taylor himself contributes papers – one a reminiscence titled “Fifty Years of Studying Economics.”

Taylor, of course, is known for his work in the statistical analysis of empirical demand functions and broke ground with two books, Telecommunications Demand: A Survey and Critique (1980) and Telecommunications Demand in Theory and Practice (1994).

Accordingly, forecasting and analysis of communications and high tech are a major focus of several essays in the book.

Elasticities are an important focus of statistical demand analysis. They flow nicely from double logarithmic or log-log demand specifications – since, then, elasticities are constant. In a simple linear demand specification, of course, the price elasticity varies across the range of prices and demand, which complicates testimony before public commissions, to say the least.
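
To spell out the point: with a log-log specification the price elasticity is the constant slope coefficient, while with a linear specification it varies along the demand curve.

```latex
\ln Q = \alpha + \beta \ln P
\;\;\Longrightarrow\;\;
\varepsilon \equiv \frac{\partial Q}{\partial P}\cdot\frac{P}{Q} = \beta
\qquad\text{(constant)}

Q = a - bP
\;\;\Longrightarrow\;\;
\varepsilon = -\,b\cdot\frac{P}{Q}
\qquad\text{(varies with } P \text{ and } Q\text{)}
```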

So it is interesting, in this regard, that Professor Taylor is still active in modeling, contributing to his own Festschrift with a note on translating logs of negative numbers to polar coordinates and the complex plane.

“Pricing and Maximizing Profits Within Corporations” captures the flavor of a telecom regulatory era which is fast receding behind us. The authors, Levy and Tardiff, write that,

During the time in which he was finishing the update, Professor Taylor participated in one of the most hotly debated telecommunications demand elasticity issues of the early 1990’s: how price-sensitive were short-distance toll calls (then called intraLATA long-distance calls)? The answer to that question would determine the extent to which the California state regulator reduced long-distance prices (and increased other prices, such as basic local service prices) in a “revenue-neutral” fashion.

Followup Workshop

Research in this volume provides a good lead-up to a forthcoming International Institute of Forecasters (IIF) workshop – the 2nd ICT and Innovation Forecasting Workshop to be held this coming May in Paris.

The dynamic, ever changing nature of the Information & Communications Technology (ICT) Industry is a challenge for business planners and forecasters. The rise of Twitter and the sudden demise of Blackberry are dramatic examples of the uncertainties of the industry; these events clearly demonstrate how radically the environment can change. Similarly, predicting demand, market penetration, new markets, and the impact of new innovations in the ICT sector offer a challenge to businesses and policymakers. This Workshop will focus on forecasting new services and innovation in this sector as well as the theory and practice of forecasting in the sector (Telcos, IT providers, OTTs, manufacturers). For more information on venue, organizers and registration, download the brochure.

Top Forecasters of the US Economy, 2013-2014

Once again, Christophe Barraud, a French economist based in Paris, is ranked as the “best forecaster of the US economy” by Bloomberg (see here).

This is quite an accomplishment, considering that it is based on forecasts for 14 key monthly indicators including CPI, Durable Goods Orders, Existing Home Sales, Housing Starts, IP, ISM Manufacturing, ISM Nonmanufacturing, New Home Sales, Nonfarm Payrolls, Personal Income, Personal Spending, Retail Sales, Unemployment and GDP.

For this round, Bloomberg considered two years of data ending November 2014.

Barraud was #1 in the rankings for 2011-2012 also.

In case you wanted to take the measure of such talent, here is a recent interview with Barraud conducted by Le Figaro (in French).

The #2 slot in the Bloomberg rankings of best forecasters of the US economy went to Jim O’Sullivan of High Frequency Economics.

Here is just an excerpt from a subscription interview with O’Sullivan – again, to take the measure of the man.

While I have been absorbed in analyzing a statistical/econometric problem, a lot has transpired – in Switzerland, in Greece and the Ukraine, and in various global regions. While I am optimistic in outlook presently, I suspect 2015 may prove to be a year of surprises.

On Self-Fulfilling Prophecy

In their excellent “Forecasting Stock Returns” in the Handbook of Economic Forecasting, David Rapach and Guofu Zhou write,

While stock return forecasting is fascinating, it can also be frustrating. Stock returns inherently contain a sizable unpredictable component, so that the best forecasting models can explain only a relatively small part of stock returns. Furthermore, competition among traders implies that once successful forecasting models are discovered, they will be readily adopted by others; the widespread adoption of successful forecasting models can then cause stock prices to move in a manner that eliminates the models’ forecasting ability…

Almost an article of faith currently, this perspective seems to rule out other reactions to forecasts which have been important in economic affairs, namely the self-fulfilling prophecy.

Now, as “self-fulfilling prophecy” entered the lexicon, it referred to a prediction which originally was in error, but which became true because people believed it was true and acted upon it.

Bank runs are the classic example.

The late sociologist Robert K. Merton wrote of the Last National Bank in his classic Social Theory and Social Structure, but there is no need for recourse to apocryphal history. Gary Richardson of the Federal Reserve Bank of Richmond has a nice writeup – Banking Panics of 1930 and 1931.

..Caldwell was a rapidly expanding conglomerate and the largest financial holding company in the South. It provided its clients with an array of services – banking, brokerage, insurance – through an expanding chain controlled by its parent corporation headquartered in Nashville, Tennessee. The parent got into trouble when its leaders invested too heavily in securities markets and lost substantial sums when stock prices declined. In order to cover their own losses, the leaders drained cash from the corporations that they controlled.

On November 7, one of Caldwell’s principal subsidiaries, the Bank of Tennessee (Nashville) closed its doors. On November 12 and 17, Caldwell affiliates in Knoxville, Tennessee, and Louisville, Kentucky, also failed. The failures of these institutions triggered a correspondent cascade that forced scores of commercial banks to suspend operations. In communities where these banks closed, depositors panicked and withdrew funds en masse from other banks. Panic spread from town to town. Within a few weeks, hundreds of banks suspended operations. About one-third of these organizations reopened within a few months, but the majority were liquidated (Richardson 2007).

Of course, most of us know but choose to forget these examples, for a variety of reasons – the creation of the Federal Deposit Insurance Corporation has removed most of the threat, that was a long time ago, and so forth.

So it was with interest that I discovered a recent paper by researchers at Caltech and UCLA’s Anderson School of Management – The Self-Fulfilling Prophecy of Popular Asset Pricing Models. The authors explore the impact of delegating investment decisions to investment professionals who, by all evidence, apply discounted cash flow models that are disconnected from investors’ individual utility functions.

Despite its elegance, the consumption-based model has one glaring deficiency.

The standard model and its more conventional variants have failed miserably at explaining the cross-section of returns; even tortured versions of the standard model have struggled to match data.

The authors then propose a Gedanken experiment in which discounted cash flow models are used by the professional money managers to whom individuals delegate their investments.

The upshot –

Our thought experiment has an intriguing and heretofore unappreciated implication— there is a feedback relation between asset pricing models and the cross-section of expected returns. Our analysis implies that the cross-section of expected returns is not only described by theories of asset pricing, it is also determined by them.

I think Cornell and Hsu are on to something here.

More specifically, I have been trying to understand how to model a trading situation in which predictions of stock high and low prices in a period are self-confirming or self-fulfilling.

Suppose my prediction is that the daily high of Dazzle will be above yesterday’s daily high, if the opening price is above yesterday’s opening price. Then, if this persuades you to buy shares of Dazzle, it would seem that you contribute to the tendency for the stock price to increase. Furthermore, I don’t tell you exactly when the daily high will be reached, so I sort of put you in play. The onus is on you to make the right moves. The forecast does not come under suspicion.

As something of a data scientist, I think I can report that models of stock market trading at the level of agents participating in the market are not a major preoccupation of market analysts or theorists. The starting point seems to be Walras, and the problem is how to set the price adjustment mechanism, since the tâtonnement is obviously unrealistic.

That then brings us probably to experimental economics, which shares a lot of turf with what is called behavioral economics.

The other possibility is simply to observe stock market prices and show that, quite generally, this type of rule must be at play and that, because it is not inherently given to be true, it must be creating the conditions of its own success, to an extent.