Category Archives: stock market forecasts

Rational Bubbles

A rational bubble exists when investors are willing to pay more for stocks than is justified by the discounted stream of future dividends. If investors evaluate potential gains from increases in the stock price as justifying movement away from the “fundamental value” of the stock, a self-reinforcing process of price increases can take hold that is nonetheless consistent with “rational expectations.”

This concept has been around for a while, and can be traced to eminences such as Olivier Blanchard, now Chief Economist for the International Monetary Fund (IMF), and Mark Watson, Professor of Economics at Princeton University – (See Bubbles, Rational Expectations and Financial Markets).

In terms of formal theory, Diba and Grossman offer a sophisticated analysis in The Theory of Rational Bubbles in Stock Prices. They invoke intriguing language such as “explosive conditional expectations” and the like.

Since these papers from the 1980’s, the relative size of the financial sector has ballooned, and the notional value of outstanding derivatives now dwarfs the total annual value of production on the planet (See Bank for International Settlements).

And, in the US, we have witnessed two dramatic stock market bubbles, using the phrase here in the more popular “plain-as-the-hand-in-front-of-your-face” sense.

SP500

Following through the metaphor, bursting of the bubble leaves financial plans in shambles, and, from the evidence of parts of Europe at least, can cause significant immiseration of large segments of the population.

It would seem reasonable, therefore, to institute some type of controls as a bubble emerges. Perhaps an increase in financial transactions taxes, or some other tactic to cause investors to hold stocks for longer periods.

The question, then, is whether it is possible to “test” for a rational bubble pre-emptively, that is, before the fact.

So I have been interested to come across more recent analyses of so-called rational bubbles, applying advanced statistical techniques.

I offer some notes and links to download the relevant papers.

These include ground-breaking work by Craine in a working paper Rational Bubbles: A Test.

Then, there are two studies focusing on US stock prices (I include extracts below in italics):

Testing for a Rational Bubble Under Long Memory

We analyze the time series properties of the S&P500 dividend-price ratio in the light of long memory, structural breaks and rational bubbles. We find an increase in the long memory parameter in the early 1990s by applying a recently proposed test by Sibbertsen and Kruse (2009). An application of the unit root test against long memory by Demetrescu et al. (2008) suggests that the pre-break data can be characterized by long memory, while the post-break sample contains a unit root. These results reconcile two empirical findings which were seen as contradictory so far: on the one hand they confirm the existence of fractional integration in the S&P500 log dividend-price ratio and on the other hand they are consistent with the existence of a rational bubble. The result of a changing memory parameter in the dividend-price ratio has an important implication for the literature on return predictability: the shift from a stationary dividend-price ratio to a unit root process in 1991 is likely to have caused the well-documented failure of conventional return prediction models since the 1990s. 

The bubble component captures the part of the share price that is due to expected future price changes. Thus, the price contains a rational bubble, if investors are ready to pay more for the share, than they know is justified by the discounted stream of future dividends. Since they expect to be able to sell the share even at a higher price, the current price, although exceeding the fundamental value, is an equilibrium price. The model therefore allows the development of a rational bubble, in the sense that a bubble is fully consistent with rational expectations. In the rational bubble model, investors are fully cognizant of the fundamental value, but nevertheless they may be willing to pay more than this amount… This is the case if expectations of future price appreciation are large enough to satisfy the rational investor’s required rate of return. To sustain a rational bubble, the stock price must grow faster than dividends (or cash flows) in perpetuity and therefore a rational bubble implies a lack of cointegration between the stock price and fundamentals, i.e. dividends, see Craine (1993).
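The testable implication at the end of this extract – that a rational bubble destroys cointegration between prices and dividends – can be sketched numerically. The following is a toy illustration with simulated series, not the test from the papers cited: residuals from regressing log price on log dividends stay stationary in the no-bubble case, but become highly persistent once an explosive bubble component is added.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 600

def ols_residuals(y, x):
    """Residuals from an OLS regression of y on x (with intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def ar1_coef(e):
    """First-order autoregressive coefficient of a series."""
    return (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])

# Random-walk log dividends, and a log price tied to fundamentals
log_div = np.cumsum(0.001 + 0.01 * rng.standard_normal(n))
p_fund = log_div + 0.05 * rng.standard_normal(n)

# Add an explosive bubble component, destroying cointegration
p_bubble = p_fund + 0.001 * 1.02 ** np.arange(n)

rho_fund = ar1_coef(ols_residuals(p_fund, log_div))      # well below 1: stationary residuals
rho_bubble = ar1_coef(ols_residuals(p_bubble, log_div))  # near or above 1: no cointegration
print(round(rho_fund, 2), round(rho_bubble, 2))
```

The persistence of the regression residuals is the crude diagnostic here; formal tests such as Engle-Granger or the Johansen procedure put critical values on this idea.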

Testing for rational bubbles in a co-explosive vector autoregression

We derive the parameter restrictions that a standard equity market model implies for a bivariate vector autoregression for stock prices and dividends, and we show how to test these restrictions using likelihood ratio tests. The restrictions, which imply that stock returns are unpredictable, are derived both for a model without bubbles and for a model with a rational bubble. In both cases we show how the restrictions can be tested through standard chi-squared inference. The analysis for the no-bubble case is done within the traditional Johansen model for I(1) variables, while the bubble model is analysed using a co-explosive framework. The methodology is illustrated using US stock prices and dividends for the period 1872-2000.

The characterizing feature of a rational bubble is that it is explosive, i.e. it generates an explosive root in the autoregressive representation for prices. 

This is a very interesting analysis, but it involves several stages of statistical testing, all of which depend to some degree on assumptions about the underlying distributions.

Finally, it is interesting to see some of these methodologies for identifying rational bubbles applied to other markets, such as housing, where “fundamental value” has a somewhat different and more tangible meaning.

Explosive bubbles in house prices? Evidence from the OECD countries

We conduct an econometric analysis of bubbles in housing markets in the OECD area, using quarterly OECD data for 18 countries from 1970 to 2013. We pay special attention to the explosive nature of bubbles and use econometric methods that explicitly allow for explosiveness. First, we apply the univariate right-tailed unit root test procedure of Phillips et al. (2012) on the individual countries’ price-rent ratios. Next, we use Engsted and Nielsen’s (2012) co-explosive VAR framework to test for bubbles. We find evidence of explosiveness in many housing markets, thus supporting the bubble hypothesis. However, we also find interesting differences in the conclusions across the two test procedures. We attribute these differences to how the two test procedures control for cointegration between house prices and rent.
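The “right-tailed” unit root idea running through these papers can be sketched with a bare-bones Dickey-Fuller style regression. This is only the core t-statistic, not the recursive procedure of Phillips et al., and proper critical values are ignored here; the point is simply that large positive values of the statistic signal an explosive root.

```python
import numpy as np

def right_tail_df_stat(y):
    """Dickey-Fuller style t-statistic for rho in
    dy_t = alpha + rho * y_{t-1} + e_t; large positive
    values point toward an explosive root."""
    dy = np.diff(y)
    ylag = y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(400))                  # unit root case
explosive = np.cumprod(np.full(400, 1.02)) + rng.standard_normal(400)  # explosive case

stat_walk = right_tail_df_stat(walk)
stat_explosive = right_tail_df_stat(explosive)
print(round(stat_walk, 2), round(stat_explosive, 2))
```

For the random walk the statistic sits in the usual Dickey-Fuller range (zero or below); for the explosive series it is hugely positive.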

Out-Of-Sample R2 Values for PVAR Models

Out-of-sample (OOS) R2 is a good metric for testing whether your predictive relationship has out-of-sample predictability. Checking this for the version of the proximity variable model which is publicly documented, I find an OOS R2 of 0.63 for forecasts of daily high prices.

In other words, 63 percent of the variation of the daily growth in high prices for the S&P 500 is explained by four variables, documented in Predictability of the Daily High and Low of the S&P 500 Index.

This is a really high figure for any kind of predictive relationship involving security prices, so I thought I would put the data out there for anyone interested to check.

OOS R2

This metric is often found in connection with efforts to predict daily or other rates of return on securities, and is commonly defined as

OOS R2 = 1 – [sum of squared (actual – model forecast)] / [sum of squared (actual – benchmark forecast)]

where the sums run over the out-of-sample period and the benchmark is typically the historical mean (or a No Change forecast).

See, for example, Campbell and Thompson.
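Following the Campbell and Thompson convention, the statistic compares the model’s squared forecast errors with those of a simple benchmark. A minimal sketch, with made-up numbers for illustration (Campbell and Thompson use the historical mean return as the benchmark; in this blog’s daily high/low setting a No Change forecast plays the same role):

```python
def oos_r2(actuals, preds, benchmark):
    """Out-of-sample R-squared: 1 - SSE(model) / SSE(benchmark)."""
    sse_model = sum((a - p) ** 2 for a, p in zip(actuals, preds))
    sse_bench = sum((a - b) ** 2 for a, b in zip(actuals, benchmark))
    return 1.0 - sse_model / sse_bench

# Toy data; the benchmark here is the historical-mean forecast
actuals = [1.0, 2.0, 1.5, 2.5]
preds = [1.1, 1.9, 1.6, 2.4]
mean_fc = [sum(actuals) / len(actuals)] * len(actuals)
print(round(oos_r2(actuals, preds, mean_fc), 3))  # → 0.968
```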

The white paper linked above and downloadable from University of Munich archives shows –

Ratios involving the current period opening price and the high or low price of the previous period are significant predictors of the current period high or low price for many stocks and stock indexes. This is illustrated with daily trading data from the S&P 500 index. Regressions specifying these “proximity variables” have higher explanatory and predictive power than benchmark autoregressive and “no change” models. This is shown with out-of-sample comparisons of MAPE, MSE, and the proportion of time models predict the correct direction or sign of change of daily high and low stock prices. In addition, predictive models incorporating these proximity variables show time varying effects over the study period, 2000 to February 2015. This time variation looks to be more than random and probably relates to investor risk preferences and changes in the general climate of investment risk.

I wanted to provide interested readers with a spreadsheet containing the basic data and computations of this model, which I call the “proximity variable” model. The idea is that the key variables are ratios of nearby values.
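The idea of ratios of nearby values can be sketched in a few lines. The predictors and target below are my simplification for illustration, not the white paper’s exact four-variable specification:

```python
import numpy as np

def proximity_features(opens, highs, lows):
    """Illustrative 'proximity variable' predictors: ratios of the
    current period's opening price to the previous period's high and
    low. A sketch of the general idea only; the white paper's exact
    specification uses four variables and transformations."""
    o, h, l = (np.asarray(v, dtype=float) for v in (opens, highs, lows))
    X = np.column_stack([o[1:] / h[:-1],    # open vs. previous high
                         o[1:] / l[:-1]])   # open vs. previous low
    y = h[1:] / h[:-1] - 1.0                # growth in the period high
    return X, y

def fit_ols(X, y):
    """OLS with intercept; returns the coefficient vector."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Tiny synthetic example
opens = [100.0, 101.0, 102.5, 101.5, 103.0]
highs = [101.5, 102.0, 103.5, 103.0, 104.5]
lows  = [ 99.0, 100.5, 101.0, 100.5, 102.0]
X, y = proximity_features(opens, highs, lows)
beta = fit_ols(X, y)
print(X.shape, len(beta))  # → (4, 2) 3
```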

And this is sort of an experiment, since I have not previously put up a spreadsheet for downloading on this blog. Please note the spreadsheet data linked below differ somewhat from the original data for the white paper, chiefly by including more recent observations. This does change the parameter estimates for the whole sample, since the paper shows we are in the realm of time-varying coefficients.

So here goes. Check out this link. PVARresponse

Of course, no spreadsheet is totally self-explanatory, so a few words.

First, the price data (open, high, low, etc) for the S&P 500 come from Yahoo Finance, although the original paper used other sources, too.

Secondly, the data matrix for the regressions is highlighted in light blue. The first few rows of this data matrix include the formulas with later rows being converted to numbers, to reduce the size of the file.

If you look in column K below about row 1720, you will find out-of-sample regression forecasts, created using data from the immediately preceding trading day and earlier, together with current day opening price ratios.

There are 35 cases, I believe, in which the high of the day and the opening price are the same. These can easily be eliminated in calculating any metrics, and, doing so, in fact increases the OOS R2.

I’m sympathetic with readers who develop a passion to “show this guy to be all wrong.” I’ve been there, and it may help to focus on computational matters.

However, there is just no question but that this approach is novel, and it beats both No Change forecasts and first order autoregressive forecasts (see the white paper) by a considerable amount.

I personally think these ratios are closely watched by some in the community of traders, and that other price signals motivating trades are variously linked with these variables.

My current research goes further than outlined in the white paper – a lot further. At this point, I am tempted to suggest we are looking at a new paradigm in predictability of stock prices. I project “waves of predictability” will be discovered in the movement of ensembles of security prices. These might be visualized like the wave at a football game, if you will. But the basic point is that I reckon we can show how early predictions of these prices changes are self-confirming to a degree, so emerging signs of the changes being forecast in fact intensify the phenomena being predicted.

Think big.

Keep the comments coming.

One-Month-Ahead Stock Market Forecasts

I have been spending a lot of time analyzing stock market forecast algorithms I stumbled on several months ago which I call the New Proximity Algorithms (NPA’s).

There is a white paper on the University of Munich archive called Predictability of the Daily High and Low of the S&P 500 Index. This provides a snapshot of the NPA at one stage of development, and is rock solid in terms of replicability. For example, an analyst replicated my results with Python, and I’ll probably provide his code here at some point.

I now have moved on to longer forecast periods and more complex models, and today want to discuss month-ahead forecasts of high and low prices of the S&P 500 for this month – June.

Current Month Forecast for S&P 500

For the current month – June 2015 – things look steady, with no topping out or crash in sight.

With opening price data from June 1, the NPA month-ahead forecast indicates a high of 2144 and a low of 2030. These are slightly above the high and low for May 2015, 2,134.72 and 2,067.93, respectively.

But, of course, a week of data for June already is in, so, strictly speaking, we need a three week forecast, rather than a forecast for a full month ahead, to be sure of things. And, so far during June, daily high and low prices already have approached the predicted values.

In the interests of gaining better understanding of the model, however, I am going to “talk this out” without further computations at this moment.

So, one point is that the model for the low is less reliable than the high price forecast on a month-ahead basis. Here, for example, is the track record of the NPA month-ahead forecasts for the past 12 months or so with S&P 500 data.

12MOSNPA

The forecast model for the high tracks along with the actuals within around 1 percent forecast error, plus or minus. The forecast model for the low, however, has a big miss with around 7 percent forecast error in late 2014.

This sort of “wobble” in the NPA forecast of low prices is not unusual, as the following chart of backtests to 2003 shows.

LowMonthAheadBig

What’s encouraging is the NPA model for the low price adjusts quickly. If large errors signal a new direction in price movement, the model catches that quickly. More often, the wobble in the actual low prices seems to be transitory.

Predicting Turning Points

One reason why the NPA monthly forecast for June might be significant is that the underlying method does a good job of predicting major turning points.

If a crash were coming in June, it seems likely, based on backtesting, that the model would signal something more than a slight upward trend in both the high and low prices.

Here are some examples.

First, the NPA forecast model for the high price of the S&P 500 caught the turning point in 2007 when the market began to go into reverse.

S&P2008high

But that is not all.

The NPA model for the month-ahead high price also captures a more recent reversal in the S&P 500.

laterHighS&P500

 

Also, the model for the low did capture the bottom in the S&P 500 in 2009, when the direction of the market changed from decline to increase.

2008High

This type of accuracy in timing in forecast modeling is quite remarkable.

It’s something I also saw earlier with the Hong Kong Hang Seng Index, but which seemed at that stage of model development to be confined to Chinese market data.

Now I am confident the NPA forecasts have some capability to predict turning points quite widely across many major indexes, ETF’s, and markets.

Note that all the charts shown above are based on out-of-sample extrapolations of the NPA model. In other words, one set of historical data is used to estimate the parameters of the NPA model, and other data, outside this sample, are then plugged in to get the month-ahead forecasts of the high and low prices.

Where This Is Going

I am compiling materials for presentations relating to the NPA, its capabilities, and its forecast accuracy.

The NPA forecasts, as the above exhibits show, work well when markets are going down or changing direction, as well as in steady periods of trending growth.

But don’t mistake my focus on these stock market forecasting algorithms for a last minute conversion to the view that nothing but the market is important. In fact, a lot of signals from business and global data suggest we could be in store for some big changes later in 2015 or in 2016.

What I want to do, I think, is understand how stock markets function as sort of prisms for these external developments – perhaps involving Greek withdrawal from the Eurozone, major geopolitical shifts affecting oil prices, and the onset of the crazy political season in the US.

Stock Market Price Predictability, Random Walks, and Market Efficiency

Can stock market prices be predicted? Can they be predicted with enough strength to make profits?

The current wisdom may be that market predictability is like craps. That is, you might win (correctly predict) for a while, maybe walking away with nice winnings, if you are lucky. But over the long haul, the casino is the winner.

This seems close to the view in Andrew W. Lo and A. Craig MacKinlay’s A Non-Random Walk Down Wall Street (NRW), a foil to Burton Malkiel’s A Random Walk Down Wall Street, perhaps.

Lo and MacKinlay (L&M) collect articles from the 1980’s and 1990’s – originally published in the “very best journals” – in a 2014 compilation with interesting introductions and discussions.

Their work more or less conclusively demonstrates that US stock market prices are not, for identified historic periods, random walks.

The opposite idea – that stock prices are basically random walks – has a long history, “recently” traceable to the likes of Paul Samuelson, as well as Burton Malkiel. Supposedly, any profit opportunity in a deeply traded market will be quickly exploited, leaving price movements largely random.

The ringer for me in this whole argument is the autocorrelation (AC) coefficient.

The level of a random walk has a first order autocorrelation coefficient near 1, but its increments – the period-to-period returns – should show no autocorrelation at all. In fact, L&M were amazed to discover that the first order autocorrelation coefficient of weekly stock returns, based on CRSP data, was 30 percent and statistically highly significant. In terms of technical approach, a key part of their analysis involves derivation of asymptotic limits for distributions and confidence intervals, based on assumptions which encompass nonconstant (heteroskedastic) error processes.

Finding this strong autocorrelation was somewhat incidental to their initial attack on the issue of randomness, which was based on variance ratios.
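Both diagnostics – the first-order autocorrelation of returns and the variance ratio – are easy to compute. A sketch on simulated returns; the 0.3 moving-average weight is chosen to roughly reproduce the 30 percent weekly autocorrelation L&M report, and under a random walk the variance ratio is 1:

```python
import numpy as np

def acf1(r):
    """First-order autocorrelation coefficient of a return series."""
    r = np.asarray(r) - np.mean(r)
    return (r[1:] @ r[:-1]) / (r @ r)

def variance_ratio(r, q):
    """Variance of overlapping q-period returns over q times the
    one-period variance; equals 1 for uncorrelated increments."""
    r = np.asarray(r)
    rq = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-period sums
    return np.var(rq, ddof=1) / (q * np.var(r, ddof=1))

rng = np.random.default_rng(7)
iid = rng.standard_normal(5000)                  # random-walk increments
ma = np.convolve(iid, [1.0, 0.3], mode="valid")  # positively autocorrelated returns

print(round(acf1(iid), 2))              # near 0
print(round(acf1(ma), 2))               # near 0.3/1.09, about 0.28
print(round(variance_ratio(ma, 2), 2))  # about 1 + acf1, above 1
```

L&M’s published test adds heteroskedasticity-robust standard errors for the variance ratio; this sketch only computes the point estimates.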

L&M were really surprised to discover significant AC in stock market returns, and, indeed, several of their articles explore ways they could be wrong, or things could be different than what they appear to be.

All this is more than purely theoretical, as Morgan Stanley’s and D. E. Shaw’s development of “high frequency equity trading strategies” shows. These strategies exploit this autocorrelation or time dependency through “statistical arbitrage.” By now, though, according to the authors, this is a thin-margin business, because of the “proliferation of hedge funds engaged in these activities.”

Well, there are some great, geeky lines for cocktail party banter, such as “rational expectations equilibrium prices need not even form a martingale sequence, of which the random walk is a special case.”

By itself, the “efficient market hypothesis” (EMH) is rather nebulous, and additional contextualization is necessary to “test” the concept. This means testing several joint hypotheses. Accordingly, negative results can simply be attributed to failure of one or more collateral assumptions. This builds a protective barrier around the EMH, allowing it to retain its character as an article of faith among many economists.

AWL

Andrew W. Lo is a Professor of Finance at MIT and Director of the Laboratory for Financial Engineering. His site through MIT lists other recent publications, and I would like to draw readers’ attention to two:

Can Financial Engineering Cure Cancer?

Reading About the Financial Crisis: A Twenty-One-Book Review

Thoughts on Stock Market Forecasting

Here is an update on the forecasts from last Monday – forecasts of the high and low of SPY, QQQ, GE, and MSFT.

This table is easy to read, even though it is a little “busy”.

TableMay22

One key is to look at the numbers highlighted in red and blue (click to enlarge).

These are the errors from the week’s forecast based on the NPV algorithm (explained further below) and a No Change forecast.

So if you tried to forecast the high for the week to come, based on nothing more than the high achieved last week – you would be using a No Change model. This is a benchmark in many forecasting discussions, since it is optimal (subject to some qualifications) for a random walk. Of course, the idea that stock prices are random walks came into favor several decades ago, and now gradually is being rejected or modified, based on findings such as those above.
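Comparing a model against the No Change benchmark is just a matter of lagging the series by one period. A minimal sketch, with made-up weekly highs and hypothetical model forecasts:

```python
def mape(actuals, preds):
    """Mean absolute percent error."""
    return sum(abs((a - p) / a) for a, p in zip(actuals, preds)) / len(actuals)

highs = [100.0, 102.0, 101.0, 104.0, 106.0]  # made-up weekly highs
actual = highs[1:]                           # weeks 2 through 5
no_change = highs[:-1]                       # last week's high as this week's forecast
model = [101.5, 101.2, 103.5, 105.5]         # hypothetical model forecasts

print(round(mape(actual, model), 4), round(mape(actual, no_change), 4))  # → 0.0041 0.0193
```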

The NPV forecasts are more accurate for this last week than No Change projections 62.5 percent of the time, or in 5 out of the 8 forecasts in the table for the week of May 18-22. Furthermore, in all three cases in which the No Change forecasts were better, the NPV forecast error was roughly comparable in absolute size. On the other hand, there were big relative differences in the absolute size of errors in the situations in which the NPV forecasts proved more accurate, for what that is worth.

The NPV algorithm, by the way, deploys various price ratios (nearby prices) and their transformations as predictors. Originally, the approach focused on ratios of the opening price in a period and the high or low prices in the previous period. The word “new” indicates a generalization has been made from this original specification.

Ridge Regression

I have been struggling with Visual Basic and various matrix programming code for ridge regression with the NPV specifications.

Using cross validation of the λ parameter, ridge regression can improve forecast accuracy on the order of 5 to 10 percent. For forecasts of the low prices, this brings forecast errors closer to acceptable error ranges.
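Ridge regression with a cross-validated λ does not have to wait on Visual Basic matrix code – the closed form is a few lines of linear algebra. A sketch in Python (random k-fold splitting is used here for brevity; for time series data a rolling or expanding window split would be more defensible):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge: beta = (X'X + lam*I)^-1 X'y on centered
    data, leaving the intercept unpenalized."""
    xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - xm, y - ym
    beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]), Xc.T @ yc)
    return ym - xm @ beta, beta   # intercept, slopes

def cv_lambda(X, y, lams, k=5):
    """Pick lambda by k-fold cross-validated squared error."""
    folds = np.array_split(np.arange(len(y)), k)
    best_lam, best_err = None, np.inf
    for lam in lams:
        err = 0.0
        for f in folds:
            mask = np.ones(len(y), dtype=bool)
            mask[f] = False                       # hold fold f out
            b0, b = ridge_fit(X[mask], y[mask], lam)
            err += np.sum((y[f] - (b0 + X[f] @ b)) ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam

# Synthetic check: y depends on two of four noisy predictors
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 4))
y = X @ np.array([0.5, -0.3, 0.0, 0.0]) + 0.5 * rng.standard_normal(200)
lam = cv_lambda(X, y, [0.01, 0.1, 1.0, 10.0, 100.0])
b0, b = ridge_fit(X, y, lam)
print(lam in [0.01, 0.1, 1.0, 10.0, 100.0], b.shape)  # → True (4,)
```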

Having shown this, however, I am now obligated to deploy ridge regression in several of the forecasts I provide for a week or perhaps a month ahead.

This requires additional programming to be convenient and transparent to validation.

So, I plan to work on that this coming week, delaying other tables with weekly or maybe monthly forecasts for a week or so.

I will post further during the coming week, however, on the work of Andrew Lo (MIT Financial Engineering Center) and high frequency data sources in business forecasts.

Probable Basis of Success of NPV Forecasts

Suppose you are an observer of a market in which securities are traded. Initially, tests show strong evidence stock prices in this market follow random walk processes.

Then, someone comes along with a theory that certain price ratios provide a guide to when stock prices will move higher.

Furthermore, by accident, that configuration of price ratios occurs and is associated with higher prices at some date, or maybe a couple dates in succession.

Subsequently, whenever price ratios fall into this configuration, traders pile into a stock, anticipating its price will rise during the next trading day or trading period.

Question – isn’t this entirely plausible, and would it not be an example of a self-confirming prediction?

I have a draft paper pulling together evidence for this, and have shared some findings in previous posts. For example, take a look at the weird mirror symmetry of the forecast errors for the high and low.

And, I suspect, the absence or ambivalence of this underlying dynamic is why closing prices are harder to predict than period high or low prices of a stock. If I tell you the closing price will be higher, you do not necessarily buy the stock. Instead, you might sell it, since the next morning opening prices could jump down. Or there are other possibilities.

Of course, there are all kinds of systems traders employ to decide whether to buy or sell a stock, so you have to cast your net pretty widely to capture effects of the main methods.

Long Term Versus Short Term

I am getting mixed results about extending the NPV approach to longer forecast horizons – like a quarter or a year or more.

Essentially, it looks to me as if the No Change model becomes harder and harder to beat over longer forecast horizons – although there may be long run persistence in returns or other features, as other researchers (such as Andrew Lo) have noted.

Monday Morning Stock Forecasts May 18 – Highs and Lows for SPY, QQQ, GE, and MSFT

Look at it this way. There are lots of business and finance blogs, but how many provide real-time forecasts, along with updates on how prior predictions performed?

Here on BusinessForecastBlog – we roll out forecasts of the highs and lows of a growing list of securities for the coming week on Monday morning, along with an update on past performance.

It’s a good discipline, if you think you have discovered a pattern which captures some part of the variation in future values of a variable. Do the backtesting, but also predict in real-time. It’s very unforgiving.

Here is today’s forecast, along with a recap for last week (click to enlarge).

TableMay18

There is an inevitable tendency to narrate these in “Nightly Business Report” fashion.

So the highs are going higher this week, and so are the lows, except perhaps for a slight drop in Microsoft’s low – almost within the margin of statistical noise. Not only that, but predicted increases in the high for QQQ are fairly substantial.

Last week’s forecasts were solid, in terms of forecast error, except Microsoft’s high came in above what was forecast. Still, -2.6 percent error is within the range of variation in the backtests for this security. Recall, too, that in the previous week, the forecast error for the high of MSFT was only 0.04 percent, almost spot on.

Since the market moved sideways for many securities, No Change forecasts were a strong competitor to the NPV (new proximity variable) forecasts. In fact, there was a 50:50 split. In two of the four cases, the NPV forecasts performed better; in the other two, No Change forecasts had lower errors.

Direction of change predictions also came in about 50:50. They were correct for QQQ and SPY, and wrong for the two stocks.

Where is the Market Going?

This tool – forecasts based on the NPV algorithms – provides longer term looks into the future, probably effectively up to one month ahead.

So in two weeks, I’m going to add that forecast to the mix. I think it may be important, incidentally, to conform to the standard practice of taking stock at the beginning of the month, rather than, say, simply going out four weeks from today.

To preview the power of this monthly NPV model, here are the backtests for the crisis months before and during the 2008 financial crisis.

SPYM2008

This is a remarkable performance, really. Once the crash really gets underway in late Summer-Fall 2008, the NPV forecast drops in a straight-line descent, as do the actual monthly highs. There are some turning points in common, too, between the two series. And generally, even at the start of the process, the monthly NPV model provides good guidance as to the direction and magnitude of changes.

Over the next two weeks, I’m collecting high frequency data to see whether I can improve these forecasts with supplemental information – such as interest rates spreads and other variables available on a weekly or monthly basis.

In closing, let me plug Barry Eichengreen’s article in Project Syndicate, An Economics to Fit the Facts.

Eichengreen writes,

While older members of the economics establishment continue to debate the merits of competing analytical frameworks, younger economists are bringing to bear important new evidence about how the economy operates.

It’s all about dealing with the wealth of data that is being collected everywhere now, and much less about theoretical disputes involving formal models.

Finally, it’s always necessary to insert a disclaimer, whenever one provides real-time, actionable forecasts. This stuff is for informational and scientific purposes only. It is not intended to provide recommendations for specific stock trading, and what you do on that score is strictly your own business and responsibility.

Mountain climbing pic from Blink.

Reading the Tea Leaves – Forecasts of Stock High and Low Prices

The residuals of predictive models are central to their statistical evaluation – with implications for confidence intervals of forecasts.

Of course, another name for the residuals of a predictive model is their errors.

Today, I want to present some information on the errors for the forecast models that underpin the Monday morning forecasts in this blog.

The results are both reassuring and challenging.

The good news is that the best fit distributions support confidence intervals, and, in some cases, can be viewed as transformations of normal variates. This is by no means given, as monstrous forms such as the Cauchy distribution sometimes present themselves in financial modeling as a best candidate fit.

The challenge is that the skew patterns of the forecasts of the high and low prices are weirdly symmetric. It looks to me as if traders tend to pile on when the price signals are positive for the high, or flee the sinking ship when the price history indicates the low is going lower.

Here is the error distribution of percent errors for backtests of the five day forecast of the QQQ high, based on an out-of-sample study from 2004 to the present, a total of 573 consecutive five-trading-day periods.

JohnSUHigh

Here is the error distribution of percent errors for backtests of the five day forecast of the QQQ low.

JohnSULow

In the first chart for forecasts of high prices, errors are concentrated in the positive side of the percent error or horizontal axis. In the second graph, errors from forecasts of low prices are concentrated on the negative side of the horizontal axis.

In terms of statistics-speak, the first chart is skewed to the left, having a long tail of values to the left, while the second chart is skewed to the right.

What does this mean? Well, one interpretation is that traders are overshooting the price signals indicating a positive change in the high price or a lower low price.

Thus, the percent error is calculated as

(Actual – Predicted)/Actual

So the distribution of errors for forecasts of the high has an average which is slightly greater than zero, and the average for errors for forecasts of the low is slightly negative. And you can see the bulk of observations being concentrated, on the one hand, to the right of zero and, on the other, to the left of zero.
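To make the convention concrete, here is a sketch with hypothetical numbers, showing how actuals that come in above the forecasts produce the positive-mean error distribution described for the high:

```python
def pct_errors(actuals, preds):
    """Percent error, (Actual - Predicted) / Actual, as defined above."""
    return [(a - p) / a for a, p in zip(actuals, preds)]

# Hypothetical forecasts of the high that the market 'overshoots'
highs_actual = [101.0, 103.0, 102.0, 106.0]
highs_pred   = [100.0, 102.5, 101.0, 104.0]
errs = pct_errors(highs_actual, highs_pred)
mean_err = sum(errs) / len(errs)
print(all(e > 0 for e in errs), round(mean_err, 4))  # → True 0.0109
```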

I’d like to find some way to fill out this interpretation, since it supports the idea that forecasts in this context are self-reinforcing, rather than self-nihilating.

I have more evidence consistent with this interpretation. So, if traders dive in when prices point to a high going higher, predictions of the high should be more reliable vis a vis direction of change with bigger predicted increases in the high. That’s also verifiable with backtests.

I use MathWave’s EasyFit. It’s user-friendly, and ranks best fit distributions based on three standard metrics of goodness of fit – the Chi-Squared, Kolmogorov-Smirnov, and Anderson-Darling statistics. There is a trial download of the software, if you are interested.

The Johnson SU distribution ranks first for the error distribution for the high forecasts, in terms of EasyFit’s measures of goodness of fit. The Johnson SU distribution also ranks first for Chi-Squared and the Anderson-Darling statistics for the errors of forecasts of the low.

This is an interesting distribution which can be viewed as a transformation of normal variates and which has applications, apparently, in finance (See http://www.ntrand.com/johnson-su-distribution/).
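The transform behind the Johnson SU distribution is simple to state directly: if Z is standard normal, then X = ξ + λ·sinh((Z − γ)/δ) is Johnson SU, with γ controlling the skew. A sketch (the parameter values are arbitrary, chosen only to show the sign of the effect):

```python
import math
import random

def johnson_su(z, gamma, delta, xi, lam):
    """Johnson SU variate as a transform of a standard normal draw z:
    x = xi + lam * sinh((z - gamma) / delta)."""
    return xi + lam * math.sinh((z - gamma) / delta)

random.seed(0)
zs = [random.gauss(0.0, 1.0) for _ in range(20000)]
# With gamma > 0 the distribution is pushed toward negative values,
# the kind of asymmetry seen in the low-price forecast errors.
xs = [johnson_su(z, gamma=1.0, delta=1.5, xi=0.0, lam=1.0) for z in zs]
mean_x = sum(xs) / len(xs)
print(mean_x < 0)  # → True
```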

It is something I have encountered repeatedly in analyzing errors of proximity variable models. I am beginning to think it provides the best answer in determining confidence intervals of the forecasts.

Top picture from mysteryarts

 

Five Day Forecasts of High and Low for QQQ, SPY, GE, and MSFT – Week of May 11-15

Here are high and low forecasts for two heavily traded exchange traded funds (ETF’s) and two popular stocks. Like the ones in preceding weeks, these are for the next five trading days, in this case Monday through Friday May 11-15.

HLWeekMay11to15

The up and down arrows indicate the direction of change from last week – for the high prices only, since the predictions of lows are a new feature this week.

Generally, these prices are “moving sideways,” or showing relatively small changes, except in the case of SPY.

For the record, here is the performance of previous forecasts.

TableMay8

Strong disclaimer: These forecasts are provided for information and scientific purposes only. This blog accepts no responsibility for what might happen, if you base investment or trading decisions on these forecasts. What you do with these predictions is strictly your own business.

Incidentally, let me plug the recent book by Andrew W. Lo and A. Craig MacKinlay – A Non-Random Walk Down Wall Street from Princeton University Press and available as an e-book.

I’ve been reading an earlier book which Andrew Lo co-authored, The Econometrics of Financial Markets.

What I especially like in these works is the insistence that statistically significant autocorrelations exist in stock prices and stock returns. They also present multiple instances in which stock prices fail tests for being random walks, and establish a degree of predictability for these time series.

Again, almost all the focus of work in the econometrics of financial markets is on closing prices and stock returns, rather than predictions of the high and low prices for periods.

How Did This Week’s Forecasts of QQQ, SPY, GE, and MSFT High Prices Do?

The following Table provides an update for this week’s forecasts of weekly highs for the securities currently being followed – QQQ, SPY, GE, and MSFT. Price forecasts and actual numbers are in US dollars.

TableMay8

This batch of forecasts performed extremely well in terms of the absolute size of forecast errors. In addition, they beat a “no change” forecast in three out of four predictions (the exception being SPY) and correctly called the change in direction of the high for QQQ.

It would be nice to be able to forecast the high prices for five-day-forward periods with the accuracy seen in the Microsoft (MSFT) forecast.

As all you market mavens know, US stock markets saw broad price declines this week, so the highs for the week occurred on Monday.

I’ve had several questions about the future direction of the market. Are declines going to be in the picture for the coming week, and even longer, for example?

I’ve been studying the capabilities of these algorithms to predict turning points in indexes and prices of individual securities. The answer is going to be probabilistic, and so is complicated. Sometimes the algorithm seems to provide pretty unambiguous signals as to turning points. In other instances, the tea leaves are harder to read, but, arguably, a signal does exist for most major turning points with the indexes I have focused on – SPY, QQQ, and the S&P 500.

So, the next question is – has the market hit a high for a week or a few weeks, or even perhaps a major turnaround?

Deploying these algorithms, coded in Visual Basic and C#, to attack this question is a little like moving a siege engine to the castle wall. A major undertaking.

I want to get there, but don’t want to be a “Chicken Little” saying “the sky is falling,” “the sky is falling.”

Stock Market Predictability

This little Monday morning exercise, which will be continued for the next several weeks, is providing evidence for the predictability of aspects of stock prices on a short term basis.

Once the basic facts are out there for everyone to see, a lot of questions arise. So what about new information? Surely yesterday’s open, high, low, and closing prices, along with similar information for previous days, do not encode an event like 9/11, or the revelation of massive accounting fraud with a stock issuing concern.

But apart from such surprises, I’m leaning toward the notion that a lot more information about the general economy, company prospects, and company performance is subtly embedded in the flow of price data.

I talked recently with an analyst who is applying methods from Kelly and Pruitt’s Market Expectations in the Cross Section of Present Values for wealth management clients. I hope to soon provide an “in-depth” on this type of applied stock market forecasting model, which focuses, incidentally, on stock market returns and dividends.

There is also some compelling research on the performance of momentum trading strategies which seems to indicate a higher level of predictability in stock prices than is commonly thought to exist.

Incidentally, in posting this slightly before the bell today, Friday, I am engaging in intra-day forecasting – betting that prices for these securities will stay below their earlier highs.

Forecasts of High Prices for Week May 4-8 – QQQ, SPY, GE, and MSFT

Here are forecasts of high prices for key securities for this week, May 4-8, along with updates to check the accuracy of previous forecasts. So far, there is a new security each week. This week it is Microsoft (MSFT). Click on the Table to enlarge.

TableMay4

These forecasts from the new proximity variable (NPV) algorithms compete with the “no change” forecast – supposedly the optimal predictions for a random walk.

The NPV forecasts in the Table are more accurate than the no change forecasts at 2:1 odds. That is, if you take into account the highs of the previous weeks for each security – actual high numbers not shown in the Table – the NPV forecasts are more accurate 4 out of 6 times.

This performance corresponds roughly with the improvements of the NPV approach over the no change forecasts in backtests back to 2003.

The advantages of the NPV approach extend beyond raw accuracy, measured here in simple percent terms, since the “no change” forecast is uninformative about the direction of change. The NPV forecasts, on the other hand, generally get the direction of change right. In the Table above, again considering data from weeks preceding those shown, the direction of change of the high forecasts is spot on every time. Backtests suggest the NPV algorithm will correctly predict the direction of change of the high price about 75 percent of the time for this five day interval.
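The scoring just described can be sketched in a few lines: compare the absolute percent error of each forecast against the “no change” benchmark (last week’s high), and separately count direction-of-change hits. All numbers below are made up for illustration; `npv_fcast` stands in for the (undisclosed) NPV algorithm’s output.

```python
# Sketch: scoring weekly-high forecasts against the "no change" benchmark.
# "No change" uses the prior week's actual high as its forecast.
prior_high  = [100.0, 50.0, 27.0, 48.0]   # last week's actual highs
npv_fcast   = [102.0, 50.8, 26.5, 49.1]   # hypothetical NPV forecasts
actual_high = [101.5, 50.6, 26.8, 49.0]   # this week's actual highs

def abs_pct_err(forecast, actual):
    """Absolute percent error, consistent with (Actual - Predicted)/Actual."""
    return abs(actual - forecast) / actual

npv_wins = 0          # forecasts beating the no-change benchmark
direction_hits = 0    # correct calls on the direction of change

for prior, fcast, actual in zip(prior_high, npv_fcast, actual_high):
    if abs_pct_err(fcast, actual) < abs_pct_err(prior, actual):
        npv_wins += 1
    # Direction call is a hit when predicted and actual changes share a sign.
    if (fcast - prior) * (actual - prior) > 0:
        direction_hits += 1
```

With these illustrative numbers the NPV forecasts win 3 of 4 on accuracy and 4 of 4 on direction, roughly the pattern reported in the backtests.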

It will be interesting to watch QQQ in this batch of forecasts. This ETF is forecast to decline week-over-week in terms of the high price.

Next week I plan to expand the forecast table to include forecasts of the low prices.

There is a lot of information here. Much of the finance literature focuses on the rates of returns based on closing prices, or adjusted closing prices. Perhaps analysts figure that attempting to predict “extreme values” is not a promising idea. Nothing could be further from the truth.

This week I plan a post showing how to identify turning points in the movement of major indices with the NPV algorithms. The concept is simple. I forecast the high and low over coming periods, like a day, five days, ten trading days and so forth. For these “nested forecast periods” the high for the week ahead must be greater than or equal to the high for tomorrow or shorter periods. This means when the price of the SPY or QQQ heads south, the predictions of the high of these ETF’s sort of freeze at a constant value. The predictions for the low, however, plummet.
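The nested-period logic above can be sketched as a simple consistency check plus a heuristic signal. The horizons, prices, and thresholds below are all illustrative assumptions, not parameters from the actual NPV algorithm.

```python
# Sketch: nested forecast periods. The forecast high over a longer horizon
# must be >= the forecast high over any shorter horizon. A possible downturn
# signal: high forecasts roughly flat ("frozen") while low forecasts plummet.
high_forecasts = {1: 210.2, 5: 210.2, 10: 210.3}   # days ahead -> forecast high
low_forecasts  = {1: 208.0, 5: 205.5, 10: 202.1}   # days ahead -> forecast low

horizons = sorted(high_forecasts)

# Consistency: forecast highs must be nondecreasing in the horizon.
nested_ok = all(high_forecasts[a] <= high_forecasts[b]
                for a, b in zip(horizons, horizons[1:]))

# Heuristic turning-point signal (thresholds are arbitrary for illustration):
high_range = high_forecasts[horizons[-1]] - high_forecasts[horizons[0]]
low_drop   = low_forecasts[horizons[0]] - low_forecasts[horizons[-1]]
downturn_signal = nested_ok and high_range < 0.5 and low_drop > 2.0
```

Here the high forecasts move only 0.1 across horizons while the low forecasts fall 5.9, so the sketch flags a possible downturn, matching the “frozen high, plummeting low” pattern described.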

Really pretty straightforward.

I’ve appreciated and benefitted from your questions, comments, and suggestions. Keep them coming.