Category Archives: forecasting research

Business Forecasting – Practical Problems and Solutions

Forecasts in business are unavoidable: decisions must be made for annual budgets and shorter-term operational plans, and investments must be committed.

And regardless of approach, practical problems arise.

For example, should output from formal algorithms be massaged, so that final numbers include judgmental revisions? What about error metrics? Is the mean absolute percent error (MAPE) best, because everybody is familiar with percentages? What are the pluses and minuses of the various forecast error metrics? And, organizationally, where should the forecasting team sit – marketing, production, finance, or perhaps a free-standing unit?

The editors of Business Forecasting – Practical Problems and Solutions integrate dozens of selections to focus on these and other practical forecasting questions.

Here are some highlights.

In my experience, many corporate managers, even VP’s and executives, understand surprisingly little about fitting models to data.

So guidelines for reporting results are important.

In “Dos and Don’ts of Forecast Accuracy Measurement: A Tutorial,” Len Tashman advises “distinguish in-sample from out-of-sample accuracy,” calling it “the most basic issue.”

The acid test is how well the forecast model does “out-of-sample.” Holdout samples and cross-validation simulate how the forecast model will perform going forward. “If your average error in-sample is found to be 10%, it is very probable that forecast errors will average substantially more than 10%.” That’s because model parameters are calibrated to the sample over which they are estimated. There is a whole discussion of “over-fitting,” R2, and model complexity hinging on similar issues. Don’t fool yourself. Try to find ways to test your forecast model on out-of-sample data.
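The logic of a holdout test can be sketched in a few lines of Python. This is purely illustrative – made-up data and a simple linear trend model stand in for whatever model you actually use:

```python
import numpy as np

# Hypothetical monthly series: linear trend plus noise (stand-in for real data)
rng = np.random.default_rng(42)
y = 100 + 2.0 * np.arange(60) + rng.normal(0, 8, 60)

# Hold out the last 12 observations instead of fitting to everything
train, test = y[:48], y[48:]
t_train, t_test = np.arange(48), np.arange(48, 60)

# Fit a simple linear trend on the training window only
b, a = np.polyfit(t_train, train, 1)

def mape(actual, forecast):
    """Mean absolute percent error."""
    return np.mean(np.abs((actual - forecast) / actual)) * 100

in_sample = mape(train, a + b * t_train)
out_sample = mape(test, a + b * t_test)
print(f"in-sample MAPE {in_sample:.2f}%, out-of-sample MAPE {out_sample:.2f}%")
```

The parameters are calibrated to the training window, so the in-sample error typically understates what the model does on the holdout.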

The discussion of fitting models when there is “extreme seasonality” broke new ground for me. In retail forecasting, there might be a toy or product that sells only at Christmastime. Demand is highly intermittent. As Udo Sglavo reveals, one solution is “time compression.” Collapse the time series data into two periods – the holiday season and the rest of the year. Then, the on-off characteristics of sales can be more adequately modeled. Clever.
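Sglavo's implementation details are not spelled out in the excerpt, but the compression idea can be illustrated with a hypothetical toy that sells only in November and December – collapse each year of monthly data into two periods:

```python
import numpy as np

# Hypothetical monthly sales for a holiday-only toy, three years of data
sales = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 40, 60,
                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 50, 70,
                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 55, 80])

months = np.tile(np.arange(1, 13), 3)   # calendar month of each observation
years = np.repeat(np.arange(3), 12)     # year index of each observation
holiday = months >= 11                  # Nov-Dec flag

# "Time compression": collapse each year into [off-season, holiday season]
compressed = np.array([
    [sales[(years == yr) & ~holiday].sum(), sales[(years == yr) & holiday].sum()]
    for yr in range(3)
])
print(compressed)  # each row: [off-season total, holiday total]
```

The intermittent on-off series becomes a short, regular one that a standard model can handle.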

John Mello’s “The Impact of Sales Forecast Game Playing on Supply Chains” is probably destined to become a kind of classic, since it rolls up a lot of what we have all heard and observed about strategic behavior vis-à-vis forecasts.

Mello describes stratagems including

  • Enforcing – maintaining a higher forecast than actually anticipated, to keep forecasts in line with goals
  • Filtering – changing forecasts to reflect product on hand for sale
  • Hedging – overestimating sales to garner more product or production capability
  • Sandbagging – underestimating sales to set expectations lower than actually anticipated demand
  • Second-guessing – changing forecasts to reflect instinct or intuition
  • Spinning – manipulating forecasts to get favorable reactions from individuals or departments in the organization
  • Withholding – refusing to share current sales information

I’ve seen “sandbagging” at work, when the sales force is allowed to generate the forecasts, setting expectations for future sales lower than should, objectively, be the case. Purely by coincidence, of course, sales quotas are then easier to meet and bonuses easier to achieve.

I’ve always wondered why Gonik’s system, mentioned in an accompanying article by Michael Gilliland on the “Role of the Sales Force in Forecasting,” is not deployed more often. Gonik, in a classic article in the Harvard Business Review, ties sales bonuses jointly to the level of sales that are forecast by the field, and also to how well actual sales match the forecasts that were made. It literally provides incentives for field sales staff to come up with their best, objective estimate of sales in the coming period. (See Sales Forecasts and Incentives)

Finally, Larry Lapide’s “Where Should the Forecasting Function Reside?” asks a really good question.

The following graphic (apologies for the scan reproduction) summarizes some of his key points.

[Table: Lapide’s considerations on where the forecasting function should reside]

There is no fixed answer; Lapide provides a list of things for each organization to consider.

This book is a good accompaniment for Rob Hyndman and George Athanasopoulos’s online Forecasting: Principles and Practice.

Update and Extension – Weekly Forecasts of QQQ and Other ETFs

Well, the first official forecast rolled out for QQQ last week.

It did relatively well. Applying methods I have been developing for the past several months, I predicted the weekly high for QQQ last week at 108.98.

In fact, the high price for QQQ for the week was 108.38, reached Monday, April 13.

This means the forecast error in percent terms was 0.55%.
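The arithmetic behind that figure is straightforward – a quick check with the numbers from the post:

```python
# Forecast weekly high vs. the actual weekly high for QQQ
forecast, actual = 108.98, 108.38

# Absolute percent error relative to the actual
pct_error = abs(forecast - actual) / actual * 100
print(f"{pct_error:.2f}%")
```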

It’s possible to look more comprehensively at the likely forecast errors of my approach through backtesting.

Here is a chart showing backtests for the “proximity variable method” for the QQQ high price for five day trading periods since the beginning of 2015.

[Chart: backtests of the “proximity variable method” for the QQQ weekly high, five-day trading periods since the beginning of 2015]

The red bars are errors, measured on the right axis; you can see most of these are below 0.5%.

This is encouraging, and there are several adjustments I want to explore which may improve forecasting performance beyond this level of accuracy.

So here is the forecast of the high prices that will be reached by QQQ and SPY for the week of April 20-24.

[Table: forecast high prices for QQQ and SPY, week of April 20-24]

As you can see, I’ve added SPY, an ETF tracking the S&P 500.

I put this up on Businessforecastblog because I seek to make a point – namely, that I believe methods I have developed can produce much more accurate forecasts of stock prices than is commonly supposed.

It’s often easier and more compelling to apply forecasting methods and show results, than it is to prove theoretically or otherwise argue that a forecasting method is worth its salt.

Disclaimer –  These forecasts are for informational purposes only. If you make investments based on these numbers, it is strictly your responsibility. Businessforecastblog is not responsible or liable for any potential losses investors may experience in their use of any forecasts presented in this blog.

Well, I am working on several stock forecasts to add to projections for these ETFs – so I will expand this feature on forthcoming Mondays.

Let’s Get Real Here – QQQ Stock Price Forecast for Week of April 13-17

The thing I like about forecasting is that it is operational, rather than merely theoretical. Of course, you are always wrong, but the issue is “how wrong?” How close do the forecasts come to the actuals?

I have been toiling away developing methods to forecast stock market prices. Through an accident of fortune, I have come upon an approach which predicts stock prices more accurately than I had thought possible.

After spending hundreds of hours over several months, I am ready to move beyond “backtesting” to provide forward-looking forecasts of key stocks, stock indexes, and exchange traded funds.

For starters, I’ve been looking at QQQ, the PowerShares QQQ Trust, Series 1.

Invesco describes this exchange traded fund (ETF) as follows:

PowerShares QQQ™, formerly known as “QQQ” or the “NASDAQ- 100 Index Tracking Stock®”, is an exchange-traded fund based on the Nasdaq-100 Index®. The Fund will, under most circumstances, consist of all of stocks in the Index. The Index includes 100 of the largest domestic and international nonfinancial companies listed on the Nasdaq Stock Market based on market capitalization. The Fund and the Index are rebalanced quarterly and reconstituted annually.

This means, of course, that QQQ has been tracking some of the most dynamic elements of the US economy, since its inception in 1999.

In any case, here is my forecast, along with tracking information on the performance of my model since late January of this year.

[Chart: QQQ forecast, with tracking of model performance since late January 2015]

The time of this blog post is the morning of April 13, 2015.

My algorithms indicate that the high for QQQ this week will be around $109 or, more precisely, $108.99.

So this is, in essence, a five day forecast, since this high price can occur in any of the trading days of this week.

The chart above shows backtests for the algorithm for ten weeks. The forecast errors are all less than 0.65% over this history with a mean absolute percent error (MAPE) of 0.34%.
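For readers unfamiliar with the metric, MAPE is just the average of the absolute percent errors. A quick illustration in Python – the actuals and forecasts below are made up, not the backtest values from the chart:

```python
import numpy as np

# Illustrative actual weekly highs and corresponding forecasts
actuals   = np.array([105.2, 106.1, 104.8, 107.3, 108.4])
forecasts = np.array([105.6, 105.9, 105.1, 107.0, 109.1])

# Absolute percent error for each week, then the mean
ape = np.abs((actuals - forecasts) / actuals) * 100
mape = ape.mean()
print(f"MAPE: {mape:.2f}%")
```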

So that’s what I have today. Count on succeeding installments, looking back and forward, at the beginning (Monday) of each of the next several weeks, insofar as my travel schedule allows.

Also, my initial comments on this post appear to offer a dig against theory, but that would be unfair, really, since “theory” – at least the theory of new forecasting techniques and procedures – has been very important in my developing these algorithms. I have looked at residuals more or less as a gold miner examines the chat in his pan. I have considered issues related to the underlying distribution of stock prices and stock returns – NOTE TO THE UNINITIATED – STOCK PRICES ARE NOT NORMALLY DISTRIBUTED. There is indeed almost nothing about stocks or stock returns which is related to the normal probability distribution, and I think this has been a huge failing of conventional finance, the Black-Scholes model, and the like.

So theory is important. But you can’t stop there.

This should be interesting. Stay tuned. I will add other securities in coming weeks, and provide updates of QQQ forecasts.

Readers interested in the underlying methods can track back on previous blog posts (for example, Pvar Models for Forecasting Stock Prices or Time-Varying Coefficients and the Risk Environment for Investing).

Perspectives

Blogging gets to be enjoyable, although demanding. It’s a great way to stay in touch, and probably heightens personal mental awareness, if you do it enough.

The “Business Forecasting” focus allows for great breadth, but may come with political constraints.

On this latter point, I assume people have to make a living. Populations cannot just spend all their time in mass rallies, and in political protests – although that really becomes dominant at certain crisis points. We have not reached one of those for a long time in the US, although there have been mobilizations throughout the Mid-East and North Africa recently.

Nate Silver brought forth the “hedgehog and fox” parable in his best seller – The Signal and the Noise. “The fox knows many things, but the hedgehog knows one big thing.”

My view is that business and other forecasting endeavors should be “fox-like” – drawing on many sources, including, but not limited to quantitative modeling.

What I Think Is Happening – Big Picture

Global dynamics often are directly related to business performance, particularly for multinationals.

And global dynamics usually are discussed by region – Europe, North America, Asia-Pacific, South Asia, the Mid-east, South America, Africa.

The big story since around 2000 has been the emergence of the People’s Republic of China as a global player. You really can’t project the global economy without a fairly detailed understanding of what’s going on in China, the home of around 1.5 billion persons (not the official number).

Without delving much into detail, I think it is clear that a multi-centric world is emerging. Growth rates of China and India far surpass those of the United States, and certainly of Europe – where many countries, especially those along the southern or outer rim, have been mired in high unemployment, deflation, and negative growth since just after the financial crisis of 2008-2009.

The “old core” countries of Western Europe, the United States, Canada, and, really now, Japan are moving into a “post-industrial” world, as manufacturing jobs are outsourced to lower wage areas.

Layered on top of and providing support for out-sourcing, not only of manufacturing but also skilled professional tasks like computer programming, is an increasingly top-heavy edifice of finance.

Clearly, “the West” could not continue its pre-World War II monopoly of science and technology (Japan being in the pack here somewhere). Knowledge had to diffuse globally.

With the GATT (General Agreement on Tariffs and Trade) and the creation of the World Trade Organization (WTO), the volume of trade expanded as tariffs and other barriers came down (1980s, 1990s, early 2000s).

In the United States the urban landscape became littered with “Big Box stores” offering shelves full of clothing, electronics, and other stuff delivered to the US in the large shipping containers you see stacked hundreds of feet high at major ports, like San Francisco or Los Angeles.

There is, indeed, a kind of “hollowing out” of the American industrial machine.

Possibly it’s only the US effort to maintain a defense establishment second to none, and an order of magnitude larger than anyone else’s, that sustains certain industrial activities shore-side. And even that is problematic, since the chain of contracting out can be complex, difficult, and costly to follow, if you are a US regulator.

I’m a big fan of post-War Japan, in the sense that I strongly endorse the kinds of evaluations and decisions made by the Japanese Ministry of International Trade and Industry (MITI) in the decades following World War II. Of course, a nation whose industries and even standing structures lay in ruins has an opportunity to rebuild from the ground up.

In any case, sticking to a current focus, I see opportunities in the US, if the political will could be found. I refer here to the opportunity for infrastructure investment to replace aging bridges, schools, seaport and airport facilities.

In case you had not noticed, interest rates are almost zero. Issuing bonds to finance infrastructure could not face more favorable terms.

Another option, in my mind – and a hat-tip to the fearsome Walt Rostow for this kind of thinking – is for the US to concentrate its resources into medicine and medical care. Already, about one quarter of all spending in the US goes to health care and related activities. There are leading pharma and biotech companies, and still a highly developed system of biomedical research facilities affiliated with universities and medical schools – although the various “austerities” of recent years are taking their toll.

So, instead of pouring money down a rathole of chasing errant misfits in the deserts of the Middle East, why not redirect resources to amplify the medical industry in the US? Hospitals, after all, draw employees from all socioeconomic groups and all ethnicities. The US and other national populations are aging, and will want and need additional medical care. If the world could turn to the US for leading edge medical treatment, that in itself could be a kind of foreign policy, for those interested in maintaining US international dominance.

Tangential Forces

While writing in this vein, I might as well offer my underlying theory of social and economic change. It is that major change occurs primarily through the impact of tangential forces, things not fully seen or anticipated. Perhaps the only certainty about the future is that there will be surprises.

Quite a few others subscribe to this theory, and the cottage industry in alarming predictions of improbable events – meteor strikes, flipping of the earth’s axis, pandemics – is proof of this.

Really, it is quite amazing how the billions on this planet manage to muddle through.

But I am thinking here of climate change as a tangential force.

And it is also a huge challenge.

But it is a remarkably subtle thing, notwithstanding the on-the-ground reality of droughts, hurricanes, tornadoes, floods, and so forth.

And it is something smack in the sweet spot of forecasting.

There is no discussion of suitable responses to climate change without reference to forecasts of global temperature and impacts, say, of significant increases in sea level.

But these things take place over many years, and then, boom, a whole change of regime may be triggered – as ice core and other evidence suggests.

Flexibility, Redundancy, Avoidance of Over-Specialization

My brother (by marriage) is a priest, formerly a tax lawyer. We have begun a dialogue recently in which we are looking for some basis for a new politics and a new outlook – one that would take the increasing fragility of some of our complex and highly specialized systems into account, creating some backup systems, places, refuges, if you will.

I think there is a general principle that we need to empower people to be able to help themselves – and I am not talking about eliminating the social safety net. The ruling groups in the United States, powerful interests, and politicians would be well advised to consider how we can create spaces for people “to do their thing.” We need to preserve certain types of environments and opportunities, and have a politics that speaks to this, as well as to how efficiency is going to be maximized by scrapping local control and letting global business from wherever come in and have its way – no interference allowed.

The reason Reid and I think of this as a search for a new politics is that the counterpoint is familiar: all these impediments to getting the best possible profits just result in lower production levels, meaning that you have not really done good by trying to preserve land uses, local agriculture, or locally produced manufactures.

I got it from a good source in Beijing some years ago that the Chinese Communist Party believes that full-out growth of production, despite the intense pollution, should be followed for a time, before dealing with that problem directly. If anyone has any doubts about the rationality of limiting profits (as conventionally defined), I suggest they spend some time in China during an intense bout of urban pollution somewhere.

Maybe there are abstract, theoretical tools which could be developed to support a new politics. Why not, for example, quantify value experienced by populations in a more comprehensive way? Why not link achievement of higher value differently measured with direct payments, somehow? I mean the whole system of money is largely an artifact of cyberspace anyway.

Anyway – takeaway thought, create spaces for people to do their thing. Pretty profound 21st Century political concept.

Coming attractions here – more on predicting the stock market (a new approach), summaries of outlooks for the year by major sources (banks, government agencies, leading economists), megatrends, forecasting controversies.

Top picture from FIREBELLY marketing

Stock Market Predictability

The research findings in recent posts here suggest that, in broad outline, the stock market is predictable.

This is one of the most intensively researched areas of financial econometrics.

There certainly is no shortage of studies claiming to forecast stock prices. See for example, Atsalakis, G., and K. Valavanis. “Surveying stock market forecasting techniques-part i: Conventional methods.” Journal of Computational Optimization in Economics and Finance 2.1 (2010): 45-92.

But the field is dominated by decades-long controversy over the efficient market hypothesis (EMH).

I’ve been reading Lim and Brooks’s outstanding survey article – The Evolution of Stock Market Efficiency Over Time: A Survey of the Empirical Literature.

They highlight two types of studies focusing on the validity of a weak form of the EMH which asserts that security prices fully reflect all information contained in the past price history of the market…

The first strand of studies, which is the focus of our survey, tests the predictability of security returns on the basis of past price changes. More specifically, previous studies in this sub-category employ a wide array of statistical tests to detect different types of deviations from a random walk in financial time series, such as linear serial correlations, unit root, low-dimensional chaos, nonlinear serial dependence and long memory. The second group of studies examines the profitability of trading strategies based on past returns, such as technical trading rules (see the survey paper by Park and Irwin, 2007), momentum and contrarian strategies (see references cited in Chou et al., 2007).

Another line, related to this second branch of research, tests “return predictability using other variables such as the dividend–price ratio, earnings–price ratio, book-to-market ratio and various measures of the interest rates.”

Lim and Brooks note the tests for the semi-strong-form and strong-form EMH are renamed as event studies and tests for private information, respectively.

So, bottom line – maybe your forecasting model predicts stock prices or rates of return over certain periods, but the real issue is whether it makes money. As Granger wrote much earlier, mere forecastability is not enough.

I certainly respect this criterion, and recognize it is challenging. It may be possible to trade on the models of high and low stock prices over periods such as I have been discussing, but I can also show you situations in which the irreducibly stochastic elements in the predictions can lead to losses. And avoiding these losses puts you into the field of higher-frequency trading, where “all bets are off,” since there is so much that is not known about how that really works, particularly for individual investors.

My primary purpose in pursuing these types of models, however, was originally not so much trading (although that is seductive) as exploring new ways of forecasting turning points in economic time series. Confronted with the dismal record of macroeconomic forecasters, for example, one can see that predicting turning points is a truly fundamental problem. And this is true, I hardly need to add, for practical business forecasts. Your sales may do well – and exponential smoothing models may suffice – until the next phase of the business cycle, and so forth.

So I am amazed by the robustness of the turning point predictions from the longer (30 trading days, 40 days, etc.) groupings.

I just have never myself developed or probably even seen an example of predicting turning points as clearly as the one I presented in the previous post relating to the Hong Kong Hang Seng Index.

[Chart: turning point prediction for the Hong Kong Hang Seng Index]

A Simple Example of Stock Market Predictability

Again, without claims as to whether it will help you make money, I want to close this post today with comments about another area of stock price predictability – perhaps even simpler and more basic than relationships regarding the high and low stock price over various periods.

This is an exercise you can try for yourself in a few minutes, and which leads to remarkable predictive relationships which I do not find easy to identify or track in the existing literature regarding stock market predictability.

First, download the Yahoo Finance historical data for SPY, the ETF mirroring the S&P 500. This gives you a spreadsheet with approximately 5530 trading day values for the open, high, low, close, volume, and adjusted close. Sort from oldest to most recent. Then calculate trading-day over trading-day growth rates, for the opening prices and then the closing prices. Then, set up a data structure associating the opening price growth for day t with the closing price growth for day t-1. In other words, lag the growth in the closing prices.

Then, calculate the OLS regression of the growth in opening prices on the lagged growth in closing prices.
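For readers who prefer code to spreadsheets, the same exercise can be sketched in Python. Since I cannot reproduce the Yahoo Finance download here, the snippet uses synthetic growth rates with a built-in lagged relationship; with the real SPY file you would substitute the computed open and close growth series:

```python
import numpy as np

# Synthetic stand-in for the SPY trading-day growth rates described above
rng = np.random.default_rng(0)
n = 1000
close_growth = rng.normal(0, 0.01, n)

# Opening-price growth that partly echoes the PRIOR day's closing growth
open_growth = 0.4 * np.roll(close_growth, 1) + rng.normal(0, 0.008, n)

# Align day t opening growth with day t-1 closing growth; drop the wrapped element
y = open_growth[1:]
lagged_close = np.roll(close_growth, 1)[1:]

# OLS: regress opening-price growth on lagged closing-price growth
X = np.column_stack([np.ones(n - 1), lagged_close])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"intercept {beta[0]:.5f}, slope {beta[1]:.3f}")

# Direction-of-change hit rate: sign of lagged close growth vs. sign of open growth
hits = np.mean(np.sign(lagged_close) == np.sign(y))
print(f"direction correct {hits:.0%}")
```

With real data, the slope estimate, R-squared, and direction-of-change percentage are what you would read off the spreadsheet output.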

You should get something like,

[Figure: Excel regression output – growth in opening prices regressed on lagged growth in closing prices]

This is, of course, an Excel package regression output. It indicates that X Variable 1, which is the lagged growth in the closing prices, is highly significant as an explanatory variable, although the intercept or constant is not.

This equation explains about 21 percent of the variation in the growth data for the opening prices.

It also successfully predicts the direction of change of the opening price about 65 percent of the time, or considerably better than chance.

Not only that, but the two and three-period growth in the closing prices are successful predictors of the two and three-period growth in the opening prices.

And it probably is possible to improve the predictive performance of these equations by autocorrelation adjustments.

Comments

Why present the above example? Well, because I want to establish credibility on the point that there are clearly predictable aspects of stock prices – and ones you perhaps have not heard of before.

The finance literature on stock market prediction and the properties of stock market returns, not to mention volatility, comprises some of the most beautiful and complex technical writing I know of.

But, still, I think new and important relationships can be discovered.

Whether this leads to profit-making is another question. And really, the standards have risen significantly in recent times, with program and high frequency trading possibly snatching profit opportunities from traders at the last microsecond.

So I think the more important point, from a policy standpoint if nothing else, may be whether it is possible to predict turning points – to predict broader movements of stock prices within which high frequency trading may be pushing the boundary.

Behavioral Economics and Holiday Gifts

Chapter 1 of Advances in Behavioral Economics highlights the core proposition of this emerging field – namely that real economic choices over risky outcomes do not conform to the expected utility (EU) hypothesis.

The EU hypothesis states that the utility of a risky distribution of outcomes is a probability-weighted average of the outcome utilities. Many violations of this principle are demonstrated with psychological experiments.
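In symbols, for a gamble paying outcome $x_i$ with probability $p_i$, the hypothesis says the gamble's utility is

```latex
U\bigl(\{(x_i, p_i)\}\bigr) \;=\; \sum_i p_i \, u(x_i)
```

where $u(\cdot)$ is the utility of a sure outcome. The experimental violations show that real choices systematically depart from this weighted average.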

These violations suggest “nudge” theory – that small, apparently inconsequential changes in the things people use can have disproportionate effects on behavior.

Along these lines, I found this PBS report by Paul Solman fascinating. In it, Solman, PBS economics correspondent, talks to Sendhil Mullainathan at Harvard University about consumer innovations that promise to improve your life through behavioral economics – and can be gifts for this Season. 

Happy Holidays all. 

China – Trade Colossus or Assembly Site?

There is a fascinating paper – How the iPhone Widens the United States Trade Deficit with the People’s Republic of China. In this Asian Development Bank Institute (ADBI) white paper, Yuqing Xing and his coauthor document the value chain for an Apple iPhone:

[Figure: value chain breakdown for the Apple iPhone]

The source for this breakout, incidentally, is a “teardown” performed by the IT market research company iSuppli, still accessible at https://technology.ihs.com/389273/iphone-3g-s-carries-17896-bom-and-manufacturing-cost-isuppli-teardown-reveals. In other words, iSuppli physically took apart an iPhone to identify the manufacturers of the components.

The Paradox

After estimating that, in

2009 iPhones contributed US$1.9 billion to the trade deficit, equivalent to about 0.8% of the total US trade deficit with the PRC,

the authors go on to point out that

..most of the export value and the deficit due to the iPhone are attributed to imported parts and components from third countries and have nothing to do with the PRC. Chinese workers simply put all these parts and components together and contribute only US$6.5 to each iPhone, about 3.6% of the total manufacturing cost (e.g., the shipping price). The traditional way of measuring trade credits all of the US$178.96 to the PRC when an iPhone is shipped to the US, thus exaggerating the export volume as well as the imbalance. Decomposing the value added along the value chain of iPhone manufacturing suggests that, of the US$2.0 billion worth of iPhones exported from the PRC, 96.4% in fact amounts to transfers from Germany (US$326 million), Japan (US$670 million), Korea (US$259 million), the US (US$108 million), and other countries (US$ 542 million). All of these countries are involved in the iPhone production chain.

Yuqing Xing builds on the paradox in his more recent China’s High-Tech Exports: The Myth and Reality published in 2014 in MIT’s Asian Economic Papers.

Prevailing trade statistics are inconsistent with trade based on global supply chains and mistakenly credit entire values of assembled high-tech products to China. China’s real contribution to the reported 82 percent high-tech exports is labor not technology. High-tech products, mainly made of imported parts and components, should be called “Assembled High-tech.” To accurately measure high-tech exports, the value-added approach should be utilized with detailed analysis on the value chains distributions across countries. Furthermore, if assembly is the only source of value-added by Chinese workers, in terms of technological contribution these assembled high-tech exports are indifferent to labor-intensive products, and so they should be excluded from the high-tech classification.

MNEs, in particular Taiwanese IT firms in China, have performed an important role in the rapid expansion of high-tech exports. The trend of production fragmentation and outsourcing activities of MNEs in information and communication technology has benefitted China significantly, because of its huge labor endowment. The small share of indigenous firms in high-tech exports implies that China has yet to become a real competitor of the United States, EU, and Japan. That China is the number one high-tech exporter is thus a myth rather than a reality.

Xing and Yang

This perspective – that it is really “value-added” that we should focus on, rather than the total dollar volume of trade coming in or going out of a country – is interesting, but I can’t help but think there is a disconnect when you consider actual Chinese foreign exchange reserves, shown below (source – http://www.stats.gov.cn/tjsj/ndsj/2013/indexeh.htm).

[Chart: China’s foreign exchange reserves]

So currently China holds nearly 3.5 trillion dollars in foreign exchange reserves – most of which, but not all, is comprised of US dollars.

This is a huge amount of money, on the order of five percent of total global GDP.

How could China have accumulated this merely by being an assembly site for high tech and other products (see Five Facts about Value-Added Exports and Implications for Macroeconomics and Trade Research)? How can this be attributable just to mistakes in counting the origin of the many components in goods coming from China? Don’t those products have to come in and be counted as imports?

There is a mystery here, which it would be good to resolve.

Assembly photo at top from Apple Insider

Forecasting the Downswing in Markets – II

Because the Great Recession of 2008-2009 was closely tied with asset bubbles in the US and other housing markets, I have a category for asset bubbles in this blog.

In researching the housing and other asset bubbles, I have been surprised to discover that there are economists who deny their existence.

By one definition, an asset bubble is a movement of prices in a market away from fundamental values in a sustained manner. While there are precedents for suggesting that bubbles can form in the context of rational expectations (for example, Blanchard’s widely quoted 1982 paper), it seems more reasonable to consider that “noise” investors who are less than perfectly informed are part of the picture. Thus, there is an interesting study of the presence and importance of “out-of-town” investors in the recent run-up of US residential real estate prices which peaked in 2008.

The “deviations from fundamentals” approach in econometrics often translates to attempts to develop or show breaks in cointegrating relationships, between, for example, rental rates and housing prices. Let me just say right off that the problem with this is that the whole subject of cointegration of nonstationary time series is fraught with statistical pitfalls – such as weak tests to reject unit roots. To hang everything on whether or not Granger causation can or cannot be shown is to really be subject to the whims of random influences in the data, as well as violations of distributional assumptions on the relevant error terms.

I am sorry if all that sounds kind of wonkish, but it really needs to be said.

Institutionalist approaches seem more promising – such as a recent white paper arguing that the housing bubble and bust was the result of a…

supply-side phenomenon, attributable to an excess of mispriced mortgage finance: mortgage-finance spreads declined and volume increased, even as risk increased—a confluence attributable only to an oversupply of mortgage finance.

But what about forecasting the trajectory of prices, both up and then down, in an asset bubble?

What can we make out of charts such as this, in a recent paper by Sornette and Cauwels?

[Figure: “negative bubble” chart from the Sornette and Cauwels paper]

Sornette and the many researchers collaborating with him over the years work with a paradigm of an asset bubble as a faster-than-exponential increase in prices. In an as yet futile effort to extend an olive branch to traditional economists (Sornette is a geophysicist by training), Sornette invokes the meme of bubbles following from rational expectations. The idea is that it could be rational for an investor to participate in a market in the throes of an asset bubble, provided the investor believes that near-term gains adequately compensate for the increased risk of a collapse of prices. This is the “greater fool” theory to a large extent, and I always take delight in pointing out that one of the most intelligent of all human beings – Isaac Newton – was burned by exactly such a situation hundreds of years ago.

In any case, the mathematics of the Sornette et al approach are organized around the log-periodic power law, expressed in the following equation with the Sornette and Cauwels commentary.

[Image: the log-periodic power law (LPPL) equation, with Sornette and Cauwels’ commentary]
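
For reference, the standard Johansen-Ledoit-Sornette (JLS) form of the log-periodic power law, as it appears throughout this literature, is:

```latex
\ln p(t) \;=\; A \;+\; B\,(t_c - t)^{m} \;+\; C\,(t_c - t)^{m}\cos\!\big(\omega \ln(t_c - t) - \phi\big)
```

Here tc is the critical time; during a bubble B < 0 and 0 < m < 1, so the expected log-price grows faster than exponentially while oscillating at an accelerating (log-periodic) frequency as t approaches tc.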

From a big picture standpoint, the first thing to observe is that there is a parameter tc in the equation which is the “critical time.”

The whole point of this mathematical apparatus, which derives in part from differential equations and from basic modeling approaches common in physics, is that faster-than-exponential growth is destined to reach a point at which it basically goes ballistic. That is the critical point. The purpose of forecasting in this context, then, is to predict when this will occur: when will the asset bubble reach its maximum price and then collapse?

And the Sornette framework allows for negative as well as positive price movements according to the dynamics in this equation. So, it is possible, if we can implement this, to predict how far the market will fall after the bubble pops, so to speak, and when it will turn around.

Pretty heady stuff.

The second big picture feature is to note the number of parameters to be estimated in fitting this model to real price data – minimally constants A, B, and C, an exponent m, the angular frequency ω and phase φ, plus the critical time.

For the mathematically inclined, there is a thread of criticism and response, more or less culminating in Clarifications to questions and criticisms on the Johansen–Ledoit–Sornette financial bubble model, which used to be available as a PDF download from ETH Zurich.

In brief, the issue is whether the numerical methods fitting the data to the LPPL model arrive at local, rather than global, optima. Obviously, different values for the parameters can lead to wholly different forecasts of the critical time tc.

To some extent, this issue can be dealt with by running a great number of estimations of the parameters, or by developing collateral metrics for adequacy of the estimates.

But the bottom line is – regardless of the extensive applications of this approach to all manner of asset bubbles internationally and in different markets – the estimation of the parameters seems more in the realm of art, than science, at the present time.

However, it may be that mathematical or computational breakthroughs are possible.

I feel these researchers are “very close.”

In any case, it would be great if there were a package in R or the like to gin up these estimates of the critical time, applying the log-periodic power law.
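
Lacking such a package, here is a rough sketch of what the estimation might look like, using nonlinear least squares on synthetic data. All parameter values, starting points, and bounds below are hypothetical choices for illustration, not calibrations from the literature; the bounds on m and tc loosely follow conventions in the LPPL papers.

```python
import numpy as np
from scipy.optimize import curve_fit

def lppl(t, A, B, C, m, omega, phi, tc):
    """Log-periodic power law for the log-price approaching critical time tc."""
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) - phi)

# Synthetic "bubble" log-price series with known parameters, plus small noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 9.5, 200)                  # sample ends before tc = 10
true_params = (2.0, -0.5, 0.05, 0.5, 8.0, 1.0, 10.0)
log_p = lppl(t, *true_params) + rng.normal(0, 0.005, t.size)

# Bounds keep tc beyond the end of the sample and m in (0, 1); a decent
# starting point matters because the objective has many local optima
p0 = (2.0, -0.4, 0.1, 0.5, 7.0, 0.5, 10.5)
bounds = ([-np.inf, -np.inf, -np.inf, 0.1,  2.0, 0.0,       t[-1] + 1e-6],
          [ np.inf,  np.inf,  np.inf, 0.9, 15.0, 2 * np.pi, t[-1] + 5.0])
params, _ = curve_fit(lppl, t, log_p, p0=p0, bounds=bounds, maxfev=20000)
tc_hat = params[-1]
print(f"estimated critical time tc = {tc_hat:.2f}")
```

In practice one would re-run the fit from many random starting points and keep the best objective value – exactly the multi-start strategy mentioned above for dealing with local optima.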

Then we could figure out “how low it can go.”

And, a final note to this post – it is ironic that as I write and post this, the stock markets have recovered from their recent swoon and are setting new records. So I guess I just want to be prepared, and am not willing to believe the runup can go on forever.

I’m also interested in methodologies that can keep forecasters usefully at work, during the downswing.

The Holy Grail of Business Forecasting – Forecasting the Next Downturn

What if you could predict the Chicago Fed National Activity Index (CFNAI), interpolated monthly values of the growth of nominal GDP, the Aruoba-Diebold-Scotti (ADS) Business Conditions Index, and the Kansas City Financial Stress Index (KCFSI) three, five, seven, even twelve months into the future? What if your model also predicted turning points in these US indexes, and also similar macroeconomic variables for countries in Asia and the European Union? And what if you could do all this with data on monthly returns on the stock prices of companies in the financial sector?

That’s the claim of Linda Allen, Turan Bali, and Yi Tang in a fascinating 2012 paper Does Systemic Risk in the Financial Sector Predict Future Economic Downturns?

I’m going to refer to these authors as Bali et al, since it appears that Turan Bali, shown below, did some of the ground-breaking research on estimating parametric distributions of extreme losses. Bali also is the corresponding author.

[Photo: Turan Bali]

Bali et al develop a new macroindex of systemic risk, which they call CATFIN, that predicts future real economic downturns.

CATFIN is estimated using both value-at-risk (VaR) and expected shortfall (ES) methodologies, each of which are estimated using three approaches: one nonparametric and two different parametric specifications. All data used to construct the CATFIN measure are available at each point in time (monthly, in our analysis), and we utilize an out-of-sample forecasting methodology. We find that all versions of CATFIN are predictive of future real economic downturns as measured by gross domestic product (GDP), industrial production, the unemployment rate, and an index of eighty-five existing monthly economic indicators (the Chicago Fed National Activity Index, CFNAI), as well as other measures of real macroeconomic activity (e.g., NBER recession periods and the Aruoba-Diebold-Scott [ADS] business conditions index maintained by the Philadelphia Fed). Consistent with an extensive body of literature linking the real and financial sectors of the economy, we find that CATFIN forecasts aggregate bank lending activity.

The following graphic illustrates three components of CATFIN and the simple arithmetic average, compared with US recession periods.

[Figure: three components of CATFIN and their simple arithmetic average, with US recession periods shaded]

Thoughts on the Method

OK, here’s the simple explanation. First, these researchers identify US financial companies based on definitions at Kenneth French’s site at the Tuck School of Business (Dartmouth). There are apparently 500-1000 such companies over the period 1973-2009. Then, for each month in this period, rates of return on the stock prices of these companies are calculated. Then, three methods are used to estimate 1% value at risk (VaR) – two parametric methods and one nonparametric method. The nonparametric method is straightforward –

The nonparametric approach to estimating VaR is based on analysis of the left tail of the empirical return distribution conducted without imposing any restrictions on the moments of the underlying density…. Assuming that we have 900 financial firms in month t, the nonparametric measure of 1% VaR is the ninth lowest observation in the cross-section of excess returns. For each month, we determine the one percentile of the cross-section of excess returns on financial firms and obtain an aggregate 1% VaR measure of the financial system for the period 1973–2009.

So far, so good. This gives us the data for the graphic shown above.
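
The order-statistic calculation in the quote is simple enough to sketch directly. The simulated returns below are purely illustrative stand-ins for the cross-section of financial-firm excess returns:

```python
import numpy as np

def nonparametric_var(excess_returns, alpha=0.01):
    """Nonparametric VaR as an order statistic: with n firms, take the
    ceil(alpha * n)-th lowest cross-sectional excess return
    (the 9th lowest when n = 900 and alpha = 0.01)."""
    r = np.sort(np.asarray(excess_returns))
    k = int(np.ceil(alpha * r.size))   # k = 9 when r.size == 900
    return r[k - 1]

# One illustrative "month": 900 simulated, fat-tailed firm excess returns
rng = np.random.default_rng(42)
month_returns = rng.standard_t(df=4, size=900) * 0.08
var_1pct = nonparametric_var(month_returns)
print(f"nonparametric 1% VaR for the month: {var_1pct:.4f}")
```

Repeating this month by month over 1973–2009 yields the time series plotted in the graphic.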

In order to make this predictive, the authors write that –

[Equation image from the paper: the predictive regression relating future macroeconomic activity to CATFIN]

Like a lot of leading indicators, the CATFIN predictive setup “over-predicts” to some extent. Thus, there are five instances in which a spike in CATFIN is not followed by a recession, thereby providing a false positive signal of future real economic distress. However, the authors note that in many of these cases, predicted macroeconomic declines may have been averted by prompt policy intervention. Their discussion of this is very interesting, and plausible.

What This Means

The implications of this research are fairly profound – indicating, above all, the leading role of the financial sector in the overall economy today. Certainly, this is consistent with the balance sheet recession of 2008-2009, and it probably will continue to be relevant going forward – since nothing really has changed, and further concentration of ownership in finance has followed 2008-2009.

I do think that Serena Ng’s basic point in a recent review article probably is relevant – that not all recessions are the same. So it may be that this method would not work as well for, say, the period 1945-1970, before financialization of the US and global economies.

The incredibly ornate mathematics of modeling the tails of return distributions are relevant in this context, incidentally, since the nonparametric approach of looking at the empirical distributions month-by-month could be suspect on grounds of “cherry-picking” – some companies could be included and others excluded to make the numbers come out. Such manipulation is much more difficult in a complex maximum likelihood estimation of the location parameters of these obscure distributions.
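
For contrast with the order-statistic approach, a parametric 1% VaR can be sketched by fitting a heavy-tailed distribution by maximum likelihood. The Student’s t used here is a generic illustration on simulated returns, not one of the paper’s own parametric specifications:

```python
import numpy as np
from scipy import stats

# Hypothetical cross-section of 900 fat-tailed firm excess returns
rng = np.random.default_rng(7)
returns = rng.standard_t(df=4, size=900) * 0.08

# MLE of the tail index (df), location, and scale of a Student's t,
# then the 1% VaR is the fitted distribution's 1st percentile
df_hat, loc_hat, scale_hat = stats.t.fit(returns)
parametric_var = stats.t.ppf(0.01, df_hat, loc_hat, scale_hat)
print(f"parametric 1% VaR: {parametric_var:.4f}")
```

Because every observation enters the likelihood, quietly dropping a few firms shifts all the fitted parameters at once – which is the point made above about manipulation being harder in the parametric setting.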

So the question on everybody’s mind is – WHAT DOES THE CATFIN MODEL INDICATE NOW ABOUT THE NEXT FEW MONTHS? Unfortunately, I am unable to answer that, although I have corresponded with some of the authors to inquire whether any research along such lines can be cited.

Bottom line – very impressive research and another example of how important science can get lost in the dance of prestige and names.