Asset Bubbles

It seems only yesterday when “rational expectations” ruled serious discussions of financial economics. Value was determined by the CAPM – the capital asset pricing model. Markets reflected the operation of rational agents who bought or sold assets largely on fundamentals. Although imprudent or foolish investors were acknowledged to exist, it was held to be impossible for a market as a whole to be seized by medium- to longer-term speculative movements or “bubbles.”

This view of financial and economic dynamics is at the same time complacent and intellectually aggressive. Thus, proponents of the efficient market hypothesis contest the accuracy of earlier discussions of the Dutch tulip mania.

Now, however, there seems no doubt that bubbles in asset markets are both real and intractable to regulation and management, despite their catastrophic impacts.

But asset bubbles are so huge now that Larry Summers, speaking recently before the International Monetary Fund (IMF), suggested that the US is in secular stagnation, and that the true, “market-clearing” interest rate is negative. Thus, given the unreality of implementing a negative interest rate, we face a long future at the zero bound – essentially zero interest rates.

Furthermore, as Paul Krugman highlights in a follow-on blog post – Summers says the economy needs bubbles to generate growth.

We now know that the economic expansion of 2003-2007 was driven by a bubble. You can say the same about the latter part of the 90s expansion; and you can in fact say the same about the later years of the Reagan expansion, which was driven at that point by runaway thrift institutions and a large bubble in commercial real estate.

So you might be tempted to say that monetary policy has consistently been too loose. After all, haven’t low interest rates been encouraging repeated bubbles?

But as Larry emphasizes, there’s a big problem with the claim that monetary policy has been too loose: where’s the inflation? Where has the overheated economy been visible?

So how can you reconcile repeated bubbles with an economy showing no sign of inflationary pressures? Summers’s answer is that we may be an economy that needs bubbles just to achieve something near full employment – that in the absence of bubbles the economy has a negative natural rate of interest. And this hasn’t just been true since the 2008 financial crisis; it has arguably been true, although perhaps with increasing severity, since the 1980s.

Re-enter the redoubtable “liquidity trap” stage left.

Summers and Krugman discuss asset bubbles and their current manifestation at a fairly abstract and theoretical level.

But more and more, the global financial press points the finger at the US Federal Reserve and its Quantitative Easing (QE) as the cause of emerging bubbles around the world.

One of the latest to chime in is the Chinese financial magazine Caixin with Heading Toward a Cliff.

The Fed’s QE policy has caused a gigantic liquidity bubble in the global economy, especially in emerging economies and asset markets. The improvement in the global economy since 2008 is a bubble phenomenon, centering around the demand from bubble goods or wealth effect. Hence, real Fed tightening would prick the bubble and trigger another recession. This is why some talk of the Fed tightening could trigger the global economy to trend down…

The odds are that the world is experiencing a bigger bubble than the one that unleashed the 2008 Global Financial Crisis. The United States’ household net wealth is much higher than at the peak in the last bubble. China’s property rental yields are similar to what Japan experienced at the peak of its property bubble. The biggest part of today’s bubble is in government bonds valued at about 100 percent of global GDP. Such a vast amount of assets is priced at a negative real yield. Its low yield also benefits other borrowers. My guesstimate is that this bubble subsidizes debtors to the tune of 10 percent of GDP or US$ 7 trillion per annum. The transfer of income from savers to debtors has never happened on such a vast scale, not even close. This is the reason that so many bubbles are forming around the world, because speculation is viewed as an escape route for savers.

The property market in emerging economies is the second-largest bubble. It is probably 100 percent overvalued. My guesstimate is that it is US$ 50 trillion overvalued.

Stocks, especially in the United States, are significantly overvalued too. The overvaluation could be one-third or about US$ 20 trillion.

There are other bubbles too. Credit risk, for example, is underpriced. The art market is bubbly again. These bubbles are not significant compared to the big three above.

The Caixin author – Andy Xie – goes on to predict inflation as the eventual outcome – a prediction I find far-fetched given the coming reaction to Fed tapering.

And the reach of the Chinese real estate bubble is highlighted by a CBS 60 Minutes video filmed some months ago.

Anatomy of a Bubble

The Great Recession of 2008-2009 alerted us – what goes up, can come down. But are there common patterns in asset bubbles? Can the identification of these patterns help predict the peak and subsequent point of rapid decline?

Macrotrends is an interesting resource in this regard. The following is a screenshot of a Macrotrends chart which, in the original, has interactive features.

[Chart: The Four Biggest US Bubbles – Macrotrends.org]

Scaling the NASDAQ, gold, and oil prices in terms of percentage changes from points several years preceding price peaks suggests bubbles share the same cadence, in some sense.

These curves highlight that asset bubbles can occur over significant periods – several years to a decade. This is the part of the seduction. At first, when commentators cry “bubble,” prudent investors stand aside to let prices peak and crash. Yet prices may continue to rise for years, leaving investors increasingly feeling they are “being left behind.”

Here are data from three asset bubbles – the Hong Kong Hang Seng Index, oil prices to refiners (combined), and the NASDAQ 100 Index.

[Chart: anatomy of a bubble – Hang Seng, oil prices, and NASDAQ 100 with peaks aligned]

I arrange these time series so their peak prices – the peak of the bubble – coincide, despite the fact that these peaks occurred at different historical times (October 2007, August 2008, March 2000, respectively).

I include approximately 5 years of prior values of each time series, and scale the vertical dimensions so the peaks equal 100 percent.
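For readers who want to try this themselves, here is a minimal Python sketch of that peak-alignment and rescaling step. The synthetic “bubble” series below is a stand-in for the actual Hang Seng, oil, or NASDAQ data, so the function name and the data are illustrative assumptions, not the exact procedure behind the chart.

```python
# Sketch: align a price series on its peak and rescale so the peak = 100.
# The synthetic "bubble" below is a stand-in for the actual index or commodity data.
import numpy as np
import pandas as pd

def align_on_peak(series, months_before=60):
    """Re-index a monthly price series in months relative to its peak, scaled so the peak = 100."""
    s = series.dropna()
    peak_pos = int(s.values.argmax())            # position of the bubble peak
    rel_index = np.arange(len(s)) - peak_pos     # months relative to the peak (0 = peak)
    scaled = 100.0 * s.values / s.values[peak_pos]
    return pd.Series(scaled, index=rel_index).loc[-months_before:]

# Synthetic illustration: exponential-ish run-up for 90 months, then a collapse
t = np.arange(120)
prices = pd.Series(50 * np.exp(0.02 * t) * np.where(t > 90, np.exp(-0.08 * (t - 90)), 1.0),
                   index=pd.date_range("2000-01-31", periods=120, freq="M"))

aligned = align_on_peak(prices)
print(aligned.idxmax(), aligned.max())   # peak sits at relative month 0, with value 100.0
```

Stacking several series prepared this way on one chart reproduces the kind of comparison shown above.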

This produces a chart which suggests three distinct phases to an asset bubble.

Phase 1 is a ramp-up. In this initial phase, prices surge for 2-3 years, then experience a relatively minor drop.

Phase 2 is the beginning of a sustained period of faster-than-exponential growth, culminating in the market peak, followed immediately by the market collapse. Within a few months of the peak, the rates of growth of prices in all three series are quite similar, indeed almost identical. These rates of price growth are associated with “an accelerating acceleration” of growth, in fact – as a study of the first and second differences of the growth rates shows.
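As an aside, that “accelerating acceleration” is easy to check numerically: compute period-to-period growth rates, then their first and second differences. A small numpy sketch on made-up prices (not actual market data):

```python
# Check for "accelerating acceleration": growth rates and their first and second
# differences, computed on a made-up price run-up (not actual market data).
import numpy as np

prices = np.array([100, 104, 109, 116, 126, 140, 160, 190, 235, 300], dtype=float)

growth = np.diff(prices) / prices[:-1]   # period-to-period growth rates
d1 = np.diff(growth)                     # first differences: is growth itself rising?
d2 = np.diff(d1)                         # second differences of the growth rates

print(np.round(growth, 3))  # growth rates climb steadily toward the "peak"
print(np.round(d1, 3))      # positive throughout in a faster-than-exponential run-up
print(np.round(d2, 3))
```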

The critical time point, at which peak price occurs, looks like the point at which traders can see the vertical asymptote just a month or two in front of them, given the underlying dynamics.

Phase 3 is the market collapse. Prices rapidly drop by perhaps 80 percent of the value they gained from the initial point – over the course of 1-2 years. This is sometimes modeled as a “negative bubble.” It is commonly held that the correction overshoots, and then adjusts back.

There also seems to be a Phase 4, when prices can recover some or perhaps almost all of their lost glory, but where volatility can be substantial.

Predictability

It seems reasonable that the critical point, or peak price, should be more or less predictable, a few months into Phase 2.

The extent of the drop from the peak in Phase 3 seems more or less predictable, also.

The question really is whether the dynamics of Phase 1 are truly informative. Is there something going on in Phase 1 that is different than in immediately preceding periods? Phase 1 seems to “set the stage.”

But there is no question the lure of quick riches in the advanced stages of an asset bubble can dazzle the most intelligent among us – and as a case in point, I give you Sir Isaac Newton, co-inventor with Leibniz of the calculus, discoverer of the law of gravitation, and exponent of what was, in his time, the vast new science of mathematical physics.

[Image: Sir Isaac Newton]

A post on Business Insider highlights his unhappy case with the South Sea stock bubble. Newton was in this scam early, and then got out. But the Bubble kept levitating, so he entered the market again near the top – in Didier Sornette’s terminology, near the critical point of the process – only to lose what was in his time a vast fortune, worth about $2.4 million in today’s money.

Links – 2014, Early January

US and Global Economy

Bernanke sees headwinds fading as US poised for growth – happy talk about how good things are going to be as quantitative easing is “tapered.”

Slow Growth and Short Tails – but Dr. Doom (Nouriel Roubini) is guardedly optimistic about 2014

The good news is that economic performance will pick up modestly in both advanced economies and emerging markets. The advanced economies, benefiting from a half-decade of painful private-sector deleveraging (households, banks, and non-financial firms), a smaller fiscal drag (with the exception of Japan), and maintenance of accommodative monetary policies, will grow at an annual pace closer to 1.9%. Moreover, so-called tail risks (low-probability, high-impact shocks) will be less salient in 2014. The threat, for example, of a eurozone implosion, another government shutdown or debt-ceiling fight in the United States, a hard landing in China, or a war between Israel and Iran over nuclear proliferation, will be far more subdued.

GOLDMAN: Here’s What Will Happen With GDP, Housing, The Fed, And Unemployment Next Year – Goldman Sachs chief economist Jan Hatzius writes 10 Questions for 2014. Hatzius is very bullish on 2014!

Three big macro questions for 2014 – Gavyn Davies on tapering QE, China, and the euro. Requires free registration to read.

The State of the Euro, In One Graph – from Paul Krugman; the point being that the EU’s austerity policies have significantly worsened the debt ratios of Spain, Portugal, Ireland, Greece, and Italy, despite lower interest rates.

[Chart: The State of the Euro, In One Graph – Paul Krugman]

Technology

JCal’s 2014 predictions: Intense competition for YouTube and a shake up in online video economics

Rumblings in the YouTube community in the midst of tremendous growth in video productions – interesting.

Do disruptive technologies really overturn market leadership?

Discusses tests of the idea that …such technologies have the characteristic that they perform worse on an important metric (or metrics) than current market leading technologies. Of course, if that were it, then the technologies could hardly be called disruptive and would be confined, at best, to niche uses.

The second critical property of such technologies is that while they start behind on key metrics, they improve relatively rapidly and eventually come to outperform existing technologies on many metrics. It is there that disruptive technologies have their bite. Initially, they are poor performers and established firms would not want to integrate them into their products as they would disappoint their customers who happen to be most of the current market. However, when performance improves, the current technologies are displaced and established firms want to get in on the game. The problem is that they may be too late. In other words, Christensen’s prediction was that established firms would have legitimate “blind spots” with regard to disruptive technologies leaving room open for new entrants to come in, adopt those technologies and, ultimately, displace the established firms as market leaders.

Big Data – A Big Opportunity for Telecom Players

Today, with the sharp increase in online and mobile shopping and the use of apps, telecom companies have access to consumer buying behaviours and preferences, which are actually being used with real-time geo-location and social network analysis to target consumers. Hmmm.

5 Reasons Why Big Data Will Crush Big Research

Traditional marketing research or “big research” focuses disproportionately on data collection. This mentality is a hold-over from the industry’s early post-WWII boom – when data was legitimately scarce. But times have changed dramatically since Sputnik went into orbit and the Ford Fairlane was the No. 1-selling car in America.

Here is why big data is going to win.

Reason 1: Big research is just too small… Reason 2: Big research lacks relevance… Reason 3: Big research doesn’t handle complexity well… Reason 4: Big research’s skill sets are outdated… Reason 5: Big research lacks the will to change…

I know “market researchers” who fit the profile in this Forbes article, and who are more or less lost in the face of the new extent of data and techniques for its analysis. On the other hand, I hear from the grapevine that many executives and managers can’t really see what the Big Data guys in their company are doing. There are success stories on the Internet (see the previous post here, for example), but this may be best case. Worst case is a company splurges on the hardware to implement Big Data analytics, and the team just comes up with gibberish – very hard to understand relationships with no apparent business value.

Some 2013 Recaps

Top Scientific Discoveries of 2013

Humankind goes interstellar … Genome editing … Billions and billions of Earths

[Image: exoplanets]

Global warming: a cause for the pause … See-through brains … Intergalactic Neutrinos … A new meat-eating mammal

[Image: olinguito]

Pesticide controversy grows … Making organs from stem cells … Implantable electronics … Dark matter shows up — or doesn’t … Fears of the fathers

The 13 Most Important Charts of 2013

[Image: The 13 Most Important Charts of 2013]

And finally, a miscellaneous item. Hedge funds apparently do beat the market, or at least funds operating in the tail of the performance distribution show distinctive characteristics.

How do Hedge Fund “Stars” Create Value? Evidence from Their Daily Trades

I estimate hedge fund performance by computing calendar-time transaction portfolios (see, e.g., Seasholes and Zhu, 2010) with holding periods ranging from 21 to 252 days. Across all holding periods, I find no evidence that the average or median hedge fund outperforms, after accounting for trading commissions. However, I find significant evidence of outperformance in the right-tail of the distribution. Specifically, bootstrap simulations indicate that the annual performance of the top 10-30% of hedge funds cannot be explained by luck. Similarly, I find that superior performance persists. The top 30% of hedge funds outperform by a statistically significant 0.25% per month over the subsequent year. In sharp contrast to my hedge fund findings, both bootstrap simulations and performance persistence tests fail to reveal any outperformance among non-hedge fund institutional investors….

My remaining tests investigate how outperforming hedge funds (i.e., “star” hedge funds) create value. My main findings can be summarized as follows. First, star hedge funds’ profits are concentrated over relatively short holding periods. Specifically, more than 25% (50%) of star hedge funds’ annual outperformance occurs within the first month (quarter) after a trade. Second, star hedge funds tend to be short-term contrarians with small price impacts. Third, the profits of star hedge funds are concentrated in their contrarian trades. Finally, the performance persistence of star hedge funds is substantially stronger among funds that follow contrarian strategies (or funds with small price impacts) and is not at all present for funds that follow momentum strategies (or funds with large price impacts).

The On-Coming Tsunami of Data Analytics

More than 25,000 people visited businessforecastblog between March 2012 and December 2013, some spending hours on the site. Traffic ran nearly 200 visitors a day in December, before my ability to post was blocked by a software glitch and we did this re-boot.

Now I have hundreds of posts offline, pertaining to several themes, discussed below. How to put this material back up – as reposts, re-organized posts, or as longer topic summaries?

There’s a silver lining. This forces me to think through forecasting, predictive and data analytics.

One thing this blog does is compile information on which forecasting and data analytics techniques work, and, to some extent, how they work, how key results are calculated. I’m big on computation and performance metrics, and I want to utilize the SkyDrive more extensively to provide full access to spreadsheets with worked examples.

Often my perspective is that of a “line worker” developing sales forecasts. But there is another important focus – business process improvement. The strength of a forecast is measured, ultimately, by its accuracy. Efforts to improve business processes, on the other hand, are clocked by whether improvement occurs – whether costs of reaching customers are lower, participation rates higher, customer retention better or in stabilization mode (lower churn), and whether the executive suite and managers gain understanding of who the customers are. And there is a third focus – that of the underlying economics, particularly the dynamics of the institutions involved, such as the US Federal Reserve.

Right off, however, let me say there is a direct solution to forecasting sales next quarter or in the coming budget cycle. This is automatic forecasting software, with Forecast Pro being one of the leading products. Here’s a YouTube video with the basics about that product.

You can download demo versions and participate in Webinars, and attend the periodic conferences organized by Business Forecast Systems showcasing user applications in a wide variety of companies.

So that’s a good solution for starters, and there are similar products, such as the SAS/ETS time series software, and Autobox.

So what more would you want?

Well, there’s a need for background information, and there’s a lot of terminology. It’s useful to know about exponential smoothing and random walks, as well as autoregressive and moving average models. Really, some reaches of this subject are arcane, but nothing is worse than a forecast setup which gains the confidence of stakeholders and then falls flat on its face. So, yes, eventually, you need to know about the “pathologies” of the classic linear regression (CLR) model – heteroscedasticity, autocorrelation, multicollinearity, and specification error!

And it’s good to gain this familiarity in small doses, in connection with real-world applications or even forecasting personalities or celebrities. After a college course or two, it’s easy to lose track of concepts. So you might look at this blog as a type of refresher sometimes.

Anticipating Turning Points in Time Series

But the real problem comes with anticipating turning points in business and economic time series. Except when modeling seasonal variation, exponential smoothing usually shoots over or under a turning point in any series it is modeling.
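A tiny sketch makes the point concrete: simple exponential smoothing applied to a made-up series that peaks and turns down. The series and the smoothing constant here are invented for illustration, not taken from any of the data discussed on this blog.

```python
# Simple exponential smoothing on a made-up series with a turning point at t = 7.
# The one-step forecast keeps rising past the peak and then sits above the falling actuals.
import numpy as np

y = np.array([10, 12, 15, 19, 24, 30, 37, 45, 40, 33, 27, 22], dtype=float)
alpha = 0.3                                   # smoothing constant (illustrative choice)

fc = np.empty_like(y)
fc[0] = y[0]
for t in range(1, len(y)):
    # one-step-ahead forecast = weighted average of last observation and last forecast
    fc[t] = alpha * y[t - 1] + (1 - alpha) * fc[t - 1]

for t in range(len(y)):
    print(f"t={t:2d}  actual={y[t]:5.1f}  forecast={fc[t]:6.1f}")
```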

If this were easy to correct, macroeconomic forecasts would be much better. The following chart highlights the poor performance, however, of experts contributing to the quarterly Survey of Professional Forecasters, maintained by the Philadelphia Fed.

[Chart: Survey of Professional Forecasters – consensus vs. current-quarter GDP growth forecasts]

So, the red line is the SPF consensus forecast for GDP growth on a three-quarter horizon, and the blue line is the forecast or nowcast for the current quarter (there is a delay in the release of current numbers). Notice the huge dips in the current-quarter estimate, associated with the four recessions of 1981-82, 1990-91, 2001-2, and 2008-9. A mere three months prior to these catastrophic drops in growth, leading forecasters at big banks, consulting companies, and universities totally missed the boat.

This is important in a practical sense, because recessions turn the world of many businesses upside down. All bets are off. The forecasting team is reassigned or let go as an economy measure, and so forth.

Some forward-looking information would help business intelligence focus on reallocating resources to sustain revenue as much as possible, using analytics to design cuts exerting the smallest impact on future ability to maintain and increase market share.

Hedgehogs and Foxes

Nate Silver has a great table in his best-selling The Signal and the Noise on the qualities and forecasting performance of hedgehogs and foxes. The idea comes from a Greek poet: “The fox knows many little things, but the hedgehog knows one big thing.”

Following Tetlock, Silver finds foxes are multidisciplinary, adaptable, self-critical, cautious, empirical, and tolerant of complexity. By contrast, the hedgehog is specialized, sticks to the same approaches, stubbornly adheres to his model in spite of counter-evidence, and is order-seeking, confident, and ideological. The evidence suggests foxes generally outperform hedgehogs, just as ensemble methods typically outperform a single technique in forecasting.

Message – be a fox.

So maybe this can explain some of the breadth of this blog. If we have trouble predicting GDP growth, what about forecasts in other areas – such as weather, climate change, or that old chestnut, sun spots? And maybe it is useful to take a look at how to forecast all the inputs and associated series – such as exchange rates, growth by global region, the housing market, interest rates, as well as profits.

And while we are looking around, how about brain waves? Can brain waves be forecast? Oh yes, it turns out there is a fascinating and currently applied new approach called neuromarketing, which uses headbands and electrodes, and even MRI machines, to detect deep responses of consumers to new products and advertising.

New Methods

I know I have not touched on cluster analysis and classification, areas making big contributions to improvement of business process. But maybe if we consider the range of “new” techniques for predictive analytics, we can see time series forecasting and analysis of customer behavior coming under one roof.

There is, for example, the many-predictor thread emerging in forecasting in the late 1990’s and especially in the last decade, with factor models for macroeconomic forecasting. Reading this literature, I’ve become aware of methods for mapping N explanatory variables onto a target variable when there are only M < N observations. These are sometimes called data shrinkage methods, and include principal components regression, ridge regression, and the lasso. There are several others, and a good reference is The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd edition, by Trevor Hastie, Robert Tibshirani, and Jerome Friedman. This excellent text is downloadable, accessible via the Tools, Apps, Texts, Free Stuff menu option located just to the left of the search utility in the heading for this blog.
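For concreteness, here is a hedged scikit-learn sketch of the shrinkage idea – more candidate predictors than observations, with ridge and lasso still producing usable estimates. The data, penalty values, and “true” coefficients are all invented for the illustration.

```python
# Sketch: shrinkage estimators (ridge, lasso) when predictors outnumber observations.
# Synthetic data: 12 observations, 20 candidate predictors, only 3 of which matter.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
M, N = 12, 20
X = rng.normal(size=(M, N))
true_beta = np.zeros(N)
true_beta[:3] = [4.0, -2.0, 1.5]                 # only the first three predictors matter
y = X @ true_beta + rng.normal(scale=0.5, size=M)

ridge = Ridge(alpha=1.0).fit(X, y)               # shrinks all coefficients toward zero
lasso = Lasso(alpha=0.1).fit(X, y)               # shrinks and zeroes out weak predictors

print("ridge coefficients:", np.round(ridge.coef_[:5], 2))
print("lasso coefficients:", np.round(lasso.coef_[:5], 2))
print("nonzero lasso terms:", int(np.sum(lasso.coef_ != 0)))
```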

There also is bagging, which is the topic of the previous post, as well as boosting, and a range of decision tree and regression tree modeling tactics, including random forests.

I’m actively exploring a number of these approaches, ginning up little examples to see how they work and how the computation goes. So far, it’s impressive. This stuff can really improve over the old approaches, which, as someone has pointed out, have been around since the 1950’s at least.

It’s here I think that we can sight the on-coming wave, just out there on the horizon – perhaps hundreds of feet high. It’s going to swamp the old approaches, changing market research forever and opening new vistas, I think, for forecasting, as traditionally understood.

I hope to be able to ride that wave, and, now that I put it that way, I feel a sense of urgency about keeping up my practice of web surfing.

Hope you come back and participate in the comments section, or email me at cvj@economicdataresources.com

Forecasting in Data-limited Situations – A New Day

Over the Holidays – while frustrated in posting by a software glitch – I looked at the whole “shallow data issue” in light of  a new technique I’ve learned called bagging.

Bottom line: using spreadsheet simulations, I can show bagging radically reduces out-of-sample forecast error in a situation typical of a lot of business forecasting – where there are just a few workable observations, quite a few candidate drivers or explanatory variables, and a lot of noise in the data.

Here is a comparison of the performance of OLS regression and bagging with out-of-sample data generated with the same rules which create the “sample data” in the example spreadsheet shown below.

[Chart: out-of-sample comparison of OLS regression and bagging]

The contrast is truly stark. Although, as we will see, the ordinary least squares (OLS) regression has an R2 or “goodness of fit” of 0.99, it does not generalize well out-of-sample, producing the purple line in the graph with 12 additional cases or observations. Bagging the original sample 200 times and re-estimating OLS regression on the bagged samples, then averaging the regression constants and coefficients, produces a much tighter fit on these out-of-sample observations.

Example Spreadsheet

The spreadsheet below illustrates 12 “observations” on a  TARGET or dependent variable and nine (9) explanatory variables, x1 through x9.

[Spreadsheet: example data – TARGET, x1 through x9, true coefficients, and error terms]

The top row with numbers in red lists the “true” values of these explanatory variables or drivers, and the column of numbers in red on the far right are the error terms (which are generated by a normal distribution with zero mean and standard deviation of 50).

So if we multiply 3 times 0.22 and add -6 times -2.79 and so forth, adding 68.68 at the end, we get the first value of the TARGET variable 60.17.

While this example is purely artificial, an artifact, one can imagine that these numbers are first differences – that is the current value of a variable minus its preceding value. Thus, the TARGET variable might record first differences in sales of a product quarter by quarter. And we suppose forecasts for  x1 through x9 are available, although not shown above. In fact, they are generated in simulations with the same generating mechanisms utilized to create the sample.
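A short numpy sketch of this kind of data-generating mechanism may help – twelve observations, nine drivers, a linear combination using the “true” constant and coefficients quoted further below, plus noise with a standard deviation of 50. The spreadsheet does not say how the drivers themselves were generated, so the standard normal draws here are an assumption, and the specific numbers will differ from the example.

```python
# Sketch of a data-generating mechanism like the spreadsheet example: TARGET is a
# linear combination of nine drivers plus N(0, 50) noise. The constant and coefficients
# follow the "true" values quoted in the text; the standard normal drivers are an assumption.
import numpy as np

rng = np.random.default_rng(123)
constant = 10.0
true_coefs = np.array([3, -6, 0.5, 15, 1, -1, -5, 0.25, 1], dtype=float)  # x1 .. x9

X = rng.normal(size=(12, 9))                            # the drivers (assumed distribution)
errors = rng.normal(loc=0.0, scale=50.0, size=12)       # error terms, sd = 50
TARGET = constant + X @ true_coefs + errors

print(np.round(TARGET, 2))
```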

Using the simplest multivariate approach, ordinary least squares (OLS) regression, we obtain the following results, displayed in Excel format –

[Spreadsheet: OLS regression output in Excel format]

There’s useful information in this display, often the basis of a sort of “talk-through” the regression result. Usually, the R2 is highlighted, and it is terrific here, “explaining” 99 percent of the variation in the data, in, that is, the 12 in-sample values for the TARGET variable. Furthermore, four explanatory variables have statistically significant coefficients, judged by their t-statistics – x2, x6, x7, and x9. These are highlighted in a kind of purple in the display.

Of course, the estimated constant and coefficients are, for the most part, numerically quite different from the true values {10, 3, -6, 0.5, 15, 1, -1, -5, 0.25, 1}. Nevertheless, because of the large variances or standard errors of the estimates, some estimated coefficients are, as noted above, within a 95 percent confidence interval of these true values. It’s just that the confidence intervals are very wide.

The in-sample predicted values are accurate, generally speaking. These loopy coefficient estimates essentially balance one another off in-sample.

But it’s not the in-sample performance we are interested in, but the out-of-sample performance. And we want to compare the out-of-sample performance of this OLS regression estimate with estimates of the coefficients and TARGET variable produced by ridge regression and bagging.

Bagging

Bagging [bootstrap aggregating] was introduced by Breiman in the 1990’s to reduce the variance of predictors. The idea is that you take N bootstrap samples of the original data, and with each of these samples, estimate your model, creating, in the end, an ensemble prediction.

Bootstrap sampling draws random samples with replacement from the original sample, creating other samples of the same size. With 12 cases or observations on the TARGET and explanatory variables, there are a large number of possible random samples of these 12 cases drawn with replacement; in fact, given nine explanatory variables and the TARGET variable, there are 12⁹, or somewhat more than 5 billion, distinct samples, 12 of which, incidentally, are comprised of exactly the same case drawn repeatedly from the original sample.

A primary application of bagging has been in improving the performance of decision trees and systems of classification. Applications to regression analysis seem to be more or less an after-thought in the literature, and the technique does not seem to be in much use in applied business forecasting contexts.

Thus, in the spreadsheet above, random draws with replacement are taken of the twelve rows of the spreadsheet (TARGET and drivers) 200 times, creating 200 samples. An ordinary least squares regression is estimated over each sample, and the constant and parameter estimates are averaged at the end of the process.
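Here is a hedged Python sketch of that procedure – generate a sample under the rules described, fit OLS, then bootstrap the rows 200 times, re-fit, average the estimates, and compare out-of-sample errors on fresh cases generated the same way. The standard normal drivers and the random seed are assumptions, and any single run can differ, but with data this noisy the bagged coefficients are generally better behaved out-of-sample, in line with the comparison shown above.

```python
# Bagging sketch: 200 bootstrap samples of the 12 rows, OLS on each, average the
# constant and coefficients, then compare out-of-sample error with plain OLS.
# Data follow the rules in the text (true coefficients, N(0, 50) errors); the
# standard normal drivers are an assumption.
import numpy as np

rng = np.random.default_rng(7)
true_beta = np.array([10, 3, -6, 0.5, 15, 1, -1, -5, 0.25, 1], dtype=float)  # constant, x1..x9

def make_sample(n):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 9))])  # column of 1s for the constant
    y = X @ true_beta + rng.normal(scale=50.0, size=n)
    return X, y

def ols(X, y):
    # least-squares fit; handles rank-deficient bootstrap samples gracefully
    return np.linalg.lstsq(X, y, rcond=None)[0]

X, y = make_sample(12)
beta_ols = ols(X, y)

boot_betas = []
for _ in range(200):
    rows = rng.integers(0, 12, size=12)        # draw 12 rows with replacement
    boot_betas.append(ols(X[rows], y[rows]))
beta_bag = np.mean(boot_betas, axis=0)         # average the constant and coefficients

# Out-of-sample comparison on 12 fresh cases generated by the same rules
X_new, y_new = make_sample(12)
rmse = lambda b: np.sqrt(np.mean((y_new - X_new @ b) ** 2))
print("OLS    out-of-sample RMSE:", round(rmse(beta_ols), 1))
print("Bagged out-of-sample RMSE:", round(rmse(beta_bag), 1))
```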

Here is a comparison of the estimated coefficients from Bagging and OLS, compared with the true values.

[Table: true values vs. bagged and OLS coefficient estimates]

There’s still variation of the parameter estimates from the true values with bagging, but the standard deviation of the error process (50) is, by design, high. Indeed, most of the value of TARGET comes from the error process, so this is noisy data.

Discussion

Some questions arise. For example, are there specific features of the problem presented here which tip the results markedly in favor of bagging? What are the criteria for determining whether bagging will improve regression forecasts? Another question regards the ease or difficulty of bagging regressions in Excel.

The criterion for bagging to deliver dividends is basically parameter instability over the sample. Thus, in the problem here, deleting any observation from the 12 cases and re-estimating the regression results in big changes to the estimated parameters. The basic reason is that the error terms constitute by far the largest contribution to the value of TARGET for each case.

In practical forecasting, this criterion, which is not very clearly defined, can be explored, and then comparisons with actual outcomes can be studied. Thus, estimate the bagged regression forecast, wait a period, and compare the bagged and simple OLS forecasts. Substantial improvement in forecast accuracy, combined with parameter instability in the sample, would seem to be a smoking gun.
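One simple way to probe for that kind of parameter instability before committing to bagging is a leave-one-out check: drop each case in turn, re-estimate, and see how far the coefficients move. A hedged sketch on synthetic data of the same shape as the example (again, the driver distribution and seed are assumptions):

```python
# Sketch: leave-one-out check for parameter instability - drop each case, re-fit OLS,
# and measure how far the coefficients move. Large swings suggest bagging may pay off.
# Synthetic data generated under assumptions similar to the example in the text.
import numpy as np

rng = np.random.default_rng(42)
n_obs = 12
X = np.column_stack([np.ones(n_obs), rng.normal(size=(n_obs, 9))])
true_beta = np.array([10, 3, -6, 0.5, 15, 1, -1, -5, 0.25, 1], dtype=float)
y = X @ true_beta + rng.normal(scale=50.0, size=n_obs)

full_fit = np.linalg.lstsq(X, y, rcond=None)[0]

# Re-estimate with each observation deleted in turn
max_shift = 0.0
for i in range(n_obs):
    keep = np.delete(np.arange(n_obs), i)
    loo_fit = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    max_shift = max(max_shift, np.max(np.abs(loo_fit - full_fit)))

print("largest single-coefficient shift from dropping one case:", round(max_shift, 1))
```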

Apart from the large contribution of the errors or residuals to the values of TARGET, the other distinctive feature of the problem presented here is the large number of predictors in comparison with the number of cases or observations. This, in part, accounts for the high coefficient of determination or R2, and also suggests that the close in-sample fit and poor out-of-sample performance are probably related to “over-fitting.”

Changes to Businessforecastblog in 2014 – Where We Have Been, Where We Are Going

We’ve been struggling with a software glitch in WordPress, due to, we think, incompatibilities between plug-ins and a new version of the blogging software. It’s been pretty intense. The site has been fully up, but there was no possibility of new posts, not even a notice to readers about what was happening. All this started just before Christmas and ended, basically, yesterday.

So greetings. Count on daily posts as a rule, and I will get some of the archives accessible ASAP.

But, for now, a few words about my evolving perspective.

I came out of the trenches, so to speak, of sales, revenue, and new product forecasting, for enterprise information technology (IT) and, earlier, for public utilities and state and federal agencies. When I launched Businessforecastblog last year, my bias popped up in the secondary heading for the blog – with its reference to “data-limited contexts” – and in early posts on topics like “simple trending” and random walks.

[Chart: long-term study of market trends]

I essentially believed that most business and economic time series are basically one form or another of random walks, and that exponential smoothing is often the best forecasting approach in an applied context. Of course, this viewpoint can be bolstered by reference to research from the 1980’s by Nelson and Plosser and the M-Competitions. I also bought into a lazy consensus that it was necessary to have more observations than explanatory variables in order to estimate a multivariate regression. I viewed segmentation analysis, so popular in marketing research, as a sort of diversion from the real task of predicting responses of customers directly, based on their demographics, firmagraphics, and other factors.

So the press of writing frequent posts on business forecasting and related topics has led me to learn a lot.

The next post to this blog, for example, will be about how “bagging” – from Bootstrap Aggregation – can radically reduce forecasting errors when there are only a few historical or other observations, but a large number of potential predictors. In a way, this provides a new solution to the problem of forecasting in data limited contexts.

This post also includes specific computations, in this case done in a spreadsheet. I’m big on actually computing stuff, where possible. I believe Elliot Shulman’s dictum, “you don’t really know something until you compute it.” And now I see how to include access to spreadsheets for readers, so there will be more of that.

Forecasting turning points is the great unsolved problem of business forecasting. That’s why I’m intensely interested in analysis of what many agree are asset bubbles. Bursting of the dot.com bubble initiated the US recession of 2001. Collapse of the housing market and exotic financial instrument bubbles in 2007 brought on the worst recession since World War II, now called the Great Recession. If it were possible to forecast the peak of various asset bubbles, as researchers such as Didier Sornette suggest, this would mean we would have some advance warning – perhaps only weeks, of course – of the onset of the next major business turndown.

Along the way, there are all sorts of interesting sidelights relating to business forecasting and more generally predictive analytics. In fact, it’s clear that in the era of Big Data, data analytics can contribute to improvement of business processes – things like target marketing for customers – as well as perform less glitzy tasks of projecting sales for budget formulation and the like.

Email me at cvj@economicdataresources.com if you want to receive PDF compilations on topics from the archives. I’m putting together compilations on New Methods and Asset Bubbles, for starters, in a week or so.

 
