Category Archives: macroeconomic forecasting

Three Pass Regression Filter – New Data Reduction Method

Malcolm Gladwell’s 10,000 hour rule (for cognitive mastery) is sort of an inspiration for me. I picked forecasting as my field for “cognitive mastery,” as dubious as that might be. When I am directly engaged in an assignment, at some point or other, I feel the need for immersion in the data and in estimations of all types. This blog, on the other hand, represents an effort to survey and, to some extent, get control of new “tools” – at least in a first pass. Then, when I have problems at hand, I can try some of these new techniques.

OK, so these remarks preface what you might call the humility of my approach to new methods currently being developed. I am not putting myself on a level with the innovators, for example. At the same time, it’s important to retain perspective and not drop a critical stance.

The Working Paper and Article in the Journal of Finance

Probably one of the most widely cited recent working papers is Kelly and Pruitt’s paper on the three pass regression filter (3PRF). The authors, shown above, are with the University of Chicago Booth School of Business and the Federal Reserve Board of Governors, respectively, and judging from the extensive revisions to the 2011 version, they had a bit of trouble getting this one out of the skunk works.

Recently, however, Kelly and Pruitt published an important article in the prestigious Journal of Finance called Market Expectations in the Cross-Section of Present Values. This article applies a version of the three pass regression filter to show that returns and cash flow growth for the aggregate U.S. stock market are highly and robustly predictable.

I learned of a published application of the 3PRF from Francis X. Diebold’s blog, No Hesitations, where Diebold – one of the most published authorities on forecasting – writes

Recent interesting work, moreover, extends PLS in powerful ways, as with the Kelly-Pruitt three-pass regression filter and its amazing apparent success in predicting aggregate equity returns.

What is the 3PRF?

The working paper from the Booth School of Business cited at a couple of points above describes what might be cast as a generalization of partial least squares (PLS). Certainly, the focus in the 3PRF and PLS is on using latent variables to predict some target.

I’m not sure, though, whether the 3PRF is, in fact, more of a heuristic than an algorithm.

What I mean is that the three pass regression filter involves a procedure, described below.

(click to enlarge).

3PRFprocedure

Here’s the basic idea –

Suppose you have a large number of potential regressors xi ∈ X, i = 1,…,N. In fact, it may be impossible to calculate an OLS regression, since N > T, where T is the number of observations or time periods.

Furthermore, you have proxies zj ∈ Z, j = 1,…,L – where L is significantly less than the number of observations T. These proxies could be the first several principal components of the data matrix, or underlying drivers which theory proposes for the situation. The authors even suggest an automatic procedure for generating proxies in the paper.

And, finally, there is the target variable yt which is a column vector with T observations.

Latent factors in a matrix F drive both the proxies in Z and the predictors in X. Based on macroeconomic research into dynamic factors, there might be only a few of these latent factors – just as typically only a few principal components account for the bulk of variation in a data matrix.

Now here is a key point – as Kelly and Pruitt present the 3PRF, it is a leading indicator approach when applied to forecasting macroeconomic variables such as GDP, inflation, or the like. Thus, the time index for yt ranges over 2,3,…,T+1, while the time indices of all X and Z variables and the factors range over 1,2,…,T. This really means that all the x and z variables are potentially leading indicators, since they map conditions from an earlier time onto values of the target variable at a subsequent time.

What Table 1 above tells us to do is –

  1. Run an ordinary least squares (OLS) regression of each xi in X onto the zj in Z, where t ranges from 1 to T and there are N variables in X and L << T variables in Z. So, in the example discussed below, we concoct a spreadsheet example with 3 variables in Z, or three proxies, and 10 predictor variables xi in X (I could have used 50, but I wanted to see whether the method worked with lower dimensionality). The example assumes 40 periods, so t = 1,…,40. There will be 10 different sets of coefficients of the zj, each with a matched constant term, as a result of estimating these regressions, one set for each predictor xi.
  2. OK, then we take this stack of estimated coefficients of the zj and their associated constants and map them onto the cross-sectional slices of X for t = 1,…,T. This means that, at each period t, the values of the cross-section, xi,t, are taken as the dependent variable, and the 10 sets of coefficients (plus constant) estimated in the previous step serve as the independent variables.
  3. Finally, we extract the estimates of the factor loadings which result, and use these in a regression with the target variable as the dependent variable.

This is tricky, and I have questions about the symbolism in Kelly and Pruitt’s papers, but the procedure they describe does work. There is some Matlab code here alongside the reference to this paper in Professor Kelly’s research.
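To make the sequence concrete, here is a minimal Matlab sketch of the three passes, not Kelly and Pruitt’s own code. It assumes a T x N predictor matrix X, a T x L proxy matrix Z, and a T x 1 target y, all aligned on the same dates, are already in the workspace (the variable names are mine), and it uses regress from the Statistics Toolbox.

% First pass: time-series regression of each predictor on the proxies
[T, N] = size(X);
L = size(Z, 2);
Phi = zeros(N, L);
for i = 1:N
    b = regress(X(:, i), [ones(T, 1) Z]);
    Phi(i, :) = b(2:end)';              % keep the proxy coefficients for predictor i
end
% Second pass: cross-section regression of each period's predictor values on the first-pass coefficients
F = zeros(T, L);
for t = 1:T
    b = regress(X(t, :)', [ones(N, 1) Phi]);
    F(t, :) = b(2:end)';                % second-pass slopes are the estimated factors; constants are discarded
end
% Third pass: regress next period's target on this period's estimated factors
b3 = regress(y(2:T), [ones(T-1, 1) F(1:T-1, :)]);
yhat = [ones(T-1, 1) F(1:T-1, :)] * b3; % fitted values of y one period ahead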

At the same time, all this can be short-circuited (if you have adequate data without a lot of missing values, apparently) by a single humungous formula –

3PRFformula

Here, the source is the 2012 paper.

Spreadsheet Implementation

Spreadsheets help me understand the structure of the underlying data and the order of calculation, even if, for the most part, I work with toy examples.

So recently, I’ve been working through the 3PRF with a small spreadsheet.

Generating the factors: I generated the factors as two columns of random variables (=rand()) in Excel. I gave the factors different magnitudes by multiplying by different constants.

Generating the proxies Z and predictors X. Kelly and Pruitt call for the predictors to be variance standardized, so I generated 40 observations on ten sets of xi by selecting ten different coefficients to multiply into the two factors, and in each case I added a normal error term with mean zero and standard deviation 1. In Excel, this is the formula =norminv(rand(),0,1).

Basically, I did the same drill for the three zj — I created 40 observations for z1, z2, and z3 by multiplying three different sets of coefficients into the two factors and added a normal error term with zero mean and variance equal to 1.

Then, finally, I created yt by multiplying randomly selected coefficients times the factors.
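For anyone who would rather replicate the toy data outside of Excel, something like the following Matlab lines reproduce the same setup; the particular coefficient magnitudes are arbitrary choices of mine, not values taken from the paper.

T = 40; N = 10; L = 3;              % periods, predictors, proxies
Fac = [5*rand(T, 1) 2*rand(T, 1)];  % two latent factors with different magnitudes
Bx = randn(N, 2);                   % coefficients tying the predictors to the factors
Bz = randn(L, 2);                   % coefficients tying the proxies to the factors
X = Fac*Bx' + randn(T, N);          % predictors: factor combinations plus N(0,1) noise
Z = Fac*Bz' + randn(T, L);          % proxies: factor combinations plus N(0,1) noise
y = Fac*randn(2, 1);                % target driven by the same two factors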

After generating the data, the first pass regression is easy. You just develop a regression with each predictor xi as the dependent variable and the three proxies as the independent variables, case-by-case, across the time series for each. This gives you a bunch of regression coefficients which, in turn, become the explanatory variables in the cross-sectional regressions of the second step.

The regression coefficients I calculated for the three proxies, including a constant term, were as follows – where the 1st row indicates the regression for x1 and so forth.

coeff

This second step is a little tricky, but you just take all the values of the predictor variables for a particular period and designate these as the dependent variable, with the constant and coefficients estimated in the previous step as the independent variables. Note that the number of predictors pairs up exactly with the number of rows in the above coefficient matrix.

This then gives you the factor loadings for the third step, where you can actually predict yt (really yt+1 in the 3PRF setup). The only wrinkle is you don’t use the constant terms estimated in the second step, on the grounds that these reflect “idiosyncratic” effects, according to the 2011 revision of the paper.

Note the authors describe this as a time series approach, but do not indicate how to get around some of the classic pitfalls of regression in a time series context. Obviously, first differencing might be necessary for nonstationary time series like GDP, and other data massaging might be in order.

Bottom line – this worked well in my first implementation.

To forecast, I just used the last regression for yt+1 and then added ten more cases, calculating new values for the target variable with the new values of the factors. I used the new values of the predictors to update the second step estimate of factor loadings, and applied the last third pass regression to these values.
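In terms of the Matlab sketch given earlier, the out-of-sample step amounts to re-running the second pass on the new cross-sections and applying the last third-pass coefficients; Xnew here stands for the ten additional periods of predictor values (again, my notation, not the paper’s).

Tn = size(Xnew, 1);                  % ten additional out-of-sample periods
Fnew = zeros(Tn, L);
for t = 1:Tn
    b = regress(Xnew(t, :)', [ones(N, 1) Phi]);   % second pass on the new cross-sections
    Fnew(t, :) = b(2:end)';
end
yfore = [ones(Tn, 1) Fnew] * b3;     % apply the last third-pass regression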

Here are the forecast errors for these ten out-of-sample cases.

3PRFforecasterror

Not bad for a first implementation.

 Why Is Three Pass Regression Important?

3PRF is a fairly “clean” solution to an important problem, relating to the issue of “many predictors” in macroeconomics and other business research.

Noting that, if the number of predictors is near or greater than the number of observations, the standard ordinary least squares (OLS) forecaster is known to be poorly behaved or nonexistent, the authors write,

How, then, does one effectively use vast predictive information? A solution well known in the economics literature views the data as generated from a model in which latent factors drive the systematic variation of both the forecast target, y, and the matrix of predictors, X. In this setting, the best prediction of y is infeasible since the factors are unobserved. As a result, a factor estimation step is required. The literature’s benchmark method extracts factors that are significant drivers of variation in X and then uses these to forecast y. Our procedure springs from the idea that the factors that are relevant to y may be a strict subset of all the factors driving X. Our method, called the three-pass regression filter (3PRF), selectively identifies only the subset of factors that influence the forecast target while discarding factors that are irrelevant for the target but that may be pervasive among predictors. The 3PRF has the advantage of being expressed in closed form and virtually instantaneous to compute.

So, there are several advantages, such as (1) the solution can be expressed in closed form (in fact as one complicated but easily computable matrix expression), and (2) there is no need to employ maximum likelihood estimation.

Furthermore, 3PRF may outperform other approaches, such as principal components regression or partial least squares.

The paper illustrates the forecasting performance of 3PRF with real-world examples (as well as simulations). The first relates to forecasts of macroeconomic variables using data from sources such as the Mark Watson database mentioned previously in this blog. The second application relates to predicting asset prices, based on a factor model that ties individual assets’ price-dividend ratios to aggregate stock market fluctuations in order to uncover investors’ discount rates and dividend growth expectations.

Partial Least Squares and Principal Components

I’ve run across outstanding summaries of “partial least squares” (PLS) research recently – for example Rosipal and Kramer’s Overview and Recent Advances in Partial Least Squares and the 2010 Handbook of Partial Least Squares.

Partial least squares (PLS) evolved somewhat independently from related statistical techniques, owing to what you might call family connections. The technique was first developed by the Swedish statistician Herman Wold and his son, Svante Wold, who applied the method in particular to chemometrics. Rosipal and Kramer suggest that the success of PLS in chemometrics resulted in a lot of applications in other scientific areas, including bioinformatics, food research, medicine, and pharmacology.

Someday, I want to look into “path modeling” with PLS, but for now, let’s focus on the comparison between PLS regression and principal component (PC) regression. This post develops a comparison with Matlab code and macroeconomics data from Mark Watson’s website at Princeton.

The Basic Idea Behind PC and PLS Regression

Principal component and partial least squares regression share a couple of features.

Both, for example, offer an approach or solution to the problem of “many predictors” and multicollinearity. Also, with both methods, computation is not transparent, in contrast to ordinary least squares (OLS). Both PC and PLS regression are based on iterative or looping algorithms to extract either the principal components or underlying PLS factors and factor loadings.

PC Regression

The first step in PC regression is to calculate the principal components of the data matrix X. This is a set of orthogonal (which is to say completely uncorrelated) vectors which are weighted sums of the predictor variables in X.

This is an iterative process involving transformation of the variance-covariance or correlation matrix to extract the eigenvalues and eigenvectors.

Then, the data matrix X is multiplied by the eigenvectors to obtain the new basis for the data – an orthogonal basis. Typically, the first few (the largest) eigenvalues – which explain the largest proportion of variance in X – and their associated eigenvectors are used to produce one or more principal components which are regressed onto Y. This involves a dimensionality reduction, as well as elimination of potential problems of multicollinearity.
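As a minimal sketch of these steps in Matlab, and not a full implementation, the following lines standardize the predictors, extract the eigenvectors of the correlation matrix, and regress Y on the leading components; it assumes a data matrix X and target Y are in memory, and the choice of three components is arbitrary.

Xs = zscore(X);                              % mean-center and standardize the predictors
[V, D] = eig(corrcoef(X));                   % eigenvectors and eigenvalues of the correlation matrix
[evals, order] = sort(diag(D), 'descend');   % eigenvalues sorted largest first
V = V(:, order);                             % reorder the eigenvectors to match
k = 3;                                       % number of components retained, an arbitrary choice here
PC = Xs * V(:, 1:k);                         % project the data onto the leading eigenvectors
b = regress(Y, [ones(size(PC, 1), 1) PC]);   % principal components regression of Y on the PCs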

PLS Regression

The basic idea behind PLS regression, on the other hand, is to identify latent factors which explain the variation in both Y and X, and then use these factors, which typically are substantially fewer in number than the k predictors, to predict Y values.

Clearly, just as in PC regression, the acid test of the model is how it performs on out-of-sample data.

The reason why PLS regression often outperforms PC regression, thus, is that factors which explain the most variation in the data matrix may not, at the same time, explain the most variation in Y. It’s as simple as that.

Matlab example

I grabbed some data from Mark Watson’s website at Princeton — from the links to a recent paper called Generalized Shrinkage Methods for Forecasting Using Many Predictors (with James H. Stock), Journal of Business and Economic Statistics, 30:4 (2012), 481-493. The data include the variables listed below, all expressed as year-over-year (yoy) growth rates. The first variable – real GDP – is taken as the forecasting target. The time periods of all other variables are lagged one period (1 quarter) behind the quarterly values of this target variable.

macrolist

Matlab makes calculation of both principal component and partial least squares regressions easy.

The command to extract principal components is

[coeff, score, latent]=princomp(X)

Here X is the data matrix, and the entities in the square brackets are vectors or matrices produced by the algorithm. It’s possible to compute a principal components regression with the contents of the matrix score. Generally, the first several principal components are selected for the regression, based on the importance of a component or its associated eigenvalue in latent. The following scree chart illustrates the contribution of the first few principal components to explaining the variance in X.

Screechart
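For reference, the proportions of variance behind a scree chart like this can be computed directly from the latent output of the princomp call above; a minimal sketch:

explained = 100 * latent / sum(latent);    % percent of total variance per principal component
bar(explained(1:10));                      % scree-style view of the first ten components
xlabel('Principal component'); ylabel('Percent of variance explained');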

The relevant command for regression in Matlab is

b=regress(Y,score(:,1:6))

where b is the column vector of estimated coefficients and the first six principal components are used in place of the X predictor variables.

The Matlab command for a partial least squares regression is

[XL,YL,XS,YS,beta] = plsregress(X,Y,ncomp)

where ncomp is the number of latent variables or components to be utilized in the regression. There are issues of interpreting the matrices and vectors in the square brackets, but I used this code –

data = xlsread('stock.xls');                     % Watson macro data, quarterly yoy growth rates
X = data(1:47, 2:79);                            % predictors, lagged one quarter behind the target
y = data(2:48, 1);                               % target: yoy growth of real GDP
[XL, YL, XS, YS, beta] = plsregress(X, y, 10);   % PLS regression with 10 latent components
yfit = [ones(size(X, 1), 1) X] * beta;           % in-sample fitted values
lookPLS = [y yfit];                              % compare actuals and fitted values
ZZ = data(48:50, 2:79);                          % three out-of-sample predictor rows
newy = data(49:51, 1);                           % corresponding out-of-sample actuals
new = [ones(3, 1) ZZ] * beta;                    % out-of-sample predictions
out = [newy new];

The bottom line is to test the estimates of the response coefficients on out-of-sample data.

The following chart shows that PLS outperforms PC, although the predictions of both are not spectacularly accurate.

plspccomp

Commentary

There are nuances to what I have done which help explain the dominance of PLS in this situation, as well as the weakly predictive capabilities of both approaches.

First, the target variable is quarterly year-over-year growth of real US GDP. The predictor set X contains 78 other macroeconomic variables, all expressed in terms of yoy (year-over-year) percent changes.

Again, note that the time periods of all the variables or observations in X are lagged one quarter from the values in Y, the yoy quarterly percent growth of real US GDP.

This means that we are looking for a real, live leading indicator. Furthermore, there are plausibly common factors in the Y series shared with at least some of the X variables. For example, the percent changes of a block of variables contained in real GDP are included in X, and by inspection move very similarly with the target variable.

Other Example Applications

There are at least a couple of interesting applied papers in the Handbook of Partial Least Squares – a downloadable book in the Springer Handbooks of Computational Statistics. See –

Chapter 20 A PLS Model to Study Brand Preference: An Application to the Mobile Phone Market

Chapter 22 Modeling the Impact of Corporate Reputation on Customer Satisfaction and Loyalty Using Partial Least Squares

Another macroeconomics application from the New York Fed –

“Revisiting Useful Approaches to Data-Rich Macroeconomic Forecasting”

http://www.newyorkfed.org/research/staff_reports/sr327.pdf

Finally, the software company XLStat has a nice, short video on partial least squares regression applied to a marketing example.

Forecasting and Data Analysis – Principal Component Regression

I get excited that principal components offer one solution to the problem of the curse of dimensionality – having fewer observations on the target variable to be predicted than there are potential drivers or explanatory variables.

It seems we may have to revise the idea that simpler models typically outperform more complex models.

Principal component (PC) regression has seen a renaissance since 2000, in part because of the work of James Stock and Mark Watson (see also) and Bai in macroeconomic forecasting (and also because of applications in image processing and text recognition).

Let me offer some PC basics  and explore an example of PC regression and forecasting in the context of macroeconomics with a famous database.

Dynamic Factor Models in Macroeconomics

Stock and Watson have a white paper, updated several times, in PDF format at this link

stock watson generalized shrinkage June _2012.pdf

They write in the June 2012 update,

We find that, for most macroeconomic time series, among linear estimators the DFM forecasts make efficient use of the information in the many predictors by using only a small number of estimated factors. These series include measures of real economic activity and some other central macroeconomic series, including some interest rates and monetary variables. For these series, the shrinkage methods with estimated parameters fail to provide mean squared error improvements over the DFM. For a small number of series, the shrinkage forecasts improve upon DFM forecasts, at least at some horizons and by some measures, and for these few series, the DFM might not be an adequate approximation. Finally, none of the methods considered here help much for series that are notoriously difficult to forecast, such as exchange rates, stock prices, or price inflation.

Here DFM refers to dynamic factor models, essentially principal components models which utilize PC’s for lagged data.

What’s a Principal Component?

Essentially, you can take any bundle of data and compute the principal components. If you mean-center and (in most cases) standardize the data, the principal components divide up the variance of this data, based on the size of their associated eigenvalues. The associated eigenvectors can be used to transform the data into an equivalent and same size set of orthogonal vectors. Really, the principal components operate to change the basis of the data, transforming it into an equivalent representation, but one in which all the variables have zero correlation with each other.

The Wikipedia article on principal components is useful, but there is no getting around the fact that principal components can only really be understood with matrix algebra.

Often you see a diagram, such as the one below, showing a cloud of points distributed around a line passing through the origin of a coordinate system, but at an acute angle to those coordinates.

PrincipalComponents

This illustrates dimensionality reduction with principal components. If we express all these points in terms of this rotated set of coordinates, one of these coordinates – the signal – captures most of the variation in the data. Projections of the datapoints onto the second principal component, therefore, account for much less variance.

Principal component regression characteristically specifies only the first few principal components in the regression equation, knowing that, typically, these explain the largest portion of the variance in the data.

It’s also noteworthy that some researchers are talking about “targeted” principal components. The first few principal components account for the largest amounts of variance in the data, the next components for the next largest amounts, and so on. However, the “data” in this context does not include the information we have on the target variable. Targeted principal components therefore involve first developing the simple correlations between the target variable and all the potential predictors, then ordering these potential predictors from highest to lowest correlation. Then, by one means or another, you establish a cutoff, below which you exclude weak potential predictors from the data matrix you use to compute the principal components. This is an interesting approach which makes sense; testing it with a variety of examples seems in order.
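As a sketch of how that screening might look in Matlab, with a predictor matrix X and a target vector y aligned in time, a purely arbitrary correlation cutoff, and my own variable names throughout:

rho = zeros(size(X, 2), 1);
for j = 1:size(X, 2)
    c = corrcoef(y, X(:, j));              % simple correlation of each candidate predictor with the target
    rho(j) = abs(c(1, 2));
end
keep = rho > 0.3;                          % cutoff is arbitrary here; cross-validation is one way to set it
[coeff, score, latent] = princomp(zscore(X(:, keep)));     % principal components of the screened predictors
b = regress(y, [ones(size(score, 1), 1) score(:, 1:3)]);   % regression on the leading "targeted" components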

PC Regression and Forecasting – A Macroeconomics Example

I downloaded a trial copy of XLSTAT – an Excel add-in with a well-developed set of principal component procedures. In the past, I’ve used SPSS and SAS on corporate networked systems. Now I am using Matlab and GAUSS for this purpose.

The problem is what does it mean to have a time series of principal components? Over the years, there have been relevant discussions – Jolliffe’s key work, for example, and more recent papers.

The problem with time series, apart from the temporal interdependencies, is that you always are calculating the PC’s over different data, as more data comes in. What does this do to the PC’s or factor scores? Do they evolve gradually? Can you utilize the factor scores from a smaller dataset to predict subsequent values of factor scores estimated over an augmented dataset?

Based on a large macroeconomic dataset I downloaded from Mark Watson’s page, I think the answer can be a qualified “yes” to several of these questions. The Mark Watson dataset contains monthly observations on 106 macroeconomic variables for the period 1950 to 2006.

For the variables not bounded within a band, I calculated year-over-year (yoy) growth rates for each monthly observation. Then, I took first differences again over 12 months. These transformations eliminated trends, which mess up the PC computations (basically, if you calculate PC’s with a set of increasing variables, the first PC will represent a common growth factor, and is almost useless for modeling purposes.) The result of my calculations was to center each series at nearly zero, and to make the variability of each series comparable – so I did not standardize.
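For a single monthly series x, the two transformations described above come down to something like the following lines (for the variables not bounded within a band):

yoy = 100 * (x(13:end) ./ x(1:end-12) - 1);   % year-over-year growth rate for each monthly observation
dyoy = yoy(13:end) - yoy(1:end-12);           % difference the yoy rates again over 12 months
% the resulting series is roughly centered on zero, with no further standardization applied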

Anyway, using XLSTAT and Forecast Pro, I find that the factor scores

(a) evolve slowly as you add more data,

(b) when estimated on smaller datasets, provide insight into subsequent factor scores one to several months ahead, and

(c) amazingly, show remarkably predictable turning points, at least for the first principal component, which I have studied fairly intensively.

ForecastProPCForecast

So what are we looking at here (click to enlarge)?

Well, the top chart is the factor score for the first PC, estimated over data to May 1975, with a forecast indicated by the red line at the right of the graph. This forecast produces values which are very close to the factor score values for data estimated to May 1976 – where both datasets begin in 1960. Not only that, but we have here an example of prediction of a turning point bigtime.

Of course this is the magic of Box-Jenkins, since this factor score series is best estimated, according to Forecast Pro, with an ARIMA model.

I’m encouraged by this exercise to think that it may be possible to go beyond the lagged variable specification in many of these DFM’s to a contemporaneous specification, where the target variable forecasts are based on extrapolations of the relevant PC’s.

In any case, for applied business modeling, if we got something like a medical device new order series (suitably processed data) linked with these macro factor scores, it could be interesting – and we might get something that is not accessible with ordinary methods of exponential smoothing.

Underlying Theory of PC’s

Finally, I don’t think it is possible to do much better than to watch Andrew Ng at Stanford in Lectures 14 and 15. I recommend skipping to 17:09 – seventeen minutes and nine seconds – into Lecture 14, where Ng begins the exposition of principal components. He winds up this lecture with a fascinating illustration of high-dimensionality principal component analysis applied to recognizing or categorizing faces in photographs. Lecture 15 also is very useful – especially as it highlights the role of the Singular Value Decomposition (SVD) in actually calculating principal components.

Lecture 14

http://www.youtube.com/watch?v=ey2PE5xi9-A

Lecture 15

http://www.youtube.com/watch?v=QGd06MTRMHs

The Accuracy of Macroeconomics Forecasts – Survey of Professional Forecasters

The Philadelphia Federal Reserve Bank maintains historic records of macroeconomic forecasts from the Survey of Professional Forecasters (SPF). These provide an outstanding opportunity to assess forecasting accuracy in macroeconomics.

For example, in 2014, what is the chance the “steady as she goes” forecast from the current SPF is going to miss a downturn 1, 2, or 3 quarters into the future?

1-Quarter-Ahead Forecast Performance on Real GDP

Here is a chart I’ve ginned up showing the 1-quarter-ahead performance of the SPF forecasts of real GDP since 1990.

SP!1Q

The blue line is the forecast growth rate for real GDP from the SPF on a 1-quarter-ahead basis. The red line is the Bureau of Economic Analysis (BEA) final number for the growth rate for the relevant quarters. The growth rates in both instances are calculated on a quarter-over-quarter basis and annualized.

Side-stepping issues regarding BEA revisions, I used BEA final numbers for the level and growth of real GDP by quarter. This may not be completely fair to the SPF forecasters, but it is the yardstick by which the SPF is usually judged by its “consumers.”

Forecast errors for the 1-quarter-ahead forecasts, calculated on this basis, average about 2 percent in absolute value.

They also exhibit significant first order autocorrelation, as is readily suggested by the chart above. So, the SPF tends to under-predict during expansion phases of the business cycle and over-predict during contraction phases.
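For readers who want to check these two statements against the data, the calculations are roughly as follows, with spf and actual as column vectors of annualized quarter-over-quarter growth rates (my names, not the SPF’s):

err = actual - spf;                           % 1-quarter-ahead forecast errors
mae = mean(abs(err));                         % mean absolute error, about 2 percent here
c = corrcoef(err(1:end-1), err(2:end));       % lag the errors against themselves by one quarter
rho1 = c(1, 2);                               % first-order autocorrelation of the forecast errors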

Currently, the SPF 2014:Q1 forecast for 2014:Q2 is for 3.0 percent real growth of GDP, so maybe it’s unlikely that an average error for this forecast would result in actual 2014:Q2 growth dipping into negative territory.

2-Quarter-Ahead Forecast Performance on Real GDP

Errors for the 2-quarter-ahead SPF forecast, judged against BEA final numbers for real GDP growth, only rise to about 2.14 percent.

However, I am interested in more than the typical forecast error associated with forecasts of real Gross Domestic Product (GDP) on a 1-, 2-, or 3- quarter ahead forecast horizon.

Rather, I’m curious whether the SPF is likely to catch a downturn over these forecast horizons, given that one will occur.

So if we just look at recessions in this period, in 2001, 2002-2003, and 2008-2009, the performance significantly deteriorates. This can readily be seen in the graph of 1-quarter-ahead forecast errors shown above: in 2008, the consensus SPF forecast indicated a slight recovery for real GDP in exactly the quarter it totally tanked.

Bottom Line

In general, the SPF records provide vivid documentation of the difficulty of predicting turning points in key macroeconomic time series, such as GDP, consumer spending, investment, and so forth. At the same time, the real-time macroeconomic databases provided alongside the SPF records offer interesting opportunities for second- and third-guessing both the experts and the agencies responsible for charting US macroeconomics.

Additional Background

The Survey of Professional Forecasters is the oldest quarterly survey of macroeconomic forecasts in the United States. It dates back to 1968, when it was conducted by the American Statistical Association and the National Bureau of Economic Research (NBER). In 1990, the Federal Reserve Bank of Philadelphia assumed responsibility, and, today, devotes a special section on its website to the SPF, as well as “Historical SPF Forecast Data.”

Current and recent contributors to the SPF include “celebrity forecasters” highlighted in other posts here, as well as bank-associated and university-affiliated forecasters.

The survey’s timing is geared to the release of the Bureau of Economic Analysis’ advance report of the national income and product accounts. This report is released at the end of the first month of each quarter. It contains the first estimate of GDP (and components) for the previous quarter. Survey questionnaires are sent after this report is released to the public. The survey’s questionnaires report recent historical values of the data from the BEA’s advance report and the most recent reports of other government statistical agencies. Thus, in submitting their projections, panelists’ information includes data reported in the advance report.

Recent participants include:

Lewis Alexander, Nomura Securities; Scott Anderson, Bank of the West (BNP Paribas Group); Robert J. Barbera, Johns Hopkins University Center for Financial Economics; Peter Bernstein, RCF Economic and Financial Consulting, Inc.; Christine Chmura, Ph.D. and Xiaobing Shuai, Ph.D., Chmura Economics & Analytics; Gary Ciminero, CFA, GLC Financial Economics; Julia Coronado, BNP Paribas; David Crowe, National Association of Home Builders; Nathaniel Curtis, Navigant; Rajeev Dhawan, Georgia State University; Shawn Dubravac, Consumer Electronics Association; Gregory Daco, Oxford Economics USA, Inc.; Michael R. Englund, Action Economics, LLC; Timothy Gill, NEMA; Matthew Hall and Daniil Manaenkov, RSQE, University of Michigan; James Glassman, JPMorgan Chase & Co.; Jan Hatzius, Goldman Sachs; Peter Hooper, Deutsche Bank Securities, Inc.; IHS Global Insight; Fred Joutz, Benchmark Forecasts and Research Program on Forecasting, George Washington University; Sam Kahan, Kahan Consulting Ltd. (ACT Research LLC); N. Karp, BBVA Compass; Walter Kemmsies, Moffatt & Nichol; Jack Kleinhenz, Kleinhenz & Associates, Inc.; Thomas Lam, OSK-DMG/RHB; L. Douglas Lee, Economics from Washington; Allan R. Leslie, Economic Consultant; John Lonski, Moody’s Capital Markets Group; Macroeconomic Advisers, LLC; Dean Maki, Barclays Capital; Jim Meil and Arun Raha, Eaton Corporation; Anthony Metz, Pareto Optimal Economics; Michael Moran, Daiwa Capital Markets America; Joel L. Naroff, Naroff Economic Advisors; Michael P. Niemira, International Council of Shopping Centers; Luca Noto, Anima Sgr; Brendon Ogmundson, BC Real Estate Association; Martin A. Regalia, U.S. Chamber of Commerce; Philip Rothman, East Carolina University; Chris Rupkey, Bank of Tokyo-Mitsubishi UFJ; John Silvia, Wells Fargo; Allen Sinai, Decision Economics, Inc.; Tara M. Sinclair, Research Program on Forecasting, George Washington University; Sean M. Snaith, Ph.D., University of Central Florida; Neal Soss, Credit Suisse; Stephen Stanley, Pierpont Securities; Charles Steindel, New Jersey Department of the Treasury; Susan M. Sterne, Economic Analysis Associates, Inc.; Thomas Kevin Swift, American Chemistry Council; Richard Yamarone, Bloomberg, LP; Mark Zandi, Moody’s Analytics.

Sayings of the Top Macro Forecasters

Yesterday, I posted the latest Bloomberg top twenty US macroeconomic forecaster rankings, also noting whether this current crop made it into the top twenty in previous “competitions” for November 2010-November 2012 or November 2009-November 2011.

It turns out the Bloomberg top twenty is relatively stable. Seven names or teams on the 2014 list appear in both previous competitions. Seventeen made it into the top twenty at least twice in the past three years.

But who are these people and how can we learn about their forecasts on a real-time basis?

Well, as you might guess, this is a pretty exclusive club. Many are Chief Economists and company Directors in investment advisory organizations serving private clients. Several did a stint on the staff of the Federal Reserve earlier in their career. Their public interface is chiefly through TV interviews, especially Bloomberg TV, or other media coverage.

I found a couple of exceptions, however – Michael Carey and Russell Price.

Michael Carey and Crédit Agricole

Michael Carey is Chief Economist for North America at Crédit Agricole CIB. He ranked 14, 7, and 5, based on his average scores for his forecasts of the key indicators in these three consecutive competitions. He apparently is especially good on employment forecasts.

MikeCarey

Carey is a lead author for a quarterly publication from Crédit Agricole called Prospects Macro.

The Summary for the current issue (1st Quarter 2014) caught my interest –

On the economic trend front, an imperfect normalisation seems to be getting underway. One may talk about a normalisation insofar as – unlike the two previous financial years – analysts have forecast a resumption of synchronous growth in the US, the Eurozone and China. US growth is forecast to rise from 1.8% in 2013 to 2.7%; Eurozone growth is slated to return to positive territory, improving from -0.4% to +1.0%; while Chinese growth is forecast to dip slightly, from 7.7% to 7.2%, which does not appear unwelcome nor requiring remedial measures. The imperfect character of the forecast normalisation quickly emerges when one looks at the growth predictions for 2015. In each of the three regions, growth is not gathering pace, or only very slightly. It is very difficult to defend the idea of a cyclical mechanism of self-sustaining economic acceleration. This observation seems to echo an ongoing academic debate: growth in industrialised countries seems destined to be weak in the years ahead. Partly, this is because structural growth drivers seem to be hampered (by demographics, debt and technology shocks), and partly because real interest rates seem too high and difficult to cut, with money-market rates that are already virtually at zero and low inflation, which is likely to last. For the markets, monetary policies can only be ‘reflationist’. Equities prices will rise until they come upagainst the overvaluation barrier and long-term rates will continue to climb, but without reaching levels justified by growth and inflation fundamentals.

I like that – an “imperfect normalization” (note the British spelling). A key sentence seems to be “It is very difficult to defend the idea of a cyclical mechanism of self-sustaining economic acceleration.”

So maybe the issue is 2015.

The discussion of emerging markets prospects is well-worth quoting also.

At 4.6% (and 4.2% excluding China), average growth in 2013 across all emerging countries seems likely to have been at its lowest since 2002, apart from the crisis year of 2009. Despite the forecast slowdown in China (7.2%, after 7.7%), the overall pace of growth for EMs is likely to pick up slightly in 2014 (to 4.8%, and 4.5% excluding China). The trend is likely to continue through 2015. This modest rebound, despite the poor growth figures expected from Brazil, is due to the slightly improved performance of a few other large emerging economies such as India, and above all Mexico, South Korea and some Central European countries. As regards the content of this growth, it is investment that should improve, on the strength of better growth prospects in the industrialised countries…

The growth differential with the industrialised countries has narrowed to around 3%, whereas it had stood at around 5% between 2003 and 2011…

This situation is unlikely to change radically in 2014. Emerging markets should continue to labour under two constraints. First off, the deterioration in current accounts has worsened as a result of fairly weak external demand, stagnating commodity prices, and domestic demand levels that are still sticky in many emerging countries…Commodity-exporting countries and most Asian exporters of manufactured goods are still generating surpluses, although these are shrinking. Conversely, large emerging countries such as India, Indonesia, Brazil, Turkey and South Africa are generating deficits that are in some cases reaching alarming proportions – especially in Turkey. These imbalances could restrict growth in 2014-15, either by encouraging governments to tighten monetary conditions or by limiting access to foreign financing.

Secondly, most emerging countries are now paying the price for their reluctance to embrace reform in the years of strong global growth prior to the great global financial crisis. This price is today reflected in falling potential growth levels in some emerging countries, whose weaknesses are now becoming increasingly clear. Examples are Russia and its addiction to commodities; Brazil and its lack of infrastructure, low savings rate and unruly inflation; India and its lack of infrastructure, weakening rate of investment and political dependence of the Federal state on the federated states. Unfortunately, the less favourable international situation (think rising interest rates) and local contexts (eg, elections in India and Brazil in 2014) make implementing significant reforms more difficult over the coming quarters. This is having a depressing effect on prospects for growth

I’m subscribing to notices of updates to this and other higher frequency reports from Crédit Agricole.

Russell Price and Ameriprise

Russell Price, younger than Michael Carey, was Number 7 on the current Bloomberg list of top US macro forecasters, ranking 16 the previous year. He has his own monthly publication with Ameriprise called Economic Perspectives.

RussellPrice

The current issue dated January 28, 2014 is more US-centric, and projects a “modest pace of recovery” for the “next 3 to 5 years.” Still, the current issue warns that analyst projections of company profits are probably “overly optimistic.”

I need to read one or two more of the issues to properly evaluate, but Economic Perspectives is definitely a cut above the average riff on macroeconomic prospects.

Another Way To Tap Into Forecasts of the Top Bloomberg Forecasters

The Wall Street Journal’s Market Watch is another way to tap into forecasts from names and teams on the top Bloomberg lists.

The Market Watch site publishes weekly median forecasts based on the 15 economists who have scored the highest in our contest over the past 12 months, as well as the forecasts of the most recent winner of the Forecaster of the Month contest.

The economists in the Market Watch consensus forecast include many currently or recently in the top twenty Bloomberg list – Jim O’Sullivan of High Frequency Economics, Michael Feroli of J.P. Morgan, Paul Edelstein of IHS Global Insight, Brian Jones of Société Générale, Spencer Staples of EconAlpha, Ted Wieseman of Morgan Stanley, Jan Hatzius’s team at Goldman Sachs, Stephen Stanley of Pierpont Securities, Avery Shenfeld of CIBC, Maury Harris’s team at UBS, Brian Wesbury and Robert Stein of First Trust, Jeffrey Rosen of Briefing.com, Paul Ashworth of Capital Economics, Julia Coronado of BNP Paribas, and Eric Green’s team at TD Securities.

And I like the format of doing retrospectives on these consensus forecasts, in tables such as this:

MarketWatchTable

So what’s the bottom line here? Well, to me, digging deeper into the backgrounds of these top-ranked forecasters and finding access to their current thinking is all part of improving competence.

I can think of no better mantra than Malcolm Gladwell’s 10,000 Hour Rule –

Top Bloomberg Macro Forecaster Rankings for 2014

Bloomberg compiles global rankings for forecasters of US macro variables, based on their forecasts of a range of key monthly indicators. The rankings are based on performance over two year periods, ending November in the year the rankings are announced.

Here is a summary sheet for the past three years for the top twenty US macroeconomic forecasters or forecasting teams, with their organizational affiliation (click to enlarge).

Top Bloomberg rankings

SOURCES: http://www.christophe-barraud.com/wp-content/uploads/2014/01/Classement-Bloomberg-janvier-20141.pdf, http://www.bloomberg.com/bb/avfile/r5M7ODl4WNms, https://www.economy.com/home/products/samples/2012-01-20-Bloomberg.pdf

The list of top forecasters for the US economy has been fairly stable recently. At least seventeen out of the top twenty forecasters for the US are listed twice; six forecasters or forecasting teams made the top list in all three periods.

Interestingly, European forecasters have recently taken the lead. Bloomberg News notes that the Number One forecaster, Christophe Barraud, is only 27 years old and developed an interest in forecasting, apparently, as a teenager, when he and his dad bet on horses at tracks near Nice, France.

In the most recent ranking, key indicators include CPI, Durable Goods Orders, Existing Home Sales, Housing Starts, IP, ISM Manufacturing, ISM Nonmanufacturing, New Home Sales, Nonfarm Payrolls, Personal Income, Personal Spending, PPI, Retail Sales, Unemployment and GDP. A total of 68 forecasters or forecasting teams qualified for and participated in the ranking exercise.

Bloomberg Markets also announced other regional rankings, shown in this infographic

Bloombergmarkets

And as a special treat this Friday, for the collectors among readers, here is the Ben Bernanke commemorative baseball card, developed at the Fed as a going away present.

Bernacke

Links – February 1, 2014

IT and Big Data

Kayak and Big Data Kayak is adding prediction of prices of flights over the coming 7 days to its meta search engine for the travel industry.

China’s Lenovo steps into ring against Samsung with Motorola deal Lenovo Group, the Chinese technology company that earns about 80 percent of its revenue from personal computers, is betting it can also be a challenger to Samsung Electronics Co Ltd and Apple Inc in the smartphone market.

5 Things To Know About Cognitive Systems and IBM Watson Rob High video on Watson at http://www.redbooks.ibm.com/redbooks.nsf/pages/watson?Open. Valuable to review. Watson is probably different than you think. Deep natural language processing.

Playing Computer Games and Winning with Artificial Intelligence (Deep Learning) Presents the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards… [applies] method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm…outperforms all previous approaches on six of the games and surpasses a human expert on three of them.

Global Economy

China factory output points to Q1 lull Chinese manufacturing activity slipped to its lowest level in six months, with indications of slowing growth for the quarter to come in the world’s second-largest economy.

Japan inflation rises to a 5 year high, output rebounds Japan’s core consumer inflation rose at the fastest pace in more than five years in December and the job market improved, encouraging signs for the Bank of Japan as it seeks to vanquish deflation with aggressive money printing.

Coup Forecasts for 2014

coupforecast                       

World risks deflationary shock as BRICS puncture credit bubbles Ambrose Evans-Pritchard does some nice analysis in this piece.

Former IMF Chief Economist, Now India’s Central Bank Governor Rajan Takes Shot at Bernanke’s Destabilizing Policies

Some of his key points:

Emerging markets were hurt both by the easy money which flowed into their economies and made it easier to forget about the necessary reforms, the necessary fiscal actions that had to be taken, on top of the fact that emerging markets tried to support global growth by huge fiscal and monetary stimulus across the emerging markets. This easy money, which overlaid already strong fiscal stimulus from these countries. The reason emerging markets were unhappy with this easy money is “This is going to make it difficult for us to do the necessary adjustment.” And the industrial countries at this point said, “What do you want us to do, we have weak economies, we’ll do whatever we need to do. Let the money flow.”

Now when they are withdrawing that money, they are saying, “You complained when it went in. Why should you complain when it went out?” And we complain for the same reason when it goes out as when it goes in: it distorts our economies, and the money coming in made it more difficult for us to do the adjustment we need for the sustainable growth and to prepare for the money going out

International monetary cooperation has broken down. Industrial countries have to play a part in restoring that, and they can’t at this point wash their hands off and say we’ll do what we need to and you do the adjustment. ….Fortunately the IMF has stopped giving this as its mantra, but you hear from the industrial countries: We’ll do what we have to do, the markets will adjust and you can decide what you want to do…. We need better cooperation and unfortunately that’s not been forthcoming so far.

Science Perspective

Researchers Discover How Traders Act Like Herds And Cause Market Bubbles

Building on similarities between earthquakes and extreme financial events, we use a self-organized criticality-generating model to study herding and avalanche dynamics in financial markets. We consider a community of interacting investors, distributed in a small-world network, who bet on the bullish (increasing) or bearish (decreasing) behavior of the market which has been specified according to the S&P 500 historical time series. Remarkably, we find that the size of herding-related avalanches in the community can be strongly reduced by the presence of a relatively small percentage of traders, randomly distributed inside the network, who adopt a random investment strategy. Our findings suggest a promising strategy to limit the size of financial bubbles and crashes. We also obtain that the resulting wealth distribution of all traders corresponds to the well-known Pareto power law, while that of random traders is exponential. In other words, for technical traders, the risk of losses is much greater than the probability of gains compared to those of random traders. http://pre.aps.org/abstract/PRE/v88/i6/e062814

Blogs review: Getting rid of the Euler equation – the equation at the core of modern macro The Euler equation is one of the fundamentals, at a deep level, of dynamic stochastic general equilibrium (DSGE) models promoted as the latest and greatest in theoretical macroeconomics. After the general failures of mainstream macroeconomics in 2008-09, DSGE models have come into question, and this review is interesting because it suggests, to my way of thinking, that the Euler equation linking past and future consumption patterns is essentially grafted onto empirical data artificially. It is profoundly in synch with neoclassical economic theory of consumer optimization, but cannot be said to be supported by the data in any robust sense. Interesting read with links to further exploration.

BOSTON COLLOQUIUM FOR PHILOSOPHY OF SCIENCE: Revisiting the Foundations of Statistics – check this out – we need the presentations online.

Global Economy Outlook – Some Problems

There seems to be a meme evolving around the idea that – while the official business outlook for 2014 is positive – problems with Chinese debt, or more generally, emerging markets could be the spoiler.

The encouraging forecasts posted by bank and financial economists (see Hatzius, for example) present 2014 as a balance of forces, with things tipping in the direction of faster growth in the US and Europe. Austerity constraints (sequestration in the US and draconian EU policies) will loosen, allowing the natural robustness of the underlying economy to assert itself – after years of sub-par performance. In the meanwhile, growth in the emerging economies is admittedly slowing, but is still expected at much higher rates than in heartland areas of the industrial West or Japan.

So, fingers crossed, the World Bank and other official economic forecasting agencies show an uptick in economic growth in the US and, even, Europe for 2014.

But then we have articles that highlight emerging market risks:

China’s debt-fuelled boom is in danger of turning to bust This Financial Times article develops the idea that only five developing countries have had a credit boom nearly as big as China’s, in each case leading to a credit crisis and slowdown. Chinese “total debt” – a concept not well-defined in this short piece – is currently running about 230 per cent of gross domestic product. The article offers comparisons with “33 previous credit binges” and with smaller economies, such as Taiwan, Thailand, Zimbabwe, and so forth. Strident, but not compelling.

With China Awash in Money, Leaders Start to Weigh Raising the Floodgates  From the New York Times, a more solid discussion – The amount of money sloshing around China’s economy, according to a broad measure that is closely watched here, has now tripled since the end of 2006. China’s tidal wave of money has powered the economy to new heights, but it has also helped drive asset prices through the roof. Housing prices have soared, feeding fears of a bubble while leaving many ordinary Chinese feeling poor and left out.

The People’s Bank of China has been creating money to a considerable extent by issuing more renminbi to bankroll its purchase of hundreds of billions of dollars a year in currency markets to minimize the appreciation of the renminbi against the dollar and keep Chinese exports inexpensive in foreign markets; the central bank disclosed on Wednesday that the country’s foreign reserves, mostly dollars, soared $508.4 billion last year, a record increase.

 ChinaM2                 

Source: New York Times

Moreover, the rapidly expanding money supply reflects a flood of loans from the banking system and the so-called shadow banking system that have kept afloat many inefficient state-owned enterprises and bankrolled the construction of huge overcapacity in the manufacturing sector.

There also are at least two recent, relevant posts by Yves Smith – who is always on the watch for sources of instability in the banking system

How Serious is China’s Shadow Banking/Wealth Management Products Problem?

China Credit Worries Rise as Large Shadow Banking Default Looms

In addition to concerns about China, of course, there are major currency problems developing for Russia, India, Chile, Brazil, Turkey, South Africa, and Argentina.

emergingcurrencies

From the Economist The plunging currency club

So there are causes for concern, especially with the US Fed, under Janet Yellen, planning on winding down QE or quantitative easing.

When Easy Money Ends is a good read in this regard, highlighting the current scale of QE (quantitative easing) programs globally, and savings from lower interest rates – coupled with impacts of higher interest rates.

Since the start of the financial crisis, the Fed, the European Central Bank, the Bank of England, and the Bank of Japan have used QE to inject more than $4 trillion of additional liquidity into their economies…If interest rates were to return to 2007 levels, interest payments on government debt could rise by 20%, other things being equal…US and European nonfinancial corporations saved $710 billion from lower debt-service payments, with ultralow interest rates thus boosting profits by about 5% in the US and the UK, and by 3% in the euro-zone. This source of profit growth will disappear as interest rates rise, and some firms will need to reconsider business models – for example, private equity – that rely on cheap capital…We could also witness the return of asset-price bubbles in some sectors, especially real estate, if QE continues. The International Monetary Fund noted in 2013 that there were already “signs of overheating in real-estate markets” in Europe, Canada, and some emerging-market economies. 

Links – January 11, 2014

Sober Looks at the US Economy and Social Setup

Joseph Stiglitz is calling the post-2008 “recovery” period The Great Malaise

Yes, we avoided a Great Depression II, but only to emerge into a Great Malaise, with barely increasing incomes for a large proportion of citizens in advanced economies. We can expect more of the same in 2014. In the United States, median incomes have continued their seemingly relentless decline; for male workers, income has fallen to levels below those attained more than 40 years ago. Europe’s double-dip recession ended in 2013, but no one can responsibly claim that recovery has followed. More than 50% of young people in Spain and Greece remain unemployed.…Europe’s continuing stagnation is bad enough; but there is still a significant risk of another crisis in yet another eurozone country, if not next year, in the not-too-distant future. Matters are only slightly better in the US, where a growing economic divide – with more inequality than in any other advanced country – has been accompanied by severe political polarization. …growth will remain anemic, barely strong enough to generate jobs for new entrants into the labor force. A dynamic tax-avoiding Silicon Valley and a thriving hydrocarbon sector are not enough to offset austerity’s weight. Thus, while there may be some reduction of the Federal Reserve’s purchases of long-term assets (so-called quantitative easing, or QE), a move away from rock-bottom interest rates is not expected until 2015 at the earliest…China’s decelerating growth had a significant impact on commodity prices, and thus on commodity exporters around the world. But China’s slowdown needs to be put in perspective: even its lower growth rate is the envy of the rest of the world, and its move toward more sustainable growth, even if at a somewhat lower level, will serve it – and the world – well in the long run. As in previous years, the fundamental problem haunting the global economy in 2013 remained a lack of global aggregate demand. This does not mean, of course, that there is an absence of real needs – for infrastructure, to take one example, or, more broadly, for retrofitting economies everywhere in response to the challenges of climate change. But the global private financial system seems incapable of recycling the world’s surpluses to meet these needs. And prevailing ideology prevents us from thinking about alternative arrangements…Maybe the global economy will perform a little better in 2014 than it did in 2013, or maybe not. Seen in the broader context of the continuing Great Malaise, both years will come to be regarded as a time of wasted opportunities.

On the 50th Anniversary of the War on Poverty, The Atlantic Monthly ran a first-rate article Poverty vs. Democracy in America. Full of pithy quotes and info, such as this about the emergence of an impoverished underclass

50 million strong—whose ranks have swelled since the Great Recession to the highest rate and number below the poverty line in nearly 50 years. Nearly half of them—20.5 million people, including each of the people mentioned above—are living in deep poverty on less than $12,000 per year for a family of four, the highest rate since record-keeping began in 1975. Add to that the hundred million citizens who are struggling to stay a few paychecks above the poverty line, and fully half the U.S. population is either poor or “near poor,” according to the Census Bureau.

 Economically speaking, their poverty entails a lack of decent-paying jobs and government supports to sustain a healthy life. With half of American jobs paying less than $33,000 per year and a quarter paying poverty-line wages of $22,000 or less, even as financial markets soar, people in the bottom fifth of the income distribution now command the smallest share of income—3.3 percent—since the government started tracking income breakdowns in the 1960s. Middle-wage jobs lost during the Great Recession are largely being replaced by low-wage jobs—when they are replaced at all—contributing to an 11 percent decline in real income for poor families since 1979. For the 27 million adults who are unemployed or underemployed and the 48 million people in working poor families who rely on some form of public support, means-tested government programs excluding Medicaid have remained essentially flat for the past 20 years, at around $1,000 per capita per year. Only unemployment insurance and food stamps have seen a marked increase in recent years, although both are currently under assault in Congress.

Indian and Chinese Space Programs

Here’s a beautiful picture of the Indian subcontinent, shot from space

 BdLAkorIgAAhv11                

This reminds me that India, currently, is sending an unmanned mission to Mars – Mangalyaan. Mangalyaan left Earth orbit around the beginning of December 2013. On December 11, it successfully completed a mid-course correction, and appears to be on its way to orbiting Mars by September of this year.

Not to be outdone, China landed an exploratory mission on Earth’s Moon in recent weeks. Here’s a pic taken by the “Jade Rabbit rover” vehicle brought there by the lander – I really like that name, “Jade Rabbit rover.”

Chinaspacemission

These missions both will be criticized as wasting valuable resources which could be used to deal with poverty and underdevelopment in the sponsoring countries. But I think it is more reasonable to consider all this under the heading leap-frogging – like countries which skip installing land lines for telephone service in favor of erecting lots of mobile communications towers. India and China are leapfrogging some stages of development, and may benefit from the science and technical challenges of space travel, which surely is part of the human future.

Here’s a relatively recent critique of China’s growing investment in science and technology which sounds suspiciously to me like sour grapes. It’s simple. Keep giving young people education in technical subjects with better and better science backing this up, and sheer numbers eventually will turn the tide. Inventors maybe from the interior provinces of China, neglected by the elite institutions, might come up with startling discoveries – if the US experience is any guide. A lot of the best US science and technology comes from relatively out-of-the-way places, state universities, industry labs, and then is snapped up by the elite institutions at the center.

2014 Outlook: Jan Hatzius Forecast for Global Economic Growth

Jan Hatzius is chief economist of Global Investment Research (GIR) at Goldman Sachs, and gained prominence with his early recognition of the housing bust in 2008.

Here he discusses the current outlook for 2014.

The outlook is fairly rosy, so it’s interesting that Goldman just released “Where we worry: Risks to our outlook”, excerpted extensively at Zero Hedge.

Downside economic risks include:

1. Reduction in fiscal drag is less of a plus than we expect

2. Deleveraging obstacles continue to weigh on private demand

3. Less effective spare capacity leads to earlier wage/inflation pressure

4. Euro area risks resurface

5. China financial/credit concerns become critical