The End of Quantitative Easing, the Expansion of QE

The US Federal Reserve declared an end to its quantitative easing (QE) program at the end of October.

QE involves direct Fed purchases of longer-term bonds, with an eye to pushing down long-term interest rates and, thus, encouraging investment. Readers wanting more detail on how QE is implemented should check Ed Dolan’s slide show Quantitative Easing and the Fed 2008-2014: A Tutorial.

The New York Times article on the Fed actions – Quantitative Easing Is Ending. Here’s What It Did, in Charts – had at least two charts that are must-sees.

First, the ballooning of the Federal Reserve balance sheet from less than $1 trillion to $4.5 trillion today –

[Chart: Federal Reserve assets under QE]

Second, according to Times estimates, about 40 percent of Fed assets now consist of mortgage-backed securities – making the Fed a potential major player in the US housing markets.

[Chart: Fed holdings of mortgage-backed securities]

Several recent articles offer interpretation – what does the end of this five-year-long program mean for the US economy and for investors? What were the impacts of QE?

I thought Jeff Miller’s “Old Prof” compendium was especially good – Weighing the Week Ahead: What the End of QE Means for the Individual Investor. If you click this link and find a post more recent than November 1, scroll down for the QE discussion. Basically, Miller thinks the impact on investors will be minimal.

A similar point is made in the Bloomberg Businessweek article The Hawaiian Tropic Effect: Why the Fed’s Quantitative Easing Isn’t Over –

But quantitative easing is the gift that keeps on giving. Even after the purchases end, its effects will persist. How could that be? The Fed will still own all those bonds it bought, and according to the agency itself, it’s the level of its holdings that affects the bond market, not the rate of addition to those holdings. Having reduced the supply of bonds available on the market, the Fed has raised their price. Yields (i.e. market interest rates) go down when prices go up. So the effect of quantitative easing is to lower interest rates for things Americans actually care about, such as 30-year fixed-rate mortgages.

Some other articles which attempt to tease out exactly what impacts QE did have on the economy –

Evaluation of quantitative easing QE had “some effects” but it’s one of several influences on the bond market and long term interest rates.

Quantitative easing: giving cash to the public would have been more effective

QE has also had unforeseen side-effects. The policy involved allowing banks and other financial institutions to exchange bonds for cash, and the hope was that this would lead to improved flows of credit to firms looking to expand. In reality, it encouraged financial speculation in property, shares and commodities. The bankers and the hedge fund owners did well out of QE, but the side-effect of footloose money searching the globe for high yields was higher food and fuel prices. High inflation and minimal wage growth led to falling real incomes and a slower recovery.

What Quantitative Easing Did Not Do: Three Revealing Charts – good discussion organized around the following three points –

  1. QE did not work according to the textbook model
  2. QE did not cause inflation
  3. QE was not powerful enough to overcome fiscal restraint

Expansion of QE

But quantitative easing as a central bank policy is by no means a dead letter.

In fact, at the very moment the US Federal Reserve announced the end of its five-year-long program of bond-buying, the Bank of Japan (BOJ) announced a significant expansion of its QE, as noted in this article from Forbes.

Last week, as the Federal Reserve officially announced the end of its long-term asset purchase program (commonly known as QE3), the Bank of Japan significantly ratcheted up its own quantitative easing program, in a surprising 5-4 split decision. Starting next year, the Bank of Japan will increase its balance sheet by 15 percent of GDP per annum and will extend the average duration of its bond purchases from 7 years to 10 years. The big move by Japan’s central bank comes amid the country’s GDP declining by 7.1% in the second quarter of 2014 (on an annualized basis) from the previous quarter following the increase of the VAT sales tax from 5% to 8% in Japan earlier this year and worries that Japan could fall into another deflationary spiral.

The scale of the Japanese effort is truly staggering, as this chart from the Forbes article illustrates.

[Chart: central bank assets, from the Forbes article]

The Economist article on this development, Every man for himself, tries to work out the implications of the Japanese action for the value of the yen, Japanese inflation/deflation, the Japanese international trade position, competitors such as China, and the US dollar.

What about Europe? Well, Bloomberg offers this primer – Europe’s QE Quandary. Short take – there are 18 nations which have to agree and move together, Germany’s support being decisive. But deflation appears to be spreading in Europe, so many expect something to be done along QE lines.

If you are forecasting for businesses, government agencies, or investors, these developments by central banks around the world are critically important. Their effects may be subtle, showing up largely as unintended consequences, but the scale of the operations means you simply have to keep track.

Forecasting the Downswing in Markets – II

Because the Great Recession of 2008-2009 was closely tied to asset bubbles in the US and other housing markets, I have a category for asset bubbles in this blog.

In researching the housing and other asset bubbles, I have been surprised to discover that there are economists who deny their existence.

By one definition, an asset bubble is a sustained movement of prices in a market away from fundamental values. While there are precedents for suggesting that bubbles can form in the context of rational expectations (for example, Blanchard’s widely quoted 1982 paper), it seems more reasonable to consider that “noise” investors who are less than perfectly informed are part of the picture. Thus, there is an interesting study of the presence and importance of “out-of-town” investors in the recent run-up of US residential real estate prices, which peaked in 2008.

The “deviations from fundamentals” approach in econometrics often translates into attempts to establish, or to show breaks in, cointegrating relationships between, for example, rental rates and housing prices. Let me just say right off that the problem with this is that the whole subject of cointegration of nonstationary time series is fraught with statistical pitfalls – such as the low power of tests for unit roots. To hang everything on whether Granger causation can or cannot be shown is really to be subject to the whims of random influences in the data, as well as to violations of distributional assumptions on the relevant error terms.

I am sorry if all that sounds kind of wonkish, but it really needs to be said.
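To make that concern concrete, here is a minimal sketch in Python with statsmodels, using simulated series as stand-ins for rents and home prices (the variable names and the data-generating assumptions are mine, purely for illustration). It shows the kind of unit root and cointegration testing involved, and how the verdict can flip depending on the sample examined:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

np.random.seed(42)
n = 200

# Simulated "fundamental": log rents as a drifting random walk
log_rents = np.cumsum(0.002 + 0.01 * np.random.randn(n))

# Log prices track rents until a bubble-like term is switched on late in the sample
bubble = np.concatenate([np.zeros(150), 0.01 * np.arange(50) ** 1.5])
log_prices = 1.0 + log_rents + 0.02 * np.random.randn(n) + bubble

# Augmented Dickey-Fuller tests: unit roots are hard to reject in short samples
for name, series in [("log rents", log_rents), ("log prices", log_prices)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"ADF {name}: stat={stat:.2f}, p-value={pvalue:.3f}")

# Engle-Granger cointegration test, full sample vs. pre-bubble sample
for label, end in [("full sample", n), ("pre-bubble sample", 150)]:
    stat, pvalue, _ = coint(log_prices[:end], log_rents[:end])
    print(f"Cointegration, {label}: stat={stat:.2f}, p-value={pvalue:.3f}")
```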

Institutionalist approaches seem more promising – such as a recent white paper arguing that the housing bubble and bust was the result of a…

supply-side phenomenon, attributable to an excess of mispriced mortgage finance: mortgage-finance spreads declined and volume increased, even as risk increased—a confluence attributable only to an oversupply of mortgage finance.

But what about forecasting the trajectory of prices, both up and then down, in an asset bubble?

What can we make out of charts such as this, in a recent paper by Sornette and Cauwels?

[Chart: negative bubble example, from Sornette and Cauwels]

Sornette and the many researchers collaborating with him over the years work with a paradigm of an asset bubble as a faster-than-exponential increase in prices. In an as yet futile effort to extend the olive branch to traditional economists (Sornette is a geophysicist by training), Sornette invokes the “bubbles following from rational expectations” meme. The idea is that it could be rational for an investor to participate in a market that is in the throes of an asset bubble, provided that the investor believes his gains in the near future adequately compensate for the increased risk of a collapse in prices. This is the “greater fool” theory to a large extent, and I always take delight in pointing out that one of the most intelligent of all human beings – Isaac Newton – was burned by exactly such a situation hundreds of years ago.

In any case, the mathematics of the Sornette et al. approach are organized around the log-periodic power law (LPPL), which Sornette and Cauwels write in a form equivalent to the following equation.

$$\ln p(t) = A + B\,(t_c - t)^{m} + C\,(t_c - t)^{m}\cos\!\big(\omega \ln(t_c - t) - \phi\big)$$

From a big picture standpoint, the first thing to observe is that there is a parameter tc in the equation which is the “critical time.”

The whole point of this mathematical apparatus, which derives in part from differential equations and some basic modeling approaches common in physics, is that faster-than-exponential growth is destined to reach a point at which it basically goes ballistic. That is the critical point. The purpose of forecasting in this context, then, is to predict when this will happen: when will the asset bubble reach its maximum price and then collapse?

And the Sornette framework allows for negative as well as positive price movements according to the dynamics in this equation. So, it is possible, if we can implement this, to predict how far the market will fall after the bubble pops, so to speak, and when it will turn around.

Pretty heady stuff.

The second big picture feature is to note the number of parameters to be estimated in fitting this model to real price data – minimally constants A, B, and C, an exponent m, the angular frequency ω and phase φ, plus the critical time.
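To make the parameter count concrete, here is a minimal Python sketch of the trajectory implied by the equation above. The function signature mirrors the parameters just listed; the numerical values plugged in at the end are invented purely for illustration, not estimates from any market.

```python
import numpy as np

def lppl(t, A, B, C, m, omega, phi, tc):
    """Log-periodic power law: expected log price at time t, for t < tc."""
    dt = np.maximum(tc - t, 1e-8)  # guard for numerical fitting when t approaches tc
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) - phi)

# Illustrative trajectory with made-up parameter values and a critical time tc = 1000
t = np.linspace(0.0, 990.0, 500)
log_price = lppl(t, A=7.0, B=-0.5, C=0.05, m=0.5, omega=8.0, phi=1.0, tc=1000.0)
```

With B negative and 0 < m < 1, log prices accelerate toward the level A as t approaches the critical time tc, with log-periodic oscillations of increasing frequency superimposed.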

For the mathematically inclined, there is a thread of criticism and response, more or less culminating in Clarifications to questions and criticisms on the Johansen–Ledoit–Sornette financial bubble model, which used to be available as a PDF download from ETH Zurich.

In brief, the issue is whether the numerical methods fitting the data to the LPPL model arrive at local, rather than global, optima. Obviously, different values for the parameters can lead to wholly different forecasts for the critical time tc.

To some extent, this issue can be dealt with by running a great number of estimations of the parameters, or by developing collateral metrics for adequacy of the estimates.
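A bare-bones sketch of that multi-start strategy might look like the following, reusing the lppl function above with synthetic data, an arbitrary number of restarts, and scipy’s generic curve_fit standing in for the more elaborate calibration schemes in the literature:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic "observed" log prices generated from the lppl function defined earlier
t_obs = np.arange(0.0, 950.0)
y_obs = lppl(t_obs, 7.0, -0.5, 0.05, 0.5, 8.0, 1.0, 1000.0) \
        + 0.01 * rng.standard_normal(t_obs.size)

best = None
for _ in range(50):                         # many random restarts against local optima
    p0 = [y_obs.mean(),                     # A
          -rng.uniform(0.1, 1.0),           # B
          rng.uniform(-0.1, 0.1),           # C
          rng.uniform(0.1, 0.9),            # m
          rng.uniform(4.0, 15.0),           # omega
          rng.uniform(0.0, 2 * np.pi),      # phi
          t_obs[-1] + rng.uniform(5, 200)]  # tc, which must lie beyond the sample
    try:
        params, _ = curve_fit(lppl, t_obs, y_obs, p0=p0, maxfev=20000)
    except RuntimeError:
        continue                            # restarts that fail to converge are dropped
    sse = np.sum((lppl(t_obs, *params) - y_obs) ** 2)
    if params[-1] > t_obs[-1] and (best is None or sse < best[0]):
        best = (sse, params)

if best is not None:
    print("estimated critical time tc:", best[1][-1])
```

Collateral checks – for example, discarding fits with m outside (0, 1) or with implausible oscillation frequencies – play the role of the adequacy metrics mentioned above.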

But the bottom line is – regardless of the extensive applications of this approach to all manner of asset bubbles internationally and in different markets – the estimation of the parameters seems more in the realm of art than science at the present time.

However, it may be that mathematical or computational breakthroughs are possible.

I feel these researchers are “very close.”

In any case, it would be great if there were a package in R or the like to gin up these estimates of the critical time, applying the log-periodic power law.

Then we could figure out “how low it can go.”

And, a final note to this post – it is ironic that as I write and post this, the stock markets have recovered from their recent swoon and are setting new records. So I guess I just want to be prepared, and am not willing to believe the run-up can go on forever.

I’m also interested in methodologies that can keep forecasters usefully at work, during the downswing.

Forecasting the Downswing in Markets

I got a chance to work on the problem of forecasting during a business downturn at Microsoft in 2007-2010.

Usually, a recession is not good for a forecasting team. There is a tendency to shoot the messenger bearing the bad news. Cost cutting often falls on marketing first, which often is where forecasting is housed.

But Microsoft in 2007 was a company which, based on past experience, looked on recessions with a certain aplomb. Company revenues continued to climb during the recession of 2001, and also during the previous recession in the early 1990s, when revenues were smaller.

But the plunge in markets in late 2008 was scary. Microsoft’s executive team wanted answers. Since few were forthcoming from the usual market research vendors – the vendors seemed sort of “paralyzed” when it came to bringing out updates – management looked within the organization.

I was part of a team that got this assignment.

We developed a model to forecast global software sales across more than 80 national and regional markets. At one point, the forecasts were used in deliberations of the finance directors developing budgets for FY2010. Our model, by several performance comparisons, did as well as or better than the belated efforts of the market research vendors.

This was a formative experience for me, because a lot of what I did, as the primary statistical or econometric modeler, was seat-of-the-pants. But I tried a lot of things.

That’s one reason why this blog explores method and technique – an area of forecasting that, currently, is exploding.

Importance of the Problem

Forecasting the downswing in markets can be vitally important for an organization, or an investor, but the first requirement is to keep your wits. All too often there are across-the-board cuts.

A targeted approach can be better. All market corrections, inflections, and business downturns come to an end. Growth resumes somewhere, and then picks up generally. Companies that cut to the bone are poorly prepared for the future and can pay heavily in terms of loss of market share. Also, re-assembling a talent pool like the one currently serving the organization can be very expensive.

But how do you set reasonable targets – how, in essence, do you make intelligent decisions about cutbacks?

I think there are many more answers than are easily found in the management literature at present.

But one thing you need to do is get a handle on the overall swing of markets. How long will the downturn continue, for example?

For someone concerned with stocks, how long and how far will the correction go? Obviously, perspective on this can inform shorting the market, which, my research suggests, is an important source of profits for successful investors.

A New Approach – Deploying High Frequency Data

Based on recent explorations, I’m optimistic it will be possible to get several weeks lead-time on releases of key US quarterly macroeconomic metrics in the next downturn.

My last post, for example, has this graph.

[Chart: out-of-sample forecasts of quarterly GDP growth versus actual]

Note how the orange line hugs the blue line during the descent 2008-2009.

The orange line is the out-of-sample forecast of quarterly nominal GDP growth, based on the previous quarter’s GDP growth and suitable lagged values of the monthly Chicago Fed National Activity Index. The blue line, of course, is actual GDP growth.

The official name for this is nowcasting, and MIDAS, or Mixed Data Sampling, techniques are widely discussed approaches to the problem.

But because I was only mapping monthly, and not, say, daily, values onto quarterly values, I was able simply to specify the previous quarter’s value and fifteen lagged values of the CFNAI in a straightforward regression.

And in reviewing the literature on MIDAS and mixed data frequencies, it is clear to me that it is often not necessary to calibrate polynomial lag expressions to encapsulate all the higher frequency data, as in the classic MIDAS approach.

Instead, one can deploy all the “many predictors” techniques developed over the past decade or so, starting with the factor-analytic work of Stock and Watson. These methods also can bring “ragged edge” data into play – data with different release dates, if not different fundamental frequencies.

So, for example, you could specify daily data against quarterly data, involving perhaps several financial variables with deep lags – maybe totaling more explanatory variables than observations on the quarterly or lower frequency target variable – and wrap the whole estimation up in a bundle with ridge regression or the LASSO. You are really only interested in the result, the prediction of the next value for the quarterly metric, rather than unbiased estimates of the coefficients of explanatory variables.
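As a schematic sketch only – in Python with scikit-learn, using simulated placeholders for the daily financial series and the quarterly target – the mechanics might look like this:

```python
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV

rng = np.random.default_rng(1)

# Placeholder data: 40 quarterly observations of the target (e.g., GDP growth)
# and 3 daily financial series with roughly 63 trading days per quarter
n_quarters, days_per_quarter, n_series = 40, 63, 3
daily = rng.standard_normal((n_quarters * days_per_quarter, n_series))
y = rng.standard_normal(n_quarters)

# Lay the daily observations for each quarter side by side: 63 * 3 = 189 columns,
# far more explanatory variables than the 40 quarterly observations
X = daily.reshape(n_quarters, days_per_quarter * n_series)

# Regularization makes the wide regression workable; cross-validation picks the penalty
ridge = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(X[:-1], y[:-1])
lasso = LassoCV(cv=5, max_iter=50000).fit(X[:-1], y[:-1])

print("ridge nowcast of the latest quarter:", ridge.predict(X[[-1]])[0])
print("lasso nowcast of the latest quarter:", lasso.predict(X[[-1]])[0])
```

The shrinkage penalty is what makes a regression with more columns than rows feasible; as noted, the coefficients themselves are not the object of interest, only the prediction.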

Or you could run a principal component analysis of the data on explanatory variables, including a rag-tag collection of daily, weekly, and monthly metrics, as well as one or more lagged values of the target variable (quarterly GDP growth in the graph above).
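Again purely as a sketch with simulated stand-ins, the principal components route might look like this (in practice, lagged values of the target would be appended as additional columns):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# A wide block of mixed-frequency predictors per quarter, simulated here
n_quarters, n_predictors = 40, 189
X = rng.standard_normal((n_quarters, n_predictors))
y = rng.standard_normal(n_quarters)

# Extract a handful of principal components ("diffusion indexes") from the predictors,
# then regress the quarterly target on those components
pca = PCA(n_components=5)
factors = pca.fit_transform(X[:-1])
model = LinearRegression().fit(factors, y[:-1])

# Project the latest quarter's predictors onto the same components to nowcast it
latest_factors = pca.transform(X[[-1]])
print("factor-based nowcast:", model.predict(latest_factors)[0])
```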

Dynamic principal components also are a possibility, if anyone can figure out the estimation algorithms to move into a predictive mode.

Being able to put together predictor variables of all different frequencies and reporting periods is really exciting. Maybe in some way this is really what Big Data means in predictive analytics. But, of course, progress in this area is wholly empirical, since it is not clear which higher frequency series can successfully map onto the headline numbers until the analysis is performed. And I think it is important to stress out-of-sample testing of the models, perhaps using cross-validation to estimate parameters if there is simply not enough data.

One thing I believe is for sure, however: we will not be in the dark for so long during the next major downturn. It will be possible to deploy all sorts of higher frequency data to chart the trajectory of the downturn, probably allowing a call on the turning point sooner than if we waited for the “big number” to come out officially.


Mapping High Frequency Data Onto Aggregated Variables – Monthly and Quarterly Data

A lot of important economic data are available only in quarterly installments. US Gross Domestic Product (GDP) is one example.

Other financial series and indexes, such as the Chicago Fed National Activity Index, are available at monthly or even higher frequencies.

Aggregation is a common tactic in this situation: monthly data are aggregated to quarterly values and then mapped against quarterly GDP.
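In pandas, for example, this aggregation step is essentially a one-liner; the monthly series here is simulated, standing in for something like the CFNAI:

```python
import numpy as np
import pandas as pd

# Made-up monthly index values, a stand-in for a series such as the CFNAI
monthly = pd.Series(
    np.random.randn(36),
    index=pd.date_range("2011-01-31", periods=36, freq="M"),
)

# Average the three months in each quarter so the series lines up with quarterly GDP
quarterly = monthly.resample("Q").mean()
```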

But there are alternatives.

One is what Elena Andreou, Eric Ghysels and Andros Kourtellos call a naïve specification –

$$Y_t^Q = a + \sum_{j=0}^{N_D - 1} b_j\, X_{N_D - j,\, t}^{D} + \varepsilon_t$$

With daily (D) and quarterly (Q) data, there typically is a proliferation of parameters to estimate – 66 if you allow 22 trading days per month. Here N_D in the above equation is the number of daily observations in the quarterly period.

The usual workaround is a weighting scheme. Thus, two-parameter exponential Almon lag polynomials are identified with MIDAS, or Mixed Data Sampling.
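For reference, the two-parameter exponential Almon version replaces the separate slope coefficients with a weighting function, along the lines of

$$Y_t^Q = a + b \sum_{j=0}^{N_D - 1} w_j(\theta_1, \theta_2)\, X_{N_D - j,\, t}^{D} + \varepsilon_t,
\qquad
w_j(\theta_1, \theta_2) = \frac{\exp(\theta_1 j + \theta_2 j^{2})}{\sum_{k=0}^{N_D - 1} \exp(\theta_1 k + \theta_2 k^{2})},$$

so that only b, θ1, and θ2 have to be estimated in place of the 66 separate daily coefficients.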

However, other researchers note that with monthly and quarterly data, direct estimation of expressions such as the one above (with X^M in place of X^D) is more feasible.

The example presented here shows that such models can achieve dramatic gains in accuracy.

Quarterly and Monthly Data Example

Let’s consider forecasting releases of the US nominal Gross Domestic Product by the Bureau of Economic Analysis.

From the BEA’s 2014 News Release Schedule for the National Economic Accounts, one can see that advance estimates of GDP occur a minimum of one month after the end of the quarter being reported. So, for example, the advance estimate for the Third Quarter was released October 30 of this year.

This means the earliest updates on quarterly US GDP become available fully a month after the end of the quarter in question.

The Chicago Fed National Activity Index (CFNAI), a monthly gauge of overall economic activity, is released three weeks after the month being measured.

So, by the time the advance GDP estimate for the latest quarter is released, as many as four recent monthly CFNAI values are available, three of which pertain to the months constituting this latest measured quarter.

Accordingly, I set up an equation with a lagged term for GDP growth and fifteen terms or values for the monthly CFNAI index. For each case, I regress GDP growth for quarter t onto GDP growth for quarter t-1, the values of the monthly CFNAI index for quarter t except for the most recent or last month, and twelve other lagged values of the CFNAI index for the quarters preceding the final quarter to be estimated – quarter t-1, quarter t-2, and so on.

One of the keys to this data structure is that the monthly CFNAI values do not “stack,” as it were. Instead the most recent lagged CFNAI value for a case always jumps by three months. So, for the 3rd quarter GDP in, say, 2006, the CFNAI value starts with the value for August 2006 and tracks back 14 values to July 2005. Then for the 4th quarter of 2006, the CFNAI values start with November 2006, and so forth.

This somewhat intricate description supports the idea that we are estimating current quarter GDP just at the end of the current quarter before the preliminary measurements are released.
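A sketch of how that lag structure might be built in Python with pandas follows. The series names are placeholders: gdp_growth is assumed to be a quarterly PeriodIndex series of year-over-year growth rates, cfnai a monthly PeriodIndex series starting early enough to supply the lags, and the number of monthly lags is left as a parameter.

```python
import pandas as pd

def build_design_matrix(gdp_growth, cfnai, n_lags=15):
    """Pair each quarter t with GDP growth in t-1 plus n_lags monthly CFNAI values
    ending at the second month of quarter t -- the latest month whose CFNAI release
    is available before quarter t ends."""
    rows = []
    for q in gdp_growth.index[1:]:
        last_month = q.asfreq("M", how="end") - 1         # e.g. Aug 2006 for 2006Q3
        months = [last_month - j for j in range(n_lags)]  # window shifts 3 months per row
        row = {"gdp_lag1": gdp_growth[q - 1]}
        row.update({f"cfnai_lag{j}": cfnai[m] for j, m in enumerate(months)})
        rows.append(pd.Series(row, name=q))
    X = pd.DataFrame(rows)
    return gdp_growth.loc[X.index], X
```

Because the window always ends at the second month of the current quarter, consecutive rows shift by three months rather than one – the “non-stacking” feature described above.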

Data and Estimation

I compile BEA quarterly data for nominal US GDP from the first quarter of 1981, or 1981:1, to the fourth quarter of 2011. I also download monthly data for the Chicago Fed National Activity Index from October 1979 to December 2011.

For my dependent or target variable, I calculate year-over-year GDP growth rates by quarter, from the BEA data.

I estimate an equation of the type illustrated at the beginning of this post by ordinary least squares (OLS). For the quarterly data, I use the sample period 1981:2 to 2006:4. The monthly data start earlier, to assure enough lagged terms for the CFNAI index, and run from 1979:10 to 2006:12.
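A sketch of the estimation step, assuming the helper sketched above and the gdp_growth and cfnai series already loaded from the BEA and Chicago Fed data (however they are retrieved), could run as follows:

```python
import pandas as pd
import statsmodels.api as sm

# Build the target and regressors with the helper sketched earlier
y, X = build_design_matrix(gdp_growth, cfnai, n_lags=15)
X = sm.add_constant(X)

# Estimation sample through 2006:4, as described; later quarters are held out
train = X.index <= pd.Period("2006Q4", freq="Q")
ols = sm.OLS(y[train], X[train]).fit()
print(ols.summary())

# Out-of-sample forecasts of year-over-year GDP growth from 2007:1 onward
forecasts = ols.predict(X[~train])
```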

Results

The results are fairly impressive. The regression equation estimated over quarterly and monthly data to the end of 2006 performs much better than a simple first-order autoregressive benchmark during the tremendous dip in growth characterizing the Great Recession. In general, even after the stabilization of GDP growth in 2010 and 2011, the high frequency data regression produces better out-of-sample forecasts.

Here is a graph comparing the out-of-sample forecast accuracy of the high frequency regression and the simple first-order autoregression.

[Chart: out-of-sample forecast comparison, high frequency regression versus first-order autoregression]

What’s especially interesting is that the high frequency data regression does a good job of capturing the drop in GDP and the movement at the turning point in 2009 – the depth of the Great Recession.
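For completeness, here is a short sketch of that comparison, continuing with the objects from the previous snippets; the benchmark simply regresses GDP growth on a constant and its own first lag:

```python
import numpy as np
import statsmodels.api as sm

# AR(1)-type benchmark: GDP growth on its own first lag only
Z = sm.add_constant(X[["gdp_lag1"]])
ar1 = sm.OLS(y[train], Z[train]).fit()

def rmse(errors):
    return float(np.sqrt(np.mean(np.square(errors))))

print("high frequency regression RMSE:", rmse(y[~train] - ols.predict(X[~train])))
print("AR(1) benchmark RMSE:          ", rmse(y[~train] - ar1.predict(Z[~train])))
```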

I offer this chart as a proof of concept. More detailed methods, using a specially constructed Chicago Fed index, are described in a paper in the Journal of Economic Perspectives.