Links – April 18

Ukraine

US financial showdown with Russia is more dangerous than it looks, for both sides – Ambrose Evans-Pritchard at his most incisive.

How the Ukraine crisis ends – Henry Kissinger, not always one of my favorites, writes an almost wise comment on Ukraine from early March. Still relevant.

The West must understand that, to Russia, Ukraine can never be just a foreign country. Russian history began in what was called Kievan-Rus. The Russian religion spread from there. Ukraine has been part of Russia for centuries, and their histories were intertwined before then. Some of the most important battles for Russian freedom, starting with the Battle of Poltava in 1709, were fought on Ukrainian soil. The Black Sea Fleet — Russia’s means of projecting power in the Mediterranean — is based by long-term lease in Sevastopol, in Crimea. Even such famed dissidents as Aleksandr Solzhenitsyn and Joseph Brodsky insisted that Ukraine was an integral part of Russian history and, indeed, of Russia.

China

The Future of Democracy in Hong Kong – an enlightening video interview (about an hour long) with veteran Hong Kong political leaders Anson Chan and Martin Lee. Beijing and local Hong Kong democratic forces appear to be on a collision course.

Inside Look At Electric Taxis Hitting China In Mass This Summer – China needs these. The pollution from cars in Beijing and other big cities is stifling and getting worse.


Economy

Detecting bubbles in real time – an interesting suggestion for a metric to gauge the bubble status of an asset market.

Fed’s Yellen More Concerned About Inflation Running Below 2% Target – just a teaser, but check this Huffington Post flash video of Yellen, at the time still with the San Francisco Fed, as she lays out the dangers of deflation in early 2013. Note also the New Yorker blog on Yellen’s recent policy speech, and her silence on speculative bubbles.


Data Analytics

Manipulate Me: The Booming Business in Behavioral Finance

Hidden Markov Models: The Backwards Algorithm

Suppose you are at a table at a casino and notice that things don’t look quite right. Either the casino is extremely lucky, or things should have averaged out more than they have. You view this as a pattern recognition problem and would like to understand the number of ‘loaded’ dice that the casino is using and how these dice are loaded. To accomplish this you set up a number of Hidden Markov Models, where the number of loaded dice is the latent variable, and would like to determine which of these, if any, indicates the casino is using rigged dice.
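As a concrete illustration of the backward pass in this sort of setup, here is a minimal sketch in R for a discrete-observation HMM. The transition matrix, emission matrix, and observation sequence below are toy placeholders of my own, not anything from the linked post.

# Backward algorithm for a discrete HMM (sketch)
# A: N x N transition matrix (rows sum to 1)
# B: N x M emission matrix, B[i, k] = P(observe symbol k | state i)
# obs: integer vector of observed symbols in 1..M
backward <- function(A, B, obs) {
  N <- nrow(A)
  Tn <- length(obs)
  beta <- matrix(0, nrow = Tn, ncol = N)
  beta[Tn, ] <- 1                        # initialization at the final time step
  for (t in (Tn - 1):1) {
    for (i in 1:N) {                     # recursion: sum over states at time t + 1
      beta[t, i] <- sum(A[i, ] * B[, obs[t + 1]] * beta[t + 1, ])
    }
  }
  beta
}

# Toy example: state 1 = fair die, state 2 = die loaded toward sixes
A <- matrix(c(0.95, 0.05, 0.10, 0.90), nrow = 2, byrow = TRUE)
B <- rbind(rep(1/6, 6), c(rep(0.1, 5), 0.5))
obs <- c(6, 6, 3, 6, 1, 6)
beta <- backward(A, B, obs)

In practice the recursion is run in log space or with scaling, to avoid numerical underflow on long observation sequences.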

Loess Seasonal Decomposition as a Forecasting Tool

I’ve applied something called loess decomposition to the London PM Fix gold series previously discussed in this blog.

This suggests insights missing from an application of Forecast Pro – a sort of standard in the automatic forecasting field.

Loess decomposition separates a time series into components – trend, seasonals, and residuals or remainder – based on locally weighted regression smoothing of the data.

I always wondered whether, in fact, there was a seasonal component to the monthly London PM fix time series.

Not every monthly or quarterly time series has credible seasonal components, of course.

The proof would seem to be in the pudding. If a program derives seasonal components for a time series, do those seasonal components improve forecasts? That seems to be the critical issue.

STL Decomposition

STL decomposition – seasonal and trend decomposition using loess – was proposed by Cleveland et al. in an interesting-sounding publication called “The Journal of Official Statistics.” I found the citation working through the procedure for bagging exponential smoothing mentioned in the previous post.

Amazingly, there is an online resource which calculates this loess decomposition for data you input, based on a listed R routine. The citation is: Wessa, P. (2013), Decomposition by Loess (v1.0.2) in Free Statistics Software (v1.1.23-r7), Office for Research Development and Education, URL http://www.wessa.net/rwasp_decomposeloess.wasp/

Comparison of STL Decomposition and Forecast Pro Gold Price Forecasts

Here’s a typical graph comparing the forecast errors from the Forecast Pro runs with STL Decomposition.

[Chart: forecast errors – Forecast Pro versus STL decomposition forecasts of the gold price]

The trend component extracted by the STL decomposition was uncomplicated and easy to forecast by linear extrapolation. I added the seasonal component to these extrapolations to get the monthly forecasts over the six month forecast horizon. Forecast Pro, on the other hand, did not signal the existence of a seasonal component in this series, and, furthermore, identified the optimal forecast model as a random walk and the optimal forecast as the last observed value.
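For readers who want to try this sort of exercise, here is a minimal sketch in R using the base stl() routine and Hyndman’s forecast package. The series name gold_pm_fix and the start date are placeholders; this is the general recipe, not the exact runs behind the chart above.

# STL decomposition of a monthly series plus a simple trend-extrapolation forecast
library(forecast)

gold <- ts(gold_pm_fix, frequency = 12, start = c(2000, 1))  # gold_pm_fix: your monthly data (placeholder)
dec  <- stl(gold, s.window = "periodic")                     # loess-based trend/seasonal/remainder split
plot(dec)

# Hand-rolled version of the procedure described above:
# extrapolate the trend linearly and add back the seasonal component
h         <- 6
trend     <- dec$time.series[, "trend"]
seasonal  <- dec$time.series[, "seasonal"]
tt        <- seq_along(trend)
trend_fit <- lm(trend ~ tt)
trend_fc  <- predict(trend_fit, newdata = data.frame(tt = length(trend) + 1:h))
seas_fc   <- seasonal[length(seasonal) - 12 + 1:h]           # reuse last year's factors (assumes the series ends in December)
fc_manual <- trend_fc + seas_fc

# Or let the forecast package do the bookkeeping: stlf() seasonally adjusts,
# forecasts the adjusted series (here with a naive/random walk method), and re-seasonalizes
fc_auto <- stlf(gold, h = h, method = "naive")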

Here is the trend component from the STL decomposition.

[Chart: trend component of the London PM fix gold price from the STL decomposition]

Discussion

Potentially, there is lots more to discuss here.

For example, to establish that forecasts based on the loess decomposition of the gold price outperform Forecast Pro means compiling a large number of forecast comparisons, ideally one for every possible training set beyond a minimum number of observations required for stable calculation of the STL algorithm. That is, each training set generates somewhat different values for the trend, seasonals, and residuals with loess decomposition. And Forecast Pro needs to be run for all these possible training sets also, with forecasts compared to out-of-sample data.

While I have not gone to this extent, I have done these computations several times with good results for STL decomposition.

Also, it’s clear that loess decomposition extracts constant variance seasonals. However, the shape of these seasonals changes as the training set changes. It is thus necessary to study whether these changes can reflect multiplicative seasonality, for series in which that type of seasonality predominates. For example, perhaps STL seasonals tend to reflect the end points of the training sets.

Bergmeir, Hyndman, and Benítez (BHB) apply a Box-Cox transformation in one of their bagged exponential smoothing methods. This is possibly another way to sidestep problems of multiplicative or heteroskedastic seasonality. It also makes sense when one is attempting to bag a time series.

However, my explorations suggest the results of STL decomposition are quite flexible, and, in the case of this gold price series, often produce superior forecasts to results from one of the main off-the-shelf automatic forecasting programs.

I personally am going to work on including STL decomposition in my forecasting toolkit.

Bagging Exponential Smoothing Forecasts

Bergmeir, Hyndman, and Benítez (BHB) successfully combine two powerful techniques – exponential smoothing and bagging (bootstrap aggregation) – in ground-breaking research.

I predict the forecasting system described in Bagging Exponential Smoothing Methods using STL Decomposition and Box-Cox Transformation will see wide application in business and industry forecasting.

These researchers demonstrate their algorithms for combining exponential smoothing and bagging outperform all other forecasting approaches in the M3 forecasting competition database for monthly time series, and do better than many approaches for quarterly and annual data. Furthermore, the BHB approach can be implemented with extant routines in the programming language R.

This table compares bagged exponential smoothing with other approaches on monthly data from the M3 competition.

[Table: forecast accuracy of bagged exponential smoothing versus other approaches on M3 monthly data]

Here BaggedETS.BC refers to a variant of the bagged exponential smoothing model which uses a Box-Cox transformation of the data to reduce the variance of model disturbances. The error metrics are the symmetric mean absolute percentage error (sMAPE) and the mean absolute scaled error (MASE). These are calculated for applications of the various models to out-of-sample, holdout, or test sample data from each of 1428 monthly time series in the competition.

See the online text by Hyndman and Athanasopoulos for motivations and discussions of these error metrics.
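For reference, the usual definitions (following Hyndman and Koehler) run roughly as follows, for an h-step forecast of a series with seasonal period m and a training sample of length T; the M3 literature uses slight variants of sMAPE:

$$\text{sMAPE} = \frac{200}{h}\sum_{t=1}^{h}\frac{\left|y_{T+t}-\hat{y}_{T+t}\right|}{\left|y_{T+t}\right|+\left|\hat{y}_{T+t}\right|}, \qquad \text{MASE} = \frac{\frac{1}{h}\sum_{t=1}^{h}\left|y_{T+t}-\hat{y}_{T+t}\right|}{\frac{1}{T-m}\sum_{t=m+1}^{T}\left|y_{t}-y_{t-m}\right|}$$

MASE scales the out-of-sample errors by the in-sample error of a seasonal naïve forecast, so values below one beat that benchmark.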

The BHB Algorithm

In a nutshell, here is the BHB description of their algorithm.

After applying a Box-Cox transformation to the data, the series is decomposed into trend, seasonal and remainder components. The remainder component is then bootstrapped using the MBB, the trend and seasonal components are added back, and the Box-Cox transformation is inverted. In this way, we generate a random pool of similar bootstrapped time series. For each one of these bootstrapped time series, we choose a model among several exponential smoothing models, using the bias-corrected AIC. Then, point forecasts are calculated using all the different models, and the resulting forecasts are averaged.

The MBB is the moving block bootstrap. It involves random selection of blocks of the remainders or residuals, preserving the time sequence and, hence, autocorrelation structure in these residuals.

Several R routines supporting these algorithms have previously been developed by Hyndman et al. In particular, the ets routine developed by Hyndman and Khandakar fits 30 different exponential smoothing models to a time series, identifying the optimal model by an Akaike information criterion.
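To make the sequence in the quote concrete, here is a rough, hand-rolled sketch in R – Box-Cox, STL, moving block bootstrap of the remainder, ets on each bootstrapped series, and averaging of the point forecasts. It is my paraphrase of the procedure under simplifying assumptions (a generic monthly ts object y, a fixed two-year block length), not the authors’ code; recent versions of the forecast package also bundle routines that automate this.

# Sketch of a bagged exponential smoothing forecast (y: a monthly ts object, placeholder)
library(forecast)

h      <- 12                                        # forecast horizon
nboot  <- 50                                        # number of bootstrapped series
lambda <- BoxCox.lambda(y)
z      <- BoxCox(y, lambda)                         # 1. Box-Cox transform
dec    <- stl(z, s.window = "periodic")             # 2. STL decomposition
trend  <- dec$time.series[, "trend"]
seas   <- dec$time.series[, "seasonal"]
rem    <- as.numeric(dec$time.series[, "remainder"])

# 3. Moving block bootstrap of the remainder (preserves short-run autocorrelation)
mbb <- function(x, block_len = 24) {
  n      <- length(x)
  starts <- sample.int(n - block_len + 1, ceiling(n / block_len), replace = TRUE)
  boot   <- unlist(lapply(starts, function(s) x[s:(s + block_len - 1)]))
  boot[1:n]
}

# 4.-6. Rebuild each bootstrapped series, invert the Box-Cox transform,
# fit an AICc-selected ets model, forecast, and average the point forecasts
fc_matrix <- replicate(nboot, {
  z_boot <- trend + seas + mbb(rem)
  y_boot <- InvBoxCox(z_boot, lambda)
  as.numeric(forecast(ets(y_boot), h = h)$mean)
})
bagged_fc <- rowMeans(fc_matrix)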

Some Thoughts

This research lays out an almost industrial-scale effort to extract more information for prediction purposes from time series, and at the same time to use an applied forecasting workhorse – exponential smoothing.

Exponential smoothing emerged as a forecasting technique in applied contexts in the 1950’s and 1960’s. The initial motivation was error correction of forecasts from an arbitrary origin, rather than estimation of an underlying stochastic model. Only later were relationships between exponential smoothing and time series processes, such as random walks, revealed with the work of Muth and others.

The M-competitions, initially organized in the 1970’s, gave exponential smoothing a big boost, since, by some accounts, exponential smoothing “won.” This is one of the sources of the meme – simpler models beat more complex models.

Then, at the end of the 1990’s, Makridakis and others organized a further M-competition (the M3), which was, in fact, won by the automatic forecasting software program Forecast Pro. This program typically compares ARIMA and exponential smoothing models, picking the best model through proprietary optimization of the parameters and tests on holdout samples. As in most sales and revenue forecasting applications, the underlying data are time series.

While all this was going on, the machine learning community was ginning up new and powerful tactics, such as bagging or bootstrap aggregation. Bagging can be a powerful technique for focusing on parameter estimates which are otherwise masked by noise.

So this application and research builds interestingly on a series of efforts by Hyndman and his associates and draws in a technique that has been largely confined to machine learning and data mining.

It is almost the first of its kind – bagging applications to time series forecasting have so far been less spectacularly successful than in, for example, cross-sectional regression modeling.

A future post here will go through the step-by-step of this approach using some specific and familiar time series from the M competition data.

Inflation/Deflation – 3

Business forecasters often do not directly forecast inflation, but usually are consumers of inflation forecasts from specialized research organizations.

But there is a level of literacy that is good to achieve on the subject – something a quick study of recent, authoritative sources can convey.

A good place to start is the following chart of US Consumer Price Index (CPI) and the GDP price index, both expressed in terms of year-over-year (yoy) percentage changes. The source is the St. Louis Federal Reserve FRED data site.

[Chart: year-over-year percentage changes in the US CPI and the GDP price index, FRED]

The immediate post-WW II period and the 1970’s and 1980’s saw surging inflation. Since somewhere in the 1980’s, and certainly after the early 1990’s, inflation has been on a downward trend.

Some Stylized Facts About Forecasting Inflation

James Stock and Mark Watson wrote an influential NBER (National Bureau of Economic Research) paper in 2006 titled Why Has US Inflation Become Harder to Forecast?

These authors point out that the rate of price inflation in the United States has become both harder and easier to forecast, depending on one’s point of view.

On the one hand, inflation (along with many other macroeconomic time series) is much less volatile than it was in the 1970s or early 1980s, and the root mean squared error of naïve inflation forecasts has declined sharply since the mid-1980s. In this sense, inflation has become easier to forecast: the risk of inflation forecasts, as measured by mean squared forecast errors (MSFE), has fallen.

On the other hand, multivariate forecasting models inspired by economic theory – such as the Phillips curve – lose ground to univariate forecasting models after the middle 1980’s or early 1990’s. The Phillips curve, of course, postulates a tradeoff between inflation and economic activity and is typically parameterized in inflationary expectations and the gap between potential and actual GDP.
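In one common textbook parameterization (my notation, not Stock and Watson’s), an expectations-augmented Phillips curve looks something like

$$\pi_t = \mathbb{E}_t\,\pi_{t+1} + \kappa\,(y_t - y_t^{*}) + \varepsilon_t$$

where $\pi_t$ is inflation, $\mathbb{E}_t\,\pi_{t+1}$ is expected inflation, $y_t - y_t^{*}$ is the output gap between actual and potential GDP, and $\kappa > 0$ governs how strongly economic slack feeds through into prices.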

A more recent paper, Forecasting Inflation, evaluates sixteen inflation forecast models and some judgmental projections. Root mean square prediction errors (RMSE’s) are calculated on quasi-real-time, recursive out-of-sample data – basically what I would call “backcasts.” In other words, the historic data is divided into training and test samples. The models are estimated on the various possible training samples (involving, in this case, consecutive data) and forecasts from these estimated models are matched against the out-of-sample or test data.

The study suggests four principles. 

  1. Subjective forecasts do the best.
  2. Good forecasts must account for a slowly varying local mean.
  3. The nowcast is important and typically utilizes different techniques than standard forecasting.
  4. Heavy shrinkage in the use of information improves inflation forecasts.

Interestingly, this study finds that judgmental forecasts (private sector surveys and the Greenbook) are remarkably hard to beat. Otherwise, most of the forecasting models fail to consistently trump a “naïve forecast,” which is the average inflation rate over the four previous periods.

What This Means

I’m willing to offer interpretations of these findings in terms of (a) the resilience of random walk models, and (b) the eclipse of unionized labor in the US.

So forecasting inflation as an average of several previous values suggests the underlying stochastic process is some type of random walk. Thus, the optimal forecast for a simple random walk is the most recently observed value. The optimal forecast for a random walk with noise is an exponentially weighted average of the past values of the series.
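In R, with the forecast package, these benchmarks are essentially one-liners; infl here is a placeholder for a quarterly inflation series.

library(forecast)

# Random walk benchmark: the forecast is simply the last observed value
rw_fc <- naive(infl, h = 4)

# Random walk plus noise: simple exponential smoothing, i.e. an exponentially
# weighted average of past values, with the smoothing parameter estimated from the data
ses_fc <- ses(infl, h = 4)

# The kind of naive benchmark cited above: the average of the four previous periods
avg4_fc <- rep(mean(tail(infl, 4)), 4)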

The random walk is a recurring theme in many macroeconomic forecasting contexts. It’s hard to beat.

As far as the Phillips curve goes, it’s not clear to me that the same types of tradeoffs between inflation and unemployment exist in the contemporary US economy, as did, say, in the 1950’s or 1960’s. The difference, I would guess, is the lower membership in and weaker power of unions. After the 1980’s, things began to change significantly on the labor front. Companies exacted concessions from unions, holding out the risk that the manufacturing operation might be moved abroad to a lower wage area, for instance. And manufacturing employment, the core of the old union power, fell precipitously.

As far as the potency of subjective forecasts – I’ll let Faust and Wright handle that. While these researchers find what they call subjective forecasts beat almost all the formal modeling approaches, I’ve seen other evaluations calling into question whether any inflation forecast beats a random walk approach consistently. I’ll have to dig out the references to make this stick.

Inflation/Deflation – 2

The following chart, courtesy of the Bank of Japan (BOJ), shows inflation and deflation dynamics in Japan since the 1980’s.

[Chart: inflation and deflation dynamics in Japan since the 1980’s, Bank of Japan]

There is interesting stuff in this chart, not the least of which is the counterintuitive surge in Japanese consumer prices in the Great Recession (2008 et passim).

Note, however, that the GDP deflator fell below zero in 1994, returning to positive territory only briefly around 2009. The other index in the chart – the DCGPI – is a Domestic Corporate Goods Price Index calculated by Japanese statistical agencies.

The Japanese experience with deflation is relevant to deflation dynamics in the Eurozone, and has been central to the thinking and commentary of US policymakers and macroeconomic/monetary theorists, such as Ben Bernanke.

The conventional wisdom explains inflation and deflation with the Phillips Curve. Thus, in one variant, inflation is projected to be a function of inflationary expectations, the output gap between potential and actual GDP, and other factors – decline in export prices, demographic changes, “unlucky” historical developments, or institutional issues in the financial sector.

It makes a big difference how you model inflationary expectations in this model, as John Williams points out in a Federal Reserve Bank of San Francisco Economic Letter.

If inflationary expectations are unanchored, a severe recession can lead to a deflationary spiral.

The logic is as follows: In the early stage of recession, the emergence of slack causes the inflation rate to dip. The resulting lower inflation rate prompts people to reduce their future inflation expectations. Continued economic slack causes the inflation rate to fall still further. If the recession is severe and long enough, this process eventually will cause prices to fall and then spiral lower and lower, resulting in ever-faster deflation rates. The deflation rate stabilizes only when slack is eliminated. And inflation turns positive again only after a sustained period of tight labor markets.

This contrasts with “well anchored” inflationary expectations, where people expect the monetary authorities will step in and end deflationary episodes at some point. In technical time series terms, inflation series exhibit longer-term reversion to mean values, and this acts as a magnet pulling prices up in some deflationary circumstances. Janet Yellen and her husband George Akerlof have commented on this type of dynamic in inflationary expectations.

The Industrial Side of the Picture

The BOJ Working Paper responsible for the introductory chart also considers “other factors” in the Phillips Curve explanation, presenting a fascinating table.

[Table: breakout of the Japanese GDP deflator by sector, from the BOJ Working Paper]

The huge drop in prices of electric machinery in Japan over 1990-2009 caught my attention.

The collapse in electric machinery prices represents changed conditions in export markets, with cheaper and high quality electronics manufactured in China and other areas harboring contract electronics manufacturing.

Could this be a major contributor to persisting Japanese deflation, obviously triggered initially by massive drops in Japanese real estate in the early 1990’s?

An interesting paper by Haruhiko Murayama of the Kyoto Research Institute – Reality and Cause of Deflation in Japan – makes a persuasive case for just that conclusion.

Murayama argues competition from China and other lower wage electronics producers is a major factor in continuing Japanese deflation.

The greatest cause of the deflation is a lack of demand, which in turn is attributable to the fact that emerging countries such as China, South Korea and Taiwan have come to manufacture inexpensive high-quality electrical products by introducing new equipment and by taking advantage of their cheap labor. While competing with emerging countries, Japanese electrical machinery makers have been forced to lower their export prices. In addition, an influx of foreign products has reduced their domestic sales, and as a result, overall earnings and demand in Japan have declined, leading to continuous price drops.

He goes on to say that,

..Japan is the only developed country whose electric machinery makers have been struggling because of the onslaught of competition from emerging countries. General Electric of the United States, which is known as a company founded by Thomas Edison and which was previously the largest electric machinery maker in the world, has already shifted its focus to the aircraft and nuclear industries, after facing intense competition with Japanese manufacturers. Other U.S. companies such as RCA (Radio Corporation of America), Motorola and Zenith no longer exist for reasons such as because they failed or were acquired by Japanese companies. The situation is similar in Europe. Consequently, whereas electric machinery accounts for as much as 19.5% of Japan’s nominal exports, equivalent products in the United States (computers and peripherals) take up only 2.3% of the overall U.S. exports (in the October-December quarter of 2012)

This explanation corresponds more closely to my personal observation of contract electronics manufacturing and, earlier, US-based electronics manufacturing. And it seems to apply relatively well to Europe – where Chinese competition in a broadening range of products pressures many European companies – creating a sort of vacuum for future employment and economic growth.

The Kyoto analysis also gets the policy prescription about right –

..the deflation in Japan is much more pervasive than is indicated by the CPI and its cause is a steep drop in export prices of electrical machinery, the main export item for the country, which has been triggered by the increasing competition from emerging countries and which makes it impossible to offset the effects of a rise in import prices by raising export prices. The onslaught of competition from emerging countries is unlikely to wane in the future. Rather, we must accept it as inevitable that emerging countries will continue to rise one after another and attempt to overtake countries that have so far enjoyed economic prosperity.

If so, what is most important is that companies facing the competition from emerging countries recognize that point and try to create high-value products that will be favored by foreign customers. It is also urgently necessary to save energy and develop new energy sources.

Moreover, companies which have not been directly affected by the rise of emerging countries should also take action on the assumption that demand will remain stagnant and deflation will continue in Japan until the electrical machinery industry recovers [or, I would add, until alternative production centers emerge]. They should tackle fundamental challenges with a sense of crisis, including how to provide products and services that precisely meet users’ needs, expand sales channels from the global perspective and exert creativity. Policymakers must develop a price index that more accurately reflects the actual price trend and take appropriate measures in light of the abovementioned challenges.

Maybe in a way that’s what Big Data is all about.

Hyperinflation and Asset Bubbles

According to Mizuno et al, the worst inflation in recent history occurred in Hungary after World War II. The exchange rate for the Hungarian Pengo to the US dollar rose from 100 in July 1945 to 6 x 10^24 Pengos per dollar by July 1946.

Hyperinflations are triggered by inflationary expectations.  Past increases in prices influence expectations about future prices. These expectations trigger market behavior which accelerate price increases in the current period, in a positive feedback loop. Bounds on inflationary expectations are loosened by legitimacy issues relating to the state or social organization.

Hyperinflation can become important for applied forecasting in view of the possibility of smaller countries withdrawing from the Euro.

However, that is not the primary reason I want to address this topic at this point in time.

Rather, episodes of hyperinflation share broad and interesting similarities to the movement of prices in asset bubbles – like the dotcom bubble of the late 1990’s, the Hong Kong Hang Seng Stock Market Index from 1970 to 2000, and housing price bubbles in the US, Spain, Ireland, and elsewhere more recently.

Hyperinflations exhibit faster than exponential growth in prices to some point, at which time the regime shifts. This is analogous to the faster than exponential growth of asset prices in asset bubbles, and has a similar basis. Thus, in an asset bubble, the growth of prices becomes the focus of action. Noise or momentum traders become active, buying speculatively, often financing with Ponzi-like schemes. In a hyperinflation, inflation and its acceleration get written into the pricing equation. People stockpile and place advance orders and, on the supply side, mark up prices assuming rising factor costs.

The following is a logarithmic chart of inflation indexes in four South and Central American countries from 1970 to 1996, based on archives of the IMF World Economic Outlook.

The straight line indicates an exponential growth of prices of 20 percent per year, underlining the faster than exponential growth in the other curves.

[Chart: logarithmic plot of inflation indexes in four South and Central American countries, 1970 to 1996]

After an initial period, each of these hyperinflation curves exhibits similar mathematical properties. Mizuno et al fit negative double exponentials of the following form to the price data.

[Equation: negative double exponential fit]
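In generic form (my shorthand for this family of fits, not necessarily the authors’ exact parameterization), a double exponential growth law for the price level can be written as

$$p(t) \approx a\,\exp\!\left(b\,e^{\,c\,t}\right),$$

which rises faster than any single exponential as $t$ increases.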

Sornette, Takayasu, and Zhou argue that the double exponential is “nothing but a discrete-time approximation of a general power law growth endowed with a finite-time singularity at some critical time tc.”

This enables the authors to develop an analysis which not only fits the ramping prices in each country, but also predicts the end of the hyperinflation, with varying success.

The rationale for this is simply that unleashing inflationary expectations, beyond a certain point, follows a common mathematical theme, and ends at a predictable point.

It is true that simple transformations render these hyperinflation curves very similar, as shown in the following chart.

[Chart: scaled logs of cumulative price growth for Bolivia, Nicaragua, and Peru]

Here I scale the logs of the cumulative price growth for Bolivia, Nicaragua, and Peru, adjusting them to the same time period (22 years). Clearly, the hyperinflation becomes more predictable after several years, and the takeoff rate to collapse seems to be approximately the same.

The same type of simple transformations would appear to regularize the market bubbles in the Macrotrends chart, although I have not yet collected all the data.

In reading the literature on asset bubbles, there is a split between so-called rational bubbles, and asset bubbles triggered, in some measure, by “bounded rationality” or what economists are prone to call “irrationality.”

Examples of how “irrational” agents might proceed to fuel an asset bubble are given in a selective review of the asset bubble literature developed recently by Anna Scherbina from which I take several extracts below.

For example, there is “feedback trading” involving traders who react solely to past price movements (momentum traders?). Scherbina writes,

In response to positive news, an asset experiences a high initial return. This is noticed by a group of feedback traders who assume that the high return will continue and, therefore, buy the asset, pushing prices above fundamentals. The further price increase attracts additional feedback traders, who also buy the asset and push prices even higher, thereby attracting subsequent feedback traders, and so on. The price will keep rising as long as more capital is being invested. Once the rate of new capital inflow slows down, so does the rate of price growth; at this point, capital might start flowing out, causing the bubble to deflate.

Other mechanisms are biased self-attribution and the representativeness heuristic. In biased self-attribution,

..people [tend] to take into account signals that confirm their beliefs and dismiss as noise signals that contradict their beliefs… Investors form their initial beliefs by receiving a noisy private signal about the value of a security… for example, by researching the security. Subsequently, investors receive a noisy public signal… [which can be] assumed to be almost pure noise and therefore should be ignored. However, since investors suffer from biased self-attribution, they grow overconfident in their belief after the public signal confirms their private information and further revise their valuation in the direction of their private signal. When the public signal contradicts the investors’ private information, it is appropriately ignored and the price remains unchanged. Therefore, public signals, in expectation, lead to price movements in the same direction as the initial price response to the private signal. These subsequent price moves are not justified by fundamentals and represent a bubble. The bubble starts to deflate after the accumulated public signals force investors to eventually grow less confident in their private signal.

Scherbina describes the representativeness heuristic as follows.

 The fourth model combines two behavioral phenomena, the representativeness heuristic and the conservatism bias. Both phenomena were previously documented in psychology and represent deviations from optimal Bayesian information processing. The representativeness heuristic leads investors to put too much weight on attention-grabbing (“strong”) news, which causes overreaction. In contrast, conservatism bias captures investors’ tendency to be too slow to revise their models, such that they underweight relevant but non-attention-grabbing (routine) evidence, which causes underreaction… In this setting, a positive bubble will arise purely by chance, for example, if a series of unexpected good outcomes have occurred, causing investors to over-extrapolate from the past trend. Investors make a mistake by ignoring the low unconditional probability that any company can grow or shrink for long periods of time. The mispricing will persist until an accumulation of signals forces investors to switch from the trending to the mean-reverting model of earnings.

Interestingly, several of these “irrationalities” can generate negative, as well as positive, bubbles.

Finally, Scherbina makes an important admission, namely that

 The behavioral view of bubbles finds support in experimental studies. These studies set up artificial markets with finitely-lived assets and observe that price bubbles arise frequently. The presence of bubbles is often attributed to the lack of common knowledge of rationality among traders. Traders expect bubbles to arise because they believe that other traders may be irrational. Consequently, optimistic media stories and analyst reports may help create bubbles not because investors believe these views but because the optimistic stories may indicate the existence of other investors who do, destroying the common knowledge of rationality.

I dwell on these characterizations because I think it is important to put to rest the nonsensical “perfect information, perfect foresight, infinite time horizon discounting” models which litter this literature. Behavioral economics is a fresh breeze, for sure, in this context. And behavioral economics seems to me linked with the more muscular systems dynamics and complexity theory approaches to bubbles, epitomized by the work of Sornette and his coauthors.

Let me then leave you with the fundamental equation for the price dynamics of an asset bubble.

[Equation: asset bubble price dynamics, after Sornette]
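For readers without the image: in the Johansen-Ledoit-Sornette framework that Sornette’s bubble work builds on, the price follows a jump-diffusion along the lines of

$$\frac{dp}{p} = \mu(t)\,dt + \sigma(t)\,dW - \kappa\,dj,$$

where $dW$ is ordinary Brownian noise, $dj$ is a jump process that switches from 0 to 1 if the crash occurs, $\kappa$ is the proportional crash size, and a no-arbitrage condition ties the bubble drift $\mu(t)$ to the crash hazard rate $h(t)$ through $\mu(t) = \kappa\,h(t)$.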

Inflation/Deflation – 1

I initially envisaged a series of posts on forecasting inflation, treating inflation/deflation as phenomena more or less at arms length.

However, I began assembling posts on this topic while in the US, before traveling to Japan – where currently I am (now in Osaka).  In Japan, I’m pulled by the attractions of all the sights, the temples, the great food, and it’s cherry blossom season. Also, I just brought a tablet, leaving the laptop at home. I find that finishing a blog post nicely is hard with this Android device.

But now I have something to say – and I add some of the details of the initial post I planned on at the end of these comments.

Deflation in Japan

It’s cherry blossom season in Japan, and hotel reservations got scarce. So we had to book in clean, reasonably priced hotels that still had space – and it turns out these are sometimes on the margins of what you might call “the party district.”

I think watching the revels of young people here – uproarious and loud, all night and well into the early morning – provides a flesh-and-blood side to the general movement of prices – in this case the on-and-off deflation in Japan.

 

Check out Japan and the Exhaustion of Consumerism.

So even though I am on the 8th floor of this really nice business hotel in Osaka, the “party district” is full of young Japanese men and women (what I would call at my advanced tenure “youth”). Anyone who thinks the Japanese are reserved and repressed should put their ear to my window high above the street, listen to the revels – the howls and screams, often in unison by groups – as people get more and more booze under their belts through the evening and then through until morning. Go down at 6am and you will find couples and groups staggering around, quite drunk or loud, unruly groups of guys…..

The Original Material

As with gold and interest rates, there are several steps in coming to grips with inflation forecasts, not the least of which is the immediate prospect for price change in various global regions and inflation forecast performance over various short and longer term forecast horizons.

Generally, inflation forecasts are being adjusted downward for 2014 and 2015. The Survey of Professional Forecasters First Quarter 2014 Survey, for example, projects

…current-quarter headline CPI inflation to average 1.7 percent, lower than the last survey’s estimate of 1.8 percent. The forecasters predict current-quarter headline PCE inflation of 1.3 percent, lower than the prediction of 1.8 percent from the survey of three months ago.

The forecasters also see lower headline and core measures of CPI and PCE inflation during the next two years. Measured on a fourth-quarter over fourth-quarter basis, headline CPI inflation is expected to average 1.8 percent in 2014, down from 2.0 percent in the last survey, and 2.0 percent in 2015, down 0.2 percentage point from the previous estimate. Forecasters expect fourth-quarter over fourth-quarter headline PCE inflation to average 1.6 percent in 2014, down from 1.9 percent in the last survey, and 1.8 percent in 2015, down 0.1 percentage point from the previous estimate.

Over the next 10 years, 2014 to 2023, the forecasters expect headline CPI inflation to average 2.3 percent at an annual rate. The corresponding estimate for 10-year annual-average PCE inflation is 2.0 percent.

Lower and lower rates of inflation merge into deflation, of course, which is currently considered to be a risk for the eurozone. CNN Money reports that, in March, the

..annual rate of inflation fell to 0.5%, down from 0.7% in February and weaker than most economists were expecting. Inflation is now at its lowest level since November 2009.

Actual deflation seems to be the reality in Spain, Portugal and Greece. Thus, the Wall Street Journal reports that

Spain’s preliminary estimate for March said the European Union-harmonized consumer-price index slipped 0.2% compared with the same month last year, down from February’s increase of 0.1%.

The data makes Spain the latest euro-zone country to slip into deflation. Prices have been dropping in Greece since early last year, and Portugal last month recorded its first year-over-year decline since 2009.

Japan is special among advanced industrial economies in having inflation which dipped into deflation several times for relatively extended periods, dating back to the 1980’s.

Japan is especially relevant, of course, because the Bank of Japan (BOJ) was the first to experiment with the zero-bound on short term interest rates and its aggressive bond-buying program – both features found in the US and European central bank environments today.

What is the Forecasting Problem?

Forecasting inflation rates has been one of the signature efforts of many macroeconomic forecasting organizations and agencies.

As a result, there is considerable research on the stochastic nature of inflation/deflation time series and the relative performance of univariate (autoregressive) versus multivariate forecasting models. It’s interesting to look over this literature with an eye to evaluating, in specific contexts, whether declining inflation rates can mean price change will dip below zero in key economic regions.

[I will add the finishing touches to this post, when I can. But meanwhile, this reflects my current thinking. Soon – hyperinflation]

Links – end of March

US Economy and Social Issues

Reasons for Declining Labor Force Participation

[Chart: labor force participation]

Vital Signs: Still No Momentum in Business Spending

[Chart: business investment]

Urban Institute Study – How big is the underground sex economy in eight cities – employs an advanced statistical design. It’s sort of a model study, really.

Americans Can’t Retire When Bill Gross Sees Repression

Feeble returns on the safest investments such as bank deposits and fixed-income securities represent a “financial repression” transferring money from savers to borrowers, says Bill Gross, manager of the world’s biggest bond fund.

Robert Reich – The New Billionaire Political Bosses

American democracy used to depend on political parties that more or less represented most of us. Political scientists of the 1950s and 1960s marveled at American “pluralism,” by which they meant the capacities of parties and other membership groups to reflect the preferences of the vast majority of citizens.

Then around a quarter century ago, as income and wealth began concentrating at the top, the Republican and Democratic Parties started to morph into mechanisms for extracting money, mostly from wealthy people.

Finally, after the Supreme Court’s “Citizens United” decision in 2010, billionaires began creating their own political mechanisms, separate from the political parties. They started providing big money directly to political candidates of their choice, and creating their own media campaigns to sway public opinion toward their own views.

Global Economy

Top global risks you can’t ignore – good, short read

How Can Africa’s Water and Sanitation Shortfall be Solved? – interesting comments by experts on the scene, including –

Most African water utilities began experiencing a nose-dive in the late 1970s under World Bank and IMF policies. Many countries were suffering from serious trade deficits which had enormous implications for their budgets, incomes, and their abilities to honour loan obligations to, among others, bilateral and multilateral partners. These difficulties for African countries coincided around that period, with a major shift in global economic thought; a shift from heterodox economic thinking which favoured state intervention in critical sectors of the economy, to neoliberal economic thought which is more hostile to state intervention and prefers the deregulation of markets and their unfettered operation. This thought became dominant in the IMF and World Bank and influenced structural adjustment austerity packages that the two institutions prescribed to the struggling African economies at the time. This point is fundamental and cannot be divorced from any comprehensive analysis of the access deficit in African countries.

The austerity measures enforced by the Bank and IMF ensured a drastic reduction of state funding to the utilities, resulting in deterioration of facilities, poor conditions for staff and a mass exodus of expert staff. In the face of the resulting difficulties, the Bank and IMF held out only one option for the governments; the option of full cost recovery and of privatisation. This sealed the expectations of any funding for the sector as the private sector found the water sector highly risky to invest in. Following the common interventions set out by the World Bank, the countries achieved mostly poor results.

Contrary to much mainstream discourse, neither privatisation nor commercialisation constitute an adequate or sustainable way of managing urban water utilities to ensure access to people in Africa given the extreme poverty that confronts a significant portion of the population. The solution lies in a progressive tax-supported water delivery system that ensures access for all, supported by a management structure and a balanced set of incentives that ensure performance.

Analytics

Machine Learning in 7 Pictures

Basic machine learning concepts of Bias vs Variance Tradeoff, Avoiding overfitting, Bayesian inference and Occam razor, Feature combination, Non-linear basis functions, and more – explained via pictures

The Universe

Great picture of the planet Mercury https://twitter.com/Iearnsomething/status/448165339290173440/photo/1


Interest Rates – Forecasting and Hedging

A lot relating to forecasting interest rates is encoded in the original graph I put up several posts ago of two major interest rate series – the federal funds rate and the prime rate.

[Chart: the federal funds rate and the prime rate, FRED]

This chart illustrates key features of interest rate series and signals several important questions. Thus, there is a relationship between a very short term rate and a longer term interest rate – a sort of two-point yield curve. Almost always, the federal funds rate is below the prime rate. If for short periods this is not the case, it indicates a radical inversion of the typical slope of the yield curve.

Credit spreads are not illustrated in this figure, but have been shown to be significant in forecasting key macroeconomic variables.

The shape of the yield curve itself can be brought into play in forecasting future rates, as can typical spreads between interest rates.

But the bottom line is that interest rates cannot be forecast with much accuracy beyond about a two quarter forecast horizon.

There is quite a bit of research showing this to be true, including –

Professional Forecasts of Interest Rates and Exchange Rates: Evidence from the Wall Street Journal’s Panel of Economists

We use individual economists’ 6-month-ahead forecasts of interest rates and exchange rates from the Wall Street Journal’s survey to test for forecast unbiasedness, accuracy, and heterogeneity. We find that a majority of economists produced unbiased forecasts but that none predicted directions of changes more accurately than chance. We find that the forecast accuracy of most of the economists is statistically indistinguishable from that of the random walk model when forecasting the Treasury bill rate but that the forecast accuracy is significantly worse for many of the forecasters for predictions of the Treasury bond rate and the exchange rate. Regressions involving deviations in economists’ forecasts from forecast averages produced evidence of systematic heterogeneity across economists, including evidence that independent economists make more radical forecasts

Then, there is research from the London School of Economics – Interest Rate Forecasts: A Pathology

In this paper we have demonstrated that, in the two countries and short data periods studied, the forecasts of interest rates had little or no informational value when the horizon exceeded two quarters (six months), though they were good in the next quarter and reasonable in the second quarter out. Moreover, all the forecasts were ex post and, systematically, inefficient, underestimating (overestimating) future outturns during up (down) cycle phases. The main reason for this is that forecasters cannot predict the timing of cyclical turning points, and hence predict future developments as a convex combination of autoregressive momentum and a reversion to equilibrium

Also, the chapter Forecasting interest rates in the Handbook of Forecasting is relevant, although highly theoretical.
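One quick way to check this two-quarter horizon claim on your own data is rolling-origin cross-validation. Here is a sketch using tsCV() from recent versions of the forecast package; tbill is a placeholder for a quarterly interest rate series.

library(forecast)

h <- 6   # forecast horizons of 1 through 6 quarters ahead

# Rolling-origin forecast errors for a random walk (naive) benchmark
# and for an automatically selected ARIMA model (this can be slow)
e_rw    <- tsCV(tbill, naive, h = h)
e_arima <- tsCV(tbill, function(y, h) forecast(auto.arima(y), h = h), h = h)

# RMSE by horizon: if the ARIMA row stops beating the random walk row
# after the first couple of quarters, that matches the findings above
rmse <- function(e) sqrt(colMeans(e^2, na.rm = TRUE))
rbind(random_walk = rmse(e_rw), arima = rmse(e_arima))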

Hedging Interest Rate Risk

As if in validation of this basic finding – beyond about two quarters, interest rate forecasts generally do not beat a random walk forecast – interest rate swaps are the largest category of interest rate derivatives, according to the Bank for International Settlements (BIS).

[Table: outstanding OTC derivatives by category, BIS]

Not only that, but interest rate contracts generally are, by an order of magnitude, the largest category of OTC derivatives – totaling more than half a quadrillion dollars as of the BIS survey in July 2013.

The gross value of these contracts was only somewhat less than the Gross Domestic Product (GDP) of the US.

A Bank for International Settlements background document defines “gross market values” as follows:

Gross positive and negative market values: Gross market values are defined as the sums of the absolute values of all open contracts with either positive or negative replacement values evaluated at market prices prevailing on the reporting date. Thus, the gross positive market value of a dealer’s outstanding contracts is the sum of the replacement values of all contracts that are in a current gain position to the reporter at current market prices (and therefore, if they were settled immediately, would represent claims on counterparties). The gross negative market value is the sum of the values of all contracts that have a negative value on the reporting date (ie those that are in a current loss position and therefore, if they were settled immediately, would represent liabilities of the dealer to its counterparties).  The term “gross” indicates that contracts with positive and negative replacement values with the same counterparty are not netted. Nor are the sums of positive and negative contract values within a market risk category such as foreign exchange contracts, interest rate contracts, equities and commodities set off against one another. As stated above, gross market values supply information about the potential scale of market risk in derivatives transactions. Furthermore, gross market value at current market prices provides a measure of economic significance that is readily comparable across markets and products.

Clearly, by any account, large sums of money and considerable exposure are tied up in interest rate contracts in the over the counter (OTC) market.

A Final Thought

This link between forecastability and financial derivatives is interesting. There is no question but that, in practical terms, business is putting eggs in the basket of managing interest rate risk, as opposed to refining forecasts – which may not be possible beyond a certain point, in any case.

What is going to happen when the quantitative easing maneuvers of central banks around the world cease, as they must, and long term interest rates rise in a consistent fashion? That’s probably where to put the forecasting money.

Credit Spreads As Predictors of Real-Time Economic Activity

Several distinguished macroeconomic researchers, including Ben Bernanke, highlight the predictive power of the “paper-bill” spread.

The following graphs, from a 1993 article by Benjamin M. Friedman and Kenneth N. Kuttner, show the promise of credit spreads in forecasting recessions – indicated by the shaded blocks in the charts.

[Charts: commercial paper minus Treasury bill spread and recessions, from Friedman and Kuttner (1993)]

Credit spreads, of course, are the differences in yields between various corporate debt instruments and government securities of comparable maturity.

The classic credit spread illustrated above is the difference between six-month commercial paper rates and six-month Treasury bill rates.
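As a small illustration, a spread along these lines can be assembled from FRED with the quantmod package. I use three-month series here as stand-ins for the six-month versions, and the FRED mnemonics below (DCPN3M for AA nonfinancial commercial paper, DTB3 for the Treasury bill) are my assumptions – check them against the current FRED catalog.

library(quantmod)

# Pull commercial paper and Treasury bill rates from FRED (daily, percent per annum)
getSymbols(c("DCPN3M", "DTB3"), src = "FRED")

# Paper-bill spread: commercial paper rate minus Treasury bill rate, in percentage points
rates  <- na.omit(merge(DCPN3M, DTB3))
spread <- rates$DCPN3M - rates$DTB3
colnames(spread) <- "paper_bill_spread"

plot(spread, main = "Paper-bill spread (3-month)")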

Recent Research

More recent research underlines the importance of building up credit spreads from metrics relating to individual corporate bonds, rather than a mishmash of bonds with different durations, credit risk, and other characteristics.

Credit Spreads as Predictors of Real-Time Economic Activity: A Bayesian Model-Averaging Approach is key research in this regard.

The authors first note that,

the “paper-bill” spread—the difference between yields on nonfinancial commercial paper and comparable-maturity Treasury bills—had substantial forecasting power for economic activity during the 1970s and the 1980s, but its predictive ability vanished in the subsequent decade

They then acknowledge that credit spreads based on indexes of speculative-grade or “junk” corporate bonds work fairly well for the 1990s, but their performance is uneven.

Accordingly, Faust, Gilchrist, Wright, and Zakrajsek write (the GYZ in the quote below refers to the earlier credit spread indexes of Gilchrist, Yankov, and Zakrajsek) that

In part to address these problems, GYZ constructed 20 monthly credit spread indexes for different maturity and credit risk categories using secondary market prices of individual senior unsecured corporate bonds.. [measuring] the underlying credit risk by the issuer’s expected default frequency (EDF™), a market-based default-risk indicator calculated by Moody’s/KMV that is more timely than the issuer’s credit rating

Their findings indicate that these credit spread indexes have substantial predictive power, at both short- and longer-term horizons, for the growth of payroll employment and industrial production. Moreover, they significantly outperform the predictive ability of the standard default-risk indicators, a result that suggests that using “cleaner” measures of credit spreads may, indeed, lead to more accurate forecasts of economic activity.

Their research applies credit spreads constructed from the ground up, as it were, to out-of-sample forecasts of

…real economic activity, as measured by real GDP, real personal consumption expenditures (PCE), real business fixed investment, industrial production, private payroll employment, the civilian unemployment rate, real exports, and real imports over the period from 1986:Q1 to 2011:Q3. All of these series are in quarter-over-quarter growth rates (actually 400 times log first differences), except for the unemployment rate, which is simply in first differences

The results are forecasts which significantly beat univariate (autoregressive) model forecasts, as shown in the following table.

[Table: out-of-sample forecast comparisons using the credit spread indexes and Bayesian model averaging]

Here BMA is an abbreviation for Bayesian Model Averaging, the authors’ method of incorporating these calculated credit spreads in predictive relationships.

Additional research validates the usefulness of credit spreads so constructed for predicting macroeconomic dynamics in several European economies –

We find that credit spreads and excess bond premiums, when used alongside monetary policy tightness indicators and leading indicators of economic performance, are highly significant for predicting the growth in the index of industrial production, employment growth, the unemployment rate and real GDP growth at horizons ranging from one quarter to two years ahead. These results are confirmed for individual countries in the euroarea and for the United Kingdom, and are robust to different measures of the credit spread. It is the unpredictable part associated with the excess bond premium that has greater influence on real activity compared to the predictable part of the credit spread. The implications of our results are that careful selection of the bonds used to construct the credit spreads, excluding those with embedded options and or illiquid secondary markets, delivers a robust indicator of financial market tightness that is distinct from tightness due to monetary policy measures or leading indicators of economic activity.

The Situation Today

A Morgan Stanley Credit Report for fixed income, released March 21, 2014, notes that

Spreads in both IG and HY are at the lowest levels we have seen since 2007, roughly 110bp for IG and 415bp for HY. A question we are commonly asked is how much tighter can spreads go in this cycle

So this is definitely something to watch. 

Sales and new product forecasting in data-limited (real world) contexts