Periodically and especially this time of year, it’s a good idea to scan the forecast/prediction space.
This week I want to highlight tech predictions, beginning with The Futurist magazine’s interesting piece, The Future Is Not a Destination.
I got a kick out of the Futurist top forecasts for 2014 for a couple of reasons. One is the evidence of “out-of-the-box” thinking, such as the prediction (probably correct) that some countries will see depopulation by 2020 and the prediction (probably wrong) that buying and owning things will go out of style. Also, the forecasts come with commentary about why they may or may not happen.
Futurist top 10 forecasts for 2014 and beyond
1. Thanks to big data, the environment around you will anticipate your every move.
“Computerized sensing and broadcasting abilities are being incorporated into our physical environment, creating what is sometimes called an ‘Internet of things.’ Data flowing from sensor networks, RFID tags, surveillance cameras, unmanned aerial vehicles, and geo-tagged social-media posts will telegraph where we’ve been and where we are going. In the future, these data streams will be integrated into services, platforms, and programs that will provide a window into the lives, and futures, of billions of people.”
2. We will revive recently extinct species…
3. By 2020 populations will shrink, and wealth will shrink with them.
The forecast: “By 2020, half of the human race will live in countries where the birthrates have fallen below the death rates, and consequently, populations are shrinking. The cause is the combination of older adults living longer and fewer children being born. The countries will grapple with shrinking tax bases and workforces despite widening pools of retirees demanding social-security and health-care payouts. Society will survive, but GDPs will fall markedly throughout the world and probably never fully rise back up…”
“[T]he more technologically developed a society becomes, the fewer offspring couples will have, the easier it is for them to raise their living standards, the more that progress lowers their desire for large families. The result is the spiral of modern technological population decline—a new but now universal pattern.”
4. Doctors will see brain diseases many years before they arise.
The forecast: “Brain scans can warn doctors if a patient will suffer Alzheimer’s, dementia, Lou Gehrig’s, or a number of other brain disorders as many as 10–15 years ahead of physical symptoms. Researchers at the Washington University School of Medicine in St. Louis are learning to identify distinct chemical biomarkers within patients’ body and brain functions. Doctors could then slow the progression of the diseases if they start administering treatments years earlier.”
5. Buying and owning things will go out of style.
The forecast: “The markets for housing, automobiles, music, books, and many other products show a common trend: Younger consumers opting to rent or subscribe to pay-per-use arrangements instead of buying and owning the physical products. Shared facilities will overtake established offices, renting units will become more common than owning a home, and sales of books and music might never become popular again.” From “Consumption 2.0,” by Hugo Garcia, January–February, 2013.
One of the key consequences of the 2008 recession is that nearly half of recent college graduates in the United States are unemployed or underemployed. Those who went to college have better prospects than their peers but are carrying around record amounts of debt—an average of $27,000 per grad. Variations of this story are playing out in Europe, where student debt is lower but unemployment among young people is higher.…
This high unemployment is being met by a second trend: social startups that make sharing a lot easier. Consider Getaround, an app-based car share service that allows anyone to rent out her car. You select the renter and send your customer a code to unlock your car and turn on the engine. When the contract expires, the code no longer works.
6. Quantum computing could lead the way to true artificial intelligence…
7. Phytoplankton death will further disrupt aquatic ecosystems…
8. The future of science is in the hands of crowdsourcing amateurs…
The forecast: “So-called ‘citizen science,’ which uses networks of volunteers in scientific research, is on its way to becoming the favored twenty-first-century model for conducting large-scale scientific research.”
9. Fusion-fueled rockets could significantly reduce the potential time and cost of sending humans to Mars…
10. Atomically precise manufacturing will make machinery, infrastructure, and other systems more productive and less expensive.
The forecast: Atom-by-atom production of everything from solar panels to computers will allow for extraordinary improvements in manufacturing all things.
Daily Beast Top 10 Predictions for Technology in 2014
Also along the lines of more imaginative, discursive forecasts are the top 10 Predictions for Technology in 2014 from Mark Anderson featured in the Daily Beast. In a nutshell, Microsoft may get its mojo back, smartphones will get cheap, and we’re about to enter the Year of Encryption.
- 1. Siris Move Into Silos.
- 2. Visualization Goes Mainstream.
- 3. The Cheaper Factor. Low price becomes a critical driver in global consumer-electronics product creation, as emerging economies absorb a dramatically larger fraction of all devices sold. The result of bringing hundreds of millions out of poverty is a shift in design motivation from the radically innovative, to incremental change at low cost, driven by the creation of a new purchaser segment in consumer electronics.
- 4. Sub-$100 Smartphones dominate the phone category.
- 5. Sub-$250 pads dominate the pad/CarryAlong category.
- 6. Software Plays on a Flat Hardware Field, as We Build Out the Global Computer. Even as hardware continues to advance, software is where most of the energy, innovation, and action occur.
- 7. The New Microsoft That No One Expected. Microsoft gets a new CEO, with a new power structure that encourages cooperation instead of warring factions, and which leads to improved success in consumer markets.
- 8. Micromapping arrives.
- 9. The Quantified Self Goes Mainstream. The idea of knowing more and more detail about your personal health and characteristics goes from being a science story to a jogger’s delight to a mainstream market.
- 10. Encryption Everywhere. The direct commercial result of Edward Snowden’s leaks will be a massive move by large technology companies, both in enterprise and consumer markets, to evolve new encryption technologies and products that use them. While NSA-proofing will be the motivator, the real benefit may be improved protection of commercial IP from theft by China and other nations.
Tech Market Research Vendor Forecasts
A handful of market research vendors dominate the IT space, including IDC, Gartner, and Forrester Research.
Forbes magazine has been keeping current with the end-of-year pronouncements from these companies, in articles such as Gil Press’ recent IDC: Top 10 Technology Predictions For 2014 or Peter High’s reporting on Forrester.
One take-away is that competition is intensifying, as some companies scramble to develop smartphones, tablets, and offerings for the cloud. The PC is being left in the dust.
Worldwide sales of smartphones (12% growth) and tablets (18%) will continue at a “torrid pace,” accounting for over 60% of total IT market growth, at the expense of PC sales, which will continue to decline.
There are contests between Android and Apple, and expectations Amazon, as well as Google, may start taking on traditional IT suppliers.
Amazon Web Services’ “avalanche of platform-as-a-service offerings for developers and higher value services for businesses” will force traditional IT suppliers to “urgently reconfigure themselves.”
In 2014, the number of smart connected devices shipped in emerging markets will be almost double that shipped in developed markets, and emerging markets will be a hotbed of Internet of Things market development.
Spending on cloud services and the technology to enable these services “will surge by 25% in 2014, reaching over $100 billion.” IDC predicts “a dramatic increase in the number of datacenters as cloud players race to achieve global scale.”
Cloud service providers will increasingly drive the IT market.
IDC predicts spending of more than $14 billion on big data technologies and services, or 30% growth year-over-year, “as demand for big data analytics skills continues to outstrip supply.” The cloud will play a bigger role, with IDC predicting a race to develop cloud-based platforms capable of streaming data in real time. There will be increased use by enterprises of externally sourced data and applications, and “data brokers will proliferate.” IDC predicts explosive growth in big data analytics services, with the number of providers tripling in three years. 2014 spending on these services will exceed $4.5 billion, growing by 21%.
Finally, we have the Internet of Things and the digitization of all industries.
IDC is currently running a series of webinars showcasing its 2014 tech predictions. See http://www.idc.com/research/Predictions14/index.jsp
I’m maintaining that a simple ordinary least squares (OLS) regression model presented in these posts is solid empirical proof that rational expectations and the efficient market hypothesis cannot be correct, at least in the period since 2008. Of course, the more interesting point may be that this Trading Model can achieve solid gains and probably can be optimized or improved.
But to pin down the significance of the more theoretical point, let me quote Paul Volcker, the former Chairman of the Federal Reserve under Presidents Carter and Reagan, who recently wrote that, “It should be clear that among the causes of the recent financial crisis was an unjustified faith in rational expectations, market efficiencies, and the techniques of modern finance.”
Wikipedia has this to say about the weak form of the efficient market hypothesis -
In weak-form efficiency, future prices cannot be predicted by analyzing prices from the past. Excess returns cannot be earned in the long run by using investment strategies based on historical share prices or other historical data. Technical analysis techniques will not be able to consistently produce excess returns, though some forms of fundamental analysis may still provide excess returns. Share prices exhibit no serial dependencies, meaning that there are no “patterns” to asset prices. This implies that future price movements are determined entirely by information not contained in the price series. Hence, prices must follow a random walk. This ‘soft’ EMH does not require that prices remain at or near equilibrium, but only that market participants not be able to systematically profit from market ‘inefficiencies’. However, while EMH predicts that all price movement (in the absence of change in fundamental information) is random (i.e., non-trending), many studies have shown a marked tendency for the stock markets to trend over time periods of weeks or longer… and that, moreover, there is a positive correlation between degree of trending and length of time period studied (but note that over long time periods, the trending is sinusoidal in appearance)… Various explanations for such large and apparently non-random price movements have been promulgated.
Returns From the Trading Model and Random Trading Strategies
Here’s the deal. I have collected 6007 daily closing values (and other relevant data) for the S&P 500 stock index, along with similar data for the VIX volatility index. Note that the early values of the VIX are derived retrospectively by its formulator, since the actual index was not introduced until 1993.
In any case, the Trading Model is an autoregressive ordinary least squares model with 60 explanatory variables – 30 lagged values of the S&P 500 daily returns, and 30 lagged values of the VIX volatility index. This model predicts daily trading returns on a one-day-ahead basis. The idea is that at the close of a trading day, I (a) determine the rate of return for that trading day against the previous day’s closing value, and (b) buy stock tracking the S&P 500 index, or hold those assets, if the predicted daily return for the next day is positive. Otherwise, I sell stock or hold cash.
Currently, I create this Trading Model with a regression with data for the S&P 500 from 1/4/90 to 2/15/2008 with appropriate lagged values (which means I throw out the first 30 observations).
This regression gives me the following cumulative returns for the subsequent period through October 2013.
In other words, the Trading Program, if followed on a daily basis over this recent period, would build a $1000 investment into a $2500 cumulative gain. This compared with a much lower cumulative gain from a Buy & Hold Strategy.
But then, could this cumulative gain of $2,500 be simply a fluke, reproducible by a random pattern of trading?
To test this, I develop a Monte Carlo simulation. I randomize trading for each trading day in the period to early 2008, by buying stock if a random number exceeds 0.5, and selling stock or holding cash otherwise.
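This Monte Carlo setup can be sketched in a few lines of Python (NumPy); the returns series below is simulated, standing in for the actual S&P 500 daily returns, and the run count is reduced for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for the actual S&P 500 daily returns over the test period
daily_returns = rng.normal(0.0003, 0.012, size=1400)

def random_trading_gain(returns, rng):
    """Cumulative value of $1000 under coin-flip trading: hold stock on a
    'heads' day and earn that day's return, otherwise hold cash."""
    gain = 1000.0
    for r in returns:
        if rng.random() > 0.5:
            gain *= (1.0 + r)
    return gain

# Repeat the random strategy many times and examine the distribution of outcomes
final_gains = np.array([random_trading_gain(daily_returns, rng) for _ in range(1000)])
frac_beating = (final_gains >= 2500).mean()  # share of random runs reaching $2500
```

The empirical right-tail frequency `frac_beating` plays the role of the odds quoted below.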
Here is the distribution of final cumulative gains for this setup run 10,000 times.
So cumulative gains of $2500 fall in the far right tail of this distribution. The odds that random trading could generate returns of this amount or greater are fewer than 35 in 10,000 – very unlikely.
While we are discussing distributions, here is the distribution of S&P daily returns, compared with a normal distribution.
We are talking significant excess kurtosis here – a leptokurtic distribution.
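Excess kurtosis is easy to compute directly; here is a small sketch, with a simulated fat-tailed sample standing in for the actual S&P returns:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: fourth standardized moment minus 3 (normal = 0)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

rng = np.random.default_rng(1)
normal_sample = rng.normal(size=100_000)
fat_tailed = rng.standard_t(df=3, size=100_000)  # Student's t(3) has heavy tails

k_normal = excess_kurtosis(normal_sample)  # close to zero
k_fat = excess_kurtosis(fat_tailed)        # markedly positive: leptokurtic
```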
I am trying to figure out whether this violation of the classic linear regression (CLR) model accounts for the fact that the autocorrelations appear statistically weak, even though they operate in a predictive model to beat a random trading setup. On a first pass through the relevant literature, I think this type of violation of normality does not much affect the estimates of statistical significance of the coefficients, although I reserve the right to change that opinion.
A Voice in the Wilderness?
There is other research on the predictability of stock returns, for example research from 2008 which focuses on out-of-sample performance. Then there is the whole extensive discussion of technical indicators and whether they work.
Here is a skeptical YouTube video on technical indicators.
So anyway, very entertaining and quite possibly correct in its own frame of reference.
The fact remains, however, that since 2008, it has been possible to predict the direction of change of daily S&P returns enough to generate cumulative returns which are almost surely greater than those which would have been produced by random trading.
Merry Christmas. Here is a trading setup which returns about 1% per month, based on a trading model governing hypothetical trades of the Standard & Poor 500 Index at the close of each trading day. This performance is “out-of-sample,” which means the data used to estimate the model differ from the data used to test its performance.
The trading model is based on data prior to 2008, and has fixed coefficients for all subsequent trading days, achieving a little better than 51% accuracy overall in predicting the direction or sign of S&P daily returns.
Thus, this is really the simplest of Trading Programs, although it is based on 60 explanatory variables. It is nothing more than a standard ordinary least squares (OLS) regression, developed with S&P 500 and VIX volatility index data over the period beginning 2/15/1990 and extending through 2/14/2008.
If you invested $1000 at the beginning of the week 2/19/2008, your cumulative earnings at the end of October 2013 would be about $2500. On the other hand, if you had adopted a Buy & Hold trading strategy since 2/19/2008, your cumulative gains would have totaled only $1,305.00.
Rules for the Trading Program
The rules for my Trading Program are simple. I develop an OLS regression on S&P 500 daily returns dating back to 1990, ending my estimation sample in early 2008. Then, I use the regression coefficients to predict daily S&P 500 returns out-of-sample from early 2008 to the end of October 2013.
I look at the series I have generated, and, at the close of each trading day, if the prediction for the next day is for a positive return, I invest; otherwise, I sell or, if I am presently not holding stock, stand pat with cash.
The returns from a Buy & Hold strategy are easy to compute. Simply multiply the daily return for a day by the cumulative gain for the previous day.
The returns from the Trading Program are calculated by multiplying the previous day cumulative gain by today’s daily return, if the prediction as of the close yesterday was for a positive daily return. Otherwise, the cumulative gain does not change today.
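These two accumulation rules can be sketched as follows, with placeholder arrays standing in for the actual returns and predictions:

```python
import numpy as np

def cumulative_gains(returns, predicted, start=1000.0):
    """Buy & Hold compounds every day's return; the Trading Program compounds
    a day's return only if the prior-close prediction for that day was positive,
    otherwise the cumulative gain is unchanged."""
    buy_hold = [start]
    trading = [start]
    for r, pred in zip(returns, predicted):
        buy_hold.append(buy_hold[-1] * (1.0 + r))
        trading.append(trading[-1] * (1.0 + r) if pred > 0 else trading[-1])
    return np.array(buy_hold), np.array(trading)

# Toy example: two up days and one down day; the prediction sits out day two
bh, tr = cumulative_gains([0.01, 0.02, -0.01], [1, -1, 1])
```

Here the Trading Program misses day two's gain but would have sidestepped any loss on a day with a negative prediction.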
Predictive Accuracy of the Trading Model – Direction of Change of Daily Returns
This chart compares the 30-day moving average of correctly predicted signs of the daily returns with the cumulative gain.
The performance of the trading model deteriorates somewhat over the forecast horizon, as one would expect with an OLS regression built with data stopping in early 2008. Nevertheless, the trading model generally achieves a little better than 50 percent accuracy (51.9%) in predicting the direction of change of daily S&P returns on a one-day-ahead basis.
Structure of the Trading Model
This trading model is within a family of models I have blogged about in recent weeks. It basically is an autoregressive model against lagged values of the S&P 500 daily returns and the VIX volatility index daily returns. I utilize lags of 30 trading days for both the S&P 500 and the VIX volatility index, meaning I have 60 explanatory variables. I do not bother with tests of statistical significance for the lagged variables, nor do I concern myself, in this model, with testing the residuals to see whether they are white noise. This is really a canned model, and it works remarkably well.
Anyone can validate this model. Simply download the daily closing numbers for the S&P 500 and VIX for this period, calculate daily returns based on these closing values, and set up the data to include 61 columns. The first column has the values of the S&P 500 daily returns. The second column has these values at a lag of 1, and so forth for 30 columns. Then, put in the values of the VIX daily returns at a lag of 1, then 2, and so forth for 30 lags. That’s the data.
You can’t really estimate this with an Excel spreadsheet without huge memory, but Matlab’s econometrics or statistics package does the job nicely.
Here is a Matlab program to do the job, where s&p500vix is an Excel spreadsheet with the data, as described above.
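In rough outline, the same construction can be sketched in Python (this is a stand-in for the Matlab original; the simulated series below replace the s&p500vix data, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
sp_ret = rng.normal(0.0003, 0.012, size=n)  # stand-in for S&P 500 daily returns
vix_ret = rng.normal(0.0, 0.05, size=n)     # stand-in for VIX daily returns
NLAGS = 30

# Build the 61-column layout: dependent variable, then 30 S&P lags, then 30 VIX lags
rows = n - NLAGS                             # first 30 observations are thrown out
y = sp_ret[NLAGS:]
X = np.column_stack(
    [sp_ret[NLAGS - k : n - k] for k in range(1, NLAGS + 1)]
    + [vix_ret[NLAGS - k : n - k] for k in range(1, NLAGS + 1)]
)
X = np.column_stack([np.ones(rows), X])      # add an intercept column

# OLS via least squares; a one-day-ahead prediction uses the latest 60 lagged values
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
latest = np.concatenate(
    [[1.0], sp_ret[-1 : -NLAGS - 1 : -1], vix_ret[-1 : -NLAGS - 1 : -1]]
)
next_day_pred = latest @ beta                # sign drives the buy/sell decision
```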
Optimizing Trading Models
Further optimization of this model is straightforward. First, instead of using just the initial segment of the dataset, ending in early 2008, to estimate a single regression applied to all the trading days since then, you can develop an adaptive regression. This allows for gradual shifts in the regression coefficients over time, improving accuracy and boosting the cumulative gains from the trading program.
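One simple adaptive variant refits the regression on an expanding window after each trading day; a sketch, with toy data standing in for the actual S&P/VIX lag matrix:

```python
import numpy as np

def expanding_window_predictions(X, y, start):
    """Refit OLS on all data observed so far after each trading day,
    then predict one step ahead with the refreshed coefficients."""
    preds = []
    for t in range(start, len(y)):
        beta, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
        preds.append(X[t] @ beta)  # one-day-ahead prediction at time t
    return np.array(preds)

# Toy data: intercept plus five regressors, 300 'trading days'
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(300), rng.normal(size=(300, 5))])
y = X @ np.array([0.0, 0.5, -0.3, 0.2, 0.1, -0.4]) + rng.normal(0, 0.1, 300)
preds = expanding_window_predictions(X, y, start=100)
```

A rolling window (dropping the oldest observations) is the obvious alternative when the coefficients are suspected of drifting.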
The application of neural networks is another tactic to boost performance.
Recently, I traded some emails with Didier Sornette of the Swiss Federal Technical Institute about a neural network trading program. His comment was interesting. Professor Sornette asked whether I was sure the neural network program was somehow not grabbing future information, in developing its parameters – whether it truly generated out-of-sample estimates. This question puts me in a software program audit mode from which I have not yet emerged.
However, the rationale for using neural nets in this context is straightforward. There are clear indications of nonlinearities in these S&P 500 daily returns.
What This Means
Of course, one thing this means is that I am writing these posts from an exclusive tropical watering hole this Holiday Season – missing the huge cold snap in the US. Just kidding. Did not cash in yet, but plan to — ho, ho.
This does mean in my opinion that the Federal Reserve quantitative easing policies have created a situation in the US stock market more or less equivalent to shooting fish in a barrel. Nice for people who have funds to invest. When the party ends, some will lose big-time.
And yes, I also think the existence of this idiot-proof trading model shows that “rational expectations” is nonsense, at least when the central bank is making sustained and major intervention.
POSTSCRIPT: The Difference Between In-Sample and Out-of-Sample Predictive Results
Ok, so a friend asked me the other day, why I put this information out on the Web in a blog. Why not just keep it to yourself and use it? Part of the answer is that blogging keeps me honest and focused.
For example, after posting yesterday, I looked carefully at the Matlab program I copied into the post, discovering that, really, I used the matrices x and y, rather than the matrices xtrain and ytrain to estimate my OLS regression. That means my metrics of performance related to in-sample, rather than out-of-sample, cumulative gains.
So, just based on this training sample ending in early 2008, the out-of-sample cumulative gains to the end of October 2013 are approximately $2,500, rather than the more than $4,000 suggested by the graph initially shown (see the Wayback Machine, I guess).
So, I am actually pleased to highlight differences between in-sample and out-of-sample performance of an OLS regression predictive model.
And I am relieved that my original conclusions still hold.
If a simple OLS regression can generate parameters which work to forecast the direction of change of the S&P daily returns for more than four years, I think we can safely conclude that there is something seriously amiss with the “rational expectations” hypothesis of stock pricing. In any case, this simple Trading Model beats the Buy & Hold Strategy, and obviously can be improved by taking an adaptive approach – that is, recalculating the regression parameters after each trading day.
Let me note too that while the parameters of the OLS model are fixed from early 2008, the data keeps updating each trading day. So, for example, the prediction for two days hence will be based on the S&P and VIX index returns from tomorrow, along with 58 other lagged values of these time series.
I will report on the adaptive regression results, and hopefully follow with an augmented discussion of neural network models for this data.
Google unveils ten-year plan to build its ROBOT ARMY of the future
According to those familiar with the project, the team’s first goal is to build robots that can be used in large-scale manufacturing and logistics…”I feel with robotics it’s a green field. We’re building hardware, we’re building software. We’re building systems, so one team will be able to understand the whole stack.”
See also Google’s Andy Rubin is secretly building an army of robots, but for what?
6 companies that dominate 6 industries thanks to data
…executives from six companies from health care, fashion, education, media, transportation, and business shared examples of how they are using data to create opportunities that never existed before — and create a more personalized experience for their customers.
Is China Buying the World? Article kind of grinds on, using funny language that evokes ideological dispute, but confronts an important topic – will global business let China assume a role commensurate with its export strength and reserves of international currency?
It is highly likely that China will continue in the attempt that it has sustained throughout its ‘reform and opening up’ to build a group of globally competitive large companies. However slow and painstaking the process might be, they will ‘never give up’ (yong bu fang qi). The main body of China’s national champion firms are in a group of strategic industries including banking, metals and mining, construction, electricity generation and distribution, transport, and telecoms services. They have been protected by the fact that they are state-owned. They benefit from state procurement policy and the fact that they buy each other’s products. The non-financial firms benefit also from loans from state-owned banks. They have benefited greatly from the high speed growth of the domestic economy.
However, expanding the position of state-owned national champion firms in a large and fast-growing domestic economy is different from constructing globally competitive firms in the international arena. Despite significant progress China has not yet nurtured a group of globally competitive ‘national champion’ firms with leading global technologies and brands, that can compete within the high income countries. Despite widespread perceptions in the international media that Chinese firms are ‘buying the world’, their presence in the high income countries is negligible. This is a remarkable situation for a country that is the world’s largest exporter, and its second largest economy and manufacturer. In other words, ‘we’ are inside ‘them’, but ‘they’ are not inside ‘us’.
Mexico’s Surprising Engineering Strength related to growing automobile production
When Might the Federal Funds Rate Lift Off? Check this out vis-à-vis when tapering will begin.
This Commentary considers the question of when the FOMC’s unemployment and inflation thresholds might be breached. After taking into account the range of possible outcomes, the most likely outcome based on our model is that at least one threshold for raising the funds rate will be satisfied as of 2015:Q1. We show that this is two quarters earlier than if we were to look at the outlook for unemployment and projected inflation separately.
Our model can also consider the impact of changes to the forward guidance, such as introducing an inflation floor. An inflation floor of 1.5 percent would have a modest impact on the probability of satisfying both the floor and at least one of the thresholds, while an inflation floor of 1.75 percent could delay the point at which both the floor is satisfied and a threshold is crossed by about one year. This exercise suggests that the choice of an inflation floor could exert a considerable delay on the liftoff of the federal funds rate from the zero lower bound.
7 Reasons to be Cautious about the stock market, excerpted in Pragmatic Capitalism
- The median price-to-revenue ratio of the S&P 500 is now at an historic high, eclipsing even the 2000 level.
- The Shiller P/E is above 25, exceeding all observations prior to the late-1990s’ bubble except for three weeks in 1929.
- Market cap-to-GDP is already past its 2007 peak and is approaching the 2000 extreme. (This ratio is stretched at over two standard deviations above its long-term average.)
- The implied profit margin in the Shiller P/E (denominator of Shiller P/E divided by S&P 500 revenue) is 18% above the historical norm. On normal profit margins, the Shiller P/E would already be 30.
- If one examines the data, these raw valuation measures typically have a fraction of the relationship to subsequent S&P 500 total returns as measures that adjust for the cyclicality of profit margins (or are unaffected by those variations), such as Shiller P/E, price-to-revenue, market cap-to-GDP and even price-to-cyclically-adjusted-forward-operating-earnings. Because the deficit of one sector must emerge as the surplus of another, one can show that corporate profits (as a share of GDP) move inversely to the sum of government and private savings, particularly with a four- to six-quarter lag.
- The record profit margins of recent years are the mirror-image of record deficits in combined government and household savings, which began to normalize about a few quarters ago. The impact on profit margins is almost entirely ahead of us.
- The impact of 10-year Treasury yields (duration 8.8 years) on an equity market with a 50-year duration (duration in equities mathematically works out to be close to the price-to-dividend ratio) is far smaller than one would assume. Ten-year bonds are too short to impact the discount rate applied to the long tail of cash flows that equities represent. In fact, prior to 1970, and since the late-1990s, bond yields and stock yields have had a negative correlation. The positive correlation between bond yields and equity yields is entirely a reflection of the strong inflation-disinflation cycle from 1970 to about 1998.”
Back to Housing Bubbles Nouriel Roubini
It is widely agreed that a series of collapsing housing-market bubbles triggered the global financial crisis of 2008-2009, along with the severe recession that followed. While the United States is the best-known case, a combination of lax regulation and supervision of banks and low policy interest rates fueled similar bubbles in the United Kingdom, Spain, Ireland, Iceland, and Dubai.
Now, five years later, signs of frothiness, if not outright bubbles, are reappearing in housing markets in Switzerland, Sweden, Norway, Finland, France, Germany, Canada, Australia, New Zealand, and, back for an encore, the UK (well, London). In emerging markets, bubbles are appearing in Hong Kong, Singapore, China, and Israel, and in major urban centers in Turkey, India, Indonesia, and Brazil.
Signs that home prices are entering bubble territory in these economies include fast-rising home prices, high and rising price-to-income ratios, and high levels of mortgage debt as a share of household debt. In most advanced economies, bubbles are being inflated by very low short- and long-term interest rates. Given anemic GDP growth, high unemployment, and low inflation, the wall of liquidity generated by conventional and unconventional monetary easing is driving up asset prices, starting with home prices.
This is the title of a very recent paper by Petr Geraskin and Dean Fantazzini in the European Journal of Finance. These Russian academics summarize micro-models of trading behind log-periodic power laws for the growth, bursting and rapid decline of asset bubbles, highlighting several tests for an approaching critical point for a bubble, and applying these tests to the gold price bubble of 2009.
One of the most interesting tests is the crash lock-in plot, or CLIP. Here are two examples of CLIPs, for the S&P 500 Index in 2007 and for the Shanghai Composite Index, which crashed in 2009.
The idea behind these plots is that crash or critical dates (Tc) are predicted by log-periodic power law (LPPL) models based on observations that start from some historical point and include successively later “last observations” in the dataset. As the “last observations” become later and later, the crash dates stabilize or “lock in” for both datasets. Both curves in the graphs above were generated by data extending to one trading day prior to the actual crashes.
Log Periodic Power Laws
The idea of log-periodic power laws in finance is closely associated with the work of Didier Sornette, who has published extensively on this topic. What I was not really aware of before reading Geraskin/Fantazzini was the micro-modeling of trading which goes into the LPPL. There can be traders who follow “rational expectations” and traders who base their actions more on market momentum. When the imitation factor is great enough, a bubble can build, even though there is widespread perception that prices may crash. Indeed, the higher the probability of the crash, the faster the price should grow, to compensate investors for the increased risk of a crash in the market.
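For reference, the LPPL price trajectory before the critical time takes the following form (in one common parameterization; notation varies across papers):

```latex
\ln p(t) = A + B\,(t_c - t)^{m} + C\,(t_c - t)^{m}\cos\bigl(\omega \ln(t_c - t) - \phi\bigr),
\qquad t < t_c
```

Here $t_c$ is the critical (crash) date, $B<0$ with $0<m<1$ produces the faster-than-exponential run-up, and the cosine term superimposes the characteristic log-periodic oscillations that accelerate as $t$ approaches $t_c$.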
As reported in earlier posts here, Sornette and associates have applied LPPL theory to a variety of stock and other asset market bubbles and their subsequent crashes, generally preferring a definition of a “bubble” which is more formally mathematical, than related, as in other theories, to “fundamental value.”
LPPL theory is mathematically interesting and complex, and can be tied to underlying micro-models of trading. In practice, however, LPPL models are challenging to estimate, because of the characteristic presence of many local maxima and minima in the likelihood functions and estimation metrics. Initially, Sornette and others proposed LPPL models with so many parameters that estimation really challenged the data; subsequently, some of these parameters were “slaved” to the others, reducing the dimension of the nonlinear search. One of the appeals of the Geraskin/Fantazzini paper is that they discuss newer and probably more adequate estimation techniques, and, at the same time, deal with the critical work of Feigenbaum and others.
It’s interesting to me that this work emanates in part from physical science, for example, with earthquake studies. As in physics, the LPPL equations are approximations, and an important test is whether they are, in fact, predictive.
Collapse of an asset bubble is a probabilistic event. It becomes highly likely at some point, but even then is not necessarily determined.
Banks and Investment Banks
Major banks and investment institutions are releasing their 2014 outlooks now and over the next few weeks. The emerging theme is that 2014 is the year the economic crisis really comes to an end – when we will see stable employment growth and gradually strengthening business activity.
This is basically Vincent Reinhart’s call on Bloomberg recently -
and is highlighted dramatically in Nomura’s catch-phrase the “end of the end of the world.”
This is a different spin on the earlier view echoed by Marc Chandler of Brown Brothers Harriman for no change or slightly less growth -
The macro picture remains largely unchanged. Janet Yellen is expected to lead the Federal Reserve into a new phase by beginning to taper its long-term asset purchases early next year. The ECB is expected to move in the other direction, leaning against the tightening of financial conditions and the disinflationary forces. Although deflation appears to be being beaten back by the aggressive monetary policy of the Bank of Japan, in the face of capital gains and retail sales tax hikes, many expect the BOJ to have to do even more to achieve its 2% core (excludes fresh food but includes energy) inflation target.
Growth in the world’s second-largest economy, China, appears to have downshifted, but at 7.5% (Q3) or 7.8% (consensus for Q4), it is still among the fastest growing economies. To round out the five largest economic regions, the recent data shows that the German economy has recovered from the slowdown earlier this year, though at a little more than 1%, its growth is unimpressive. Even this overstates the German demand contribution, as it exports about 40% of what it produces.
Survey of Professional Forecasters
The recently released Fourth Quarter 2013 Survey of Professional Forecasters calls for a steady outlook in the US with a healthier labor market. Median real GDP forecasts for the fourth quarter of 2013 and the first quarter of 2014 are somewhat lower than predicted in the previous survey.
The outlook for growth in the U.S. economy is little changed from the survey of three months ago, according to 42 forecasters surveyed by the Federal Reserve Bank of Philadelphia. The forecasters expect real GDP to grow at an annual rate of 1.8 percent this quarter and 2.5 percent next quarter and to rise to 2.9 percent in the second quarter of 2014. On an annual-average over annual-average basis, the forecasters see real GDP growing 1.7 percent in 2013, 2.6 percent in 2014, 2.8 percent in 2015, and 2.7 percent in 2016. These projections are nearly the same as those of three months ago.
The projections for unemployment over the next three years are slightly below those of the last survey. Unemployment is projected to be an annual average of 7.5 percent in 2013, before falling to 7.0 percent in 2014, 6.4 percent in 2015, and 6.0 percent in 2016.
On the employment front, the forecasters see higher growth in jobs over the next four quarters.
(click to enlarge)
The IMF projects global growth at 2.9 percent in 2013, rising to 3.6 percent in 2014, with growth driven more by the advanced economies and emerging markets weaker than expected. The most recent October release says risks remain to the downside.
Again, there are slight reductions in this late-2013 forecast compared with the one released earlier in the year.
The Big “If”
All these outlooks are predicated on no repeat of the government shutdown and budget/debt ceiling impasse seen in October.
According to the US Office of Management and Budget (OMB), independent estimates indicate the shutdown significantly reduced GDP growth in the fourth calendar quarter:
Standard and Poor’s: “We believe that, to date, the shutdown has shaved at least 0.6% off of annualized fourth-quarter 2013 GDP growth…”
Macroeconomic Advisers: “Calibrating [the 1995-1996 shutdowns] to today’s economy, we estimate that a two-week shutdown would directly trim about 0.3 percentage point from fourth quarter growth, mainly by interrupting the flow of services produced by federal employees.”
Goldman Sachs projected that the shutdown would reduce GDP growth by 0.14 percentage points per week, even after most furloughed Department of Defense employees returned to work.
Mark Zandi, Moody’s: “The 16-day Federal shutdown and political brinksmanship around the Treasury debt ceiling hurt the economy. The hit to fourth quarter real GDP is estimated at… half a percentage point of growth.”
The shutdown also affected the Gallup Economic Confidence Index, which rose in November but has not returned to pre-shutdown levels.
Now the debt ceiling technical deadline is February 7, 2014. Let’s hope Reinhart’s optimism that politicians don’t interfere with the economy in even-numbered years is correct.
Vernon Smith is a pioneer in experimental economics. One of his most famous experiments concerns the genesis of asset bubbles.
Here is a short video about this widely replicated experiment.
Stefan Palan recently surveyed these experiments, and also has a downloadable working paper (2013) which collates data from them.
This article is based on the results of 33 published articles and 25 working papers using the experimental asset market design introduced by Smith, Suchanek and Williams (1988). It discusses the design of a baseline market and goes on to present a database of close to 1600 individual bubble measure observations from experiments in the literature, which may serve as a reference resource for the quantitative comparison of existing and future findings.
A typical pattern of asset bubble formation emerges in these experiments.
As Smith relates in the video, the experimental market comprises student subjects who can both buy and sell an asset which declines in value to zero over a fixed period. Subjects can earn real money at this, and cannot communicate with others in the experiment.
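The declining fundamental value in this design is simple arithmetic. In the commonly cited Smith-Suchanek-Williams baseline (figures quoted from memory, so check the 1988 paper for the exact design), the asset pays a random dividend in each of 15 trading periods, drawn equally likely from {0, 8, 28, 60} cents; the expected dividend is 24 cents, so the fundamental value falls linearly from 360 cents to zero. A sketch:

```python
import random

# Baseline SSW-style parameters (widely cited values; treat as illustrative).
DIVIDENDS_CENTS = [0, 8, 28, 60]  # equally likely per-period dividend draws
PERIODS = 15

expected_dividend = sum(DIVIDENDS_CENTS) / len(DIVIDENDS_CENTS)  # 24.0 cents

# Fundamental value at the START of period p (1-indexed): expected dividend
# times the number of dividend draws remaining, declining linearly to zero.
fundamental = [expected_dividend * (PERIODS - p + 1) for p in range(1, PERIODS + 1)]
print(fundamental[0], fundamental[-1])  # 360.0 24.0

# One simulated dividend stream: a trader holding one share to the end
# collects the sum of the draws, whose expectation equals the period-1
# fundamental value.
random.seed(0)
realized = sum(random.choice(DIVIDENDS_CENTS) for _ in range(PERIODS))
print(realized)
```

The striking experimental result is that traded prices typically rise well above this transparent, declining schedule before crashing back toward it.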
Noahpinion has further discussion of this type of bubble experiment, which, as Palan writes, is the best-documented experimental asset market design in existence and thus offers a superior base of comparison for new work.
There are convergent lines of evidence about the reality and dynamics of asset bubbles, and a growing appreciation that, empirically, asset bubbles share a number of characteristics.
That may not be enough to convince the mainstream economics profession, however, as a humorous piece by Hirshleifer (2001), quoted by a German researcher a few years back, suggests -
In the muddled days before the rise of modern finance, some otherwise-reputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices. What if the creators of asset pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately reflect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model (or DAPM), in which proxies for market misevaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies (such as book/market, earnings/price, and past returns) and mood indicators such as amount of sunlight, turned out to be strong predictors of future returns. At this point, it would seem that the deficient markets hypothesis was the best-confirmed theory in the social sciences.
Maybe I’ve been tilting against windmills. I’ve been trying to tease out the consequences of fairly simple predictive models for stock market returns, offering findings as evidence that “rational expectations” must fail, at least for certain periods. Also, I’ve highlighted warnings of a stock market bubble in the US, real estate bubbles in China and elsewhere, as well as analysts who look more deeply into the phenomena of asset bubbles.
There is a common thread to the debunking of these ideas – and that is the idea that rationality is pervasive in human choice.
This is one reason the work of Daniel Kahneman and his associates over the years, Amos Tversky, Richard Thaler, Paul Slovic, and interpreters such as Detlof von Winterfeldt, is so important. Kahneman, of course, was awarded the 2002 Nobel Prize in economics, the only psychologist apart from Herbert Simon to have received this honor. Even David Brooks, the generally conservative New York Times columnist, admires Kahneman.
I mention Detlof von Winterfeldt, because his early book Decision Analysis and Behavioral Research, now coming out in a 2nd edition, influenced me when I was studying risk analysis some years back.
I think it is safe to say at this point that the evidence for violations of expected utility theory or other standards of rationality in individual and group choices is widespread, even systematic.
The area of disaster insurance provides many good examples. There is a well-established ratchet pattern of purchase of flood insurance after flood events. Thus, a year after a major flood in an area, people widely purchase flood insurance. As time goes on without another major flood event, many let their insurance lapse, so coverage drifts down. Then, another flood occurs, and coverage spikes, etc. This is a consequence, really, of not understanding the meaning of “hundred year flood.”
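The “hundred year flood” misunderstanding is simple arithmetic: such a flood has a 1 percent chance each year, so the chance of seeing at least one over a horizon of n years is 1 − 0.99^n, assuming independent years. That is far higher than intuition suggests:

```python
# P(at least one "100-year" flood in n years) = 1 - (1 - 0.01)^n
def flood_risk(n_years, annual_prob=0.01):
    """Probability of at least one exceedance event over n_years,
    assuming independent years."""
    return 1 - (1 - annual_prob) ** n_years

for n in (1, 10, 30, 100):
    print(n, round(flood_risk(n), 3))
# Over a 30-year mortgage the risk is about 26%; over a century, about 63%,
# so a "100-year" flood is nothing like a once-per-lifetime rarity.
```

Letting coverage lapse after a few quiet years ignores the fact that the annual 1 percent risk never goes away.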
But there are many other examples. Some work in psychology, for example, demonstrates the nontransitivity of choice in groups. Thus, a group of people can vote a majority preference for option B over option A, and a majority preference for option C over option B, while nevertheless preferring option A over option C. Very confusing, but not terribly difficult to set up in a situation of collective choice.
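This nontransitivity of majority voting is the classic Condorcet paradox, and it takes only three voters to produce. A minimal sketch with hypothetical rankings:

```python
# Three hypothetical voters, each with a perfectly transitive ranking
# (best first). Individually rational, collectively cyclic.
voters = [
    ["B", "A", "C"],
    ["A", "C", "B"],
    ["C", "B", "A"],
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    votes_for_x = sum(1 for ranking in voters
                      if ranking.index(x) < ranking.index(y))
    return votes_for_x > len(voters) / 2

# The group prefers B to A, C to B -- and yet A to C: a cycle.
print(majority_prefers("B", "A"),
      majority_prefers("C", "B"),
      majority_prefers("A", "C"))  # True True True
```

Each voter is perfectly consistent; the cycle emerges only at the level of the group.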
In this regard, Kahneman and Tversky introduced prospect theory in the late 1970s. This provides an explanation for violations of expected utility maximization, and has been offered as an explanation for economic “puzzles,” such as the equity premium puzzle.
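The core of prospect theory’s departure from expected utility is a value function defined over gains and losses rather than final wealth: concave for gains, convex and steeper for losses. With the parameter estimates usually attributed to Tversky and Kahneman (α ≈ 0.88, λ ≈ 2.25, quoted from memory, so treat as illustrative), a symmetric coin flip looks unattractive, which expected utility over small stakes struggles to explain:

```python
# Prospect-theory value function over gains and losses (reference point 0).
# alpha (diminishing sensitivity) and LAM (loss aversion) are the commonly
# cited Tversky-Kahneman estimates; treat as illustrative.
ALPHA, LAM = 0.88, 2.25

def value(x):
    """Concave over gains, convex and steeper over losses."""
    if x >= 0:
        return x ** ALPHA
    return -LAM * (-x) ** ALPHA

# Loss aversion: a $100 loss looms larger than a $100 gain.
print(value(100), value(-100))

# A 50/50 bet to win or lose $100 has negative prospect value, so it is
# rejected, even though its expected monetary value is zero.
# (Probability weighting, the other half of the theory, is omitted here.)
coin_flip = 0.5 * value(100) + 0.5 * value(-100)
print(coin_flip < 0)  # True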
Here is a brief selection from Kahneman’s TED talk, focusing on the experiencing versus remembering self. There are fascinating research findings on recall of pain in colonoscopy in the 1990’s, before the flexible scopes of today and performed with less medication.
Here is a charming interview with Kahneman about his latest book dealing with what he calls slow and fast thinking.
I guess one of the problems with accepting the idea that a great deal of what we do kind of shoots from the hip, and is fraught with all sorts of judgmental biases, is that we lose the big mechanistic models suggested by more vintage economics and mathematical psychology. Researchers can’t just sit back doing set theory and the calculus of variations to come up with what is going on in the world, or what may happen. And then what is the norm? And who is above bias when it comes to determining normative outcomes and enforcing them?
This is one reason for my continuing fascination and dedication, I guess, to forecasting.
One of my little secrets has been that I am interested in the extent to which human behavior can be predicted, in almost any context. The business forecasting context is very attractive, however, precisely because the stakes are well-defined, and, I guess, because it pays. Business can earn extra profits from success in forecasting, and, presumably, these added profits justify paying for the forecasting service.
But the underlying focus is simply on the dynamics of what is and will happen. If theory helps, so much the better. But it is possible to do without much theory and wallow in the data. The proof is in the pudding.