Google unveils ten-year plan to build its ROBOT ARMY of the future
According to those familiar with the project, the team’s first goal is to build robots that can be used in large-scale manufacturing and logistics: “I feel with robotics it’s a green field. We’re building hardware, we’re building software. We’re building systems, so one team will be able to understand the whole stack.”
See also Google’s Andy Rubin is secretly building an army of robots, but for what?
6 companies that dominate 6 industries thanks to data
…executives from six companies from health care, fashion, education, media, transportation, and business shared examples of how they are using data to create opportunities that never existed before — and create a more personalized experience for their customers.
Is China Buying the World? Article kind of grinds on, using funny language that evokes ideological dispute, but confronts an important topic – will global business let China assume a role commensurate with its export strength and reserves of international currency?
It is highly likely that China will continue in the attempt that it has sustained throughout its ‘reform and opening up’ to build a group of globally competitive large companies. However slow and painstaking the process might be, they will ‘never give up’ (yong bu fang qi). The main body of China’s national champion firms are in a group of strategic industries including banking, metals and mining, construction, electricity generation and distribution, transport, and telecoms services. They have been protected by the fact that they are state-owned. They benefit from state procurement policy and the fact that they buy each other’s products. The non-financial firms benefit also from loans from state-owned banks. They have benefited greatly from the high-speed growth of the domestic economy.
However, expanding the position of state-owned national champion firms in a large and fast-growing domestic economy is different from constructing globally competitive firms in the international arena. Despite significant progress China has not yet nurtured a group of globally competitive ‘national champion’ firms with leading global technologies and brands, that can compete within the high income countries. Despite widespread perceptions in the international media that Chinese firms are ‘buying the world’, their presence in the high income countries is negligible. This is a remarkable situation for a country that is the world’s largest exporter, and its second largest economy and manufacturer. In other words, ‘we’ are inside ‘them’, but ‘they’ are not inside ‘us’.
Mexico’s Surprising Engineering Strength related to growing automobile production
When Might the Federal Funds Rate Lift Off? Check this out vis-à-vis when tapering will begin.
This Commentary considers the question of when the FOMC’s unemployment and inflation thresholds might be breached. After taking into account the range of possible outcomes, the most likely outcome based on our model is that at least one threshold for raising the funds rate will be satisfied as of 2015:Q1. We show that this is two quarters earlier than if we were to look at the outlook for unemployment and projected inflation separately.
Our model can also consider the impact of changes to the forward guidance, such as introducing an inflation floor. An inflation floor of 1.5 percent would have a modest impact on the probability of satisfying both the floor and at least one of the thresholds, while an inflation floor of 1.75 percent could delay the point at which both the floor is satisfied and a threshold is crossed by about one year. This exercise suggests that the choice of an inflation floor could exert a considerable delay on the liftoff of the federal funds rate from the zero lower bound.
7 Reasons to be Cautious about the stock market, excerpted in Pragmatic Capitalism
- The median price-to-revenue ratio of the S&P 500 is now at an historic high, eclipsing even the 2000 level.
- The Shiller P/E is above 25, exceeding all observations prior to the late-1990s’ bubble except for three weeks in 1929.
- Market cap-to-GDP is already past its 2007 peak and is approaching the 2000 extreme. (This ratio is stretched at over two standard deviations above its long-term average.)
- The implied profit margin in the Shiller P/E (denominator of Shiller P/E divided by S&P 500 revenue) is 18% above the historical norm. On normal profit margins, the Shiller P/E would already be 30.
- If one examines the data, these raw valuation measures typically have a fraction of the relationship to subsequent S&P 500 total returns as measures that adjust for the cyclicality of profit margins (or are unaffected by those variations), such as Shiller P/E, price-to-revenue, market cap-to-GDP and even price-to-cyclically-adjusted-forward-operating-earnings. Because the deficit of one sector must emerge as the surplus of another, one can show that corporate profits (as a share of GDP) move inversely to the sum of government and private savings, particularly with a four- to six-quarter lag.
- The record profit margins of recent years are the mirror-image of record deficits in combined government and household savings, which began to normalize a few quarters ago. The impact on profit margins is almost entirely ahead of us.
- The impact of 10-year Treasury yields (duration 8.8 years) on an equity market with a 50-year duration (duration in equities mathematically works out to be close to the price-to-dividend ratio) is far smaller than one would assume. Ten-year bonds are too short to impact the discount rate applied to the long tail of cash flows that equities represent. In fact, prior to 1970, and since the late-1990s, bond yields and stock yields have had a negative correlation. The positive correlation between bond yields and equity yields is entirely a reflection of the strong inflation-disinflation cycle from 1970 to about 1998.”
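The margin adjustment in the excerpt is just arithmetic, and worth making concrete. A quick sketch — the 25 and 18 percent figures come from the bullets above; everything else is illustrative:

```python
# Margin-adjusted Shiller P/E, per the excerpt above: if the implied
# profit margin is 18% above its historical norm, then on *normal*
# margins earnings shrink and the multiple scales up by the same factor.
shiller_pe = 25.0        # current Shiller P/E (from the excerpt)
margin_premium = 0.18    # implied margin relative to the historical norm

adjusted_pe = shiller_pe * (1.0 + margin_premium)
print(round(adjusted_pe, 1))  # 29.5 -- "already about 30" on normal margins
```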
Back to Housing Bubbles Nouriel Roubini
It is widely agreed that a series of collapsing housing-market bubbles triggered the global financial crisis of 2008-2009, along with the severe recession that followed. While the United States is the best-known case, a combination of lax regulation and supervision of banks and low policy interest rates fueled similar bubbles in the United Kingdom, Spain, Ireland, Iceland, and Dubai.
Now, five years later, signs of frothiness, if not outright bubbles, are reappearing in housing markets in Switzerland, Sweden, Norway, Finland, France, Germany, Canada, Australia, New Zealand, and, back for an encore, the UK (well, London). In emerging markets, bubbles are appearing in Hong Kong, Singapore, China, and Israel, and in major urban centers in Turkey, India, Indonesia, and Brazil.
Signs that home prices are entering bubble territory in these economies include fast-rising home prices, high and rising price-to-income ratios, and high levels of mortgage debt as a share of household debt. In most advanced economies, bubbles are being inflated by very low short- and long-term interest rates. Given anemic GDP growth, high unemployment, and low inflation, the wall of liquidity generated by conventional and unconventional monetary easing is driving up asset prices, starting with home prices.
This is the title of a very recent paper by Petr Geraskin and Dean Fantazzini in the European Journal of Finance. These Russian academics summarize micro-models of trading behind log-periodic power laws for the growth, bursting and rapid decline of asset bubbles, highlighting several tests for an approaching critical point for a bubble, and applying these tests to the gold price bubble of 2009.
One of the most interesting tests is the crash lock-in plot or CLIP. Here are two examples of CLIPs, for the S&P 500 Index in 2007 and the Shanghai Composite Index which crashed in 2009.
The idea behind these plots is that crash or critical dates (Tc) are predicted by log-periodic power law (LPPL) models based on observations that start from some historical point and include successively later “last observations” in a dataset. As the “last observations” become later and later, the crash dates stabilize or “lock in” for both datasets. Both curves in the graphs above were generated by data that extended until one trading day prior to the actual crashes.
Log Periodic Power Laws
The idea of log-periodic power laws in finance is closely associated with the work of Didier Sornette, who has published extensively on this topic. What I was not really aware of before reading Geraskin/Fantazzini was the micro-modeling of trading which goes into the LPPL. There can be traders who follow “rational expectations” and traders who base their actions more on market momentum. When the imitation factor is great enough, a bubble can build, even though there is widespread perception that prices may crash. Indeed, the higher the probability of the crash, the faster the price should grow, to compensate investors for the increased risk of a crash in the market.
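For readers who want the functional form, the canonical LPPL specification associated with Sornette’s work — my rendering from the literature, not a formula quoted from the Geraskin/Fantazzini paper — models the log price ahead of a critical date \(t_c\) as faster-than-exponential growth decorated with log-periodic oscillations:

```latex
\ln p(t) = A + B\,(t_c - t)^{m} + C\,(t_c - t)^{m}\cos\!\left(\omega \ln(t_c - t) - \phi\right),
\qquad t < t_c,\; 0 < m < 1
```

The seven parameters \(A, B, C, m, \omega, \phi, t_c\), several entering nonlinearly, are one reason estimating these models is so delicate.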
As reported in earlier posts here, Sornette and associates have applied LPPL theory to a variety of stock and other asset market bubbles and their subsequent crashes, generally preferring a definition of a “bubble” which is more formally mathematical, than related, as in other theories, to “fundamental value.”
LPPL theory is mathematically interesting and complex, and can be tied to underlying micro-models of trading. However, in practice, LPPL models are challenging to estimate, because of the characteristic presence of many local maxima or minima affecting likelihood functions and estimation metrics. Initially, Sornette and others proposed LPPL models with so many parameters that estimation really challenged the data; later, some of these parameters were proposed to be “slaved” to others. One of the appeals of the Geraskin/Fantazzini paper is that they discuss newer and probably more adequate estimation techniques and, at the same time, deal with the critical work of Feigenbaum and others.
It’s interesting to me that this work emanates in part from physical science, for example, with earthquake studies. As in physics, the LPPL equations are approximations, and an important test is whether they are, in fact, predictive.
Collapse of an asset bubble is a probabilistic event. It becomes highly likely at some point, but even then is not necessarily determined.
Banks and Investment Banks
Major banks and investment institutions are releasing their 2014 outlooks now and over the next few weeks. The emerging theme is that 2014 is the year the economic crisis really comes to an end – when we will see stable employment growth and gradually expanding business activity.
This is basically Vincent Reinhart’s call on Bloomberg recently -
and is highlighted dramatically in Nomura’s catch-phrase the “end of the end of the world.”
This is a different spin on the earlier view echoed by Marc Chandler of Brown Brothers Harriman for no change or slightly less growth -
The macro picture remains largely unchanged. Janet Yellen is expected to lead the Federal Reserve into a new phase by beginning to taper its long-term asset purchases early next year. The ECB is expected to move in the other direction, leaning against the tightening of financial conditions and the disinflationary forces. Although deflation appears to be beaten back by the aggressive monetary policy of the Bank of Japan, in the face of capital gains and retail sales tax hikes, many expect the BOJ to have to do even more to achieve its 2% core (excludes fresh food but includes energy) inflation target.
Growth in the world’s second-largest economy, China, appears to have downshifted, but at 7.5% (Q3) or 7.8% (consensus for Q4), it is still among the fastest growing economies. To round out the five largest economic regions, the recent data show that the German economy has recovered from the slowdown earlier this year, though at a little more than 1%, its growth is unimpressive. Even this overstates Germany’s contribution to demand, as it exports about 40% of what it produces.
Survey of Professional Forecasters
The recently released Fourth Quarter 2013 Survey of Professional Forecasters calls for a steady outlook in the US with a healthier labor market. Median real GDP forecasts for the fourth quarter of 2013 and the first quarter of 2014 are somewhat lower than predicted in the previous survey.
The outlook for growth in the U.S. economy is little changed from the survey of three months ago, according to 42 forecasters surveyed by the Federal Reserve Bank of Philadelphia. The forecasters expect real GDP to grow at an annual rate of 1.8 percent this quarter and 2.5 percent next quarter and to rise to 2.9 percent in the second quarter of 2014. On an annual-average over annual-average basis, the forecasters see real GDP growing 1.7 percent in 2013, 2.6 percent in 2014, 2.8 percent in 2015, and 2.7 percent in 2016. These projections are nearly the same as those of three months ago.
The projections for unemployment over the next three years are slightly below those of the last survey. Unemployment is projected to be an annual average of 7.5 percent in 2013, before falling to 7.0 percent in 2014, 6.4 percent in 2015, and 6.0 percent in 2016.
On the employment front, the forecasters see higher growth in jobs over the next four quarters.
The IMF projects global growth at 2.9 percent in 2013, rising to 3.6 percent in 2014, with growth driven more by advanced economies and emerging markets weaker than expected. The most recent October release says risks remain on the downside.
Again, there are slight reductions in this late 2013 forecast, over the one released earlier in the year.
The Big “If”
All these outlooks are predicated on no repeat of the government shutdown and budget/debt ceiling impasse seen in October.
According to the US Office of Management and Budget (OMB), independent estimates indicate the shutdown significantly reduced GDP growth in the fourth calendar quarter:
Standard and Poor’s: “We believe that, to date, the shutdown has shaved at least 0.6% off of annualized fourth-quarter 2013 GDP growth…”
Macroeconomic Advisers: “Calibrating [the 1995-1996 shutdowns] to today’s economy, we estimate that a two-week shutdown would directly trim about 0.3 percentage point from fourth quarter growth, mainly by interrupting the flow of services produced by federal employees.”
Goldman Sachs projected that the shutdown would reduce GDP growth by 0.14 percentage points per week, even after most furloughed Department of Defense employees returned to work.
Mark Zandi, Moody’s: “The 16-day Federal shutdown and political brinksmanship around the Treasury debt ceiling hurt the economy. The hit to fourth quarter real GDP is estimated at… half a percentage point of growth.”
The shutdown affected the Gallup Economic Confidence Index, which rose in November but not to levels prior to the shutdown.
Now the debt ceiling technical deadline is February 7, 2014. Let’s hope Reinhart’s optimism that politicians don’t interfere with the economy in even-numbered years is correct.
Vernon Smith is a pioneer in experimental economics. One of his most famous experiments concerns the genesis of asset bubbles.
Here is a short video about this widely replicated experiment.
Stefan Palan recently surveyed these experiments, and also has a downloadable working paper (2013) which collates data from them.
This article is based on the results of 33 published articles and 25 working papers using the experimental asset market design introduced by Smith, Suchanek and Williams (1988). It discusses the design of a baseline market and goes on to present a database of close to 1600 individual bubble measure observations from experiments in the literature, which may serve as a reference resource for the quantitative comparison of existing and future findings.
A typical pattern of asset bubble formation emerges in these experiments.
As Smith relates in the video, the experimental market consists of student subjects who can both buy and sell an asset which declines in value to zero over a fixed period. Students can earn real money at this, and cannot communicate with others in the experiment.
Noahpinion has further discussion of this type of bubble experiment, which, as Palan writes, is the best-documented experimental asset market design in existence and thus offers a superior base of comparison for new work.
There are convergent lines of evidence about the reality and dynamics of asset bubbles, and a growing appreciation that, empirically, asset bubbles share a number of characteristics.
That may not be enough to convince the mainstream economics profession, however, as a humorous piece by Hirshleifer (2001), quoted by a German researcher a few years back, suggests -
In the muddled days before the rise of modern finance, some otherwise-reputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices. What if the creators of asset pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately reflect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model (or DAPM), in which proxies for market misevaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies (such as book/market, earnings/price, and past returns) and mood indicators such as amount of sunlight, turned out to be strong predictors of future returns. At this point, it would seem that the deficient markets hypothesis was the best-confirmed theory in the social sciences.
Maybe I’ve been tilting against windmills. I’ve been trying to tease out consequences of fairly simple predictive models for stock market returns, offering findings as evidence that “rational expectations” must fail, at least for certain periods. Also, I’ve highlighted warnings of a stock market bubble in the US and real estate bubbles in China and elsewhere, as well as analysts who look more deeply into the phenomena of asset bubbles.
There is a common thread to the debunking of these ideas – and that is the idea that rationality is pervasive in human choice.
This is one reason the work of Daniel Kahneman and his associates over the years, Amos Tversky, Richard Thaler, Paul Slovic, and interpreters such as Detlof von Winterfeldt, is so important. Kahneman, of course, was awarded the 2002 Nobel Prize in economics, the only psychologist apart from Herbert Simon to have received this honor. Even David Brooks, the generally conservative New York Times columnist, admires Kahneman.
I mention Detlof von Winterfeldt, because his early book Decision Analysis and Behavioral Research, now coming out in a 2nd edition, influenced me when I was studying risk analysis some years back.
I think it is safe to say at this point that the evidence for violations of expected utility theory or other standards of rationality in individual and group choices is widespread, even systematic.
The area of disaster insurance provides many good examples. There is a well-established ratchet pattern of purchase of flood insurance after flood events. Thus, a year after a major flood in an area, people widely purchase flood insurance. As time goes on without another major flood event, many let their insurance lapse, so coverage drifts down. Then, another flood occurs, and coverage spikes, etc. This is a consequence, really, of not understanding the meaning of “hundred year flood.”
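The “hundred year flood” confusion is easy to quantify. A minimal sketch, assuming a 1 percent annual probability and independence across years:

```python
# Probability of seeing at least one "hundred year flood" (1% annual
# chance) over an n-year horizon, assuming independent years.
def prob_at_least_one_flood(n_years, p_annual=0.01):
    return 1.0 - (1.0 - p_annual) ** n_years

print(round(prob_at_least_one_flood(30), 3))   # 0.26 -- over a 30-year mortgage
print(round(prob_at_least_one_flood(100), 3))  # 0.634 -- no certainty even in a century
```

So an uneventful decade after buying flood insurance is entirely consistent with the stated risk; letting coverage lapse is a misreading of the odds, not evidence the risk has passed.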
But there are many other examples. Some work in psychology, for example, demonstrates the nontransitivity of group choice. A majority may prefer option B over option A, and option C over option B, and yet prefer option A over option C. Very confusing, but not terribly difficult to set up in a situation of collective choice.
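This cyclical group preference is straightforward to reproduce. A minimal sketch with three hypothetical voters, each holding perfectly transitive individual rankings:

```python
# Each voter ranks the options best-to-worst; individual rankings are transitive.
voters = [
    ["C", "B", "A"],
    ["B", "A", "C"],
    ["A", "C", "B"],
]

def majority_prefers(x, y, rankings=voters):
    """True if a strict majority ranks option x above option y."""
    votes = sum(1 for r in rankings if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

print(majority_prefers("B", "A"))  # True: B beats A, 2-1
print(majority_prefers("C", "B"))  # True: C beats B, 2-1
print(majority_prefers("A", "C"))  # True: A beats C, 2-1 -- an intransitive cycle
```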
In this regard, Kahneman and Tversky introduced prospect theory in the late 1970s. This provides an explanation for violations of expected utility maximization, and has been offered as an explanation for economic “puzzles,” such as the equity premium puzzle.
Here is a brief selection from Kahneman’s TED talk, focusing on the experiencing versus remembering self. There are fascinating research findings on recall of pain in colonoscopy in the 1990’s, before the flexible scopes of today and performed with less medication.
Here is a charming interview with Kahneman about his latest book dealing with what he calls slow and fast thinking.
I guess one of the problems with accepting the idea that a great deal of what we do kind of shoots from the hip, and is fraught with all sorts of judgmental biases, is that we lose the big mechanistic models suggested by more vintage economics and mathematical psychology. Researchers can’t just sit back doing set theory and the calculus of variations to come up with what is going on in the world, or what may happen. And then what is the norm? And who is above bias when it comes to determining normative outcomes and enforcing them?
This is one reason for my continuing fascination and dedication, I guess, to forecasting.
One of my little secrets has been that I am interested in the extent to which human behavior can be predicted, in almost any context. The business forecasting context is very attractive, however, precisely because the stakes are well-defined, and, I guess, because it pays. Business can earn extra profits from success in forecasting, and, presumably, these added profits justify paying the forecasting service.
But the underlying focus is simply on the dynamics of what is and will happen. If theory helps, so much the better. But it is possible to do without much theory and wallow in the data. The proof is in the pudding.
Well, It’s Thanksgiving Day – truly my favorite holiday. I love the focus on friends, family, and food. A feast day, and a day to remember and give thanks.
OK, in the previous post, I make what may seem to some to be an outlandish claim. I claim to have a predictive model for S&P 500 daily (trading day) returns which, since 1990, yields cumulative returns of about a 70- to 80-fold multiple of the original investment. Let me say too, I dedicate the development of this concept to my old friend Joe Oliger, who did better than any of us, ending up at Stanford Computer Science and heading up important projects at the NASA/Ames Computing Laboratories near there – before falling victim to pancreatic cancer.
Professor Didier Sornette of the Swiss Federal Institute of Technology (with many other affiliations) kindly offered to audit aspects of my estimates of gain, and we will hear back about that soon.
One reason I am confident my neural network model is performing as claimed is that I have done work with simpler, regression models.
So quickly this morning, I did a double-check. Did I imagine all this? What sorts of gains do the regression models provide over this period?
Thanksgiving Recipe for an Excel Spreadsheet Which Shows Rational Expectations Fails After 2008
First, download the daily S&P 500 Index back to 1990. Then, invert the series so the data run from days in January 1990 to recently, rather than the other way. I used data from 6008 trading days. The next step is to calculate returns trading day by trading day: a simple formula taking the ratio of the current to the previous trading day’s index value and subtracting 1.
Once you have this time series of trading day returns, put in several lagged values on the spreadsheet, and then set up an adaptive regression with the Excel TREND() function. If you make the address of the initial value of the target and the explanatory variable values absolute, the regression estimate will include greater and greater numbers of observations. Be sure to stop the target and explanatory variable data blocks one row before the vector of current observations on the daily returns – so you use data from the past to predict, out-of-sample, what the current value will be, given the current lagged values in the spreadsheet.
This will generate a series of estimated daily returns on a one-step-ahead-basis.
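For readers who would rather check this in code than in Excel, here is a rough Python analog of the TREND() recipe. The returns below are synthetic stand-ins (the actual S&P 500 series is not included here), the lag set is one of those described in the text, and the variable names are mine:

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.0003, 0.01, 600)  # stand-in for S&P 500 daily returns

lags = [1, 3, 5, 7, 9, 11, 13]           # one of the lag sets from the text
max_lag = max(lags)
warmup = 50                              # minimum history before forecasting

preds = []
for t in range(max_lag + warmup, len(returns)):
    # Expanding window: targets are returns[max_lag] .. returns[t-1],
    # each explained by its own strictly earlier lagged values.
    y = returns[max_lag:t]
    X = np.column_stack([np.ones(len(y))] +
                        [returns[max_lag - k:t - k] for k in lags])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    x_now = np.concatenate(([1.0], [returns[t - k] for k in lags]))
    preds.append(float(x_now @ beta))    # one-step-ahead, out-of-sample forecast
```

Like the spreadsheet, each forecast uses only data available before that trading day, with the estimation window growing one observation at a time.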
In about 15 minutes this morning, I created two such setups. In one, I estimated adaptive regressions with lags of 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, and 29. In the other, I estimated adaptive regressions with lags of 1, 3, 5, 7, 9, 11, and 13.
The results, applied to a Trading Program, are shown in the chart below.
So, several points emerge from this quick exercise.
First, time depth makes a difference in the autoregressive model. The model with 15 variables going back altogether 30 trading days (skipping some lags) yields a multiple of about 8 on an initial $1,000 investment, compared with a model with only 7 variables going back about 15 lags.
Secondly, I have not put up the Buy & Hold strategy cumulative gains, but those actually surpass the 7 variable model.
Thirdly, something extraordinary happens starting in around 2008 to the 15 variable model – the gains start really taking off.
A word on how I set up the Trading Program – If I am holding cash at the end of a trading day and the next day predicted rate of return is positive, I purchase stock with my “nest egg.” Then, I hold through until the end of the next trading day and decide whether to continue holding (prediction for next day positive) or sell. So my trading actions are determined by the forecast model, and my cumulative gains are determined by the actual S&P 500 daily returns, which apply, of course, only when I am holding stock, the cash retaining its value over a trading day or days.
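The rule just described reduces to a few lines of code. A minimal sketch — the function name and inputs are mine, purely illustrative:

```python
def trading_program(predicted, actual, stake=1000.0):
    """Cumulative value of the rule above: hold the index only on days
    the model predicts a positive return; otherwise sit in cash."""
    wealth = stake
    for p, r in zip(predicted, actual):
        if p > 0:              # forecast positive -> hold through the day
            wealth *= 1.0 + r  # earn the actual daily return
        # forecast non-positive -> in cash, wealth unchanged for the day
    return wealth

# In the market day 1 (+2%), in cash day 2 (sidestepping -3%)
print(trading_program([0.01, -0.005], [0.02, -0.03]))  # 1020.0
```

Note that the forecast only chooses whether to be in the market; the gains themselves come entirely from the actual daily returns.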
The catch – The catch, if there is any, relates to both (a) determining the current daily rate of return, and (b) trading at the end of trading days. I need to further investigate the volatility of the S&P 500 on an intra-day basis, especially in the minutes and seconds before the end of the trading day. I assume it is possible to replicate my results in the real world, providing I do everything close enough (possibly within fractions of a second) to the closing bell.
What This May Mean
There are many versions of the rational expectations hypothesis, applied to the stock market. But generally, rational expectations says that public information, because it is instantly incorporated in market behavior, is useless in predicting stock prices. In effect, stock prices are random walks of one sort or another. Violations of this result may exist in niche or narrowly-traded markets, and then only with a low degree of predictability.
However, this is the Standard & Poor’s 500 Index we are talking about. Definitely not a niche market. And the gains I get from this simple autoregressive model, estimated by ordinary least squares (OLS) regression, indicate that from around 2008 it was possible to double your money trading the S&P 500 in about five to six years, using a Trading Program based on the regression model.
The key question is why this occurs after 2008. It’s not instantly obvious to me. A Buy & Hold strategy initiated in 2008 would do less well than the 15 variable autoregressive model. So just waiting for the market to go up in this period yields gains, but it is possible to do much better than that with a little analysis.
Frankly, this almost seems like a case of “the Emperor has no clothes.”
What’s the causal loop here? Does it, as I suggest in the previous post, have something to do with the US Federal Reserve quantitative easing launched in 2008? If so, how does that work? Can you follow the money and show how a little analysis goes a long way in this period?
Really, I think this is a first order problem. Clearly, if it is solved, it may “kill the goose that lays the golden egg.” But we may find out something even more valuable.
I’ve been developing a trading program to predict daily S&P 500 stock returns. The previous post, for example, outlines a neural network (NN) program, estimated with the Matlab Neural Network Toolbox, that predicts S&P 500 trading day returns somewhat more successfully than my earlier efforts with regression analysis.
So this program has about a 55 percent success rate in predicting the direction of change (positive or negative) of S&P daily stock price returns on a one-trading-day-ahead basis. It’s a nonlinear autoregressive model with a hidden layer containing 30 neurons connected to lags from 1 to 30 of the S&P 500 returns. If the sign of the one-day-ahead prediction for the daily return is the same as the sign of the actual daily return for that trading day, I score a one (1); otherwise zero.
So the overall average ratio of 1’s to the total number of trading days analyzed is around 0.55.
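That scoring rule is just a directional hit rate. A minimal version (names mine):

```python
import numpy as np

def hit_rate(predicted, actual):
    """Fraction of trading days on which the predicted return carries
    the same sign as the actual return (the 0/1 scoring above)."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return float(np.mean(np.sign(predicted) == np.sign(actual)))

# Signs match on days 1 and 3 of 4 -> 0.5
print(hit_rate([0.002, -0.001, 0.004, 0.001],
               [0.003, 0.002, 0.001, -0.002]))  # 0.5
```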
This NN trading program performs fairly well during recessionary periods too, as the chart below for 2001-2002 shows.
Here, the average for this period is just above 50%, which is not too tacky. Thus, following the NN trading program during this difficult period would enable an investor to just about break even.
And, with respect to performance in terms of trading gains and losses, I’ve calculated a cumulative return from March 1, 1990 through October 2013, comparing it with a “Buy & Hold” strategy. To estimate cumulative returns from the NN trading program, I just buy when the next day rate of return is predicted to be positive, and sell today, at the end of the closing day, when the next day rate of return is predicted to be negative.
The following chart compares cumulative returns from the NN trading program and a Buy & Hold strategy, beginning trading March 1, 1990.
So, following this NN trading program yields up returns at a multiple of between 70 and 80. Investing $1000 in 1990, thus, would yield up more than $700,000 in cumulative gains, not counting taxes and broker fees. This compares with a multiple of about 5 for the Buy & Hold strategy.
Obviously, this is the best thing since sliced bread, but wait a minute. Why do the returns start skyrocketing just as the Great Recession gets underway?
And somewhere I’ve seen another curve similar to the NN Trading Program Cumulative curve in the chart above…
Oh yeah, that looks like the graph of the increase in the US monetary base, available from FRED.
In fact, let’s put that graph together with the above two, to make further comparison, again indexing the first period base to 1, so we can see simple multiples in the chart.
Wow – look at that. The huge climb in cumulative gains from the NN trading program starts exactly when the US Federal Reserve policies – unconventional policies, I might add – cause the US monetary base to surge. Furthermore, there is more fine-grain detail that might be teased out from the orange and blue lines.
What does this mean, and can it be a coincidence?
OK, so I’m going to go agnostic at this point. I’ve looked at whether rolling averages of the percent of same signs increase in this period, and apart from rebounding from a deep dip early in 2008, I don’t see much evidence of a trend. But it seems that the actual magnitudes of daily returns predicted by the NN model provide a closer fit in this period, even though roughly the same percentage of correct-sign predictions obtains.
And I have checked everything exhaustively in this model development. The numbers are correct.
Bottom line – I am going to say that QE has introduced more predictability into stock returns. Notice that the Buy & Hold strategy does not work particularly well in this period either. So this predictability is more than stocks just going up (in one direction) in value.
I always liked the history-of-science chestnut about early modern biologists (1500s) discovering that Aristotle made a mistake about the number of teeth in a horse’s head. These guys actually looked in a horse’s mouth and counted.
Something roughly similar may be going on here. I would say this is proof that rational expectations is complete nonsense, at least in this recent period.
So why don’t I just exploit this fact and get rich, and keep quiet?
Interesting question. But one thing is that I am using closing values to estimate the daily returns. So I need to study how close to the closing bell you can get in order to capture an S&P 500 value that will reliably be close to the closing value. I suspect this is tricky, and it may be one reason why I would need to install high-speed connections between my office here in Colorado and Wall Street – it may be a matter of microseconds, ultimately.
I also believe there are ways to optimize my NN program, and I plan to focus on that. Something like this needs intense study and the production of many iterations and versions.
Contact me at firstname.lastname@example.org for specific information on this modeling effort, or signup with your email to get further posts on this topic.
Yeah, I experimented with neural networks (NN’s) extensively when they surfaced as the Next Big Thing in the 1990s. I wrote programs to train multilayer neural networks with backpropagation of errors in the matrix language GAUSS.
My feeling at the time was that NN’s were accuracy boosters – if you wanted to pick up that last increment of forecast accuracy. The cost came in terms of modeling time and modeling complexity.
I read somewhere recently that these issues were among the reasons support vector machines gained favor more recently.
But neural networks are great for one thing – nonparametric estimation of nonlinear relationships. This is because the standard neural network function at the nodes of the network is a sigmoid function.
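The sigmoid (logistic) node function is simple to write down; a quick Python sketch:

```python
import math

def sigmoid(x):
    """Logistic activation: squashes any real input into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))   # -> 0.5
print(sigmoid(4.0))   # close to 1
print(sigmoid(-4.0))  # close to 0
```

Because weighted sums of sigmoids can approximate a wide class of nonlinear functions, the network needs no prior assumption about the functional form of the relationship – which is what makes the estimation nonparametric.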
Here is a simple schematic Matlab provides, as a part of its neural network estimation package. The diagram refers to a nonlinear autoregressive neural network, which is trained to predict a time series from the past values of that time series.
The time series, in this case around 6000 values of daily returns for the S&P 500, dating back to January 4, 1990, is designated y(t) in the diagram.
There is a hidden layer of neurons in the first box going to the right – thirty such neurons. This model also looks at 30 lags of daily S&P 500 returns in order to predict on a next-trading-day basis.
The weights are indicated by W. Each of the 30 lagged daily returns is multiplied by a weight from the W vectors, and the weighted values are summed before entering the nonlinear transformation at each neuron – of which there are also 30. These 30 transformed values are then passed to the output layer, where they are collected into a single weighted sum and left untransformed, as indicated by the straight 45-degree line in the second box to the right.
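The forward pass just described – 30 lagged returns, weighted into 30 sigmoid neurons, then collected into a single linear output – can be sketched in Python/NumPy. The weights below are random placeholders for illustration; in the actual model they come out of Matlab’s training routine:

```python
import numpy as np

rng = np.random.default_rng(0)

N_LAGS = 30    # lagged daily returns fed to the network
N_HIDDEN = 30  # sigmoid neurons in the hidden layer

# Placeholder weights -- in practice these are set by training, not drawn at random
W_in = rng.normal(size=(N_HIDDEN, N_LAGS))  # input-to-hidden weight vectors (the W's)
b_in = rng.normal(size=N_HIDDEN)            # hidden-layer biases
W_out = rng.normal(size=N_HIDDEN)           # hidden-to-output weights
b_out = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nar_forward(lagged_returns):
    """One forward pass: 30 lags -> 30 sigmoid neurons -> untransformed linear output."""
    hidden = sigmoid(W_in @ lagged_returns + b_in)  # nonlinear hidden layer
    return W_out @ hidden + b_out                   # linear (45-degree) output node

lags = rng.normal(scale=0.01, size=N_LAGS)  # invented recent daily returns
print(nar_forward(lags))
```

The only structural difference from a linear autoregression is the sigmoid layer in the middle; with the hidden transformation removed, this collapses to an AR(30) model.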
The Matlab neural network toolbox has the above panel to help you train your network.
Training a network usually involves minimizing mean squared error on a training sample, while tracking the error rate on validation and test samples.
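A bare-bones version of that bookkeeping in Python – the mean squared error criterion plus a chronological split of the data. The 70/15/15 proportions are an assumption for illustration; Matlab’s defaults may differ:

```python
import numpy as np

def mse(pred, target):
    """Mean squared error between predictions and targets."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.mean((pred - target) ** 2))

# Chronological split keeps the time ordering intact for a time-series model
data = np.arange(100, dtype=float)  # stand-in for the returns series
n = len(data)
n_train = int(round(0.70 * n))
n_val = int(round(0.85 * n))
train, val, test = data[:n_train], data[n_train:n_val], data[n_val:]
print(len(train), len(val), len(test))  # -> 70 15 15
```

Training stops when validation error stops improving, and the held-out test sample gives an honest read on generalization.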
The S&P 500 daily returns data, discussed before in this blog (see here and here), are a good candidate for neural network modeling, because a linear regression model leaves residuals or errors which bunch distinctly around the zero error point, both positive and negative.
And my initial exploration of this tool shows encouraging results. I am able to duplicate the prediction rates for the correct sign of next-day returns from my best adaptive regression model. That adaptive regression model is based on both S&P daily returns and VIX daily returns; currently, however, my NN model uses only the S&P 500 series. So I get about 54-56 percent of the signs right with this first-cut NN model. I expect that when I add the VIX series and use another Matlab NN function, called narxnet, I will find the accuracy boost I have often noticed with a NN model.
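The 54-56 percent figure is a sign hit rate, which is easy to compute; here is a small Python sketch with invented numbers, not actual model output:

```python
import numpy as np

def sign_hit_rate(predicted, actual):
    """Fraction of days on which the predicted return sign matches the realized sign."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.mean(np.sign(predicted) == np.sign(actual)))

# Toy series -- invented numbers, not model output
pred = [0.01, -0.005, 0.002, 0.004]
act = [0.02, 0.001, 0.003, -0.01]
print(sign_hit_rate(pred, act))  # -> 0.5
```

Anything reliably above 50 percent on out-of-sample data is interesting for a daily returns series, though transaction costs can easily eat a small edge.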
There is one advanced NN model I am particularly looking forward to deploying – the time delay neural network estimated by the Matlab function timedelaynet.
I’ll have a more complete report in a few days. Stay tuned. I recommend signing up for the email and I’ll keep you posted.