Quartz predicts Toyota will probably use the weaker yen to add better components or accessories rather than cut prices. Yet those improvements might actually push down government measures of consumer vehicle prices, because US government price watchers count quality improvements as price declines, thanks to something called “hedonic pricing.”
Rascoff: We also come out with Zestimates [for] new homes, based on the property records that either come to us from the county [where the home is located] or from the listings feeds [sent by] home builders. Also, when we get new information about a home, the Zestimate will change.
Knowledge@Wharton: Susan [Wharton Professor Susan Wachter], what is your view of the process Rascoff just described? How sound, from a pricing standpoint, is it?
Wachter: It’s a great process. It’s using what we call hedonic analysis. It’s taking the attributes of the property and weighting them, to efficiently use it statistically to come out with the best estimate.
Hedonic analysis, or hedonic pricing, is a fairly simple but powerful method of valuing the attributes or features bundled together in products. It works with actual market data, usually discoverable through the internet, as opposed to the typically more costly survey data of a conjoint analysis.
Basically, what you do is create a database like the following.
Here you have a bunch of products, 30 in all, down the first column.
Then, the features of these products are tabulated in the subsequent ten columns. In other words, each product can have up to ten significant features or options. Note that only Product 1 is “fully-featured.” It also has the highest price; the prices are listed in the final column of the table.
Now in order to discover the values or hedonic prices of the individual product features, we can estimate a regression,
ln(Price) = β0 + β1x1 + β2x2 + … + β10x10
If we do this for the above dataset, we obtain the following ranking of the features’ contributions to product price.
So Feature 5 is more than twice as valuable to customers as Feature 2, and its presence represents nearly a third of a fully-featured product’s value.
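To make this concrete, here is a minimal sketch of how such a hedonic regression can be run. The feature dummies, coefficients, and prices below are synthetic stand-ins for the table above, invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the table in the post: 30 products, 10 binary features.
n_products, n_features = 30, 10
X = rng.integers(0, 2, size=(n_products, n_features)).astype(float)
X[0, :] = 1.0  # Product 1 is "fully-featured"

# Hidden hedonic coefficients, used only to generate illustrative prices.
beta0 = 4.0
beta = np.linspace(0.05, 0.30, n_features)
log_price = beta0 + X @ beta + rng.normal(0, 0.02, n_products)

# Hedonic regression: ln(Price) = b0 + b1*x1 + ... + b10*x10, fit by OLS.
A = np.column_stack([np.ones(n_products), X])
coef, *_ = np.linalg.lstsq(A, log_price, rcond=None)

# Because the regression is in logs, each slope approximates the
# proportional price premium commanded by its feature.
for i, b in enumerate(coef[1:], start=1):
    print(f"Feature {i}: ~{100 * b:.1f}% of price")
```

With enough products and feature variation, the estimated slopes recover the implicit prices of the individual features, which is exactly the ranking exercise described above.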
This approach, which I am only sketching here, is useful in configuring new products and their prices.
There also are applications to evaluating changes in the quality of goods. These methods have usually been developed with a focus on building better price indexes, such as the Consumer Price Index (CPI).
In the previous post, I put down some ideas I have had for a long time on the context of forecasting – what you might call its meta-theory. Time and again, I have seen the forecasting operation suspended when business went bad. And I have also seen forecasts based on fairly mechanical extrapolation of past patterns embraced by businesses and groups which, by all rights, should be making more aggressive adaptations.
Incidentally, I exempt Microsoft from these remarks, although there is clearly need for Microsoft to be vigilant about change. To Microsoft’s credit, however, forecasting was supported internally during the worst period of the financial crisis of 2008-2009.
But I wanted to build on what Bill Gross said in the video I embedded. Gross looked older in that video, and perhaps was not the most persuasive spokesman for the occasion.
What Bill Gross said, basically, is that a lot of what we see today can be chalked up to the actions of the US Federal Reserve Bank and, now, the new-found activism of the Bank of Japan. And their activism is going to have an endpoint, probably in 2015 or 2016, when interest rates will inch up again and central banks will begin to unwind or at least not increase their positions in long-term assets, like mortgage-backed securities.
The Interest Rate, a Fundamental Constant in the Economic Process, a Reward for “Waiting” – NOT
When I was learning economics and business in my teens and twenties, I got the idea that the interest rate was like a constant of nature – maybe like g, the gravitational constant. Of course, there is a spectrum of interest rates, ranging from the very short term, like the federal funds rate, to rates on ten-year or even longer term Treasury bonds. But the prime rate, the rate banks charge their best customers, can be understood as a composite of some risk-free interest rate plus the inflation rate. And these variables – the risk-free interest rate and the long-term inflation rate – don’t change. They are more or less fixed historically, or so I thought.
Then, later, I learned that the planners of the Soviet Union had made a fatal mistake when they did not use proper interest rates to reflect the cost of capital.
Well, it would seem to me that these ideas about the sanctity or specialness of the interest rate must be wrong. Interest rates are artifacts, depending on someone’s convenience.
A lot of my conservative friends, and lots of other people, are surprised, shocked, and appalled to learn that central banks, like the US Federal Reserve Bank, can “create money” at the stroke of a pen – or, more accurately, the click of a computer mouse.
It somehow seems “not right.” Most people slave long hours for a boss who may be domineering, put off seeing loved ones during the day, even develop medical conditions from job-related stress, all so that someone will pay them a few of these dollars.
Yet the banker of bankers can go to its computer files, which record commercial bank reserves deposited in the Federal Reserve System. The banker of bankers can pull this stuff up on screen and manually, as it were, add $1 million to Bank X’s reserve account, corresponding to the purchase of mortgages issued by Bank X, or possibly some other asset of Bank X.
It’s clear I could go on a rant here, but let’s return to “what is going to happen.”
Global Growth Slowing
Scientifically, I see the global system in a slowing mode. This would be a huge understatement for Europe, which is falling deeper and deeper into recession, led, apparently, by managerial pygmies, not to disparage that noble group of residents of the deep jungle in Africa.
The US is doing as well as can be expected, given a high degree of political dysfunction. But, as everyone observes, the economic recovery has been disappointing, from the standpoint of employment and economic growth – even though profits have hit record high levels in recent years. So now, it appears that we have moved through the period of maximum profits, and it is really the US Fed continuing to pump money into the system which is responsible for the buoyancy in the stock market.
Economic growth in China is slowing. Of course, this means annual GDP growth on the order of 6-7 percent, rather than 8-10 percent. But constraints on Chinese growth are emerging. They include wage and labor demands, the property bubble which has absorbed Chinese middle class savings, and, of course, the new, downward prospects of Europe – a major trading partner.
Only Japan is a bright spot in the firmament. I don’t want to comment much further at this time on the Japanese situation, although I have resources in that area to draw on. By itself, Japan cannot pull the global economy out of its doldrums.
In an alternative political universe, the United States and Europe also would substantially increase spending on crumbling or inadequate infrastructure and on educating the next generation of workers. This could be financed by additional government debt and possibly new taxes, but would be counterbalanced by corresponding increases in productivity, at least in the longer term. Such policies could help avert a slide through what looks like two years of slowing growth and increasing political frustration, increasing social conflict, related to the embrace of austerity policies, capped by some type of even worse economic crisis.
As it stands, it is really only the continuing input of liquidity from central banks which is causing major markets to be buoyant.
Puzzles and the Need to Move on From the Absurd
Paul Krugman, whom I admire in many respects, has said that worrying about financial bubbles is tantamount to wanting the Fed to stop trying to push unemployment down.
But I don’t know. It seems to me that, because of mergers and bankruptcies, the US banking system, at least, is even more concentrated than before 2008.
Many people say that increased regulation of banks and shadow banks is necessary to avoid a repeat of 2008-2009, when the global financial system was almost paralyzed.
But again, I don’t know; the problem is deep, if you think about it. For example, the notional value of outstanding derivatives is on the order of 1 quadrillion dollars (see the Bank for International Settlements). Total global GDP, according to the International Monetary Fund, is on the order of 60 trillion. So, think of it: the “system” can offer a bribe. Of course, this bribe is based on smoke and mirrors, but, nevertheless, enough of the bribe might be fungible to be compelling.
So we need to rethink regulation. The regulators, if they are human, are simply too easy to corrupt, given the high stakes involved in this “game.”
So, if, dear reader, you are still wondering, “what is going to happen” I really can’t help you.
The handwriting is on the wall.
But don’t despair entirely. There is interesting stuff. For example, I’m fascinated by the possibilities for analyzing “fat” databases, where there are fewer observations than predictors. There also is neat stuff on pricing new products and measuring quality in products. And just to remind us to stay humble, I’ve been seeing studies which liken consumer visits to their favorite stores over time to our animal cousins visiting the same places with predictable regularity, and so forth. There is a lot of cool math and data analytics out there.
And indeed, mathematics and analytics may prove to be our salvation, in the sense that we definitely need new thinking. It is ridiculous to replicate the automobile economy with current emissions characteristics in Beijing, at the cost of hundreds of thousands of cases of severe respiratory distress. It is absurd to allow killing people who are different – killing the Other – to be the prime mover of any economy. Some savant has to emerge – perhaps from central Asia (just a thought) or Uruguay – to lead us out of the wilderness.
I guess my personal faith is that we will at least recognize the new insights if we continue to do our calculations, observe curious patterns in the numbers, and so forth.
The toughest time for a forecaster in an organization is when everything is going to hell in a handbasket. You can try to make the case that analysis can mark which lines of the business are dropping fastest, and that your services can help triage resources to sustain revenues and profits to the extent possible. But it’s probably just as likely your boss will decide to make immediate cost savings by cutting your position. Nuances in a situation of collapse are lost. Everybody jumps for the lifeboats. There has been a lamentable tendency, since the time of ancient wars, to kill the messenger bringing bad news.
The demand for forecasting picks up, however, when a downturn drags on. People are looking for signs of recovery, for the “green shoots.” Standard procedure in the business press is to note “the rate of decline has slowed” or “the market has probably hit bottom.”
The big secret, as far as I am concerned, is that the demand for forecasting is probably strongest within an organization that has done well and which is looking out on markets that may be getting choppier. Perhaps that organization’s product line is somewhat long in the tooth or dated, and new competitors are nipping away at market share. Then, the rhythm and cadence of forecasts based on past patterns are most happily accepted and celebrated.
Clearly, this is dangerous, but, on the other hand, it is remarkable how long obvious things can kind of hang suspended in the air, before the other shoe drops.
I’m invoking a lot of metaphors here partly because I’m flying on the “intuitive autopilot.” Unless there is some big game-changer, like a major war flaring up somewhere, everybody seems pretty clear what the situation is and how long it is likely to last. That’s my intuition.
I attach a recent interview with Bill Gross, founder of Pimco, the bond giant, because he sort of does the trick with his “smiley face model” of global finance. The US Federal Reserve program of continued purchases of longer term assets, such as mortgage-backed securities, plus the new stimulus program of the Bank of Japan – these central bank actions are, in Gross’s telling, sustaining and prolonging the runup of the US stock market.
Tablet apps will generate $8.8 billion in revenue in 2013, compared to the $16.4 billion expected from smartphone apps, according to the latest forecasts from ABI Research. Of the combined $25 billion, 65% will come from Apple’s iOS ecosystem, 27% from Google’s Android, and the remaining 8% from the other mobile platforms.
By 2017, ABI predicts, tablet apps will “nearly match the smartphone application revenues and surpass them in 2018, when the combined revenue base will reach $92 billion.”
The Apple store releases counts of active applications by month, shown below.
The number of active apps shows clearly exponential growth, at a rate of about 5% per month – around 80% per year, compounded.
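Compounding makes that monthly rate add up quickly. A quick sketch, using a synthetic series standing in for the Apple counts (which are not reproduced here), shows how a log-linear fit recovers the growth rate and what it implies annually:

```python
import numpy as np

# Synthetic monthly app counts growing 5% per month (a stand-in for the
# Apple active-app series, which is not reproduced in this post).
months = np.arange(36)
counts = 100_000 * 1.05 ** months

# A log-linear fit recovers the monthly growth rate; compounding it
# twelve times shows what 5%/month means on an annual basis.
slope, intercept = np.polyfit(months, np.log(counts), 1)
monthly_rate = np.exp(slope) - 1
annual_rate = (1 + monthly_rate) ** 12 - 1
print(f"monthly: {100 * monthly_rate:.1f}%, annual: {100 * annual_rate:.0f}%")
```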
Identifying seasonal variation and seasonal “adjustment” has been the bread and butter of applied social and economic statistics for some time.
The topic can become truly baroque. For example, the US Census X-12 ARIMA and US Census X-13 ARIMA seasonal adjustment procedures have undergone continuous update and modification since they were introduced by Julian Shiskin and others at the U.S. Bureau of the Census, beginning in 1954.
But the general facts are well known. Demand peaks for many products and services late in the (solar) calendar year – in the fourth quarter. This is sometimes called the “Christmas effect.”
Supply chain effects for goods, such as consumer electronics destined for the Christmas market, translate into demands for inputs further back up the supply chain. Thus, IT research shows semiconductors must be fabricated weeks to months earlier.
Obviously, the opposite seasons of the Southern Hemisphere can mean a different cadence of business, especially where agriculture is still a big economic activity.
Big exceptions relate to what might be called “calendar effects”.
Similar sales effects, of course, exist for important US holidays, e.g. the Mother’s Day effect.
In practical terms, identifying and integrating seasonality into forecasting depends on the available time depth in the data.
Let me make some comments in this post about what you can do if the time depth of data is, so to speak, small to moderate – say a few years of quarterly or monthly data.
Using a Buys-Ballot Table
A Buys-Ballot table can help quickly assess the plausibility of additive and multiplicative seasonal effects on a monthly or quarterly basis.
The following chart shows Australian beer production, 1989-1994, from Rob Hyndman’s time series archives, maintained by DataMarket.
I’ve arranged this monthly production data in a Buys-Ballot table, listing the monthly figures down columns for each year, with totals and average monthly production at the bottom of the table.
Conventionally, there are two possibilities – additive and multiplicative seasonality. So, I calculate two further tables, each premised on one of these assumptions.
The monthly numbers in the table are developed either by adding the average (additive) differences, in red in the top table, to average beer production for a year, or by multiplying the proportions in the right column of the lower table (also in red) against the total beer production for a year.
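The same calculations are easy to sketch in code. This example uses synthetic monthly data with a declining trend and a multiplicative seasonal pattern – invented numbers, not the actual Australian beer figures:

```python
import numpy as np

# Synthetic stand-in for a Buys-Ballot table: rows = months, cols = years.
rng = np.random.default_rng(1)
n_years, n_months = 6, 12
seasonal = 1 + 0.3 * np.sin(2 * np.pi * np.arange(n_months) / 12)  # seasonal pattern
annual_level = 150 - 5 * np.arange(n_years)                        # declining trend
table = np.outer(seasonal, annual_level) * rng.normal(1, 0.02, (n_months, n_years))

annual_totals = table.sum(axis=0)
annual_means = table.mean(axis=0)

# Additive factors: average deviation of each month from its year's mean.
additive = (table - annual_means).mean(axis=1)

# Multiplicative factors: average share of each month in its year's total.
multiplicative = (table / annual_totals).mean(axis=1)

# Reconstruct monthly figures from annual aggregates under each assumption.
fit_add = annual_means + additive[:, None]
fit_mult = annual_totals * multiplicative[:, None]

mape = lambda fit: 100 * np.mean(np.abs((table - fit) / table))
print(f"MAPE additive: {mape(fit_add):.2f}%  multiplicative: {mape(fit_mult):.2f}%")
```

As in the beer example, the two assumptions often produce quite similar reconstructions, which is why the MAPE comparison is worth running on any particular dataset.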
Overall, as the annual totals show, Australian beer production in this period, 1989-1994, shows evidence of a declining trend.
Interestingly, there is very little difference in the predictive power of the proportions or multiplicative model and the additive model, as the following mean absolute percentage error (MAPE) calculations by month show.
While one method works better than the other in specific years, there is little overall difference in the MAPE.
Notice that the average additive component or multiplicative factor for a month is applied to the known annual average or total production by year.
So what we have here is a process of analysis which casts light on how we might break out monthly figures from annual totals, were we somehow to know those totals.
In fact, with this amount of data it is possible to integrate estimation of trend and seasonal factors. This can be done with a Winters or three-parameter exponential smoothing model or a Box-Jenkins model.
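For readers who want to see the moving parts, here is a bare-bones sketch of three-parameter (Winters) smoothing with additive seasonality, run on synthetic data. This is a teaching toy with simple initializations, not a substitute for a proper package:

```python
import numpy as np

def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.2, horizon=12):
    """Three-parameter (Winters) exponential smoothing, additive seasonality.

    y: observed series, m: season length. A minimal teaching version.
    """
    y = np.asarray(y, dtype=float)
    # Initialize level, trend, and seasonals from the first two seasons.
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    season = y[:m] - level

    for t in range(len(y)):
        s = season[t % m]
        last_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s

    # h-step-ahead forecasts: level + trend extrapolation + seasonal.
    return np.array([level + (h + 1) * trend + season[(len(y) + h) % m]
                     for h in range(horizon)])

# Synthetic monthly series with a mild downtrend and 12-month seasonality.
t = np.arange(72)
y = 150 - 0.3 * t + 20 * np.sin(2 * np.pi * t / 12)
fc = holt_winters_additive(y, m=12)
print(np.round(fc, 1))
```

On data like this, with a trend and a stable seasonal pattern, the smoothed forecasts track the continuation of the series closely.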
Using Automatic Forecasting Software
Realistically, people no longer write their own computer programs to estimate exponential smoothing or Box Jenkins models. There are too many excellent off-the-shelf software packages to estimate both these types of models.
The only point I want to make in the rest of this post, apart from illustrating output of Forecast Pro for these Australian beer production data, is that both models – three-parameter or Winters exponential smoothing, and Box-Jenkins – can be suggested by diagnostic criteria for the same time series, depending on the period, amount of data, and so forth.
Thus, if we utilize the whole series shown in the initial chart above, Forecast Pro indicates that a Box-Jenkins model is best, based on the MAD, or mean absolute deviation, for out-of-sample errors in backcasting.
The output is shown below.
Notice the format of the Box-Jenkins or ARIMA model, ARIMA(0,0,0)×(0,1,0). I’m going to develop a post on this type of symbolism soon – it is inevitable. But for the moment, just scan the output of these two pages. Notice that forecasts are produced for 1995, and it turns out we can check those against actuals for several months. The forecast confidence intervals do in fact contain the actual numbers through August 1995, which are available.
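Pending that post, this particular model is simple enough to unpack directly. ARIMA(0,0,0)×(0,1,0) with a 12-month season (the data are monthly) has no AR or MA terms – just one seasonal difference. Writing B for the backshift operator, the model is (1 − B¹²)yₜ = eₜ, so the point forecast for any month is simply the value observed twelve months earlier. A sketch with invented data:

```python
import numpy as np

# Invented monthly series: stable 12-month seasonality plus noise.
rng = np.random.default_rng(2)
t = np.arange(60)
y = 100 + 15 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 60)

# The one seasonal difference: (1 - B^12) y_t = y_t - y_{t-12}.
seasonal_diff = y[12:] - y[:-12]

# With no AR/MA structure, the point forecast for each coming month is
# just the corresponding value from twelve months earlier.
forecast_next_year = y[-12:]
print(f"mean seasonal difference: {seasonal_diff.mean():.2f}")
```

The seasonal differences here hover around zero, which is what makes this "seasonal random walk" an adequate description of a series whose seasonal pattern repeats without a trend.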
Now let’s trim the data down to 1989 to 1993. Doing this and running the automatic forecast setup, we find that a Winters exponential smoothing model is recommended.
I hope this provides a feel for hands-on analysis in this situation. Clearly, there is a need to cover background on the Winters and Box Jenkins models.
But except for including trading-day and other calendar effects, these procedures are kind of the backbone of applied forecasting for seasonal patterns in a real world context. Start with simple procedures to get an idea of what might be going on. Then, apply off-the-shelf software, probably looking at what is recommended over several periods within the dataset. Then, go forward with the forecast writeup.
To understand what financial markets are doing today and where they may go in the future, you have to understand how financial markets have changed over the past 300 years.
I talked with Ralph Dillon, Global Financial Data’s Vice-President of Institutional Sales, the other day. Started in 1992 by Dr. Bryan Taylor, Global Financial Data numbers major hedge funds, brokers, and researchers among its clients. Basic business access runs about $25,000, with special pricing for academic research.
Significantly, Harvard’s Reinhart and Rogoff cite GFD in the references for their popular 2009 book This Time Is Different (a later post will highlight how data issues in a paper by these authors went viral).
The Issue of “Extensions” and Back-Extrapolations of Data
Unlike the 10-year bond yield series, based on bonds actually issued by the US Treasury starting in the late 1700’s, some GFD series are obvious constructions.
There is lots here to ponder, such as apparent mean reversion in the GDP series – contesting the idea that we are essentially dealing with a random walk with drift – or issues with this low apparent long-term inflation rate. And we might want to look further into whether the above-trend nominal GDP at the end of the series shown is complemented by below-trend growth of nominal GDP in 2008 and after.
I’m preparing blogs on a remarkable database which tracks the 10-year Treasury rate back to 1790, when the first US bonds were issued – and seasonal variation with a hat tip to the Chinese New Year and Islamic calendar, among other topics.
But, it’s Spring (in the Northern Hemisphere at least) and we all want to fly like the birds which are so happy.
So I have a dear friend who has put together a YouTube video about a scrapbook his mother, a Russian physicist, made when she was a young woman. I think it is quite moving, and also maybe very true.
The total size of the world population is likely to increase from its current 7 billion to 8–10 billion by 2050. This uncertainty reflects unknown future fertility and mortality trends in different parts of the world. But the young age structure of the population, and the fact that fertility is still very high in much of Africa and Western Asia, make an increase of at least one more billion almost certain. Virtually all the increase will happen in the developing world. For the second half of the century, population stabilization and the onset of a decline are likely.
This projection to 2050 does not suggest a point or timing for peak global population, but there is more and more discussion of this, because of large and widespread drops in fertility in surprising places, e.g. Mexico and Brazil.
World population probably passed 7 billion persons in 2011, and is projected to exceed 9 billion by 2044.
There is a sharp contrast between developing and developed countries, when it comes to population growth. Perhaps 97 percent of recent population growth is in developing countries, with higher birth rates and a younger population.
Population growth in the developed world, on the other hand, is moving toward a standstill, and in some places even decline, due to lower fertility and an aging population. This is known as the “demographic divide.” Europe may be the first region to see long-term population decline, largely as a result of low birth rates in Eastern Europe and Russia.
1. One billion people are hungry, and 1 billion are obese.
2. Three billion people live on two dollars a day.
3. One billion people live in slums… about half the world lives in cities — let’s say 3.5 billion… of those, a billion are living in slums without adequate sanitation, electricity, water, security, legal protection, transport, or housing.
4. Over 200 million women have unmet needs for contraception.
5. Today, 1.5 billion people live in rich countries… Europe (Western Europe mainly), the United States and Canada, the overseas English-speaking countries of Australia and New Zealand, Japan, and some of the Asian tigers.
6. Four billion people live in middle-income countries… recently emerged from poverty, with fast-growing economies… China, India, Brazil, many countries in Latin America.
7. Economically at the bottom are 1.5 billion people… sub-Saharan Africa, but also Haiti, and many provinces of South Asia in Pakistan, Afghanistan, India, and Bangladesh.
8. Seniors now outnumber toddlers, and this trend will continue… In the year 2000, about 10 percent of the world’s people were age 0–4, and about 10 percent were age 60+… by 2050, we anticipate that the number of people 60+ will be about 3.5 times the number of people age 0–4.
9. More than half of Earth’s inhabitants today live in cities, and two-thirds will live in cities by 2050.
10. More than half of women today have fewer children than the number needed to replace themselves and their partner.
It’s also true that current projections locate the fastest growing populations precisely in those areas which are disadvantaged from the standpoint of economy and infrastructure.
Methods of Forecasting Population
Extrapolation is the most common approach in demographic forecasting. There also are structural models, which explain demographic rates in terms of the underlying economics, sociology, and other factors. One of the major approaches is cohort-survival analysis.
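A toy version of cohort-survival analysis can be set up with a Leslie matrix, which projects age-group populations forward by applying fertility rates (top row) and survival rates (sub-diagonal). The age groups, rates, and starting cohorts below are all invented for illustration, not estimates for any real population:

```python
import numpy as np

# Age groups 0-19, 20-39, 40+, projected in 20-year steps. All rates and
# starting cohorts are invented for illustration only.
fertility = [0.0, 1.1, 0.2]   # births per person per step, by age group
survival = [0.98, 0.95]       # fraction surviving into the next age group

L = np.zeros((3, 3))
L[0, :] = fertility           # top row: fertility of each age group
L[1, 0] = survival[0]         # sub-diagonal: survival into the next group
L[2, 1] = survival[1]

pop = np.array([2.0, 1.8, 1.5])   # billions in each age group (invented)
for step in range(3):             # project 60 years in 20-year steps
    pop = L @ pop
    print(f"after {20 * (step + 1)} years: total {pop.sum():.2f} billion")
```

Real cohort-survival forecasts work the same way but with single-year or five-year age groups, sex-specific rates, migration terms, and fertility and mortality schedules that themselves must be forecast.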
There also is a considerable literature on population forecast errors. Familiar themes emerge, such as -
Personally, I see the movement of people around the world from rural to urban environments to be one of the most crucial trends, and probably one of the most certain. Yet, as Cohen points out in his above-noted ten points discussion, this creates immense need for new infrastructure. And the fact is, there already are great unmet needs for water and sanitation systems, as well as other centralized services.
With respect to the longer term future, there are the technological optimists and what you might call the realists. Two TED talks, by Erik Brynjolfsson and Robert Gordon, beautifully illustrate this tension, and are a good place to start trying to evaluate what may lie in store for us.
Brynjolfsson is an MIT Professor and one of the few economists to seriously study the impacts of information technology (IT) on production.
Robert Gordon is a distinguished macroeconomist and Professor at Northwestern University, whose father, mother, and brother also made contributions to economics.
Following these contrasting talks, TED organized a direct debate between Gordon and Brynjolfsson.
In making the case for technological optimism, Brynjolfsson admits a growing divergence between productivity growth and employment – his reference being US-centric. Some weeks ago, however, I was amazed when a colleague pointed out that Foxconn – the gigantic Chinese (originally Taiwanese) electronics contract manufacturer – announced plans to install a million robots in its factories, about the current number of its assembly workers.
The World Economic Outlook (WEO) of the IMF evaluates risks and prospects in the global economy, presenting multi-year forecasts by global region and more than 150 countries.
The International Monetary Fund, housed in the above building in Washington, D.C., projects slower growth in 2013 than 2014, globally, as the following table from the April 2013 release shows. The Euro Area economy is projected to actually contract in 2013, while emerging market and developing economies generally show sustained growth.
These projections come out twice a year, in the Spring and Fall, and are updated in January and the Summer.
How good are these projections?
I’m looking at this question with WEO reports dating back to 1999.
Here, for example, is a graph of next-year forecast errors for Spain, based on the Spring WEO reports.
So for the early part of the first decade after the turn of the century, IMF economists underestimated Spanish growth slightly, much of the time.
With the onset of the financial crisis in 2007-2008, however, the IMF forecasts were off by several percentage points. And then large errors returned in 2011, as the EU zone authorities enforced a draconian austerity on Spain.
The chart measures the forecast error as the difference between the forecast next year percent growth of real GDP minus the actual percent growth for that year. For simplicity, it is based just on the Spring release of the World Economic Outlook.
Given these errors, I wondered what a 95 percent confidence interval for future IMF projections for real GDP growth for Spain might look like.
Bootstrapping time series data is more complicated than bootstrapping a simple random sample of independent observations. Most time series, and certainly much business and economic time series data, exhibit time dependencies, such as autocorrelation between adjacent or nearby values in the series.
The bootstrap, of course, is a method of obtaining confidence intervals and parameter estimates by resampling the original sample, usually with replacement.
Because of the time dependency of the errors in the above graph, I applied block sampling with overlapping blocks. This in essence involved randomly selecting blocks of three consecutive errors from those shown in the graph, and constituting 1000 bootstrap samples of length 12.
The mean error of these 1000 bootstrap samples was 0.802, and the percentile method gives 95 percent confidence intervals between -0.13367 and 1.808167.
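Here is a sketch of the overlapping-block procedure in Python. The error values below are invented stand-ins, since the Spain error series itself is not reproduced in this post; the mechanics – overlapping blocks of three, spliced into pseudo-series of length 12, with a percentile confidence interval on the mean – follow the description above:

```python
import numpy as np

# Invented stand-ins for twelve annual forecast errors (forecast - actual),
# including one large crisis-year error.
errors = np.array([0.3, -0.2, 0.5, 0.4, 0.1, 5.2, 1.0, 0.8, -0.5, 1.9, 2.1, 0.6])

block_len, n_boot, target_len = 3, 1000, 12

# All overlapping blocks of three consecutive errors.
blocks = np.array([errors[i:i + block_len]
                   for i in range(len(errors) - block_len + 1)])

rng = np.random.default_rng(3)
boot_means = np.empty(n_boot)
for b in range(n_boot):
    # Splice 4 randomly chosen blocks into a pseudo-series of length 12.
    picks = rng.integers(0, len(blocks), target_len // block_len)
    boot_means[b] = blocks[picks].ravel().mean()

# Percentile-method 95 percent confidence interval for the mean error.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"bootstrap mean {boot_means.mean():.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Keeping consecutive errors together inside each block is what preserves the short-run autocorrelation that a naive one-observation-at-a-time resample would destroy.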
Let’s think about what this means, and in what respect it is a reasonable estimate.
Basically, the suggestion is that, on average, the IMF currently over-forecasts Spanish real economic growth by about 1 percent per annum. The bootstrap estimate supports this, and indicates that the over-forecast could plausibly approach two percentage points.
But, one might counter, these bootstrap estimates are based on using the very large forecast error in 2008 – more than five percentage points error.
If the circumstances of 2008 are considered rare – a once-in-a-lifetime occurrence – then it is not really reasonable to include them in the bootstrapping.
On the other hand, if one considers the situation in the Eurozone currently, with economic contraction on the agenda, following a year or two of record-level unemployment in several countries (including Spain), a breakup of the eurozone would seem to be a continuing possibility. Of course, breakup of the eurozone would wreak unpredictable havoc on the global financial system, not to mention the further negative consequences for economic growth in Europe.
In this sense, 2008 may not be an outlier, and should be included in the bootstrap.
The bootstrap distribution for the 1000 bootstrap samples, shown below, looks to me more informative than the simple year-by-year chart of errors above.
I find these techniques for bootstrapping parameters very intriguing, and I note the extensive technical literature on how best to guarantee fast convergence to unbiased values.