Cycles – 1

I’d like to focus on cycles in business and economic forecasting in the next few posts.

The Business Cycle

“Cycles” – in connection with business and economic time series – evoke the so-called business cycle.

Immediately after World War II, Burns and Mitchell offered the following characterization –

Business cycles are a type of fluctuation found in the aggregate economic activity of nations that organize their work mainly in business enterprises: a cycle consists of expansions occurring at about the same time in many economic activities, followed by similarly general recessions, contractions, and revivals which merge into the expansion phase of the next cycle.

Earlier, several types of business and economic cycles were hypothesized, based on their average duration. These included the 3 to 4 year Kitchin inventory investment cycle, a 7 to 11 year Juglar cycle associated with investment in machines, the 15 to 25 year Kuznets cycle, and the controversial Kondratieff cycle of 48 to 60 years.

Industry Cycles

I have looked at industry cycles relating to movements of sales and prices in semiconductor and computer markets. While patterns may be changing, there is clear evidence of semi-regular pulses of activity in semiconductors and related markets. These stochastic cycles probably are connected with Moore’s Law and the continuing thrust of innovation and new product development.

Methods

Spectral analysis, VAR modeling, and standard autoregressive analysis are tools for developing evidence for time series cycles. STAMP, now part of the Oxmetrics suite of software, fits cycles with time-varying parameters.

Sometimes one hears of estimation moving from the time domain into the frequency domain. Time series, as normally graphed with time on the horizontal axis, are in the “time domain.” This is where VAR and autoregressive models operate. The frequency domain is where we get indications of the periodicity of cycles and semi-cycles in a time series.
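As a quick illustration of the distinction – a sketch using simulated data rather than an actual economic series – R’s built-in spectrum() estimates a smoothed periodogram, so a cycle that is hard to see in the time domain shows up as a peak at its frequency:

```r
# Simulate a quarterly series with a 16-quarter (4-year) cycle buried in noise,
# then estimate its spectrum. The peak appears near frequency 1/16 = 0.0625
# cycles per observation.
set.seed(42)
t <- 1:200
x <- 10 * sin(2 * pi * t / 16) + rnorm(200, sd = 8)

plot.ts(x, main = "Time domain: simulated quarterly series")
spectrum(x, spans = c(3, 3), main = "Frequency domain: smoothed periodogram")
```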

Cycles as Artifacts

There is something roughly analogous to spurious correlation in regression analysis in the identification of cyclical phenomena in time series. Eugen Slutsky, a Russian mathematical economist and statistician, wrote a famous “unknown” paper on how moving averages of random numbers can create the illusion of cycles. Thus, if we add or average together elements of a time series in a moving window, it is easy to generate apparently cyclical phenomena. This can be demonstrated with the digits in the irrational number π, for example, since the sequence of digits 0 through 9 in its expansion is roughly random.
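Here is a minimal sketch of the Slutsky effect in R, using pseudo-random digits as a stand-in for the digits of π:

```r
# Moving averages of purely random digits produce smooth, wave-like series
# that look cyclical, even though there is no cycle in the underlying data.
set.seed(123)
digits <- sample(0:9, 400, replace = TRUE)

ma <- function(x, k) stats::filter(x, rep(1 / k, k), sides = 2)  # centered k-term moving average

smoothed <- ma(ma(ma(digits, 5), 5), 5)   # apply the 5-term average three times

plot.ts(smoothed,
        main = "Apparent 'cycles' from moving averages of random digits",
        ylab = "triple 5-term moving average")
```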

Significance

Cycles in business have a sort of reassuring effect, it seems to me. And, of course, we are all very used to any number of periodic phenomena, ranging from the alternation of night and day, the phases of the moon, and the tides to the myriad of biological cycles.

As a paradigm, however, cycles probably used to be more important in business and economic circles than they are today. There is perhaps one exception: rapidly changing high-tech fields, of which IT (information technology) is still in many respects a subcategory.

I’m looking forward to exploring some estimations, putting together some quantitative materials on this.

Links – late July

First post with my Android, so there are some minor items that need polishing – mainly how to embed links. Presently I am just placing the URLs above the title of each piece.

In any case, there are a couple of fairly deep pieces here.

Enjoy.

http://www.nanex.net/aqck2/4661.html

A detailed exposé on how the market is rigged from a data-centric approach

We received trade execution reports from an active trader who wanted to know why his large orders almost never completely filled, even when the amount of stock advertised exceeded the number of shares wanted. For example, if 25,000 shares were at the best offer, and he sent in a limit order at the best offer price for 20,000 shares, the trade would, more likely than not, come back partially filled. In some cases, more than half of the amount of stock advertised (quoted) would disappear immediately before his order arrived at the exchange. This was the case, even in deeply liquid stocks such as Ford Motor Co (symbol F, market cap: $70 Billion, NYSE DMM is Barclays). The trader sent us his trade execution reports, and we matched up his trades with our detailed consolidated quote and trade data to discover that the mechanism described in Michael Lewis’s “Flash Boys” was alive and well on Wall Street.

This is just beautifully done. clean, simple, irrefutable. i hope it gets read far and wide. –Michael Lewis after reading this article

http://www.counterpunch.org/2014/07/16/did-the-other-shoe-just-drop/

Did the Other Shoe Just Drop? Black Rock and PIMCO Sue Banks for $250 Billion

http://www.politico.com/story/2014/07/rand-paul-tech-donors-bay-area-108998.html

Rand Paul eyes tech-oriented donors, geeks in Bay Area. The libertarian wedge in a liberal-dem stronghold.

http://www.mydigitalfc.com/op-ed/predictive-analytics-world-cup-965

Predictive analytics at World Cup – Goldman Sachs does a big face plant, predicts Brazil would win. Importance of crowd-sourcing.

http://www.alphaarchitect.com/blog/2014/07/14/a-hands-on-lesson-in-return-forecasting-models/#.U8qojoXnbMI

A Hands-on Lesson in Return Forecasting Models – I’ve almost never seen a longer blog post, and it ends up dissing the predictive models it exhaustively covers. But I think you will want to bookmark this one and return to it for examples and ideas.

http://globaleconomicanalysis.blogspot.com/2014/07/yellen-yap-silliness-outright-lies-and.html

Yellen Yap: Silliness, Outright Lies, and Some Refreshingly Accurate Reporting – A point of concord between libertarian free market advocates and progressive-left commentators.

Video Friday – Andrew Ng’s Machine Learning Course

Well, I signed up for Andrew Ng’s Machine Learning Course at Stanford. It began a few weeks ago, and is a next generation to the lectures by Ng circulating on YouTube. I’m going to basically audit the course, since I started a little late, but I plan to take several of the exams and work up a few of the projects.

This course provides a broad introduction to machine learning, datamining, and statistical pattern recognition. Topics include: (i) Supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks). (ii) Unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning). (iii) Best practices in machine learning (bias/variance theory; innovation process in machine learning and AI). The course will also draw from numerous case studies and applications, so that you’ll also learn how to apply learning algorithms to building smart robots (perception, control), text understanding (web search, anti-spam), computer vision, medical informatics, audio, database mining, and other areas.

I like the change in format. The YouTube videos circulating on the web are lengthy, and involve Ng doing derivations on white boards. This is a more informal, expository format.

Here is a link to a great short introduction to neural networks.

[Image: video link thumbnail]

Click on the link above this picture, since the picture itself does not trigger the YouTube video.

Ng’s introduction on this topic is fairly short, so here is the follow-on lecture, which starts the task of representing or modeling neural networks. I really like the way Ng approaches this, grounded in biology.

I believe there is still time to sign up.

Comment on Neural Networks and Machine Learning

I can’t do much better than point to Professor Ng’s definition of machine learning –

Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI. In this class, you will learn about the most effective machine learning techniques, and gain practice implementing them and getting them to work for yourself. More importantly, you’ll learn about not only the theoretical underpinnings of learning, but also gain the practical know-how needed to quickly and powerfully apply these techniques to new problems. Finally, you’ll learn about some of Silicon Valley’s best practices in innovation as it pertains to machine learning and AI.

And now maybe this is the future – the robot rock band.


Seasonal Adjustment – A Swirl of Controversies

My reading on procedures followed by the Bureau of Labor Statistics (BLS) and the Bureau of Economic Analysis (BEA) suggests some key US macroeconomic data series are in a profound state of disarray. Never-ending budget cuts to these “non-essential” agencies, since probably the time of Bill Clinton, have taken their toll.

For example, for some years now it has been impossible for independent analysts to verify or replicate real GDP and many other numbers issued by the BEA, since only SA (seasonally adjusted) series are released – originally, supposedly, as an “economy measure.” Since estimates of real GDP growth by quarter are charged with political significance in an Election Year, this is a potential problem. And the problem is immediate, since the media naturally will interpret weak 2nd quarter growth – less than, say, 2.9 percent – as a sign the economy has slipped into recession.

Evidence of Political Pressure on Government Statistical Agencies

John Williams has some fame with his site Shadow Government Statistics. But apart from extreme stances from time to time (“hyperinflation”), he does document the politicization of the BLS Consumer Price Index (CPI).

In a recent white paper called No. 515—PUBLIC COMMENT ON INFLATION MEASUREMENT AND THE CHAINED-CPI (C-CPI), Williams cites Katharine Abraham, former commissioner of the Bureau of Labor Statistics, who notes,

“Back in the early winter of 1995, Federal Reserve Board Chairman Alan Greenspan testified before the Congress that he thought the CPI substantially overstated the rate of growth in the cost of living. His testimony generated a considerable amount of discussion. Soon afterwards, Speaker of the House Newt Gingrich, at a town meeting in Kennesaw, Georgia, was asked about the CPI and responded by saying, ‘We have a handful of bureaucrats who, all professional economists agree, have an error in their calculations. If they can’t get it right in the next 30 days or so, we zero them out, we transfer the responsibility to either the Federal Reserve or the Treasury and tell them to get it right.’”

Abraham is quoted in newspaper articles as remembering sitting in Republican House Speaker Newt Gingrich’s office:

“ ‘He said to me, If you could see your way clear to doing these things, we might have more money for BLS programs.’ ”

The “things” in question were to move to quality adjustments for the basket of commodities used to calculate the CPI. The analogue today, of course, is the chained-CPI measure which many suggest is being promoted to slow cost-of-living adjustments in Social Security payments.

Of course, the “real” part in real GDP is linked with the CPI inflation outlook through a process supervised by the BEA.

Seasonal Adjustment Procedures for GDP

Here is a short video by Jonathan H. Wright, a young economist whose Unseasonal Seasonals? is featured in a recent issue of the Brookings Papers on Economic Activity.

Wright’s research is interesting to forecasters, because he concludes that algorithms for seasonally adjusting GDP should be selected based on their predictive performance.

Wright favors state-space models, rather than the moving-average techniques associated with the X-12 seasonal filters that date back to the 1980’s and even the 1960’s.

Given BLS methods of seasonal adjustment, seasonal and cyclical elements are confounded in the SA nonfarm payrolls series, due to sharp drops in employment concentrated in the November 2008 to March 2009 time window.

The upshot - initially this effect pushed reported seasonally adjusted nonfarm payrolls up in the first half of the year and down in the second half of the year, by slightly more than 100,000 in both cases…

One of his prime exhibits compares SA and NSA nonfarm payrolls, showing that,

The regular within-year variation in employment is comparable in magnitude to the effects of the 1990–1991 and 2001 recessions. In monthly change, the average absolute difference between the SA and NSA number is 660,000, which dwarfs the normal month-over-month variation in the SA data.

[Chart: seasonally adjusted (SA) versus not seasonally adjusted (NSA) nonfarm payrolls]

The basic procedure for this data and most releases since 2008-2009 follows what Wright calls the X-12 process.

The X-12 process focuses on certain types of centered moving averages with fixed weights, based on distance from the central value.

A critical part of the X-12 process involves estimating the seasonal factors by taking weighted moving averages of data in the same period of different years. This is done by taking a symmetric n-term moving average of m-term averages, which is referred to as an n × m seasonal filter. For example, for n = m = 3, the weights are 1/3 on the year in question, 2/9 on the years before and after, and 1/9 on the two years before and after. The filter can be a 3 × 1, 3 × 3, 3 × 5, 3 × 9, 3 × 15, or stable filter. The stable filter averages the data in the same period of all available years. The default settings of the X-12…involve using a 3 × 3, 3 × 5, or 3 × 9 seasonal filter, depending on [various criteria]
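To make the quoted arithmetic concrete, here is a small sketch of just the seasonal-factor step for a 3 × 3 filter – an illustration, not the full X-12 procedure, and the example values are made up:

```r
# 3 x 3 seasonal filter: a 3-term moving average of 3-term averages,
# which works out to the weights 1/9, 2/9, 3/9, 2/9, 1/9.
w3x3 <- c(1, 2, 3, 2, 1) / 9

# Apply the filter to the same calendar month across successive years,
# e.g. hypothetical detrended January values (series-to-trend ratios), one per year.
jan_ratios <- c(0.91, 0.93, 0.90, 0.92, 0.94, 0.91, 0.90, 0.93, 0.92)

seasonal_factor <- stats::filter(jan_ratios, w3x3, sides = 2)
seasonal_factor   # note the NAs at both ends, where the centered average is undefined
```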

Obviously, a problem arises at the beginning and at the end of the time series data. A work-around is to use an ARIMA model to extend the time series back and forward in time sufficiently to calculate these centered moving averages.

Wright shows these arbitrary weights and time windows lead to volatile seasonal adjustments, and that, predictively, the BEA and BLS would be better served with a state-space model based on the Kalman filter.
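For a feel for the state-space alternative, here is a minimal sketch using base R’s StructTS – a basic structural model (level, slope, seasonal) estimated by maximum likelihood with the Kalman filter – applied to a built-in monthly series as a stand-in, not to the payrolls data Wright analyzes:

```r
# Fit a basic structural model (local level + slope + seasonal) and inspect
# the estimated variance components and the filtered state estimates.
y   <- log(AirPassengers)               # built-in monthly series, 1949-1960
fit <- StructTS(y, type = "BSM")        # maximum likelihood via the Kalman filter

fit$coef                                # estimated variances: level, slope, seasonal, observation noise
plot(cbind(observed = y, fitted(fit)))  # filtered level, slope, and seasonal components
```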

Loopy seasonal adjustment leads to controversy that airs on the web – such as this piece by Zero Hedge from 2012, which highlights the “fictitious” aspect of seasonal adjustments of highly tangible series, such as the number of persons employed -

What is very notable is that in January, absent BLS smoothing calculation, which are nowhere in the labor force, but solely in the mind of a few BLS employees, the real economy lost 2,689,000 jobs, while net of the adjustment, it actually gained 243,000 jobs: a delta of 2,932,000 jobs based solely on statistical assumptions in an excel spreadsheet!

To their credit, Census now documents an X-13ARIMA-SEATS Seasonal Adjustment Program with software incorporating elements of the SEATS procedure originally developed at the Bank of Spain and influenced by the state space models of Andrew Harvey.

Maybe Wright is getting some traction.

What Is The Point of Seasonal Adjustment?

You can’t beat the characterization, apparently from the German Bundesbank, of the purpose and objective of “seasonal adjustment.”

..seasonal adjustment transforms the world we live in into a world where no seasonal and working-day effects occur. In a seasonally adjusted world the temperature is exactly the same in winter as in the summer, there are no holidays, Christmas is abolished, people work every day in the week with the same intensity (no break over the weekend)..

I guess the notion is that, again, if we seasonally adjust and see a change in direction of a time series, it probably reflects a change in trend, rather than an effect peculiar to that time of year.

But I think most of the professional forecasting community is beyond just taking their cue from a single number. It would be better to have the raw or not seasonally adjusted (NSA) series available with every press release, so analysts can apply their own models.


Analyzing Complex Seasonal Patterns

When time series data are available in frequencies higher than quarterly or monthly, many forecasting programs hit a wall in analyzing seasonal effects.

Researchers from Monash University in Australia published an interesting paper in the Journal of the American Statistical Association (JASA), along with an R program, to handle this situation – what can be called “complex seasonality.”

I’ve updated and modified one of their computations – using weekly, instead of daily, data on US conventional gasoline prices – and find the whole thing pretty intriguing.

[Chart: TBATS forecast of weekly US conventional gasoline prices]

If you look at the color codes in the legend below the chart, it’s a little easier to read and understand.

Here’s what I did.

I grabbed the conventional weekly US gasoline prices from FRED. These prices are for “regular” – the plain vanilla choice at the pump. I established a start date of the first week in 2000, after looking the earlier data over. Then, I used tbats(.) in the Hyndman R Forecast package which readers familiar with this site know can be downloaded for use in the open source matrix programming language R.

Then I established an end date of the first week in 2012 for a time series I call newGP, forecasting ahead with the results of applying tbats(.) to the historical data from 2000:1 to 2012:1, where the second number refers to weeks, which run from 1 to 52. Note that some data scrubbing is needed to shoehorn the gas price data into 52 weeks on a consistent basis. I averaged “week 53” with the nearest acceptable week (either week 52 or week 1 of the next year), and then dropped the week 53’s.
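A sketch of these steps in R might look something like the following, assuming the scrubbed prices sit in a local CSV with columns DATE and PRICE (the file and column names here are hypothetical):

```r
library(forecast)

gp    <- read.csv("gasprices.csv")                     # weekly conventional regular gas prices from FRED
newGP <- ts(gp$PRICE, start = c(2000, 1), frequency = 52)

fit <- tbats(window(newGP, end = c(2012, 1)))          # fit on 2000:1 through 2012:1
fc  <- forecast(fit, h = 104)                          # forecast 104 weeks ahead

plot(fc)         # history plus the two-year-ahead forecast
plot(fit)        # decomposition, including the estimated seasonal component
```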

The forecast for 104 weeks is shown by the solid red line in the chart above.

This actually looks promising, as if it might encode some useful information for, say, US transportation agencies.

A draft of the JASA paper is available as a PDF download. It’s called Forecasting time series with complex seasonal patterns using exponential smoothing and in addition to daily US gas prices, analyzes daily electricity demand in Turkey and bank call center data.

I’m only going part of the way to analyzing the gas price data, since I have not taken on daily data yet. But the seasonal pattern identified by tbats(.) from the weekly data is interesting and is shown below.

[Chart: seasonal component extracted by tbats(.) from the weekly gas price data]

The weekly frequency may enable us to “get inside” a mid-year wobble in the pattern with some precision. Judging from the out-of-sample performance of the model, this “wobble” can in some cases be accentuated and be quite significant.

The seasonal patterns are extracted in tbats(.) by fitting trigonometric series to the higher frequency data; the routine also offers other advanced features, such as a capability for estimating ARMA (autoregressive moving average) models for the residuals.

I’m not fully optimizing the estimation, but these results are sufficiently strong to encourage exploring the toggles and switches on the routine.

Another routine which works at this level of aggregation is the stlf(.) routine. It uses the STL decomposition described in some detail in Chapter 36, Patterns Discovery Based on Time-Series Decomposition, in a collection of essays on data mining.
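Here is a minimal sketch of stlf(.) applied to the same weekly series constructed above – again an illustration rather than an optimized run:

```r
library(forecast)

# STL decomposes the series into trend, seasonal, and remainder components,
# forecasts the seasonally adjusted series (here with an ETS model), and then
# re-seasonalizes the result.
fc_stl <- stlf(window(newGP, end = c(2012, 1)), h = 104, method = "ets")
plot(fc_stl)
```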

Thoughts

Good forecasting software elicits a sort of addictive behavior when initial applications of routines seem promising. How much better can the out-of-sample forecasts be made with optimization of the features of the routine? How well does the routine do when you look at several past periods? There is even the possibility of extracting further information from the residuals through bootstrapping or bagging at some point. I think there is no other way than exhaustive exploration.

The payoff to the forecaster is the amazement of his or her managers when features of a forecast turn out to be spot-on, prescient, or what have you – and this does happen with good software. An alternative, for example, to the Hyndman R Forecast package is the program STAMP, which I also am exploring. STAMP has been around for many years, with a version running – get this – on DOS, which appears to have had more features than the current Windows incarnation. In any case, I remember getting a “gee whiz” reaction from the executive of a regional bus district once, relating to ridership forecasts. So it’s fun to wring every possible pattern from the data.


Seasonal Sales Patterns – Stylized Facts

Seasonal sales patterns in the United States are more or less synchronized with Europe, Japan, China, and, to a lesser extent, the rest of the world.

Here are some stylized facts:

  1. Sales tend to peak at the end of the calendar year. This is the well-known “Christmas effect,” and is a strong enough factor to “cannibalize” demand, to an extent, at the first of the following year.
  2. Sales of final goods tend to be lower – in terms of growth rates and, in some cases, absolutely – in the first calendar quarter of the year.
  3. Supply chain effects, related to pulses of sales of final goods, can be identified for various lines of production depending on production lead times. Semiconductor orders, for example, tend to peak earlier than sales of consumer electronics, which are sharply influenced by the Christmas season.

To validate this picture, let me offer some evidence.

First, consider retail and food service sales data for the US, a benchmark of consumer activity – the recently discussed data downloaded from FRED.

Applying the automatic model selection of the Hyndman R Forecast package, we get a decomposition of this time series into level, trend, and seasonals, as shown in the following diagram.

[Chart: exponential smoothing decomposition of US retail and food service sales into level, trend, and seasonal components]

The optimal exponential smoothing forecast model is a model with a damped trend and multiplicative seasonals.

If we look at the lower part of this diagram, we see that the seasonal factor for December – which is shown by the major peaks in the curve – is a multiple of more than 1.15. On the other hand, the immediately following month – January – shows a multiple of 0.9. These factors are multiplied into the product of the level and trend to get the sales for December and January. In other words, you can suppose that, roughly speaking, December retail sales will be 15 percent above trend, while January sales will be 90 percent of trend.

And, if you inspect this diagram in the lower panel carefully, you can detect the lull in late summer and fall in retail sales.
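Something like the following sketch reproduces this step, assuming the not seasonally adjusted retail and food service sales downloaded from FRED have been read into a monthly ts object called retail (the object name is mine):

```r
library(forecast)

fit <- ets(retail)    # automatic selection over the exponential smoothing family
summary(fit)          # reports the chosen form, e.g. damped trend with multiplicative seasonals
plot(fit)             # panels for the level, slope (trend), and seasonal components
```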

With “just-in-time” inventories and lean production models, actual production activity closely tracks these patterns in final demand – although it does take some lead time to produce stuff.

These stylized facts have not changed in their outlines since the ground-breaking research of Jeffrey Miron in the late 1980’s. Miron refers to a worldwide seasonal cycle in aggregate economic activity whose major features are a fourth quarter boom in output…, a third quarter trough in manufacturing production, and a first quarter trough in all economic activity.

The Effects of Different Calendars – the Chinese New Year and Ramadan

The Gregorian calendar has achieved worldwide authority, and almost every country follows its conventions for counting the year (currently 2014).

The Chinese calendar, however, is still important for determining the timing of festivals for Chinese communities around the world, and, especially, in China.


Similarly, the Islamic calendar governs the timing of important ritual periods and religious festivals – such as the month of Ramadan, which falls in June and July in 2014.

Because these festival periods overlap with multiple Gregorian months, there can be significant localized impacts on estimates of seasonal variation of economic activity.

Taiwanese researchers looking at this issue find significant holiday effects, related to the fact that,

The three most important Chinese holidays, Chinese New Year, the Dragon-boat Festival, and Mid-Autumn Holiday have dates determined by a lunar calendar and move between two solar months. Consumption, production, and other economic behavior in countries with large Chinese population including Taiwan are strongly affected by these holidays. For example, production accelerates before lunar new year, almost completely stops during the holidays and gradually rises to an average level after the holidays.

Similarly, researchers in Pakistan consider the impacts of the Islamic festivals on standard macroeconomic and financial time series.


Seasonal Variation

Evaluating and predicting seasonal variation is a core competence of forecasting, dating back to the 1920’s or earlier. It’s essential to effective business decisions. For example, as the fiscal year unfolds, the question is “how are we doing?” Will budget forecasts come in on target, or will more (or fewer) resources be required? Should added resources be allocated to Division X and taken away from Division Y? To answer such questions, you need a within-year forecast model, which in most organizations involves quarterly or monthly seasonal components or factors.

Seasonal adjustment, on the other hand, is more mysterious. The purpose is more interpretive. Thus, when the Bureau of Labor Statistics (BLS) or Bureau of Economic Analysis (BEA) announce employment or other macroeconomic numbers, they usually try to take out special effects (the “Christmas effect”) that purportedly might mislead readers of the Press Release. Thus, the series we hear about typically are “seasonally adjusted.”

You can probably sense my bias. I almost always prefer data that is not seasonally adjusted in developing forecasting models. I just don’t know what magic some agency statistician has performed on a series – whether artifacts have been introduced, and so forth.

On the other hand, I take the methods of identifying seasonal variation quite seriously. These range from Buys-Ballot tables and seasonal dummy variables to methods based on moving averages, trigonometric series (Fourier analysis), and maximum likelihood estimation.
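To give a flavor of the simplest of these methods, here is a sketch of a seasonal dummy variable regression for a monthly series (the object name y is hypothetical):

```r
# Regress the series on a linear trend and eleven month dummies; the month
# coefficients are the additive seasonal effects relative to the base month.
month <- factor(cycle(y))      # month index 1..12 for each observation of the ts object y
trend <- seq_along(y)

fit <- lm(y ~ trend + month)   # January (month 1) is the omitted base category
summary(fit)
```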

Identifying seasonal variation can be fairly involved mathematically.

But there are some simple reality tests.

Take this US retail and food service sales series, for example.

[Chart: US retail and food service sales, showing regular seasonal movement around trend]

Here you see the highly regular seasonal movement around a trend which, at times, is almost straight-line.

Are these additive or multiplicative seasonal effects? If we separate out the trend and the seasonal effects, do we add them or are the seasonal effects “factors” which multiply into the level for a month?

Well, for starters, we can re-arrange this time series into a kind of Buys-Ballot table. Here I only show the last two years.

[Table: Buys-Ballot arrangement of monthly retail and food service sales, last two years]

The point is that we look at the differences between the monthly values in a year and the average for that year. Also, we calculate the ratios of each month to the annual total.

The issue is which of these numbers is most stable over the data period, which extends back to 1992.
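A quick sketch of this check in R, assuming (as before) the monthly series is in a ts object called retail with complete calendar years starting in January:

```r
# Arrange the data with years in rows and months in columns, then compute
# additive components (month minus annual average) and multiplicative factors
# (month as a share of the annual total), and compare their stability.
m <- matrix(retail, ncol = 12, byrow = TRUE)

additive       <- m - rowMeans(m)
multiplicative <- m / rowSums(m)

apply(additive, 2, sd)                                    # spread of each month's additive component
apply(multiplicative, 2, sd) / colMeans(multiplicative)   # relative spread of each month's factor
```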

[Charts: additive seasonal components and multiplicative seasonal factors by month, 1992 onward]

Now here Series N relates to the Nth month, e.g. Series 12 = December.

It seems pretty clear that the multiplicative factors are more stable than the additive components in two senses. First, some additive components have a more pronounced trend; secondly, the variability of the additive components around this trend is greater.

This gives you a taste of some quick methods to evaluate aspects of seasonality.

Of course, there can be added complexities. What if you have daily data, or suppose there are other recurrent relationships? Then trigonometric series may be your best bet.

What if you only have two, three, or four years of data? Well, this interesting problem is frequently encountered in practical applications.

I’m trying to sort this material into posts for this coming week, along with stuff on controversies that swirl around the seasonal adjustment of macro time series, such as employment and real GDP.

Stay tuned.

Top image from http://www.livescience.com/25202-seasons.html


Video Friday – Quantum Computing

I’m instituting Video Friday. It’s the end of the work week, and videos introduce novelty and pleasant change in communications.

And we can keep focusing on matters related to forecasting applications and data analytics, or more generally on algorithmic guides to action.

Today I’m focusing on D-Wave and quantum computing. This could well take up several Fridays, with cool videos on underlying principles and panel discussions with analysts from D-Wave, Google and NASA. We’ll see. Probably, I will treat it as a theme, returning to it from time to time.

A couple of introductory comments.

First of all, David Wineland won a Nobel Prize in physics in 2012 for his work with quantum computing. I’ve heard him speak, and know members of his family. Wineland did his work at the NIST Laboratories in Boulder, the location for Eric Cornell’s work which was awarded a Nobel Prize in 2001.

I mention this because understanding quantum computing is more or less like trying to understand quantum physics, and, there, I think engineering has a role to play.

The basic concept is to exploit quantum superposition, or perhaps quantum entanglement, as a kind of parallel processor. The qubit, or quantum bit, is unlike the bit of classical computing. A qubit can be both 0 and 1 simultaneously, until its quantum wave function is collapsed or dispersed by measurement. Accordingly, the argument goes, qubits scale as powers of 2, and a mere 500 qubits could more than encode all atoms in the universe. Thus, quantum computers may really shine at problems where you have to search through all different combinations of things.
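As a rough check on that claim: a register of 500 qubits spans a state space of dimension 2^500 = (2^10)^50 ≈ (10^3)^50 = 10^150, vastly more than the roughly 10^80 atoms commonly estimated for the observable universe.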

But while I can write the quantum wave equation of Schrödinger, I don’t really understand it in any basic sense. It refers to a probability wave, whatever that is.

Feynman, whose lectures (and tapes or CD’s) on physics I proudly own, says it is pointless to try to “understand” quantum weirdness. You have to be content with being able to predict outcomes of quantum experiments with the apparatus of the theory. The theory is highly predictive and quite successful, in that regard.

So I think D-Wave is really onto something. They are approaching the problem of developing a quantum computer technologically.

Here is a piece of fluff Google and others put together about their purchase of a D-Wave computer and what’s involved with quantum computing.

OK, so now here is Eric Ladizinsky in a talk from April of this year on Evolving Scalable Quantum Computers. I can see why Eric gets support from DARPA and Bezos, a range indeed. You really get the “ah ha” effect listening to him. For example, I have never before heard a coherent explanation of how the quantum weirdness typical for small particles gets dispersed with macroscopic scale objects, like us. But this explanation, which is mathematically based on the wave equation, is essential to the D-Wave technology.

It takes more than an hour to listen to this video, but if you pass on a full viewing, bookmark it, since I assure you that this is probably the most substantive discussion I have yet found on this topic.

But is D-Wave’s machine a quantum computer?

Well, they keep raising money.

D-Wave Systems raises $30M to keep commercializing its quantum computer

But this infuriates some in the academic community, I suspect, who distrust the announcement of scientific discovery by the Press Release.

There is a brilliant article in Wired on D-Wave, which touches on a recent challenge to its computational prowess (see Is D-Wave’s quantum computer actually a quantum computer?).

The Wired article gives Geordie Rose, a D-Wave founder, space to rebut, at which point these excellent comments can be found:

Rose’s response to the new tests: “It’s total bullshit.”

D-Wave, he says, is a scrappy startup pushing a radical new computer, crafted from nothing by a handful of folks in Canada. From this point of view, Troyer had the edge. Sure, he was using standard Intel machines and classical software, but those benefited from decades’ and trillions of dollars’ worth of investment. The D-Wave acquitted itself admirably just by keeping pace. Troyer “had the best algorithm ever developed by a team of the top scientists in the world, finely tuned to compete on what this processor does, running on the fastest processors that humans have ever been able to build,” Rose says. And the D-Wave “is now competitive with those things, which is a remarkable step.”

But what about the speed issues? “Calibration errors,” he says. Programming a problem into the D-Wave is a manual process, tuning each qubit to the right level on the problem-solving landscape. If you don’t set those dials precisely right, “you might be specifying the wrong problem on the chip,” Rose says. As for noise, he admits it’s still an issue, but the next chip—the 1,000-qubit version codenamed Washington, coming out this fall—will reduce noise yet more. His team plans to replace the niobium loops with aluminum to reduce oxide buildup….

Or here’s another way to look at it…. Maybe the real problem with people trying to assess D-Wave is that they’re asking the wrong questions. Maybe his machine needs harder problems.

On its face, this sounds crazy. If plain old Intels are beating the D-Wave, why would the D-Wave win if the problems got tougher? Because the tests Troyer threw at the machine were random. On a tiny subset of those problems, the D-Wave system did better. Rose thinks the key will be zooming in on those success stories and figuring out what sets them apart—what advantage D-Wave had in those cases over the classical machine…. Helmut Katzgraber, a quantum scientist at Texas A&M, cowrote a paper in April bolstering Rose’s point of view. Katzgraber argued that the optimization problems everyone was tossing at the D-Wave were, indeed, too simple. The Intel machines could easily keep pace..

In one sense, this sounds like a classic case of moving the goalposts…. But D-Wave’s customers believe this is, in fact, what they need to do. They’re testing and retesting the machine to figure out what it’s good at. At Lockheed Martin, Greg Tallant has found that some problems run faster on the D-Wave and some don’t. At Google, Neven has run over 500,000 problems on his D-Wave and finds the same....

..it may be that quantum computing arrives in a slower, sideways fashion: as a set of devices used rarely, in the odd places where the problems we have are spoken in their curious language. Quantum computing won’t run on your phone—but maybe some quantum process of Google’s will be key in training the phone to recognize your vocal quirks and make voice recognition better. Maybe it’ll finally teach computers to recognize faces or luggage. Or maybe, like the integrated circuit before it, no one will figure out the best-use cases until they have hardware that works reliably. It’s a more modest way to look at this long-heralded thunderbolt of a technology. But this may be how the quantum era begins: not with a bang, but a glimmer.


Links – July 10, 2014

Did China Just Crush The US Housing Market? Zero Hedge has established that Chinese money is a major player in the US luxury housing market with charts like these.

[Charts: NAR data on international purchases of US residential real estate]

Then, looking within China, it’s apparent that the source of this money could be shut off – a possibility which evokes some really florid language from Zero Hedge -

Because without the Chinese bid in a market in which the Chinese are the biggest marginal buyer scooping up real estate across the land, sight unseen, and paid for in laundered cash (which the NAR blissfully does not need to know about due to its AML exemptions), watch as suddenly the 4th dead cat bounce in US housing since the Lehman failure rediscovers just how painful gravity really is.

IPO market achieves liftoff – More IPOs coming to market now.


The Mouse That Wouldn’t Die: How a Lack of Public Funding Holds Back a Promising Cancer Treatment – Fascinating. Dr. Zheng Cui has gone from identifying, then breeding, cancer-resistant mice to discovering the genetics and mechanism of this resistance, focusing on a certain type of white blood cell. Then, moving on to human research, Dr. Cui has identified similar genetics in humans, and successfully treated advanced metastatic cancer in trials. But somehow – maybe since transfusions are involved and Big Pharma can’t make money on it – the research is losing support.

Scientists Create ‘Dictionary’ of Chimp Gestures to Decode Secret Meanings

Some of those discovered meanings include the following:

•When a chimpanzee taps another chimp, it means “Stop that”

•When a chimpanzee slaps an object or flings its hand, it means “Move away” or “Go away”

•When a chimpanzee raises its arm, it means “I want that”


Medicine w/o antibiotics

The Hillary Clinton Juggernaut Courts Wall Street and Neocons – Describes Hillary as the “uber-establishment candidate.”



Wrap on Exponential Smoothing

Here are some notes on essential features of exponential smoothing.

  1. Name. Exponential smoothing (ES) algorithms create exponentially weighted sums of past values to produce the next (and subsequent period) forecasts. So, in simple exponential smoothing, the recursion formula is L_t = αX_t + (1 − α)L_{t−1}, where α is the smoothing constant constrained to the interval [0,1], X_t is the value of the time series to be forecast in period t, and L_t is the (unobserved) level of the series at period t. Substituting the similar expression for L_{t−1}, we get L_t = αX_t + (1 − α)(αX_{t−1} + (1 − α)L_{t−2}) = αX_t + α(1 − α)X_{t−1} + (1 − α)²L_{t−2}, and so forth back to L_1. This means that more recent values of the time series X are weighted more heavily than values at more distant times in the past. Incidentally, the initial level L_1 is not strongly determined, but is established by some ad hoc means – often by keying off of the initial values of the X series. In state space formulations, the initial values of the level, trend, and seasonal effects can be included in the list of parameters to be established by maximum likelihood estimation.
  2. Types of Exponential Smoothing Models. ES pivots on a decomposition of time series into level, trend, and seasonal effects. Altogether, there are fifteen ES methods. Each model incorporates a level, with the differences coming as to whether the trend and seasonal components or effects exist and whether they are additive or multiplicative; also whether they are damped. In addition to simple exponential smoothing, Holt or two parameter exponential smoothing is another commonly applied model. There are two recursion equations, one for the level L_t and another for the trend T_t, as in the additive formulation, L_t = αX_t + (1 − α)(L_{t−1} + T_{t−1}) and T_t = β(L_t − L_{t−1}) + (1 − β)T_{t−1}. Here, there are now two smoothing parameters, α and β, each constrained to the closed interval [0,1]. (A minimal R sketch of these recursions appears after this list.) Winters or three parameter exponential smoothing, which incorporates seasonal effects, is another popular ES model.
  3. Estimation of the Smoothing Parameters. The original method of estimating the smoothing parameters was to guess their values, following guidelines like “if the smoothing parameter is near 1, past values will be discounted more heavily” and so forth. Thus, if the time series to be forecast was very erratic or variable, a value of the smoothing parameter closer to zero might be selected, to achieve a longer period average. The next step was to set up the sum of squared differences between the within-sample predictions and the actuals, and minimize it. Note that the predicted value of X_{t+1} in the Holt or two parameter additive case is L_t + T_t, so this involves minimizing the expression Σ_t (X_{t+1} − L_t − T_t)². Currently, the most advanced method of estimating the values of the smoothing parameters is to express the model equations in state space form and utilize maximum likelihood estimation. It’s interesting, in this regard, that the error correction versions of the ES recursion equations are a bridge to this approach, since the error correction formulation is found at the very beginnings of the technique. Advantages of using the state space formulation and maximum likelihood estimation include (a) the ability to estimate confidence intervals for point forecasts, and (b) the capability of extending ES methods to nonlinear models.
  4. Comparison with Box-Jenkins or ARIMA models. ES began as a purely applied method developed for the US Navy, and for a long time was considered an ad hoc procedure. It produced forecasts, but no confidence intervals. In fact, statistical considerations did not enter into the estimation of the smoothing parameters at all, it seemed. That perspective has now changed, and the question is not whether ES has statistical foundations – state space models seem to have solved that. Instead, the tricky issue is to delineate the overlap and differences between ES and ARIMA models. For example, Gardner makes the statement that all linear exponential smoothing methods have equivalent ARIMA models. Hyndman points out that the state space formulation of ES models opens the way for expressing nonlinear time series – a step that goes beyond what is possible in ARIMA modeling.
  5. The Importance of Random Walks. The random walk is a forecasting benchmark. In an early paper, Muth showed that a simple exponential smoothing model provided optimal forecasts for a random walk. The optimal forecast for a simple random walk is the current period value. Things get more complicated when there is an error associated with the latent variable (the level). In that case, the smoothing parameter determines how much of the recent past is allowed to affect the forecast for the next period value.
  6. Random Walks With Drift. A random walk with drift, for which a two parameter ES model can be optimal, is an important form insofar as many business and economic time series appear to be random walks with drift. Thus, first differencing removes the trend, leaving ideally white noise. A huge amount of ink has been spilled in econometric investigations of “unit roots” – essentially exploring whether random walks and random walks with drift are pretty much the whole story when it comes to major economic and business time series.
  7. Advantages of ES. ES is relatively robust, compared with ARIMA models, which are sensitive to mis-specification. Another advantage of ES is that ES forecasts can be up and running with only a few historic observations. This comment applies to estimation of the level and possibly the trend, but does not apply in the same degree to the seasonal effects, which usually require more data to establish. There are a number of references which establish the competitive accuracy of ES forecasts in a variety of contexts.
  8. Advanced Applications. The most advanced application of ES I have seen is the research paper by Hyndman et al relating to bagging exponential smoothing forecasts.
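As promised above, here is a minimal R sketch of the recursions in items 1 and 2 – an illustration of the formulas, not production code; the initialization is ad hoc, and in practice the smoothing parameters would be estimated, for example by ets() in the forecast package:

```r
# Simple exponential smoothing: L_t = alpha * X_t + (1 - alpha) * L_(t-1)
ses_recursion <- function(x, alpha) {
  L <- numeric(length(x))
  L[1] <- x[1]                                   # ad hoc initialization of the level
  for (t in 2:length(x)) {
    L[t] <- alpha * x[t] + (1 - alpha) * L[t - 1]
  }
  list(level = L, forecast = L[length(x)])       # the forecast for t+1 is the final level
}

# Holt two-parameter (additive trend) smoothing:
#   L_t = alpha * X_t + (1 - alpha) * (L_(t-1) + T_(t-1))
#   T_t = beta * (L_t - L_(t-1)) + (1 - beta) * T_(t-1)
holt_recursion <- function(x, alpha, beta) {
  n <- length(x)
  L <- numeric(n)
  T <- numeric(n)
  L[1] <- x[1]                                   # ad hoc initial level
  T[1] <- x[2] - x[1]                            # ad hoc initial trend
  for (t in 2:n) {
    L[t] <- alpha * x[t] + (1 - alpha) * (L[t - 1] + T[t - 1])
    T[t] <- beta * (L[t] - L[t - 1]) + (1 - beta) * T[t - 1]
  }
  list(level = L, trend = T, forecast = L[n] + T[n])   # one-step-ahead point forecast
}

# Example on a short made-up series with drift
x <- c(100, 103, 105, 108, 112, 114, 118)
ses_recursion(x, alpha = 0.3)$forecast
holt_recursion(x, alpha = 0.3, beta = 0.1)$forecast
```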

The bottom line is that anybody interested in and representing competency in business forecasting should spend some time studying the various types of exponential smoothing and the various means to arrive at estimates of their parameters.

For some reason, exponential smoothing reaches deep into the actual process of data generation and consistently produces valuable insights into outcomes.

Sales and new product forecasting in data-limited (real world) contexts