Oil and Gas Prices – a “Golden Swan”?

Crude oil prices plummeted last week, with West Texas Intermediate (WTI) – the US benchmark crude for spot pricing – moving toward $80 a barrel.


OPEC – the Organization of Petroleum Exporting Countries – is key to the trajectory of oil prices, accounting for about 40 percent of global oil output.

Media reports indicate that Saudi Arabia, the largest producer in OPEC, is signaling that it will not cut oil production at the current time. The US Energy Information Administration (EIA) has a graph on its website underlining the importance of Saudi production to global oil prices.


Officially, there is very little in the media to pin down current Saudi policy, although, off the record, Saudi representatives apparently have indicated they could allow crude prices to drift between $80 and $90 a barrel for a couple of years. This could squeeze higher-cost producers, such as Iran and the burgeoning North American shale oil industry.

At the same time, several OPEC members, such as Venezuela and Libya, have called for cuts in output to manage crude prices going forward. And a field jointly operated by Saudi Arabia and Kuwait has just been shut down, ostensibly for environmental upgrades.

OPEC’s upcoming November 27 meeting in Vienna, Austria should be momentous.

US Oil Production

Currently, US oil production is running at 8.7 million barrels a day, a million barrels a day higher than in a comparable period of 2013, and the highest level since 1986.

The question of the hour is whether US production can continue to increase with significantly lower oil prices.

Many analysts echo the New York Times, which recently compared throttling back US petroleum activity to slowing a freight train.

Most companies make their investment decisions well in advance and need months to slow exploration because of contracts with service companies. And if they do decide to cut back some drilling, they will pick the least prospective fields first as they continue developing the richest prospects.

At the same time, the most recent data suggest US rig activity is starting to slip.

Economic Drivers

It’s all too easy to engage in arm-waving when discussing energy supplies and prices and their relationship to the global economy.

Of course, supply and demand are one basis. Supplies have been increasing, in part because of new technologies in US production and Libyan production coming back on line.

Demand, on the other hand, has not been increasing as rapidly as in the past, reflecting slowing growth in China and continuing energy conservation.

One imponderable is the influence of speculators on oil prices. Was there a “bubble” before 2009, for example, and could speculators drive oil prices significantly lower in coming months?

Another factor that is hard to assess is whether 2015 will see a recession in major parts of the global economy.

The US Federal Reserve has been winding down Quantitative Easing (QE) – its program of long-term bond purchases – and plans to increase the federal funds rate from its present level of virtually zero. Many believe these actions will materially slow US and global economic growth. Coupled with the current deflationary environment in Europe, there are increasing signs that these factors could converge to trigger a recession sometime in 2015.

However, low energy prices usually are not part of the prelude to a recession, although they can develop after a recession takes hold.

Instead, prices at the pump in the US could fall below $3.00 a gallon, providing several hundred dollars of extra discretionary income over the course of a year – and this just prior to the Christmas shopping season.

So – if US oil production continues to increase and prices at the pump fall below $3.00, there will be jobs and cheap gas, a combination likely to forestall a downturn, at least in the US, for the time being.

Top image courtesy of GameDocs

Forecasting in the Supply Chain

The Foresight Practitioner’s Conference, held last week on the campus of Ohio State University, highlighted the gains to forecasting and the bottom line from integration across the supply chain.

Officially, the title of the Conference was “From S&OP to Demand-Supply Integration: Collaboration Across the Supply Chain.”
S&OP – Sales and Operations Planning – is an important practice in many businesses right now. By itself it signifies business integration, but several speakers – starting with Pete Alle of Oberweis Dairy – emphasized the importance of linking the S&OP manager directly with the General Manager, and of securing the GM’s sponsorship and support.

Luke Busby described the revitalization of an S&OP process at Steris – a medical technology leader focused on infection prevention, contamination control, and surgical and critical care technologies. The problems with the old process: it was spreadsheet-driven, used minimal analytics, led to finger-pointing (“Your numbers!”), was not comprehensive (not all products and plants were included), and embodied divergent goals.

Busby had good things to say about Smoothie, software from Demand Works, in facilitating the new Steris process. He described the benefits of the new implementation in considerable detail, including, for example, the ability to drill down into and segment the welter of SKUs in the company’s product lines.

I found the talk especially interesting because of its attention to organizational detail, as shown in the following slide.


But this was more than an S&OP Conference, as underlined by Dr. Mark A. Moon’s presentation, From S&OP to True Business Integration. Moon, Head of the Department of Marketing and Supply Chain Management at the University of Tennessee, Knoxville, started his talk with the following telling slide –


Glen Lewis of the University of California at Davis, and formerly a Del Monte Director, spoke on a wider integration of S&OP with green energy practices, focusing mainly on managing the timing of peak electric power demand.

Thomas Goldsby, Professor of Logistics at the Fisher College of Business, introduced the concept of the supply web (shown below) and co-presented with Alicia Hammersmith, General Manager for Materials at General Electric Aviation. I finally learned what 3D printing is.


Probably the most amazing part of the Conference for me was the Beer Game, led by James Hill, Associate Professor of Management Sciences at The Ohio State University Fisher College of Business. Several tables were set up in a big auditorium in the Business School, each with a layout of production, product warehousing, distributor warehouses, and retail outlets. These four positions were staffed by Conference attendees, many expert in supply chain management.

The objective was to minimize inventory costs, with shortfalls earning a double penalty. No communication was permitted along these fictive beer supply chains. Retail demand was unknown, and when discovered it resulted in orders being passed back along the chain, with lags introduced in provisioning. The upshot was that every table created the famous “bullwhip effect” – the volatility of inventory intensifying back up the supply chain.
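
The bullwhip dynamic the tables produced can be sketched in a few lines of code. This is a toy model, not the classroom game itself: each stage is assumed to order its observed demand plus half the gap to a safety stock, and every order is assumed filled after a fixed lag, so stages couple only through the demand signal.

```python
def simulate_chain(weeks=52, lag=2, target=12):
    """Toy four-stage beer chain: retailer -> wholesaler -> distributor -> factory.
    Each stage orders its observed demand plus half the gap to a target stock;
    orders arrive after a fixed lag. Returns the order variance at each stage."""
    stages = 4
    inventory = [target] * stages
    pipeline = [[0] * lag for _ in range(stages)]  # orders in transit, per stage
    orders_seen = [[] for _ in range(stages)]
    for week in range(weeks):
        demand = 4 if week < 4 else 8  # a one-time step in retail demand
        for s in range(stages):
            inventory[s] += pipeline[s].pop(0)  # receive the lagged shipment
            inventory[s] -= demand              # ship what was just ordered
            order = max(0, demand + (target - inventory[s]) // 2)
            orders_seen[s].append(order)
            pipeline[s].append(order)
            demand = order  # this stage's order is the next stage's demand
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return [variance(o) for o in orders_seen]

print([round(v, 1) for v in simulate_chain()])
```

A single step in retail demand is enough: order variance grows markedly moving upstream from the retailer, which is the effect every table rediscovered.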

The bottom line: if you want to become a hero in an organization in the short term, find a way to reduce inventory, since that results in an immediate increase in cash flow.

All very interesting. Where does forecasting fit into this? Good question, and that was discussed in open sessions.

A common observation was that relying on the field sales teams to provide estimates of future orders can lead to bias.


Video Friday on Steroids

Here is a list of the URLs for all the YouTube and other videos shown on this blog from January 2014 through May of this year. I encourage you to browse this list, clicking on the links. There’s a lot of good stuff, including several instructional videos on machine learning and other technical topics, a series on robotics, and several videos on climate and climate change.

January 2014

The Polar Vortex Explained in Two Minutes

NASA – Six Decades of a Warming Earth

“CHASING ICE” captures largest video calving of glacier

Machine Learning and Econometrics

Can Crime Prediction Software Stop Criminals?

Analytics 2013 – Day 1

The birth of a salesman

Economies Improve

Kaggle – Energy Applications for Machine Learning

2014 Outlook with Jan Hatzius

Nassim Taleb Lectures at the NSF

Vernon Smith – Experimental Markets



Forecast Pro – Quick Tour

February 2014

Stephen Wolfram’s Introduction to the Wolfram Language


Econometrics – Quantile Regression

Quantile Regression Example

Brooklyn Grange – A New York Growing Season

Getting in Shape for the Sport of Data Science

Machine Learning – Decision Trees

Machine Learning – Random Forests

Machine Learning – Random Forests Applications

Malcolm Gladwell on the 10,000 Hour Rule

Sornette Talk

Head of India Central Bank Interview

March 2014

David Stockman

Partial Least Squares Regression

April 2014

Thomas Piketty on Economic Inequality

Bonobo builds a fire and tastes marshmallows

Future Technology

May 2014

Ray Kurzweil: The Coming Singularity

Paul Root Wolpe: Kurzweil Critique

The Future of Robotics and Artificial Intelligence

Car Factory – KIA Sportage Assembly Line

10 Most Popular Applications for Robots

Predator Drones

The Future of Robotic Warfare

Bionic Kangaroo

Ping Pong Playing Robot

Baxter, the Industrial Robot



Stylized Facts About Stock Market Volatility

Volatility of stock market returns is more predictable, in several senses, than stock market returns themselves.

Generally, if pt is the price of a stock at time t, stock market returns often are defined as ln(pt) - ln(pt-1). Volatility can be measured as the absolute value of these returns, or as their square. Thus hourly, daily, monthly, or other returns can be positive or negative, while volatility is always positive.
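
For concreteness, here is a minimal sketch of these definitions in code (the closing prices are made up for illustration):

```python
import math

def log_returns(prices):
    """r_t = ln(p_t) - ln(p_{t-1})"""
    return [math.log(p1) - math.log(p0) for p0, p1 in zip(prices, prices[1:])]

def volatility_abs(prices):
    """Volatility as the absolute value of log returns."""
    return [abs(r) for r in log_returns(prices)]

def volatility_sq(prices):
    """Volatility as squared log returns."""
    return [r * r for r in log_returns(prices)]

closes = [100.0, 101.5, 99.8, 99.8, 102.3]  # illustrative daily closes
print(log_returns(closes))     # positive and negative values
print(volatility_abs(closes))  # always non-negative
```

Returns switch sign as prices rise and fall, while either volatility measure stays non-negative.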

Masset highlights several stylized facts about volatility in a recent paper –

  • Volatility is not constant and tends to cluster through time. Observing a large (small) return today (whatever its sign) is a good precursor of large (small) returns in the coming days.
  • Changes in volatility typically have a very long-lasting impact on its subsequent evolution. We say that volatility has a long memory.
  • The probability of observing an extreme event (either a dramatic downturn or an enthusiastic takeoff) is way larger than what is hypothesized by common data generating processes. The returns distribution has fat tails.
  • Such a shock also has a significant impact on subsequent returns. Like in an earthquake, we typically observe aftershocks during a number of trading days after the main shock has taken place.
  • The amplitude of returns displays an intriguing relation with the returns themselves: when prices go down – volatility increases; when prices go up – volatility decreases but to a lesser extent. This is known as the leverage effect … or the asymmetric volatility phenomenon.
  • Recently, some researchers have noticed that there were also some significant differences in terms of information content among volatility estimates computed at various frequencies. Changes in low-frequency volatility have more impact on subsequent high-frequency volatility than the opposite. This is due to the heterogeneous nature of market participants, some having short-, medium- or long-term investment horizons, but all being influenced by long-term moves on the markets…
  • Furthermore, … the intensity of this relation between long and short time horizons depends on the level of volatility at long horizons: when volatility at a long time horizon is low, this typically leads to low volatility at short horizons too. The reverse is however not always true…

Masset extends and deepens this type of result for bull and bear markets and developed/emerging markets. Generally, emerging markets display higher volatility with some differences in third and higher moments.

A key reference is Rama Cont’s Empirical properties of asset returns: stylized facts and statistical issues, which provides this list of features of stock market returns, some of which relate directly to volatility. It is one of the most widely cited articles in the financial literature:

  1. Absence of autocorrelations: (linear) autocorrelations of asset returns are often insignificant, except for very small intraday time scales (~20 minutes) for which microstructure effects come into play.
  2. Heavy tails: the (unconditional) distribution of returns seems to display a power-law or Pareto-like tail, with a tail index which is finite, higher than two and less than five for most data sets studied. In particular this excludes stable laws with infinite variance and the normal distribution. However the precise form of the tails is difficult to determine.
  3. Gain/loss asymmetry: one observes large drawdowns in stock prices and stock index values but not equally large upward movements.
  4. Aggregational Gaussianity: as one increases the time scale t over which returns are calculated, their distribution looks more and more like a normal distribution. In particular, the shape of the distribution is not the same at different time scales.
  5. Intermittency: returns display, at any time scale, a high degree of variability. This is quantified by the presence of irregular bursts in time series of a wide variety of volatility estimators.
  6. Volatility clustering: different measures of volatility display a positive autocorrelation over several days, which quantifies the fact that high-volatility events tend to cluster in time.
  7. Conditional heavy tails: even after correcting returns for volatility clustering (e.g. via GARCH-type models), the residual time series still exhibit heavy tails. However, the tails are less heavy than in the unconditional distribution of returns.
  8. Slow decay of autocorrelation in absolute returns: the autocorrelation function of absolute returns decays slowly as a function of the time lag, roughly as a power law with an exponent β ∈ [0.2, 0.4]. This is sometimes interpreted as a sign of long-range dependence.
  9. Leverage effect: most measures of volatility of an asset are negatively correlated with the returns of that asset.
  10. Volume/volatility correlation: trading volume is correlated with all measures of volatility.
  11. Asymmetry in time scales: coarse-grained measures of volatility predict fine-scale volatility better than the other way round.
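
Facts 1 and 6 are easy to illustrate numerically. The sketch below uses simulated returns with alternating calm and turbulent regimes as a crude stand-in for real volatility clustering; the regime lengths and scales are arbitrary.

```python
import random

def autocorr(xs, lag=1):
    """Sample lag-k autocorrelation."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    cov = sum((xs[t] - m) * (xs[t + lag] - m) for t in range(n - lag)) / n
    return cov / var

random.seed(0)
returns = []
for block in range(20):  # alternate calm and turbulent blocks of 50 observations
    sigma = 0.5 if block % 2 == 0 else 2.0
    returns += [random.gauss(0.0, sigma) for _ in range(50)]

print(round(autocorr(returns), 3))                    # near zero (fact 1)
print(round(autocorr([abs(r) for r in returns]), 3))  # clearly positive (fact 6)
```

Raw returns show essentially no linear autocorrelation, yet their absolute values do, because large-magnitude observations sit next to other large-magnitude observations.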

Just to position the discussion, here are graphs of the NASDAQ 100 daily closing prices and the volatility of daily returns, since October 1, 1985.


The volatility here is calculated as the absolute value of the differences of the logarithms of the daily closing prices.



The Holy Grail of Business Forecasting – Forecasting the Next Downturn

What if you could predict the Chicago Fed National Activity Index (CFNAI), interpolated monthly values of the growth of nominal GDP, the Aruoba-Diebold-Scotti (ADS) Business Conditions Index, and the Kansas City Financial Stress Index (KCFSI) three, five, seven, even twelve months into the future? What if your model also predicted turning points in these US indexes, and also similar macroeconomic variables for countries in Asia and the European Union? And what if you could do all this with data on monthly returns on the stock prices of companies in the financial sector?

That’s the claim of Linda Allen, Turan Bali, and Yi Tang in a fascinating 2012 paper Does Systemic Risk in the Financial Sector Predict Future Economic Downturns?

I’m going to refer to these authors as Bali et al, since it appears that Turan Bali, shown below, did some of the ground-breaking research on estimating parametric distributions of extreme losses. Bali also is the corresponding author.


Bali et al develop a new macroindex of systemic risk, which they call CATFIN, that predicts future real economic downturns.

CATFIN is estimated using both value-at-risk (VaR) and expected shortfall (ES) methodologies, each of which are estimated using three approaches: one nonparametric and two different parametric specifications. All data used to construct the CATFIN measure are available at each point in time (monthly, in our analysis), and we utilize an out-of-sample forecasting methodology. We find that all versions of CATFIN are predictive of future real economic downturns as measured by gross domestic product (GDP), industrial production, the unemployment rate, and an index of eighty-five existing monthly economic indicators (the Chicago Fed National Activity Index, CFNAI), as well as other measures of real macroeconomic activity (e.g., NBER recession periods and the Aruoba-Diebold-Scott [ADS] business conditions index maintained by the Philadelphia Fed). Consistent with an extensive body of literature linking the real and financial sectors of the economy, we find that CATFIN forecasts aggregate bank lending activity.

The following graphic illustrates three components of CATFIN and the simple arithmetic average, compared with US recession periods.


Thoughts on the Method

OK, here’s the simple explanation. First, these researchers identify US financial companies based on definitions on Kenneth French’s site at the Tuck School of Business (Dartmouth). There are apparently 500-1000 of these companies over the period 1973-2009. Then, for each month in this period, rates of return of the stock prices of these companies are calculated. Then 1% value at risk (VaR) is estimated by three methods – two parametric and one nonparametric. The nonparametric method is straightforward –

The nonparametric approach to estimating VaR is based on analysis of the left tail of the empirical return distribution conducted without imposing any restrictions on the moments of the underlying density…. Assuming that we have 900 financial firms in month t, the nonparametric measure of 1% VaR is the ninth lowest observation in the cross-section of excess returns. For each month, we determine the one percentile of the cross-section of excess returns on financial firms and obtain an aggregate 1% VaR measure of the financial system for the period 1973–2009.

So far, so good. This gives us the data for the graphic shown above.
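
As a sketch of that nonparametric calculation (with simulated excess returns standing in for the authors’ firm-level data):

```python
import random

def nonparametric_var(excess_returns, alpha=0.01):
    """The alpha-quantile of the cross-section: with 900 firms and
    alpha = 1%, this is the ninth lowest excess return."""
    ordered = sorted(excess_returns)
    k = max(1, int(alpha * len(ordered)))  # 9 when there are 900 firms
    return ordered[k - 1]

random.seed(1)
month = [random.gauss(0.0, 0.05) for _ in range(900)]  # one month's cross-section
print(nonparametric_var(month))  # the 9th lowest excess return
```

Repeating this for every month of 1973–2009 yields the aggregate VaR series plotted in the graphic.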

In order to make this predictive, the authors write that –


Like a lot of leading indicators, the CATFIN predictive setup “over-predicts” to some extent. Thus, there are five instances in which a spike in CATFIN is not followed by a recession – false positive signals of future real economic distress. However, the authors note that in many of these cases, predicted macroeconomic declines may have been averted by prompt policy intervention. Their discussion of this is very interesting, and plausible.

What This Means

The implications of this research are fairly profound – indicating, above all, the primacy of the finance sector in leading the overall economy today. Certainly, this is consistent with the balance sheet recession of 2008-2009, and probably will continue to be relevant going forward – since nothing really has changed, and more concentration of ownership in finance has followed 2008-2009.

I do think that Serena Ng’s basic point in a recent review article probably is relevant – that not all recessions are the same. So it may be that this method would not work as well for, say, the period 1945-1970, before financialization of the US and global economies.

The incredibly ornate mathematics of modeling the tails of return distributions is relevant in this context, incidentally, since the nonparametric approach of looking at the empirical distributions month-by-month could be suspect because of “cherry-picking” – some companies could be included and others excluded to make the numbers come out. That is much more difficult in a complex maximum likelihood estimation of the location parameters of these obscure distributions.

So the question on everybody’s mind is – WHAT DOES THE CATFIN MODEL INDICATE NOW ABOUT THE NEXT FEW MONTHS? Unfortunately, I am unable to answer that, although I have corresponded with some of the authors to inquire whether any research along such lines can be cited.

Bottom line – very impressive research and another example of how important science can get lost in the dance of prestige and names.


Links, end of September

Information Technology (IT)

This is how the “Shell Shock” bug imperils the whole internet

It’s a hacker’s wet dream: a software bug discovered in the practically ubiquitous computer program known as “Bash” makes hundreds of millions of computers susceptible to hijacking. The impact of this bug is likely to be higher than that of the Heartbleed bug, which was exposed in April. The National Vulnerability Database, a US government system which tracks information security flaws, gave the bug the maximum score for “Impact” and “Exploitability,” and rated it as simple to exploit.

The bug, which has been labeled “Shell Shock” by security experts, affects computers running Unix-based operating systems like Mac OS X and Linux. That means most of the internet: according to a September survey conducted by Netcraft, a British internet services company, just 13% of the busiest one million websites use Microsoft web servers. Almost everyone else likely serves their website via a Unix operating system that probably uses Bash.

Microsoft’s Bing Predicts correctly forecasted the Scottish Independence Referendum vote

Bing Predicts was beta tested in the UK for this referendum. The prediction engine uses machine-learning models to analyse and detect patterns from a range of big data sources such as the web and social activity in order to make accurate predictions about the outcome of events.

Bing got the yes/no vote right, but missed the size of the vote to stay united with England, Wales, and Northern Ireland.

Is the profession of science broken (a possible cause of the great stagnation)? A fascinating discussion that mirrors many friends’ comments: too much time is taken up applying for and administering grants, and not enough time is left for actual research and unconventional ideas.

What has changed is the bureaucratic culture. The increasing interpenetration of government, university, and private firms has led everyone to adopt the language, sensibilities, and organizational forms that originated in the corporate world. Although this might have helped in creating marketable products, since that is what corporate bureaucracies are designed to do, in terms of fostering original research, the results have been catastrophic.


Climate Science Is Not Settled – this Wall Street Journal piece by a former Obama adviser and BP scientist inflamed the commentariat after its September 16 publication, on the eve of the big climate talks and march in New York City. See On eve of climate march, Wall Street Journal publishes call to wait and do nothing for a critical perspective.

This chart, from NOAA, is one key – showing the divergence in heat stored in various layers of the oceans –


Nicholas Stern: The state of the climate — and what we might do about it TED talk.


The public response to the Ebola epidemic is ramping up, but the situation is still dire and total cases and deaths are still increasing exponentially.

Ebola outbreak: Death toll passes 3,000 as WHO warns numbers are ‘vastly underestimated’

“The Ebola epidemic ravaging parts of West Africa is the most severe acute public health emergency seen in modern times. Never before in recorded history has a biosafety level four pathogen infected so many people so quickly, over such a broad geographical area, for so long.”


Global Economy

What Does a ‘Good’ Chinese Adjustment Look Like? Michael Pettis argues that what some see as a “soft landing” is in fact preparation for a later financial collapse. Instead, based on an intricate argument about interest rates and nominal GDP growth rates in China, he proposes reducing Chinese GDP growth going forward through control of credit, in order to rebalance toward the Chinese consumer economy. Pettis is, to my way of thinking, always relevant, and often brilliant in the way he makes his analysis.

What Went Wrong? Russia Sanctions, EU, and the Way Out

Washington, Brussels, and Moscow are in a vicious circle that would spare none of them and has the potential to undermine the global recovery.

Venture Capital

22 Crowdfunding Sites (and How To Choose Yours!)


Video Friday – Volatility

Here are a couple of short YouTube videos from Bionic Turtle on estimating a GARCH (generalized autoregressive conditional heteroskedasticity) model and the simpler exponentially weighted moving average (EWMA) model.

GARCH models are designed to capture the clustering of volatility illustrated in the preceding post.

Forecast volatility with GARCH(1,1)

The point is that the parameters of a GARCH model are estimated over historic data, so the model can be utilized prospectively, to forecast future volatility, usually in the near term.

EWMA models, insofar as they put more weight on recent values than on values more distant in time, also tend to capture clustering phenomena.
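
A minimal sketch of the EWMA variance recursion the video covers, using the classic RiskMetrics lambda of 0.94 (the seeding convention here is one common choice, not the only one):

```python
def ewma_volatility(returns, lam=0.94):
    """sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_t**2,
    seeded with the first squared return."""
    sigma2 = returns[0] ** 2
    path = [sigma2]
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1 - lam) * r * r
        path.append(sigma2)
    return path

# A lone shock enters the variance estimate at weight (1 - lam):
print(ewma_volatility([0.0] * 10 + [0.05])[-1])  # 0.06 * 0.05**2
```

Because each new squared return enters with weight 1 - lambda and the old variance with weight lambda, recent observations dominate, which is why EWMA tracks volatility clusters.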

Here is a comparison.

EWMA versus GARCH(1,1) volatility

Several of the Bionic Turtle series on estimating financial metrics are worth checking out.


Volatility – I

Greetings, Sports Fans. I’m back from visiting with some relatives in Kent in what is still called the United Kingdom (UK). I’ve had some time to think over the blog and possible directions in the next few weeks.

I’ve not made any big decisions – except to realize there is lots more to modern forecasting research, even on an applied level, than is encapsulated in any book I know of.

But I plan several posts on volatility.

What is Volatility in Finance?

Since this blog functions as a gateway, let’s talk briefly about volatility in finance generally.

In a word, financial volatility refers to the variability of prices of financial assets.

And how do you measure this variability?

Well, by considering something like the variance of a set of prices, or time series of financial prices. For example, you might take daily closing prices of the S&P 500 Index, calculate the daily returns, and square them. This would provide a metric for the variability of the S&P 500 over a daily interval, and would give you a chart looking like the following, where I have squared the running differences of the log of the closing prices.


Clearly, prices get especially volatile just before and during periods of economic recession, when there is a clustering of higher volatility measurements.

This clustering effect is one of the two or three well-established stylized facts about financial volatility.

Can You Forecast Volatility?

This is the real question.

And, obviously, the existence of this clustering of high volatility events suggests that some forecastability does exist.

And notice also that squared returns are the key element of the variance of these financial prices – the other elements of the variance calculation merely shift or scale the series in the above chart by constants.

One immediate conclusion, therefore, is that the variability of the S&P 500 daily returns is heteroscedastic – which runs counter to the usual assumption in regression and other statistical work that a well-behaved series has errors with constant variance.

Anyway, a GARCH model, such as the one described in the following screen capture, is one of the most popular ways of modeling this changing variance of financial returns.


GARCH stands for generalized autoregressive conditional heteroskedasticity, and the screen capture comes from a valuable recent work, Futures Market Volatility: What Has Changed?
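
Since the screen capture is not reproduced here, a minimal sketch of the GARCH(1,1) recursion and its mean-reverting forecast; the parameter values below are purely illustrative, not estimates from any actual series:

```python
def garch11_forecast(returns, omega, alpha, beta, horizon=10):
    """Filter the conditional variance with
        sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t,
    then forecast k steps ahead via
        E[sigma2_{t+k}] = lrv + (alpha + beta)**k * (sigma2 - lrv),
    where lrv = omega / (1 - alpha - beta) is the long-run variance."""
    lrv = omega / (1 - alpha - beta)
    sigma2 = lrv  # start the filter at the long-run variance
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    return [lrv + (alpha + beta) ** k * (sigma2 - lrv) for k in range(1, horizon + 1)]

# After a run of large shocks, forecast variance starts high and
# decays geometrically back toward the long-run level of 1.0:
print(garch11_forecast([3.0] * 5, omega=0.1, alpha=0.1, beta=0.8))
```

The key property for forecasting is exactly this mean reversion: a volatility spike raises near-term forecasts a lot and distant forecasts only a little, at a rate governed by alpha + beta.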

The VIX Index

There are many related acronyms and a whole cottage industry in financial econometrics, but I want to first mention here the Chicago Board Options Exchange (CBOE) VIX or Volatility Index.

The VIX provides a measure of the implied volatility of options with a maturity of 30 days on the S&P500 index from eight different SPX option series. It therefore is a measure of the market expectation of volatility over the next 30 days. Also known as the “fear gauge,” the VIX index tends to rise in times of market turmoil and large price movements.

Futures Market Volatility: What Has Changed? provides an overview of stock market volatility over time, and has an interesting accompanying table suggesting that upward spikes in the VIX are associated with unexpected macro or political developments.

The 20-point table below is linked, of course, with the circled numbers in the chart.


Bottom Line

Obviously, if you could forecast volatility, that would probably provide useful information for the specific prediction of stock prices. I have developed models which indicate the direction of change on a one-day-ahead basis somewhat better than chance. If you could add a volatility forecast to this, you would have some idea of when a big change up or down might occur.

Similarly, forecasting the VIX might be helpful in forecasting stock market volatility generally.

At the present time, I might add, the VIX seems to have roused itself from its slumber at low levels.

Stay tuned, and please, if you know something you would like to share, use the comments section, after you click on this particular post.

Lead graphic from Oyster Consulting


Links – mid-September

After highlighting billionaires by state, I focus on data analytics and marketing, and then IT in these links. Enjoy!

The Wealthiest Individual In Every State [Map]


Data Analytics and Marketing

A Predictive Analytics Primer

Has your company, for example, developed a customer lifetime value (CLTV) measure? That’s using predictive analytics to determine how much a customer will buy from the company over time. Do you have a “next best offer” or product recommendation capability? That’s an analytical prediction of the product or service that your customer is most likely to buy next. Have you made a forecast of next quarter’s sales? Used digital marketing models to determine what ad to place on what publisher’s site? All of these are forms of predictive analytics.

Making sense of Google Analytics audience data

Earlier this year, Google added Demographics and Interest reports to the Audience section of Google Analytics (GA). Now not only can you see how many people are visiting your site, but how old they are, whether they’re male or female, what their interests are, and what they’re in the market for.

Data Visualization, Big Data, and the Quest for Better Decisions – a Synopsis

Simon uses Netflix as a prime example of a company that gets data and its use “to promote experimentation, discovery, and data-informed decision-making among its people.”….

They know a lot about their customers.

For example, the company knows how many people binge-watched the entire season four of Breaking Bad the day before season five came out (50,000 people). The company therefore can extrapolate viewing patterns for its original content produced to appeal to Breaking Bad fans. Moreover, Netflix markets the same show differently to different customers based on whether their viewing history suggests they like the director or one of the stars….

The crux of their analytics is the visualization of “what each streaming customer watches, when, and on what devices, but also at what points shows are paused and resumed (or not) and even the color schemes of the marketing graphics to which individuals respond.”

How to Market Test a New Idea

Formulate a hypothesis to be tested. Determine specific objectives for the test. Make a prediction, even if it is just a wild guess, as to what should happen. Then execute in a way that enables you to accurately measure your prediction…Then involve a dispassionate outsider in the process, ideally one who has learned through experience how to handle decisions with imperfect information…..Avoid considering an idea in isolation. In the absence of choice, you will almost always be able to develop a compelling argument about why to proceed with an innovation project. So instead of asking whether you should invest in a specific project, ask if you are more excited about investing in Project X versus other alternatives in your innovation portfolio…And finally, ensure there is some kind of constraint forcing a decision.

Information Technology (IT)

5 Reasons why Wireless Charging Never Caught on

Charger Bundling, Limited handsets, Time, Portability, and Standardisation – interesting case study topic for IT

Why Jimmy the Robot Means New Opportunities for IT

While Jimmy was created initially for kids, the platform is actually already evolving to be a training platform for everyone. There are two versions: one at $1,600, which really is more focused on kids, and one at $16,000, for folks like us who need a more industrial-grade solution. The Apple I wasn’t just for kids and neither is Jimmy. Consider at least monitoring this effort, if not embracing it, so when robots go vertical you have the skills to ride this wave and not be hit by it.


Beyond the Reality Distortion Field: A Sober Look at Apple Pay

.. Apple Pay could potentially kick-start the mobile payment business the way the iPod and iTunes launched mobile music 13 years ago. Once again, Apple is leveraging its powerful brand image to bring disparate companies together all in the name of consumer convenience.

From Dr. 4Ward How To Influence And Persuade (click to enlarge)


Sales and new product forecasting in data-limited (real world) contexts