Tag Archives: technology forecasting

Links – early August, 2015

Well, I’m back, after deep dives into R programming and statistical modeling. I’d like to offer these links, which I’ve bookmarked in recent days. The first four cover a scatter of topics, from the impacts of the so-called sharing economy and climate developments to the currency effects of the increasingly certain move by the US Federal Reserve to raise interest rates in September.

But then I’ve collected a number of useful links on robotics and artificial intelligence.

How the ‘sharing economy’ is upending the travel industry

DS: New York Attorney General Eric Schneiderman last October issued a report finding 72 percent of the reservations on Airbnb going back to 2010 were in violation of city law. What’s the industry doing to address these concerns?

MB: Listen, I think there are a lot of outdated regulations and a lot of outdated laws that were written in a time where you couldn’t possibly imagine the innovation that has come up from the sharing economy, and a lot of those need to be updated to meet the world that we live in today, and I think that’s important.  Sometimes you have regulations that are put in place by incumbent industries that didn’t want competition and you have some regulations that were put in place back in the ’60s and ’70s, where you couldn’t imagine any of these things, and so I think sometimes you need to see updates.

So there you go – laws on the books are outdated.

Brain-controlled prosthesis nearly as good as one-finger typing

The goal of all this research is to get thought-controlled prosthetics to people with ALS. Today these people may use an eye-tracking system to direct cursors or a “head mouse” that tracks the movement of the head. Both are fatiguing to use. Neither provides the natural and intuitive control of readings taken directly from the brain.

The U.S. Food and Drug Administration recently gave Shenoy’s team the green light to conduct a pilot clinical trial of their thought-controlled cursor on people with spinal cord injuries.

Jimmy Carter: The U.S. Is an “Oligarchy With Unlimited Political Bribery”

Unfortunately, very apt characterization from a formal standpoint of political science.


What to Expect from El Niño: North America

The only El Niño events in NOAA’s 1950-2015 database comparable in strength to the one now developing occurred in 1982-83 and 1997-98… Like other strong El Niño events, this one will almost certainly last just one winter. But at least for the coming wet season, it holds encouraging odds of well-above average precipitation for California. During a strong El Niño, the subtropical jet stream is energized across the southern U.S., while the polar jet stream tends to stay north of its usual winter position or else consolidate with the subtropical jet. This gives warm, wet Pacific systems a better chance to push northeast into California… Milder and drier conditions are a good bet for the Pacific Northwest, Northern Plains, and western Canada… Rockies snowfall: the south usually wins out… Thanks to the jet-shifting effects noted above, snowfall tends to be below average in the Northern Rockies and above average in the Southern Rockies during strong El Niños. The north-south split extends to Colorado, where northern resorts such as Steamboat Springs typically lose out to areas like the San Juan and Sangre de Cristo ranges across the southern part of the state. Along the populous Front Range from Denver to Fort Collins, El Niño hikes the odds of a big snowstorm, especially in the spring and autumn. About half of Boulder’s 12”-14” storms occur during El Niño, and the odds of a 20” or greater storm are quadrupled during El Niño as opposed to La Niña.

According to NOAA, the single most reliable El Niño outcome in the United States, occurring in more than 80% of El Niño events over the last century, is the tendency for wet wintertime conditions along and near the Gulf Coast, thanks to the juiced-up subtropical jet stream.

Emerging market currencies crash on Fed fears and China slump

The currencies of Brazil, Mexico, South Africa and Turkey have all crashed to multi-year lows as investors flee emerging markets and commodity prices crumble.

Robotics and Artificial Intelligence

Some of the most valuable research I’ve found so far on the job and societal impacts of robotics comes from a survey of experts conducted by the Pew Research Internet Project, AI, Robotics, and the Future of Jobs.

Some 1,896 experts responded to the following question:

The economic impact of robotic advances and AI—Self-driving cars, intelligent digital agents that can act for you, and robots are advancing rapidly. Will networked, automated, artificial intelligence (AI) applications and robotic devices have displaced more jobs than they have created by 2025?

Half of these experts (48%) envision a future in which robots and digital agents have displaced significant numbers of both blue- and white-collar workers—with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.

The other half of the experts who responded to this survey (52%) expect that technology will not displace more jobs than it creates by 2025. To be sure, this group anticipates that many jobs currently performed by humans will be substantially taken over by robots or digital agents by 2025. But they have faith that human ingenuity will create new jobs, industries, and ways to make a living, just as it has been doing since the dawn of the Industrial Revolution.

Read this – the comments on both sides of this important question are trenchant and important.

The next most useful research comes from a 2011 article by W. Brian Arthur in the McKinsey Quarterly, The second economy – about the part of the economy where machines transact just with other machines.

Something deep is going on with information technology, something that goes well beyond the use of computers, social media, and commerce on the Internet. Business processes that once took place among human beings are now being executed electronically. They are taking place in an unseen domain that is strictly digital. On the surface, this shift doesn’t seem particularly consequential—it’s almost something we take for granted. But I believe it is causing a revolution no less important and dramatic than that of the railroads. It is quietly creating a second economy, a digital one.

Twenty years ago, if you went into an airport you would walk up to a counter and present paper tickets to a human being. That person would register you on a computer, notify the flight you’d arrived, and check your luggage in. All this was done by humans. Today, you walk into an airport and look for a machine. You put in a frequent-flier card or credit card, and it takes just three or four seconds to get back a boarding pass, receipt, and luggage tag. What interests me is what happens in those three or four seconds. The moment the card goes in, you are starting a huge conversation conducted entirely among machines. Once your name is recognized, computers are checking your flight status with the airlines, your past travel history, your name with the TSA (and possibly also with the National Security Agency). They are checking your seat choice, your frequent-flier status, and your access to lounges. This unseen, underground conversation is happening among multiple servers talking to other servers, talking to satellites that are talking to computers (possibly in London, where you’re going), and checking with passport control, with foreign immigration, with ongoing connecting flights. And to make sure the aircraft’s weight distribution is fine, the machines are also starting to adjust the passenger count and seating according to whether the fuselage is loaded more heavily at the front or back.

These large and fairly complicated conversations that you’ve triggered occur entirely among things remotely talking to other things: servers, switches, routers, and other Internet and telecommunications devices, updating and shuttling information back and forth. All of this occurs in the few seconds it takes to get your boarding pass back. And even after that happens, if you could see these conversations as flashing lights, they’d still be flashing all over the country for some time, perhaps talking to the flight controllers—starting to say that the flight’s getting ready for departure and to prepare for that…

If I were to look for adjectives to describe this second economy, I’d say it is vast, silent, connected, unseen, and autonomous (meaning that human beings may design it but are not directly involved in running it). It is remotely executing and global, always on, and endlessly configurable. It is concurrent—a great computer expression—which means that everything happens in parallel. It is self-configuring, meaning it constantly reconfigures itself on the fly, and increasingly it is also self-organizing, self-architecting, and self-healing…

I’m interested in how to measure the value of services produced in this “second economy.”

Finally, China’s adoption of robotics seems to signal something – as in this piece about a totally automated factory for cell phone parts –

China sets up first unmanned factory; all processes are operated by robots

At the workshop of Changying Precision Technology Company in Dongguan, known as the “world factory”, which manufactures cell phone modules, 60 robot arms at 10 production lines polish the modules day and night… The technical staff just sits at the computer and monitors through a central control system… In the plant, all the processes are operated by computer-controlled robots, computer numerical control machining equipment, unmanned transport trucks and automated warehouse equipment.

Out-Of-Sample R2 Values for PVAR Models

Out-of-sample (OOS) R2 is a good metric for testing whether your predictive relationship holds up on data not used in estimation. Checking this for the version of the proximity variable model which is publicly documented, I find an OOS R2 of 0.63 for forecasts of daily high prices.

In other words, 63 percent of the variation of the daily growth in high prices for the S&P 500 is explained by four variables, documented in Predictability of the Daily High and Low of the S&P 500 Index.

This is a really high figure for any kind of predictive relationship involving security prices, so I thought I would put the data out there for anyone interested to check.


This metric is often found in connection with efforts to predict daily or other rates of return on securities, and is commonly defined as

OOS R2 = 1 – SSEmodel/SSEbenchmark

where SSEmodel is the sum of squared out-of-sample forecast errors of the model, and SSEbenchmark is the sum of squared errors of a benchmark forecast, typically the historical mean.
See, for example, Campbell and Thompson.
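As a sketch, here is this definition in code; the numbers below are made-up illustrations, not data from the model.

```python
def oos_r2(actual, forecast, benchmark):
    """Out-of-sample R-squared: 1 minus the ratio of the model's sum of
    squared forecast errors to the benchmark's sum of squared errors."""
    sse_model = sum((a - f) ** 2 for a, f in zip(actual, forecast))
    sse_bench = sum((a - b) ** 2 for a, b in zip(actual, benchmark))
    return 1.0 - sse_model / sse_bench

# hypothetical daily growth rates: actuals, model forecasts, and a
# benchmark that always predicts the historical mean (here, zero)
actual = [0.012, -0.004, 0.007, 0.001, -0.009]
forecast = [0.010, -0.006, 0.005, 0.003, -0.007]
benchmark = [0.0] * len(actual)

print(round(oos_r2(actual, forecast, benchmark), 3))
```

A value near 1 means the model's errors are tiny relative to the benchmark's; a negative value means the benchmark wins.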

The white paper linked above and downloadable from University of Munich archives shows –

Ratios involving the current period opening price and the high or low price of the previous period are significant predictors of the current period high or low price for many stocks and stock indexes. This is illustrated with daily trading data from the S&P 500 index. Regressions specifying these “proximity variables” have higher explanatory and predictive power than benchmark autoregressive and “no change” models. This is shown with out-of-sample comparisons of MAPE, MSE, and the proportion of time models predict the correct direction or sign of change of daily high and low stock prices. In addition, predictive models incorporating these proximity variables show time varying effects over the study period, 2000 to February 2015. This time variation looks to be more than random and probably relates to investor risk preferences and changes in the general climate of investment risk.
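The comparison metrics named in the abstract – MAPE, MSE, and the proportion of correct direction-of-change predictions – can be computed in a few lines. This is a generic sketch, not the paper's code:

```python
def mape(actual, forecast):
    """Mean absolute percent error, in percent (actual values must be nonzero)."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def mse(actual, forecast):
    """Mean squared error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def direction_accuracy(actual, forecast):
    """Proportion of periods in which the forecast change from the previous
    actual value has the same sign as the realized change."""
    hits = sum(1 for prev, a, f in zip(actual, actual[1:], forecast[1:])
               if (a - prev) * (f - prev) > 0)
    return hits / (len(actual) - 1)
```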

I wanted to provide interested readers with a spreadsheet containing the basic data and computations of this model, which I call the “proximity variable” model. The idea is that the key variables are ratios of nearby values.
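To illustrate the idea of ratios of nearby values in code – the exact predictor definitions are in the white paper, so treat these names and numbers as illustrative:

```python
def proximity_variables(open_t, high_prev, low_prev):
    """Ratios of the current period's opening price to the previous
    period's high and low -- the 'ratios of nearby values' idea."""
    return {
        "open_to_prev_high": open_t / high_prev,
        "open_to_prev_low": open_t / low_prev,
    }

# made-up S&P 500-scale prices for illustration
ratios = proximity_variables(open_t=2101.5, high_prev=2110.0, low_prev=2090.0)
```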

And this is sort of an experiment, since I have not previously put up a spreadsheet for downloading on this blog. And please note the spreadsheet data linked below is somewhat different than the original data for the white paper, chiefly by having more recent observations. This does change the parameter estimates for the whole sample, since the paper shows we are in the realm of time-varying coefficients.

So here goes. Check out this link. PVARresponse

Of course, no spreadsheet is totally self-explanatory, so a few words.

First, the price data (open, high, low, etc) for the S&P 500 come from Yahoo Finance, although the original paper used other sources, too.

Secondly, the data matrix for the regressions is highlighted in light blue. The first few rows of this data matrix include the formulas with later rows being converted to numbers, to reduce the size of the file.

If you look in column K, below about row 1720, you will find out-of-sample regression forecasts, created using data from the immediately preceding trading day and earlier, together with current day opening price ratios.

There are 35 cases, I believe, in which the high of the day and the opening price are the same. These can easily be eliminated in calculating any metrics, and doing so in fact increases the OOS R2.
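Dropping those tie cases before computing a metric is a one-liner; here is a sketch with made-up numbers standing in for the spreadsheet columns:

```python
# hypothetical stand-ins for the spreadsheet's actual-high, forecast,
# and opening-price columns
actual_highs = [101.0, 102.5, 100.0, 103.2]
forecasts = [100.9, 102.0, 100.3, 103.1]
opens = [100.5, 102.5, 99.0, 103.0]  # day 2: high equals open

# keep only the days where the high differs from the open
kept = [(h, f) for h, f, o in zip(actual_highs, forecasts, opens) if h != o]
```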

I’m sympathetic with readers who develop a passion to “show this guy to be all wrong.” I’ve been there, and it may help to focus on computational matters.

However, there is just no question but that this approach is novel, and beats both No Change forecasts and first-order autoregressive forecasts (see the white paper) by a considerable amount.

I personally think these ratios are closely watched by some in the community of traders, and that other price signals motivating trades are variously linked with these variables.

My current research goes further than outlined in the white paper – a lot further. At this point, I am tempted to suggest we are looking at a new paradigm in predictability of stock prices. I project “waves of predictability” will be discovered in the movement of ensembles of security prices. These might be visualized like the wave at a football game, if you will. But the basic point is that I reckon we can show how early predictions of these prices changes are self-confirming to a degree, so emerging signs of the changes being forecast in fact intensify the phenomena being predicted.

Think big.

Keep the comments coming.

One-Month-Ahead Stock Market Forecasts

I have been spending a lot of time analyzing stock market forecast algorithms I stumbled on several months ago which I call the New Proximity Algorithms (NPA’s).

There is a white paper on the University of Munich archive called Predictability of the Daily High and Low of the S&P 500 Index. This provides a snapshot of the NPA at one stage of development, and is rock solid in terms of replicability. For example, an analyst replicated my results with Python, and I’ll probably provide his code here at some point.

I now have moved on to longer forecast periods and more complex models, and today want to discuss month-ahead forecasts of high and low prices of the S&P 500 for this month – June.

Current Month Forecast for S&P 500

For the current month – June 2015 – things look steady, with no topping out or crash in sight.

With opening price data from June 1, the NPA month-ahead forecast indicates a high of 2144 and a low of 2030. The forecast high is slightly above the May 2015 high of 2,134.72, while the forecast low is below the May low of 2,067.93.

But, of course, a week of data for June already is in, so, strictly speaking, we need a three week forecast, rather than a forecast for a full month ahead, to be sure of things. And, so far during June, daily high and low prices already have approached the predicted values.

In the interests of gaining better understanding of the model, however, I am going to “talk this out” without further computations at this moment.

So, one point is that the model for the low is less reliable than the high price forecast on a month-ahead basis. Here, for example, is the track record of the NPA month-ahead forecasts for the past 12 months or so with S&P 500 data.


The forecast model for the high tracks along with the actuals within around 1 percent forecast error, plus or minus. The forecast model for the low, however, has a big miss with around 7 percent forecast error in late 2014.

This sort of “wobble” for the NPA forecast of low prices is not unusual, as the following chart, showing backtests to 2003, shows.


What’s encouraging is the NPA model for the low price adjusts quickly. If large errors signal a new direction in price movement, the model catches that quickly. More often, the wobble in the actual low prices seems to be transitory.

Predicting Turning Points

One reason why the NPA monthly forecast for June might be significant is that the underlying method does a good job of predicting major turning points.

If a crash were coming in June, it seems likely, based on backtesting, that the model would signal something more than a slight upward trend in both the high and low prices.

Here are some examples.

First, the NPA forecast model for the high price of the S&P 500 caught the turning point in 2007 when the market began to go into reverse.


But that is not all.

The NPA model for the month-ahead high price also captures a more recent reversal in the S&P 500.



Also, the model for the low did capture the bottom in the S&P 500 in 2009, when the direction of the market changed from decline to increase.


This type of accuracy in timing in forecast modeling is quite remarkable.

It’s something I also saw earlier with the Hong Kong Hang Seng Index, but which seemed at that stage of model development to be confined to Chinese market data.

Now I am confident the NPA forecasts have some capability to predict turning points quite widely across many major indexes, ETF’s, and markets.

Note that all the charts shown above are based on out-of-sample extrapolations of the NPA model. In other words, one set of historical data are used to estimate the parameters of the NPA model, and other data, outside this sample, are then plugged in to get the month-ahead forecasts of the high and low prices.

Where This Is Going

I am compiling materials for presentations relating to the NPA, its capabilities, its forecast accuracy.

The NPA forecasts, as the above exhibits show, work well when markets are going down or changing direction, as well as in steady periods of trending growth.

But don’t mistake my focus on these stock market forecasting algorithms for a last minute conversion to the view that nothing but the market is important. In fact, a lot of signals from business and global data suggest we could be in store for some big changes later in 2015 or in 2016.

What I want to do, I think, is understand how stock markets function as sort of prisms for these external developments – perhaps involving Greek withdrawal from the Eurozone, major geopolitical shifts affecting oil prices, and the onset of the crazy political season in the US.

Five Day Forecasts of High and Low for QQQ, SPY, GE, and MSFT – Week of May 11-15

Here are high and low forecasts for two heavily traded exchange traded funds (ETF’s) and two popular stocks. Like the ones in preceding weeks, these are for the next five trading days, in this case Monday through Friday May 11-15.


The up and down arrows indicate the direction of change from last week – for the high prices only, since the predictions of lows are a new feature this week.

Generally, these prices are essentially “moving sideways” or with relatively small changes, except in the case of SPY.

For the record, here is the performance of previous forecasts.


Strong disclaimer: These forecasts are provided for information and scientific purposes only. This blog accepts no responsibility for what might happen, if you base investment or trading decisions on these forecasts. What you do with these predictions is strictly your own business.

Incidentally, let me plug the recent book by Andrew W. Lo and A. Craig MacKinlay – A Non-Random Walk Down Wall Street, from Princeton University Press and available as an e-book.

I’ve been reading an earlier book which Andrew Lo co-authored, The Econometrics of Financial Markets.

What I especially like in these works is the insistence that statistically significant autocorrelations exist in stock prices and stock returns. They also present multiple instances in which stock prices fail tests for being random walks, and establish a degree of predictability for these time series.

Again, almost all the focus of work in the econometrics of financial markets is on closing prices and stock returns, rather than predictions of the high and low prices for periods.

Links May 10, 2015

I start these Links with how the polls in the UK Election fell on their face, and why. The cell phone is somewhat implicated and, almost by association, I move on to the Internet of Everything (IoE), then to thoughts on how the Internet and artificial intelligence (AI) are shaping things.

It’s important to keep things loose and open on occasion, since the world itself doesn’t show tremendous closure, but is open, and evolving.

Election Poll Predictions In La-La Land

Nate Silver, the celebrity forecaster heading up FiveThirtyEight, had a big miss in calling the recent British Election (See What We Got Wrong In Our 2015 U.K. General Election Model and Nate Silver: Polls are failing us).


A similar misfire happened in the recent Israeli elections, where Netanyahu won by a significant margin in an election predicted to be neck-and-neck.

The cell phone may be partly to blame, as noted in British polling flop prompts global reassessments

..changes in communications are threatening the viability of public election polling in many developed countries where the landline phone was once a reliable medium for representative surveys.

This is going to be a big forecasting issue in the upcoming General Elections in the US.

The Internet of Everything (IoE)

From time to time, Cisco Systems produces projections and forecasts of Internet traffic volumes (presumably to some extent on its equipment). Now there is the Internet of Everything (IoE), a sort of expansion of the “internet of things.”


Internet of Everything: A $4.6 Trillion Public-Sector Opportunity

Peter Diamandis writes,

..Imagine a world in which everything is connected and packed with sensors. 50+ billion connected devices, loaded with a dozen or more sensors, will create a trillion-sensor ecosystem. These devices will create what I call a state of perfect knowledge, where we’ll be able to know what we want, where we want, when we want. Combined with the power of data mining and machine learning, the value that you can create and the capabilities you will have as an individual and as a business will be extraordinary.

Here are some examples posted by Vincent Granville at Data Science Central.

◾Retail: Beyond knowing what you purchased, stores will monitor your eye gaze, knowing what you glanced at… what you picked up and considered, and put back on the shelf. Dynamic pricing will entice you to pick it up again.

◾City Traffic: Cars looking for parking cause 40% of traffic in city centers. Parking sensors will tell your car where to find an open spot.

◾Lighting: Streetlights and house lights will only turn on when you’re nearby.

◾Dynamic pricing: In the future, everything has dynamic pricing where supply and demand drives pricing. Uber already knows when demand is high, or when I’m stuck miles from my house, and can charge more as a result.

◾Transportation: Self-driving cars and IoE will make ALL traffic a thing of the past.

◾Healthcare: You will be the CEO of your own health. Wearables will be tracking your vitals constantly, allowing you and others to make better health decisions.

◾Forests: With connected sensors placed on trees, you can make urban forests healthier and better able to withstand — and even take advantage of — the effects of climate change.

◾Office Furniture: Software and sensors embedded in office furniture are being used to improve office productivity, ergonomics and employee health.

◾Invisibles: Forget wearables, the next big thing is sensor-based technology that you can’t see, whether they are in jewelry, attached to the skin like a bandage, or perhaps even embedded under the skin or inside the body. By 2017, 30% of wearables will be “unobtrusive to the naked eye,” according to market researcher Gartner.

Daniel Kraft, a physician, is a name to watch in this area – @Daniel_Kraft.

Impact of Artificial Intelligence (AI)

Generally, the under-the-radar spread of AI – in cell phone and tablet features such as Siri or Google’s Now, which, incidentally, may be pulling ahead in terms of sheer accuracy – meets the criteria of a technology that can fundamentally change things.

It’s so easy to drive along and ask Siri for directions, or where a good restaurant is. And people focus on their cell phones in public places. There’s even the cartoon about a couple out on a date texting each other across the table.

The impact of technology on society has always been one of my favorite topics. For more than a decade, the Internet and emergent IT companies have triggered huge, on-the-ground changes, possibly not all good. The absorption of advertising revenues by Google has been dramatic, and a game-changer for newspapers and magazines (print technology). Online book sales put Borders Books out of business and have impacted bookstores everywhere. The music business has changed forever, with singers and bands now almost wholly reliant on tours and live audiences for real revenue, with record and song sales contributing only minor funds to most.

I’d be interested in adding to this list, if readers have thoughts on this.

Some Comments on Forecasting High and Low Stock Prices

I want to pay homage to Paul Erdős, the eccentric Hungarian-British-American-Israeli mathematician, whom I saw lecture a few years before his death. Erdős kept producing work in mathematics into his 70’s and 80’s – showing this is quite possible. Of course, he took amphetamines and slept on people’s couches while he was doing this work in combinatorics, number theory, and probability.


In any case, having invoked Erdős, let me offer comments on forecasting high and low stock prices – a topic which seems to be terra incognita, for the most part, to financial research.

First, let’s take a quick look at a chart showing the maximum prices reached by the exchange traded fund QQQ over a critical period during the last major financial crisis in 2008-2009.


The graph charts five series representing QQQ high prices over periods extending from 1 day to 40 days.

The first thing to notice is that the variability of these time series decreases as the period for the high increases.

This suggests that forecasting the 40 day high could be easier than forecasting the high price for, say, tomorrow.

While this may be true in some sense, I want to point out that my research is really concerned with a slightly different problem.

This is forecasting ahead by the interval for the maximum prices. So, rather than a one-day-ahead forecast of the 40 day high price (which would include 39 known possible high prices), I forecast the high price which will be reached over the next 40 days.

This problem is better represented by the following chart.


This chart shows the high prices for QQQ over periods ranging from 1 to 40 days, sampled at what you might call “40 day frequencies.”
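In code terms, sampling at a k-day frequency just means taking the maximum over consecutive, non-overlapping k-day windows. A sketch, with hypothetical prices:

```python
def period_highs(prices, k):
    """Maximum price within each consecutive, non-overlapping k-day window;
    a trailing window shorter than k days is dropped."""
    return [max(prices[i:i + k]) for i in range(0, len(prices) - k + 1, k)]

# ten days of hypothetical prices, sampled at a 5-day frequency
highs = period_highs([1, 3, 2, 5, 4, 7, 6, 9, 8, 2], 5)
```

Each element of the result is the target variable for one forecast interval, so forecasting it never mixes in known prices from inside the interval.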

Now I am not quite going to 40 trading day ahead forecasts yet, but here are results for backtests of the algorithm which produces 20-trading-day-ahead predictions of the high for QQQ.


The blue line shows the predictions for the QQQ high, and the orange line indicates the actual QQQ highs for these (non-overlapping) 20 trading day intervals. As you can see, the absolute percent errors – the grey bars – are almost all less than 1 percent.

Random Walk

Now, these results are pretty good, and the question arises – what about the random walk hypothesis for stock prices?

Recall that a simple random walk can be expressed by the equation x_t = x_{t-1} + ε_t, where ε_t is conventionally assumed to be distributed N(0, σ²) or, in other words, as a normal distribution with zero mean and constant variance σ².

An interesting question is whether the maximum prices for a stock whose prices follow a random walk also can be described, mathematically, as a random walk.

This is elementary, when we consider that any two observations in such a series can be connected as x_{t+k} = x_t + ω, where ω is Gaussian with a variance that grows with the spacing parameter k – for i.i.d. N(0, σ²) steps, ω is distributed N(0, kσ²).
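As a quick check on that claim, here is a simulation sketch: for a random walk with N(0, 1) steps, the sample variance of the k-step increments comes out close to k.

```python
import random
import statistics

rng = random.Random(42)

# simulate a random walk x_t = x_{t-1} + e_t with e_t ~ N(0, 1)
walk = [0.0]
for _ in range(100_000):
    walk.append(walk[-1] + rng.gauss(0.0, 1.0))

def k_step_increments(path, k):
    """Non-overlapping increments x_{t+k} - x_t along the path."""
    return [path[i + k] - path[i] for i in range(0, len(path) - k, k)]

for k in (1, 5, 20):
    v = statistics.pvariance(k_step_increments(walk, k))
    print(k, round(v, 2))  # variance should be roughly k
```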

From this it follows that the success of the methods producing these predictions or forecasts of the high of QQQ over periods of several trading days is also strong evidence against the underlying QQQ series being a random walk, even one with heteroskedastic errors.

That is, I believe the predictability demonstrated for these series is more than a matter of cointegration relationships.

Where This is Going

While demonstrating the above point could really rock the foundations of finance theory, I’m more interested, for the moment, in exploring the extent of what you can do with these methods.

Very soon I’m going to post on how these methods may provide signals as to turning points in stock market prices.

Stay tuned, and thanks for your comments and questions.

Erdős picture from Encyclopaedia Britannica

Update and Extension – Weekly Forecasts of QQQ and Other ETF’s

Well, the first official forecast rolled out for QQQ last week.

It did relatively well. Applying methods I have been developing for the past several months, I predicted the weekly high for QQQ last week at 108.98.

In fact, the high price for QQQ for the week was 108.38, reached Monday, April 13.

This means the forecast error in percent terms was 0.55%.
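For anyone verifying the arithmetic:

```python
# last week's QQQ forecast high vs. the realized high
forecast, actual = 108.98, 108.38
pct_error = 100 * abs(forecast - actual) / actual
print(round(pct_error, 2))  # 0.55
```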

It’s possible to look more comprehensively at the likely forecast errors of my approach through backtesting.

Here is a chart showing backtests for the “proximity variable method” for the QQQ high price for five day trading periods since the beginning of 2015.


The red bars are errors, and, from their axis on the right, you can see most of these are below 0.5%.

This is encouraging, and there are several adjustments I want to explore which may improve forecasting performance beyond this level of accuracy.

So here is the forecast of the high prices that will be reached by QQQ and SPY for the week of April 20-24.


As you can see, I’ve added SPY, an ETF tracking the S&P500.

I put this up on Businessforecastblog because I seek to make a point – namely, that I believe methods I have developed can produce much more accurate forecasts of stock prices.

It’s often easier and more compelling to apply forecasting methods and show results, than it is to prove theoretically or otherwise argue that a forecasting method is worth its salt.

Disclaimer –  These forecasts are for informational purposes only. If you make investments based on these numbers, it is strictly your responsibility. Businessforecastblog is not responsible or liable for any potential losses investors may experience in their use of any forecasts presented in this blog.

Well, I am working on several stock forecasts to add to projections for these ETF’s – so will expand this feature in forthcoming Mondays.

Let’s Get Real Here – QQQ Stock Price Forecast for Week of April 13-17

The thing I like about forecasting is that it is operational, rather than merely theoretical. Of course, you are always wrong, but the issue is “how wrong?” How close do the forecasts come to the actuals?

I have been toiling away developing methods to forecast stock market prices. Through an accident of fortune, I have come upon an approach which predicts stock prices more accurately than thought possible.

After spending hundreds of hours over several months, I am ready to move beyond “backtesting” to provide forward-looking forecasts of key stocks, stock indexes, and exchange traded funds.

For starters, I’ve been looking at QQQ, the PowerShares QQQ Trust, Series 1.

Invesco describes this exchange traded fund (ETF) as follows:

PowerShares QQQ™, formerly known as “QQQ” or the “NASDAQ-100 Index Tracking Stock®”, is an exchange-traded fund based on the Nasdaq-100 Index®. The Fund will, under most circumstances, consist of all of the stocks in the Index. The Index includes 100 of the largest domestic and international nonfinancial companies listed on the Nasdaq Stock Market based on market capitalization. The Fund and the Index are rebalanced quarterly and reconstituted annually.

This means, of course, that QQQ has been tracking some of the most dynamic elements of the US economy, since its inception in 1999.

In any case, here is my forecast, along with tracking information on the performance of my model since late January of this year.


The time of this blog post is the morning of April 13, 2015.

My algorithms indicate that the high for QQQ this week will be around $109 or, more precisely, $108.99.

So this is, in essence, a five day forecast, since this high price can occur in any of the trading days of this week.

The chart above shows backtests for the algorithm for ten weeks. The forecast errors are all less than 0.65% over this history with a mean absolute percent error (MAPE) of 0.34%.
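For reference, MAPE is simply the average of those absolute percent errors. A one-liner, assuming paired lists of forecasts and actuals:

```python
def mape(forecasts, actuals):
    """Mean absolute percent error: average of |forecast - actual| / actual,
    expressed in percent."""
    return sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals) * 100.0
```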

So that’s what I have today. Count on succeeding installments, looking back and forward, at the beginning of each of the next several weeks (Mondays), insofar as my travel schedule allows.

Also, my initial comments on this post appear to offer a dig against theory, but that would be unfair, really, since “theory” – at least the theory of new forecasting techniques and procedures – has been very important in my developing these algorithms. I have examined residuals more or less as a gold miner examines the residue in his pan.

I have also considered issues related to the underlying distribution of stock prices and stock returns – NOTE TO THE UNINITIATED – STOCK PRICES ARE NOT NORMALLY DISTRIBUTED. Indeed, there is almost nothing about stock prices or stock returns which follows the normal probability distribution, and I think this has been a huge failing of conventional finance, the Black-Scholes model, and the like.

So theory is important. But you can’t stop there.

This should be interesting. Stay tuned. I will add other securities in coming weeks, and provide updates of QQQ forecasts.

Readers interested in the underlying methods can track back on previous blog posts (for example, Pvar Models for Forecasting Stock Prices or Time-Varying Coefficients and the Risk Environment for Investing).

Peer-to-Peer Lending – Disruptive Innovation

Today, I chatted with Emmanuel Marot, CEO and Co-founder at LendingRobot.

We were talking about stock market forecasting, for the most part, but Marot’s peer to peer (P2P) lending venture is fascinating.


According to Gilad Golan, another co-founder of LendingRobot, interviewed in GeekWire’s Startup Spotlight in May of last year,

With over $4 billion in loans issued already, and about $500 million issued every month, the peer lending market is experiencing phenomenal growth. But that’s nothing compared to where it’s going. The market is doubling every nine months. Yet it is still only 0.2 percent of the overall consumer credit market today.

And, yes, P2P lending is definitely an option for folks with less-than-perfect credit.

In addition to lending to persons with credit scores lower than currently acceptable to banks (700 or so), P2P lending can offer lower interest rates and larger loans, because of lower overhead costs and other efficiencies.

LendIt USA is scheduled for April 13-15, 2015 in New York City, and features luminaries such as Lawrence Summers, former head of the US Treasury, as well as executives in some leading P2P lending companies (only a selection shown).


Lending Club and OnDeck went public last year and boast valuations of $9.5 and $1.5 billion, respectively.

Topics at the Lendit USA Conference include:

◾ State of the Industry: Today and Beyond

◾ Lending to Small Business

◾ Buy Now! Pay Later! – Purchase Finance meets P2P

◾ Working Capital for Companies through invoice financing

◾ Real Estate Investing: Equity, Debt and In-Between

◾ Big Money Talks: the institutional investor panel

◾ Around the World in 40 minutes: the Global Lending Landscape

◾ The Giant Overseas: Chinese P2P Lending

◾ The Support Network: Service Providers for a Healthy Ecosystem

Peer-to-peer lending is small in comparison to the conventional banking sector, but has the potential to significantly disrupt conventional banking with its marble pillars, spacious empty floors, and often somewhat decorative bank officers.

By eliminating the need for traditional banks, P2P lending is designed to improve efficiency and unnecessary frictions in the lending and borrowing processes. P2P lending has been recognised as being successful in reducing the time it takes to process these transactions as compared to the traditional banking sector, and also in many cases costs are reduced to borrowers. Furthermore in the current extremely low interest-rate environment that we are facing across the globe, P2P lending provides investors with easy access to alternative venues for their capital so that their returns may be boosted significantly by the much higher rates of return available on the P2P projects on offer. The P2P lending and investing business is therefore disrupting, albeit moderately for the moment, the traditional banking sector at its very core.

Peer-to-Peer Lending—Disruption for the Banking Sector?

Top photo of LendingRobot team from GeekWire.

Some Thoughts for Monday

There’s a kind of principle in invention and innovation which goes like this – often the originator of new ideas and approaches is a kind of outsider, stumbling on a discovery by pursuing avenues others thought, through training, would be fruitless. Or at least this innovator pursues a line of research outside of the mainstream – where accolades are being awarded.

You can make too much of this, but it does have wide applicability.

In science, for example, it’s the person from the out-of-the-way school who makes the important discovery, then gets recruited to the big time. I recall reading about the migration of young academics from lesser schools to major institutions – Berkeley and the Ivy League – after an important book or discovery.

And, really, a lot of information technology (IT) was launched by college drop-outs, such as the estimable Mr. Bill Gates, or the late Steve Jobs.

This is a happy observation in a way, because it means the drumbeat of bad news from, say, the Ukrainian or Syrian fronts, or insight such as in Satyajit Das’ The Sum of All Our Fears! The Outlook for 2015, is not the whole story. There are “sideways movements” of change which can occur, precisely because they are not obvious to mainstream observers.

Without innovation, our goose is cooked.

I’m going to write more on innovation this week, detailing some of my more recent financial and stock market research under that heading.

But for now, let me comment on  the “libertarian” edge that accompanies a lot of innovation, these days.

The new new peer-to-peer (P2P) “sharing” or social access services provide great examples.

Uber, Lyft, Airbnb – these companies provide access to rides, vehicles, and accommodations. They rely on bidirectional rating systems, background checks, frictionless payment systems, and platforms that encourage buyers and sellers to get to know each other face-to-face before doing business. With venture funding from Wall Street and Silicon Valley, their valuations have risen dramatically. Uber’s valuation has risen to an estimated $40 billion, making it one of the 150 biggest companies in the world – larger than Delta, FedEx or Viacom. Airbnb coordinates lodging for an estimated 425,000 persons a night, and has an estimated valuation of $13.5 billion, almost half as much as 96-year-old Hilton Worldwide.

There are increased calls for regulation of these companies, as they bite into markets dominated by the traditional hotel and hospitality sector, or taxi-cab companies. Clearly, raising hundreds of millions in venture capital can impart hubris to top management, as in the mad threats coming from an Uber executive against journalists who report, for example, sexual harassment of female customers by Uber drivers.

No one should attempt to stop the push-and-pull of regulation and disruptive technology, however. Innovations in P2P platforms, pioneered by eBay, pave the way for cultural and institutional innovation. At the same time, I feel better about accepting a ride within the Uber system if I know the driver is insured and has a safe vehicle.