# Portfolio Analysis

Greetings again. Took a deep dive into portfolio analysis for a colleague.

Portfolio analysis, of course, has been deeply influenced by Modern Portfolio Theory (MPT) and the work of Harry Markowitz and Robert Merton, to name a couple of the giants in this field.

Conventionally, investment risk is associated with the standard deviation of returns. So one might visualize the dispersion of actual returns for investments around expected returns, as in the following chart.

Here, two investments have the same expected rate of return, but different standard deviations. Viewed in isolation, the green curve indicates the safer investment.

More directly relevant for portfolios are curves depicting the distribution of typical returns for stocks and bonds, which can be portrayed as follows.

Now the classic portfolio is composed of 60 percent stocks and 40 percent bonds.

Where would its expected return be? Well, the expected value of a sum of random variables is the sum of their expected values. There is an algebra of expectations, built around the operator E(.), to express this. So we have E(.6S+.4B)=.6E(S)+.4E(B), since multiplying a random variable by a constant scales its expectation by that factor. Here, of course, S stands for “stocks” and B for “bonds.”
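To make this concrete, the portfolio expectation can be computed directly. A minimal Python sketch, where the 10 percent and 4 percent expected returns are illustrative assumptions, not data:

```python
# Expected return of the 60/40 portfolio: E(.6S + .4B) = .6E(S) + .4E(B)
# The 10% and 4% expected returns below are illustrative assumptions, not data.
e_stocks = 0.10
e_bonds = 0.04

e_portfolio = 0.6 * e_stocks + 0.4 * e_bonds
print(round(e_portfolio, 3))  # 0.076 – between the bond and stock expectations
```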

Thus, the expected return for the classic 60/40 portfolio is less than the returns that could be expected from stocks alone.

But the benefit here is that the risks have been reduced, too.

Thus, the variance of the 60/40 portfolio usually is less than the variance of a portfolio composed strictly of stocks.

One way this happens is when the correlation, or covariance, of stocks and bonds is negative, as it has been in many periods over the last century. High interest rates, for example, tend to mean slow or negative economic growth, but can be associated with high returns on bonds.

Analytically, this is because the variance of a weighted sum of two random variables is the sum of their variances scaled by squared weights, plus twice the product of the weights and the covariance: Var(.6S+.4B) = .36Var(S) + .16Var(B) + 2(.6)(.4)Cov(S,B).
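A short Python sketch, using assumed volatilities and a negative correlation purely for illustration, shows how the covariance term pulls the 60/40 portfolio’s risk below the weighted stock risk:

```python
import math

# Illustrative assumptions: stock sd 15%, bond sd 5%, correlation -0.2
sd_s, sd_b, rho = 0.15, 0.05, -0.2
cov_sb = rho * sd_s * sd_b
a, b = 0.6, 0.4  # 60/40 weights

# Var(aS + bB) = a^2 Var(S) + b^2 Var(B) + 2ab Cov(S, B)
var_port = a**2 * sd_s**2 + b**2 * sd_b**2 + 2 * a * b * cov_sb

# The negative covariance term lowers risk below the zero-covariance case
var_no_cov = a**2 * sd_s**2 + b**2 * sd_b**2
assert var_port < var_no_cov

print(round(math.sqrt(var_port), 4))  # portfolio sd, below 0.6 * 0.15 = 0.09
```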

Thus, algebra and probability facts underpin arguments for investment diversification. Pick investments which are not perfectly correlated in their reaction to events, and your chances of avoiding poor returns and disastrous losses can be improved.

Implementing MPT

When there are more than two assets, you need computational help to implement MPT portfolio allocations.

For a general discussion of developing optimal portfolios and the efficient frontier see http://faculty.washington.edu/ezivot/econ424/portfoliotheorymatrixslides.pdf

There are associated R programs and a guide to using Excel’s Solver with this University of Washington course.

Also see Package ‘Portfolio’.

These programs help you identify the minimum variance portfolio, based on a group of assets and histories of their returns. Also, it is possible to find the minimum variance combination from a designated group of assets which meets a target rate of return, if, in fact, that is feasible with the assets in question. You also can trace out the efficient frontier – combinations of assets mapped in a space of returns and variances. Each portfolio on this curve has the minimum variance among all combinations, from your designated group of assets, that achieve its expected rate of return.
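As a sketch of the underlying computation: given a covariance matrix Σ of asset returns, the global minimum variance weights have the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1), allowing short positions. A Python sketch, where the three-asset covariance matrix is an illustrative assumption, not an estimate from data:

```python
import numpy as np

# Illustrative (assumed) covariance matrix for three assets
sigma = np.array([
    [0.0400, 0.0060, -0.0020],
    [0.0060, 0.0100,  0.0010],
    [-0.0020, 0.0010, 0.0025],
])

ones = np.ones(3)
inv = np.linalg.inv(sigma)

# Global minimum variance weights: w = inv(Sigma) 1 / (1' inv(Sigma) 1)
w = inv @ ones / (ones @ inv @ ones)
print(w.round(3))  # weights sum to 1; short positions are allowed

# No other fully invested mix of these assets has lower variance
var_min = w @ sigma @ w
w_alt = np.array([1 / 3, 1 / 3, 1 / 3])
assert var_min <= w_alt @ sigma @ w_alt
```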

One of the governing ideas is that an individual investor might travel along this efficient frontier as they age – moving from higher-risk portfolios when young to more secure, lower-risk portfolios later in life.

Issues

As someone who believes you don’t really know something until you can compute it, it interests me that there are computational issues with implementing MPT.

I find, for example, that the allocations are quite sensitive to small changes in expected returns, variances, and the underlying covariances.

One of the more intelligent, recent discussions with suggested “fixes” can be found in An Improved Estimation to Make Markowitz’s Portfolio Optimization Theory Users Friendly and Estimation Accurate with Application on the US Stock Market Investment.

The more fundamental issue, however, is that MPT appears to assume that stock returns are normally distributed, when everyone after Mandelbrot should know this is hardly the case.

Again, there is a vast literature, but a useful approach seems to be outlined in Modelling in the spirit of Markowitz portfolio theory in a non-Gaussian world. These authors use MPT algorithms as the start of a search for portfolios which minimize value-at-risk, instead of variances.

Finally, if you want to cool off and still stay on point, check out the 2014 Annual Report of Berkshire Hathaway, and, especially, the Chairman’s Letter. That’s Warren Buffett who has truly mastered an old American form which I believe used to be called “cracker barrel philosophy.” Good stuff.

# Investment and Other Bank Macro Forecasts and Outlooks – 2

Today, I take a brief look at economic forecasts available from Morgan Stanley, Wells Fargo, and the French concern Credit Agricole. As readers will note, Morgan Stanley has a lively discussion of the implications of the US midterms, while Wells Fargo has a very comprehensive and easy-to-access series of economic projections, ranging from weekly, to monthly and annual. Credit Agricole (apologies for omitting the accent mark) is the first European bank profiled in these brief looks, and has quarterly updates of fairly comprehensive economic projections across a range of variables.

And I might mention that these publications, which date back into September in many cases, are interesting to review both because of their projections and because of what they miss – notably the drop in oil prices and aggressive new round of quantitative easing by the Bank of Japan.

The fact these developments are missed in these September and even later releases qualifies them as genuine surprises. Thus, their impacts are not discounted in past market developments, and, going forward, oil prices and Japan QE could exert significant, discrete effects on markets.

Morgan Stanley

According to the Federal Reserve’s National Information Center, Morgan Stanley is the nation’s 6th largest bank.

The Global Investment Committee (GIC) Weekly for November 10 is notable for some straight talk on the implications of the US midterms, which Morgan Stanley sees as slightly pro-growth and positive for equities, with the constructive compromises characteristic of lame duck presidencies. I quote fairly extensively, because the frankness of the insights and suggestions is refreshing.

The maxim that gridlock in Washington is good for markets has certainly held true during the “do nothing” Congress of the past two years. Now, with the Republicans winning control of the Senate and adding 15 seats to their House majority, the outlook appears to be for more of the same. Happily for investors, an analysis going back to 1900 shows that equity markets have averaged annualized 15% returns when the Congress is controlled by Republicans and the White House by a Democrat.

Although many pundits have suggested that the GOP sweep creates a mandate, the Global Investment Committee (GIC) sees the results as a mandate for change in the functioning and compromise in Washington rather than the embrace of a specific agenda. On that score, unlike the deeply partisan divide between the House and the Senate of the last four years that prevented any compromise bills from getting off the Hill, legislation may actually get to the president’s desk. While President Obama will be free to veto, he is now playing for his legacy and may be apt to compromise on some issues.

The Republicans’ challenge is to demonstrate leadership and competence in governing, a task that will require corralling the Tea Party caucus and, as Morgan Stanley & Co. Chief US Economist Vincent Reinhart wrote last week, “sequencing priorities” in a constructive way. Lacking a coherent issue-driven platform, most Republicans simply ran against Obama. Party infighting or an immediate battle about the debt ceiling and budget authorizations would likely be disastrous for the GOP—and the markets. From the GIC’s perspective, a better result would be for Congress to focus on job-creating initiatives and not on eviscerating the Affordable Care Act (ACA).

Agreement should be easiest around initiatives involving the energy sector, where this year’s 25% decline in oil prices has been front and center. American energy independence is no longer a dream but a real prospect with profound geopolitical as well as economic consequences (see Chart of the Week, page 3). Heretofore, the Keystone XL pipeline, a six-year-old proposal to connect Canadian oil with US Gulf Coast refineries, has been stalled amid wrangling with environmentalists. We believe the pipeline is now likely to win approval, creating a large national infrastructure project. Similarly, the growth of US energy supply is likely to reignite a debate on oil exports, which have been banned since the Arab oil embargoes of the 1970s. With US dollar strength likely to crimp other exports, expanding energy exports is a way to maintain economic growth. There is likely to be similar debate about exports of liquefied natural gas as the US is the world’s largest and lowest-cost producer. We believe that energy exports would be a major beneficiary focus if the new Congress approves the Trans-Pacific Partnership, a free trade agreement that would give the president authority to negotiate deals with 11 Asian nations.

Beyond energy, we expect repeal of the medical-device tax; expansion of defense spending, which has been curtailed under sequestration; and a debate on corporate tax reform, especially given the noise around tax-driven international mergers. Revisions to the ACA, to the extent they are pursued, will likely focus on measures that impact the number of insured and thus, hospitals and managed-care companies. The employer mandate, which requires employers with more than 100 workers to make available health insurance for any employee working more than 30 hours per week, is most likely to be revised, in our view.

As a final note, a review of state and local ballot initiatives suggest that voters are far from embracing an ideological position on fiscal austerity. Minimum-wage increases were passed in each state where they were on the ballot as did several large new-money infrastructure projects in New York and California—a development that MS & Co. Municipals Strategist, Michael Zezas, notes will likely increase bond supplies in 2015.

It looks like the august Global Economic Forum is being published less frequently than in the past, the last edition being March 5 of this year.

Wells Fargo

Wells Fargo, according to Wikipedia, is –

an American multinational banking and financial services holding company which is headquartered in San Francisco, California, with “hubquarters” throughout the country… It is the fourth largest bank in the U.S. by assets and the largest bank by market capitalization…Wells Fargo is the second largest bank in deposits, home mortgage servicing, and debit cards. In 2011, Wells Fargo was the 23rd largest company in the United States.

The Wells Fargo website has a suite of forecasting reports, ranging from weekly, to monthly, to the big annual report, all downloadable in PDF format.

In October, the bank also released this video interview about their economic outlook.

In case you did not get time to watch that, one of the key graphics is the PCE deflator, which has been trending down recently, raising the spectre of deflation in the minds of some.

Credit Agricole

Credit Agricole is an international full-services banking company, headquartered in France, with historical ties to French farming.

Their website offers at least two quarterly macroeconomic forecasting publications.

The publication Economic and Financial Forecasts presents a series of tabular forecasts for interest rates, exchange rates and commodity prices, together with the Crédit Agricole Group’s central economic projections. This is a kind of “just the numbers ma’am report.”

Macro Prospects is more discursive and with short highlights on key countries, such as, in the September issue, Brazil and China.

I signed up for emails from Credit Agricole, announcing updates of these documents.

# Cycles -1

I’d like to focus on cycles in business and economic forecasting in the next several posts.

“Cycles” – in connection with business and economic time series – evoke the so-called business cycle.

Immediately after World War II, Burns and Mitchell offered the following characterization –

Business cycles are a type of fluctuation found in the aggregate economic activity of nations that organize their work mainly in business enterprises: a cycle consists of expansions occurring at about the same time in many economic activities, followed by similarly general recessions, contractions, and revivals which merge into the expansion phase of the next cycle.

Earlier, several types of business and economic cycles were hypothesized, based on their average duration. These included the 3 to 4 year Kitchin inventory investment cycle, a 7 to 11 year Juglar cycle associated with investment in machines, the 15 to 25 year Kuznets cycle, and the controversial Kondratieff cycle of from 48 to 60 years.

Industry Cycles

I have looked at industry cycles relating to movements of sales and prices in semiconductor and computer markets. While patterns may be changing, there is clear evidence of semi-regular pulses of activity in semiconductors and related markets. These stochastic cycles probably are connected with Moore’s Law and the continuing thrust of innovation and new product development.

Methods

Spectral analysis, VAR modeling, and standard autoregressive analysis are tools for developing evidence for time series cycles. STAMP, now part of the OxMetrics suite of software, fits cycles with time-varying parameters.

Sometimes one hears of estimations in the time domain moving into the frequency domain. Time series, as normally graphed with time on the horizontal axis, are in the “time domain.” This is where VAR and autoregressive models operate. The frequency domain is where we get indications of the periodicity of cycles and semi-cycles in a time series.
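A minimal Python sketch of the move to the frequency domain, using assumed synthetic data: the periodogram of a noisy series containing a 12-period cycle peaks at the matching frequency.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240
t = np.arange(n)

# Synthetic monthly series: a 12-period cycle buried in noise
series = np.sin(2 * np.pi * t / 12) + 0.5 * rng.standard_normal(n)

# Periodogram: squared magnitude of the discrete Fourier transform
freqs = np.fft.rfftfreq(n, d=1.0)       # in cycles per period
power = np.abs(np.fft.rfft(series)) ** 2

peak = freqs[1:][np.argmax(power[1:])]  # skip the zero-frequency (mean) term
print(1 / peak)  # recovered period, close to 12
```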

Cycles as Artifacts

There is something roughly analogous to spurious correlation in regression analysis in the identification of cyclical phenomena in time series. Eugen Slutsky, a Russian mathematical economist and statistician, wrote a famous “unknown” paper on how moving averages of random numbers can create the illusion of cycles. Thus, if we add or average together elements of a time series in a moving window, it is easy to generate apparently cyclical phenomena. This can be demonstrated with the digits in the irrational number π, for example, since the sequence of digits 1 through 9 in its expansion is roughly random.
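Slutsky’s experiment is easy to reproduce; a minimal Python sketch, smoothing pure white noise:

```python
import numpy as np

rng = np.random.default_rng(42)
noise = rng.standard_normal(500)

# 12-term moving average of pure white noise
window = 12
smoothed = np.convolve(noise, np.ones(window) / window, mode="valid")

def lag1_autocorr(x):
    # Correlation between the series and itself shifted one step
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# The raw noise has essentially no serial correlation; the moving
# average is strongly autocorrelated, so a plot of it shows apparent waves.
print(round(lag1_autocorr(noise), 2), round(lag1_autocorr(smoothed), 2))
```

The theoretical lag-one autocorrelation of a 12-term moving average of white noise is 11/12, even though nothing cyclical went in.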

Significance

Cycles in business have a sort of reassuring effect, it seems to me. And, of course, we are all very used to any number of periodic phenomena, ranging from the alternation of night and day to the phases of the moon, the tides, and the myriad of biological cycles.

As a paradigm, however, cycles probably used to be more important in business and economic circles than they are today. There is perhaps one exception, and that is in rapidly changing high tech fields, of which IT (information technology) is still in many respects a subcategory.

I’m looking forward to exploring some estimations, putting together some quantitative materials on this.

First post with my Android, so there are some minor items that need polishing – mainly how to embed links. It’s a complicated process, compared with MS Word and Windows.

In any case, there are a couple of fairly deep pieces here.

Enjoy.

A detailed exposé on how the market is rigged from a data-centric approach

We received trade execution reports from an active trader who wanted to know why his large orders almost never completely filled, even when the amount of stock advertised exceeded the number of shares wanted. For example, if 25,000 shares were at the best offer, and he sent in a limit order at the best offer price for 20,000 shares, the trade would, more likely than not, come back partially filled. In some cases, more than half of the amount of stock advertised (quoted) would disappear immediately before his order arrived at the exchange. This was the case, even in deeply liquid stocks such as Ford Motor Co (symbol F, market cap: \$70 Billion, NYSE DMM is Barclays). The trader sent us his trade execution reports, and we matched up his trades with our detailed consolidated quote and trade data to discover that the mechanism described in Michael Lewis’s “Flash Boys” was alive and well on Wall Street.

Did the Other Shoe Just Drop? Black Rock and PIMCO Sue Banks for \$250 Billion. Any award this size would destabilize the banking system.

Rand Paul eyes tech-oriented donors, geeks in Bay Area.  The libertarian wedge in a liberal-dem stronghold.

Predictive analytics at the World Cup – Goldman Sachs does a big face plant, predicting Brazil would win. Importance of crowd-sourcing.

A Hands-on Lesson in Return Forecasting Models. I’ve almost never seen a longer blog post, and it ends up dissing the predictive models it exhaustively covers. But I think you will want to bookmark this one, and return to it for examples and ideas.

Yellen Yap: Silliness, Outright Lies, and Some Refreshingly Accurate Reporting. Point of concord between libertarian free market advocates and progressive-left commentators.

# Predicting the Singularity, the Advent of Superintelligence

From thinking about robotics, automation, and artificial intelligence (AI) this week, I’m evolving a picture of the future – the next few years. I think you have to define a super-technological core, so to speak, and understand how the systems of production, communication, and control mesh and interpenetrate across the globe. And how this sets in motion multiple dynamics.

But then there is the “singularity” –  whose main publicizer is Ray Kurzweil, current Director of Engineering at Google. Here’s a particularly clear exposition of his view.

There’s a sort of rebuttal by Paul Root Wolpe.

Part of the controversy, as in many arguments, is a problem of definition. Kurzweil emphasizes a “singularity” of superintelligence of machines. For him, the singularity is at first the point at which the processes of the human brain will be well understood and thinking machines will be available that surpass human capabilities in every respect. Wolpe, on the other hand, emphasizes the “event horizon” connotation of the singularity – that point beyond which our technological powers will have become so immense that it is impossible to see beyond.

And Wolpe’s point about the human brain is probably well-taken. Think, for instance, of how decoding the human genome was supposed to unlock the secrets of genetic engineering, only to find that there were even more complex systems of proteins and so forth.

And the brain may be much more complicated than the current mechanical models suggest – a view espoused by Roger Penrose, English mathematical genius. Penrose advocates a quantum theory of consciousness. His point, made first in his book The Emperor’s New Mind, is that machines will never overtake human consciousness, because, in fact, human consciousness is, at the limit, nonalgorithmic. Basically, Penrose has been working on the idea that the brain is a quantum computer in some respect.

I think there is no question, however, that superintelligence in the sense of fast computation, fast assimilation of vast amounts of data, as well as implementation of structures resembling emotion and judgment – all these, combined with the already highly developed physical capabilities of machines, mean that we are going to meet some mojo smart machines in the next ten to twenty years, tops.

The dystopian consequences are enormous. Bill Joy, co-founder of Sun Microsystems, wrote famously about why the future does not need us. I think Joy’s singularity is a sort of devilish mirror image of Kurzweil’s – for Joy the singularity could be a time when nanotechnology, biotechnology, and robotics link together to make human life more or less impossible, or significantly at risk.

There is much more to say and think on this topic, to which I hope to return from time to time.

Meanwhile, I am reminded of Voltaire’s Candide who, at the end of pursuing the theories of Dr. Pangloss, concludes “we must cultivate our garden.”

# Estimation and Variable Selection with Ridge Regression and the LASSO

I’ve posted on ridge regression and the LASSO (Least Absolute Shrinkage and Selection Operator) some weeks back.

Here I want to compare them in connection with variable selection where there are more predictors than observations (“many predictors”).

1. Ridge regression does not really select variables in the many predictors situation. Rather, ridge regression “shrinks” all predictor coefficient estimates toward zero, based on the size of the tuning parameter λ. When ordinary least squares (OLS) estimates have high variability, ridge regression estimates of the betas may, in fact, produce lower mean square error (MSE) in prediction.

2. The LASSO, on the other hand, handles estimation in the many predictors framework and performs variable selection. Thus, the LASSO can produce sparse, simpler, more interpretable models than ridge regression, although neither dominates in terms of predictive performance. Both ridge regression and the LASSO can outperform OLS regression in some predictive situations – exploiting the tradeoff between variance and bias in the mean square error.

3. Ridge regression and the LASSO both involve penalizing OLS estimates of the betas. How they impose these penalties explains why the LASSO can “zero out” coefficient estimates, while ridge regression just keeps making them smaller. The objective functions for both procedures are given in An Introduction to Statistical Learning.
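Written out in the notation of An Introduction to Statistical Learning, the two objectives differ only in their penalty terms:

```latex
% Ridge regression: RSS plus a squared (l2) penalty
\min_{\beta} \; \sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\Big)^2
  + \lambda \sum_{j=1}^{p} \beta_j^2

% LASSO: RSS plus an absolute-value (l1) penalty
\min_{\beta} \; \sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\Big)^2
  + \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert
```

The squared (l2) penalty never forces a coefficient exactly to zero; the absolute-value (l1) penalty can.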

4. Both ridge regression and the LASSO, by imposing a penalty on the regression sum of squares (RSS), shrink the size of the estimated betas. The LASSO, however, can zero out some betas, since it tends to shrink the betas by fixed amounts as λ increases (down to the zero lower bound). Ridge regression, on the other hand, tends to shrink everything proportionally.
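This difference can be made concrete in a minimal Python sketch, under the simplifying assumption of an orthonormal design matrix (a textbook special case, not the general algorithm): ridge scales every OLS coefficient down by the same factor, while the LASSO subtracts a fixed amount and clips at zero.

```python
import numpy as np

# Closed forms with an orthonormal design (an assumed special case):
#   ridge: beta_j / (1 + lambda)                       – proportional shrinkage
#   lasso: sign(beta_j) * max(|beta_j| - lambda/2, 0)  – soft thresholding
beta_ols = np.array([3.0, -2.0, 0.3, -0.1, 0.05])  # hypothetical OLS estimates
lam = 1.0

beta_ridge = beta_ols / (1 + lam)
beta_lasso = np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lam / 2, 0.0)

print(beta_ridge)  # every entry shrunk, none exactly zero
print(beta_lasso)  # the three small entries zeroed, the large ones cut by 0.5
```

In the general case glmnet solves both problems by coordinate descent, but the qualitative behavior is the same.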

5. The tuning parameter λ in ridge regression and the LASSO usually is determined by cross-validation. Here are a couple of useful slides from Ryan Tibshirani’s Spring 2013 Data Mining course at Carnegie Mellon.

6. There are R programs which estimate ridge regression and LASSO models and perform cross-validation, recommended by these statisticians from Stanford and Carnegie Mellon. In particular, see glmnet at CRAN. MathWorks MATLAB also has routines to do ridge regression and estimate elastic net models.

Here, for example, is R code to estimate the LASSO.

```r
library(glmnet)

# x: predictor matrix, y: response; train/test: index vectors;
# grid: a vector of candidate lambda values (all assumed already defined)
lasso.mod = glmnet(x[train, ], y[train], alpha = 1, lambda = grid)
plot(lasso.mod)  # coefficient paths as lambda varies

# Choose lambda by cross-validation
set.seed(1)
cv.out = cv.glmnet(x[train, ], y[train], alpha = 1)
plot(cv.out)
bestlam = cv.out$lambda.min

# Test-set mean squared error at the chosen lambda
lasso.pred = predict(lasso.mod, s = bestlam, newx = x[test, ])
mean((lasso.pred - y.test)^2)

# Refit on the full data and inspect the nonzero coefficients
out = glmnet(x, y, alpha = 1, lambda = grid)
lasso.coef = predict(out, type = "coefficients", s = bestlam)[1:20, ]
lasso.coef[lasso.coef != 0]
```

What You Get

I’ve estimated quite a number of ridge regression and LASSO models, some with simulated data where you know the answers (see the earlier posts cited initially here) and other models with real data, especially medical or health data.

As a general rule of thumb, An Introduction to Statistical Learning notes,

..one might expect the lasso to perform better in a setting where a relatively small number of predictors have substantial coefficients, and the remaining predictors have coefficients that are very small or that equal zero. Ridge regression will perform better when the response is a function of many predictors, all with coefficients of roughly equal size.

The R package glmnet linked above is very flexible, and can accommodate logistic regression, as well as regression with continuous, real-valued dependent variables ranging from negative to positive infinity.

We’ve been struggling with a software glitch in WordPress, due to, we think, incompatibilities between plug-ins and a new version of the blogging software. It’s been pretty intense. The site has been fully up, but there was no possibility of new posts, not even a notice to readers about what was happening. All this started just before Christmas and ended, basically, yesterday.

So greetings. Count on daily posts as a rule, and I will get some of the archives accessible ASAP.

But, for now, a few words about my evolving perspective.

I came out of the trenches, so to speak, of sales, revenue, and new product forecasting, for enterprise information technology (IT) and, earlier, for public utilities and state and federal agencies. When I launched Businessforecastblog last year, my bias popped up in the secondary heading for the blog – with its reference to “data-limited contexts” – and in early posts on topics like “simple trending” and random walks.

I essentially believed that most business and economic time series are basically one form or another of random walks, and that exponential smoothing is often the best forecasting approach in an applied context. Of course, this viewpoint can be bolstered by reference to research from the 1980’s by Nelson and Plosser and the M-Competitions. I also bought into a lazy consensus that it was necessary to have more observations than explanatory variables in order to estimate a multivariate regression. I viewed segmentation analysis, so popular in marketing research, as a sort of diversion from the real task of predicting responses of customers directly, based on their demographics, firmagraphics, and other factors.

So the press of writing frequent posts on business forecasting and related topics has led me to learn a lot.

The next post to this blog, for example, will be about how “bagging” – from Bootstrap Aggregation – can radically reduce forecasting errors when there are only a few historical or other observations, but a large number of potential predictors. In a way, this provides a new solution to the problem of forecasting in data limited contexts.

This post also includes specific computations, in this case done in a spreadsheet. I’m big on actually computing stuff, where possible. I believe Elliot Shulman’s dictum, “you don’t really know something until you compute it.” And now I see how to include access to spreadsheets for readers, so there will be more of that.

Forecasting turning points is the great unsolved problem of business forecasting. That’s why I’m intensely interested in analysis of what many agree are asset bubbles. Bursting of the dot.com bubble initiated the US recession of 2001. Collapse of the housing market and exotic financial instrument bubbles in 2007 brought on the worst recession since World War II, now called the Great Recession. If it were possible to forecast the peak of various asset bubbles, as researchers such as Didier Sornette suggest, this would mean we would have some advance warning – perhaps only weeks, of course – of the onset of the next major business downturn.

Along the way, there are all sorts of interesting sidelights relating to business forecasting and more generally predictive analytics. In fact, it’s clear that in the era of Big Data, data analytics can contribute to improvement of business processes – things like target marketing for customers – as well as perform less glitzy tasks of projecting sales for budget formulation and the like.

Email me at cvj@economicdataresources.com if you want to receive PDF compilations on topics from the archives. I’m putting together compilations on New Methods and Asset Bubbles, for starters, in a week or so.