Friday 23 November 2018

Is trend following dead?

I get asked this question at least once a week. As those of you who have met me IRL ('in real life') will know, I have limited patience and I'm easily bored. I'm definitely bored of answering this question. This post is the last time I'll answer it.

There are broadly two ways to answer this question:


  • Looking at fundamental reasons why trend following is less likely to work
  • Some kind of statistical analysis



Are there fundamental reasons why trend following won't work any more?


Some people spend their entire lives opining about why this or that strategy no longer makes sense (google 'is value/momentum/xxx dead' and see how many results come back). Personally, I find that a very pointless occupation.

I've always felt it is very difficult to forecast the future, and your best bet is to maintain exposure to a diversified set of return factors. You'd be mad to have 100% of your capital exposed to trend following (it's about 15% across my whole portfolio). However, you'd be equally mad to have 0% of your capital exposed to it because you think it is dead. In this oft quoted recent interview, David Harding said he was cutting his fund's exposure to the strategy in half, to 25%.

When I have occasionally checked to see if exogenous conditions can be used to predict trading strategy returns I've found very weak effects, if anything at all, as in this post on the effect of QE on CTA returns. Also, trend following returns tend to have negative autocorrelation at the annual frequency (alluded to in this blog post), so bad years tend to be followed by better years.

Before my patience is tested to its limit, let me quickly discuss just two of the reasons why people think trend following is dead:


  • Strategy is overcrowded: possibly, but trend following is mostly a self reinforcing strategy. Unlike, say, relative value strategies, where profits get squeezed out when investors rush in, having more trend followers causes trends to last longer. Having said that, overcrowding is potentially problematic when trends end and numerous investors rush for the exits, especially as there are other players, like risk parity investors, whose behaviour will closely resemble trend followers (see February 2018). It's worth reading the excellent work of Dr. Robert Hillman on this subject.
  • World is unpredictable (see Trump, also Brexit): perhaps this is true, but this unpredictability also affects discretionary human traders - I doubt any human can predict what Trump is going to do next (that probably includes Trump). Also trend following as a strategy has been around a long time, and on average it's worked despite the fact that there have always been unpredictable factors in the world. I'd be more concerned about a strategy that worked really well in the Obama presidency, but hadn't been tested before that (especially as the Obama presidency was a strong bull market in stocks).

Can statistical analysis tell us if trend following is dead?


Statistical analysis is brilliant at telling us about the past. Less useful in telling us about the future. But perhaps it can tell us that trend following has definitely stopped working? I'm keener on this approach than thinking about fundamentals - because it's a useful exercise in understanding uncertainty. Let's find out.

First we need some data. I'm going to use (a) the SG Trend index, and (b) a back-test of trend following strategy returns. The advantage of (a) is that it represents actual returns by trend followers, whilst (b) goes back longer.

Here's the SG Trend index (monthly values; cumulated % returns, equivalent to a log scale):



It certainly looks like things get rather ugly after the middle of 2009.

Here's a backtest:

Backtest of three equally weighted EWMAC rules over 37 futures

And for comparison, a zoom of the backtest since 2000 to match the SG index:

Backtest of three equally weighted EWMAC rules over 37 futures, since 2000 only



The backtest, by the way, is just an equal weight of three EWMAC rules, applied across the 37-odd futures instruments in my dataset, and generated using pysystemtrade with the following configuration.
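
For intuition, here's a rough sketch of what a single EWMAC rule looks like (illustrative pandas code rather than the actual pysystemtrade implementation; the span pairs and the vol lookback are just examples):

import pandas as pd

def ewmac_forecast(price, fast_span=16, slow_span=64, vol_span=35):
    # raw trend signal: fast EWMA of the price minus slow EWMA of the price
    fast_ewma = price.ewm(span=fast_span).mean()
    slow_ewma = price.ewm(span=slow_span).mean()
    raw = fast_ewma - slow_ewma
    # normalise by recent daily price volatility so forecasts are comparable across instruments
    daily_vol = price.diff().ewm(span=vol_span).std()
    return raw / daily_vol

# equal weight of three variations, for example:
# forecast = (ewmac_forecast(price, 8, 32) +
#             ewmac_forecast(price, 16, 64) +
#             ewmac_forecast(price, 32, 128)) / 3.0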

In terms of answering the question we can reframe it as:

Is there statistical evidence that the performance of the strategy is negative in the last X years?

Two open questions then are (a) how do we measure performance, and (b) what is X?

I normally measure performance using Sharpe Ratio, but I think it's more appropriate to use return here. One characteristic of trend following is that the vol isn't very stable; it tends to be low when losing and high when making money. This results in very bad rolling Sharpe Ratios in down periods, and not so good rolling Sharpes in the good times. So just this once I'm going to use return as my performance measure. 
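
To make that concrete, here's a minimal sketch (assuming acc is a pandas Series of monthly returns, as in the code further down) of the rolling return measure I'll use, next to the rolling Sharpe I'm avoiding:

window = 24  # months

# rolling annualised return: the performance measure used in this post
rolling_ann_return = acc.rolling(window).mean() * 12

# rolling annualised Sharpe Ratio, for comparison; unstable vol makes this misleading
rolling_ann_vol = acc.rolling(window).std() * (12 ** 0.5)
rolling_sharpe = rolling_ann_return / rolling_ann_vol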

In terms of the time period we have the usual two competing effects: short periods won't give us statistical significance, whilst longer periods won't help test the hypothesis that trend following has recently died. The recent period of poor performance started either in 2009 or 2015, depending on how far back you want to go. Let's use 2 year, 3 year, 5 year and 10 year windows.

What I'm going to be backing out is the rolling T-statistic, testing against the null hypothesis that returns were zero or negative. A high positive T-statistic indicates it's likely the returns are significantly positive (yeah!). A low negative T-statistic indicates that it's likely that returns are significantly negative (boo!). A middling T-statistic means we don't really know.

Here's the python:

from scipy.stats import ttest_1samp

# dull wrapper function, as pandas apply functions have to return a float
def ttest_series(xseries):
    return ttest_1samp(xseries, 0.0).statistic

# given some account curve of monthly returns (acc), this will return the
# rolling 10 year series of t-statistics
acc.rolling(120).apply(ttest_series)

Incidentally I also tried bootstrapping the T-statistic, and it didn't affect the results very much.
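
For the curious, the bootstrap looked roughly like this (a sketch, not necessarily the exact procedure I ran; the number of draws is arbitrary):

import numpy as np
from scipy.stats import ttest_1samp

def bootstrapped_tstat(xseries, n_draws=1000):
    # resample the monthly returns with replacement, collecting a t-statistic each time
    x = np.asarray(xseries)
    stats = []
    for _ in range(n_draws):
        sample = np.random.choice(x, size=len(x), replace=True)
        stats.append(ttest_1samp(sample, 0.0).statistic)
    # use the median of the bootstrapped distribution as a more robust point estimate
    return np.median(stats)

# e.g. acc.rolling(24).apply(bootstrapped_tstat)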

Here are the results. Firstly for my back test, 2 year rolling window:

The red lines show the critical values for the statistic, given the degrees of freedom (24 months - 1 = 23 in this case) and the significance level (I've opted for the widely used 5%). We can say that trend following has definitely been working a few times (significant positive T-statistic). But it never goes significantly negative.
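
If you want to reproduce the red lines, the critical values come straight from the t distribution; a sketch (I've shown the one-sided 5% cutoffs, consistent with the 'zero or negative' framing above):

from scipy.stats import t

df = 24 - 1   # 23 degrees of freedom for the 2 year window of monthly returns

# one-sided 5% cutoffs; use 0.975 / 0.025 instead if you prefer a two-sided test
upper_critical = t.ppf(0.95, df)   # roughly +1.71
lower_critical = t.ppf(0.05, df)   # roughly -1.71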

Let's look at 3 years:

Nothing to see here, folks. The recent poor performance brings the 3 year rolling average down to zero.

Surely five years must show significant results:

Ten years?
Nothing. Also, interestingly, it looks like the performance of the strategy has been pretty stable for the last 30 years or so (flat from the period in the ten years before 1996), with the exception of a fallow period in the years before the last financial crisis.

Okay, let's switch to the SG CTA index. Starting with two years:

No evidence of anything at all there. Three years?

We literally have no clue. Five years?

Never in the field of human conflict have so many lines of matplotlib been wasted on not finding anything interesting. I won't bother with the 10 year plot - as with my backtest, the last 10 years have been positive, so they can't be significantly negative.

Some may accuse me of straw-manning here: "listen Rob, we're not saying trend following is so broken it will lose money; just that it hasn't done, and won't do, as well as in the past". Well, looking again at those rolling plots I see no evidence of that either.

Looking at the SG index there has perhaps been a slight degradation in performance after 2009, but taking the long term view over the backtest I'd say that over the last 30 years at least performance has been very similar and the current period of poor returns is by no means as bad as things have got in the past before recovering.

One more VERY IMPORTANT POINT: It's arguably silly to look at the performance of any trading strategy in isolation; like I said above, only a moron would have 100% of their money in trend following. One of the arguments for trend following is that it provides 'crisis alpha', or to be more precise, that it has a negative correlation to other assets in a bear market. Unfortunately it's virtually impossible to say whether trend following still retains that property, since (he said wistfully) there hasn't been a decent crisis for 10 years.

You should be happy to invest in crisis alpha even if it has an expected return of zero over all history - arguably you should even be happy to pay for its insurance properties, and put up with a slightly negative return. Since 2009, trend following has delivered some modestly positive performance; arguably better than we have a right to expect. We won't know for sure if trend following can still deliver until the next big crisis comes along.


Summary

"Is trend following dead?" I don't know. Probably not. Now leave me alone and let us never speak of this again.

The next person who asks me this question will get a deep sigh in response. The one after that, a full eye roll. And with the third person I will have to resort to physical violence.


Wednesday 7 November 2018

Is maths in portfolio construction bad?

First an apology. It's been quite a few months since my last blog post. I've been in book writing mode and trying to minimise outside distractions. Though looking at my media page since my last blog post I've done two conferences, a webinar, a book review, a guest lecture, a TV panel discussion and written six articles for efinancialcareers.com. So maybe I haven't done a great job as far as filtering out the noise goes.

Anyway, the first draft of the book is now complete, and I thought I'd write a blog post before I start smashing my next project (updating my course material for next semester, since you ask).

This post is about portfolio construction and is a response to the following article:

https://www.institutionalinvestor.com/article/b1bpqyp4684v06/The-Wall-Street-Math-Hustle

It's long but absolutely worth reading. To sum up the salient points, as I see them (and I apologise to the authors if I am misrepresenting them):

  • The "Diversification industry" is a con, including but not limited to portfolios labelled: risk efficient, maximum diversification, minimum variance, equal-risk contribution, inverse volatility-weighted, diversity-weighted, and so on.
  • Equal weighting is as good or better than any of this garbage
  • The 'rebalancing premium' is junk.
  • Maximum diversification type portfolios are often highly concentrated
  • By seeking to minimise market beta maximum diversification portfolios end up with the weirdest stocks
  • Low correlation is no protection against a crisis
  • These strategies naturally tilt towards small cap and value. And the authors have a particular problem with the small cap part of that.
  • Summary of the summary: 
    • "Maximum diversification" - bad. "Market cap weighted" - good.
I agree with a fair bit of the content of this article, and I'm a massive fan of people smashing their sticks into the piñata of received industry wisdom. And it's true to say that many people are charging unreasonably high fees for something that can be done mechanically using publicly available methodologies.
But just because people use certain methodologies badly doesn't mean they are bad methodologies. I guess if I were to sum up what I'm now going to say in a pithy way, it would be: Maths in portfolio construction is fine, if you don't use it naively.

Before I start, two minor irritants:
  • At one point the authors conflate the ideas of equal Sharpe Ratios and CAPM. Not the same thing. Both assume risk adjusted returns are identical, but they use different measures of risk: absolute standard deviation and covariance with the market, respectively.
  • It's Maths. With an 's'. Bloody Americans :-)



A quick primer on uncertainty and portfolio construction



Before we begin in earnest let's have a recap (for those that haven't read my second book, attended one of my talks or webcasts on the subject, or been my unfortunate victims, sorry, students, in the Queen Mary lecture theatre).

  • Classic portfolio optimisation is very sensitive to small changes in inputs; in particular it's very sensitive to small differences in Sharpe Ratio, and correlations - when those correlations are high. It's relatively insensitive to small changes in standard deviation, or to correlations when correlations are low.
  • It's extremely difficult to predict Sharpe Ratios, and their historic uncertainty (sampling variance) is high. It's relatively easy to predict standard deviations, and their historic uncertainty is low. Correlations fall somewhere in the middle.
  • If we assume we can't predict Sharpe Ratios, then some kind of minimum variance (if we have a low risk target or can use leverage) or maximum diversification portfolio will make theoretical sense
  • If we assume we can't predict Sharpe Ratios or correlations, then an inverse volatility portfolio makes the most sense
  • If we assume we can't predict anything, then an equal weight portfolio makes the most sense.

Yes, Equal weights and Market cap weighting are as good as anything... in the right circumstances


Like I said above these portfolios will make the most sense in theory if you can predict vol and correlation, but not Sharpe Ratios.

[Let's take it as a given for now that we can't predict Sharpe Ratios; in other words that all assets have equal expected Sharpe - I'll relax that assumption later in the post] 

If you can't predict volatility or correlation either, then equal weights makes more sense. Equal weights will also make sense if volatility and correlations are pretty much the same, or if vols are the same and the correlation structure is such that the portfolio can be decomposed into blocks, like sectors, of broadly equal size with homogeneous correlations. Something like this:

                    BankA         BankB        TechA        TechB
BankA                1.0           0.8           0.4         0.4
BankB                0.8           1.0           0.4         0.4
TechA                0.4           0.4           1.0         0.8
TechB                0.4           0.4           0.8         1.0

Assuming equal vol and Sharpe the correct portfolio to hold here (lowest risk, highest Sharpe Ratio, highest geometric mean) would be equal weights. This would also be the minimum variance / maximum diversification / equal risk portfolio  ... blah blah blah.
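
You can verify this numerically; a quick sketch, with unit volatilities so the covariance matrix is just the correlation matrix above:

import numpy as np

corr = np.array([[1.0, 0.8, 0.4, 0.4],
                 [0.8, 1.0, 0.4, 0.4],
                 [0.4, 0.4, 1.0, 0.8],
                 [0.4, 0.4, 0.8, 1.0]])

# unconstrained minimum variance weights are proportional to (covariance)^-1 times a vector of ones
raw = np.linalg.inv(corr).dot(np.ones(4))
weights = raw / raw.sum()
print(weights)   # [0.25 0.25 0.25 0.25] - equal weights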

The question then is how realistic it is to assume equal variances, equal correlations or an 'equal block' correlation structure. Actually, for something like the S&P 500 it's not a bad assumption to make.

It's also worth noting that for a relatively well diversified index like the S&P 500 there isn't going to be a huge amount of difference between market cap weighted, maximum diversification, and equal weighted. In my second book I ran the numbers for the Canadian TSX 60 index; a relatively extreme index by the standards of developed markets:

  • Equal weighted. This portfolio is quite concentrated: 58% is in just three sectors.
  • Market cap. This portfolio is also quite concentrated by firm: 9% in one firm, and even more so in sectors: 62% in just three sectors.
  • Equal risk across sectors (a sort of maximum diversification portfolio). This portfolio is quite concentrated by firm: 10% in one firm
Although these portfolios are quite different, the expected geometric means, assuming equal vol and Sharpe Ratio, come in very similar:
  • 2.17% market cap weighted
  • 2.20% equal weighted
  • 2.21% equal sector risk 

(All excess returns are based on my central assumptions about future equity growth - but it's the relative values that matter.)

4 lousy basis points. Let me remind you that's for the quirky Canadian index, which judging by the sector concentration in their stock index is a country where everyone is busy digging stuff up, drilling for other stuff, or flogging financial products to the diggers and drillers. For the naturally more diversified S&P 500 index there will be a negligible difference.

If your universe of assets is a large cap index from a single developed market... then there is almost no theoretical value in moving away from market cap or equal weighting towards some sort of funky 'diversified' weighting.

It's no coincidence that the seminal work on equally weighted portfolios was done on US equities.

But not every universe of assets has those characteristics. A cross asset, cross country portfolio is likely to have a much messier correlation matrix, and is extremely unlikely to have equal volatility. An emerging markets index or a small concentrated index like the DAX or OBX is going to produce equal weight or market cap portfolios that have serious concentration issues. Because volatility, and to a lesser degree correlation, are reasonably predictable it would be silly to throw away that kind of information if you're working in that context.


We should all hate sparse weights 


There is a world of difference between the theoretical results above, obtained by setting everything to equality, and what happens when you push real data into an optimiser. Even slight differences in Sharpe Ratios, correlations or volatility can produce extreme weights (also named sparse weights, depending on whether it's the zeros or the higher values that bother you).

Again from my second book: if I consider sector weighted S&P 500 portfolios, then the difference between holding 11 stocks (one per sector) and holding all 500 stocks is a geometric mean of 2.18% versus 2.23% (again assuming equal volatility and Sharpe Ratio). Just 5 basis points. Statistically completely insignificant - surely any idiot could get those 5 basis points back, and more, by smart stock picking. Here is the same point made by Adam Butler at the end of one of my recent webcasts:

“… while mean variance optimisation is unstable in terms of portfolio weights it’s actually quite stable in terms of portfolio qualities… with quite different portfolio weights the means and variances are very similar” Adam Butler, ReSolve asset management 2018

My answer was that by having extreme allocations, or sparse weights, we're exposing ourselves to idiosyncratic risk. In theory this is being mostly diversified away, leaving us only with systematic market risk, but this is one theoretical result I am very unhappy about. The joint Gaussian model of risk is a pretty good workhorse, but we all know it's flawed; and the consequences of those flaws will become very evident if you're holding a sparse portfolio in a crisis situation when co-skewness type behaviour becomes apparent, or if certain asset prices go to zero (firms do, occasionally, go bust).

So I'm certainly no friend to sparse weights; I think the theoretically small advantage of 'fully populated' portfolios is in reality much bigger. Of course there is a limit; I'd probably only be happy if I held all 30 DAX stocks, but I don't see the need to hold all five hundred S&P 500 stocks. Assuming the portfolio is reasonably well diversified I don't see the harm in holding only 100 out of 500 stocks. 

Solving this problem is straightforward - you can do it in an ugly way with constraints, or you can use any of the well known techniques that do optimisation more robustly.
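
As an illustration of the ugly-but-effective constraint route, here's a sketch of a long only mean variance optimisation with a cap on any single weight (the 10% cap, and the mus / sigma inputs, are just placeholders):

import numpy as np
from scipy.optimize import minimize

def constrained_mean_variance(mus, sigma, max_weight=0.10, risk_aversion=1.0):
    # long only mean variance optimisation with a cap on any single weight
    # (the cap needs to be at least 1/n, or the problem is infeasible)
    n = len(mus)

    def neg_utility(w):
        return -(w.dot(mus) - 0.5 * risk_aversion * w.dot(sigma).dot(w))

    constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    bounds = [(0.0, max_weight)] * n
    start = np.ones(n) / n

    result = minimize(neg_utility, start, bounds=bounds, constraints=constraints)
    return result.x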

As a professional fund manager of course there is another issue, which is that if you don't have any exposure to a high profile, high performing stock then you're going to look pretty silly. Perhaps then you should add something so that you always have some exposure to say the top 10% of stocks by market cap (I'm only half joking here).


The dangerous world of low or negative correlations, and weird factor risk


I like to think of risk management as a waterbed. If you try and reduce your risk too much in one area, then it will pop up in an unexpected place. This is most notable in long short portfolios of highly correlated assets, or in leveraged long only portfolios of negatively correlated assets.

It's possible to construct portfolios with very low risk - low risk at least if you assume that a joint Gaussian risk model is correct, and that volatility and correlations are perfectly stable and predictable. But they're not. By pushing down the risk on the Gaussian part of the waterbed we're forcing the risk to pop up somewhere else. We're exposed to correlation risk, and probably liquidity risk (One word, four letters: LTCM).

These problems still exist in the land of unleveraged long only portfolio construction - but they aren't as serious. Any long only portfolio can be decomposed into some other long only portfolio, plus a bunch of long/short bets. The long/short stuff can indeed be dangerously toxic. But it's only part of the portfolio.

Also: this happens because we allow the mean variance optimiser to use the correlation matrix naively. Just because correlations are relatively predictable it doesn't mean we should trust the optimiser to use them sensibly. We don't have to do that; there are many techniques for more robust optimisation. 
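
One of the simplest of those techniques is to shrink the estimated correlation matrix towards something boring before the optimiser gets anywhere near it; a minimal sketch (the 50% shrinkage factor is an arbitrary choice):

import numpy as np

def shrink_correlation(corr, shrinkage=0.5):
    # pull every off diagonal correlation towards the average correlation
    n = corr.shape[0]
    off_diag = corr[~np.eye(n, dtype=bool)]
    avg_corr = off_diag.mean()
    prior = np.full((n, n), avg_corr)
    np.fill_diagonal(prior, 1.0)
    return shrinkage * prior + (1 - shrinkage) * corr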


The rebalancing premium


There are two rebalancing premiums: a theoretical one, which is small but definitely exists, and an empirical one. I believe the theoretical premium was first outlined by Fernholz and Shay. It definitely exists, but may not survive the impact of costs.
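
As I understand it, the theoretical premium is essentially half the gap between the weighted average asset variance and the portfolio variance; a rough sketch in code (my formulation, not a quote from the paper):

import numpy as np

def excess_growth_rate(weights, cov):
    # roughly: half the gap between the weighted average of the asset variances
    # and the variance of the portfolio itself
    weights = np.asarray(weights)
    asset_variances = np.diag(cov)
    portfolio_variance = weights.dot(cov).dot(weights)
    return 0.5 * (weights.dot(asset_variances) - portfolio_variance)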

An additional (and probably much larger) empirical rebalancing premium will exist if asset prices are mean reverting in some relative sense. Then you can sell overweight assets high, and buy underweight assets cheaply.

In case you haven't noticed this is the point in the post when I relax the assumption that Sharpe Ratios are inherently unpredictable, and hence that all Sharpe Ratios are equal in expectation; now we have some conditioning information which can predict Sharpes.

Historically most assets have exhibited the following pattern:
  • Short horizon, mean reverting
  • Medium horizon, trending
  • Long horizon, mean reverting
The horizons vary depending on the asset class (and have also varied over time), but if you're operating on a time frame between about a week and a year you are probably in the trending zone, where rebalancing doesn't work. Unfortunately that's also a pretty neat fit for the typical frequencies at which most people rebalance: monthly, quarterly, annually. So you are likely to do worse by rebalancing unless you speed up (which will end up costing more, unless you do it smartly with the use of no-trade buffers and limit orders - see the sketch below) or slow down a lot (in which case your information ratio will take a nose dive).
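
Here's roughly what a no-trade buffer looks like (a sketch; the 10% band is an arbitrary choice):

def buffered_weight(current_weight, target_weight, buffer=0.10):
    # only trade when the current weight has drifted outside a band around the target
    upper = target_weight * (1 + buffer)
    lower = target_weight * (1 - buffer)
    if lower <= current_weight <= upper:
        return current_weight              # inside the band: don't trade
    # outside the band: trade back to the nearest edge, not all the way to target
    return upper if current_weight > upper else lower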

To labour the point, that doesn't mean rebalancing is a bad idea, in the same way that maximum diversification portfolios aren't always a bad idea. It's just bad if you do it in a dumb way - the way many parts of the fund industry do it.


The arguments against small cap and value


'Diversified' portfolios of all flavours start from the premise that all stocks are equal until proven otherwise (due to information about correlation or volatility), whereas market cap weighting thinks that larger cap stocks are better (of course equal weighting thinks that all stocks are always equal). So yes, anything that isn't market cap weighting will have a tilt towards small cap relative to the market cap index. 

Of course another way of putting this is that market cap weighting has a tilt towards large cap stocks relative to any other index. It depends on your perspective - there is no 'true' benchmark.

The choice of market cap weighting as a starting point is historical, and its justification is that it's the portfolio that all investors have to hold in aggregate. Of course that's true... but then I look at the FTSE 100 weights, and I have to ask myself: do I really expect HSBC to outperform RBS (both UK banks) to the extent that it deserves a weight that is 4 times bigger?

[Weirdly I own shares in HSBC but not RBS. Go figure]

Frankly, if your universe is the S&P 500 or the FTSE 100, then you're not really tilting towards small caps. You're tilting towards 'not quite so large' large caps. So the well known reasons why one might expect genuine small caps to outperform (fewer analysts covering them, higher trading costs, less liquidity) are unlikely to be present.

One benefit of market cap weighting is that it tilts towards stocks that have done well recently, so benefiting from trend effects that occur in line with quarterly rebalancing. But this is quite a small effect which mainly works at the margins (when stocks that have recently done well are promoted into the index), and it is certainly outweighed by the slower effects of mean reversion (in the long run, stocks which have done well - and are more likely to be at the top end of a large cap index - will do worse versus the rest).

Bottom line - I don't think there is any evidence one way or the other that a massive megacap stock should outperform a relatively small large cap, or vice versa. So I see no reason why the massive megacap stock should automatically get a higher portfolio weight, or vice versa. Inverse volatility weighting - as practiced by all the 'esoteric' weighting schemes apart from equal weighting - will probably underweight the smaller large caps in cash terms, since they're normally riskier. Since volatility is the most predictable characteristic of asset returns, I'm a big fan of using it in portfolio construction. On this point, then, equal weighting falls over compared to weighting schemes that use volatility as an input.
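
For the avoidance of doubt, inverse volatility weighting is just this (a sketch which ignores correlations entirely):

import numpy as np

def inverse_vol_weights(vols):
    # weight each asset by 1 / volatility, normalised so the weights sum to one
    inv = 1.0 / np.asarray(vols)
    return inv / inv.sum()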

Arguably using volatility is a very crude way of measuring risk, and it might be that it understates the tail risks. Evidence suggests that small cap stocks have more positive skewness, but worse kurtosis. I'm not sure this is a significant issue.

Genuine small caps will probably outperform for good reason, but that's not really what we're talking about here.

I'm slightly more confused about where value comes in, as the original authors weren't clear. Of course there is an overlap between value and small cap; you're unlikely to find much value in stocks that are well covered by analysts, and which may also have gone up in price a lot recently (although not always; the #10 stock by market cap could well be the #1 stock that has fallen on hard times). 

[And there is no reason why megacaps can't be good value in their own right. According to my value screens HSBC - currently #1 in the FTSE 100 by market cap - is much better value than RBS. I knew there was a good reason why I owned HSBC]

But to make the same point again; if higher cap stocks are poorer value then a market cap weighted index will be tilted towards poorer value relative to any other form of weighting. You might not buy the value premium, but do you really believe an anti-value premium exists? There would need to be such a premium for market cap weighting to make sense compared to pretty much any other type of weighting. Once again higher value stocks are probably going to be more volatile, but using information about volatility will deal with that.

Yes it is disingenuous to smuggle value and small cap bias into an index that is ostensibly something else, but I really don't think that is happening here. Market cap indices are the biased ones - with tilts towards large cap, and perhaps value, versus any other kind of weighting.


Conclusion



Yes, the naive use of portfolio construction methods is dumb. Yes, equal and market cap weights will often do as good a job. Yes, letting your optimiser set sparse / extreme weights is stupid. But:

  • Outside of the universe of large cap stock indices it makes a lot of sense to use the relatively predictable components of asset returns - volatility and correlation.
  • Using volatility as an input makes a lot of sense - it's highly predictable, and will help reduce your exposure to potentially problematic assets.
  • Plenty of well known techniques exist to do portfolio optimisation in robust sensible ways.
  • Sensible rebalancing can be fun and profitable
  • It's market cap weighted indices that are biased, and not in a direction that's likely to be profitable.
I also can't help feeling that now would be a good opportunity to plug next week's talk I'm doing on portfolio construction with uncertainty, at The Thalesians.