Monday, 4 September 2017

Smart Portfolios: A post about a book, NN Taleb, and two conferences

September 18th is the official publishing date of my second book, "Smart Portfolios: A practical guide to building and maintaining intelligent investment portfolios" (Harriman House, 2017).



This blog post will give you some more information about the book, and more importantly help you decide if it's worth buying. I'll also let you know about a couple of forthcoming conferences where I will be talking about some of the key points (at Quantcon Singapore and QuantExpo Prague).

It is written in the form of an interview. As no other interviewer was available I decided to interview myself. If after reading this post you still want to buy the book then you can go to this link.


Shall we start with some easy questions?


Sure


What's your favourite colour?


I was hoping for a more highbrow interview than this. Niels Kaastrup-Larsen never asks such a trivial question.



Sorry. This is the first time I've ever interviewed such a well known and intelligent person.


No problem. Since I'm well known, intelligent, and also very easily flattered.


Perhaps instead I could ask you where the idea for the book came from?


That's a much better question. After leaving AHL in late 2013 amongst other things I was thinking about writing a book. I came up with an idea for a book which I was going to title "Black Magic". Once I had the cool title I had to decide what the book would actually be about. I proposed to the publisher (Harriman House) that I'd write something which would be subtitled something like: "Tales from the world of systematic hedge funds: How to invest and trade systematically".

After a long series of emails that got narrowed down to the shorter title "How to invest and trade systematically", and subsequently cut down further to "How to trade systematically". About eighteen months later "Systematic Trading" was published.

The obvious thing to do next was to write "How to invest systematically". Of course this is also a huge topic and I had to spend a fair bit of time thinking about what the focus of the book should be, and what ground it would cover that wasn't covered elsewhere. I also had to think of a more original title than "Systematic Investing".


How did you decide on the focus (or foci) of the book?


To some extent I wrote the book for myself: I wanted a framework for managing my long only investments, which included shares and ETFs, where I had to pay relatively high trading costs compared to my futures, where I allocated across multiple asset classes, and where there were real world problems like tax to worry about.

I then thought long and hard about what were the most important - and neglected - topics in investment books. I decided they were uncertainty and costs. These two ideas are actually linked, because costs are highly predictable, whereas almost everything else about financial returns is uncertain to varying degrees. It's important to make decisions with this firmly in mind.

Of course there is an overlap with Systematic Trading here because in that book I frequently emphasise the difficulty of knowing the future with any degree of certainty, and I also wrote an entire chapter on trading costs.

Like Systematic Trading I also wanted to publish something that was a complete framework. So the idea is you can use this book for almost any kind of unleveraged long-only investment (passive ETFs, individual shares and active funds), and it also covers a few different 'use cases'. Of course this makes the book pretty long. It's about 50% longer than "Systematic Trading", but the sticker price on the cover is the same (in GBP anyway) so it's actually better value.

So... if you like [winks] we can talk a bit more about those key ideas of uncertainty and costs now.


Oh yes, sure. Perhaps you can talk a little more about uncertainty


In finance there are two almost diametrically opposed views. On the one hand there is Taleb, who says "We don't know anything", and on the other you have almost the entire industry of quantitative finance, which assumes we know everything to three decimal places of precision (obviously I'm exaggerating both viewpoints for effect).

The idea that we can't naively use the probability of past events to predict the future is hardly new; it goes back to Keynes and deeper into the past. In contrast in quant finance we normally assume that we can (a) know the model that generated financial returns data in the past (b) precisely measure the parameters of this model and (c) assume it will continue into the future.

The "Weak Taleb" attack on quant finance is an attack on (b); so "The casino is the only human venture I know where the probabilities are known... and almost computable... In real life you do not know the odds, you need to discover them... ” (Black Swan).

But we can make equally valid points that (a) is also untrue (there is no 'model' waiting to be found and measured); and that (c) is nonsense (the future will never be exactly like the past). A "Strong Taleb" attack would essentially make the points that: (a) there are no models [or at least none that are practically usable], (b) even if there were we couldn't ever know their parameters precisely, and (c) even if we could, there is no guarantee these models would remain unchanged in the future*.

* By the way for the purposes of this discussion a Markov state model is still a single "model" - not a way of dealing with models that could change in the future. 

This is all true - but extremely unhelpful. Nearly all the smart people in finance are aware of this problem, but mostly ignore it. In fact we probably just have to assume that there is a model, and we also have to assume that this model will work in the future. Or we might as well close our laptops and become non-systematic, "gut feel" discretionary investors and traders.

But it's quite straightforward to deal with the weak Taleb attack on point (b) and think about the accurate measurement of the past. First you need to get yourself away from the idea that there was only one past with one set of estimable parameters which are known with certainty. Past movements of financial markets are either [i] a random draw from an unknown distribution or [ii] just one of many possible parallel universes that could have happened or [iii] are realisations of some random hidden latent process. It's easier to model [i] but these ideas are functionally equivalent.

Quantifying the effect of this uncertainty of the past on parameter estimates is relatively trivial - statistics 101 stuff. So for example if the mean of a return series is 5% a year, and the standard deviation 24%, and you have 36 years of data, then the estimation error for the mean is (24% / sqrt(36)), or 4%, so the two standard deviation confidence interval is -3% to +13%. Even with a relatively long history of data that is a huge amount of uncertainty about what the modelled mean was in the past: and remember we're still making the quite strong assumptions that there really is a model generating the returns; that it happens to be Gaussian normal; and that it will remain unchanged in the future.

The key insight here is that there are different degrees of uncertainty. The confidence interval for a standard deviation in this case is much narrower: 18.4% to 29.6%. If we have more than one return series we can also estimate correlation; so for example between US bonds and stocks the confidence interval is around -0.1 to 0.2.
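For anyone who wants to check these numbers, here is the statistics 101 calculation in a few lines of python. I've used the standard textbook approximations for the standard errors, so the intervals come out very close to, but not exactly, the figures quoted above:

```python
import numpy as np

mean_estimate = 0.05   # 5% a year
stdev_estimate = 0.24  # 24% a year
n_years = 36

# Standard error of the mean: sigma / sqrt(N)
se_mean = stdev_estimate / np.sqrt(n_years)

# Standard error of the standard deviation (approximately): sigma / sqrt(2N)
se_stdev = stdev_estimate / np.sqrt(2 * n_years)

# Two standard deviation (roughly 95%) confidence intervals
print("Mean:  %.1f%% to %.1f%%" % (100 * (mean_estimate - 2 * se_mean),
                                   100 * (mean_estimate + 2 * se_mean)))
print("Stdev: %.1f%% to %.1f%%" % (100 * (stdev_estimate - 2 * se_stdev),
                                   100 * (stdev_estimate + 2 * se_stdev)))
```

Running this gives a mean interval of -3.0% to +13.0%, and a standard deviation interval of roughly 18.3% to 29.7%.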

So we don't need to throw away all of our data; we can be a bit smarter, and calibrate just how confident we can be in each of the individual estimates we draw from that data.


That's given me a headache! It sounds like you've written a very technical book on maths and/or philosophy...


Nothing of the sort! All the ideas are introduced in a very intuitive way (much simpler language than I've used above); and it's very much aimed at a non-quant but financially literate audience. The book is mostly about what practical use these findings have. Once you start thinking about the world in terms of quantified uncertainty you can still be a systematic, model based, investor; and you can simultaneously be a skeptical pupil of Taleb; but you can also still do some useful things.


So what practical problems do you address with this idea of (calibrated) uncertainty of the past?


The first main insight is that standard portfolio optimisation is partly junk. Of course everyone in finance knows this: but again there are two extreme views: "Complete junk - I don't believe in any of that nonsense and I'm just going to hold US tech stocks whose names begin with the letter A" or "Junk, but I'm going to use it anyway because what choice do I have?". But reality is more nuanced than either of these views.

The insight and intuition behind Markowitz's work is extremely valuable - it's the baby in this particular bathwater. Though yes: estimates of risk adjusted returns have such huge past uncertainty that they're mostly worthless. But estimates of volatility and correlation are more predictable, and so have some value. So I address this question: how should you build portfolios given this knowledge?

The other main insight is that you shouldn't just look at post-cost returns, because you're subtracting apples (costs) from oranges (pre-cost returns). Pre-cost returns have huge estimation error. But costs are actually relatively predictable (unless you're trading so fast, or in such size, that you affect the order book). A better approach is to make the cheapest option the starting point for any decision, unless the evidence strongly suggests - with some probability - that the more expensive option is better. I guess this is a Bayesian worldview, though I never use that term in the book.


Okay I get the hint. I think perhaps it would be good to talk about costs now


The first thing to say about costs is that although they're relatively predictable, they're not actually that easy to measure. Although there have been attempts to get funds to state the "total cost of ownership", in practice you have to make some educated guesses to work out likely costs of different forms of investment.

Once you have that information, what should you do? Anyone who's read my first book knows that costs are important when deciding how much, and what, to trade. But for long only investment there are a whole lot of other decisions where the notion of certain costs and uncertain returns is useful. For example should you buy a fund which is more expensive, but which has had - or should have - higher returns?

Another important point is that different kinds of investors have to worry about different kinds of costs. So relatively large investors have to worry about market impact. But for relatively small investors, especially those in the UK, the tyranny of minimum brokerage commissions is more important. A £10 commission on a £1,000 portfolio is 1%: quite a lot if you have realistic estimates of future returns. An important implication of this is that the right kind of portfolio will depend on how much capital you have to invest.


You've already talked about some common elements, but what would readers of Systematic Trading recognise in this book?


The main thing they will recognise is the idea of a top down, heuristic portfolio construction method which I call handcrafting in both books. The difference in Smart Portfolios is that I make it even simpler - all grouped assets have equal weights (once differential risk has been accounted for). 

In part two of the new book I also go into much more detail about how you'd practically build a cross asset portfolio using the top down handcrafting method: choosing appropriate ETFs, and where it makes sense to buy individual shares. 

Because of the emphasis on costs this would be done differently for smaller and larger investors. In particular larger investors can afford more diversification: smaller investors who buy too many funds will end up owning too many small chunks of things that they've had to pay multiple minimum commissions on. The advantage of a top down approach is it deals with this nicely: you just stop diversifying when it no longer makes sense (a decision based, naturally, on the certain costs and uncertain benefits of diversification). 


Earlier you talked about "different use cases"...


Glad to see you've been paying attention! Just as in Systematic Trading I realise that not everyone will sign up to the extremely pure dogma: in this case that risk adjusted returns are completely unpredictable. So the book also helps people who want to vary slightly from that central path, whilst limiting the damage they can do. These different use cases all appear in part three.

Firstly as you might expect I talk about systematic ways to forecast future returns. At the risk of being stereotyped one is a trend following model, the other is based on yields (so effectively carry). The point, as with Systematic Trading, isn't that these are the best ways to forecast the markets - they're just nicely familiar examples which most people are able to understand (and whose nuances I can explain). Unfortunately as with my first book a few people won't understand this and will pigeonhole me as a chartist / trend follower / technical trader...

Secondly I talk about using "gut feel" but in a systematic way. This is analogous to the "semi-automatic trader" in my first book. The idea being that some people will always think they can predict market returns / pick stocks; at least let's provide a framework where they can do limited damage.

Thirdly there are people who are still convinced that active fund managers are the bee's knees. I show them how to determine if this is true by looking through the prism of uncertain returns (perhaps higher realised alpha in the past) versus certain costs (higher management fees).

Finally there are the relatively recent innovations of Smart Beta; again more expensive than standard passive funds, but are they worth it? I also talk a bit about robo-advisors.


"Smart Beta": is that where the title of the book came from?


Sort of. It's an ironic title in that respect, since you'll realise quite quickly that I am pretty skeptical of Smart Beta, at least in the guise of relatively expensive ETFs. Using systematic models to do the smart beta yourself is better, if you have sufficient capital.

But "Smart" actually sums up the book quite well (and yes, this is an ex-post rationalisation once I'd thought up the title. Deal with it). Smart for me means "Practical but theoretically well grounded".

So for example there are some technical books on things like Bayesian optimisation that deal with uncertainty, and other papers around trading costs. But if you introduce taxes into the mix you end up with really intractable, non closed form models and it gets pretty unpleasant. This isn't the kind of thing the average financial advisor can really use. Frankly even I don't use that kind of technical artillery when deciding if I should top up my pension fund.

And there is plenty of "backwoodsman" advice in less technical books that is either vague ("Don't trade too much"), overly simplistic ("Buy the cheapest passive funds") or worse isn't supported by theory ("Everyone should just own stocks").

What I tried to do in Systematic Trading, and continue in Smart Portfolios, is to provide some heuristic rules that are (a) as simple as possible and (b) theoretically correct, or at least supported by research. So for example one simple rule is "if you are paying a minimum brokerage commission of $1, you shouldn't invest in ETFs in units smaller than $300".
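To give a flavour of that kind of rule, here is a hedged sketch of the commission rule quoted above. The 1/300 cost threshold is simply reverse-engineered from the example - the book derives its own figure - so treat it as an illustration, not the actual derivation:

```python
def minimum_unit_size(min_commission, max_cost_fraction=1 / 300.0):
    """
    Smallest purchase for which a fixed minimum commission stays below an
    acceptable fraction of the amount invested.

    A max_cost_fraction of 1/300 (about 0.33%) reproduces the
    "$1 commission -> $300 minimum" rule of thumb quoted above; this is an
    assumption for illustration, not the book's own derivation.
    """
    return min_commission / max_cost_fraction

print(minimum_unit_size(1.0))   # 300.0  -> don't buy ETF units smaller than $300
print(minimum_unit_size(10.0))  # 3000.0 -> with a £10 minimum commission, £3,000
```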

One, fair, criticism of my first book was that I didn't provide enough realistic examples. So I've probably gone overboard with them here in trying to make the book as accessible as possible.

A less fair criticism of Systematic Trading is that there weren't enough equations - which of course was deliberate! I've included some more here to aid clarity, but they are mostly extremely simple without an integral symbol in sight.


What about portfolio rebalancing?


Yes, that's another big topic where I try to use simple rules that are theoretically grounded. So there is the standard rebalancing method where you don't rebalance unless your positions are out of whack by a certain amount. But I introduce a simple method for calculating what "out of whack" is, which again depends on the cost level you face, which in turn depends on how much capital you have to invest.

Then there are other rules to deal with other common situations: rebalancing when you're using forecasting rules, the effect of taxes, changes in characteristics used to pick stocks, takeovers, and so on.


I really enjoyed Systematic Trading. Should I buy your second book?


It depends. "Smart Portfolios" is actually two books in one:

  • A practical discussion of the effects of estimation uncertainty on optimising portfolios
  • A complete handbook for long only investing in funds and shares

So if you are a pure short term futures trader who already has a good understanding of statistical uncertainty then you'll probably find little of value in this book. It is definitely not "Systematic Trading 2: The Market Strikes Back". But feel free to buy it out of misplaced loyalty! Then give it to the guy or gal who manages your long only investments.

On the other hand if you read "Systematic Trading", and enjoyed it, but struggled to see how this related to your long only ETF or shares portfolio (with the exception of the "asset allocating investor" examples), then you should really find this book very useful.

Finally if you are in fact Taleb you should definitely read the second chapter of the book, but no more. After that I mostly assume that Gaussian Normal is a useful model when used properly, and you'd absolutely hate it. Although in my defence I do at least use "Kelly-Compatible" geometric means which penalise negative skew, rather than arithmetic means.


Is there anyone you'd like to thank?


Nine people were absolutely key in this book coming about. Stephen Eckett, top dog at Harriman House, commissioned the book. Craig Pearce spent months whipping my ramblings into marketable and readable condition. Riccardo Ronco and Tansu Demirbilek were brilliant reviewers. My third reviewer Tom Smith was also brilliant, but deserves a special mention as he also reviewed my first book; in both cases with no money changing hands (I suggested he pay me £500 for the privilege but this was greeted with derision). 

The other four people are my wife and children, who have had to put up with a distracted and absent minded husband and father for months on end. 


Any more books on the horizon?


Not immediately as I have a few other projects I'm working on which will take up most of my time over the next few months. But then I've got a couple of ideas. The first idea is to try and write "Systematic Trading and Investing for Idiots" (clearly a working title). Essentially a distillation of the methods and ideas in my first two books, but written for a wider retail audience. The second idea is to write something about the interaction of people and machines in the financial markets. With all the hype over AI in financial markets this might be an interesting book.


Are you doing any conferences in the near future where we can hear more about your ideas?

Great question! [surreptitiously slips ten pound note to interviewer] 

At the end of this month I'm speaking in Singapore (at QuantCon) and then at the start of November in Prague (at a new event QuantExpo). Both of these events look to have a great lineup and I'd highly recommend them if you're within flying distance of either venue.

The talk I am giving at both venues will be about the impact of past uncertainty on the estimates used for portfolio optimisation: basically material covered in the first few chapters of the book. I'll also introduce some of the possible solutions to this problem. Many people will have seen these solutions before, but I think it's good to understand specifically how they deal with uncertainty.

There might be other events coming up - keep an eye on my social media for news.


So finally: When and Where can people get your book?


It's officially published on the 18th September but currently available for pre-order. If you go to the website for the book at this link you'll find a link to my publisher's page, which is the best place to buy it from my perspective (and currently the cheapest). The book's website also has a lot more information about exactly what is in the book if you're still undecided.


Note


If you thought the (frankly incompetent) interviewer missed a key question then please feel free to comment below and I'll add the question (and answer it).


Monday, 14 August 2017

My new book: Smart Portfolios


"Smart Portfolios" - my second book - is now ready for pre-order.

A blog post talking in more detail about the book is here: https://qoppac.blogspot.co.uk/2017/09/smart-portfolios-post-about-book-nn.html

For more information see the website, here: https://www.systematicmoney.org/smart

To pre-order you can go here: https://www.harriman-house.com/smart-portfolios



Thursday, 22 June 2017

Some more trading rules

It is a common misconception that the most important thing to have when you're trading, or investing, systematically is good trading rules. In fact it is much, much, much more important to have a good position management framework (as discussed in my first book) and to trade a diversified set of instruments. Combine those with a couple of simple trading rules, and you'll have a pretty decent system. Adding additional rules will improve your expected return, but with rapidly diminishing returns.

It's for this reason that only 2 out of the 75 posts I've published on this blog have been about trading rules (this one on trend following and carry; and this one on my 'breakout' system). But... if I look at my inbox, or blog comments, or my thread on elitetrader.com, the most common request is for me to "write about X"... where X is some trading rule I may have casually mentioned in passing that I use, but haven't written about.

So I have mixed feelings writing this post (in which the metaphorical kimono will be completely opened - there are no more secret trading rules hiding inside my system). I'm hoping that this will satisfy the clamour for information about the other trading rules that I run. Of course it's also worth adding these rules to my open source python project pysystemtrade, since I hope that will eventually replace the legacy system I use for my own trading, and I won't want to do that unless I have a complete set of trading rules matching what I currently use.

But I'd like to (re-)emphasise that there is much, much, much more to successful systems trading than throwing every possible trading rule into your back test and hoping for the best. Adding trading rules should be your last resort once you have a decent framework, and have done as much instrument diversification as your capital can cope with.

Pre-requisites: Although there is some messy pysystemtrade python code for this post here, you don't need to use it. It will however be helpful to have a good understanding of my existing trading rules: Carry and EWMAC (Exponentially Weighted Moving Average Crossover), which you can glean from my first book or this post - most of the rules I discuss here are built upon those two basic ideas.

PS You'll probably notice that I won't talk in detail about how you'd develop a new trading rule; but don't panic, that's the subject of this post.


Short volatility

I'm often asked "What do you think your trading edge is?" A tiresome question (don't ask it again if you want to stay in my good books). If I have any 'edge' it's that I've learned, the hard way, the importance of correct position sizing and sticking to your trading system. My edge certainly doesn't lie in creating novel trading rules. 

Instead the rules I use all capitalise on well known risk factors: momentum and carry, for example. You'll sometimes see these called return factors, but you don't get return without risk. Of course we all have different risk tolerances, but if you are happy to hold positions that the average investor finds uncomfortably risky, then you'll earn a risk premium (at least it will look like a premium if you use standard measures of risk when doing your analysis). A comprehensive overview of the world of return factors can be found in this excellent book or on this website.

One well known risk factor is the volatility premium. Simply put investors are terrified of the market falling, and bid up the price of options. This means that implied volatility (effectively the price of volatility implied by option prices) will on average be higher than expected realised volatility.

How can a systematic futures trader earn the volatility premium? You could of course build a full blown options trading strategy, like my ex AHL colleague. But this is a huge amount of work. A much simpler way is to just sell volatility futures (the US VIX, and the European V2TX); in my framework that equates to using a constant forecast of -10, or what I call in my book the "no rule" trading rule (note that because of position scaling we'll still have smaller positions when the volatility of volatility is higher, and vice versa).
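As a rough sketch of what a constant forecast of -10 means in practice, here is the arithmetic of vol-scaled position sizing. This isn't the actual pysystemtrade implementation; the vol lookback and cash vol target are arbitrary, and block value and FX conversion are ignored:

```python
import pandas as pd

def short_vol_position(back_adjusted_price, forecast=-10.0,
                       daily_cash_vol_target=1000.0, vol_span=35):
    """
    Vol-scaled position for a constant forecast.

    Because position size is inversely proportional to recent price
    volatility, the short position shrinks when the vol of vol rises,
    and grows when it falls. Illustrative only.

    back_adjusted_price: pd.Series of back adjusted futures prices.
    """
    daily_price_vol = back_adjusted_price.diff().ewm(span=vol_span).std()
    # A forecast of +10 corresponds to running at the full vol target
    return (forecast / 10.0) * daily_cash_vol_target / daily_price_vol
```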

And here is a nice picture showing a backtest of this rule:

"With hindsight Rob realised that starting his short vol strategy in late 2007 may not have been ideal timing...."

Earning this particular premium isn't for the faint hearted. You will usually earn a consistent return with occasional, horrific, drawdowns. This is what I call a negative skew / insurance selling strategy. Indeed based on monthly returns the skew of the above is a horror show: -0.664. This isn't as bad as the underlying price series, because vol scaling helps improve skew, but it's still pretty ugly (on the S&P 500 using the same strategy it's a much nicer 0.36).

It is a good complement to the positive skew trend following rules that form the core of my system (carry is broadly skew neutral, depending on the asset class). For various reasons I don't recommend using the first contract when trading vol futures (in my data the back adjusted price is based on holding the second contract). One of these good reasons is that the skew is really, really bad on the first contract.


But... we already have trend following and carry in vol? Do we need a short bias as well?


I already include the VIX, and V2TX, in my trend following and carry strategies. That means to an extent I am already earning a volatility premium. 

How come? Well imagine you're holding the first VIX contract, due to expire in a month's time. The price of that (implied vol) will be higher than the current level of the VIX (which I'll call, inaccurately, spot vol), reflecting the desire of investors to pay up for protection against volatility in the next month. As the contract ages the price will drift down to spot levels, assuming nothing changes; a rolldown effect on futures prices. That's exactly what the carry strategy is designed to capture.

This isn't exactly the same as the implied versus spot vol premium; but it's very closely related.

Now consider trend following. Assuming you use back adjusted futures prices then in an environment when spot vol doesn't move, but in which there is negative rolldown for the reasons described above, then the back adjusted price will drift downwards. This will create a trend in which the trend following strategy will want to participate.

Arguably trend following and carry are actually better than being short vol, since they are reactive to changing conditions. In 2008 a short vol strategy would have remained stubbornly short in the face of rapidly rising vol levels. But trend following would have ended up going long vol (eventually, depending on the speed of the rule variation). Also in a crisis the vol curve tends to invert (further out vol becoming cheaper than nearer vol) - in this situation a carry strategy would buy vol.

The vol curve tends to invert in a crisis

So.... what happens if I throw carry and trend following back into the mix? Using the default optimisation method in pysystemtrade (Bayesian shrinkage) the short biased signal gets roughly a 10% weight (sticking to just the VIX and V2X). That equates to an improvement in Sharpe Ratio on the overall account curve of the two vol futures of just 0.03, a difference that isn't statistically significant. And the skew gets absolutely horrific.

So... is this worth doing? I'll discuss this general issue at the end of the post. But on the face of it, using trend following and carry on vol futures might be a better way of capturing the vol premium than just a fixed short bias. Using all three could of course be even better.



An aside: What about other asset classes?


An excellent question is why we don't incorporate a bias to other asset classes that are known to earn a risk premium; for example long equities (earning the equity risk premium) or long bonds (earning the term premium)?*

* I'm not convinced that there is a risk premium in commodities; at best these might act as an inflation hedge but without a positive expected return. It's not obvious what premium you'd earn in FX, or which way round you should be to earn it.

This might make sense if all your capital was in systematic futures trading (which I don't recommend - it's extremely difficult to earn a regular income purely from trading). But I, like most people, own a chunk of shares and ETFs which nicely cover the equity and bond universe (and which pay relatively steady dividends which I'm happy to earn an income from). I don't really need any more exposure to these traditional asset classes.

And of course the short vol strategy has a relationship with equity prices; crashes in equities normally happen alongside spikes in the VIX / V2X (I deliberately say relationship here rather than correlation, since the relationship is highly non linear). Having both long equity and short vol in the same portfolio is effectively loading up massively on short black swan exposure.



Relative carry


The next rule I want to consider is also relatively simple - it's a relative version of the carry rule that I describe in my book and which is already implemented in pysystemtrade. As the authors of this seminal paper put it:

"For each global asset class, we construct a carry strategy that invests in high-carry securities while short selling low-carry instruments, where each instrument is weighted by the rank of its carry"

Remember that for carry the original forecast is quite noisy; to deal with that we need to smooth it. In my own system I use a fixed smooth of 90 business days (as many futures roll quarterly) for both absolute and relative carry.

Mathematically the relative carry measure for some instrument x will be:

Rx_t = Cx_t - median(Ca_t, Cb_t, ...) 


Where Ca_t is the smoothed carry forecast for some instrument a, Cb_t for instrument b, and so on; where a, b, c, ..., x are all in the same asset class.

Note - some people will apply a further normalisation here to reflect periods when the carry values are tightly clustered within an asset class, or when they are further apart - the normalisation will ensure a consistent expected cross sectional standard deviation for the forecast. However this is leveraging up on weak information - not usually a good idea.
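Here is a sketch of the calculation in pandas. The 90 day smooth follows the text above; whether that smooth is an exponential or simple moving average is my assumption:

```python
import pandas as pd

def relative_carry(raw_carry_forecasts, smooth_days=90):
    """
    Relative carry within one asset class.

    raw_carry_forecasts: pd.DataFrame of raw carry forecasts, one column
    per instrument, all in the same asset class.

    Each column is smoothed, then the cross sectional median is subtracted,
    so instruments with above-median carry get positive forecasts.
    """
    smoothed = raw_carry_forecasts.ewm(span=smooth_days).mean()
    asset_class_median = smoothed.median(axis=1)
    return smoothed.sub(asset_class_median, axis=0)
```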

This rule isn't super brilliant by itself. Here it is, tested using the full set of futures in my dataset:


It clearly underperforms its cousin, absolute carry. More interestingly though, the two predictors look to be doing relatively different things (correlation is much lower than you might expect, at around 0.6), and the optimisation actually gives the relative carry predictor around 40% of the weight when I run a backtest with only these two predictors.

Throwing together a backtest with both relative and absolute carry, the Sharpe ratio is improved from 0.508 to 0.524 (monthly returns, annualised). Again hardly an earth shattering improvement, but it all helps.


Normalised momentum

Now for something completely different. Most trading rules rely on the idea of filtering the price series to capture certain features (the other school of thought within the technical analysis camp is that one should look for patterns, which I'm less enthusiastic about). For example an EWMAC trend following rule is a filter which tries to see trends in data. Filtering is required because price series are noisy, and a lot of that noise just contributes to potentially higher trading costs rather than giving us new information.

But there is another approach - we could normalise the price series to make it less noisy, and then apply a filter to the resulting data. The normalised series is cleaner, and so the filters have less work to do.

The normalisation I use is the cumulative normalised return. So given a price series P_0, P_1 ... P_T the normalised return is:

R_t = (P_t - P_[t-1]) / sigma(P_0 .... P_t)

Where sigma is a standard deviation calculation. Also, to avoid really low vol or bad prices screwing things up, I apply a cap of 6.0 on the absolute value of R_t. Then the normalised price on any given day t will be:

N_t = R_1 + R_2 + R_3 + .... R_t
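A sketch of this in pandas. I've read sigma(P_0 ... P_t) as an expanding window estimate of the daily return volatility; a rolling or exponentially weighted estimate would work just as well:

```python
import pandas as pd

def normalised_price(price, min_periods=25, cap=6.0):
    """
    Cumulative normalised return series N_t.

    price: pd.Series of (back adjusted) prices.

    Daily price changes are divided by an estimate of their own standard
    deviation, capped at +/- 6 to stop very low vol or bad prices screwing
    things up, then cumulated to give a normalised price.
    """
    returns = price.diff()
    vol = returns.expanding(min_periods=min_periods).std()
    norm_returns = (returns / vol).clip(-cap, cap)
    return norm_returns.cumsum()
```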

NOTE: For scholars of financial history, I've personally never seen this trading rule used elsewhere - it's something I dreamed up myself about three years ago. However it comes under the "too simple not to have been already thought of" category, so I expect to see comments pointing out that this was invented by some guy, or gal, in 1952. If nobody does then I will not feel too embarrassed to call this "Carver's Normalised Momentum".

Perceptive readers will note:

  • You probably shouldn't use normalised prices to identify levels since the level of the price is stripped out by the normalisation.
  • These price series will not show exponential growth; the returns will be roughly normal rather than log normal. This is a good thing, since over long horizons prices that show exponential growth tend to screw up most filters, which don't know about exponential growth. Over relatively short horizons however it makes no difference.
  • Simple returns calculated using the change in normalised price can be directly compared and aggregated across different instruments, asset classes and time periods; something that you can't do with ordinary prices. We'll use this fact later.

Rather boringly I am now going to apply my favourite EWMAC filter to these normalised price series, although frankly you could apply pretty much anything you like to them. 

Minor point: The volatility normalisation stage of an EWMAC calculation [remember it's (ewma_fast - ewma_slow) / volatility] isn't strictly necessary when applied to normalised price series, which will have a constant expected volatility; but it's more hassle to take it out, so I leave it in here.


Normalised momentum
Performance wise there isn't much to choose between normalised momentum and standard EWMAC on the actual price; but these things aren't perfectly correlated, and that can only be a good thing.


Aggregate momentum


It's generally accepted that momentum doesn't work that well on individual stocks. It does however sort of work on industries. And it is relatively better again when applied to country level equity indices. 

I have an explanation for this. The price of an individual equity is going to be related to the global equity risk premium, plus country specific, industry specific, and idiosyncratic firm specific factors. The global equity risk premium seems to show pretty decent trends. The other factors less so; and indeed by the time you are down to within industries mean reversion tends to dominate (though you might call it the value factor, which if per share fundamentals are unchanged amounts to the same thing).

Value type strategies then tend to work best when we're comparing similar assets, like equities in the same country and industry; also because accounting ratios are more comparable across two Japanese banks, than across a Japanese bank and a Belgian chocolate manufacturer. There is a more complete expounding of this idea in my new book, to be released later this year.

So trading equity index futures then means we're trying to pick up the momentum in global equity prices through a noisy measurement (the price of the equity index) with a dollop of mean reverting factor added on top.

If you follow this argument to its logical conclusion then the best place to see momentum will be at the global asset class level*. There we will have the best measure of the underlying risk factor, without any pesky mean reversion effects getting in the way.

* A future research project is to go even further. I could for example create super asset classes, like "all risky assets" [equities, vol, IMM FX which are all short USD in the numeraire, commodities...?] and "all safe assets" [bonds, precious metals, STIR, ...]. I could even try and create a single asset class using some kind of PCA analysis to identify the single most important global factor. 

How do we measure momentum at the asset class level? This is by no means a novel idea (see here) so there are plenty of suggestions out there. We could use benchmarks like MSCI world for equities, but that would involve dipping into another data source (and having to adjust because futures returns are excess returns, whilst MSCI world is a total return); and it's not obvious what we'd use for certain other asset classes. Instead I'm going to leverage off the idea of normalised prices and normalised returns which I introduced above.

The normalised return for an asset class at time t will be:

RA_t = median(Ra_t, Rb_t, Rc_t, ...)

Where Ra_t, Rb_t are the normalised returns for the individual instruments within that asset class (e.g. for equities that might include S&P 500 futures, EUROSTOXX and so on). You could instead take a weighted average, using market cap or your own risk allocations to each instrument, but I'm not going to bother with anything that fancy here.

Then the normalised price for an asset class is just:

NA_t = RA_1 + RA_2 + RA_3 + .... RA_t
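Continuing the sketch above, the asset class normalised price is just the cumulated cross sectional median of the instruments' normalised returns:

```python
import pandas as pd

def asset_class_normalised_price(norm_returns_by_instrument):
    """
    Asset class normalised price NA_t.

    norm_returns_by_instrument: pd.DataFrame, one column per instrument in
    the asset class, containing the capped normalised returns R_t from the
    earlier sketch.

    Take the median normalised return across instruments each day (a
    weighted average would also work), then cumulate over time.
    """
    median_return = norm_returns_by_instrument.median(axis=1)
    return median_return.cumsum()
```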

Next step is to apply a trend following filter to the normalised price... yes why not use EWMAC? 

Minor point of order - it's definitely worth keeping the volatility normalisation part of EWMAC here, because the volatility of NA is not constant even when the volatility of each of Na, Nb... is. If equities become less correlated then the volatility of NA will fall, and vice versa; and as more assets are added to the data basket and diversification increases, the volatility of NA will again fall. Indeed NA should have an expected volatility that is lower than the expected volatility of any of Na, Nb...

Having done that we have a forecast that will be the same for all instruments in a particular asset class. 

If I compare this to standard, and normalised, momentum:



... again performance wise not much to see here, but there is clearly diversification despite all three rules using EWMAC with identical speeds!



Cross sectional within assets

So we can improve our measure of momentum by using aggregated returns across an asset class. This works because the price of an instrument within an asset class is driven by the underlying latent momentum of the global asset class, plus a factor that is mostly mean reverting. Won't it also make sense then to trade that mean reversion? In concrete terms, if for example the NASDAQ has been outperforming the DAX, shouldn't we bet on that no longer happening?

Mathematically then, if NA_t is the normalised price for an asset class, and Nx_t is the normalised price for some instrument within that asset class, then the amount of outperformance (or if you prefer, Disequilibrium) over a given time horizon (tau, t) is:



Dx_t = [Nx_t - Nx_tau] - [NA_t - NA_tau]

Be careful of making t - tau too large: remember the slightly different properties of Nx and NA; the former has constant expected vol, whilst the latter will, by construction, have lower and time varying vol. But also be careful of making it too small - you need sufficient time to estimate an equilibrium. A value of around 6 months probably makes sense.

And my personal favourite measure of mean reversion is a smooth of this out-performance:

- EWMA(Dx_t, span)

Where EWMA is the usual exponentially weighted moving average; this basically ensures we don't trade too much whilst betting on the mean reversion. The minus sign is there to show mean reversion is expected to occur (I prefer this explicit reminder, rather than reversing the stuff inside Dx).

Using my usual heuristic, finger in the air, combined with some fake data I concluded that a good value to use for the EWMA span was one quarter of the horizon length, t - tau.
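Pulling the pieces together, here is a sketch of the cross sectional forecast. The 125 business day horizon is roughly six months, and the quarter-of-horizon smooth follows the heuristic above:

```python
import pandas as pd

def mean_reversion_forecast(norm_price_instrument, norm_price_asset_class,
                            horizon=125):
    """
    -EWMA(Dx_t, span): how much the instrument's normalised price has
    outrun its asset class over the last `horizon` days, smoothed with an
    EWMA whose span is a quarter of the horizon, and sign-flipped because
    we're betting on mean reversion back towards the asset class.
    """
    outperformance = ((norm_price_instrument - norm_price_instrument.shift(horizon))
                      - (norm_price_asset_class - norm_price_asset_class.shift(horizon)))
    return -outperformance.ewm(span=int(horizon / 4)).mean()
```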

Here is an example for US 10 year bond futures. First of all the normalised prices:

Blue is US 10 year normalised price. Orange is the normalised price for all bond futures.
Let's plot the difference:

US 10 year bond future normalised price - Bond asset class normalised price
This is a classic mean reversion trade. For most of history there is beautiful mean reversion, and then the "taper tantrum" happens in 2013 and US bonds massively underperform. Now for the forecast:


Notice how the system first bets strongly on mean reversion occurring during the taper tantrum, but then re-estimates the equilibrium and cuts its bet. With any mean reversion system it's important to have some mechanism to stop you catching a falling knife; whether that be something simple like this, a formal test for a structural break, or a stop loss mechanism (also note that forecast capping does some work here).

What about performance? You know what - it isn't great:


Performance across all my futures markets of mean reversion rule


BUT this is a really nice rule to have, since by construction it's strongly negatively correlated with all the trend following rules we have (in case you have lost count there are now four: original EWMAC, breakout, normalised momentum, and aggregate momentum; with just two carry rules - absolute and relative; plus the odd one out - short volatility). Rules that are negatively correlated are like buying an insurance policy - you shouldn't expect them to be profitable (because insurance companies make profits in the long run) but you'll be glad you bought them if your car is stolen.

In fact I wouldn't expect this rule to perform very well, since plenty of people have found that cross sectional momentum works sort of okay in some asset classes (read this: thank you my ex-colleagues at AHL) and this is doing the opposite (sort of). But strong negative correlation means we can afford to have a little slack in accepting a rule that isn't stellar in isolation (a negatively correlated asset with a positive expected return can be used to create a magic money machine).

Note: This rule is similar in spirit to the "Value" measure defined for commodity futures in this seminal paper (although the implementation in the paper isn't cross sectional). To reconcile this it's worth noting that momentum and value mostly operate on different time frequencies - in the paper the value measure is based on 5 year mean reversion [I use 6 months], whilst the authors use a 12 month measure for momentum [roughly congruent to my slowest variation].



Summary


Does adding these rules improve the performance of a basic strategy of trend following (EWMAC on price) plus carry? It doesn't (I did warn you right at the start of the post!), but is it still worth doing? I use a variation of Occam's Razor when evaluating changes to my trading strategy. Does the change provide a statistically significant improvement in performance? If not, is it worth the effort? (By the way, I make exceptions for simplifying and instrument diversifying changes when applying these rules.)

I'd expect there to be a small improvement in performance, given these rules are diversifying and given that there isn't enough evidence to suggest they are better or worse than any of my existing rules. In practice the expanded system actually comes out with slightly worse performance, although the difference isn't statistically significant.

But I don't care. I have a Bayesian view that the 'true' Sharpe Ratio of the expanded set of rules is higher, even if one sample (the actual backtest) comes out slightly different that doesn't dissuade me. I'm also a bit wary of relying on just one form of momentum rule to pick up trends in the future, even if it has been astonishingly successful in the past. I'd rather have some diversification.

Note that if I had dropped any of the 'dud' rules, like mean reversion, I'd be guilty of implicit in-sample [over]fitting. Instead I choose to keep them in the backtest, and let the optimisation downweight them in so far as there is statistically significant evidence they aren't any good.

The new rules have less of a long bias to assets that have gone up consistently in the backtest period; so arguably they have more 'alpha' though I haven't formally judged that.

Although on the face of it there is no compelling case for adding all these extra rules I'm prepared to make an exception. Although I don't like making my system more complex without good reason there is complexity, and there is complexity. I would rather have (a) a relatively large number of simple rules combined in a linear way, with no fancy portfolio construction, than (b) a single rule which has an insane number of parameters and is used to determine expected returns in a full blown markowitz optimisation.

So I'm going to be keeping all these numerous rule variations in my portfolio.


Monday, 15 May 2017

People are worried about the VIX

"Today the VIX traded below 10 briefly intraday. A pretty rare occurrence. Since 1993, there have been only 18 days where it traded below 10 intraday and only 9 days where it closed below 10." (source: some random dude on my linkedin feed)

... indeed 18 observations is a long.... long... way from anything close to a statistically significant sample size. (my response to random dude)

You can't move on the internet these days for scare stories about the incredibly low level of the VIX, a measure of US implied stock market volatility. Notably the VIX closed below 10 on a couple of days last week, although it has since slightly ticked up. Levels of the VIX this low are very rare - they've only happened on 11 days since 1990 (as of the date I'm writing this).

The VIX in all its glory


The message is that we should be very worried about this. The logic is simple - "Calm before a storm". Low levels of the VIX seem to presage scary stuff happening in the near future. Really low levels, then, must mean a very bad storm indeed.

Consider for example the VIX in early 2007:

Pootling around at 10 in late 2006, early 2007, the VIX responded to the failure of two Bear Stearns hedge funds which (as we know now) marked the beginning of the credit crunch. 18 months later there was a full blown panic happening.

This happened then, therefore it will happen again.

It struck me that this story is an example of what behavioural finance type people call narrative bias; the tendency of human beings to extrapolate single events into a pattern. But we need to use some actual statistics to see if we can really extend this anecdotal evidence into a full blown forecasting rule.

There has been some sensible attempt to properly quantify how worried we should be, most notably here on the FT alphaville site, but I thought it worth doing my own little analysis on the subject. Spoiler alert for the terminally lazy: there is probably nothing to be worried about. If you're going to read the rest of the post then along the way you'll also learn a little about judging uncertainty when forecasting, the effect of current vol on future price movements, and predicting volatility generally.

(Note: Explanations for the low level of the VIX abound, and self appointed finance "experts" can be found pontificating on this subject. It's also puzzling how the VIX is so low, when apparently serious sized traders are buying options on it in bucket load sized units (this guy thinks he knows why). I won't be dealing with this conundrum here. I'm only concerned about making money. To make money we just need to judge if the level of the VIX really has any predictive power. We probably don't need to know why the VIX is low.)


Does the level of VIX predict stock prices?


If this was an educational piece I'd work up to this conclusion gradually, but as it's clickbait I'll deal with the question everyone wants to know first (fully aware that most people will then stop reading).

This graph shows the distribution of rolling 20 business day (about one month) US stock returns since 1997:


(To be precise it's the return of the S&P 500 futures contract since I happened to have that lying around; strictly speaking you'd add LIBOR to these. The S&P data goes back to 1997. I've also done this analysis with actual US stock monthly returns going back to 1990. The results are the same - I'm only using the futures here as I have daily returns which makes for nicer, more granular, plots.) 

Important point here: this is an unconditional plot. It tells us how (un)predictable one month stock returns are in the absence of any conditioning information. Now let's add some conditioning information - the level of spot VIX:

I've split history in half - times when VIX was low (below 19.44%) shown in red, and when it was high (above 19.44%), which are in blue (overlaps are in purple). Things I notice about this plot are:


  • The average return doesn't seem to be any different between the two periods of history
  • The blue distribution is wider than the red one. In other words if spot VIX is high, then returns are likely to be more volatile. Really this is just telling us that implied vol (what the VIX is measuring) is a pretty good predictor of realised vol (what actually happens). I'll talk more about predicting vol, rather than the direction of returns, later in the post.
  • Digging in a bit more it looks like there are more bad returns in the blue period (negative skew to use the jargon)


The upshot of the first bullet point is that spot VIX doesn't predict future equity returns very well. In fact the average monthly return is 0.22% when vol is low, and 0.38% when vol is high; a difference of 0.16% a month. That doesn't seem like a big difference - and it's hard to see from the plot - but can we test that properly?

Yes we can. This plot shows the distribution of the differences in averages:

This was produced by monte carlo: repeatedly comparing the difference between random independent draws from the two distributions. This is better than using something like a 't-test' which assumes a certain distribution.
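For the curious, here is one way to produce a distribution like this: a simple bootstrap of the two sets of returns. The exact resampling scheme I used may differ in detail, but the idea is the same:

```python
import numpy as np

def difference_in_means_distribution(low_vix_returns, high_vix_returns,
                                     n_draws=10000, seed=42):
    """
    Monte carlo distribution of (mean low VIX return - mean high VIX return).

    Each draw resamples both sets of returns with replacement and records
    the difference in their averages. Negative values mean the high VIX
    regime had the higher average return.
    """
    rng = np.random.default_rng(seed)
    low = np.asarray(low_vix_returns)
    high = np.asarray(high_vix_returns)
    low_means = rng.choice(low, (n_draws, len(low)), replace=True).mean(axis=1)
    high_means = rng.choice(high, (n_draws, len(high)), replace=True).mean(axis=1)
    return low_means - high_means

# diffs = difference_in_means_distribution(low_vix_returns, high_vix_returns)
# print("Confidence high VIX beats low VIX: %.1f%%" % (100 * (diffs < 0).mean()))
```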

A negative number here means that high VIX gives a higher return than low VIX. We already know this is true, but the distribution plot shows us that this difference is actually reasonably significant. In fact 94.4% of the differences above are below zero. That isn't quite at the 95% level that many statisticians use for significance testing, but it's close.

To put it another way we can be 94.4% confident that the expected return for a low VIX (below 20%) environment will be lower than that for days when VIX is high (above 20%).

A moment's thought shows it would be surprising if we got a different result. In finance we expect that higher returns come with higher risk. We know that when the VIX is high, returns have higher volatility. So it's not shocking that they also come with higher average returns.

So a better way of testing this is to use risk adjusted returns. This isn't the place to debate the best way of risk adjusting returns, I'm going to use the Sharpe Ratio and that is that. Here I define the Sharpe as the 20 business day return divided by the volatility of that return, and then annualised.

(You can see now why using the futures contract is better, because to calculate Sharpe Ratios I don't need to deduct the risk free rate)
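For completeness, here is one reading of that Sharpe Ratio definition in code. The annualisation convention, and the use of daily returns within the window to estimate its volatility, are my assumptions:

```python
import numpy as np

def rolling_annualised_sharpe(futures_price, window=20, days_in_year=256):
    """
    Sharpe Ratio over each `window` business day period: the total return
    over the window divided by the volatility of daily returns within it,
    then annualised. No risk free rate adjustment is needed because
    futures returns are already excess returns.

    futures_price: pandas Series of back adjusted futures prices.
    """
    daily_returns = futures_price.diff()
    window_return = futures_price.diff(window)
    window_vol = daily_returns.rolling(window).std() * np.sqrt(window)
    return (window_return / window_vol) * np.sqrt(days_in_year / window)
```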

Now we've adjusted for risk there is little to choose between the high VIX and low VIX environments. In fact things have reversed, with low VIX having a higher Sharpe Ratio than high VIX. But the difference in Sharpes is just 0.04, which isn't very much.


We can only be 63% confident that low VIX is better than high VIX. This is little better than chance, which would be 50% confidence.

An important point: notice that although the difference in Sharpes isn't significant, we do know it with reasonably high confidence, as each bucket of observations (high or low VIX) is quite large. We can be almost 100% confident that the difference was somewhere between -0.04 and +0.04.

"Hang on a minute!", I hear you cry. The point now is that vol is really really low now. The analysis above is for VIX above and below 20%. You want to know what happens to stock returns when VIX is incredibly low - below 10%.

The conditional Sharpe Ratio for VIX below 10 is actually negative (-0.14) versus the positive Sharpe we get the rest of the time (0.14). Do we have a newspaper story here?

Here is the plot of Sharpe Ratios for very low VIX below 10% (red), and the rest of the time (blue):

But hang on, where are the red bars in the plot? Well remember there are only a tiny number of observations where we see vol below 10. You can just about make them out at the bottom of the plot. In statistics when we have a small number of observations we can also be much less certain about any inference we can draw from them.

Here for example is the plot of the difference between the Sharpe Ratio of returns for very low VIX and 'normal' VIX.

Notice that the amount of uncertainty about the size of the difference is substantial. Earlier it was between -0.04 and 0.04, now it's between -1 and 0.5; a much larger range. To reiterate this is because one of the samples we're using to calculate the expected difference in Sharpe Ratios is very small indeed. It does look however as if there is a reasonable chance that returns are lower when VIX is low; we can be 86% confident that this is the case.

Perhaps we should do a "proper" quant investigation, and take the top and bottom 10% of VIX observations, plus the middle, and compare and contrast. That way we can get some more data. After all, although statistics can allow us to make inferences from tiny sample sizes (like the 11 days the VIX closed below 10), it doesn't mean we should.



The big blue area is obviously the middle of the VIX distribution; whilst the purple (actually red on blue) is relatively low VIX, and the green is relatively high VIX.

It's not obvious from the plot but there is actually a nice pattern here. When the VIX is very low the average SR is 0.071; when it's in the middle the SR is 0.139, and when it's really high the SR is 0.20. 
Comparing these numbers the differences are actually highly significant (99.3% chance mid VIX is better than low VIX, 98.4% chance high VIX is better than mid VIX, and 99.999% chance high VIX is better than low VIX).

So it looks like there might be something here - an inverse relationship between VIX and future equity returns. However to be clear you should still expect to make money owning S&P 500 when the VIX is relatively low - just a little bit less money than normal. Buying equities when the VIX is above 30 also looks like a good strategy. It will be interesting to see if market talking heads start pontificating on that idea when, at some point, the VIX gets back to that level.

"Hang on another minute!!", I hear you unoriginally cry, again. The original story I told at the top of this post was about VIX spiking in February 2007, and the stock market reacting about 18 months later. Perhaps 20 business days is just too short a period to pick up the effect we're expecting. Let's use a year instead.




The results here are more interesting. The best time to invest is when VIX is very high (average SR in the subsequent year, 1.94). So the 'buy when everyone else is terrified' mantra is true. But the second best time to invest is when VIX is relatively low! (average SR 1.14). These are both higher Sharpes than what you get when the VIX is just middling (around 0.94). Again these are also statistically significant differences (low VIX versus average VIX is 97% confidence, the other pairs of tests are >99%).

I could play with permutations of these figures all day, and I'd be rightly accused of data mining. So let me summarise. Buying when the VIX is really high (say above 30) will probably result in you doing well, but you'll need nerves of steel to do it. Buying when the VIX is really low (say less than 15) might give you results that are a little worse than usual, or they might not.

However there is nothing special about the VIX being below 10. We just can't extrapolate from the tiny number of times it has happened and say anything concrete.


Does the level of VIX predict vol?


Whilst the VIX isn't that great for predicting the direction of equity markets, I noted in passing above that it looks like it's pretty good at predicting their future volatility.

We're still conditioning on low, middling, and high VIX here but the response variable is the annualised level of volatility over the subsequent 20 days. You can see that most of the red (turning purple) low VIX observations are on the left hand side of the plot - low VIX means vol will continue to be low. The green (high VIX) observations are spread out over a wider area, but they extend over to the far right.

Summarising:

Low VIX (below 12.5): Average subsequent vol 8%
Medium VIX: Average subsequent vol 12.3%
High VIX: Average subsequent vol 21.9%

These differences are massively statistically significant (above 99.99%). I get similar numbers when trying to predict one year volatility.
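
The post doesn't say exactly which test produced those confidence figures, but a simple permutation test is one way to check whether the average subsequent vol really differs between two VIX buckets. A minimal sketch (function and variable names are mine):

```python
import numpy as np

def permutation_test_means(sample_a, sample_b, n_shuffles=10000, seed=0):
    # Two-sample permutation test: how often does randomly relabelling the data
    # produce a difference in means at least as big as the one we actually observed?
    rng = np.random.default_rng(seed)
    a, b = np.asarray(sample_a), np.asarray(sample_b)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_shuffles):
        rng.shuffle(pooled)
        shuffled = pooled[:len(a)].mean() - pooled[len(a):].mean()
        if abs(shuffled) >= abs(observed):
            count += 1
    return count / n_shuffles  # a tiny p-value corresponds to "massively significant"

# e.g. permutation_test_means(subsequent_vol_when_vix_low, subsequent_vol_when_vix_high)
```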

So it looks like the current low level of VIX means that prices probably won't move very much. 


Does the level of vol predict vol?


The VIX is a forward looking measure of future volatility, and it turns out to be a pretty good one. However there is an even simpler predictor of future vol, and that is recent vol. The level of the VIX and the level of recent volatility are very similar - their correlation is around 0.77.

Skipping to the figures, how well does recent vol (over the last 20 days) predict subsequent vol (over the next 20 days)?

Recent Vol less than 6.7%: Average subsequent vol 7.9%
Recent Vol between 6.7% and 21.7%: Average subsequent vol 13.2%
Recent Vol over 21.7%: Average subsequent vol 23.4%

These are also hugely significant differences (>99.99% probability). 


The best way of predicting volatility


Interestingly, if you use the VIX to try and predict what the VIX will be in one month's time, you find it is also very good. Basically both recent vol and implied vol (as measured by the VIX) cluster - high values tend to follow high values, and vice versa. Over the longer run vol tends not to stay high, but will mean revert to more average levels - and this applies to both implied vol (so the VIX) and realised vol.

So a complete model for forecasting future volatility should include the following:

  1. recent vol (+ve effect)
  2. current implied vol (the VIX) (+ve)
  3. recent vol relative to long run average (-ve)
  4. recent level of spot VIX relative to long run average (-ve)
  5. (You can chuck in intraday returns and option smile if you have time on your hands)
However there is decreasing benefit from including each of these things. Recent vol does a great job of telling you what vol is probably going to be in the near future. Including the current level of the VIX improves your predictive power, but not very much.
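
As an illustration of the first two ingredients only, here is a sketch of a plain linear regression of next-month realised vol on recent realised vol and the current VIX level. Everything here - the names, the 20-day horizon, using ordinary least squares rather than anything cleverer - is my own assumption rather than a specific model from the analysis above; items 3 and 4 on the list would simply be extra columns in the regression:

```python
import numpy as np
import pandas as pd

def fit_vol_forecast(sp500_returns, vix, horizon=20):
    # Realised vol over the last `horizon` days, annualised and in % points (same units as the VIX)
    recent_vol = sp500_returns.rolling(horizon).std() * np.sqrt(256) * 100
    # Realised vol over the *next* `horizon` days: the thing we are trying to predict
    future_vol = recent_vol.shift(-horizon)

    data = pd.concat({"future": future_vol, "recent": recent_vol, "vix": vix}, axis=1).dropna()
    # Ordinary least squares: future vol regressed on a constant, recent vol and the VIX
    X = np.column_stack([np.ones(len(data)), data["recent"], data["vix"]])
    betas, *_ = np.linalg.lstsq(X, data["future"].values, rcond=None)
    return pd.Series(betas, index=["intercept", "recent_vol", "vix"])
```

If the point about decreasing benefit is right, you would expect most of the explanatory power to sit on the recent_vol coefficient, with the vix coefficient adding only a little.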


Summary


The importance of the VIX to future equity returns is somewhat overblown. It's just plain silly to say we can forecast anything from something that's only happened on a handful of occasions in the past (granted, the handful in question belongs to someone with 11 fingers). Low VIX might be a signal that returns will be a little lower than average in the short term, but it is by no means a sign of inevitable impending doom.

If there is a consistent lesson here it's that very high levels of VIX are a great buy signal. 

The VIX is also helpful for predicting future volatility - but if you have room in your life for just one forecasting rule, using recent realised vol is better.


Tuesday, 2 May 2017

Some reflections on QuantCon 2017

As you'll know if you've been following any of my numerous social media accounts, I spent the weekend in New York at QuantCon, a conference organised by Quantopian, who provide a cloud platform for backtesting systematic trading strategies in Python.

Quantopian had kindly invited me to come and speak, and you can find the slides of my presentation here. A video of the talk will also be available in a couple of weeks to attendees and live feed subscribers. If you didn't attend, this will cost you $199, less a discount using the code CarverQuantCon2017. (That's for the whole thing - not just my presentation! I should also emphasise that I don't get any of this money, so please don't think I'm trying to flog you anything here.)

Is a bit less than $200 worth it? Well read the rest of this post for a flavour of the quality of the conference. If you're willing to wait a few months then I believe that the videos will probably become publicly available at some point (this is what happened last year).

The whole event was very interesting and thought provoking; and I thought it might be worth recording some of the more interesting thoughts that I had. I won't bother with the less interesting thoughts like "Boy it's much hotter here than I'd expected it to be" and "Why can't they make US dollars of different denominations more easily distinguishable from each other?!".


Machine learning (etc etc) is very much a thing


Cards on the table - I'm not super keen on machine learning (ML), artificial intelligence (AI), neural networks (NN), or deep learning (DL) (or any mention of Big Data, or people calling me a Data Scientist behind my back - or to my face for that matter). Part of that bias is because of ignorance - it's a subject I barely understand - and part is my natural suspicion of anything which has been massively over hyped.

But it's clearly the case that all this stuff is very much in vogue right now, to the point where at the conference I was told it's almost impossible to get a QuantJob unless you profess expertise in this subject (since I have none, I'd be stuck with a McJob if I tried to break into the industry now); and universities are renaming courses on statistics "machine learning"... although the content is barely changed. And at QuantCon there was a cornucopia of presentations on these kinds of topics. Mostly I managed to avoid them. But the first keynote was about ML, and the last keynote, which was purportedly about portfolio optimisation, had a strong ML flavour too (by the way it was excellent, and I'll return to it later), so I didn't manage to avoid the subject completely.

I also spent quite a bit of time during the 'off line' part of the conference talking to people from the ML / NN / DL / AI side of the fence. Most of them were smart, nice and charming which was somewhat disconcerting (I felt like a heretic who'd met some guys from the Spanish inquisition at a party, and discovered that they were all really nice people who just happened to have jobs that involved torturing people). Still it's fair to say we had some very interesting, though very civilised, debates.

Most of these guys for example were very open about the fact that financial price forecasting is a much harder problem than forecasting likely credit card defaults or recognising pictures of cats on the internet (an example that Dr Ernie Chan was particularly fond of using in his excellent talk, which I'll return to later. I guess he likes cats. Or watches a lot of youtube).

Also, this cartoon:

Source: https://xkcd.com/1831/ This is uncannily similar to what DJ Trump recently said about healthcare reform.


The problem I have here is that "machine learning" is a super vague term which nobody can agree on a definition for. If for example I run the most simple kind of optimisation where I do a grid search over possible parameters and pick the best, is that machine learning? The machine has "learnt" what the best parameters are. Or I could use linear regression (200+ years old) to "learn" the best parameters. Or to be a bit fancier, if I use a Markov process (~100 years old) and update my state probabilities in some rolling out of sample Bayesian way, isn't that what an ML guy would call reinforcement learning?
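
To make the grid search example concrete, here is the kind of thing I mean: a brute-force search over moving average lookbacks that "learns" whichever parameter had the best in-sample Sharpe. The rule, the parameter grid and the function names are all hypothetical:

```python
import numpy as np
import pandas as pd

def backtest_ma_rule(prices, window):
    # Crude backtest: long when the price is above its `window`-day moving average, flat otherwise.
    # Shift the signal so we trade on the day *after* it is observed.
    signal = (prices > prices.rolling(window).mean()).astype(float).shift(1)
    daily_returns = prices.pct_change() * signal
    return daily_returns.mean() / daily_returns.std() * np.sqrt(256)  # annualised Sharpe

def grid_search_best_window(prices, candidate_windows=range(10, 300, 10)):
    # The 'machine' 'learns' the best lookback by brute force over the grid
    scores = {w: backtest_ma_rule(prices, w) for w in candidate_windows}
    return max(scores, key=scores.get)
```

Is that machine learning, or just an in-sample parameter search that people have been doing for decades? I genuinely don't know where the line is supposed to be.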

It strikes me as pretty arbitrary whether a particular technique is machine learning or considered to be "old school" statistics. Indeed look at this list of ML techniques that Google just found for me, here:

  1. Linear Regression
  2. Logistic Regression
  3. Decision Tree
  4. SVM
  5. Naive Bayes
  6. KNN
  7. K-Means
  8. Random Forest
  9. Dimensionality Reduction Algorithms
  10. Gradient Boost & Adaboost

Some of these machine learning techniques don't seem to be very fancy at all. Linear and logistic regression are machine learning? And also Principal Components Analysis, which is apparently now a "dimensionality reduction algorithm"? (Which is like calling a street cleaner a "refuse clearance operative".)

Heck, I've been using clustering algorithms like K-Means for donkey's years, mainly in portfolio construction (of which more later in the post). But apparently that's also now "machine learning".

Perhaps the only important distinction, then, is between unsupervised and supervised machine learning. It strikes me as fundamentally different to classical techniques when you let the machine go and do its learning, drawing purely from the data to determine what the model should look like. It also strikes me as potentially dangerous. As I said in my own talk, I wouldn't trust a new employee with no experience of the financial markets to do their fitting without supervision. I certainly wouldn't trust a machine.

Still, this might be the only way of discovering a genuinely novel and highly non linear pattern in some rich financial data. Which is why I personally think high frequency trading - something most of us only know about through books like Flash Boys - is one of the more likely applications for these techniques (I particularly enjoyed Domeyard's Christina Qi's presentation on this subject).

I think it's fair to say that I am now a bit better disposed towards those on the other side of the fence than I was before the conference. But don't expect me to start using neural networks anytime soon.


... but "Classical" statistics are still important


One of my favourite talks, which I've already mentioned, was by Dr Ernie Chan, who talked about using some fairly well known techniques to enhance the statistical significance of backtests (rather than to identify pictures of cats on YouTube, on this occasion), with a specific example of a multi factor equity regression.


Source: https://twitter.com/saeedamenfx

Although I didn't personally learn anything new in this talk, I found it extremely interesting and useful in reminding everyone about the core issues in financial analysis. Fancy ML algorithms can't solve the fundamental problem that we usually have insufficient data, and what we do have has a pretty low ratio of signal to noise. Indeed most of these fancy methods need a shed load of data to work, especially if you run them on an expanding or rolling out of sample basis, as I would strongly suggest. There are plenty of sensible "old school" methods that can help with this conundrum, and Ernie did a great job of providing an overview of them.
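
For anyone unsure what fitting "on an expanding or rolling out of sample basis" means in practice, here is a bare-bones sketch of the expanding-window version. The fit_model interface is entirely hypothetical - the point is only that the model is refitted using just the data available at each point in time:

```python
import pandas as pd

def expanding_out_of_sample(data, fit_model, min_history=250):
    # Refit the model each period using only the history available up to that point,
    # then record the forecast it makes for the next period. `fit_model(history)` is
    # assumed to return an object with a .forecast() method (a made-up interface).
    forecasts = {}
    for t in range(min_history, len(data)):
        model = fit_model(data.iloc[:t])          # nothing from the future leaks in
        forecasts[data.index[t]] = model.forecast(data.iloc[t])
    return pd.Series(forecasts)

# In practice you would refit less often (say annually) to keep this tractable.
```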

Another talk I went to was about detecting structural breaks in relative value fixed income trading, presented by Edith Mandel of Greenwich Street Advisors. Although I didn't actually agree with the approach being used, this stuff is important. Fundamentally this business is about trying to use the past to predict the future. It's really important to have good robust tests to distinguish when this is no longer working, so we know that the world has fundamentally changed and it isn't just bad luck. Again, this is something that classical statistical techniques like Markov chains are very much capable of doing.


It's all about the portfolio construction, baby


As some of you know I'm currently putting the final touches to a modest volume on the ever fascinating subject of portfolio construction. So it's something I'm particularly interested in at the moment. There were stacks of talks on this subject at QuantCon, but I only managed to attend two in person.

Firstly, the final keynote talk, which was very well received, was on "Building Diversified Portfolios that Outperform Out-of-Sample", or to be more specific Hierarchical Risk Parity (HRP), by Dr. Marcos López de Prado:

Source: https://twitter.com/quantopian. As you can see Dr. Marcos is both intelligent, and also rather good looking (at least as far as I, a heterosexual man, can tell).

HRP is basically a combination of a clustering method to group assets and risk parity (essentially holding positions inversely scaled to a volatility estimate). So in some ways it is not hugely dissimilar to an automated version of the "handcrafted" method I describe in my first book. Although it smells a lot like machine learning, I really enjoyed this presentation; and if you can't use handcrafting because it isn't sophisticated enough for you, then HRP is an excellent alternative.
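
To give a flavour of the idea (and emphatically not the actual HRP algorithm, which uses quasi-diagonalisation and recursive bisection - see the paper linked below), here is a much simplified sketch: cluster assets hierarchically on correlation distance, give each cluster an equal share of the portfolio, and weight assets within each cluster inversely to their volatility. All names and parameter choices are mine:

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage

def clustered_inverse_vol_weights(returns, n_clusters=4):
    # Correlation distance, as commonly used when clustering assets
    corr = returns.corr()
    distance = np.sqrt(0.5 * (1.0 - corr))

    # scipy wants the condensed (upper triangle) form of the distance matrix
    condensed = distance.values[np.triu_indices(len(corr), k=1)]
    tree = linkage(condensed, method="single")
    labels = fcluster(tree, t=n_clusters, criterion="maxclust")

    vols = returns.std()
    weights = pd.Series(0.0, index=returns.columns)
    for cluster_id in np.unique(labels):
        members = returns.columns[labels == cluster_id]
        inv_vol = 1.0 / vols[members]
        # Equal budget per cluster, inverse volatility weights within each cluster
        weights[members] = (inv_vol / inv_vol.sum()) / len(np.unique(labels))
    return weights
```

The appeal of this family of methods is that, unlike naive Markowitz, nothing here depends on noisy estimates of expected returns.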

There were also some interesting points raised in the presentation (and Q&A, and the bar afterwards) more generally about testing portfolio construction methods. Firstly Dr Marcos is a big fan (as am I) of using random data to test things. I note in passing that you can also use bootstrapping of real data to get an idea of whether one technique is just lucky, or genuinely better.
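
As a sketch of that bootstrapping idea (my own construction, not anything from the talk): resample the return history with replacement, build portfolios with two competing weighting methods on each resample, and count how often one beats the other. The weighting functions passed in are hypothetical:

```python
import numpy as np

def bootstrap_compare(returns, weights_a, weights_b, n_draws=1000, seed=1):
    # Resample the rows of the return history with replacement, build a portfolio
    # with each weighting method on every resampled history, and count how often
    # method A delivers the higher Sharpe ratio. A consistent win suggests the
    # difference isn't just luck on one particular sample.
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_draws):
        rows = rng.integers(0, len(returns), size=len(returns))
        sample = returns.iloc[rows]
        port_a = sample @ weights_a(sample)
        port_b = sample @ weights_b(sample)
        if port_a.mean() / port_a.std() > port_b.mean() / port_b.std():
            wins += 1
    return wins / n_draws
```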

Secondly one of the few criticisms I heard was that Dr Marcos chose an easy target - naive Markowitz - to benchmark his approach against. Bear in mind that (a) nobody uses naive Markowitz, and (b) there are plenty of alternatives which would provide a sterner test. Future QuantCon presenters on this subject should beware - this is not an easy audience to please! In fairness other techniques are used as benchmarks in the actual research paper.

If you want to know more about HRP there is more detail here.

I also found a hidden gem in one of the more obscure conference rooms, this talk by Dr. Alec (Anatoly) Schmidt on "Using Partial Correlations for Increasing Diversity of Mean-variance Portfolio".

Source: https://twitter.com/quantopian


That is more interesting than it sounds - I believe this relatively simple technique could be something genuinely special and novel, allowing us to get bad old Markowitz to do a better job with relatively little work, without introducing the biases of techniques like shrinkage, and without the problems that bootstrapping runs into with constraints. I plan to do some of my own research on this topic in the near future, so watch this space. Until then, amuse yourself with the paper from SSRN.


Dude, QuantCon is awesome


Finance and trading conferences have a generally bad reputation, which they mostly deserve. "Retail" end conferences are normally free or very cheap, but mostly consist of a bunch of snake oil salesmen. "Professional" conferences are normally very pricey (though nobody there is buying their ticket with their own money), and mostly consist of a bunch of better dressed snake oil salespeople.

QuantCon is different. Snake oil sales people wouldn't last 5 minutes in front of the audience at this conference, even if they'd somehow managed to get booked to speak. This was probably the single biggest concentration of collective IQ under one roof in finance conference history (both speakers and attendees). The talks I went to were technically sound, and almost without exception presented by engaging speakers.

Perhaps the only downside of QuantCon is that the sheer quantity and variety of talks makes decisions difficult, and results in a huge amount of regret at not being able to go to a talk because something only slightly better is happening in the next room. Still, I know that I will have offended many other speakers by (a) not going to their talk, and (b) not writing about it here.

So I feel obligated to mention this other review of the event from Saeed Amen, and this one from Andreas Clenow, who are amongst the speakers whose presentations I sadly missed.

PS If you're wondering whether I am getting paid by QuantCon to write this, the answer is zero. Regular readers will know me well enough to know that I do not shill for anybody; the only thing I have to gain from posting this is an invite to next year's conference!