Wednesday, 17 December 2014

Should billionaires and bricklayers have the same investments?

I permanently cut the risk of my trend following futures trading system last week, and invested the profits in bonds, because I was feeling richer. Let me explain why...


The basics


There are many ways to measure risk. My favourite is the expected daily standard deviation of your portfolio returns, and I usually look at the annualised version of this - multiply the daily figure by 16 (the square root of 256 trading days in a year). Until recently I was targeting 50% a year (or about 3% a day). I hasten to add that I only have a fraction of my net worth in my futures trading system - this would be an inappropriately high level if it was my entire asset base. I've now cut it to 25% a year, which is closer to the 10 - 20% that most systematic trend following firms target.
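For concreteness, here's the conversion in code (a trivial sketch; 256 is the usual round number of trading days in a year):

import numpy as np

ANNUALISATION = 16   # square root of 256 trading days

def annualised_sd(daily_returns):
    # Expected daily standard deviation, scaled up to an annual figure
    return np.std(daily_returns) * ANNUALISATION

# A 25% annual risk target implies daily risk of 25% / 16, about 1.6% a day
daily_target = 0.25 / ANNUALISATION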

It's a well known result that to be consistent with the continuous Kelly criterion you should target the same standard deviation as you expect your Sharpe Ratio (SR) to be. (Intuitively: continuous Kelly says the optimal leverage on an asset is its mean return divided by its variance; at that leverage your portfolio standard deviation is mean divided by standard deviation - which is the Sharpe Ratio.) I'll talk about what the right Sharpe Ratio might be in a moment.

So if you expect a SR of 0.5, you should run at 50% annualised risk. However that is a little bit rich for me, and for any sensible person. Consider the following graph:

Recovering the Kelly criterion from simulated data. Source: Author's research.

Focusing on the blue line for the moment (apologies to the colour blind; it's the middle line once we get to the right of the chart), you can see it peaks at around 0.50, or 50%, which is Kelly optimal for a portfolio (a trading system, or a long only portfolio with one or more assets in it) with a true Sharpe Ratio of 0.5, as we have here. However suppose you don't know what your true Sharpe is - which is the normal state of affairs.

Suppose you think that your SR is 1.0, in which case you would be betting at a risk target of 100% annualised risk. As the picture shows, if the true SR is really 0.5 you would on average lose money in the long run, and in many cases you'd lose a lot. A far safer bet is to run at 'Half-Kelly': expecting a SR of 1.0, you'd run at 50% risk. If you thought you'd get a SR of 0.5, as in the graph, then 25% annualised risk is fine. This isn't optimal - your average annual return will be about a third less than 'Full Kelly' - but it's better than risking too much and ending up on the right hand side of the peak.
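If you want to check this for yourself, a rough Monte Carlo along these lines will reproduce the shape of the blue line (a minimal sketch assuming Gaussian daily returns; it isn't the exact code behind my graph):

import numpy as np

def annual_growth_rate(true_sr, risk_target, years=10, n_trials=1000):
    # Daily moments consistent with the true Sharpe Ratio at this risk target
    daily_sd = risk_target / 16.0
    daily_mean = true_sr * risk_target / 256.0
    returns = np.random.normal(daily_mean, daily_sd, (n_trials, years * 256))
    # Compound the returns; wealth can't go below zero, bankruptcy is final
    wealth = np.prod(np.maximum(1.0 + returns, 1e-10), axis=1)
    # Average geometric growth rate per year
    return np.mean(np.log(wealth)) / years

# With a true SR of 0.5, growth peaks at a risk target of about 0.5 (50%)
for target in [0.1, 0.25, 0.5, 0.75, 1.0, 1.5]:
    print(target, round(annual_growth_rate(0.5, target), 3))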

That is the result for a normal asset with symmetric returns. The other two lines show the results for different kinds of assets. The green line (the bottom line on the right) is for a negative skew asset - like a trading system that sells volatility, either directly or through running relative value type trades. The red line (the top line on the right) is for a positive skew asset, like trend following. As you can see, the negative skew asset becomes toxic much more quickly than the other two.

Overestimating the Sharpe ratio of a negative skew asset and Kelly betting accordingly is a one way ticket to bankruptcy. This is made worse by the fact that these strategies normally have quite low natural volatility, so to get up to the likes of 50 or 100% annualised risk they will need enormous leverage.

In contrast the positive skew asset is relatively benign at larger risk percentages. It's still better to run at the optimal Kelly, and safer to run at half Kelly, but running too much risk isn't quite as damaging.


What is a reasonable Sharpe Ratio to expect?


All this is well and good, but what sort of Sharpe should we expect? Most people would at this point just get some estimates of past returns and volatility; or if you run a trading system, fire up some back test software. There are two reasons why you should take what comes out of this with a pinch of salt.

Firstly, asset returns in the future are unlikely to be as high as they were in the past. Take stocks. Even with the financial crisis, over the last 40 years they have done pretty well. A good chunk of that comes from effects that won't be repeated (falling inflation) or could well reverse (a rising proportion of GDP going to corporate profits, a rerating of earnings:price ratios). This also affects trading systems: if assets have generally been going up, then trend following, for example, will have worked better.

The second problem is that most back tests are overfitted - unless you've genuinely put in the first set of trading rules you thought of, not looked at the performance, not thrown anything away, and done a purely backward looking optimisation. Even if you do all of these things, chances are you're still using trading rules that somebody else has come up with, using past data or experience.

You can either apply a very sophisticated method - adjusting past asset class performance to take out secular effects, and using statistical techniques to estimate the effect of overfitting - or just use a reasonable rule of thumb, which is to cut the expected back test performance in half.

In the long only world, a SR of 0.2 is likely for a single average stock. For a diversified portfolio of equities you could get up to 0.3. Diversifying across asset classes might get you up to a SR of 0.5. Adding a trading system on top of these numbers could add half as much again; with a mixture of styles you could probably double them.

For a very well diversified system like mine (45 futures markets over all major asset classes, 8 types of signal over three different styles) a backtested SR of 2.0 translates to an expectation of 1.0.

Unless you're in the high frequency world, benefiting from low latency technology or market maker advantages, I don't believe a SR above this is realistic.

So far I haven't justified why I cut my annual risk percentage in half, since if I was expecting a Sharpe of 1.0 then my 50% target was probably okay - it was half-Kelly. So now we need to think about how wealth influences risk taking.


Should wealth determine the amount of risk you take, and the kinds of investments you have?


Economic theory generally assumes constant relative risk aversion. This would imply that wealth doesn't affect your desire for risk. A bricklayer who somehow managed to become a billionaire would maintain the same level of risk as a percentage of their portfolio. Financial theory also assumes that everyone should have the same portfolio of investments, with the highest possible Sharpe Ratio, and then leverage as required to get the risk they want.

I am not picking on bricklayers for any reason, except for the alliterative opportunities they offer here.

In practice this doesn't seem to happen. For example, under prospect theory the bricklayer would probably become more risk averse as they got richer, for fear of losing their new found gains. Secondly, most people aren't comfortable using leverage, except when buying residential property.

Imagine you're a 64 year old bricklayer, who will be retiring next week. You have only a state pension and no other investments, except £10,000 in cash. Economically you own an annuity (the pension) worth perhaps £180,000, plus the cash, which makes up 5.3% of your net worth.

Is the best use of £10,000 to invest it in a Sharpe ratio 1.0 opportunity which will return 10%, or to buy lottery tickets? The latter is more likely, and also makes more sense. A £1,000 return isn't going to make any difference at all (adding 0.53% to wealth, and if invested risk free about the same to income). But with the 2 million to one or so chance of hitting a lottery jackpot and winning £10 million, the bricklayer could be much better off.

Point one: people who don't / can't use leverage and need / want high returns will pay for risky investments - lottery tickets, growth story stocks, 100-1 horses - even if they have a negative expectation.

If the bricklayer could infinitely leverage up his £10,000, the lottery ticket would make no sense, as it would be dominated by a leveraged form of the SR 1.0 investment. This could net him £10 million (at a leverage factor so large I can't be bothered to work it out) with a positive expectation. But that would be well beyond half or even full Kelly. Betting at half Kelly - five times leverage - would still only be expected to earn £5,000 a year; not enough to make a huge difference (2.5% of wealth). It's more likely the bricklayer will bet beyond half or even full Kelly, even if they don't go all the way to lottery like levels.
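To spell out that arithmetic (a back of the envelope sketch; I'm assuming the SR 1.0 opportunity means a 10% return on 10% annualised standard deviation, which is what the five times leverage figure implies):

# SR 1.0 opportunity: 10% return on 10% annualised standard deviation
mean_return, annual_sd = 0.10, 0.10
capital, total_wealth = 10_000, 190_000

full_kelly = mean_return / annual_sd ** 2   # optimal leverage: 10x
half_kelly = full_kelly / 2                 # 5x leverage

expected_gain = half_kelly * mean_return * capital   # 5 x 10% x 10,000
print(expected_gain)                  # 5,000 a year
print(expected_gain / total_wealth)   # ~2.6% of wealth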

Point two: people who have a low level of financial wealth, which is dominated by other income, will often use too much leverage or go for riskier investments.

Now suppose you are a billionaire, with a billion quid, of which 5.3%, or £53 million, is spare. You could certainly afford to throw it away on lottery tickets, or buy a football team, both of which have negative expectation. However it's much more likely that you will put it into the SR 1.0 investment - that, after all, is how you became rich: not by making stupid financial decisions, but by making good ones.

Or maybe you inherited the money, in which case good decision to be born to the right parents. Go you!

I also think it's much more likely that you will be very cautious, investing at most at half-Kelly, and probably not using leverage at all. You don't need the extra income, so preserving your wealth is more important than taking additional risk to add to it. This is why rich people like investments with consistent returns. Prospect theory tells us that fear of losing new found wealth makes people more risk averse than they would be if they were trying to recover losses.

This also opens things up for the billionaires. They can invest in high SR, but low return, investments that other people would spurn.

This effect applies at all levels of wealth. Now hopefully you understand why, after a good run on the futures markets, I wanted to lower the risk of my portfolio by scaling back on my leveraged derivative exposure and putting the money into relatively low risk bonds.

Point three: as people get wealthier they become more risk averse, more able to invest in low risk but high SR investments, and they use less leverage.


Let's get a bit more sophisticated...


Apart from risk preferences, can we say anything else about the preferences of investors at different wealth levels? I was inspired to write this post by the following paper (Lettau et al), which also generated some discussion with my ex colleague Matt. The paper argues that wealthier investors are more likely to be 'value' investors, whereas others are 'momentum' investors.

By cutting my exposure to momentum (which I did before reading about the Lettau et al paper) I have definitely followed this track, at least to a degree.

The authors postulate that investors with different wealth levels are hedging different risk exposures that they already have.


"Thus shareholders in the bottom 90% of the wealth distribution may seek to hedge risks associated with an increase in the capital share by chasing returns and sticking to stocks whose prices have appreciated most recently. On the other hand, those in the top 10%, such as corporate executives whose fortunes are highly correlated with recent stock market gains, may have compensation structures that are already momentum-like. These shareholders may seek to hedge their compensation structures by undertaking contrarian investment strategies that go long in stocks whose prices are low or recently depreciated."

There may be other reasons. It's possible we can wrap this up with what we already know from above. Pure value strategies are relative value - exactly the kind of high SR, naturally low risk strategy that rich people like. Momentum strategies tend to have higher natural risk, thanks to low futures margins and the positive skew that means you can safely run higher risk targets.

Another explanation relates to liquidity. Billionaires are more likely to be owners of 'patient capital' - money that can be tied up for years or decades in family trusts. Value strategies - buying stuff that's cheap, particularly illiquid stuff like private equity or land - do better if they don't have to suddenly liquidate after losses due to redemptions by impatient investors. Again, momentum strategies tend to be in more liquid futures, which for the common or garden retired investor, relying on regular returns for income, is a good thing.


Concluding thought



Although the story in the paper is an interesting one, and might have some truth to it, ultimately having a good mix of investment styles is undoubtedly better than favouring one or another, and will give you a higher Sharpe Ratio overall. So although getting a little bit richer might be a good excuse for reducing your risk appetite and leverage, it doesn't justify entrusting all your money to one investing style.

Monday, 8 December 2014

Why you need two systems for running automated trading strategies

Running a fully automated trading strategy requires very little time - apart, of course, from the 6 months or so of flat out coding you need to do first. Before doing this coding there is a chicken and egg question to resolve: do you write backtesting code and then some extra bits to make it trade live, or do you write live trading code which you then try to backtest?


Some background


If you haven't had the pleasure of writing an automated trading strategy, perhaps because you use prebaked software like "Me Too! Trader" or an online platform such as https://www.quantopian.com/, you may wonder what on earth I am talking about. 

The issue is that there are two completely different user requirements for what is usually one piece of software. The first user is a researcher. They want something highly flexible that they can use to test different, and novel, trading strategies; and to simulate their profitability by "backtesting". Any component needs to be interactive, dynamic and easy to modify.

The next user - let's call them the implementor - does not rate flexibility; indeed it may be viewed as potentially dangerous. They want something that is ultra robust and can run with minimal human intervention. Every component must be unit tested to the eyeballs; modifications should be minimal and rigorously tested. Interaction is strongly discouraged and should be limited to reading diagnostic output. The code needs to be stuffed full of fail safes, "what ifs?", and corner case catchers.

Ultimately you won't benefit from a systematic trading strategy unless both users are happy. Otherwise you will end up with a product which is either untested with market data, and which may not be profitable (unhappy researcher); or one which should be profitable, but is so badly implemented that it will either crash daily or produce fat finger class errors and buy 10e6 too many contracts (unhappy implementor).

Weirdly of course if, like myself, you're trading with your own money these users are the same person!


Ideas have to be tested


In the vast majority of cases the backtest code comes first, for the same reason that when it comes to building a new car you don't just weld together a bunch of panels and see what they look like; you get out your little clay model (or, in this less romantic world, your CAD package). Pretty much every design discipline uses a 'sandbox' environment to develop ideas. Important fact: the people playing in the sandpit aren't usually professionally trained programmers (yours truly included).

Whether you're in a greenfield corporate context, or developing your own stuff, the first thing you will do is write some code that turns prices or other data into positions; and then a little routine that pretends you were trading live in the past, to see how much money you did, or didn't, make.

If you're sensible you might even have some of your core mathematical routines tidied up and unit tested, so they are properly reusable. You can try to modularise the code as much as possible, so that running a different trading rule just involves repointing one line of code. You could get quite fancy, with code that is flexible and configurable by file or arguments, rather than "configuration" by script. Your simulation of backtested performance can get quite sophisticated.
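To illustrate the 'repointing one line of code' idea, here's a minimal sketch (the rule functions are hypothetical, and I'm using pandas):

import pandas as pd

def ewmac(price, fast=32, slow=128):
    # Trend rule: fast exponential moving average minus slow
    return price.ewm(span=fast).mean() - price.ewm(span=slow).mean()

def breakout(price, window=250):
    # Where is the price within its recent rolling range?
    rolling = price.rolling(window, min_periods=2)
    return (price - rolling.mean()) / (rolling.max() - rolling.min())

price = pd.Series([100.0, 101.5, 99.8, 102.3])   # stand-in for real data

trading_rule = ewmac   # running a different rule means repointing this line
forecast = trading_rule(price)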

At some point though you're going to want to run real money on this.


Productionization - bringing in the grownups


This simulation code isn't normally up to the job of running with real money. In theory all you need to do is write a script that runs the simulation every day / hour / minute, and then another piece of code that turns the output into actual real live trades.

I suspect most people who are running their own money go down this path. However I would estimate that only 10% of my own code base (of which more in a moment) is needed to run a simulation. What that means in practice is that you start with code that isn't sufficiently robust to run in a fully automated way (because it's missing most of the other 90%), and if you're lucky you end up with a vast jerry built structure of things tacked on as you realise you need them.

If you are trading your own money, and not interested in the machinations of corporate fund management politics, you'll probably want to skip ahead to 'Two systems'.

What tends to happen next in a corporate context is that some proper programmers get brought in to productionize the system. The simulation code is normally treated as a specification document - a seriously incomplete and badly written one at that - rather than as a prototype. The rest of the spec, the stuff you need to do the other 90%, then has to be written by the implementor.

The result is a robust trading system, but one on which it's now impossible to do any research. The reason is that it's very hard to unpick the 10% of code that can be mucked about with, muck about with it, and then re-run it to see what will happen.


When lunatics run the asylum: need for innovation


What usually happens next is that the research user comes up with some clever idea that the solid, monolithic, tank like existing production code isn't capable of doing. Given that most quant finance businesses have an oversupply of clever people with clever ideas, and an undersupply of people who can actually make things work properly, they will then be faced with a choice: either wait many months for some programming talent to become available, or try and twist the arm of management to let them run the simulation system with real money.

Most quant finance businesses are run by quants. (You might think that's the natural order of things. But being very clever and insightful AND being a great business person are quite unusual skills to find in the same person. Perhaps it is sometimes better to have the business run by a glorified COO whilst you stick to what you're good at, which is usually the cool and fun stuff. Tech company bosses take note.) Which means that the simulation system ends up being used to trade real money, despite this being an insane idea. By the way, having quants in charge is also why there is an undersupply of builders, AKA programmers, versus architects, AKA researchers, in these businesses. That, and separate reporting / manpower budget lines for CTOs.

Anyway, the bottom line is that rather than modify the existing production code to do the new new thing, the programmers often have to work with a hacked up backtest pretending to be a swan-like production system. But because this is actually running real money, it's treated more as a prototype than as a badly written spec. This means a lot of crud gets ported across into the production system, and the process of productionizing takes a lot longer.

Eventually we end up with a robust system again. Until that is some bright spark has another clever idea, and the cycle begins again.


Two systems: An aside on testing and matching


One way - indeed the best way - of dealing with this is to keep your two code bases completely separate. Once your bright idea is fully developed, you show the programmers your code. After laughing hard at your pathetic attempt, they incorporate it into the production system. You then carry on using your simulation code for research.

You can also do this as an individual, although you probably won't laugh at your own code - not if you've just written it, anyway. As an individual programmer and trader, maintaining two systems is a serious time overhead, but ultimately worth it.

Back in the corporate world, an obvious problem with this is that you still have the bottleneck of needing enough programmers to keep up with the flow of wonderful ideas. But at least they aren't wasting their time hurriedly rewriting cruddy simulation systems that are already running real money, before those blow up.

A slight problem with this is that you have created two ways of doing the same thing. Corporate types running systematic fund businesses have an unhealthy obsession with things being 'right': you have to prove that the position coming from your production system is 'right'. If you have a simulation, the most obvious way of doing this is to run it and crosscheck the two. In this way the simulation becomes a glorified integration test of the production code.

This is a recipe for tens of thousands of person hours of wasted time and effort, spent trying to work out why the two are slightly different. It is completely stupid. For starters there is no 'right' - all trading rules are guesses anyway. Under this logic a trading rule that did exactly what it was 'supposed' to do, but lost a billion dollars, would be better than one which was a bit wayward but made the same amount in profit.

Secondly, this is a very stupid way of testing anything. You should have a spec for what the trading system should do. In case it isn't obvious, I don't think the simulation code should be the spec; at best it's a starting point for writing one. Should you reproduce a bug in the simulation if it isn't what was intended? No. You should find out what was intended, write it down, and implement that. You then write tests to check the production code meets the spec - and mostly they should be unit tests. This is very obvious indeed to anyone working in any other industry where you build stuff after prototyping it.
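To make that concrete, here's a sketch of what spec driven unit testing looks like (the spec and the function are invented for illustration):

import unittest

def cap_forecast(forecast, cap=20.0):
    # Spec: forecasts are clipped to +/- cap; a missing forecast becomes zero
    if forecast is None:
        return 0.0
    return max(-cap, min(cap, forecast))

class TestCapForecast(unittest.TestCase):
    # Each test checks a line of the written spec, not the simulation's output
    def test_caps_large_values(self):
        self.assertEqual(cap_forecast(99.0), 20.0)
        self.assertEqual(cap_forecast(-99.0), -20.0)

    def test_leaves_normal_values_alone(self):
        self.assertEqual(cap_forecast(7.5), 7.5)

    def test_missing_forecast_is_zero(self):
        self.assertEqual(cap_forecast(None), 0.0)

if __name__ == '__main__':
    unittest.main()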


Production first?!?!


When it came to writing my own trading system about a year ago, I did something radical. Since I knew exactly what I wanted to implement, I just sat down and wrote the production code. Of course I was in the unusual position of having already designed enough trading systems to know what I wanted to do - albeit in a corporate context, and I had never written an end to end production system before.

I don't think writing production first is unattainable even if you don't know exactly what you're going to do. If you have the pleasure of working in a greenfield setting you have two main jobs: the first is to write a production system, and the second is to come up with some new and profitable ideas. Don't wait until you've come up with the ideas to hire your proper programmers - hire them now. Get them to code up a simple trading rule in a robust production system. Meanwhile you can do your clever stuff. Occasionally they will confront you with questions, and hopefully this will force you to direct your cleverness towards clarifying what your investment process might end up being.

Similarly, if you are writing your own stuff, it might be worth coding the simplest possible production system first, before you do your research. You could even do both jobs in parallel. It's quite nice being able to shift to some hard core econometrics when you've been coding up corner cases for trading algorithms; and some mindless script writing can be just the ticket when you are stuck for inspiration and the great trading ideas just aren't coming.

If your code is modular enough you should be able to subsequently write the simulation code from the production code, rather than vice versa. The simulation code will just be some scaffolding around the core trading rule part of your production code (the 10% bit, remember?).

With my own system I did get round to doing this, but only after I'd been trading for 6 months. To be honest I don't really run my simulation code that much, and I certainly don't check it against my production code. It's only used for what it should be used for - a sandbox for playing in. If I come up with any new ideas I'll have to go and implement them in the production code. So I am firmly in the two system world, although I approached it from the opposite direction to most people.


Nirvana?


No, not the early 90's grunge band, but the idea of a single perfect system that can do both - a giant uber-system meeting both sets of requirements. I don't think such a nirvana is attainable, for a couple of reasons. Firstly, the work involved is substantial - I would estimate at least four times as much as developing separate production and simulation systems.

Secondly, in a corporate context, there is usually too big a disparity between the needs of different users, particularly on the research side. There is often a temptation to over specify the flashy aspects of the project, such as a user interface with lots of interactive graphics. This tends to happen because the senior managers with the authority to order such large IT projects haven't done much coding for a while, and need more of a point and click interface.


Small steps


I don't believe in the fairy story of 'one system to rule them all'. Instead I believe that two systems probably work best, with some sensible code reuse where it makes sense. Here are some of the small steps you can take.

As I've already mentioned, your core utilities, like calculating a moving average*, should be shared, and tested to death, so you can trust them.

* Okay bad example, since I get pandas to do this for me. But you get the idea.
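In practice the shared utility can be a very thin wrapper (a sketch; the point is that simulation and production import the same tested definition, rather than each having their own):

# common/utils.py - imported by BOTH the simulation and production systems
import pandas as pd

def moving_average(price: pd.Series, span: int = 64) -> pd.Series:
    # One definition of 'moving average', unit tested once, trusted everywhere
    return price.ewm(span=span).mean()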

You can't possibly reuse code unless you have good modularity. The wrapper around the 10% of my production code that is reusable for simulations looks something like this:
 
# live_data_args etc. stand in for arguments saying where live data,
# config and diagnostics can be found
data1 = get_live_data_to_do_step_one(live_data_args)
config1 = get_live_config_to_do_step_one(live_config_args)
diag = diagnostic(live_diagnostic_args)

output1 = do_step_one(data=data1, config=config1, diag=diag)

data2 = get_live_data_to_do_step_two(live_data_args)
config2 = get_live_config_to_do_step_two(live_config_args)
output2 = do_step_two(somedata=output1, moredata=data2, config=config2, diag=diag)


Hopefully I don't need to spell out how the simulation code is different, or how hard it would be to replace step one with a different step one in a research context if the code wasn't broken down like this.
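(But just to be explicit, the simulation version of the wrapper might look something like this - hypothetical names again:)

# The simulation wrapper: same do_step_* functions, different plumbing
data1 = get_sim_data_to_do_step_one(backtest_start, backtest_end)
config1 = get_sim_config_to_do_step_one(config_file)
diag = diagnostic(write_to_screen=True)

output1 = do_step_one(data=data1, config=config1, diag=diag)
# ...and so on; swapping in a different do_step_one is now trivial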

Try to separate out the parts that do all the corner case and type testing from the actual algorithm - the latter is the part you will want to play with and look at. This does however mean you can't have a simple 'doughnut' model of production and simulation code, where there is just a different 'scaffolding' around a core position generation function (which, I realise, is what my pseudo code implies...). It needs to be more dynamic than that.
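One way of picturing that separation (a sketch, not my actual code) is to keep the algorithm as a small pure function the researcher can lift out, with the implementor's defensive checks living in a wrapper around it:

import pandas as pd

def raw_forecast(normalised_price):
    # The pure algorithm: the 10% a researcher wants to muck about with
    return normalised_price.ewm(span=32).mean()

def production_forecast(price, vol):
    # The defensive 90%: corner case and type checking, owned by the implementor
    if not isinstance(price, pd.Series) or price.empty:
        raise ValueError("Bad or missing price data")
    if (vol <= 0).any():
        raise ValueError("Non-positive vol would blow up position sizing")
    forecast = raw_forecast(price / vol)
    return forecast.clip(-20.0, 20.0).fillna(0.0)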

Don't make stuff reusable for the sake of it. For example, I toyed with creating a fancy accounting object which could analyse either live or simulated profitability. Ultimately I decided that 'because it would have been cool' wasn't a good enough reason to build it. Instead I wrote a lot of small routines, each doing one small piece of analysis, which I could stick together in different ways for each task.

As well as code reuse you can also have data reuse. It doesn't make any sense to have two databases of price data: one for simulation and one for live trading. If there are certain prebaked calculations that you always do, such as working out price volatility, then your production system should work them out as often as it needs to, and dump the results somewhere the rest of your system, including your simulation code, can get at them.
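For example (the file location is hypothetical): the production system works out volatility and dumps it somewhere shared, and the simulation just reads the same numbers back:

import pandas as pd

VOL_STORE = "/home/data/shared/price_vol.csv"   # one copy, both systems use it

def update_vol_store(prices):
    # Production: recalculate volatility as often as needed, then dump it
    vol = prices.diff().ewm(span=35).std()
    vol.to_csv(VOL_STORE)

def read_vol_store():
    # Simulation (and everything else): just read the prebaked numbers
    return pd.read_csv(VOL_STORE, index_col=0, parse_dates=True)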


Go forth and code


That's it then. Hopefully I've convinced you that the two system model makes sense. Now, if you will excuse me, I'm going to go and hack some back testing code...