Tuesday, 7 February 2017

Can you eat geometric returns?

This post is about a slightly obscure, but very important, issue. Should we use geometric or arithmetic means of returns to evaluate investments?

This might seem boring, but answering it will help us with some other serious problems: Does diversification increase the expected value of your portfolio, or just reduce the volatility? If so, can we afford to pay extra costs to get diversification? Does adding a small amount of bonds to an all-equities portfolio increase your likely returns?

It turns out that the answer to this boils down to one of the most fundamental questions in financial economics: How should we evaluate the expected value of possible outcomes? 


A brief introduction to geometric returns


When thinking about past and future returns I'm going to be using geometric means rather than the more common arithmetic means. Geometric means reflect what you will actually earn over time.
To understand this better let's look at an example. Consider an investment in which you invest $100 and earn 30%, 30% and -30% over the next three years. The arithmetic mean of returns is the sum of the annual returns, 30% + 30% - 30% = 30%, divided by the number of years (3), which equals 10%. You might expect to have an extra $30 or so after three years: probably more, with the magic of compound interest.

End of year 1: $100 + 30% * $100 = $130
End of year 2: $130 + 30% * $130 = $169
End of year 3: $169 - 30% * $169 = $118.30

Whoops. Compound interest is a wonderful thing but it magnifies losses as well as gains. Now suppose you'd made a pathetic 5.76% a year but consistently:

End of year 1: $100    + 5.76% * $100      = $105.76
End of year 2: $105.76 + 5.76% * $105.76   = $111.85
End of year 3: $111.85 + 5.76% * $111.85   = $118.30

Notice that the annual return here is much lower, just 5.76% a year, but it's consistent. The final account value after three years is exactly the same as in the first example: $118.30. The geometric mean of a series of returns is the consistent return that gives the correct final account value. So the geometric mean of 30%, 30% and -30% is 5.76% a year.

The bit of the post where I put an obligatory equation or two


Mathematically the geometric mean is [(1 + r_1)(1 + r_2)...(1 + r_T)]^(1/T) − 1, where r_t are each of the T returns. Alternatively it's exp[(1/T) Σ ln(1 + r_t)] − 1, where ln is the natural log and exp is the exponential function.

Notice that the geometric mean is a concave function of the final value of the portfolio (1+ r1)(1+ r2)....(1+ rT). This is an important point which I'll return to later.
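
For anyone who prefers code to algebra, here is a minimal Python sketch of my own (just for illustration) that reproduces the 5.76% figure from the example above, using both forms of the formula:

import numpy as np

returns = [0.30, 0.30, -0.30]    # the three annual returns from the example

# T-th root of the product of (1 + r), minus one
geo_mean = np.prod([1 + r for r in returns]) ** (1.0 / len(returns)) - 1

# equivalent log form: exp of the average log return, minus one
geo_mean_log = np.exp(np.mean(np.log([1 + r for r in returns]))) - 1

print(np.mean(returns))   # arithmetic mean: 0.10
print(geo_mean)           # approximately 0.0576
print(geo_mean_log)       # same answer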


Some irony


I am an unlikely cheerleader for geometric returns: until a couple of years ago I'd never actually used them! That's because in the hedge fund world, where we rebalance to target expected risk on constant capital, it's better to use non-compounded curves (see this post for more).

It's not necessary to use geometric returns since there is no compounding, and the volatility of the different options being compared is identical (expected risk is on target). Instead you can use arithmetic returns and make your life easier (you can also focus entirely on Sharpe Ratios, since you effectively have as much leverage as you need to maximise returns for a given risk target).


Some interesting properties of geometric returns


Geometric means give a more realistic picture than arithmetic means. To take an extreme example consider the following series of returns: 100%, 100%, -100%. The arithmetic mean is 33.3%: what a fantastic investment! But the geometric mean is easy to calculate: -100% a year, because you will have nothing left after three years have passed.

Geometric means are always lower than the arithmetic mean, unless all annual returns are identical. The difference between the two measures is larger for more volatile assets. In fact we can see this easily with the following, which is a nice approximation for the geometric mean:

μg = μa − 0.5 σ²

...where μg is the geometric mean, μa is the arithmetic mean and σ² is the variance of returns. In other words the geometric mean is the arithmetic mean, less a correction for risk.

(This can be proven using Jensen's inequality, which I'll return to later.)
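
Here's a quick Monte Carlo sanity check of that approximation; the 5% arithmetic mean and 15% standard deviation are arbitrary illustrative values of my own choosing:

import numpy as np

np.random.seed(42)
mu_a, sigma = 0.05, 0.15                 # assumed arithmetic mean and standard deviation
returns = np.random.normal(mu_a, sigma, size=100000)    # simulated annual returns

exact_geo = np.exp(np.mean(np.log(1 + returns))) - 1    # exact geometric mean
approx_geo = np.mean(returns) - 0.5 * np.var(returns)   # the approximation

print(exact_geo)     # roughly 0.039
print(approx_geo)    # also roughly 0.039, close to mu_a - 0.5 * sigma ** 2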

It's worth emphasising the implication: because diversification reduces volatility without necessarily reducing the arithmetic mean, the benefits of diversification are greater when average returns are measured with geometric means.


The consequences of using geometric returns


1) Diversification improves returns: so we can afford to pay for it


Geometric returns get higher as risk falls, something that never happens with arithmetic returns.

Take a group of similar assets, like equities in the same country and sector. It's not unreasonable to assume they have equal arithmetic returns, equal standard deviations (and thus equal Sharpe Ratios - and equal geometric means) and identical correlations. The optimal portfolio here has equal weights and as many assets as possible. Adding assets doesn't change the arithmetic mean. It does reduce the portfolio standard deviation, rapidly at first, which improves the Sharpe Ratio. And it also improves the geometric mean, a little more gradually.

For example, assuming correlation of 0.85:

1 asset: arithmetic mean 5%, geometric mean 1.3%
5 assets: arithmetic mean 5%, geometric mean 1.8% 
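
We can back these numbers out using the approximation above. Note the standard deviation of about 27% is my assumption, implied by the 1.3% single-asset geometric mean; the 0.85 correlation is as given:

arith_mean = 0.05
sigma = 0.272      # implied by 0.05 - 0.5 * sigma ** 2 = 0.013
rho = 0.85

def equal_weight_geo_mean(n_assets):
    # variance of an equally weighted portfolio of identical, equally correlated assets
    port_var = sigma ** 2 * (1.0 / n_assets + (1 - 1.0 / n_assets) * rho)
    return arith_mean - 0.5 * port_var

print(equal_weight_geo_mean(1))     # about 0.013 (1.3%)
print(equal_weight_geo_mean(5))     # about 0.017 (the 1.8% above, give or take rounding)
print(equal_weight_geo_mean(100))   # about 0.018 - the benefit tails off as correlation dominates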

We can also use geometric returns to "pay" for higher diversification costs. If you can get an extra 0.5% in geometric returns then you can pay 0.4% more in costs and still be ahead.

1 asset: arithmetic mean 5%, geometric mean 1.3%
5 assets: arithmetic mean 5% - 0.4% = 4.6%,  geometric mean 1.4% 

This explains the weird title of the post: we can "eat" higher geometric returns, or use them to pay higher costs.


2) 100% equity portfolios are bad even if you don't maximise Sharpe Ratio


Using geometric returns also gets us part of the way past the classic portfolio optimisation quandary: should we opt for a portfolio with higher return (more equities), or lower risk (more bonds)? The maximum Sharpe Ratio portfolio is just one possible compromise between these two options; but for those with a higher tolerance for risk it is inferior to options with more return.

The maximum geometric mean portfolio is interesting. It's the portfolio for which there is no point increasing risk further, even if doing so gives you a higher arithmetic return.

So an interesting implication is that adding a small amount of bonds to an all-equities portfolio will increase the geometric return: or, to put it another way, all investors should own some bonds.

For example given the following properties:

Bonds: arith. mean 1.6%, standard deviation 8.3%, geo. mean 1.3%, geometric Sharpe Ratio 0.15
Equities: arith. mean 5%, standard deviation 19.8%, geo. mean 3%, geometric Sharpe Ratio 0.15

(these are derived from 100+ years of US real returns, adjusted to reflect more realistic forward expectations and to equalise geometric Sharpe Ratio)

A portfolio with 20% in bonds will have the following properties:

80:20 portfolio: standard deviation 16%, geo. mean 3%, geometric Sharpe Ratio 0.188

Adding a few bonds to the all-equities portfolio has left the geometric return the same. In fact the maximum geometric mean occurs at a bond weight of roughly 10%. Only once we add more than 20% bonds does the geometric mean fall below that of the all-equity portfolio.

Geometric mean (y axis) as bonds added to an all equity portfolio (x axis: 0= no bonds, 1.0 = 100% bonds)
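
The curve can be reproduced with the approximation from earlier. The zero correlation between bonds and equities is my assumption; it's consistent with the 16% standard deviation quoted for the 80:20 portfolio:

import numpy as np

bond_mean, bond_std = 0.016, 0.083     # bonds: arithmetic mean and standard deviation
eq_mean, eq_std = 0.05, 0.198          # equities
corr = 0.0                             # assumed bond / equity correlation

for bond_weight in np.arange(0, 1.01, 0.1):
    eq_weight = 1 - bond_weight
    port_mean = eq_weight * eq_mean + bond_weight * bond_mean
    port_var = (eq_weight * eq_std) ** 2 + (bond_weight * bond_std) ** 2 \
               + 2 * eq_weight * bond_weight * eq_std * bond_std * corr
    geo_mean = port_mean - 0.5 * port_var
    print("%.1f bonds: geometric mean %.2f%%" % (bond_weight, geo_mean * 100))

# The geometric mean peaks at around 10% bonds, and only drops below the
# all-equity value once the bond weight goes above roughly 20%.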


The bear case for using geometric returns


The above findings are relatively startling. So it's important to check that the geometric return is in fact "real". There is some debate about this: I was prompted to write this post after being shown this paper:

http://www.bfjlaward.com/pdf/25968/65-76_Chambers_JPM_0719.pdf

...h/t to Daal on elitetrader.com.

It's a very involved paper which also covers "rebalancing return" but here are the key points in relation to diversification:
  • A key misconception concerning the expected geometric mean return is that it provides an accurate indication of long-term expected future wealth.
  • Another potential misconception regarding geometric mean returns is that maximization of a portfolio’s expected geometric mean return is an optimal portfolio strategy
  • An asset’s expected geometric mean return (i.e., the expected compounded rate of return) is the probability-weighted average of all of the potential realized geometric mean returns. 
  •  ...volatility does not diminish expected value.
Fighting talk! Let's get to the core of the matter. The problem is thus:
  • geometric mean scales (concavely) with final portfolio value.
  • BUT expected geometric mean does not scale with expected portfolio value
  • Therefore maximising expected geometric mean might not maximise expected portfolio value
  • Instead maximising expected arithmetic mean will maximise portfolio value
If you're scratching your head right now, I don't blame you. To illustrate more clearly what is going on I produced some random data: many sets of 10 years of daily Gaussian returns for two assets, with the following properties:

High AM: Arith. mean 5%,     standard deviation 15%
Low AM:  Arith. mean 4.375%, standard deviation 10%

These apparently arbitrary values have been chosen so that both assets have the same geometric mean: 5% − ½ × 15%² = 4.375% − ½ × 10%² ≈ 3.9%.

Now I'm going to plot the distribution of statistical estimates from this little Monte Carlo exercise (with 500,000 runs to ensure reasonably smooth results).
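
Here's a cut-down sketch of that Monte Carlo. I've assumed 256 trading days per year and used only 10,000 runs (both my own choices; the plots below used 500,000), so the exact numbers will wobble a little, but the qualitative results are the same:

import numpy as np

np.random.seed(1)
DAYS_PER_YEAR, YEARS, RUNS = 256, 10, 10000

assets = {"High AM": (0.05, 0.15), "Low AM": (0.04375, 0.10)}

for name, (ann_mean, ann_std) in assets.items():
    # simulate RUNS independent ten-year histories of daily Gaussian returns
    daily_returns = np.random.normal(ann_mean / DAYS_PER_YEAR,
                                     ann_std / np.sqrt(DAYS_PER_YEAR),
                                     size=(RUNS, DAYS_PER_YEAR * YEARS))

    arith_means = daily_returns.mean(axis=1) * DAYS_PER_YEAR
    final_values = (1 + daily_returns).prod(axis=1)
    geo_means = final_values ** (1.0 / YEARS) - 1

    print(name)
    print("  arithmetic mean: mean %.4f, median %.4f" % (arith_means.mean(), np.median(arith_means)))
    print("  geometric mean:  mean %.4f, median %.4f" % (geo_means.mean(), np.median(geo_means)))
    print("  final value:     mean %.3f, median %.3f" % (final_values.mean(), np.median(final_values)))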

First the distribution of arithmetic means:


(The distribution of estimated arithmetic means is Gaussian, with a wider spread for the more volatile asset)

The means of the distributions (i.e. the expected arithmetic mean returns) are 4.99% (High AM) and 4.38% (Low AM). This verifies that the Monte Carlo hasn't done anything weird.

Now the geometric means:


(Again the distribution of estimated geometric means is Gaussian, with a wider spread for the more volatile asset)

You can see that both assets have the same expected geometric mean, though there is more uncertainty about the estimate for the higher volatility "High AM" asset.

Finally let's have a look at the distribution of final portfolio values:



(A final value of 1.0 indicates the portfolio hasn't grown, 2 means 100% growth over 10 years and so on)

Now this distribution is more interesting. Even if you squint really hard it isn't Gaussian - it's a skewed lognormal. The bunching of values on the left hand side is happening when a lot of losses occur in a row. Because we're not using leverage the portfolio value can't go below zero; hence we get bunching.

The means of the distribution are: 1.65 (High AM) and 1.55 (Low AM). These are the key numbers. Although both assets have the same geometric mean the final portfolio value is larger for the asset with a higher arithmetic mean.

Here is clear evidence that the highest expected final value comes when the arithmetic mean is higher, even with higher volatility. From the paper:

  • Another potential misconception regarding geometric mean returns is that maximization of a portfolio’s expected geometric mean return is an optimal portfolio strategy
It certainly looks like the geometric mean is in trouble.


The case for the defence - a question of measuring expectations


I'm going to home in on one particular implication of the paper cited above:

  • The expected final value of the portfolio  is the probability-weighted average of all of the possible portfolio final values. 
Expectation: a word we use a lot in economics and finance without pausing to think. What does it mean? And, very importantly, which average?

When I was at school we learned about three: the mean, the median, and the mode (which I won't be using here). Remember from the figure above that the distribution of realised final values is right skewed. Hence the mean will be greater than the median. So the choice of average matters a lot.

The paper assumes that the probability weighted average of all potential portfolio values is the mean of the distribution of possible portfolio values.

To be clear then:
  • The expected final value of the portfolio  is the probability-weighted average of all of the potential realized portfolio final values. In the paper this is the mean of the distribution of possible portfolio values.
The case against the geometric mean relies heavily on using the mean, not the median, to summarise the distribution of final portfolio values. All this matters because we're dealing with a distribution where the mean and the median are significantly different.


The case for using the median not the mean


Is the mean really appropriate? Personally I would say no, for two reasons:
  1. Risk neutral behaviour doesn't really exist
  2. The median is closer to how humans form expectations


Does risk neutral behaviour exist?


Using the mean makes sense for risk neutral investors. Let's take a simple and rather extreme example. Suppose your entire wealth is £100,000. I offer you the chance to buy a lottery ticket for £100,000, which will pay out £100 million with a probability of one in 999, and nothing otherwise. The expected value of the ticket, using the probability weighted mean of the outcomes, is just over £100,100. For an economist this bet is worth taking!

The two options are:
  • Don't buy the ticket. Mean of future wealth: £100,000. Median of future wealth: £100,000
  • Buy the ticket. Mean of future wealth: £100,100. Median of future wealth: £0
Taking the arithmetic mean of the distribution leads us to prefer an outcome that is completely and utterly insane.  No human being would ever gamble everything they have for such a tiny expected average increase in wealth. People only take on those kinds of gambles for relatively small fractions of their wealth (so yes they do buy lottery tickets, but not for £100,000).

"Real" people require paying to take on risk: as economists like to think of it they will require the probability weighted average less a correction for risk. Classical economics tells us that people exist on a continuum:
  • Risk averse who require paying more than the risk neutral mean of outcomes to take on more risk (they would want to buy the lottery ticket for less than £100,000)
  • Risk neutral investors who use the weighted average mean to evaluate options
  • Risk lovers who are happy to pay to take on more risk (they would happily pay more than £100,000 for the ticket)
There are definitely risk averse people: I am one myself. So if there are risk lovers then there must also be risk neutral investors; it makes no sense to have a continuum with a break in the middle.

Looking around, it does seem that some people love risk to the point they'll happily pay for it: e.g. gambling in casinos when the odds are against them (which they nearly always are). If these weirdos exist it does seem more plausible that risk neutral investors also exist. However I would argue that true risk loving behaviour doesn't exist.

Instead this behaviour is a result of people misjudging probabilities due to cognitive biases in the way we think about risk. The cognitive science hadn't been incorporated into financial economics when the idea of the continuum above was proposed.

We know that people overestimate the likelihood of events with very small probabilities (which is one reason why people do buy lottery tickets costing only a small fraction of their wealth, even though tickets always have a negative expected value, and buy insurance against terrorist attacks).

If you ask a desperate gambler who is about to put the last of their money into the slot machine whether they expect to win on this spin, their answer will be "of course"; probably because they suffer from the gambler's fallacy and believe they are "due" a win.

Similarly the most aggressive investors invest in highly speculative portfolios of rubbish companies with almost no diversification; which on the face of it would only make sense if they were risk loving.

But I would argue - again - that this is a failure of probability assessment. Yes, the investors say, we know that diversification is better, but we have skill and can pick the best stocks. They overestimate their probability of beating the market - all of them are above average: the Lake Wobegon effect.

Cognitive failure leading to probability mis-assessment is mistaken for risk loving behaviour by economists. This is wrong.

And if there are no genuine risk loving investors, I believe risk neutral behaviour is also a myth. In reality everyone requires some compensation for risk.


The median makes more sense to humans than the mean


As humans when we think about expectations it is the median that we are thinking about. If the weather forecast tomorrow is for a 10% chance of rain, and I ask someone what they expect the weather to be, they will say they expect it to be dry (the median outcome). They won't say they expect it to be a little bit wet (the mean outcome).

The £100K lottery may be an extreme example, but as we've already seen the distribution of future wealth is always fairly heavily right skewed; enough so that the difference between mean and median is pretty significant.

Cross sectional distribution of wealth and income in the real world is also famously right skewed. Would you want to live in a country where there is a tiny chance of being very wealthy, but you're most likely to be dirt poor? (Hint: net migration from the very equal Nordic states to much less equal America is almost zero).

Would you want to do a job where you have a minuscule chance of earning millions, but will probably barely earn a living wage? (Again, a lot of kids - or their parents - want to be professional footballers, but this is a judgement error that comes from overestimating the probability that they personally will make it to the top leagues).

I think the correct way to evaluate future wealth outcomes is by using the median. To most people the idea of "I expect what will happen is what is likely to happen half the time" is a more natural concept of expectations than "probability weighted average mean".

In an ideal world you'd show people distributions and explain the uncertainty involved and then get an idea of their risk / reward payoff function. But short of that I'd say that even the least risk averse humans on the planet should use the median outcome when evaluating future investments. 


Implications for using the median rather than the mean


Returning to the plots above, what figures do we get if we summarise them using the median rather than the mean? Remember that High AM has a higher arithmetic mean than Low AM, but both have the same geometric mean.

Arithmetic mean:
High AM: mean 4.99%, median 5.0%
Low AM: mean 4.38%, median 4.38%

Geometric mean:
High AM: mean 4.06%, median 3.95%
Low AM: mean 3.95%, median 3.96%

Future wealth (as a multiple of starting value):
High AM: mean 1.647, median 1.473
Low AM: mean 1.549, median 1.474

The important figures are the medians of future wealth in the final two rows. Ignoring slight differences in the last decimal place (because ultimately this is random data) it's clear that we can draw the following conclusion:

The expected value (using the median) of future wealth is identical when geometric returns are identical, even if the arithmetic mean is lower.

So maximising geometric mean will also maximise final portfolio value. In other words the implications of using geometric mean that I outlined above still hold:

  • We can use diversification to pay for higher costs
  • 100% equity portfolios are not as good as portfolios with some bonds



Summary


I think declaring the death of geometric returns is somewhat premature. It's true that using the classical economist's view of expectation - the mean of the distribution of portfolio values - implies that expected final value isn't lowered by volatility. But that result vanishes when you use the median of the distribution as your basis for expectation.

I feel personally that using the median, rather than the mean, is the correct approach. However this is an ideological debate - there is no right answer. Ultimately an economic model is a simplification of the vastly complicated reality of human behaviour.




Wednesday, 18 January 2017

Playing with Docker - some initial results (pysystemtrade)

This post is about using Docker - a containerisation tool - to run automated trading strategies. I'll show you a simple example of how to use Docker with my python back testing library pysystemtrade to run a backtest in a container, and get the results out. However this post should hopefully be comprehensible to non pysystemtrade and non python speaking people as well.

PS: Apologies for the long break between posts: I've been writing my second book, and until the first draft is at the publishers posts will be pretty infrequent.


The logo of docker is a cute whale with some containers on its back. It's fun, but I'm worried that people will think it's okay to start using whales as a cheap alternative to ships. Listen people: It's not okay.
Source: docker.com


What the Heck is Docker and why do I need it?

As you'll know if you've read this post I currently run my trading system with two machines - a live and a backup. A couple of months ago the backup failed; some kind of disk failure. I humanely dispatched it to ebay (it is now in a better place, the new owner managing to successfully replace the disk: yes I am a software guy and I don't do hardware...).

A brief trip to ebay later and I had a new backup machine (actually I accidentally bought two machines, which means I now have a machine I can use for development). A couple of days ago I then had to spend around half a day setting up the new machine.

This is quite a complicated business, as the trading system consists of:


  1. An operating system (Linux mint in my case)
  2. Some essential packages (ssh; emacs, as I can't use vi; x11vnc as the machine won't normally have a monitor attached; git to download useful python code from github or my local network drive)
  3. Drive mountings to network drives
  4. The interactive brokers gateway
  5. A specific version of Python 
  6. A bunch of python libraries that I wrote all by myself
  7. A whole bunch of other python libraries like numpy, pandas etc: again all requiring specific versions to work (my legacy trading system is in frozen development so it is keyed to an obsolete version of various libraries which have since had pesky API changes)
  8. Some random non python package dependencies 
  9. A directory structure for data
  10. The actual data itself 

All that needs to be set up on the machine, which can be quite fiddly: many of the dependencies have dependencies of their own, and sometimes a google is required. Although I have a 'build' file - a document mostly saying "do this, then this..." - it can still be a tricky process. And some parts are... very... slow...

While moaning about this problem on twitter I kept hearing about something called Docker. I had also seen references to Docker in the geeky dead trees based comic I occasionally indulge myself with, and most recently at this ultra geeky linux blog written by an ex colleague.

At its simplest level Docker allows the above nightmare to be simplified to just the following steps:

  1. An operating system. Scarily this could be ANY operating system. I discuss this below.
  2. Some essential packages (these could be containerised but probably not worth it)
  3. Drive mountings to network drives
  4. The interactive brokers gateway (could also be put inside Docker; see here).
  5. Install Docker
  6. Run a docker container that contains everything in steps 5 to 10

This will certainly save time every time I set up a new trading server; but unless you are running your own data centre that might seem a minimal time saving. Actually there are numerous other advantages to using Docker which I'll discuss in a second. But first let's look at a real example.


Installing Docker


Installing Docker is, to be fair, a little tricky. However it's something you should only need to do once, and in the long run it will save you much pain installing other crud. It's also much easier than installing, say, pandas.

Here is how it's done: Docker installation

It's reasonably straightforward, although I found to my cost it won't work on a 32 bit linux distro. So I had to spend the first few hours of yesterday reinstalling the OS on my laptop, and on the development machine I intend to use for running pysystemtrade when it gets closer to being live.

Not a bad thing: I had to go through the pain of reinstalling my legacy code and dependencies to remind me of why I was doing this, took the opportunity to switch to a new IDE pycharm, and as a bonus finally wiped Windows 10 off the hard disk (I'd kept it 'just in case' but I've only used it twice in the last 6 months: as the rest of my household are still using Windows there are enough machines lying around if I need it).

The documentation for Docker is mostly excellent, although I had to do a little bit of googling to work out how to run the test example I'm going to present now (that's mostly because there is a lot of documentation and I got bored - probably if I'd read it all I wouldn't have had to google the answer).


Example of using Docker with pysystemtrade


The latest version of pysystemtrade on github includes a new directory: pysystemtrade/examples/dockertest/. You should follow along with that. All the command line stuff you see here is linux; windows users might have to read the documentation for Docker to see what's different.


Step one: Creating the docker image (optional)


The docker image is the starting state of your container. Think of a docker image as a little virtual machine (Docker is different from a true virtual machine... but you can google that distinction yourself) preloaded with the operating system, all the software and data you need to do your thing, plus the script that your little machine will run when it's loaded up.

Creating a docker image essentially front loads - and makes repeatable - the job of setting up the machine. You don't have to create your own images, since you can download them from Docker Hub - which is just like GitHub. Indeed you'll do that in step two. Nevertheless it's worth understanding how an image is created, even if you don't do it yourself.

If you want to create your own image you'll need to copy the file Dockerfile (in the directory of pysystemtrade/examples/dockertest/) to the parent directory of pysystemtrade on your computer. For example the full path name for me is /home/rob/workspace3/pysystemtrade/... so I would move to the directory /home/rob/workspace3/. Then you'll need to run this command:

sudo docker build -t mydockerimage .

Okay; what did that just do? First let's have a look at the Dockerfile:


FROM python
MAINTAINER Rob Carver <rob@qoppac.com>
RUN pip3 install pandas
RUN pip3 install pyyaml
RUN pip3 install scipy
RUN pip3 install matplotlib
COPY pysystemtrade/ /pysystemtrade/
ENV PYTHONPATH /pysystemtrade:$PYTHONPATH
CMD [ "python3", "/pysystemtrade/examples/dockertest/dockertest.py" ]

Ignoring the second line, this does the following (also read this):


  • Loads a base image called python (this defaults to python 3, but you can get earlier versions). I'll talk about base images later in the post.
  • Loads a bunch of python dependencies (the latest versions; but again I could get earlier versions if I wanted)
  • Copies my local version of the pysystemtrade library into the image
  • Ensures that python can see that library
  • Runs a script within the pysystemtrade library when the container starts


If you wanted to you could tag this image and push it on the docker hub, so that other people could use it (See here). Indeed that's exactly what I've done: here.

Note: Docker hub gives you one free private image by default, but as many public images as you need.


Step two: Running the container script


You're now ready to actually use the image you've created in a container. A container is like a little machine that springs into life with the image pre-loaded, does its stuff in a virtual-machine-like way, and then vanishes.

If you haven't created your own image, then you need to run this:

sudo docker run -t -v /home/rob/results:/results robcarver17/pysystemtrade

This will go to docker hub and get my image (warning this may take a few minutes).

OR with your own local image if you followed step one above:

sudo docker run -t -v /home/rob/results:/results mydockerimage


In both cases replace /home/rob/results with your own preferred directory for the backtest output. The stuff after the '-v' flag mounts that directory into the docker container, mapping it to the directory /results. Warning: the docker container will have complete power over that directory, so it's best to create a directory just for this purpose in case anything goes wrong.

The docker image will run, executing this script:


from systems.provided.futures_chapter15.basesystem import futures_system
from matplotlib.pyplot import show

resultsdir="/results"system = futures_system(log_level="on")
print(system.accounts.portfolio().sharpe())
system.pickle_cache("", resultsdir+"/dockertest.pck")


This just runs a backtest, and then saves the result to /results/dockertest.pck. The '-t' flag means you can see it running. Remember the /results directory is actually a volume mapped on to the local machine. After the container has finished it will have created a file on the local machine called /home/rob/results/dockertest.pck


Step three: Perusing the results


Now, in a normal local python session, run the file dockertestresults.py:

from systems.provided.futures_chapter15.basesystem import futures_system
from matplotlib.pyplot import show

resultsdir="/home/rob/results"
system = futures_system(log_level="on")
system.unpickle_cache("", resultsdir+"/dockertest.pck")
# this will run much faster and reuse previous calculations
print(system.accounts.portfolio().sharpe())

Again you will need to change the resultsdir to reflect where you mapped the Docker volume earlier. This will load the saved back test, and recalculate the p&l (which is not stored in the systems object cache).


Okay... so what (Part one: backtesting)


You might be feeling a little underwhelmed by that example, but there are many implications of what we just did. Let's think about them.


Backtesting server


Firstly, what we just did could have happened on two machines: one to run the container, the other to analyse the results. If your computing setup is like mine (relatively powerful, headless servers, often sitting around doing nothing) that's quite a tasty prospect.


Backtesting servers: Land of clusters


I could also run backtests across multiple machines. Indeed there is a specific docker product (swarm) to make this easier (also check out Docker machine).


Easy setup


Right at the start I told you about the pain involved with setting up a single machine. With multiple machines in a cluster... that would be a real pain. But not with docker. It's just a case of installing essential services, docker itself, and then launching containers. 


Cloud computing


These multiple machines don't have to be in your house... they could be anywhere. Cloud computing is a good way of getting someone else to keep your machines running (if I was running modest amounts of outside capital, it would be the route I would take). But the task of spinning up, and preparing a new cloud environment is a pain. Docker makes it much easier (see Docker cloud).


Operating system independent


You can run the container on any OS that can install Docker... even (spits) Windows. The base image essentially gets a new OS; in the case of the base image python this is just a linux variant with python preloaded. You can also get different variants which have a much lower memory overhead.

This means access to a wider variety of cloud providers. It also provides redundancy for local machines: if both my machines fail I am only minutes away from running my trading system. Finally for users of pysystemtrade who don't use Linux it means they can still run my code. For example if you use this Dockerfile:

FROM python
MAINTAINER Rob Carver <rob@qoppac.com>
RUN pip3 install pandas
RUN pip3 install pyyaml
RUN pip3 install scipy
RUN pip3 install matplotlib
COPY pysystemtrade/ /pysystemtrade/
ENV PYTHONPATH /pysystemtrade:$PYTHONPATH


sudo docker build -t mydockerimage .
sudo docker run -t -i mydockerimage

... then you will be inside an interactive python session with access to the pysystemtrade libraries. Some googling indicates it's possible to run ipython and python notebooks inside docker containers as well, though I haven't tried this myself.


Okay... so what (Part two: production)


Docker also makes running production automated trading systems much easier: in fact I would say this is the main benefit for me personally. For example you can easily spin up one or more new trading machines, agnostic of OS, either locally or on a cloud. Indeed using multiple machines in my trading system is one of the things I've been thinking about for a while (see this series of tweets: one, two, three and so on).


Microservices


Docker makes it easier to adopt a microservices approach where we have lots of little processes rather than a few big ones. For example instead of running one piece of code to do my execution, I could run multiple pieces, one for each instrument I am trading. Each of these could live in its own container. Then if one container fails, the others keep running (something that doesn't happen right now).

The main advantage of Docker over true virtual machines is that each of those containers would be working off almost identical images (the only difference being the CMD command at the end; in practice you'd have identical images and put the CMD logic into the command line); Docker would share this common stuff massively reducing the memory load of running a hundred processes instead of one.


Data in container images


In the simple version of pysystemtrade as it now exists the data is effectively static .csv files that live inside the python directory structure. But in the future the data would be elsewhere: probably in databases. However it would make sense to keep some data inside the docker image, eg static information about instruments or configuration files. Then it would be possible to easily test and deploy changes to that static information.


Data outside container images


Not all data can live inside images; in particular dynamic data like market prices and system state information like positions held needs to be somewhere else.

Multiple machines means multiple places where data can be stored (machine A, B or a local NAS). Docker volumes allow you to virtualise that, so the container doesn't know or care where the data it's using lives. The only work you'd have to do is define environment variables which might change if data is living in a different place to where it normally is, and then launch your container with the appropriate volume mappings.

Okay, there are other ways of doing this (a messy script of symlinks in linux, for example) but this is nice and tidy.

You can also containerise your storage using Docker data volumes but I haven't looked into that yet.


Message bus


I am in two minds about whether using a message bus to communicate between processes is necessary (rather than just shared databases; the approach I use right now). But if I go down that route containers need to be able to speak to each other. It seems like this is possible although this kind of technical stuff is a little beyond me; more investigation is required. 

(It might be that Swarm removes the need for a message bus in any case, with key arguments passed to new containers as they are launched.)

Still, at a minimum docker containers will need to talk to the IB Gateway (which could also live in a container... see here), so it's reassuring to know that's possible. My next Docker experiment will probably be seeing if I can launch a Gateway instance (probably outside a container, because of the two factor authentication I've grumbled about before, unless I can use x11vnc to peek inside it) and then get a Docker container to talk to it. This is clearly a "gotcha": if I can't get this to work then I can't use Docker to trade with! Watch this space.


Scheduling


At the moment my scheduling is very simple: I launch three big processes every morning using cron. Ideally I'd launch processes on demand; eg when a trade is required I'd run a process to execute it. I'd launch price capturing processes only when the market opens. If I introduce event driven trading systems into my life then I'd need processes that launched when specific price targets were reached.

It looks like Docker Swarm will enable this kind of thing very easily. In particular, because I'm not using python to do the process launching, I won't run into the IB multiple gateway connection problem. I imagine I'd then be left with a very simple crontab on each machine to kick everything into life, and perhaps not even that.


Security


Security isn't a big deal for me, but there is something pleasing about only allowing images access to certain specific directories on the host machine.


Development and release cycle


Finally Docker makes it easier to have a development and release cycle. You can launch a docker container on one machine to test things are working. Then launch it on your production machine. If you have problems then you can easily revert to the last set of images that worked. You don't have to worry about reverting back to old python libraries and generally crossing your fingers and hoping it all works.

You can also easily run automated testing; a good thing if I ever get round to fixing all my tests.

Geeky note: you only get one free private image in your Docker Hub account, and git isn't ideal for storing large binaries. So another source control tool might be better for storing copies of images you want to keep private.


Summary


Development of pysystemtrade - as the eventual replacement to my legacy trading system - is currently paused whilst I finish my second book; but I'm sure that Docker will play a big part in it. It's a huge and complex beast with many possibilities which I need to research more. Hopefully this post has given you a glimpse of those possibilities.

Yes, there are other ways of achieving some of these goals (I look forward to people telling me I should use puppet or god knows what), but the massive popularity of Docker tells you something: it's very easy to use for someone like me who isn't a professional sysadmin or full-on linux geek, and it offers a complete solution to many of the typical problems involved with running a fully automated system.

PS You should know me better by now, but to be clear: I have no connection with Docker and I am receiving no benefit, pecuniary or otherwise, for writing this post.

Monday, 5 September 2016

Systematic risk management


As the casual reader of this blog (or my book) will be aware, I like to delegate my trading to systems, since humans aren't very good at it (well, I'm not). This is quite a popular thing to do; many systematic investment funds are out there competing for your money, from simple passive tracking funds like ETFs to complex quantitative hedge funds. Yet most of these employ people to do their risk management. Yes - the same humans who I think aren't very good at trading.

As I noted in a post from a couple of years ago, this doesn't make a lot of sense. Is risk management really one of those tasks that humans can do better than computers? Doesn't it make more sense to remove the human emotions and biases from anything that can affect the performance of your trading system?

In this post I argue that risk management for trading systems should be done systematically with minimal human intervention. Ideally this should be done inside an automated trading system model.

For risk management inside the model, I'm using the fancy word endogenous. It's also fine to do risk management outside the model which would of course be exogenous. However even this should be done in a systematic, process driven, way using a pre-determined set of rules.

A systematic risk management approach means humans have less opportunity to screw up the system by meddling. Automated risk management also means less work. This also makes sense for individual traders like myself, who can't / don't employ their own risk manager (I guess we are our own risk managers - with all the conflicts of interest that entails).

This is the second in a series of articles on risk management. The first (which is rather old, and wasn't originally intended to be part of a series) is here. The next article will be about an exogenous risk management tool I use called the system envelope. The final article will be about endogenous risk management, explain the simple method I use in my own trading system, and show an implementation of this in pysystemtrade.


What is risk management?


Let's go back to first principles. According to wikipedia:

"Risk management is the identification, assessment, and prioritization of risks (defined in ISO 31000 as the effect of uncertainty on objectives) followed by coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events[1] or to maximize the realization of opportunities. Risk management’s objective is to assure uncertainty does not deflect the endeavour from the business goals. "

This slightly overstates what risk management can achieve. Uncertainty is almost always part of business, and is a core part of the business of investing and trading. It's often impossible to minimise or control the probability of something happening, if that something is an external market event like a recession.

Still if I pick out the juicy parts of this, I get:

  • Identification, assessment and prioritisation of risks
  • Monitoring of risks
  • Minimize and control the impact of unfortunate events 
This suggests that risk management can be boiled down to the following:

  1. Identify some important risks.
  2. Work out a way to measure them
  3. Set levels at which action should be taken, and specify an action to take.
  4. Monitor the risk measurements
  5. Take action if (when) the measurements exceed critical levels
  6. When (if) the situation has returned to normal, reverse the action

I would argue that only steps 1, 2 and 3 are difficult to systematise. Steps 4 to 6 should be completely systematic, and if possible automated, occurring within the trading system.


Types of risk


It's very easy to forget that there are many types of risk beyond the usual: "the price will fall when we are long and we will lose our shirts". This is known as market risk, and whilst it's the most high-profile flavour there are others. Pick up any MBA finance textbook and you'll find a list like this:


  • Market risk. You make a bet - sorry, a trade - which goes against you. We quantify this risk using a model.
  • Credit / counterparty risk. You do a trade with a guy and then they refuse to pay up when you win.
  • Liquidity risk. You buy something but can't sell it when you need to.
  • Funding risk. You borrow money to buy something, and the borrowing gets withdrawn forcing you to sell your position.
  • (Valuation) Model risk. You traded something valued with a model that turned out to be wrong. Might be hard to distinguish from market risk (e.g. the option smile: is the Black-Scholes model wrong, or is it just that the correct price of OTM vol is higher?).
  • (Market) Model risk. You trade something assuming a particular risk model which turns out to be incorrect. Might be hard to distinguish from market and pricing model risk ("is this loss a 6 sigma event, or was our measurement of sigma wrong?"). I'll discuss this more later.
  • Operational / IT / Legal risk. You do a trade and your back office / tech team / lawyers screw it up.
  • Reputational risk. You do a trade and everyone hates you.

Looking at these it's obvious that some of them are hard to systematise, and almost impossible to automate. I would say that operational / IT and legal risks are very hard to quantify or systematise beyond a pseudo-objective exercise like a risk register. It's also hard for computers to spontaneously analyse the weaknesses of valuation models; artificial intelligence is not quite there yet. Finally reputation: computers don't care if you hate them or not.

It's possible to quantify liquidity, at least in open and transparent futures markets (it's harder in multiple venue equity markets, and OTC markets like spot fx and interest rate swaps). It's very easy to program up an automated trading system which, for example, won't trade more than 1% of the current open interest in a given futures delivery month. However this is beyond the scope of this post.
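
For instance, a liquidity check like that might look something like this (a hypothetical function of my own, not something from pysystemtrade):

def volume_limited_order(desired_contracts, open_interest, max_fraction=0.01):
    # cap an order at a fraction of the open interest in a delivery month,
    # preserving the sign of the trade
    max_contracts = int(max_fraction * open_interest)
    if abs(desired_contracts) <= max_contracts:
        return desired_contracts
    return max_contracts if desired_contracts > 0 else -max_contracts

print(volume_limited_order(50, open_interest=2000))    # capped at 20 contracts
print(volume_limited_order(-5, open_interest=2000))    # unchanged: -5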

In contrast it's not ideal to rely on quantitative measures of credit risk, which tend to lag reality somewhat and may even be completely divorced from reality (for example, consider the AAA rating of the "best" tranche of nearly every mortgage backed security issued in the years up to 2007). A computer will only find out that its margin funding has been abruptly cut when it finds it can't do any more trading. Humans are better at picking up and interpreting whispers of possible bankruptcy or funding problems.

This leaves us with market risk - what most people think of as financial risk. But also market model risk (a mouthful I know, and I'm open to using a better name). As you'll see I think that endogenous risk management can deal pretty well with both of these types of risk. The rest are better left to humans. So later in the post I'll outline when I think it's acceptable for humans to override trading systems.


What does good and bad risk management look like?

There isn't much evidence around of what good risk management looks like. Good risk management is like plumbing: you don't notice it's there until it goes wrong, and you've suddenly got "human excrement"* everywhere.

*Well my kids might read this blog. Feel free to use a different expression here.

There are plenty of stories about bad risk management. Where do we start... perhaps here is a good place: https://en.wikipedia.org/wiki/List_of_trading_losses.

Nick Leeson. Bad risk management in action, early 90's style. Source: Daily Mail


Generally traders are given a small number of risk management parameters they have to fit within.

For example my first job in finance was working as a trader for Barclays Capital. My trading mandate included a maximum possible loss (a mere million quid if I remember correctly), as well as limits on the greeks of my position (I was trading options). I also had a limit on everyone's favourite "single figure" risk measurement, VaR.

Bad traders will either wilfully, or through ignorance, bend these limits as much as possible. For example, if I return to the list of trading losses above, it's topped by this man:

Howie. The 9 billion dollar man. Not in a good way. Source: wallstreetonparade.com

Howie correctly called the sub prime mortgage debt collapse. He bet on a bunch of mortgage related derivative crap falling. But to offset the negative carry of this trade (which caused a lot of pain to other people doing the same trade) he bought a bunch of higher rated mortgage related derivatives. For boring technical reasons he had to buy a lot more of the higher rated stuff.

On paper - and presumably according to Morgan's internal models - this trade had minimal risk. It was assumed that the worst that would happen would be that house prices stayed up, and that both the long and short sides would keep their value. Hopefully though Howie would get it right - the crap would fall, and the good stuff would keep its value.

However it turned out that the good stuff wasn't that good either; the losses on the long position ended up dwarfing the gains on the short position. The risk model was wrong.

(The risk management team did [eventually] warn about this, but Howie successfully argued that the default rate they were using to model the scenario would never happen. It did.)


Risk management embodied by trading systems


From the above discussion we can derive my first principle of risk management:

Good traders do their own risk management 

(and by trader here I mean anyone responsible for making investment decisions, so it includes fund managers of all flavours, plus people who think of themselves as investors rather than traders).

Good traders will take their given risk limits as a starting point. They will understand that all risk measurements are flawed. They will think about what could go wrong if the risk model being used was incorrect. They will consider risks that aren't included in the model.

Similarly good trading systems already do quite a lot of risk management. This isn't something we need to add, it's already naturally embodied in the system itself.

For example in my book I explain how a trading system should have a predetermined long term target risk, and then how each position should be sized to achieve a particular target risk according to its perceived profitability (the forecast) and the estimated risk for each block of the instrument you're trading (like a futures contract), using estimates of return volatility. I also talk about how you should use estimates of correlations of forecasts and returns to achieve the correct long run risk.

Trading systems that include trend following rules also automatically manage the risk of a position turning against them. You can do a similar thing by using stop loss rules. I also explain how a trading system should automatically reduce your risk when you lose money (and there's more on that subject here).

All this is stuff that feels quite a lot like risk management. To be precise it's the well known market risk that we're managing here. But it isn't the whole story - we're missing out market model risk. To understand the difference I first need to explain my philosophy of risk in a little detail.


The two different kinds of risk


I classify risk into two types - the risk encompassed by our model of market returns; and the part that isn't. To see this a little more clearly have a look at a picture I like to call the "Rumsfeld quadrant"


The top left is stuff we know we know. That means there isn't any risk. Perhaps the world of pure arbitrage belongs here, if it exists. The bottom left is stuff we don't know we know. That's philosophy, not risk management.

The interesting stuff happens on the right. In green on the top right we have known-unknowns. It's the area of quantifiable market risk. To quantify risk we need to have a market risk model.

The bottom right red section is the domain of the black swan. It's the area that lies outside of our market risk model. It's where we'll end up if our model of market risk is bad. There are various ways that can happen:

  • We have the wrong model. So for example before Black-Scholes people used to price options in fairly arbitrary ways. 
  • We have an incomplete model. Eg Black-Scholes assumes a lognormal distribution. Stock returns are anything but lognormal, with tails fatter than a cat that has got a really fat tail.
  • The underlying parameters of our market have changed. For example implied volatility may have dramatically increased.
  • Our estimate of the parameters may be wrong. For example if we're trying to measure implied vol from illiquid options with large bid-ask spreads. More prosaically, we can't measure the current actual volatility directly, only estimate it from returns.

An important point is that it's very hard to tell apart (a) an extreme movement within a market risk model that is correct and (b) a movement that isn't that extreme - it's just that your model is wrong. In simple terms: is the 6 sigma event (which should happen once every 500 million days or so) really a 6 sigma event?

Or is it really a 2 sigma event, and your volatility estimate is out by a factor of 3? Or has the unobservable "true" vol changed by a factor of 3? Or does your model not account for fat tails, so that "6 sigma" events actually happen 1% of the time? You generally need a lot of data to make a Bayesian judgement about which is more likely. Even then it's a moving target, because the underlying parameters will always be changing.
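
To put some numbers on that, here's a quick check of how often moves of a given size should occur under a Gaussian model (two-sided probabilities, using scipy):

from scipy.stats import norm

for sigmas in [2, 6]:
    p = 2 * norm.sf(sigmas)    # probability of a move at least this many sigmas from the mean, in either direction
    print("%d sigma: once every %.0f days" % (sigmas, 1 / p))

# 2 sigma: roughly once a month of trading days
# 6 sigma: roughly once every 500 million days
# So if your vol estimate is too small by a factor of 3, what looks like a
# "6 sigma" event is really just a 2 sigma event - nothing special at all.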

This also applies to distinguishing different types of market model risk. You probably can't tell the difference between a two state market with high and low volatility (changing parameter values), and a market which has a single state but a fat tailed distribution of returns (incomplete model); and arguably it doesn't matter.

What people love to do, particularly quants with PhDs trapped in risk management jobs, is make their market models more complicated to "solve" this problem. Consider:




On the left we can see that less than half of the world has been explained by green, modelled, market risk. This is because we have the simplest possible multiple asset risk model - a set of Gaussian distributions with fixed standard deviation and correlations. There is a large red area where we have the risk that this model is wrong. It's a large area because our model is rubbish. We have a lot of market model risk.

However - importantly - we know the model is rubbish. We know it has weaknesses. We can probably articulate intuitively, and in some detail, what those weaknesses are.

On the right is the quant approach. A much more sophisticated risk model is used. The upside of this is that there will be fewer risks that are not captured by the model. But this is no magic bullet. There are some disadvantages to extra complexity. One problem is that more parameters are harder to estimate, and estimates of things like higher order moments or state transition probabilities will be very sensitive to outliers.

More seriously however I think these complex models give you a false sense of security. To anyone who doesn't believe me I have just two words to say: Gaussian Copula. Whilst I can articulate very easily what is wrong with a simple risk model it's much harder to think of what could go wrong with a much weirder set of equations.

(There is an analogy here with valuation model risk. Many traders prefer to use Black-Scholes option pricers and adjust the volatility input to account for smile effects, rather than use a more complex option pricer that captures this effect directly)

So my second principle of risk management is:

Complicated risk model = a bad thing

What I prefer to do is use a simple model of returns as part of my trading system. Then I handle market model risk systematically: either endogenously within the system, or exogenously.


Risk management within the system (endogenous)


The disadvantage of simpler models is their simplicity. But because they're simple, it's also easy to write down what their flaws are. And what can be written down easily can, and should, be added to a trading system as an endogenous risk management layer.

Let's take an example. We know that the model of fixed Gaussian volatility is naive (and I am being polite). Check this out (ignore the headline, which is irrelevant and for which there is no evidence):

S&P 500 vol over time. Source: Seeking Alpha

Now I could deal with this problem by using a model with multiple states, or something with fatter tails. However that's complicated (=bad).

If I was to pinpoint exactly what worries me here, it's this: increasing position size when vol is really low, like in 2006, because I know vol will probably go up abruptly at some point. There are far worse examples of this: EURCHF before January 2015, front Eurodollar and other STIR contracts, CDS spreads before 2007...

I can very easily write down a simple method for dealing with this, using the 6 step process from before:
  1. We don't want to increase positions when vol is very low.
  2. We decide to measure this by looking at realised vol versus historical vol
  3. We decide that we'll not increase leverage if vol is in the lowest 5% of values seen in the last couple of years
  4. We monitor the current estimated vol, and the 5% quantile of the distribution of vol over the last 500 business days.
  5. If estimated vol drops below the 5% quantile, use that instead of the lower estimated vol. This will cut the size of our positions.
  6. When the vol recovers, use the higher estimated vol.
Here is the implementation of this idea in pysystemtrade: https://github.com/robcarver17/pysystemtrade/blob/master/syscore/algos.py#L39 (default values can be changed here).
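
Here's a minimal sketch of the vol floor in pandas. The EWMA span and minimum number of observations are my own choices, and this is a simplification of, not a copy of, the pysystemtrade function linked above:

import pandas as pd

def floored_vol(daily_returns, span=35, lookback=500, floor_quantile=0.05):
    # step 2: estimate vol with an exponentially weighted standard deviation
    vol = daily_returns.ewm(span=span).std()
    # steps 3 and 4: the 5% quantile of vol over the last 500 business days
    vol_floor = vol.rolling(lookback, min_periods=10).quantile(floor_quantile)
    # steps 5 and 6: never use a vol estimate below the floor
    return vol.clip(lower=vol_floor)

# usage, given a pd.Series of prices:
# vol = floored_vol(prices.diff())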

It's easy to imagine how we could come up with other simple ways to limit our exposure to events like correlation shocks, or unusually concentrated positions. The final post of this mini series will explain how my own trading system does its own endogenous risk management, including some new (not yet written) code for pysystemtrade.


Systematic risk management outside the system (exogenous)


There is a second category of risk management issues. This is mostly stuff that could, in principle, be implemented automatically within a trading system, but doing so would be more trouble than it's worth, or pose practical difficulties. Instead we develop a systematic process which is followed independently. The important point is that once the process is in place there should be no room for human discretion.

Here's an example of something that fits nicely into an exogenous risk management framework, following the 6 step programme I outlined earlier:


  1. We have a large client that doesn't want to lose more than half their initial trading capital - if they do they will withdraw the rest of their money and decimate our business.
  2. We decide to measure this using the daily drawdown level
  3. We decide that we'll cut our trading system risk by 25% if the drawdown is greater than 30%, by half at 35%, by three quarters at 40% and completely at 45% (allowing some room for overshoot).
  4. We monitor the daily drawdown level
  5. If it exceeds the level above we cut the risk capital available to the trading system appropriately
  6. When the capital recovers, regear the system upwards
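To make steps 3 to 5 concrete, here's a minimal sketch of the degearing schedule (illustrative Python only; the thresholds are the ones given in step 3, everything else is made up):

def risk_multiplier(drawdown):
    # drawdown expressed as a positive fraction, e.g. 0.32 means a 32% drawdown
    # Step 3: cut risk by 25% at a 30% drawdown, by half at 35%,
    # by three quarters at 40%, and completely at 45%
    if drawdown >= 0.45:
        return 0.0
    elif drawdown >= 0.40:
        return 0.25
    elif drawdown >= 0.35:
        return 0.50
    elif drawdown >= 0.30:
        return 0.75
    else:
        return 1.0

# Step 5: scale the capital the trading system is allowed to use
original_capital = 1000000
current_drawdown = 0.32    # a 32% drawdown...
trading_capital = original_capital * risk_multiplier(current_drawdown)    # ...means 75% of capital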

[I note in passing that:

Firstly this will probably result in your client making lower profits than they would have done otherwise, see here.

Secondly this might seem a bit weird - why doesn't your client just stump up only half of the money? But this is actually how my previous employers managed the risk of structured products that were sold to clients with a capital guarantee (in fact some of the capital was used to buy a zero coupon bond). These are out of fashion now, because much lower interest rates make the price of the zero coupon bonds far too rich for the structure to work.

Finally for the terminally geeky, this is effectively the same as buying a rather disjointed synthetic put option on the performance of your own fund]

Although this example can, and perhaps should, be automated, it lies outside the trading system proper. The trading system proper just knows it has a certain amount of trading capital to play with, adjusted automatically for gains and losses. It doesn't know or care about the fact that we have to degear this specific account in an unusual way.

In the next post I'll explain in more detail how to construct a systematic exogenous risk management process using a concept I call the risk envelope. In this process we measure various characteristics of a system's backtested performance, and use this information to determine degearing points for different unexpected events that lie outside of what we saw in the backtest.

For now let me give you another slightly different example - implied volatility. Related to the discussion above there are often situations when implied vol can be used to give a better estimate of future vol than realised vol alone. An example would be before a big event, like an election or non farm payroll, when realised vol is often subdued whilst implied vols are very rich.

Ideally you'd do this endogenously: build an automated system which captures and calculates the option implied vol surface and ties it in with realised vol information based on daily returns (you could also throw in recent intraday data). But this is a lot of work, and very painful.

(Just to name a few problems: stale and non-synchronous quotes; wide spreads on OTM option prices, which give you very wide estimates of implied vol; non-continuous strikes; and a constantly changing underlying, which means the ATM strike is always moving...)

Instead a better exogenous system is to build something that monitors implied vol levels, and then cuts positions by a prescribed amount when they exceed realised vol by a given proportion (thus accounting for the persistent premium of implied over realised vol). Some human intervention in the process will prevent screw-ups caused by bad option prices.
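As a rough illustration, the monitoring step could be as simple as this sketch (the normal premium, trigger ratio and size of the cut are all made up numbers you'd have to calibrate yourself):

def implied_vol_multiplier(implied_vol, realised_vol,
                           normal_premium=1.2, trigger=2.0, cut_to=0.5):
    # Implied vol normally trades at a persistent premium to realised vol,
    # so only act when the ratio is well beyond that normal premium
    ratio = implied_vol / realised_vol
    if ratio > trigger * normal_premium:
        return cut_to    # cut positions by a prescribed amount
    return 1.0           # otherwise leave the system alone

# Example: implied vol of 24% against realised vol of 8% is a ratio of 3,
# well above the 2.4 trigger, so positions would be halved
multiplier = implied_vol_multiplier(0.24, 0.08)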




Discretionary overrides


Ideally all risk managers at systematic funds could now be fired, or at least redeployed to more useful jobs.

Risk manager working on new career. Source: wikipedia


But is it realistic to do all risk management purely systematically, either inside or outside a system? No. Firstly we still need someone to do this stuff...

  1. Identify some important risks.
  2. Work out a way to measure them
  3. Set levels at which action should be taken, and specify an action to take.
... even if stages 4-6 should still be done by computers.

Secondly there are a bunch of situations in which I think it is okay to override the trading system, due to circumstances which the trading system (or predetermined exogenous process) just won't know about.

I've already touched on this in the earlier discussion of types of risk, where I noted that humans are better at dealing with harder to quantify, more subjective risks. Here are some specific scenarios from my own experience. As with systematic risk management, the appropriate response is to proportionally de-risk the position until the problem goes away or is solved.


Garbage out – parameter and coding errors


If an automated system does not behave according to its algorithm, there must be a coding bug or an incorrect parameter. If it isn't automated then it's probably a fat finger error on a calculator or a formula error in a spreadsheet. This clearly calls for de-risking, unless it is absolutely clear that the positions are of the correct sign and smaller than the system actually desires. The same goes for incorrect data: we need to check against what the position would have been with the right data.
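A hypothetical sketch of that reconciliation check (the function and the example numbers are my own illustration, not part of any particular system): recompute the desired position with clean data, and only leave things alone if the live position has the right sign and is no bigger than desired.

def position_is_safe(actual_position, correct_position):
    # Safe only if the sign is right and the position is no larger
    # than the system would actually want with the right data
    same_sign = (actual_position * correct_position) > 0
    not_too_big = abs(actual_position) <= abs(correct_position)
    return same_sign and not_too_big

# Example: we are long 12 contracts but clean data implies long 8, so de-risk
if not position_is_safe(actual_position=12, correct_position=8):
    print("De-risk: position is larger than the system actually desires")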


Liquidity and market failure


No trading system can cope if it cannot actually trade. If a country is likely to introduce capital controls, if there is going to be widespread market disruption because of an event or if people just stop trading then it would be foolish to carry on holding positions.

Of course this assumes such events are predictable in advance. I was managing a system trading Euroyen interest rate futures just before the 2011 Japanese earthquake. The market stopped functioning almost overnight.

A more pleasant experience was when the liquidity in certain Credit Default Swap indices drained away after 2008. The change was sufficiently slow to allow positions to be gradually derisked in line with lower volumes.


Denial of service – dealing with interruptions


A harder set of problems to deal with are interruptions to service: for example hardware failure, data feed problems, internet connectivity breaking, or problems with the broker. Any of these might mean we cannot trade at all, or are trading with out of date information. Clearly a comparison of the likely down time with the average holding period is important.

With medium term trading, and a holding period of a few weeks, a one to two day outage should not unduly concern an individual investor, although they should keep a closer eye on the markets during that period. For longer outages it would be safest to close all positions, balancing the costs of doing so against the possible risks.
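If you want a rule of thumb for that comparison, something like this sketch captures it (the ratios and actions are entirely arbitrary examples, not recommendations):

def outage_action(expected_outage_days, average_holding_days):
    # Compare expected down time to the typical holding period
    ratio = expected_outage_days / average_holding_days
    if ratio < 0.1:
        return "carry on, but watch the markets more closely"
    elif ratio < 0.5:
        return "consider reducing positions"
    else:
        return "safest to close all positions"

# Example: a one day outage against a twenty day holding period
print(outage_action(1, 20))    # carry on, but watch the markets more closely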


What's next


As I said I'll be doing a couple more posts on this subject. The next one will talk about specific exogenous systematic risk management. The final post will explain how I use endogenous risk management within my own trading system.