Thursday, 7 May 2020

When endogenous risk management isn't enough: a simple risk overlay

"How does your risk management work?"

... is a question I'm frequently asked.

In fact this is a surprisingly difficult question. If you were to look at my open source python backtesting project pysystemtrade, you would struggle to point at a piece of code and say "Behold! Right there, that's the risk management part alright!". The reason is that the risk management in my trading system is endogenous (from the Greek, meaning 'word used to mean internally or inside by people trying to sound clever'). Risk management is something it just does without even trying.

For example, if volatility rises, then positions will be cut. If it starts to lose money on a particular position, the position will be cut. If the amount of capital deployed reduces, the position will be cut. Many of these things look like deliberate risk management, or perhaps the term 'position management' is more appropriate. But they are just a consequence of the simple building blocks that the system is built upon: inverse vol position scaling, a preponderance of trend following rules, and liberal use of the Kelly criterion.
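As a minimal illustration of the first of those building blocks, here's a sketch of inverse volatility position scaling (the function and the numbers are made up for illustration; this is not pysystemtrade code):

import numpy as np

def notional_position(capital, target_risk, instrument_risk, forecast=10.0, average_forecast=10.0):
    # inverse volatility scaling: exposure shrinks as instrument risk rises,
    # and scales linearly with forecast strength
    return capital * (target_risk / instrument_risk) * (forecast / average_forecast)

# if instrument volatility doubles from 16% to 32%, the position halves
print(notional_position(100000, 0.25, 0.16))   # 156250.0
print(notional_position(100000, 0.25, 0.32))   # 78125.0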

However, these simple building blocks make some heroic assumptions. In particular, they assume that asset returns follow a joint Gaussian distribution, where co-movements are linear, and both volatility and correlations are perfectly predictable from historic data. The system also does its risk management on a long run average basis.

The consequences of this are, to use technical language for a moment: sometimes things could get a bit scary. This post explains how, and introduces a simple risk overlay to make things slightly less scary. Essentially this overlay sits slightly outside the main system (although it runs as part of the same code base), tweaking positions when certain risk limits are hit.

This is an overlay I have already implemented in my existing trading system, where it's worked well for over 6 years. Although this new code is designed for pysystemtrade, I will make the python code as stand-alone as possible so you can adapt it for your own use if you wish. It will work equally well in other trading systems, although it will be most useful in a system that works more like mine.

This is a continuation of a series I started a few years ago, but only got around to writing a couple of posts for. In the spirit of tidiness, here are the first two posts:


You don't have to read the first two posts to understand this one, but it might help, especially if you don't understand exactly what I mean by endogenous risk management.

Parts of this post will be easier to follow if you've read my first book, Systematic Trading.


Realised risk


Let's start by measuring the actual risk we realised. This should average 25%, or whatever is defined in system.config.percentage_vol_target.

(I have grabbed a backtest to do this which essentially reflects my live system with a subset of instruments, but you can play along with a different pysystemtrade backtest if you wish, or any series of daily returns you happen to have).

# assuming we already have a pysystemtrade system object...
import pandas as pd

returns = system.accounts.portfolio().as_percent()
returns = returns[pd.Timestamp("1997-01-01"):]    # only use data since 1997
annualised_std = returns.std()*16                 # 16 is roughly sqrt(256 business days)


(I'm only showing data since 1997, because I'm in the process of cleaning up my price data which still has a few spikes in it that aren't real.)

The average standard deviation comes out at 23.7%, which is a fraction below the target of 25%. But more importantly, how does this vary over time? Let's plot the daily returns:




With 23.7% annual risk, which equates to around 1.5% a day, we'd expect to see around two thirds (68%) of our returns coming in between -1.5% and +1.5% (if our mean was zero; the mean is actually 0.08%, so the returns will come in between -1.42% and +1.58%, which isn't very different).
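As a quick sanity check, we can count how many daily returns actually fall inside a one standard deviation band (a sketch using the returns series from above):

daily_std = returns.std()
# roughly 0.68 if returns were Gaussian with this standard deviation
proportion_within_band = ((returns > -daily_std) & (returns < daily_std)).mean()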

To see more clearly, let's look at the rolling standard deviation of returns over the last 125 business days (about 6 months), and multiply by 16 to annualise:

roll_std_returns = returns.rolling(125).std()*16



You can see clearly that the 6 month rolling risk is highly variable, dipping down to 10% at some points, and up above 40% in the halcyon days of the early 2000s.

There are two explanations for this:
  1. We are very bad at predicting our risk
  2. We are allowing our expected risk to vary a lot
... or perhaps a little bit of both.


Expected risk


It's very easy to check whether our expected risk varies a lot, by measuring it directly (assuming, naturally, a joint Gaussian risk model).

The code for this is a bit lengthy, so rather than cut and paste it here I've dropped it into its own little gist. The tricky part is converting everything into notional exposure as a percentage of capital, which allows us to use percentage returns. Incidentally, I use a 30 day span for standard deviations, and 120 days for correlations. These give a fairly good estimate, but using different values won't make a huge difference.

risk_series = get_expected_risk_for_system(system)





OK, so our expected risk does indeed vary a lot (and I will explain why below). Another way of thinking about the problem is to see how well we did at matching expected and realised risk (basically, how good is our simple Gaussian risk model at forecasting risk). The plot below shows the realised returns, with plus and minus one standard deviation bands from the expected risk distribution (ignoring the tiny mean).

# risk_series is an annualised proportion: *100 converts to percent, /16 converts to daily
threshold = pd.concat([risk_series*100/16.0, -risk_series*100/16.0], axis=1)
thresh_and_returns = pd.concat([returns, threshold], axis=1)
thresh_and_returns.columns = ['returns', '+1std', '-1std']



We should see about two thirds of the returns fall within the bands, and indeed they do. Also, the bands should expand when the returns do; this happens, for example, in 2004. The risk model isn't perfect, because there are a few large outliers that we wouldn't expect to get with Gaussian returns (though some of these may be due to bad data). But it's doing a reasonable job of forecasting future risk.

Mostly our actual risk is varying because our expected risk is varying. So let's find out why.


Why does expected risk vary?


It is worth briefly revisiting the calculations used to work out a position size. The position, measured as a percentage of capital, is a product of a lot of different numbers, but it simplifies to this:

position as % of capital = (instrument forecast / average instrument forecast)* (target risk / instrument risk) * instrument weight * IDM

Where the IDM (Instrument Diversification Multiplier) is the factor applied to positions to account for the correlation between trading subsystems (i.e. the trading strategies we run for each instrument and the returns they produce, not the underlying instrument returns).

And the portfolio risk calculation is wSw', where w are the weights (basically position as % of capital) and S is the covariance matrix composed of instrument standard deviations and the correlation between instrument returns (different from that used for IDM).
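A minimal numpy sketch of that portfolio risk calculation, with made-up numbers:

import numpy as np

# weights: positions as a proportion of capital (here 150% and 100% notional exposure)
w = np.array([1.5, 1.0])
stdev = np.array([0.08, 0.16])           # annualised instrument standard deviations
corr = np.array([[1.0, 0.5],
                 [0.5, 1.0]])            # correlation of instrument returns
sigma = np.diag(stdev) @ corr @ np.diag(stdev)   # covariance matrix S
portfolio_risk = np.sqrt(w @ sigma @ w)          # sqrt(w S w')
print(portfolio_risk)   # about 0.24, i.e. 24% annualised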

In a very handwaving way, it can be shown that the current expected portfolio risk will then be equal to:

Expected risk = target risk * (relative forecast strength) * (relative correlation factor)

Relative forecast strength is a measure of how strong forecasts are relative to the average; it is equal to the absolute forecast for each instrument, weighted by the instrument weights and divided by the average absolute forecast (set to 10 in pysystemtrade), then summed across instruments.


All other things being equal, if your forecasts are all +20, and the average is +10, then your expected portfolio risk would be twice the average risk, or roughly twice the target risk (50% in the example I've been using).

Importantly, we want risk to vary according to forecast strength. Otherwise we'd have exactly the same risk on whether our forecasts were all +0.001 or all +20 (the maximum allowed under forecast capping).

(There is a school of thought that says we want risk to remain fixed, which is how a lot of long/short hedge funds construct portfolios, but that is another blog post.)

The relative correlation factor (RCF) is a bit more complicated. It is equal to the ratio between the IDM (which accounts for the average correlation across subsystem returns), and the IDM that would be appropriate given the current set of positions and current correlation between instrument returns.

So for example, if you normally trade two subsystems (say US 10 year and S&P 500) with a correlation between subsystems of zero, then your IDM will be equal to the square root of 2: 1.414.

Now imagine that for some reason your system has a long average sized position in US 10 years, and a short average sized position in S&P 500 futures, and also that the correlation between these two instruments is -1. A quick calculation shows that the expected risk here will be 2.82 times the average. If the correlation was zero, then the expected risk would be twice the average; and if the correlation was +1 then the expected risk would be zero. The relevant RCF would be 2.82/1.41, 2/1.41, and zero.

Similarly, if the current position was long an average sized position in both instruments, then with a correlation of +1 the risk would be 2.82 times the average, with a correlation of zero it would be twice, and again with a correlation of -1 it would be zero. The relevant RCFs are again 2.82/1.41, 2/1.41, and 0. Notice the symmetry here - we'll use this result later.
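Those numbers are easy to check numerically; a quick sketch (not pysystemtrade code):

import numpy as np

idm = np.sqrt(2)                       # IDM for two zero-correlated subsystems

def risk_relative_to_average(signed_weights, rho):
    # positions in units of an average sized subsystem position, scaled up by the IDM
    w = np.array(signed_weights) * idm
    corr = np.array([[1.0, rho], [rho, 1.0]])
    return np.sqrt(w @ corr @ w)

for rho in [-1.0, 0.0, 1.0]:
    # long/short positions: prints roughly 2.83, 2.0, 0.0
    print(rho, risk_relative_to_average([1.0, -1.0], rho))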

Clearly the RCF can vary quite a lot depending on what the current positions are, and the current correlation matrix. You might argue that positions and correlations of this kind are unlikely given the average correlation between subsystems is zero. They are unlikely, but they aren't impossible. In particular, correlations do vary especially in the kind of crisis we've just seen.

The RCF is more of an annoyance in terms of expected risk; we wouldn't necessarily want our risk to be a lot higher just because the positions we happen to have on are especially toxic given today's correlations.

Let's plot the relative forecast strength against our expected risk to see if we can decompose how much of our risk is coming from these two components: relative forecast (which we like!), and relative correlation (which we don't like!).


def forecast_strength_for_system(system):
    list_of_instruments = system.get_instrument_list()
    forecasts = [system.combForecast.get_combined_forecast(instrument_code)
                 for instrument_code in list_of_instruments]
    forecasts = pd.concat(forecasts, axis=1)
    forecasts.columns = list_of_instruments

    # express forecasts relative to the average absolute forecast (10 by default)
    forecasts = forecasts / system.config.average_absolute_forecast

    instrument_weights = system.portfolio.get_instrument_weights()

    # weight the absolute forecasts by instrument weight, then sum across instruments
    weighted_forecast = instrument_weights.ffill() * forecasts.abs().ffill()
    forecast_strength = weighted_forecast.sum(axis=1)

    return forecast_strength

risk_vs_average = 100*risk_series / system.config.percentage_vol_target
forecast_strength = forecast_strength_for_system(system)

to_plot = pd.concat([risk_vs_average, forecast_strength], axis=1)
to_plot.columns = ['Expected risk', 'Forecast strength']



(I've put everything in terms relative to its expected long run average so we can plot them together.)

So expected risk does indeed follow forecast strength pretty well. For example, in late 2018:


... forecast strength goes up, and expected risk follows it. However this isn't always the case. Strikingly, in the past couple of months expected risk has exploded while forecast strength has been falling. This is because the relative correlation factor has increased dramatically, most likely as correlations have got really weird in the current crisis.


Overview of the risk overlay


Now we have a better understanding of what is driving our expected risk, it's time to introduce the risk overlay. The risk overlay calculates a risk position multiplier, which is between 0 and 1. When this multiplier is one we make no changes to the positions calculated by our system. If it was 0.5, then we'd reduce our positions by half. And so on.

So the overlay acts across the entire portfolio, reducing risk proportionally on all positions at the same time. 
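Applying it is trivial; something like this (a sketch, where positions would be a DataFrame of subsystem positions and risk_multiplier the daily multiplier series):

# scale every instrument's position by the same daily multiplier
adjusted_positions = positions.multiply(risk_multiplier, axis=0)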

The risk overlay has three components, designed to deal with the following issues:

- Expected risk that is too high
- Weird correlation shocks combined with extreme positions
- Jumpy volatility (non stationary and non Gaussian vol)

Each component calculates its own risk multiplier, and then we take the lowest (most conservative) value.

That's it. I could easily make this a lot more complicated, but I wanted to keep the overlay pretty simple. It's also easy to apply this overlay to other strategies, as long as you know your portfolio weights and can estimate a covariance matrix (I'm assuming anyone who has read this far can do both of those things, or knows a person that can). You don't need the same concept of a 'forecast', for example, since forecasts don't enter into these calculations.

Let's dive into the individual components.


Maximum expected risk


This component assumes that Gaussian risk is a good enough model for expected risk, and it also assumes that we don't want too much of it. Specifically the risk multiplier looks like this:

risk multiplier = min(1, 2*target risk / current expected risk)

So if the current expected risk is more than twice the long run target, we'll start reducing our positions. The choice of '2' is arbitrary, and down to personal preference. However, since the combined forecast for an instrument is capped at twice the average forecast, allowing the expected risk to be double the long run target seems to make sense.
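In code terms this component is just the following (a sketch with illustrative numbers, not the pysystemtrade implementation):

def normal_risk_multiplier(expected_risk, target_risk, max_risk_fraction=2.0):
    # cut positions proportionally once expected risk exceeds
    # max_risk_fraction times the long run target
    return min(1.0, max_risk_fraction * target_risk / expected_risk)

print(normal_risk_multiplier(0.30, 0.25))   # 1.0: expected risk is below twice the target
print(normal_risk_multiplier(0.75, 0.25))   # 0.667: cut all positions by a third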

From the discussion above, we'll be doing that if (a) we have very strong relative forecasts, or (b) the current correlation factor is particularly nasty. I could have made this more complex to specifically target the correlation factor, but this is simple enough to understand and explain, and works nicely on any kind of strategy with a long run risk target but varying expected risk.

How often will this kick in? We've already calculated expected risk vs target earlier:

risk_vs_average = 100*risk_series / system.config.percentage_vol_target


So now plotting the series:


There are a few times when risk goes over 2, including in recent weeks. Here is the risk multiplier:

risk_multiplier = 2.0/risk_vs_average
risk_multiplier[risk_multiplier>1.0] = 1.0   # cap at 1: the overlay never scales positions up


Notice the sharp drop at the end, when expected risk balloons in the COVID-19 crisis.


Correlation risk


The maximum expected risk measure assumes that Gaussian risk is sufficient, and that we can forecast its components (correlation, and standard deviation). Now let's relax that assumption. Correlation risk is the risk that instrument correlations will do scary, unusual things that happen to be bad for my positions. If this has already happened (i.e. we have a correlation factor problem) then it will be dealt with in the expected risk calculation, which uses recent historic returns to calculate the instrument correlation. But what if it is about to happen?

There is a very simple way of dealing with this, which is that we replace the estimated correlation matrix with the worst possible correlation matrix. Then we re-estimate our expected risk, and plug it into a risk multiplier formula. Because we're shocking the correlations to the extreme, we allow expected risk to be 4 times larger than our target.

(There is no justification for this number 4; it's calibrated to target a particular point on the realised distribution of the estimate of relative risk. I talk a bit about calibration at the end of the post.)

Specifically the risk multiplier looks like this:

risk multiplier = min(1, 4*target risk / current expected risk with worst possible correlation)

What is the worst possible correlation matrix? Simply, it's a matrix where all the correlations are 1. But that's only bad if all of our positions are long, right? If we had offsetting long/short positions, it would help us. You're right, which is why we also use the absolute weights when calculating the expected risk, not the normal signed weights. Note that a correlation of 1 if your weights are all long is equivalent to a correlation of -1 if your weights were long/short (we already saw this in the calculations above).
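In fact, with all correlations set to 1 and absolute weights, the portfolio standard deviation collapses to the sum of the absolute weighted instrument risks; a quick numpy check with made-up numbers:

import numpy as np

abs_weights = np.abs(np.array([1.5, -1.0]))   # absolute exposures as a proportion of capital
stdev = np.array([0.08, 0.16])                # annualised instrument standard deviations
worst_corr = np.ones((2, 2))                  # worst possible correlation matrix
sigma = np.diag(stdev) @ worst_corr @ np.diag(stdev)

worst_case_risk = np.sqrt(abs_weights @ sigma @ abs_weights)
print(worst_case_risk, (abs_weights * stdev).sum())   # both 0.28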

Here's a horribly hacky way (ugly! slow!) to calculate this risk multiplier (there is a better implementation in pysystemtrade, of which more later). In the gist above replace this function with this code:


def calc_risk_for_date(rolling_corr, rolling_std, index_date,
                       value_of_positions_proportion_capital, list_of_instruments):
    std_dev = rolling_std.loc[index_date].values
    std_dev[np.isnan(std_dev)] = 0.0

    ## Use absolute weights rather than signed
    weights = value_of_positions_proportion_capital.abs().loc[index_date].values
    weights[np.isnan(weights)] = 0.0

    cmatrix = get_corr_matrix_for_date(rolling_corr, index_date, list_of_instruments)

    # replace the correlation matrix with the worst case: all correlations equal to 1
    # (yeah this is ugly and slow, but it makes the point clearer)
    cmatrix[:] = 1.0

    sigma = sigma_from_corr_and_std(std_dev, cmatrix)

    portfolio_variance = weights.dot(sigma).dot(weights.transpose())
    portfolio_std = portfolio_variance**.5
    annualised_portfolio_std = portfolio_std*16.0

    return annualised_portfolio_std

Then we just recalculate everything: expected risk, and expected vs average:

risk_series_for_correlation = get_expected_risk_for_system(system)
risk_vs_average_for_correlation = 100*risk_series_for_correlation / system.config.percentage_vol_target

Let's plot it


Now for the risk multiplier:
risk_multiplier_for_correlation = 4/risk_vs_average_for_correlation
risk_multiplier_for_correlation[risk_multiplier_for_correlation>1.0]=1.0

This is a bit more active than the expected risk filter. Interestingly, it also shows a recent application in March 2020.

Incidentally, because of the way the system scaling works this is effectively the following constraint:

IDM * sqrt[ Sum_i(k_i^2) + 2 * Sum_over_pairs_ij( abs(k_i * k_j) ) ] <= 4

which simplifies to:

IDM * Sum_i( abs(k_i) ) <= 4

Where k_i = [instrument weight * forecast / average forecast] for instrument i


Proof of the above result, well for 2 assets anyway. Feel free to do this properly with matrices and stuff.

So it will only go into effect when we have a lot of large forecasts kicking off at the same time. No other inputs are relevant.
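For two assets it's easy to verify that the two forms of the constraint agree (a sketch with made-up instrument weights and forecasts):

import numpy as np

idm = 1.4
instrument_weights = np.array([0.5, 0.5])
forecasts = np.array([15.0, -20.0])
k = instrument_weights * forecasts / 10.0     # forecasts relative to the average of 10

lhs_sqrt = idm * np.sqrt((k**2).sum() + 2 * np.abs(k[0] * k[1]))
lhs_sum = idm * np.abs(k).sum()
print(lhs_sqrt, lhs_sum)    # identical: about 2.45, comfortably inside the limit of 4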


Standard deviation risk


Now let's deal with standard deviation risk. Specifically, we're concerned with a situation where we're estimating a standard deviation that is relatively low, but there's a good chance it will get a lot higher. This could be because risk is Gaussian, but varies, or because risk is non Gaussian. We don't care what the cause is (and in fact it's impossible to distinguish these two explanations). We just want to deal with it.

To do this we use our standard estimate of portfolio risk, but replace our standard deviation estimates with '99vol'. This rather catchily named value* is the 99th percentile of the standard deviation estimate distribution, measured over the last 10 years. It's the standard deviation we'll get 1% of the time.

* "I've got 99 problems, but vol ain't one of them..." Sorry couldn't resist.

(Incidentally, if current vol is above the 99% point I still use the 99% point in this calculation.  In this case expected risk is likely to be very high anyway)
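For a single instrument the 99vol calculation is just this (a sketch, assuming instrument_returns is a series of daily percentage returns):

# exponentially weighted vol estimate, as used elsewhere in the system
rolling_std = instrument_returns.ewm(span=30).std()

# 99th percentile of that estimate over roughly the last ten years (2500 business days);
# this replaces rolling_std in the portfolio risk calculation, even if the current
# estimate happens to be above the 99% point
vol_99 = rolling_std.rolling(2500, min_periods=10).quantile(0.99)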

Once the new risk estimate has been calculated, I apply a multiplier if this comes out more than 6 times the target risk (again, no deep underlying logic for this, just a calibration)

Specifically the risk multiplier looks like this:

risk multiplier = min(1, 6*target risk / current expected risk with 99vol)


Note: relationship to VaR. Yes, this is a bit like a 99% VaR. I prefer not to use VaR, since it confounds standard deviations and correlations.

Here's the hacky way of calculating it. Using the original gist (without the hacked function above) add one line to this other function:


def get_expected_risk_for_system(system):

    value_of_positions_proportion_capital = get_positions_as_proportion_of_capital(system)

    instrument_returns = get_instrument_returns(system)
    instrument_returns = instrument_returns.ffill().reindex(value_of_positions_proportion_capital.index)

    rolling_std = instrument_returns.ewm(span=30).std()
    rolling_corr = instrument_returns.ewm(span=120).corr()

    # new line: replace the vol estimate with its 99th percentile over roughly ten years
    rolling_std = rolling_std.ffill().rolling(2500, min_periods=10).quantile(.99)

    list_of_instruments = system.get_instrument_list()
    expected_risk = calc_expected_risk_over_time(rolling_corr, rolling_std, value_of_positions_proportion_capital,
                                                 list_of_instruments)

    return expected_risk

New risk series:
risk_series_stdev = get_expected_risk_for_system(system)
risk_vs_average_for_stdev = 100*risk_series_stdev / system.config.percentage_vol_target


And the risk multiplier:

risk_multiplier_for_stdev = 6/risk_vs_average_for_stdev
risk_multiplier_for_stdev[risk_multiplier_for_stdev>1.0]=1.0

Putting them together


all_mult = pd.concat([risk_multiplier_for_stdev, risk_multiplier, risk_multiplier_for_correlation], axis=1)
joint_mult = all_mult.min(axis=1)


That's the most conservative multiplier, going back to 1997. 

The results aren't too dramatic: they shouldn't be. This is a risk overlay, to deal with corner cases and potential black swans. The vast bulk of the risk management load is being carried by the core system.


Pysystemtrade implementation


Now to implement the overlay in pysystemtrade. First of all we need some configuration options, as they'd appear in your .yaml file. Here are the defaults:

risk_overlay:
  max_risk_fraction_normal_risk: 2.0
  max_risk_fraction_correlation_risk: 4.0
  max_risk_fraction_stdev_risk: 6.0


Next you need to override the portfolio stage class with an inherited class which includes risk scaling:


## run inside pysystemtrade
import matplotlib
matplotlib.use("TkAgg")
from systems.provided.futures_chapter15.basesystem import *

## use your own config here
config = Config(
            "private.legacy_system.legacy_config_all_markets.yaml")

from systems.futures.risk_overlay import portfoliosRiskOverlay

data = csvFuturesSimData()

system = System([
    Account(), portfoliosRiskOverlay(), PositionSizing(), FuturesRawData(),
    ForecastCombine(), ForecastScaleCap(), Rules()
], data, config)
system.set_logging_level("on")

There are various new methods in the portfolio stage, such as:

system.portfolio.get_risk_multiplier()

Incidentally, for efficiency the calculations work a bit differently in pysystemtrade: I use weekly returns for correlations, and I only calculate a covariance matrix on a monthly basis (though I do use today's position weights, so the risk multiplier is calculated on a daily basis).
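A rough sketch of that downsampling idea (not the actual pysystemtrade code; assumes instrument_returns is a DataFrame of daily percentage returns):

import numpy as np

# correlations estimated from weekly returns, which are less noisy for this purpose
weekly_returns = instrument_returns.resample("W").sum()
corr_matrix = weekly_returns.tail(120).corr()

# annualised standard deviations from recent daily returns
stdev = instrument_returns.ewm(span=30).std().iloc[-1] * 16

# the covariance matrix is only rebuilt periodically, but today's position weights
# (as a proportion of capital) are used, so the risk multiplier still changes daily
sigma = np.diag(stdev) @ corr_matrix.values @ np.diag(stdev)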


A quick test


I ran a backtest with, and without, the risk overlay to see what it looks like. Firstly here's the whole account curve:

The blue line is with the overlay, the orange line is without. This isn't unexpected; the overlay can only ever reduce risk, and so it will make less in returns unless it is lucky enough to do so only when the system is losing money. Broadly speaking the risk overlay knocks about 3% annually off both the returns and the risk.

The Sharpe Ratios are pretty close though: 0.940 with the overlay and 0.956 without it. More interestingly, the overlay reduces the positive skew of the system somewhat (and this holds at all frequencies; read this to see why that's important). One argument for not applying any kind of risk control to trend following is that we lose the positive skew (see here for a relevant discussion).

The kurtosis does fall, however, suggesting we are doing a good job of 'tidying up the tails'. Other measures of 'left tailedness', like the 1% quantile point, are also improved. Drawdowns are a little shallower.

If the performance penalty is too great then you can change the calibration of the risk overlay. Try not to tweak these numbers for performance though; that is implicit fitting. Instead, target something like:


  • a point on the realised distribution of estimated risk relative to the target risk (eg the 95% point),
  • the proportion of time you want the filter to be switched on, on average (eg 1% of the time),
  • the average value of the risk multiplier, including when it is switched off (eg 0.99),
  • or a minimum correlation between the returns of the system with and without the overlay (eg 0.98)



Summary


This has been an interesting journey which has hopefully given some more intuition about how the risk in CTA type strategies works. I've also introduced a simple risk overlay that can be used in a number of different strategies.

As usual questions are welcome in the comment box below.


POSTSCRIPT: We can use the maximum risk constraint above to target a fixed risk, i.e. aiming for the same ex-ante risk every day. I explore that idea in this post.

16 comments:

  1. Rob, what are your thoughts on letting your risk system gear up the portfolio (i.e., allowing the risk multiplier to be greater than 1) if the expected risk is coming in "too low"?

    Does your answer change with varying the number of market or signals traded?
    TD




    1. Hi 'T'. If you have a lower and an upper bound for risk, as you tighten those bounds it will of course approach the constant risk target case. I think we've had that discussion before :-)

      But perhaps a follow-up post is in order, where I look at constant risk targeting, as I've never actually done that test properly before.



  2. Thanks for sharing! I learn so much from the blog posts they are super exciting and find the underling implementation of it very helpful not just theory.

    The risk reduction framework can be easily extended with additional risk reducing formulas that make me want to test ideas right away :)

    I just gain so much pure experience from the posts and very appreciative by the time and dedication you put into this Robert, thank you.

  3. Great post as always!!! A more recent example of when it goes wrong https://www.zerohedge.com/energy/one-trader-started-day-77000-his-account-end-he-owed-9-million

  4. Hi Rob, I know this post was mostly about CTA strategies but in Smart Portfolios you suggest bond portfolios should consist of 10% in index-linked bonds. A 'risk-overlay' for an investment portfolio would surely include IL bonds to protect against inflation even if not hyperinflation. There seem to be divergent views about inflation versus deflation at the moment: what is the role or % of IL bonds in a portfolio now? Steve

    1. Hi Steve
      Obviously valuation is the issue here. UK inflation spreads at 10 years are just under 3% right now. So if you think future inflation will be greater than that you should overweight, otherwise underweight. I don't personally have a model for expected inflation so I don't know whether that is good value or not. That number isn't a million miles away from previous inflation spreads, and of course it's also higher than realised inflation has been in recent history (essentially you pay a premium to protect against inflation, i.e. this is a negative carry trade).

  5. I've read two of your books, and I still can't get my head around the 25% vol target. You also say that professional managers target between 10% and 30%.
    It seems extremely high. I've always known that professional managers had risk targets between 6% and 12%. Am I thinking about a different kind of vol (or different kind of manager)?

    1. Different kind of manager, I suspect. The most popular hedge fund style is 'equity neutral'. This has low natural vol, unless you use quite a lot of leverage. Leverage fell out of fashion in 2008, and since then high single figure risk targets are pretty common for this style.

      In the CTA world that I used to be in, most managers run between 10% and 30%.

      25% may sound high, but bear in mind that I only have a fraction of my portfolio in this trading account. Personally I couldn't stomach 25% if it was my entire net worth at stake.

  6. Hi Rob, another great post! My main risk overlay attempts to handle worst case correlations. To calculate this, I simply sum the IVVs of the live instruments and compare this sum to the expected daily cash risk and then cut back if the ratio is too high. Am I being too simplistic?

    1. No that's exactly the same way that I do it, just expressed a different way.

    2. I'm sorry for the stupid question, but what does "IVV" mean?

    3. It's an abbreviation from 'Systematic Trading'. It means 'instrument value volatility': the daily standard deviation risk of holding one futures contract, expressed in the currency of your account.

    4. Ah, ok. Thank you. I hadn’t associated the acronym with the book’s nomenclature.

  7. Hi Rob,

    Thank you very much for this blog, and the book; have read Leveraged Trading, and moving onto Systematic Trading. This question is slightly off topic, but related to forecasting. Should a retail trader focus on econometrics/time series analysis, or move into ML/AI, or focus on traditional quantitative TA? I am just starting in trading after a career in poker, and would rather focus on improving my skills in a particular area first. Thank you

    1. I would get a good handle on risk and position management before even considering either topic. Then I would implement some simple well known trading rules. After that depends a lot on your preference and experience as to which route you go down.

