Thursday 4 March 2021

Does it make sense to change your trading behaviour in different periods of volatility?

A few days ago I was browsing the forum site when someone posted this:

I am interested to know if anyone change their SMA/EMA/WMA/KAMA/LRMA/etc. when volatility changes? Let say ATR is rising, would you increase/decrease the MA period to make it more/less sensitive? And the bigger question would be, is there a relationship between volatility and moving average?

Interesting, I thought, and I added it to my very long list of things to think about. (In fact I've researched something vaguely like this before, but I couldn't remember what the results were, and the research was done whilst at my former employer, which means it's currently behind a firewall and a 150 page non-disclosure agreement.)

Then a couple of days ago I ran a poll off the back of this post as to what my blogpost this month should be about (though mainly the post was an excuse to reminisce about the Fighting Fantasy series of books).

And lo and behold, this subject is what people wanted to know about. But even if you don't want to know about it, and were one of the 57% that voted for the other two options, this is still probably a good post to read. I'm going to be discussing principles and techniques that apply to any evaluation of this kind of system modification.

However: spoiler alert - this little piece of research took an unexpected turn. Read on to find out what happened...

Why this is topical

This is particularly topical because during the market crisis that consumed much of 2020 it was faster moving averages that outperformed slower. Consider these plots which show the average Sharpe Ratio for different kinds of trading rule averaged across instruments. The first plot is for all the history I have (back to the 1970's), then the second is for the first half of 2020, and finally for March 2020 alone:

The pattern is striking: going faster works much better than it did in the overall sample. What's more, it seems to be confined to the financial asset classes (FX, Rates and especially equities) where vol exploded the most:

Furthermore, we can see a similar effect in another notoriously turbulent year:

If we were sell side analysts that would be our nice little research paper finished, but of course we aren't... a few anecdotes do not make up a serious piece of analysis.

Formally specifying the problem

Rewriting the above in fancy sounding language, and bearing in mind the context of my trading system, I can write the above as:

Are the optimal forecast weights across trading rules of different speeds different when conditioned on the current level of volatility?

As I pointed out in my last post this leaves a lot of questions unanswered. How should we define the current level of volatility? How do we define 'optimality'? How do we evaluate the performance of this change to our simple unconditional trading rules?

Defining the current level of volatility

For this to be a useful thing to do, 'current' is going to have to be based on backward looking data only. It would have been very helpful to have known in early February last year (2020) that vol was about to rise sharply, and thus perhaps different forecast weights were required, but we didn't actually own the keys to a time machine so we couldn't have known with certainty what was about to happen (and if we had, then changing our forecast weights would not have been high up our to-do list!).

So we're going to be using some measure of historic volatility. The standard measure of vol I use in my trading system (exponentially weighted, equivalent to a lookback of around a month) is a good starting point which we know does a good job of predicting vol over the next 30 days or so (although it does suffer from biases, as I discuss here). Arguably a shorter measure of vol would be more responsive, whilst a longer measure of vol would mean that our forecast weights aren't changing as much thus reducing the costs.
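For concreteness, here's a rough standalone sketch of that kind of vol estimate (illustrative only - the exact span, minimum periods and any vol flooring in the production system differ, and the function name is made up):

import pandas as pd

def ewma_percentage_vol(price: pd.Series, span: int = 35) -> pd.Series:
    # daily percentage returns, then an exponentially weighted standard deviation
    # with a span equivalent to roughly a month of trading
    perc_returns = price.pct_change()
    return perc_returns.ewm(span=span, min_periods=10).std()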

Now how do we define the level of volatility? In that previous post I used the current vol estimate divided by a 10 year rolling average of the vol for the relevant instrument. That seems pretty reasonable.

Here for example is the rolling % vol for SP500:

import pandas as pd
import numpy as np
import matplotlib.pyplot

from systems.provided.futures_chapter15.basesystem import *

system = futures_system()

instrument_list = system.get_instrument_list()

all_perc_vols = [system.rawdata.get_daily_percentage_volatility(code) for code in instrument_list]

And here's the same, after dividing by the 10 year average vol:

ten_year_averages = [vol.rolling(2500, min_periods=10).mean() for vol in all_perc_vols]
normalised_vol_level = [vol / ten_year_vol for vol, ten_year_vol in zip(all_perc_vols, ten_year_averages)]

The picture is very similar, but importantly we can now compare and pool results across instruments.

def stack_list_of_pd_series(x):
    stacked_list = []
    for element in x:
        stacked_list = stacked_list + list(element.values)

    return stacked_list

stacked_vol_levels = stack_list_of_pd_series(normalised_vol_level)

stacked_vol_levels = [x for x in stacked_vol_levels if not np.isnan(x)]
matplotlib.pyplot.hist(stacked_vol_levels, bins=1000)

Stacking up all the normalised vols across markets and plotting the distribution, it's immediately obvious that this is a very skewed distribution:

Update: There was a small bug in my code that didn't affect the conclusions, but had a significant effect on the scale of the normalised vol. Now fixed. Thanks to Rafael L. for pointing this out.

The mean is 0.98 - as you'd expect - but look at that right tail! About 1% of the observations are over 2.5, and the maximum value is nearly 6.7. You might think this is due to some particularly horrible markets (VIX?), but nearly all the instruments have normalised vol that is distributed like this.
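For reference, those summary figures can be read straight off the stacked list of normalised vol levels built above; a minimal sketch:

import numpy as np

stacked = np.array(stacked_vol_levels)
print("Mean: %.2f" % np.mean(stacked))
print("Proportion above 2.5: %.2f%%" % (100 * np.mean(stacked > 2.5)))
print("Max: %.2f" % np.max(stacked))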

At this point we need to think about how many vol regimes we're going to have, and how they should be selected. More regimes will mean we can more closely fit our speed to what is going on, but we'd end up with fewer data points (I'm reminded of this post where someone had inferred behaviour from just 18 days when the VIX was especially low). Fewer data points will mean our forecast weights will either revert to an average, or worse take extreme values if we're not fitting robustly.

I decided to use three regimes:
  • Low: Normalised vol in the bottom 25% quantile [using the entire historical period so far to determine the quantile] (over the whole period, normalised vol between 0.16 and 0.7 times the ten year average)
  • Medium: Between the 25% and 75% quantiles (over the whole period, normalised vol 0.7 to 1.14 times the ten year average)
  • High: Between the 75% and 100% quantiles (over the whole period, normalised vol 1.14 to 6.6 times the ten year average)
There could be a case for making these regimes equal size, but I think there is something about relatively high vol that is unique so I made that regime smaller (with low vol the same size for symmetry). Equally, there is a case for making them more extreme. There certainly isn't a case for jumping ahead and seeing which range of regimes performs the best - that would be implicit fitting!

def historic_quantile_groups(system, instrument_code, quantiles = [.25,.5,.75]):
    daily_vol = system.rawdata.get_daily_percentage_volatility(instrument_code)
    # We shift by one day to avoid forward looking information
    ten_year_vol = daily_vol.rolling(2500, min_periods=10).mean().shift(1)
    normalised_vol = daily_vol / ten_year_vol

    quantile_points = [get_historic_quantile_for_norm_vol(normalised_vol, quantile) for quantile in quantiles]
    stacked_quantiles_and_vol = pd.concat(quantile_points+[normalised_vol], axis=1)
    quantile_groups = stacked_quantiles_and_vol.apply(calculate_group_for_row, axis=1)

    return quantile_groups

def get_historic_quantile_for_norm_vol(normalised_vol, quantile_point):
    return normalised_vol.rolling(99999, min_periods=4).quantile(quantile_point)

def calculate_group_for_row(row_data: pd.Series) -> int:
    values = list(row_data.values)
    if any(np.isnan(values)):
        return np.nan
    vol_point = values.pop(-1)
    group = 0 # lowest group
    for comparision in values[1:]:
        if vol_point <= comparision:
            return group
        group = group + 1

    # highest group will be len(quantiles)-1
    return group

Over all instruments pooled together...
quantile_groups = [historic_quantile_groups(system, code) for code in instrument_list]
stacked_quantiles = stack_list_of_pd_series(quantile_groups)
.... the size of each group comes out at:
  • Low vol: 53% of observations
  • Medium vol: 22% 
  • High vol: 25%
That's different from the 25, 50, 25 split you'd expect. That's because vol isn't stable over this period and we're using backward looking quantiles, rather than a forward looking cheat where we use the entire period to determine our quantiles (which would give us exactly 25, 50, 25).
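(For reference, those group shares can be computed directly from the stacked quantile groups; a minimal sketch, reusing stacked_quantiles from above:)

group_shares = pd.Series(stacked_quantiles).value_counts(normalize=True).sort_index()
print(group_shares)   # share of observations in group 0 (low), 1 (medium) and 2 (high)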

Still, we've got a quarter in our high vol group, which is what we were aiming for. And I feel it would be some kind of cheating to go back and change the quantile cutoffs having seen these numbers.

Unconditional performance of momentum speeds

Let's get the unconditional returns for the rules in our trading system: momentum using exponentially weighted moving average crossovers from 2_8 (2 day lookback - 8 days) up to 64_256, plus the carry rule (not strictly speaking part of the problem we're looking at today, but what the hell: we can use this as a proxy for determining whether 'divergent' / momentum or 'convergent' systems do worse or better when vol is high or low). These are average returns across instruments; which won't be as good as the portfolio level returns for each rule (we'll look at those later).

rule_list = list(system.rules.trading_rules().keys())
perf_for_rule = {}
for rule in rule_list:
    perf_by_instrument = {}
    for code in instrument_list:
        perf_for_instrument_and_rule = system.accounts.pandl_for_instrument_forecast(code, rule)
        perf_by_instrument[code] = perf_for_instrument_and_rule

    perf_for_rule[rule] = perf_by_instrument

# stack
stacked_perf_by_rule = {}
for rule in rule_list:
    acc_curves_this_rule = perf_for_rule[rule].values()
    stacked_perf_this_rule = stack_list_of_pd_series(acc_curves_this_rule)
    stacked_perf_by_rule[rule] = stacked_perf_this_rule

def sharpe(x):
    # assumes daily data
    return 16*np.nanmean(x) / np.nanstd(x)

for rule in rule_list:
    print("%s:%.3f" % (rule, sharpe(stacked_perf_by_rule[rule])))


Similar to the plot we saw earlier; unconditionally, medium and slow momentum (and carry) tend to outperform fast momentum.

Now what if we condition on the current state of vol?
historic_quantiles = {}
for code in instrument_list:
    historic_quantiles[code] = historic_quantile_groups(system, code)

conditioned_perf_for_rule_by_state = []

for condition_state in [0,1,2]:
    print("State:%d \n\n\n" % condition_state)

    conditioned_perf_for_rule = {}
    for rule in rule_list:
        conditioned_perf_by_instrument = {}
        for code in instrument_list:
            perf_for_instrument_and_rule = perf_for_rule[rule][code]
            condition_vector = historic_quantiles[code]==condition_state
            condition_vector = condition_vector.reindex(perf_for_instrument_and_rule.index).ffill()
            conditioned_perf = perf_for_instrument_and_rule[condition_vector]

            conditioned_perf_by_instrument[code] = conditioned_perf

        conditioned_perf_for_rule[rule] = conditioned_perf_by_instrument

    # keep the results for each state so we can test significance later
    conditioned_perf_for_rule_by_state.append(conditioned_perf_for_rule)

    stacked_conditioned_perf_by_rule = {}
    for rule in rule_list:
        acc_curves_this_rule = conditioned_perf_for_rule[rule].values()
        stacked_perf_this_rule = stack_list_of_pd_series(acc_curves_this_rule)
        stacked_conditioned_perf_by_rule[rule] = stacked_perf_this_rule

    print("State:%d \n\n\n" % condition_state)
    for rule in rule_list:
        print("%s:%.3f" % (rule, sharpe(stacked_conditioned_perf_by_rule[rule])))

State:0  (Low vol)

Interesting! These numbers are better than the unconditional figures we saw above, but fast momentum still looks poor relatively speaking (these numbers, like all those in this post, are after costs). But overall the pattern isn't that different from the unconditional performance; nowhere near enough to justify changing forecast weights very much.

State:1 (Medium vol)

The 'medium' level of vol is more similar to the unconditional figures. Again this is nothing to write home about in terms of differences in relative performance, although relatively speaking fast is looking a little worse.

State:2 (High vol)

Now you've probably noticed a pattern here, and I know everyone is completely distracted by it, but just for a moment let's focus on relative performance, which is what this post is supposed to be about. Relatively speaking fast is still worse than slow, and it's now much worse.

Carry has markedly improved, but.... oh what the hell, I can't contain myself any more. There is nothing that interesting or useful in the relative performance, but what is clear is that the absolute performance of everything falls as we move to a higher volatility environment.

A regular reader (Mike N) asked me how much the above figures were affected by costs, so I re-ran the above but excluding costs.
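(One way to reproduce the pre-cost figures is to take the gross version of the same account curves; a minimal sketch, assuming the account curve objects expose a .gross attribute as they do in pysystemtrade:)

gross_perf_for_rule = {}
for rule in rule_list:
    gross_by_instrument = {}
    for code in instrument_list:
        # .gross gives the pre-cost version of the account curve
        gross_by_instrument[code] = system.accounts.pandl_for_instrument_forecast(code, rule).gross
    gross_perf_for_rule[rule] = gross_by_instrument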

Unconditional figures:

Low vol:

Medium vol:

High vol:

So not just a cost story.

Testing the significance of overall performance in different vol environments

I really ought to end this post here, as the answer to the original question is a firm no: you shouldn't change your speed as vol increases. 

However we've now been presented with a new hypothesis: "Momentum and carry will do badly when vol is relatively high"

Let's switch gears and test this hypothesis.

First of all let's consider the statistical significance of the differences in return we saw above:

from scipy import stats

for rule in rule_list:
    perf_group_0 = stack_list_of_pd_series(conditioned_perf_for_rule_by_state[0][rule].values())
    perf_group_1 = stack_list_of_pd_series(conditioned_perf_for_rule_by_state[1][rule].values())
    perf_group_2 = stack_list_of_pd_series(conditioned_perf_for_rule_by_state[2][rule].values())

    t_stat_0_1 = stats.ttest_ind(perf_group_0, perf_group_1)
    t_stat_1_2 = stats.ttest_ind(perf_group_1, perf_group_2)
    t_stat_0_2 = stats.ttest_ind(perf_group_0, perf_group_2)

    print("Rule: %s , low vs medium %.2f medium vs high %.2f low vs high %.2f" % (rule,
        t_stat_0_1.pvalue, t_stat_1_2.pvalue, t_stat_0_2.pvalue))

Rule: ewmac2_8 , low vs medium 0.37 medium vs high 0.00 low vs high 0.00
Rule: ewmac4_16 , low vs medium 0.25 medium vs high 0.00 low vs high 0.00
Rule: ewmac8_32 , low vs medium 0.12 medium vs high 0.00 low vs high 0.00
Rule: ewmac16_64 , low vs medium 0.08 medium vs high 0.00 low vs high 0.00
Rule: ewmac32_128 , low vs medium 0.07 medium vs high 0.00 low vs high 0.00
Rule: ewmac64_256 , low vs medium 0.03 medium vs high 0.00 low vs high 0.00
Rule: carry , low vs medium 0.00 medium vs high 0.32 low vs high 0.00

These are p-values, so a low number means statistical significance. Generally speaking, with the exception of carry, the biggest effect is when we jump from medium to high vol; the jump from low to medium doesn't usually result in a significantly worse performance.

So it's something special about the high-vol environment where returns get badly degraded.

Is this an effect we can actually capture?

One concern I have is how quickly we move in and out of the different vol regimes; here for example is Eurodollar:

To exploit this effect we're going to have to do something like radically reduce our leverage whenever an instrument enters 'zone 2: high vol'. That clearly would have worked in early 2020 when there was a persistent high vol environment for some reason that escapes me now. But would we really get the chance to do very much for those brief few days in late 2019 when Eurodollar enters the highest vol zone?

Above you may have noticed I put in a one day lag on the vol estimate - this is to ensure we aren't conditioning today's return on a vol estimate that uses today's return - clearly we couldn't change our leverage or otherwise react until we actually got the close of business price.

[In my backtest I automatically lag trades by a day, so when I finally come to test anything this shift can be removed]

In fact I have a confession to make... when first running this code I omitted the shift(1) lag, and the results were even stronger, with heavily negative returns for all trading rules in the highest vol region (except carry, which was barely positive). So this makes me suspicious that we wouldn't have the chance to react in time to make much of this.

Still, repeating the analysis with a 2 and even 3 day lag I still get some pretty low p-values, so there is probably something in it. Also, interestingly, with these greater lags there is more differentiation between the low and medium regimes. Here for example are the p-values for a 3 day lag:

Rule: ewmac2_8, low vs medium 0.06 medium vs high 0.01 low vs high 0.00
Rule: ewmac4_16, low vs medium 0.16 medium vs high 0.04 low vs high 0.00
Rule: ewmac8_32, low vs medium 0.13 medium vs high 0.08 low vs high 0.00
Rule: ewmac16_64, low vs medium 0.03 medium vs high 0.06 low vs high 0.00
Rule: ewmac32_128, low vs medium 0.01 medium vs high 0.06 low vs high 0.00
Rule: ewmac64_256, low vs medium 0.02 medium vs high 0.14 low vs high 0.00
Rule: carry, low vs medium 0.08 medium vs high 0.46 low vs high 0.01

A more graduated system

Rather than using regimes, I think it would make more sense to do something involving a more continuous variable: the quantile percentile itself, rather than the regime bucket that it falls into. Then we won't drastically shift gears between regimes.

Recall our three regimes:
  • Low: Normalised vol in the bottom 25% quantile
  • Medium: Between 25% and 75%
  • High: Between 75% and 100% 
One temptation is to introduce something just for the high regime, where we start degearing when our quantile percentile is above 75%; but that makes me feel queasy (it's clearly implicit fitting), plus the results with higher lags indicate that it might not be a 'high vol is especially bad' effect, but rather a general 'as vol gets higher we make less money' effect.

After some thought (well 10 seconds) I came up with the following:

Multiply raw forecasts by L where (if Q is the percentile expressed as a decimal, eg 1 = 100%):

L = 2 - 1.5Q

That will vary L between 2 (if vol is really low) and 0.5 (if vol is really high). The reason we're not turning the system off completely for high vol is for all the usual reasons; although this is a strong effect it's still not a certainty.
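A quick sanity check of what that mapping gives at a few quantile points:

# L = 2 - 1.5 * Q at a few illustrative quantile points
for Q in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print("Q = %.2f -> L = %.3f" % (Q, 2 - 1.5 * Q))
# prints L = 2.000, 1.625, 1.250, 0.875, 0.500 respectively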

I apply this to the raw forecast, because there is no guarantee that the above will leave the forecast with the correct scaling. If I then estimate forecast scalars using these transformed forecasts, I will end up with something that has the right scaling.

These forecasts will then be capped at -20, +20, which may undo some of the increase in leverage applied when vol is particularly low.
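As a standalone illustration of the order of operations - attenuate the raw forecast, rescale, then cap - and not the library's internal code:

import pandas as pd

def scale_and_cap(attenuated_forecast: pd.Series, forecast_scalar: float, cap: float = 20.0) -> pd.Series:
    # illustrative only: apply a forecast scalar (estimated on the attenuated forecast), then cap
    scaled_forecast = attenuated_forecast * forecast_scalar
    return scaled_forecast.clip(lower=-cap, upper=cap)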


Smoothing vol forecast attenuation

The first thing I did was to see what the L factor actually looks like in practice. Here it is for Eurodollar [I will give you the code in a few moments]:

It sort of seems to make sense; here for example you can see the attenuation backing right off in early 2020 when we had the COVID inspired high vol. However it worries me that this thing is pretty noisy. Laid on top of a relatively smooth slow moving average, this thing is going to boost trading costs quite a lot. I think the appropriate thing to do here is to smooth it before applying it to the raw forecast. Of course if we smooth it too much then we'll be lagging changes in vol.

Once again, the wrong thing to do here would be some kind of optimisation of post cost returns to find the best smoothing lookback, or something keyed to the speed of the relevant trading rule; instead I'm just going to plump for an EWMA with a 10 day span.

Testing the attenuation, rule by rule

Here then is the code that implements the attenuation:

from systems.forecast_scale_cap import *

class volAttenForecastScaleCap(ForecastScaleCap):

    def get_vol_quantile_points(self, instrument_code):
        ## More properly this would go in raw data perhaps
        self.log.msg("Calculating vol quantile for %s" % instrument_code)
        daily_vol = self.parent.rawdata.get_daily_percentage_volatility(instrument_code)
        ten_year_vol = daily_vol.rolling(2500, min_periods=10).mean()
        normalised_vol = daily_vol / ten_year_vol

        normalised_vol_q = quantile_of_points_in_data_series(normalised_vol)

        return normalised_vol_q

    def get_vol_attenuation(self, instrument_code):
        normalised_vol_q = self.get_vol_quantile_points(instrument_code)
        vol_attenuation = normalised_vol_q.apply(multiplier_function)

        smoothed_vol_attenuation = vol_attenuation.ewm(span=10).mean()

        return smoothed_vol_attenuation

    def get_raw_forecast_before_attenuation(self, instrument_code, rule_variation_name):
        ## original code for get_raw_forecast
        raw_forecast = self.parent.rules.get_raw_forecast(
            instrument_code, rule_variation_name
        )

        return raw_forecast

    def get_raw_forecast(self, instrument_code, rule_variation_name):
        ## overridden method; this will be called downstream so don't change the name
        raw_forecast_before_atten = self.get_raw_forecast_before_attenuation(instrument_code, rule_variation_name)

        vol_attenuation = self.get_vol_attenuation(instrument_code)

        attenuated_forecast = raw_forecast_before_atten * vol_attenuation

        return attenuated_forecast


def quantile_of_points_in_data_series(data_series):
    results = [quantile_of_points_in_data_series_row(data_series, irow) for irow in range(len(data_series))]
    results_series = pd.Series(results, index = data_series.index)

    return results_series

from statsmodels.distributions.empirical_distribution import ECDF

# this is a little slow so suggestions for speeding up are welcome
def quantile_of_points_in_data_series_row(data_series, irow):
    if irow < 2:
        return np.nan
    historical_data = list(data_series[:irow].values)
    current_value = data_series[irow]
    ecdf_s = ECDF(historical_data)

    return ecdf_s(current_value)

def multiplier_function(vol_quantile):
    if np.isnan(vol_quantile):
        return 1.0

    return 2 - 1.5*vol_quantile
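Since the ECDF loop above is slow, one possible speed-up (not what was used for the results in this post) is an expanding rank, which gives essentially the same backward looking percentile; a sketch, assuming a pandas version that supports expanding().rank():

def quantile_of_points_in_data_series_fast(data_series: pd.Series) -> pd.Series:
    # percentile rank of each point against all data up to and including that point;
    # very close to the ECDF version, except the current point is included in the distribution
    ranks = data_series.expanding(min_periods=3).rank()
    counts = data_series.expanding(min_periods=3).count()
    return ranks / counts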

And here's how to implement it in a new futures system (we just copy and paste the futures_system code and change the object passed for the forecast scaling/capping stage):
from systems.provided.futures_chapter15.basesystem import *

def futures_system_with_vol_attenuation(data=None, config=None, trading_rules=None, log_level="on"):
    # mirrors the standard futures_system() from pysystemtrade, with the
    # forecast scaling/capping stage swapped for volAttenForecastScaleCap

    if data is None:
        data = csvFuturesSimData()

    if config is None:
        config = Config(
            "systems.provided.futures_chapter15.futuresconfig.yaml")

    rules = Rules(trading_rules)

    system = System(
        [
            Account(),
            Portfolios(),
            PositionSizing(),
            FuturesRawData(),
            ForecastCombine(),
            # the only change from the standard system:
            volAttenForecastScaleCap(),
            rules,
        ],
        data,
        config,
    )

    system.set_logging_level(log_level)

    return system

And now I can set up two systems, one without attenuation and one with:
system = futures_system()
# will equally weight instruments

# need to do this to deal fairly with attenuation
# do it here for consistency
system.config.use_forecast_scale_estimates = True

# will equally weight forecasts

# standard stuff to account for instruments coming into the sample
system.config.use_instrument_div_mult_estimates = True

system_vol_atten = futures_system_with_vol_attenuation()
system_vol_atten.config.use_forecast_scale_estimates = True
system_vol_atten.config.use_instrument_div_mult_estimates = True

rule_list = list(system.rules.trading_rules().keys())

for rule in rule_list:
    sr1 = system.accounts.pandl_for_trading_rule(rule).sharpe()
    sr2 = system_vol_atten.accounts.pandl_for_trading_rule(rule).sharpe()

    print("%s before %.2f and after %.2f" % (rule, sr1, sr2))

Let's check out the results:
ewmac2_8 before 0.43 and after 0.52
ewmac4_16 before 0.78 and after 0.83
ewmac8_32 before 0.96 and after 1.00
ewmac16_64 before 1.01 and after 1.07
ewmac32_128 before 1.02 and after 1.07
ewmac64_256 before 0.96 and after 1.00
carry before 1.07 and after 1.11

Now these aren't huge improvements, but they are very consistent across every single trading rule. But are they statistically significant?
from syscore.accounting import account_test

for rule in rule_list:
    acc1 = system.accounts.pandl_for_trading_rule(rule)
    acc2 = system_vol_atten.accounts.pandl_for_trading_rule(rule)
    print("%s T-test %s" % (rule, str(account_test(acc2, acc1))))

ewmac2_8 T-test (0.005754898313025798, Ttest_relResult(statistic=4.23535684665446, pvalue=2.2974165336647636e-05))
ewmac4_16 T-test (0.0034239182014355815, Ttest_relResult(statistic=2.46790714210943, pvalue=0.013603190422737766))
ewmac8_32 T-test (0.0026717541872894254, Ttest_relResult(statistic=1.8887927423648214, pvalue=0.058941593401076096))
ewmac16_64 T-test (0.0034357601899108192, Ttest_relResult(statistic=2.3628815728522112, pvalue=0.018147935814311716))
ewmac32_128 T-test (0.003079560056791747, Ttest_relResult(statistic=2.0584403445859034, pvalue=0.03956754085349411))
ewmac64_256 T-test (0.002499427499123595, Ttest_relResult(statistic=1.7160401190191614, pvalue=0.08617825487582882))
carry T-test (0.0022278238232666947, Ttest_relResult(statistic=1.3534155676590192, pvalue=0.17594617201514515))

A mixed bag there, but with the exception of carry there does seem to be a reasonable amount of improvement; most markedly with the very fastest rules.
Again, I could do some implicit fitting here to only use the attenuation on momentum, or use less of it on slower momentum. But I'm not going to do that.


To return to the original question: yes, we should change our trading behaviour as vol changes. But not in the way you might think, especially if you had extrapolated the performance from March 2020.

As vol gets higher, faster trading rules do relatively badly, but actually the bigger story is that all momentum rules suffer (as does carry, a bit). Not what I had expected to find, but very interesting. So a big thanks to the internet's hive mind for voting for this option.

Monday 1 March 2021

Does X work? Some brief thoughts, and choose your adventure

When I was a spotty teenager I was a walking nerd cliche. I liked computers, both for programming and games. I was terrified of girls. I was rubbish at nearly all sports*. And I played D&D (and Tunnels and Trolls, and Runequest).

* Nearly all: No, I'm not talking about the 'sport' of Chess: I was also rubbish at Chess and still am. But due to some weird anomaly I was a dinghy sailing champion at school, and later world champion.

I also remember reading the Fighting Fantasy books written by Games Workshop founders Steve Jackson and Ian Livingstone. 

Copyright image used without permission, but if you click here you can buy this book so that seems fair

These books were 'choose your adventure' style. So you'd read page 1 and it would say something like 'You are outside a mysterious castle... long dull description of castle follows.... Do you (a) climb the castle wall (p.34), (b) dress up as a washerwoman and try and enter the castle through the front gate (p.172), or (c) attack the sentries head on (p.91)?'. And after making your choice you'd turn to page 34, or 172, or 91; and you'd find out what the consequences of your actions were, and there would be a new set of options, unless you'd already died (hint: option (c) is a poor choice).

As well as playing the book properly you could do 'fun' stuff like reverse engineering the network graph for the pages and working out the optimal shortest route through the book.

Now anyone under the age of 35 will be open mouthed at how archaic this sounds, but yes, this is what we had to do for entertainment in the 1980s. And that was partly because even then parents were worried about screen time, and if you were curled up in a corner with an actual book they would be happier than if you were sat in front of your ZX Spectrum playing Jet Set Willy (kids: google it). Of course now I spend entire days in front of a screen, and my idea of relaxation is to read books, so go figure.

You may be wondering what this has to do with anything, but bear with me for a bit as I'm now going to radically change the subject. 

Quite a few of the queries I get asked run along the lines of 'Does X make sense?'. Here are a few recent examples:

  • Does changing your moving average speed make sense in different periods of volatility?
  • Does it make sense to use a different system on the long or the short side?
  • Does it make sense to use a different system in bull or bear markets?
  • Should I use different parameters for different instruments?
These 'does it make sense' questions, are interesting. For starters, they all seem to make intuitive sense. It seems crazy that the market would behave in exactly the same way in periods of high and low vol. It's obvious the market doesn't go up and down in the same way. And naturally, trading Eurodollar is completely different from trading VIX. 

These 'does it make sense' questions are also dangerous. Every single one of them is an invitation to make your system more complex and less intuitive. Every single one is replete with the opportunity to overfit. They introduce a new set of parameters, to define market state; and then multiply the existing set of parameters by allowing different values for different states.

These questions are also complicated, and involve answering several different questions. They involve modifying or changing your system according to some kind of exogenous input, but exactly what that input should be isn't always obvious. It isn't always clear how we should make the change. And they also bring up some interesting questions about how we should evaluate the relevant changes.

Take the first question as an example. First of all we need to define what a different level of volatility looks like, how it should be measured, how many states there should be, and so on. There is room for a lot of implicit fitting there, so we should probably keep things simple and try and stick to some predefined measure; maybe 2 to 4 states of volatility using quartile measures based on the full history of the relevant instrument.

We then need to work out how to change our moving average speed (which for once is a very tightly defined 'how'). For me that at least is relatively straightforward: I'd change the forecast weights that determine how my forecasts are linearly combined. That at least can be done with explicit fitting to avoid exploding the number of parameters we have to consider. 

As for evaluation, well that isn't so bad: we can just compare a system before and after. Of course we need to decide if we're just going to look at Sharpe Ratio, or whether we also care about skew. Also, are we happy with any improvement no matter how small? Or should we seek a higher threshold given we're making our system more complicated? Or is this a change that makes sense in its own right, and we should be happy to do it even if it performs slightly worse (although not worse in a statistically significant sense)?

What about the long and short system? Defining long and short is easy enough, but what parameters should we change? Something like the response function to the forecast (this kind of plot) perhaps. That might mean that we take on smaller positions when short in certain markets, which I suppose is what people would like and expect to see. 

However, should we evaluate such a change based on pure Sharpe Ratio? If we did this, then we'd end up never going short markets which have barely gone down in the past, and basically increase our long bias to eg bonds (we can make things worse, incidentally, by not forcing the response function to go through zero). That would only make sense for someone who is only investing in this trading strategy, and doesn't also have something like a 60:40 portfolio already. 

So a better evaluation would be something like 'alpha', where we are looking for improved returns versus the market rather than outright improvements (of course how we define 'the market' is another moot point).

Bull and bear markets is a related question. The first question, once again, is how to define bull and bear in a non forward looking way. Something like the risk adjusted 200 day EWMA seems reasonable, without being tempted down the path of overfitting. I used changes in US interest rates in this post as a proxy (something that's relevant after the wild changes in rates products over the last few days).

How should we implement the change? I'd be most tempted to allow my forecast weights to change; rather than modifying the behaviour of any individual rule. And again, evaluation should rightly consider 'alpha' not just outright Sharpe Ratio. And consistency of performance should probably come into it; higher SR isn't as good as seeing better performance in bear markets as that 'insurance' payoff is part of what people like to buy CTA type strategies for.

Different parameters for different markets is another massive can of worms. The same issues of how to change things and how to evaluate them is relevant. Again, I'd be most tempted to fit different forecast weights, which I do a bit anyway because different instruments have different cost levels and I take those into account. 

But there is surely some value in pooling data from other markets as well? Shouldn't we fit based on some blend of an instrument's own data, plus data from other markets (where the blend would be different depending on how much data the relevant instrument has)? Should we also give higher weight to markets that are more similar (same asset class, same country, in some kind of correlation cluster....)?

Also, what if we do this and one or more of the other changes? Should the way we trade long versus short be different for US 10 year bonds than it is for VIX?


As you can see, these 'does X work' questions can quickly become very complicated. So you can see why I'm wary of doing them.

But I've decided to do one of these investigations this month, and you get to choose which one!

Yes, just for this month, I'm going to make this a 'choose your adventure' blog where you get to decide the ending. These are the options:
  • Does changing your moving average speed make sense in different periods of volatility?
  • Does it make sense to use a different system on the long or the short side?
  • Does it make sense to use a different system in bull or bear markets?
I haven't included 'fitting for instruments' since that is a significant project rather than something I can do in a few days, although I will probably look at it in the future.

Let me know which of the ideas above you'd like me to investigate, and I'll write a longer blog post about the one that is the most popular. The poll is here on twitter and on quiz-maker....


The votes are in! Here are the results (in traditional reverse order):

  • Does it make sense to use a different system in bull or bear markets? 23.8%
  • .... to use a different system on the long or the short side? 32.9% 
  • ... to change your moving average speed in different periods of volatility? 43.3% - the winner
 The vote breakdown is here, including Twitter (TWTR), quiz-maker (QM) and below the line (BTL) comments:

        TWTR BTL QM Total
Vol & speed 44 18 62
Long & Short 33 3 11 47
Bull & Bear 28 6 34

So at some point in the very near future I'll be posting on "Does changing your moving average speed make sense in different periods of volatility?"

UPDATE 4th March