Thursday, 4 March 2021

Does it make sense to change your trading behaviour in different periods of volatility?

A few days ago I was browsing the elitetrader.com forum when someone posted this:

I am interested to know if anyone change their SMA/EMA/WMA/KAMA/LRMA/etc. when volatility changes? Let say ATR is rising, would you increase/decrease the MA period to make it more/less sensitive? And the bigger question would be, is there a relationship between volatility and moving average?

Interesting, I thought, and I added it to my very long list of things to think about. (In fact I've researched something vaguely like this before, but I couldn't remember what the results were, and the research was done whilst at my former employer, which means it's currently behind a firewall and a 150 page non-disclosure agreement.)

Then a couple of days ago I ran a poll off the back of this post as to what my blogpost this month should be about (though mainly the post was an excuse to reminisce about the Fighting Fantasy series of books).

And lo and behold, this subject is what people wanted to know about. But even if you don't want to know about it, and were one of the 57% who voted for the other two options, this is still probably a good post to read: I'm going to be discussing principles and techniques that apply to any evaluation of this kind of system modification.

However: spoiler alert - this little piece of research took an unexpected turn. Read on to find out what happened...



Why this is topical


This is particularly topical because during the market crisis that consumed much of 2020 it was faster moving averages that outperformed slower. Consider these plots which show the average Sharpe Ratio for different kinds of trading rule averaged across instruments. The first plot is for all the history I have (back to the 1970's), then the second is for the first half of 2020, and finally for March 2020 alone:



The pattern is striking: going faster works much better than it did in the overall sample. What's more, it seems to be confined to the financial asset classes (FX, Rates and especially equities) where vol exploded the most:



Furthermore, we can see a similar effect in another notoriously turbulent year:

If we were sell side analysts that would be our nice little research paper finished, but of course we aren't... a few anecdotes do not make up a serious piece of analysis.


Formally specifying the problem

Rewriting this in fancy sounding language, and bearing in mind the context of my trading system, the question becomes:

Are the optimal forecast weights across trading rules of different speeds different when conditioned on the current level of volatility?

As I pointed out in my last post, this leaves a lot of questions unanswered. How should we define the current level of volatility? How do we define 'optimality'? How do we evaluate the performance of this change to our simple unconditional trading rules?



Defining the current level of volatility


For this to be a useful thing to do, 'current' has to be based on backward looking data only. It would have been very helpful to have known in early February last year (2020) that vol was about to rise sharply, and thus perhaps that different forecast weights were required; but we didn't actually own the keys to a time machine, so we couldn't have known with certainty what was about to happen (and if we had, then changing our forecast weights would not have been high up our to-do list!).

So we're going to be using some measure of historic volatility. The standard measure of vol I use in my trading system (exponentially weighted, equivalent to a lookback of around a month) is a good starting point; we know it does a good job of predicting vol over the next 30 days or so (although it does suffer from biases, as I discuss here). Arguably a shorter measure of vol would be more responsive, whilst a longer measure would mean our forecast weights change less often, thus reducing costs.
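In pandas terms, that standard estimate looks something like the following. This is a minimal sketch with assumed parameters: a span of around 35 days is roughly the one month lookback described, and price is a stand-in for a daily price series, neither of which is spelled out in the post:

import pandas as pd

def ewm_vol_estimate(price: pd.Series, span: int = 35) -> pd.Series:
    # daily price changes, then an exponentially weighted standard deviation
    daily_returns = price.diff()
    return daily_returns.ewm(span=span).std()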

Now, how do we define the level of volatility? In that previous post I used the current vol estimate divided by a 10 year rolling average of vol for the relevant instrument. That seems pretty reasonable.

Here for example is the rolling % vol for SP500:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from systems.provided.futures_chapter15.basesystem import *

system = futures_system()

instrument_list = system.get_instrument_list()

all_perc_vols = [system.rawdata.get_daily_percentage_volatility(code)
                 for code in instrument_list]


And here's the same, after dividing by the ten year vol:

# 2500 business days is roughly ten years
ten_year_averages = [vol.rolling(2500, min_periods=10).std() for vol in all_perc_vols]
normalised_vol_level = [vol / ten_year_vol
                        for vol, ten_year_vol in zip(all_perc_vols, ten_year_averages)]

The picture is very similar, but importantly we can now compare and pool results across instruments.

def stack_list_of_pd_series(x):
    stacked_list = []
    for element in x:
        stacked_list = stacked_list + list(element.values)

    return stacked_list

stacked_vol_levels = stack_list_of_pd_series(normalised_vol_level)

# drop missing values before plotting the distribution
stacked_vol_levels = [x for x in stacked_vol_levels if not np.isnan(x)]
plt.hist(stacked_vol_levels, bins=1000)

What's immediately obvious is that this is a very skewed distribution. This is made clear if we stack up all the normalised vols across markets and plot the distribution:

The median is 2.6 and the mean is 2.9, but look at that right tail! About 1% of the observations are over 8.4, and the maximum value is nearly 30. You might think this is due to some particularly horrible markets (VIX?), but nearly all the instruments have seen normalised vol of more than 10 times the ten year average at some point, and the 1% tail is at least 6 times normal vol in every single instrument.
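For what it's worth, those summary statistics can be read straight off the stacked list from the code above (the quoted figures are from my data):

stacked = pd.Series(stacked_vol_levels)
print("Median %.1f Mean %.1f" % (stacked.median(), stacked.mean()))
print("99%% point %.1f Max %.1f" % (stacked.quantile(0.99), stacked.max()))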

At this point we need to think about how many vol regimes we're going to have, and how they should be selected. More regimes will mean we can more closely fit our speed to what is going on, but we'd end up with fewer data points per regime (I'm reminded of this post where someone had inferred behaviour from just 18 days when the VIX was especially low). Fewer data points will mean our forecast weights will either revert to an average or, worse, take extreme values if we're not fitting robustly.

I decided to use three regimes:
  • Low: Normalised vol in the bottom 25% quantile [using the entire historical period so far to determine the quantile] (over the whole period, normalised vol between a quarter and 1.85 times the ten year average)
  • Medium: Between 25% and 75% (over the whole period, normalised vol 1.85 to 3.5 times the ten year average)
  • High: Between 75% and 100% (over the whole period, normalised vol 3.5 to 30 times more than the ten year average)
There could be a case for making these regimes equal in size, but I think there is something unique about relatively high vol, so I made that regime smaller (with low vol the same size for symmetry). Equally, there is a case for making them more extreme. There certainly isn't a case for jumping ahead and seeing which set of regimes performs best - that would be implicit fitting!

def historic_quantile_groups(system, instrument_code, quantiles=[.25, .5, .75]):
    daily_vol = system.rawdata.get_daily_percentage_volatility(instrument_code)

    # We shift by one day to avoid forward looking information
    ten_year_vol = daily_vol.rolling(2500, min_periods=10).std().shift(1)
    normalised_vol = daily_vol / ten_year_vol

    quantile_points = [get_historic_quantile_for_norm_vol(normalised_vol, quantile)
                       for quantile in quantiles]
    stacked_quantiles_and_vol = pd.concat(quantile_points + [normalised_vol], axis=1)
    quantile_groups = stacked_quantiles_and_vol.apply(calculate_group_for_row, axis=1)

    return quantile_groups

def get_historic_quantile_for_norm_vol(normalised_vol, quantile_point):
    # a very long rolling window: effectively an expanding quantile
    return normalised_vol.rolling(99999, min_periods=4).quantile(quantile_point)

def calculate_group_for_row(row_data: pd.Series) -> int:
    values = list(row_data.values)
    if any(np.isnan(values)):
        return np.nan
    vol_point = values.pop(-1)
    group = 0  # lowest group
    for comparison in values[1:]:
        if vol_point <= comparison:
            return group
        group = group + 1

    # highest group will be len(quantiles)-1
    return group

Over all instruments pooled together...

quantile_groups = [historic_quantile_groups(system, code) for code in instrument_list]
stacked_quantiles = stack_list_of_pd_series(quantile_groups)

.... the size of each group comes out at:
  • Low vol: 59.5% of observations
  • Medium vol: 20.7% 
  • High vol: 19.8%
That's different from the 25, 50, 25 you'd expect. That's because vol isn't stable over this period, and we're using backward looking quantiles rather than the forward looking cheat of using the entire period to determine our quantiles (which would give us exactly 25, 50, 25).

Still, we've got almost a quarter of observations in our high vol group, which is what we were aiming for. And it would feel like cheating to go back and change the quantile cutoffs having seen these numbers.
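To see why the backward looking quantiles behave differently from the forward looking cheat, here is a toy comparison (hypothetical random data, nothing to do with the actual system):

import numpy as np
import pandas as pd

s = pd.Series(np.random.lognormal(size=1000))

# expanding quantile: uses only the data seen so far, like the code above
backward_q75 = s.expanding(min_periods=4).quantile(0.75)

# full sample quantile: a single forward looking number
full_sample_q75 = s.quantile(0.75)

# the share of points above each cutoff will generally differ
print((s > backward_q75).mean(), (s > full_sample_q75).mean())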


Unconditional performance of momentum speeds


Let's get the unconditional returns for the rules in our trading system: momentum using exponentially weighted moving average crossovers, from 2_8 (a 2 day lookback crossed with 8 days) up to 64_256, plus the carry rule (not strictly speaking part of the problem we're looking at today, but what the hell: we can use it as a proxy for whether 'divergent' systems like momentum, or 'convergent' systems, do worse or better when vol is high or low). These are average returns across instruments, which won't be as good as the portfolio level returns for each rule (we'll look at those later).

rule_list = list(system.rules.trading_rules().keys())

perf_for_rule = {}
for rule in rule_list:
    perf_by_instrument = {}
    for code in instrument_list:
        perf_for_instrument_and_rule = system.accounts.pandl_for_instrument_forecast(code, rule)
        perf_by_instrument[code] = perf_for_instrument_and_rule

    perf_for_rule[rule] = perf_by_instrument

# stack across instruments
stacked_perf_by_rule = {}
for rule in rule_list:
    acc_curves_this_rule = perf_for_rule[rule].values()
    stacked_perf_this_rule = stack_list_of_pd_series(acc_curves_this_rule)
    stacked_perf_by_rule[rule] = stacked_perf_this_rule

def sharpe(x):
    # annualised Sharpe Ratio, assuming daily data (16 ~ sqrt of 256 business days)
    return 16 * np.nanmean(x) / np.nanstd(x)

for rule in rule_list:
    print("%s:%.3f" % (rule, sharpe(stacked_perf_by_rule[rule])))

ewmac2_8:0.064
ewmac4_16:0.202
ewmac8_32:0.303
ewmac16_64:0.345
ewmac32_128:0.351
ewmac64_256:0.339
carry:0.318

Similar to the plot we saw earlier: unconditionally, medium and slow momentum (and carry) tend to outperform fast momentum.

Now what if we condition on the current state of vol?

historic_quantiles = {}
for code in instrument_list:
    historic_quantiles[code] = historic_quantile_groups(system, code)

conditioned_perf_for_rule_by_state = []

for condition_state in [0, 1, 2]:
    conditioned_perf_for_rule = {}
    for rule in rule_list:
        conditioned_perf_by_instrument = {}
        for code in instrument_list:
            perf_for_instrument_and_rule = perf_for_rule[rule][code]
            # boolean: is this instrument in the relevant vol state on each day?
            condition_vector = historic_quantiles[code] == condition_state
            condition_vector = condition_vector.reindex(perf_for_instrument_and_rule.index).ffill()
            conditioned_perf = perf_for_instrument_and_rule[condition_vector]

            conditioned_perf_by_instrument[code] = conditioned_perf

        conditioned_perf_for_rule[rule] = conditioned_perf_by_instrument

    conditioned_perf_for_rule_by_state.append(conditioned_perf_for_rule)

    stacked_conditioned_perf_by_rule = {}
    for rule in rule_list:
        acc_curves_this_rule = conditioned_perf_for_rule[rule].values()
        stacked_perf_this_rule = stack_list_of_pd_series(acc_curves_this_rule)
        stacked_conditioned_perf_by_rule[rule] = stacked_perf_this_rule

    print("State:%d \n\n\n" % condition_state)
    for rule in rule_list:
        print("%s:%.3f" % (rule, sharpe(stacked_conditioned_perf_by_rule[rule])))

State:0  (Low vol)
ewmac2_8:0.172
ewmac4_16:0.277
ewmac8_32:0.364
ewmac16_64:0.423
ewmac32_128:0.446
ewmac64_256:0.428
carry:0.395

Interesting! These numbers are better than the unconditional figures we saw above, but fast momentum still looks poor in relative terms (these numbers, like all those in this post, are after costs). But overall the pattern isn't that different from the unconditional performance; nowhere near enough to justify changing forecast weights very much.

State:1 (Medium vol)
ewmac2_8:0.080
ewmac4_16:0.263
ewmac8_32:0.381
ewmac16_64:0.401
ewmac32_128:0.365
ewmac64_256:0.311
carry:0.243

The 'medium' level of vol is more similar to the unconditional figures. Again, nothing to write home about in terms of differences in relative performance, although relatively speaking fast is looking a little worse.


State:2 (High vol)
ewmac2_8:-0.254
ewmac4_16:-0.079
ewmac8_32:0.042
ewmac16_64:0.043
ewmac32_128:0.030
ewmac64_256:0.064
carry:0.160

Now you've probably noticed a pattern here, and I know everyone is completely distracted by it, but just for a moment let's focus on relative performance, which is what this post is supposed to be about. Relatively speaking, fast is still worse than slow; and it's now much worse.

Carry has markedly improved, but.... oh what the hell, I can't contain myself any more. There is nothing that interesting or useful in the relative performance; what is clear is that the absolute performance of everything degrades as we move to a higher volatility environment.


Testing the significance of overall performance in different vol environments

I really ought to end this post here, as the answer to the original question is a firm no: you shouldn't change your speed as vol increases. 

However we've now been presented with a new hypothesis: "Momentum and carry will do badly when vol is relatively high"

Let's switch gears and test this hypothesis.

First of all let's consider the statistical significance of the differences in return we saw above:

from scipy import stats

for rule in rule_list:
    perf_group_0 = stack_list_of_pd_series(conditioned_perf_for_rule_by_state[0][rule].values())
    perf_group_1 = stack_list_of_pd_series(conditioned_perf_for_rule_by_state[1][rule].values())
    perf_group_2 = stack_list_of_pd_series(conditioned_perf_for_rule_by_state[2][rule].values())

    # two sample t-tests on daily returns, between each pair of vol regimes
    t_stat_0_1 = stats.ttest_ind(perf_group_0, perf_group_1)
    t_stat_1_2 = stats.ttest_ind(perf_group_1, perf_group_2)
    t_stat_0_2 = stats.ttest_ind(perf_group_0, perf_group_2)

    print("Rule: %s , low vs medium %.2f medium vs high %.2f low vs high %.2f" % (rule,
          t_stat_0_1.pvalue,
          t_stat_1_2.pvalue,
          t_stat_0_2.pvalue))

Rule: ewmac2_8, low vs medium 0.26 medium vs high 0.00 low vs high 0.00
Rule: ewmac4_16, low vs medium 0.85 medium vs high 0.00 low vs high 0.00
Rule: ewmac8_32, low vs medium 0.96 medium vs high 0.00 low vs high 0.00
Rule: ewmac16_64, low vs medium 0.60 medium vs high 0.00 low vs high 0.00
Rule: ewmac32_128, low vs medium 0.20 medium vs high 0.00 low vs high 0.00
Rule: ewmac64_256, low vs medium 0.08 medium vs high 0.01 low vs high 0.00
Rule: carry, low vs medium 0.04 medium vs high 0.40 low vs high 0.00
These are p-values, so a low number means statistical significance. Generally speaking, with the exception of carry, the biggest effect comes when we jump from medium to high vol; the jump from low to medium doesn't result in significantly worse performance (for the rules with p-values above 0.5 the difference between low and medium vol performance is negligible).

So it's something special about the high-vol environment that badly degrades returns.


Is this an effect we can actually capture?


One concern I have is how quickly we move in and out of the different vol regimes; here for example is Eurodollar:




To exploit this effect we're going to have to do something like radically reduce our leverage whenever an instrument enters 'zone 2: high vol'. That clearly would have worked in early 2020, when there was a persistent high vol environment for some reason that escapes me now. But would we really have had the chance to do very much in those brief few days in late 2019 when Eurodollar entered the highest vol zone?

Above you may have noticed that I put a one day lag on the vol estimate. This is to ensure we aren't conditioning today's return on a vol estimate that uses today's return: clearly we couldn't change our leverage, or otherwise react, until we actually had the closing price.

[In my backtest I automatically lag trades by a day, so when I finally come to test anything this shift can be removed]

In fact I have a confession to make... when first running this code I omitted the shift(1) lag, and the results were even stronger, with heavily negative returns for all trading rules in the highest vol region (except carry, which was barely positive). So this makes me suspicious that we wouldn't have the chance to react in time to make much of this.

Still, repeating the results with a 2 and even a 3 day lag, I still get some pretty low p-values, so there is probably something in it. Also, interestingly, with these greater lags there is more differentiation between the low and medium regimes. Here for example are the p-values for a 3 day lag:

Rule: ewmac2_8, low vs medium 0.06 medium vs high 0.01 low vs high 0.00
Rule: ewmac4_16, low vs medium 0.16 medium vs high 0.04 low vs high 0.00
Rule: ewmac8_32, low vs medium 0.13 medium vs high 0.08 low vs high 0.00
Rule: ewmac16_64, low vs medium 0.03 medium vs high 0.06 low vs high 0.00
Rule: ewmac32_128, low vs medium 0.01 medium vs high 0.06 low vs high 0.00
Rule: ewmac64_256, low vs medium 0.02 medium vs high 0.14 low vs high 0.00
Rule: carry, low vs medium 0.08 medium vs high 0.46 low vs high 0.01
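For reference, varying the lag just means shifting the regime indicator by extra days before conditioning returns on it. A hypothetical tweak to the conditioning code above (my illustration, not the original code) would be something like this, where an extra_lag of 1 or 2 on top of the original one day shift gives the 2 and 3 day lags:

def lagged_condition_vector(quantile_groups: pd.Series,
                            condition_state: int,
                            extra_lag: int = 2) -> pd.Series:
    # today's return is conditioned on a vol regime estimated several days ago
    condition_vector = quantile_groups == condition_state
    return condition_vector.shift(extra_lag).fillna(False)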


A more graduated system


Rather than using regimes, I think it would make more sense to use a continuous variable: the quantile percentile itself, rather than the regime bucket it falls into. Then we won't drastically shift gears between regimes.

Recall our three regimes:
  • Low: Normalised vol in the bottom 25% quantile
  • Medium: Between 25% and 75%
  • High: Between 75% and 100% 
One temptation is to introduce something just for the high regime, where we start degearing only when our quantile percentile is above 75%; but that makes me feel queasy (it's clearly implicit fitting). Plus, the results with higher lags indicate that it might not be a 'high vol is especially bad' effect, but rather a general 'as vol gets higher we make less money' effect.

After some thought (well 10 seconds) I came up with the following:

Multiply raw forecasts by L where (if Q is the percentile expressed as a decimal, eg 1 = 100%):

L = 2 - 1.5Q

That will vary L between 2 (if vol is really low) and 0.5 (if vol is really high). The reason we're not turning the system off completely when vol is high is for all the usual reasons: although this is a strong effect, it's still not a certainty.
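As a quick sanity check on the formula, here is the multiplier at a few quantile points:

for q in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print("Q = %.2f  L = %.3f" % (q, 2 - 1.5 * q))

# Q = 0.00  L = 2.000
# Q = 0.50  L = 1.250
# Q = 1.00  L = 0.500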

I apply this to the raw forecast. I do this because there is no guarantee that the multiplier will leave the forecast with the correct scaling; if I then estimate forecast scalars using the attenuated forecasts, I will end up with something that has the right scaling.

These forecasts will then be capped at -20, +20; which may undo some of the increase in leverage applied when vol is particularly low - but I can live with that.
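In pandas the cap itself is a one liner, something like this (scaled_forecast being a stand-in for the forecast after scaling):

capped_forecast = scaled_forecast.clip(-20, 20)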

 

Smoothing vol forecast attenuation


The first thing I did was to see what the L factor actually looks like in practice. Here it is for Eurodollar [I will give you the code in a few moments]:


It seems to make sense; you can see, for example, the attenuation backing right off in early 2020 when we had the COVID inspired high vol. However it worries me that this thing is pretty noisy. Laid on top of a relatively smooth slow moving average, it is going to boost trading costs quite a lot. I think the appropriate thing to do is to smooth it before applying it to the raw forecast. Of course, if we smooth it too much we'll lag the changes in vol.

Once again, the wrong thing to do here would be some kind of optimisation of post-cost returns to find the best smoothing lookback, or something keyed to the speed of the relevant trading rule; instead I'm just going to plump for an EWMA with a 10 day span.


Testing the attenuation, rule by rule


Here then is the code that implements the attenuation:

import numpy as np
import pandas as pd
from statsmodels.distributions.empirical_distribution import ECDF

from systems.forecast_scale_cap import *


class volAttenForecastScaleCap(ForecastScaleCap):

    @diagnostic()
    def get_vol_quantile_points(self, instrument_code):
        ## More properly this would go in raw data perhaps
        self.log.msg("Calculating vol quantile for %s" % instrument_code)
        daily_vol = self.parent.rawdata.get_daily_percentage_volatility(instrument_code)
        ten_year_vol = daily_vol.rolling(2500, min_periods=10).std()
        normalised_vol = daily_vol / ten_year_vol

        normalised_vol_q = quantile_of_points_in_data_series(normalised_vol)

        return normalised_vol_q

    @diagnostic()
    def get_vol_attenuation(self, instrument_code):
        normalised_vol_q = self.get_vol_quantile_points(instrument_code)
        vol_attenuation = normalised_vol_q.apply(multiplier_function)

        smoothed_vol_attenuation = vol_attenuation.ewm(span=10).mean()

        return smoothed_vol_attenuation

    @input
    def get_raw_forecast_before_attenuation(self, instrument_code, rule_variation_name):
        ## original code for get_raw_forecast
        raw_forecast = self.parent.rules.get_raw_forecast(
            instrument_code, rule_variation_name
        )

        return raw_forecast

    @diagnostic()
    def get_raw_forecast(self, instrument_code, rule_variation_name):
        ## overridden method: this will be called downstream so don't change the name
        raw_forecast_before_atten = self.get_raw_forecast_before_attenuation(
            instrument_code, rule_variation_name
        )

        vol_attenuation = self.get_vol_attenuation(instrument_code)

        attenuated_forecast = raw_forecast_before_atten * vol_attenuation

        return attenuated_forecast


def quantile_of_points_in_data_series(data_series):
    results = [quantile_of_points_in_data_series_row(data_series, irow)
               for irow in range(len(data_series))]
    results_series = pd.Series(results, index=data_series.index)

    return results_series


# this is a little slow so suggestions for speeding up are welcome
def quantile_of_points_in_data_series_row(data_series, irow):
    if irow < 2:
        return np.nan
    historical_data = list(data_series[:irow].values)
    current_value = data_series[irow]
    ecdf_s = ECDF(historical_data)

    return ecdf_s(current_value)


def multiplier_function(vol_quantile):
    if np.isnan(vol_quantile):
        return 1.0

    return 2 - 1.5 * vol_quantile
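Since the ECDF loop above invites a speed-up, one possibility (my suggestion, not the original code, and it needs pandas 1.4 or later) is an expanding percentile rank. It isn't numerically identical, since it includes the current point in its own history, but it should be close:

def quantile_of_points_in_data_series_fast(data_series: pd.Series) -> pd.Series:
    # percentile rank of each point within all data up to and including it;
    # avoids rebuilding the ECDF from scratch on every row
    return data_series.expanding(min_periods=2).rank(pct=True)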

And here's how to implement it in a new futures system (we just copy and paste the futures_system code and change the object passed for the forecast scaling/capping stage):
from systems.provided.futures_chapter15.basesystem import *


def futures_system_with_vol_attenuation(data=None, config=None, trading_rules=None, log_level="on"):

    if data is None:
        data = csvFuturesSimData()

    if config is None:
        config = Config(
            "systems.provided.futures_chapter15.futuresconfig.yaml")

    rules = Rules(trading_rules)

    system = System(
        [
            Account(),
            Portfolios(),
            PositionSizing(),
            FuturesRawData(),
            ForecastCombine(),
            volAttenForecastScaleCap(),
            rules,
        ],
        data,
        config,
    )

    system.set_logging_level(log_level)

    return system

And now I can set up two systems, one without the attenuation and one with:

system = futures_system()

# will equally weight instruments
del(system.config.instrument_weights)

# need to do this to deal fairly with attenuation
# do it here for consistency
system.config.use_forecast_scale_estimates = True
system.config.use_forecast_div_mult_estimates = True

# will equally weight forecasts
del(system.config.forecast_weights)

# standard stuff to account for instruments coming into the sample
system.config.use_instrument_div_mult_estimates = True

system_vol_atten = futures_system_with_vol_attenuation()
del(system_vol_atten.config.forecast_weights)
del(system_vol_atten.config.instrument_weights)
system_vol_atten.config.use_forecast_scale_estimates = True
system_vol_atten.config.use_forecast_div_mult_estimates = True
system_vol_atten.config.use_instrument_div_mult_estimates = True

rule_list = list(system.rules.trading_rules().keys())

for rule in rule_list:
    sr1 = system.accounts.pandl_for_trading_rule(rule).sharpe()
    sr2 = system_vol_atten.accounts.pandl_for_trading_rule(rule).sharpe()

    print("%s before %.2f and after %.2f" % (rule, sr1, sr2))

Let's check out the results:
ewmac2_8 before 0.43 and after 0.52
ewmac4_16 before 0.78 and after 0.83
ewmac8_32 before 0.96 and after 1.00
ewmac16_64 before 1.01 and after 1.07
ewmac32_128 before 1.02 and after 1.07
ewmac64_256 before 0.96 and after 1.00
carry before 1.07 and after 1.11

Now these aren't huge improvements, but they are very consistent across every single trading rule. But are they statistically significant?

from syscore.accounting import account_test

for rule in rule_list:
    acc1 = system.accounts.pandl_for_trading_rule(rule)
    acc2 = system_vol_atten.accounts.pandl_for_trading_rule(rule)
    print("%s T-test %s" % (rule, str(account_test(acc2, acc1))))

ewmac2_8 T-test (0.005754898313025798, Ttest_relResult(statistic=4.23535684665446, pvalue=2.2974165336647636e-05))
ewmac4_16 T-test (0.0034239182014355815, Ttest_relResult(statistic=2.46790714210943, pvalue=0.013603190422737766))
ewmac8_32 T-test (0.0026717541872894254, Ttest_relResult(statistic=1.8887927423648214, pvalue=0.058941593401076096))
ewmac16_64 T-test (0.0034357601899108192, Ttest_relResult(statistic=2.3628815728522112, pvalue=0.018147935814311716))
ewmac32_128 T-test (0.003079560056791747, Ttest_relResult(statistic=2.0584403445859034, pvalue=0.03956754085349411))
ewmac64_256 T-test (0.002499427499123595, Ttest_relResult(statistic=1.7160401190191614, pvalue=0.08617825487582882))
carry T-test (0.0022278238232666947, Ttest_relResult(statistic=1.3534155676590192, pvalue=0.17594617201514515))

A mixed bag there, but with the exception of carry there does seem to be a reasonable amount of improvement, most markedly with the very fastest rules. Again, I could do some implicit fitting here and only use the attenuation on momentum, or use less of it on slower momentum. But I'm not going to do that.

Summary


To return to the original question: yes, we should change our trading behaviour as vol changes. But not in the way you might think, especially if you had extrapolated from the performance of March 2020.

As vol gets higher, faster trading rules do relatively badly; but the bigger story is that all momentum rules suffer (as does carry, a bit). Not what I had expected to find, but very interesting. So a big thanks to the internet's hive mind for voting for this option.


Monday, 1 March 2021

Does X work, some brief thoughts and choose your adventure

When I was a spotty teenager I was a walking nerd cliche. I liked computers, both for programming and for games. I was terrified of girls. I was rubbish at nearly all sports*. And I played D&D (and Tunnels and Trolls, and Runequest).

* Nearly all: No, I'm not talking about the 'sport' of Chess: I was rubbish at Chess too, and still am. But due to some weird anomaly I was a dinghy sailing champion at school, and later world champion.

I also remember reading the Fighting Fantasy books written by Games Workshop founders Steve Jackson and Ian Livingstone. 

Copyright image used without permission, but if you click here you can buy this book so that seems fair


These books were 'choose your adventure' style. So you'd read page 1 and it would say something like 'You are outside a mysterious castle... long dull description of castle follows.... Do you (a) climb the castle wall (p.34), (b) dress up as a washerwoman and try to enter the castle through the front gate (p.172), or (c) attack the sentries head on (p.91)?'. After making your choice you'd turn to page 34, or 172, or 91; there you'd find out the consequences of your actions, and face a new set of options, unless you'd already died (hint: option (c) is a poor choice).

As well as playing the book properly you could do 'fun' stuff like reverse engineering the network graph for the pages and working out the optimal shortest route through the book.

Now anyone under the age of 35 will be open mouthed at how archaic this sounds, but yes, this is what we had to do for entertainment in the 1980s. And that was partly because even then parents were worried about screen time: if you were curled up in a corner with an actual book they would be happier than if you were sat in front of your ZX Spectrum playing Jet Set Willy (kids: google it). Of course now I spend entire days in front of a screen, and my idea of relaxation is to read books, so go figure.

You may be wondering what this has to do with anything, but bear with me for a bit as I'm now going to radically change the subject. 


Quite a few of the queries I get asked run along the lines of 'Does X make sense?'. Here are a few recent examples:

  • Does changing your moving average speed make sense in different periods of volatility?
  • Does it make sense to use a different system on the long or the short side?
  • Does it make sense to use a different system in bull or bear markets?
  • Should I use different parameters for different instruments?
These 'does it make sense' questions are interesting. For starters, they all seem to make intuitive sense. It seems crazy that the market would behave in exactly the same way in periods of high and low vol. It's obvious the market doesn't go up and down in the same way. And naturally, trading Eurodollar is completely different from trading VIX.

These 'does it make sense' questions are also dangerous. Every single one of them is an invitation to make your system more complex and less intuitive. Every single one is replete with the opportunity to overfit. They introduce a new set of parameters, to define market state; and then multiply the existing set of parameters by allowing different values for different states.

These questions are also complicated, and involve answering several different sub-questions. They involve modifying or changing your system according to some kind of exogenous input, but exactly what that input should be isn't always obvious. It isn't always clear how we should make the change. And they bring up some interesting questions about how we should evaluate the relevant changes.

Take the first question as an example. First of all we need to define what a different level of volatility looks like, how it should be measured, how many states there should be, and so on. There is room for a lot of implicit fitting there, so we should probably keep things simple and try and stick to some predefined measure; maybe 2 to 4 states of volatility using quartile measures based on the full history of the relevant instrument.

We then need to work out how to change our moving average speed (which for once is a very tightly defined 'how'). For me that at least is relatively straightforward: I'd change the forecast weights that determine how my forecasts are linearly combined. That at least can be done with explicit fitting to avoid exploding the number of parameters we have to consider. 

As for evaluation, well that isn't so bad: we can just compare a system before and after. Of course we need to decide whether we're just going to look at Sharpe Ratio, or whether we also care about skew. Also, are we happy with any improvement, no matter how small? Or should we demand a higher threshold, given that we're making our system more complicated? Or is this a change that makes sense in its own right, one we should be happy to make even if it performs slightly worse (though not worse in a statistically significant sense)?

What about the long and short system? Defining long and short is easy enough, but what parameters should we change? Something like the response function to the forecast (this kind of plot) perhaps. That might mean that we take on smaller positions when short in certain markets, which I suppose is what people would like and expect to see. 

However, should we evaluate such a change based on pure Sharpe Ratio? If we did this, then we'd end up never going short markets which have barely gone down in the past, and basically increase our long bias to eg bonds (we can make things worse, incidentally, by not forcing the response function to go through zero). That would only make sense for someone who is only investing in this trading strategy, and doesn't also have something like a 60:40 portfolio already. 

So a better evaluation would be something like 'alpha', where we are looking for improved returns versus the market rather than outright improvements (of course how we define 'the market' is another moot point).

Bull and bear markets is a related question. The first issue, once again, is how to define bull and bear in a non forward looking way. Something like the risk adjusted 200 day EWMA seems reasonable, without being tempted down the path of overfitting. I used changes in US interest rates in this post as a proxy (something that's relevant after the wild changes in rates products over the last few days).

How should we implement the change? I'd be most tempted to allow my forecast weights to change, rather than modifying the behaviour of any individual rule. And again, evaluation should rightly consider 'alpha', not just outright Sharpe Ratio. Consistency of performance should probably come into it too: a higher SR isn't as valuable as better performance in bear markets, since that 'insurance' payoff is part of what people buy CTA type strategies for.

Different parameters for different markets is another massive can of worms. The same issues of how to change things and how to evaluate the changes are relevant. Again, I'd be most tempted to fit different forecast weights, which I do a bit anyway, because different instruments have different cost levels and I take those into account.

But there is surely some value in pooling data from other markets as well? Shouldn't we fit based on some blend of an instrument's own data, plus data from other markets (where the blend would be different depending on how much data the relevant instrument has)? Should we also give higher weight to markets that are more similar (same asset class, same country, in some kind of correlation cluster...)?

Also, what if we do this and one or more of the other changes? Should we trade long/short differently for US 10 year bonds than we do for VIX?

Phew!

As you can see, these 'does X work' questions can quickly become very complicated. So you can see why I'm wary of doing them.

But I've decided to do one of these investigations this month, and you get to choose which one!

Yes, just for this month, I'm going to make this a 'choose your adventure' blog where you get to decide the ending. These are the options:
  • Does changing your moving average speed make sense in different periods of volatility?
  • Does it make sense to use a different system on the long or the short side?
  • Does it make sense to use a different system in bull or bear markets?
I haven't included 'fitting for instruments' since that is a significant project rather than something I can do in a few days, although I will probably look at it in the future.

Let me know which of the ideas above you'd like me to investigate, and I'll write a longer blog post about the one that is the most popular. The poll is here on twitter and on quiz-maker....


UPDATE 3rd March 11:47 GMT: VOTING CLOSED

The votes are in! The votes are (in traditional reverse order):

  • Does it make sense to use a different system in bull or bear markets? 23.8%
  • .... to use a different system on the long or the short side? 32.9% 
  • ... to change your moving average speed in different periods of volatility? 43.3%, the winner
 The vote breakdown is here, including Twitter (TWTR), quiz-maker (QM) and below the line (BTL) comments:

               TWTR   BTL   QM   Total
Vol & speed      44          18      62
Long & Short     33     3    11      47
Bull & Bear      28     6            34

So at some point in the very near future I'll be posting on "Does changing your moving average speed make sense in different periods of volatility?"


UPDATE 4th March

Monday, 1 February 2021

So you want to be a quant/systematic trader?

One of the upsides of having a (very, very minor) public profile is that you get a lot of people asking you for advice, which is flattering (and if you say otherwise, you need to consider just how first world that particular 'problem' is). The only downside is that you get asked the same sort of question a number of different times. At some point it becomes worth writing a blog article on the subject, which saves time, and also means the person asking gets a much better answer.

(Also, cynically, posts like this get more clicks than ones about obscure corners of portfolio optimisation)

The generic question this article seeks to answer is "How do I become like you, Rob?" And by 'like you', they don't mean "How do I become a bald middle-aged bloke with three kids, a mortgage, and an awesome shed?" They want to know how to become a systematic / quantitative trader.

Now there is a trite answer to this which is 'read all my books and stop bothering me you peasant', but of course even the most arrogant and prolific author cannot really believe that their canon alone is sufficient reading material to prepare someone for their future career.

This post is divided into three parts; firstly I define what I mean precisely by the end goal of becoming a systematic/quantitative trader. Secondly I discuss routes to market, how you can actually end up in this lofty position. Finally I talk about the resources I would recommend to help you.



Where do you want to end up?

The phrase 'quant / systematic trader' I began this post with is deliberately vague; it's not clear exactly what it means. And that's because I don't want this post to be limited to someone who wants to end up exactly like me: trading futures with a holding period averaging a few weeks, with a fully automated system lovingly coded in Python, using mostly momentum and carry type signals.

For starters there are a whole bunch of different trading styles and assets that are ripe for trading in a systematic or quantitative way: options, ETFs, equities, cash bonds, swaps and CDS; and you can trade those at anything from high frequency up to buy and HODL forever; using valuation factors, relative value, liquidity provision, or a thousand other approaches.

And of course you can trade purely systematically, or in a purely discretionary way but guided by numbers (so still a quant), or in some mixture of the two; with or without a fully automated system.

And there is more to finance than trading: there is risk management, portfolio management, execution trading, quant software development, quant pricing, and many other associated jobs.

I'm pointing this out for a few reasons. Firstly, there is a lot of overlap between the skill sets required for these jobs. So even if you don't want to become a medium speed fully automated python futures trader with a bias towards momentum and carry (to be abbreviated to M.S.F.A.P.F.T.B.M.A.C. for the rest of this post), a lot of what I will say will still be relevant to you.

For example, pretty much everyone working at the math'y end of finance will need to code. But there is coding, and there is coding. Quant developers in high frequency trading will probably need to be fluent in C, whilst at the other extreme quant options traders of the 'shift-F9 monkey' flavour will need to know some VBA but little else.

Secondly, and this will become important in the next part of this post, it's not uncommon for people to transfer between these roles. Almost nobody in finance is still doing the job they started in. Just today I had a LinkedIn message from an old colleague whose CV looks like this: Maths PhD -> statistical forecasting -> rates trader -> teacher -> software engineer.

Remember that you don't necessarily know where you will end up, and it's good to keep an open mind. Two things are very valuable in finance, and equally valid in life:

  • Optionality: keep your options open
  • Diversification: don't put your eggs in one basket


Routes to market

OK, so let's assume you have at least a vague idea of where you want to end up, how do you get there?

I did do a post on this some time ago, which is still worth reading, and I've also written about it elsewhere. A key distinction is whether you want to end up trading your own money, or other people's. Many people assume the correct approach is to trade your own money first, build up an amazing track record, and then fight off all the hedge fund managers who will be desperately trying to recruit you, or the outside investors who will be throwing money at you.

But there are a number of reasons why this is extremely unlikely. In practice the journey in the other direction is more common; the world is full of ex-professional money managers like me sitting in their sheds (or if they are more successful than I was, in their ski lodge in Verbier) trading their own money, but there are relatively few ex-shed dwellers working on Wall Street (at least pre-pandemic; in this time of COVID pretty much everyone is currently working in an actual or metaphorical shed).

For most people then the answer is to:

  • get a fancy finance job, and either do it forever, or at some point retire and trade your own money
  • have another job, earn enough money to trade with, and then at some point hopefully have enough money to stop working and just live off your trading earnings

The skillsets for these two routes do have some overlap, but there are some important differences. For example, if you are going to try and make a living as a finance professional it helps to have some political and people skills, even amongst the rough and tumble of a trading floor or the autistic spectrum of a cliched quant group. 

Joking(?) aside, formal qualifications are extremely important in the world of professional finance (and they will also matter to outside investors if you were to go for the lottery ticket option of starting your own fund) but will not matter at all if you can only lose your own capital.

So the first step, if you are going down the pro route, is to get a degree... and probably more than one. The CEO at AHL who I worked under was hired straight out of university in 1991 with an undergraduate degree. Fifteen years after that, I was hired in 2006 with a masters (and some experience). Another fifteen years on, in 2021, it is much harder to get an elite front office quant job without a PhD.

It goes without saying that the degree should probably be in maths, science, engineering, computer science or some variety of economics; and from as good a university as you can get into. It's better, from a job perspective, to do a less prestigious degree at a good university rather than vice versa, as long as you're going to get at least a 2:1. A 2:1 from a good university is seen by most recruiters as better than a first from a poorer one (wrongly! but this is the world we live in), whereas a 2:2, even from Cambridge, won't even get you through the door (clearly this is a UK centric opinion). It's also better to do a degree in a more traditional subject: computer science rather than game design, for example.

Having said all that, if you really love history and get a place at a good university to do it then you should do it. Yes it's unlikely you will end up writing option pricing code (lucky you!), but there are still plenty of excellent jobs in finance that you can do, and you will also be able to do lots of other jobs as well: optionality.

The next piece of advice I give everybody is to think about the following hierarchy:

  1. The job you want at the place you want to work
  2. The job you want at a place that isn't quite as good
  3. Another front office job at the place you want to work
  4. Another front office job at a place that isn't quite as good
  5. The job you want at somewhere that's not good at all
  6. Something else that uses your skill set, not in finance
  7. Something else in finance

Clearly if you have a choice you should probably prioritise 1 above 2, and so on. I'd say generally it's better to have the job you want, even if it means working at Morgan Stanley rather than Goldmans: people hop between firms all the time, and if you're good you will have no trouble moving up the IB ranking or HF AUM table. The exception is (5), because having somewhere rubbish on your CV can harm your future career. 

So it's probably unwise to take a job as a 'trader' at some third rate bucket shop (where you'll spend all your time hedging customer flow and earning a relatively meagre income, as well as not being able to look at yourself in the mirror because of all the poor slobs you are ripping off). Better to work in risk management at a half decent bank, where you will get a feel for what the opportunities are, and have a reasonable chance of becoming a proper trader if it turns out that's what floats your boat.

I've spoken to several students who have said things like 'Well I was offered a job in sales at <tier one investment bank>, but I really want to be a hedge fund trader so I've turned them down'. This is very stupid! From sales in IB to hedge fund trader is two or three hops on the snakes and ladders board of life, and none of those hops is insurmountably large. 

And it may turn out that you're much more suited to sales anyway; you never know, those recruitment people may have seen something in you that you didn't see in yourself (and I speak as someone who interviewed for a banking research job, and ended up getting an offer from the trading desk: "Yes, this guy is a total nerd and on the face of it ideal research fodder. But his personality profile indicates a strong psychopathic tendency, so he's our man").

I know dozens of people who started out as quants, or developers, or risk managers; and are now systematic portfolio managers or quant traders. Better to accept a job doing that, as long as it's at a half decent firm, than hold out for a lottery ticket that may never pay off. As I said above, it's unlikely that you even know at the age of 21 (or whatever) what you want to end up doing. 

This also means you shouldn't prioritise any job in finance over everything else. If you have a degree in computer science, and a choice between a grunt middle office finance job writing SQL queries for some legacy big iron database, or a more interesting job at a data science startup: for god's sake take the second option, even if it pays less.

Although the SQL grunt is on the same org chart, and possibly in the same building, as the trader (though unlikely on the same floor), the reality is that the journey from the former to the latter is very difficult. Whereas if you become an expert in using big data, your chances of getting hired by a hedge fund to do the same are exponentially higher; and as I've already said, quant developer to quant trader is a relatively common journey.

What's more, the second job leaves you with more options open, both inside and outside of finance. The likely paths from SQL grunt include 0.001% of paths where you end up as a trader, 0.999% where you get stuck somewhere on the journey, and 99% where you remain an SQL grunt until someone finally works out how to copy the data into MongoDB, at which point you get fired.

This also means that if you are interested in trading your own money, then you should be doing something right now that you enjoy and are good at; and if you are really lucky, something that also pays well enough to save money. Don't do a degree in Economics just because you think you need to. Do something you love. If you do hit the career or trading-your-own-money jackpot, you don't want to be one of those desperately boring people who retire at the age of 40 or 50 with no interests outside of finance, and aren't actually interested in finance anyway.


Resources

One of the fun things about this 'job' is that it requires a wide variety of skills to do well. This is doubly true if you're an independent trader, since you have to do everything yourself. That means this section has a lot of headings!

However a few caveats:

  • As I said above, there are a wide variety of things you can do in this field and the emphasis will be different depending on exactly what role you want to end up doing.
  • This list will inevitably be weak in areas where I am weak myself; I've never worked as a high frequency trader or options valuation quant. 
  • Like everything I write, this list is tainted by my subjective preferences and experiences.
  • I am old! I still think fondly of textbooks I was using as an undergraduate 20 years ago. More recent ones may have passed me by.
  • Other people have produced lists like this, and done a more rigorous job, for example here, and here
This section of the post is mostly a truncated version of this page, where I've focused only on the books and websites that are directly relevant for the problem in hand, and cut out most of the 'nice to haves' in favour of the 'must haves'. Nevertheless, I encourage you to check out the longer list of books on that page.

Coding

"How do I learn to code" is another question I get asked a lot. And it's very difficult for me to answer it. I learned to code nearly 40 years ago, at the age of seven, in BASIC on one of these:


TRS-80 color computer
By Bilby - Own work, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=10858630

Since then I've learned and mostly forgotten at least 30 other languages (I've even forgotten the names of some of them). So when someone asks "How do I learn python like you did", well the truthful answer is to go back in time 40 years and learn BASIC, assembler, C, SQL ..... Matlab, R, S-plus, and then learn Python. If the questioner is a 20 year old student that isn't helpful.

In all seriousness there are dozens of websites which teach you how to code for free. And I can do no more than point to https://wiki.python.org/moin/BeginnersGuide/Programmers for python specifically. 

A question I can answer is "How do you become a better Python programmer?". This is in fact two questions: how do you write better Python? And how do you become a better programmer?

Better Python:

  • Python cookbook, Beazley and Jones
  • Classic computer science problems in python, Kopec
  • Effective python, Slatkin (some overlap with the cookbook, but a lot shorter and therefore cheaper)

Better programmer:

  • Clean code, Martin: Concise and brilliant 
  • The Art of Unix programming, Raymond: Useful even for non Unix people 
  • Code complete, McConnell: Large reference manual 

Alongside this, there is some specific Python that it's super useful to know for finance. I don't actually own these, and I haven't read the third or fourth, but the author is highly rated. 

  • Python for finance, Hilpisch.
  • Python for data analysis, by the creator of pandas, Wes McKinney
  • Derivatives Analytics with Python, Hilpisch.
  • Python for Algorithmic Trading, Hilpisch (note covers OANDA and FXCM but not IB)

Of course there are other languages than Python like R and Matlab or C (all of which I've used in the past) and Java (which I haven't used extensively, and therefore I naturally hate). This isn't the place for a language war (there is some discussion here of what might work best), but if you want references on material for other languages you might try here (for R), and here (for C++).

There are some coding blogs and websites that I've found particularly useful and interesting.


Automated trading (with interactive brokers)

A very specific coding need is to send orders to a broker. If you use Interactive Brokers like me (via ib_insync and using the IB controller), then you'll need to become very familiar with the following web addresses:

You may also want to look at my open source backtesting and trading engine, plus my series of posts on using the python TWS API.

Econometrics, statistics and all that jazz

The problem with young people today is they think they know everything because they've played around with some black box machine learning package. But they haven't got a firm grasp of the basics. Which means they are very likely to end up overfitting the hell out of everything.

  • Fundamental methods of Mathematical Economics, Chiang. Good starting point if you've forgotten a lot of maths
  • Econometric Analysis, Greene: Best introductory econometrics textbook mainly because of the absurdly long but endlessly entertaining chapter endnotes
  • Market models, Alexander. 
  • The Elements of Statistical Learning, Hastie. The classic ML book.
  • Advances in Financial Machine Learning, Lopez de Prado. You're only allowed to read this once you've got the basics under your belt. Read my review.


Derivatives pricing and trading

Clearly what you read here depends on whether you are going to be a pricing quant, in which case you need to be able to throw around Ito's lemma in your sleep, or just punt around a few futures.

  • Quantitative finance for dummies, Bell. Good for dummies.
  • Paul Wilmott introduces quantitative finance, by .... well guess. Good for beginners.
  • Options, futures and other derivatives, Hull. The absolute classic, but overkill for many people. By law, though, it has to be on this list.
  • Derivative securities, Jarrow & Turnbull. Similar level to Hull, and actually (whispers) I prefer it.
  • Dynamic Hedging, Taleb. A bit of a marmite book (like Taleb himself really) but I found it very helpful when I was working as an options trader.


Risk management

  • Red-Blooded Risk: The Secret History of Wall Street, Aaron Brown. Non technical history of quant risk management over recent years from a dude that was there. 
  • Quantitative risk management, McNeil, Frey, Embrechts. Technical manual for risk managers.


Behavioural finance

  • Beyond greed and fear, Shefrin. Quite an old book now but a very good accessible introduction to the world of behavioural finance and relatively brief.  I suggest you read Thinking Fast and Slow after this if you are in a hurry; otherwise reverse the order.
  • Thinking Fast and Slow, Kahneman. Not just a great finance book. This book will literally change the way you think about thinking (see what I did there). Arguably it isn't necessary to read this to follow the behavioural finance literature. However if you care about whether behavioural finance has some kind of underpinning then its an absolute must.


Forecasting

  • How to predict the unpredictable, Poundstone.  
  • The signal and the noise, Silver. Yes it's the 538 guy
  • Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts, Annie Duke
  • Forecast: What Physics, Meteorology, and the Natural Sciences Can Teach Us About Economics. Mark Buchanan
  • Radical Uncertainty: Decision-making for an unknowable future. Mervyn King, John Kay


Financial economics

  • Fortune's Formula. Superb non technical book about the Kelly criterion. It manages to be entertaining but also incredibly instructive about the history of the links between gambling and the financial markets.
  • A random walk down Wall Street. This book has been around longer than me, and it's like marmite: you either agree with its efficient markets hypothesis creed or you don't. The later editions have drifted from being a useful survey of the various factor inefficiencies to being yet another 'how to' on personal investment. If you find an earlier edition in a second hand bookshop it's worth buying; otherwise Expected Returns is a better use of your money.
  • Expected returns, Antti Ilmanen. Absolute classic on return factors.
  • Irrational exuberance. Excellent book by Robert Shiller on speculative bubbles.
  • Capital ideas and Capital Ideas Evolving. Interesting history of the whole efficient market hypothesis approach.
  • Adaptive markets, Lo. 
  • Active Portfolio Management, Grinold and Kahn: A quantative approach for producing superior returns and selecting superior money managers.
  • Narrative Economics: How Stories Go Viral and Drive Major Economic Events. Robert J. Shiller
  • Modern Investment Management: An Equilibrium Approach: Bob Litterman et al. Absolute bible.


High frequency trading

These are very good general reading albeit somewhat polemical; I would like to see a recommendation for a good technical book on this subject:
  • Dark pools, Patterson.
  • Flash boys, Lewis. 



Fixed income

There isn't much here that is asset specific, but fundamentally I've spent slightly more time trading fixed income than anything else, so:
  • The Handbook of Fixed Income Securities, Fabozzi.
  • STIR futures, Aiken

General interest quant books

  • Nerds on wall street, Leinweber. Entertaining book written by someone who was there as the whole quant thing developed.
  • The Predictors: How a Band of Maverick Physicists Used Chaos Theory to Trade Their Way to a Fortune on Wall Street, Bass. Not as cheesy as the subtitle suggests: this is the book that got me into the systematic investment game. Doyne Farmer, now at Oxford, is one of the more interesting people in the finance world and a great speaker if you get the chance to hear him. Also worth reading (though a little less relevant to finance) is the prequel, The Eudaemonic Pie, which is about betting on roulette.
  • The Man Who Solved the Market: How Jim Simons Launched the Quant Revolution, Gregory Zuckerman. "Rentech. Probably the most successful hedge fund in the world." Also launched Donald Trump, thanks to Bob Mercer's money, but nobody's perfect.


Books by traders

  • The education of a speculator, Victor Niederhoffer. Incredibly random, with no attempt to impose a coherent worldview or grand theory of everything. Imposing such an overview would be a ridiculous thing to do anyway, but Taleb and Soros would have tried to do so...
  • Market wizards series, Schwager. You must have heard of this guy. Surely.
  • Why Aren't They Shouting?: A Banker's Tale of Change, Computers and Perpetual Crisis, Kevin Rodgers. Great history of the markets.
  • All those books that Nassim Taleb guy has written. 'Fooled by Randomness' is my favourite. They get a little more mad and harder to follow as time goes on.


Trading books

  • Following the trend: Diversified Managed Futures Trading, Andreas Clenow. Nice book on trading futures CTA style.
  • Stocks on the move, Clenow. Trading equities with momentum.
  • A Complete Guide To The Futures Markets, Jack Schwager. Buy this rather than the other futures books Jack has written; unless you really like Jack, and would like him to have as much of your money as is humanly possible.
  • Trading systems and methods, Perry Kaufman. A massive book with a four figure page count. Nevertheless it really is the bible of trading signals, and that is why everyone should buy it. Perry - is my cheque in the post?
  • Efficiently inefficient, Pedersen. Excellent book on trading some popular hedge fund strategies, interspersed with interviews.
  • The rise of Carry, Lee & Coldiron. My review.
  • Ernie Chan's various books.
  • Systematic Trading, Robert Carver
  • Leveraged Trading, Robert Carver


Useful blogs and websites


Summary

As always please feel free to comment below (then wait until I have the time to moderate your comment before publishing it). I'm especially looking for ideas for additional resources that I haven't come across, which I'll add to the lists above.