Friday, 2 July 2021

Talking to the dead / simple heuristic position selection / small account problems - part four / EPIC FAIL #2

Over the last few posts I've been grappling with the difficulties of trading futures with a retail-sized account. I've tried a couple of things so far: a complex dynamic optimisation (here and here), where I try to optimise the portfolio every day in the knowledge that I can only take integer positions, and then a simpler static approach, where I try to pick the best fixed set of instruments to trade given my account size - and then trade them.

In this post I return to a dynamic approach (choosing the best positions to hold each day from a very large set of instruments), but this time I'm going to use much simpler heuristic methods. I use the term heuristic to mean something you could explain to an elderly relative: let's call them Auntie Barbara.

I used to have an Auntie Barbara, but she died a long time ago. If there is an afterlife, and if they have internet there, and if she subscribes to this blog: Hi!

I've written this post to be fairly self-contained (I can't really expect Auntie Barbara to read all the previous posts; she will be too busy playing tennis with Marilyn Monroe or something) and also a bit simpler to follow than the previous three.

The setup

Here's the setup again. I have a universe of (currently) 48 futures markets I'd like to trade (for now - in practice I'm adding new instruments every few days, and in an ideal world there are around 150 I'd like to trade if I could). If I backtest their performance it looks great (this is just with the set of trading rules in chapter 15 of my book, 3 EWMAC + carry; but I do allow the instrument weights to be optimised):

That's a Sharpe Ratio of 1.18, pretty good for two trading rules (ewmac and carry). Oh the power of diversification...

Not only does it make money, it also (on average) has good risk targeting. Here's the rolling annualised standard deviation (which comes in at 22.2% on average, slightly under the target):

Auntie Barbara (AB): "Great! You always were a little smart alec. Can I get back to my jacuzzi now? I've got James Dean and Heath Ledger waiting for me."

* Auntie is communicating with me from the spirit world via telnet, hence the Courier typeface

Sorry Auntie, I cheated slightly there. That's the performance if I can take fractional futures positions, or equivalently what I could do with many millions of dollars.

This is what it looks like if I trade it with $100K (about £80K: this particular FX rate is roughly unchanged since my Auntie died)

I normally use $500K for these tests - but I'm trying to make the results starker.

AB "Why does it start going wrong, weirdly, not long after I've died? Are you saying this is my fault?"

Not at all! To begin with there are only a few instruments in the data. Then, as more are added, we struggle to take positions in every instrument due to rounding. We end up with many instruments that have no position at all; the positions we end up making (or losing) money from just happen to be those with relatively small contract sizes.

So the portfolio becomes more concentrated, and in expectation (and also in reality here) has worse performance. It also undershoots its risk, due to all that 'wasted' capacity in the instruments which can't take a position. There are many instruments here that we are just collecting data for, but can't hope to ever take a position in.
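To make the rounding problem concrete, here's a toy example (all numbers invented for illustration): with a small account, many unrounded optimal positions are a fraction of a contract, so they round to zero and only the instruments with relatively small contract sizes survive.

```python
# Toy illustration (all numbers invented): unrounded optimal positions for a
# small account. Anything well under half a contract rounds to zero.
optimal_positions = {
    "US10": 0.4, "SP500": 0.2, "GOLD": 0.3,
    "CORN": 1.3, "MXP": 2.6, "EDOLLAR": 0.45,
}

rounded_positions = {code: round(pos) for code, pos in optimal_positions.items()}

# Only the instruments with small contract sizes survive the rounding,
# so the portfolio collapses onto a couple of markets
held = [code for code, pos in rounded_positions.items() if pos != 0]
print(held)
```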

Now look at the rolling realised standard deviation again:

We're systematically undershooting, especially in more recent years when I have a lot more instruments in the dataset. The risk is also 'lumpier', reflecting the close to binary nature of the system.

AB "Hang on, I've just read your last couple of posts again. Or tried to. What happens if you do some kind of fancy dynamic optimisation on your positions each day?"

That doesn't work and is way too complicated.

AB "And what if you just select a group of markets and trade with those?"

Well, if I use the 16 instruments I identified in my last post as suitable for a $100K account, I get these results:

The fewer-markets portfolio is handicapped by having later starting data, but if I account for that:

AB "When does that data start now?"

The 11th May 1974

AB "Ah - that's your birthday. Coincidence?"

Well actually the data starts on 22nd April 1974, but that's close enough.

That feels slightly like cheating since they're identified using some forward looking information, but if I selected any 16 instruments on a rolling basis using any vaguely sensible methodology I'd expect on average to get similar results.

Basically we make up some of the ground on the full 40+ instrument portfolio compared to the rounded situation, but we never quite manage it (although the green curve looks as good, it's actually got a lower SR and underperforms in more recent years as we get more and more instruments in the full portfolio). In expectation 16 instruments, no matter how carefully chosen, will underperform 50; never mind 150.

The simplest possible approach?

AB "Well it's obvious what you should do"

Is it?

AB "Do you remember when you were a boy, and you'd invite all your friends to your birthday parties?"

I'm 47, Auntie Barbara. I'm not 100% sure what I did last Thursday.

AB "Well just bear with me then. Suppose you had 50 friends, and you could only invite 16 to your party. What would you do?"

I'd.... well I'd pick my favourite 16 friends (this is hypothetical! What kind of person has fifty 'friends'?).

AB "Now suppose you had a birthday party every single day. What would happen?"

Well... I suppose I'd pick whoever were my favourite 16 friends on that day. But, with respect, what on earth, (sorry, insensitive), what the hell (worse!), what in heaven does this have to do with the problem at hand?

AB "Hasn't the penny dropped yet? I thought you were a smart alec."

OK it has finally dropped. What I need to do is just hold positions in the 16 instruments that have the strongest absolute forecast on that day.

AB "Someone give the boy a medal"

I choose to ignore that. Let's see some code:

class newPositionSizing(PositionSizing):

    def get_subsystem_position(self, instrument_code: str) -> pd.Series:
        all_positions = self.get_pd_df_of_subsystem_positions()
        return all_positions[instrument_code]

    def get_pd_df_of_subsystem_positions(self) -> pd.DataFrame:
        all_forecasts = self.get_all_forecasts()
        list_of_dates = all_forecasts.index

        list_of_positions = []
        previous_days_positions = portfolioWeights()
        for passed_date in list_of_dates:
            positions = self.get_subsystem_positions_for_day(passed_date, previous_days_positions)
            list_of_positions.append(positions)  ## this append was missing from the original listing
            previous_days_positions = copy(positions)

        df_of_positions = pd.DataFrame(list_of_positions)
        df_of_positions.index = list_of_dates

        return df_of_positions

    def get_subsystem_positions_for_day(self,
                                        passed_date: datetime.datetime,
                                        previous_days_positions: portfolioWeights = arg_not_supplied) -> portfolioWeights:

        if previous_days_positions is arg_not_supplied:
            previous_days_positions = portfolioWeights()
        forecasts = self.get_forecasts_for_day(passed_date)

        initial_positions_all_capital = self.get_initial_positions_for_day_using_all_capital(passed_date)

        positions = calculate_positions_for_day(previous_days_positions=previous_days_positions,
                                                forecasts=forecasts,
                                                initial_positions_all_capital=initial_positions_all_capital)
        list_of_instruments = self.parent.get_instrument_list()
        positions = positions.with_zero_weights_for_missing_keys(list_of_instruments)

        return positions

    def get_initial_positions_for_day_using_all_capital(self, passed_date: datetime.datetime) -> portfolioWeights:
        all_positions = self.get_all_initial_positions_using_all_capital()
        all_positions_on_day = all_positions.loc[passed_date]

        return portfolioWeights(all_positions_on_day.to_dict())

    def get_forecasts_for_day(self, passed_date: datetime.datetime) -> portfolioWeights:
        all_forecasts = self.get_all_forecasts()

        todays_forecasts = all_forecasts.loc[passed_date]

        return portfolioWeights(todays_forecasts.to_dict())

    def get_all_forecasts(self) -> pd.DataFrame:
        instrument_list = self.parent.get_instrument_list()
        forecasts = [self.get_combined_forecast(instrument_code)
                     for instrument_code in instrument_list]

        forecasts_as_pd = pd.concat(forecasts, axis=1)
        forecasts_as_pd.columns = instrument_list
        forecasts_as_pd = forecasts_as_pd.ffill()

        return forecasts_as_pd

    def get_all_initial_positions_using_all_capital(self) -> pd.DataFrame:
        instrument_list = self.parent.get_instrument_list()
        positions = [self.get_initial_position_using_all_capital(instrument_code)
                     for instrument_code in instrument_list]

        positions_as_pd = pd.concat(positions, axis=1)
        positions_as_pd.columns = instrument_list
        positions_as_pd = positions_as_pd.ffill()

        return positions_as_pd

    def get_initial_position_using_all_capital(self, instrument_code: str) -> pd.Series:
        print("Calculating subsystem position for %s" % instrument_code)

        initial_position = self.get_volatility_scalar(instrument_code)

        return initial_position

This code actually contains some future proofing: it is written to handle path dependence in positions, which we're not actually going to use yet.

def calculate_positions_for_day(previous_days_positions: portfolioWeights,
                                forecasts: portfolioWeights,
                                initial_positions_all_capital: portfolioWeights):

    ## Get risk budget per market
    risk_budget_per_market = proportionate_risk_budget(forecasts)
    maximum_positions = int(1.0 / risk_budget_per_market)
    idm = min(maximum_positions**.35, 2.5)
    idm_with_risk = risk_budget_per_market * idm

    initial_positions = signed_initial_position_given_risk_budget(
        initial_positions_all_capital,
        forecasts=forecasts,
        risk_budget=idm_with_risk)  ## final argument reconstructed; truncated in the original listing

    list_of_tradeable_instruments = tradeable_instruments(
        initial_positions=initial_positions,
        forecasts=forecasts)

    current_instruments_with_positions = []

    ## Sort markets by abs strength of forecast
    ## Iteratively from strongest to weakest:
    list_of_instruments_strongest_forecast_first = \
        sort_list_of_instruments_by_forecast_strength(
            forecasts=forecasts,
            instrument_list=list_of_tradeable_instruments)

    for instrument_to_add in list_of_instruments_strongest_forecast_first:
        ## If already have position, keep it on - wouldn't be in this list
        if len(current_instruments_with_positions) < maximum_positions:
            ## If haven't got a position on, and risk budget remaining, add a position
            current_instruments_with_positions.append(instrument_to_add)
        ## If no markets remain with current positions in could be removed group, halt

    new_positions = fill_positions_from_initial(
        current_instruments_with_positions=current_instruments_with_positions,
        initial_positions=initial_positions)  ## final argument reconstructed; truncated in the original listing

    return new_positions

Most of that should be self-explanatory. The 'initial' position (perhaps badly named) is the position the system would want to take if we put all of our trading capital into that single instrument. We then scale that by a risk budget, which is equivalent to an 'instrument weight'; here that is just 1/N (N being the number of assets we're currently trading), with a lower limit of 6.25% (which ensures we have no more than 16 positions; this value can be tweaked depending on your capital). The IDM is calculated as N^0.35 (note that if all subsystems had zero correlation this would be N^0.5, so this is a reasonable approximation for correlated subsystems), with my normal upper limit of IDM = 2.5:

def proportionate_risk_budget(forecasts: portfolioWeights):
    market_count = market_count_in_forecasts(forecasts)
    proportion = 1.0 / market_count

    use_proportion = max(proportion, 1.0 / 16)

    return use_proportion
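The IDM approximation from the text can be written as a throwaway function (my own sketch, not pysystemtrade): with N uncorrelated subsystems the diversification multiplier would be N^0.5, so N^0.35 capped at 2.5 is a conservative stand-in for realistically correlated subsystems.

```python
# Sketch of the IDM heuristic described above: N**0.35, capped at 2.5.
# With zero correlations the 'true' multiplier would be N**0.5.
def heuristic_idm(number_of_positions: int, cap: float = 2.5) -> float:
    return min(number_of_positions ** 0.35, cap)

for n in [1, 4, 16]:
    print(n, round(heuristic_idm(n), 2))
```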

Now, a 'tradeable' instrument is one with a non-NA forecast, and also a position of at least a single contract. There's no point wasting risk capital on a position that isn't at least one contract.

AB "No point inviting a kid to the party who can't come. That's a waste of an invitation."


def tradeable_instruments(initial_positions: portfolioWeights,
                          forecasts: portfolioWeights):
    ## Non tradeable instruments:
    ## We don't open up new positions in non tradeable instruments, but we may
    ## maintain positions in existing ones

    valid_forecasts = instruments_with_valid_forecasts(forecasts)
    possible_positions = instruments_with_possible_positions(initial_positions)

    valid_instruments = list(set(possible_positions).intersection(set(valid_forecasts)))

    return valid_instruments

def instruments_with_valid_forecasts(forecasts: portfolioWeights) -> list:
    valid_keys = [instrument_code
                  for instrument_code, forecast_value in forecasts.items()
                  if _valid_forecast(forecast_value)]
    return valid_keys

def _valid_forecast(forecast_value: float):
    if np.isnan(forecast_value):
        return False
    if forecast_value == 0.0:
        return False
    return True

def instruments_with_possible_positions(initial_positions: portfolioWeights) -> list:
    valid_keys = [instrument_code
                  for instrument_code, position in initial_positions.items()
                  if _possible_position(position)]
    return valid_keys

def _possible_position(position: float):
    if np.isnan(position):
        return False
    if abs(position) < 1.0:
        return False
    return True  ## this return was missing from the original listing

Let's have a gander at what this thing is doing:

I've zoomed in to the end of this plot, which shows positions for Eurodollar at various stages. The blue line shows what position we'd have on without position rounding, and with a fixed capital weight of 6.25% (equal weight across 16 instruments) multiplied by the IDM (2.5 here). The orange line - which is mostly on the blue line - shows the position we'd have on without rounding, once we've applied the 'You need to have one of the 16 strongest forecasts to come to the party' rule (I need a catchier name).

So for example between March and mid April this goes to zero, as the forecast weakens.

 Finally the green line shows the rounded position, once I've applied my usual buffering rule. You can see that's mostly a rounded version of the orange line.

OK. It's not great, although the last 10 years is pretty good. Also the vol targeting is somewhat poor:

... coming in at an average of 12% a year. 

Horrible path dependence

Let's turn our attention first to the poor performance. Some of that is due to costs, which go up from around 10bp of SR in the large and reduced benchmarks to 26bp of SR. As I've said many times before, pre-cost performance is (to an extent) random, but costs are predictable. Not a surprise, when a forecast going from being ranked 17th best to 16th best will result in a trade; and then possibly the next day the same position being closed.

AB "It seems unfair to kick someone out of the party, just because they've gone from being your 15th to 16th favourite friend. Maybe you should let kids stay until they are really not your friends anymore."

OK, let's try it. I propose the following rule (bear in mind that my forecasts are scaled such that a forecast of +10 is an average long):
  • If we have a position, and the absolute forecast is more than 5, then hang on to it.
  • If we don't have a position, and the absolute forecast is more than 5, then try to open a new position, starting with the instruments with the highest forecasts.
  • If we already have the maximum number of positions open, then:
    • For instruments that have open positions, starting with the lowest forecast, close the position and replace it with the new instrument.
    • Do not close a position if the absolute forecast is more than 5.
    • Once all possible positions (absolute forecast < 5) have been closed, do not open any new positions.

In summary:
  • Absolute forecasts greater than 10:
    • Existing position: won't be closed
    • New position: probably will be opened
  • Absolute forecasts between 5 and 10:
    • Existing positions: won't be closed
    • New positions: may be opened
  • Absolute forecasts less than 5:
    • Existing positions: may be closed
    • New positions: won't be opened
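As a sketch, here's that whole open/close rule as a single self-contained function (the name and data structures are my own, not pysystemtrade): keep anything with an absolute forecast of 5 or more, open the strongest new candidates, and evict the weakest weak holdings only to make room.

```python
# Sketch of the open/close hysteresis rule described above (my own naming,
# not the actual system code). Forecasts are scaled so +10 is an average long.
def select_positions(current: set, forecasts: dict, max_positions: int = 16) -> set:
    positions = set(current)
    # weak holdings, weakest first: these can be evicted to make room
    removable = sorted(
        (code for code in current if abs(forecasts.get(code, 0.0)) < 5),
        key=lambda code: abs(forecasts.get(code, 0.0)))
    # strong forecasts without a position, strongest first
    candidates = sorted(
        (code for code in forecasts
         if code not in current and abs(forecasts[code]) >= 5),
        key=lambda code: abs(forecasts[code]), reverse=True)

    for code in candidates:
        if len(positions) < max_positions:
            positions.add(code)                  # spare risk budget: just open
        elif removable:
            positions.discard(removable.pop(0))  # evict weakest weak holding
            positions.add(code)
        else:
            break                                # full, nothing weak left to evict
    return positions
```

For example, holding A (forecast 12) and B (forecast 3) with a two-position limit, a new candidate C (forecast 8) evicts B, while D (forecast 6) finds nothing left to replace.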

def calculate_positions_for_day(previous_days_positions: portfolioWeights,
                                forecasts: portfolioWeights,
                                initial_positions_all_capital: portfolioWeights):

    risk_budget_per_market = proportionate_risk_budget(forecasts)
    maximum_positions = int(1.0 / risk_budget_per_market)
    idm = min(maximum_positions**.35, 2.5)
    idm_with_risk = risk_budget_per_market * idm

    initial_positions = signed_initial_position_given_risk_budget(
        initial_positions_all_capital,
        forecasts=forecasts,
        risk_budget=idm_with_risk)  ## final argument reconstructed; truncated in the original listing

    list_of_tradeable_instruments = tradeable_instruments(
        initial_positions=initial_positions,
        forecasts=forecasts)

    current_instruments_with_positions = from_portfolio_weights_to_instrument_list(previous_days_positions)

    ## forecast less than +5 or non tradeable (could be removed),
    ## ordered by weakness of forecast
    list_of_removable_instruments = removable_instruments_with_positions_weakest_forecasts_last(
        current_instruments_with_positions,
        forecasts=forecasts)

    ## Sort markets by abs strength of forecast
    ## Iteratively from strongest to weakest:
    list_of_instruments_with_no_position_strongest_forecast_first = \
        instruments_with_no_position_strongest_forecast_first(
            forecasts=forecasts,
            current_instruments_with_positions=current_instruments_with_positions,
            list_of_tradeable_instruments=list_of_tradeable_instruments)

    for instrument_to_add in list_of_instruments_with_no_position_strongest_forecast_first:
        ## If already have position, keep it on - wouldn't be in this list
        if len(current_instruments_with_positions) < maximum_positions:
            ## If haven't got a position on, and risk budget remaining, add a position
            current_instruments_with_positions.append(instrument_to_add)
        elif len(list_of_removable_instruments) > 0:
            ## If haven't got a position on, and no risk budget remaining,
            ## Remove position from market with current position and weakest
            ## forecast in 'could be removed' group
            instrument_to_remove = list_of_removable_instruments.pop()
            current_instruments_with_positions.remove(instrument_to_remove)
            current_instruments_with_positions.append(instrument_to_add)
        ## If no markets remain with current positions in could be removed group, halt

    new_positions = fill_positions_from_initial(
        current_instruments_with_positions=current_instruments_with_positions,
        initial_positions=initial_positions)  ## final argument reconstructed; truncated in the original listing

    return new_positions

def from_portfolio_weights_to_instrument_list(positions: portfolioWeights):
    instrument_list = [instrument_code for instrument_code, position in positions.items()
                       if _valid_position(position)]
    return instrument_list

def _valid_position(position: float):
    if np.isnan(position):
        return False
    if position == 0.0:
        return False

    return True

def removable_instruments_with_positions_weakest_forecasts_last(current_instruments_with_positions: list,
                                                               forecasts: portfolioWeights):
    instrument_with_weak_forecasts = instruments_with_weak_or_non_existent_forecasts(forecasts)
    instruments_with_positions_and_weak_forecasts = list(
        set(current_instruments_with_positions).intersection(instrument_with_weak_forecasts))

    ## sorted strongest first, so the weakest forecasts are last
    instruments_with_positions_and_weak_forecasts_weakest_forecast_last = \
        sort_list_of_instruments_by_forecast_strength(
            forecasts=forecasts,
            instrument_list=instruments_with_positions_and_weak_forecasts)

    return instruments_with_positions_and_weak_forecasts_weakest_forecast_last

def instruments_with_weak_or_non_existent_forecasts(forecasts: portfolioWeights) -> list:
    weak_forecasts = [instrument_code
                      for instrument_code, forecast_value in forecasts.items()
                      if _weak_forecast(forecast_value)]
    return weak_forecasts

def _weak_forecast(forecast_value: float):
    if np.isnan(forecast_value):
        return True
    if abs(forecast_value) < 5.0:
        return True
    return False

def sort_list_of_instruments_by_forecast_strength(forecasts: portfolioWeights,
                                                  instrument_list) -> list:

    tuples_to_sort = [(instrument_code,
                       _get_forecast_sort_key_given_value(forecasts[instrument_code]))
                      for instrument_code in instrument_list]
    sorted_tuples = sorted(tuples_to_sort, key=lambda tup: tup[1], reverse=True)
    list_of_instruments = [x[0] for x in sorted_tuples]

    return list_of_instruments

def _get_forecast_sort_key_given_value(forecast_value: float):
    if np.isnan(forecast_value):
        return 0.0
    return abs(forecast_value)

def instruments_with_no_position_strongest_forecast_first(forecasts: portfolioWeights,
                                                          current_instruments_with_positions: list,
                                                          list_of_tradeable_instruments: list):

    tradeable_instruments_setted = set(list_of_tradeable_instruments)
    instruments_with_no_position = list(
        tradeable_instruments_setted.difference(current_instruments_with_positions))  ## reconstructed
    list_of_instruments_with_strong_forecasts = instruments_with_strong_forecasts(forecasts)

    list_of_instruments_with_strong_forecasts_and_no_position = \
        list(set(instruments_with_no_position).intersection(
            list_of_instruments_with_strong_forecasts))  ## reconstructed

    sorted_instruments = sort_list_of_instruments_by_forecast_strength(
        forecasts=forecasts,
        instrument_list=list_of_instruments_with_strong_forecasts_and_no_position)

    return sorted_instruments

def instruments_with_strong_forecasts(forecasts: portfolioWeights) -> list:
    strong_forecasts = [instrument_code
                        for instrument_code, forecast_value in forecasts.items()
                        if _strong_forecast(forecast_value)]
    return strong_forecasts

def _strong_forecast(forecast_value: float):
    if np.isnan(forecast_value):
        return False
    if abs(forecast_value) < 5.0:
        return False
    return True

That improves things a little; the cost comes down to 20bp of SR. But that's still a lot - about double what it is in the benchmark cases.

Let's restrict the universe of instruments we can consider adding to those with forecasts over 10, rather than over 5. Then we have:

  • Absolute forecasts greater than 10:
    • Existing position: won't be closed
    • New position: probably will be opened
  • Absolute forecasts between 5 and 10:
    • Existing positions: won't be closed
    • New positions: won't be opened
  • Absolute forecasts less than 5:
    • Existing positions: may be closed
This creates a 'no trade zone' for forecasts between 5 and 10.

.... and it makes almost no difference, lowering the costs by just 1bp of SR.
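For clarity, the three forecast zones can be written down as a trivial function (my own sketch, not the actual system code):

```python
# Sketch of the forecast zones described above: below 5 a position may be
# closed, between 5 and 10 we hold what we have but open nothing new,
# at 10 or above we will open new positions
def forecast_zone(abs_forecast: float) -> str:
    if abs_forecast < 5:
        return "may close, won't open"
    elif abs_forecast < 10:
        return "hold only (no-trade zone)"
    else:
        return "hold, or open new"

print(forecast_zone(3), "|", forecast_zone(7), "|", forecast_zone(15))
```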

Clearly I could play with these boundaries until I got a nicer result, but this reeks of implicit fitting and I feel the gap is just too large.

Some other things we could try

There are more complicated things we could do here, for example considering diversification when adding potential instrument positions, allocating the risk bucket by asset class or instrument cluster, perhaps a more sophisticated approach to costs.... but I think we'd just end up in the bad old world of complex dynamic optimisation that I narrowly escaped from in the second post.


I feel this particular dead horse has been flogged enough. There is no easy way to get around the problem of having insufficient capital to trade loads and loads of futures markets. Any kind of dynamic optimisation, either by simple ranking (this post) or complex formula (posts 1 and 2), just isn't very effective, and involves making the nice simple straightforward trading system very ugly indeed.

By far the simplest approach is to sensibly choose some subset of those markets, and use those as your static set of instruments as I did in post #3 of this series. This also happens to be the best performing option in a backtest. For the $500K of capital that I have the effect on performance is fairly minimal in any case.

Yes, there will be FOMO if an instrument I don't own shows a seriously good trend, but I will just have to live with that.

Things are clearly tougher if you only have $100K or less, but then as my third book points out maybe you should be trading other leveraged instruments.

My personal 'to do' list now consists of tactically reweighting my portfolio towards the 28 instruments I found to be optimal for my account size here, and putting into place the technology to allow regular (annual?) reviews of my set of instruments.

Thanks for your help Auntie B.

AB "You're welcome. And I hope for your sake that the Jacuzzi is still warm."

Tuesday, 29 June 2021

Static optimisation of the best set of instruments to hold in a futures trading system

 In a couple of recent posts (here and here) I explored the idea of using dynamic optimisation to deal with the following problem: diversification across markets is good, but requires more capital.

That didn't work out so well!

I can also appreciate that this is *way* beyond most people's idea of a simple trend following system. And it flies in the face of much I've said in terms of keeping things as simple as possible. Many people would prefer to trade a fixed subset of markets, which gives them the best expected outcome for their capital.

I've explored this somewhat in the past, in particular in this post and also in my third book Leveraged Trading, but it's fair to say I have never before presented and tested a systematic and automated methodology for market selection given capital requirements.

Until now...

How should we choose which instrument(s) to trade?

This should be a fairly well worn list of criteria if you've read anything of mine before, but here goes:

  • Instruments should be not too expensive. In this post I talked about a maximum cost of 1 SR unit per trade.
  • Instruments should be not too large. In particular (as discussed here) if we can't hold at least three contracts with a maximum forecast of 20 (twice the average forecast), then we'll suffer a lower SR through not being able to target our required expected risk as closely. 
  • Instruments should meet basic liquidity criteria; this post suggests at least $1.25 million per day in risk units, and 100 contracts per day.
  • Instruments should be as diversifying as possible. A portfolio consisting of all the US bond futures would be spectacularly pointless.
What is not on this list? Well, we aren't interested in the pre-cost backtested SR of an instrument. These are just too unreliable and have minimal statistical significance.

Thinking a little more deeply about the above criteria, I would say that we can quantify:
  • The effect of costs: once we've fitted some forecast weights for a given instrument, we know its expected annualised turnover, and then we can calculate the expected cost penalty in SR units.
  • The effect of size (at least in the handwaving way described here: "There is around a 20% penalty to Sharpe if you can only hold one contract; around 5% with a maximum position of two. This falls to zero if you can hold 3 or 4 contracts").
  • The effect of diversification. Correlations are fairly predictable, and the correlation of instrument trading subsystems are even more so. 
With some assumption of pre-cost SR for each instrument (call it 0.5), and some instrument weights (naturally fitted by handcrafting), we can then calculate the expected post cost SR for a portfolio of any set of instruments. This portfolio will consist of a series of trade offs; an expensive instrument may be included because it is massively diversifying.

Liquidity is harder to think about: instead I'd stick to using the hard limit mentioned above.

To proceed we could do a massive grid search (I'm good at those!) where each node of the grid is a subset of the possible instruments, but I think it makes more sense to proceed iteratively. We begin with one instrument, and then successively add further instruments to that portfolio. At some point it wouldn't be worth adding more instruments (because the benefit of more diversification would be outweighed by the cost of size effects, or by running out of cheap instruments to trade).

The advantage of this is that we can easily do things like add more instruments as our capital grows, or replace instruments if some have to be deleted.  

It is also possible that the goodness of instruments can change over time. In particular, if an instrument becomes riskier then we'll hold smaller positions (which can potentially cause size issues), but it would also be cheaper to trade on a risk adjusted basis (which improves cost). The reverse would be true if volatility fell. Liquidity and correlations can also change, but typically more slowly (I use very long lookbacks to measure this kind of correlation).

Most likely this substitution of instruments would be an annual process in live trading. 

With the iterative method we can produce what is effectively a 'Top 100' of instruments, ranked in order of preference, allowing some kind of buffering on addition and deletion (like with equity indices; we'd only drop an instrument if it fell more than a certain number of places outside the top section).
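Such a buffer rule might look like this sketch (function name and parameters are my own invention): a held instrument is only dropped if it falls more than `buffer` places outside the top `n`, and gaps are filled from the top of the ranking.

```python
# Sketch of a 'Top 100' style buffer rule (my own invention, not system code):
# keep a held instrument while it stays within the top n + buffer places,
# then fill any spare slots from the top of the current ranking
def apply_ranking_buffer(held: list, ranking: list, n: int, buffer: int) -> list:
    keep = [code for code in held
            if code in ranking and ranking.index(code) < n + buffer]
    new_entries = [code for code in ranking[:n] if code not in keep]
    return (keep + new_entries)[:n]
```

So with n=3 and buffer=1, an instrument ranked 4th is retained, but one ranked 6th is replaced by the best-ranked instrument not already held.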

We can envisage a situation in which we occasionally swap instruments round by phasing their instrument weight from an old to a new instrument.

Alternatively, we could just keep the set of instruments fixed.

How should we choose the starting instrument?

First we remove all instruments that fail the liquidity criteria from our universe (we don't do this in the backtest, but it's something to consider for production). Then we calculate, estimating across all instruments:

  • the expected position size (for now assuming a nominal instrument weight of 5% and instrument diversification multiplier of 2.5), and hence the size effect penalty
  • the expected turnover of the instrument
  • an expected SR for that instrument, equal to Notional SR - (turnover * cost per trade SR) - (size effect penalty)

The size effect penalty is unlikely to have any effect, except for the very largest contracts (like Lumber and full size Bitcoin). It is calculated as follows (assuming notional SR of 0.5):

  • Using current risk, what is the current optimal position with a forecast of 20? Call that P.
  • Remember: "There is around a 20% penalty to Sharpe if you can only hold one contract; around 5% with a maximum position of two. This falls to zero if you can hold 3 or 4 contracts". 
  • With a notional SR of 0.5 a 20% penalty is 0.1 SR units and a 5% penalty is 0.025 units. A slightly conservative fit to these points is a penalty of 0.125 / P^2 SR units.
  • Something with a P of less than 0.5 is effectively untradeable and should have a penalty of 'infinity' SR units. 
Here's some pysystemtrade code. You can hopefully modify this for your own purposes if you're not a user.

def net_SR_for_instrument_in_system(system, instrument_code, instrument_weight_idm=0.25):

    maximum_pos_final = calculate_maximum_position(system, instrument_code,
                                                   instrument_weight_idm=instrument_weight_idm)
    trading_cost = calculate_trading_cost(system, instrument_code)

    return net_SR_for_instrument(maximum_position=maximum_pos_final,
                                 trading_cost=trading_cost)  ## final argument reconstructed; truncated in the original listing

# To begin with, we assume that the instrument weight is at least 5% with an IDM of 1.0
# Otherwise we'd end up adding too many large sized contracts initially
# You may need to tweak this for small portfolios

max_instrument_weight = 0.05
notional_starting_IDM = 1.0
minimum_instrument_weight_idm = max_instrument_weight * notional_starting_IDM

from copy import copy

def calculate_maximum_position(system, instrument_code,
                               instrument_weight_idm=0.25):
    if instrument_weight_idm == 0:
        return 0.0

    if instrument_weight_idm > minimum_instrument_weight_idm:
        instrument_weight_idm = copy(minimum_instrument_weight_idm)

    pos_at_average = system.positionSize.get_volatility_scalar(instrument_code)
    pos_at_average_in_system = pos_at_average * instrument_weight_idm
    forecast_multiplier = system.combForecast.get_forecast_cap() / system.positionSize.avg_abs_forecast()

    maximum_pos_final = pos_at_average_in_system.iloc[-1] * forecast_multiplier

    return maximum_pos_final

def calculate_trading_cost(system, instrument_code):
    turnover = system.accounts.subsystem_turnover(instrument_code)
    SR_cost_per_trade = system.accounts.get_SR_cost_per_trade_for_instrument(instrument_code)

    trading_cost = turnover * SR_cost_per_trade

    return trading_cost

def net_SR_for_instrument(maximum_position, trading_cost, notional_SR=0.5):
    return notional_SR - trading_cost - size_penalty(maximum_position)

def size_penalty(maximum_position):
    if maximum_position < 0.5:
        return 9999
    return 0.125 / maximum_position**2

list_of_instruments = system.get_instrument_list()
all_results = []
for instrument_code in list_of_instruments:
    all_results.append((instrument_code,
                        net_SR_for_instrument_in_system(system, instrument_code)))

all_results = sorted(all_results, key=lambda tup: tup[1])

The 'worst' instruments using this metric are Copper, AEX and Palladium which all have less than 0.5 contracts of position.

And here are the very best instruments right now:

[('EU-DIV30', 0.452), ('US10', 0.455), ('EDOLLAR', 0.455), 
('KOSPI_mini', 0.458), ('GAS_US_mini', 0.46), ('US5', 0.463), 
('NASDAQ_micro', 0.466), ('MXP', 0.472), ('SP500_micro', 0.483), 
('GOLD_micro', 0.483)]

So we're going to start trading with the Gold micro future as our first instrument.

best_market = all_results[-1][0]

How should we choose the n+1 instrument?

Now what? We need to choose another instrument! And then another, and then another...

  • iterate over all instruments not currently in the portfolio
    • for a given instrument, construct a portfolio consisting of the old portfolio + the given instrument
    • allocate instrument weights using the handcrafting portfolio weighting methodology
    • Given the expected SR for each instrument, and the instrument weights, measure the expected portfolio SR
  • Choose the instrument with the highest expected portfolio SR. This will be an instrument that provides the best tradeoff between diversification, costs, and size penalty.
  • Repeat
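The steps above need the full backtest machinery, but the core idea fits in a few lines. Here's a self-contained toy version: the instrument names, the net SR numbers, the correlations, and the equal-risk-weight shortcut are all illustrative (the real code below uses handcrafted weights and estimates pulled from the backtest):

```python
import numpy as np

def expected_portfolio_SR(mu, corr):
    # expected SR of an equal risk weighted portfolio: w'mu / sqrt(w'Cw)
    n = len(mu)
    w = np.ones(n) / n
    return float(w @ mu) / float(np.sqrt(w @ corr @ w))

def greedy_select(names, net_SR, corr, tolerance=0.9):
    # start with the instrument with the highest net SR, then repeatedly add
    # whichever candidate gives the highest expected portfolio SR, halting
    # once the SR drops below tolerance * best-SR-seen-so-far
    idx = {name: i for i, name in enumerate(names)}
    chosen = [max(names, key=lambda name: net_SR[idx[name]])]
    unused = [name for name in names if name not in chosen]
    max_SR = 0.0
    while unused:
        def SR_with(candidate):
            members = [idx[name] for name in chosen + [candidate]]
            return expected_portfolio_SR(net_SR[members],
                                         corr[np.ix_(members, members)])
        best = max(unused, key=SR_with)
        new_SR = SR_with(best)
        if new_SR < max_SR * tolerance:
            break  # portfolio too big, SR falling
        chosen.append(best)
        unused.remove(best)
        max_SR = max(max_SR, new_SR)
    return chosen, max_SR

names = ["gold", "bond", "equity"]
net_SR = np.array([0.50, 0.45, 0.40])
corr = np.array([[1.0, 0.0, 0.2],
                 [0.0, 1.0, 0.1],
                 [0.2, 0.1, 1.0]])
print(greedy_select(names, net_SR, corr))  # -> (['gold', 'bond', 'equity'], ~0.71)
```

With these toy numbers all three instruments get added, since diversification means each addition raises the expected portfolio SR despite the lower individual SRs.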

list_of_correlations = system.portfolio.get_instrument_correlation_matrix()
corr_matrix = list_of_correlations.corr_list[-1]

from sysquant.optimisation.optimisers.handcraft import *
from sysquant.estimators.estimates import Estimates, meanEstimates, stdevEstimates
from sysquant.optimisation.shared import neg_SR
from syscore.dateutils import WEEKS_IN_YEAR

def portfolio_sizes_and_SR_for_instrument_list(system, corr_matrix, instrument_list):

    estimates = build_estimates(instrument_list, corr_matrix)

    handcraft_portfolio = handcraftPortfolio(estimates)
    risk_weights = handcraft_portfolio.risk_weights()

    SR = estimate_SR_given_weights(system=system,
                                   risk_weights=risk_weights,
                                   handcraft_portfolio=handcraft_portfolio)

    portfolio_sizes = estimate_portfolio_sizes_given_weights(system,
                                                             risk_weights=risk_weights,
                                                             handcraft_portfolio=handcraft_portfolio)

    return portfolio_sizes, SR

def build_estimates(instrument_list, corr_matrix, notional_years_data=30):
    # we ignore differences in SR (and stdev) for creating instrument weights
    mean_estimates = meanEstimates(dict(
        (instrument_code, 1.0)
        for instrument_code in instrument_list))

    stdev_estimates = stdevEstimates(dict(
        (instrument_code, 1.0)
        for instrument_code in instrument_list))

    estimates = Estimates(
        correlation=corr_matrix.subset(instrument_list),
        mean=mean_estimates,
        stdev=stdev_estimates,
        frequency="W",
        data_length=notional_years_data * WEEKS_IN_YEAR)

    return estimates

import numpy as np

def estimate_SR_given_weights(system, risk_weights, handcraft_portfolio: handcraftPortfolio):
    instrument_list = list(risk_weights.keys())

    mean_estimates = mean_estimates_from_SR_function_actual_weights(system,
                                                                    risk_weights=risk_weights,
                                                                    handcraft_portfolio=handcraft_portfolio)

    # align weights, means and correlations in a consistent order
    # (the correlation accessor is reconstructed - check sysquant for the exact name)
    wt = np.array([risk_weights[code] for code in instrument_list])
    mu = np.array([mean_estimates[code] for code in instrument_list])
    cm = handcraft_portfolio.estimates.correlation.subset(instrument_list).values

    SR = -neg_SR(wt, cm, mu)

    return SR

def mean_estimates_from_SR_function_actual_weights(system, risk_weights, handcraft_portfolio: handcraftPortfolio):
    instrument_list = list(risk_weights.keys())
    actual_idm = min(2.5, handcraft_portfolio.div_mult(risk_weights))

    mean_estimates = meanEstimates(dict(
        (instrument_code,
         net_SR_for_instrument_in_system(system, instrument_code,
                                         instrument_weight_idm=actual_idm * risk_weights[instrument_code]))
        for instrument_code in instrument_list))

    return mean_estimates

def estimate_portfolio_sizes_given_weights(system, risk_weights, handcraft_portfolio: handcraftPortfolio):
    instrument_list = list(risk_weights.keys())
    idm = handcraft_portfolio.div_mult(risk_weights)

    portfolio_sizes = dict(
        (instrument_code,
         round(calculate_maximum_position(system, instrument_code,
                                          instrument_weight_idm=idm * risk_weights[instrument_code]), 1))
        for instrument_code in instrument_list)

    return portfolio_sizes

from copy import copy

set_of_instruments_used = [best_market]

unused_list_of_instruments = copy(list_of_instruments)
unused_list_of_instruments.remove(best_market)
max_SR = 0.0

while len(unused_list_of_instruments) > 0:
    SR_list = []
    portfolio_sizes_dict = {}
    for instrument_code in unused_list_of_instruments:
        instrument_list = set_of_instruments_used + [instrument_code]

        portfolio_sizes, SR_this_instrument = \
            portfolio_sizes_and_SR_for_instrument_list(system, corr_matrix, instrument_list)
        portfolio_sizes_dict[instrument_code] = portfolio_sizes
        SR_list.append((instrument_code, SR_this_instrument))

    SR_list = sorted(SR_list, key=lambda tup: tup[1])
    selected_market = SR_list[-1][0]
    new_SR = SR_list[-1][1]

    if new_SR < (max_SR * .9):
        print("PORTFOLIO TOO BIG! SR falling")
        break

    portfolio_size_with_market = portfolio_sizes_dict[selected_market]
    print("Portfolio %s SR %.2f" % (str(set_of_instruments_used), new_SR))
    print(str(portfolio_size_with_market))

    set_of_instruments_used.append(selected_market)
    unused_list_of_instruments.remove(selected_market)

    if new_SR > max_SR:
        max_SR = new_SR

And here is the output, well at least some of it:

Portfolio ['GOLD_micro'] SR 0.69
{'GOLD_micro': 80.0, 'KOSPI_mini': 54.9}
Portfolio ['GOLD_micro', 'KOSPI_mini'] SR 0.87
{'GOLD_micro': 62.2, 'KOSPI_mini': 58.8, 'SHATZ': 334.5}
Portfolio ['GOLD_micro', 'KOSPI_mini', 'SHATZ'] SR 1.09
Expected SR is ramping up as we add our first few instruments: a metal, an equity, and a bond.  Let's skip ahead a bit:

Portfolio ['GOLD_micro',.... 'GBP'] SR 1.67
Portfolio ['GOLD_micro', ..., 'KR3'] SR 1.71
Portfolio ['GOLD_micro', ... 'V2X'] SR 1.73
Portfolio ['GOLD_micro', ... 'NZD'] SR 1.74
Portfolio ['GOLD_micro', .... 'BTP'] SR 1.75
Portfolio ['GOLD_micro', ...'NASDAQ_micro'] SR 1.76
Portfolio ['GOLD_micro', ...., 'EUR'] SR 1.77
Portfolio ['GOLD_micro', ...., 'KR10'] SR 1.79
Portfolio ['GOLD_micro', .... 'LIVECOW'] SR 1.77
Portfolio ['GOLD_micro', ..., 'SMI'] SR 1.76
Portfolio ['GOLD_micro', ...., 'US10'] SR 1.77
Portfolio ['GOLD_micro', ...., 'BITCOIN'] SR 1.77
Portfolio ['GOLD_micro', .... 'EU-DIV30'] SR 1.77
Portfolio ['GOLD_micro', ..., 'BOBL'] SR 1.77
Portfolio ['GOLD_micro', ...., 'EUROSTX'] SR 1.77
Portfolio ['GOLD_micro', ...., 'WHEAT'] SR 1.77
Portfolio ['GOLD_micro', ...., 'OAT'] SR 1.74
Portfolio ['GOLD_micro', ..., 'CORN'] SR 1.75
Portfolio ['GOLD_micro', ...., 'US20'] SR 1.72
Portfolio ['GOLD_micro', ..., 'BUND'] SR 1.70
Portfolio ['GOLD_micro', ...., 'PLAT'] SR 1.71
Portfolio ['GOLD_micro', ...., 'SP500_micro'] SR 1.70
Portfolio ['GOLD_micro', ... 'AUD'] SR 1.68
Portfolio ['GOLD_micro', ... 'FEEDCOW'] SR 1.66

You can hopefully see why I allow a 10% tolerance from the maximum achieved Sharpe Ratio before halting. For starters, it's possible for the Sharpe Ratio to fall before rising again to a new high. Next, it's possible to have several portfolios with very similar Sharpe Ratios; on balance I think we'd want to choose the one with the most instruments. Finally, you might be willing to add slightly more instruments than is optimal in exchange for a modest theoretical loss in Sharpe Ratio.
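The halting rule in isolation is trivial to express - something like this sketch (a hypothetical helper, separate from the selection loop, just to show how a dip-then-recovery survives but a genuine 10%+ fall does not):

```python
def instruments_to_keep(SR_sequence, tolerance=0.9):
    # SR_sequence[i] is the expected SR with i+1 instruments; return how many
    # instruments we keep before SR falls below tolerance * best-seen-so-far
    best = 0.0
    for i, sr in enumerate(SR_sequence):
        if sr < best * tolerance:
            return i
        best = max(best, sr)
    return len(SR_sequence)

# a small dip then recovery does not trigger the halt...
print(instruments_to_keep([0.69, 0.87, 1.09, 1.05, 1.20]))  # -> 5
# ...but a fall of more than 10% from the peak does
print(instruments_to_keep([0.69, 0.87, 1.09, 0.95]))  # -> 3
```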

I decided that the portfolio with a SR of 1.75 (ending in Corn) was the one I wanted. After that there is a consistent fall in SR. It has 28 instruments, versus the 18 instruments of the strict maximal SR (ending in KR10 above).

Here it is in full:
Portfolio ['GOLD_micro', 'KOSPI_mini', 'SHATZ', 'US2', 'JPY', 
'LEANHOG', 'MXP', 'GAS_US_mini', 'EDOLLAR', 'CRUDE_W_mini', 
'GBP', 'KR3', 'V2X', 'NZD', 'BTP', 'NASDAQ_micro', 'EUR',
 'KR10', 'LIVECOW', 'SMI', 'US10', 'BITCOIN', 'EU-DIV30', 
'BOBL', 'EUROSTX', 'WHEAT', 'OAT', 'CORN'] SR 1.75

Maximum positions, contracts:
{'GOLD_micro': 9.4, 'KOSPI_mini': 8.1, 'SHATZ': 32.3, 'US2': 45.6,
 'JPY': 4.3, 'LEANHOG': 3.4, 'MXP': 8.9, 'GAS_US_mini': 8.0, 
'EDOLLAR': 11.3, 'CRUDE_W_mini': 3.8, 'GBP': 3.2, 'KR3': 23.2, 
'V2X': 6.1, 'NZD': 2.7, 'BTP': 2.6, 'NASDAQ_micro': 2.7, 'EUR': 2.1,
 'KR10': 3.4, 'LIVECOW': 3.3, 'SMI': 1.3, 'US10': 4.1, 'BITCOIN': 8.5,
 'EU-DIV30': 2.2, 'BOBL': 5.6, 'EUROSTX': 1.2, 'WHEAT': 1.9,
'OAT': 3.1, 'CORN': 1.8, 'US20': 3.2}

Well, this is a pretty nice portfolio. It's well diversified, with 28 instruments - about what you'd expect with $500,000 in capital. We have all the sectors represented:
  • Metals 2 (including Bitcoin)
  • Energies 2
  • Equities 5
  • Bonds 9
  • Ags 4
  • Currencies 5
  • Vol 1
A few of the instruments do have maximum positions of a couple of contracts or less, but in most of them we're able to have some decent position adjustment. 
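As a sanity check, those sector counts can be reproduced from the portfolio list. The sector mapping below is my own shorthand (with Bitcoin counted as a metal and Eurodollar with the bonds, following the bullets above):

```python
portfolio = ['GOLD_micro', 'KOSPI_mini', 'SHATZ', 'US2', 'JPY',
             'LEANHOG', 'MXP', 'GAS_US_mini', 'EDOLLAR', 'CRUDE_W_mini',
             'GBP', 'KR3', 'V2X', 'NZD', 'BTP', 'NASDAQ_micro', 'EUR',
             'KR10', 'LIVECOW', 'SMI', 'US10', 'BITCOIN', 'EU-DIV30',
             'BOBL', 'EUROSTX', 'WHEAT', 'OAT', 'CORN']

# assumed sector mapping, covering only the instruments in this portfolio
sector_map = dict(
    Metals=['GOLD_micro', 'BITCOIN'],
    Energies=['GAS_US_mini', 'CRUDE_W_mini'],
    Equities=['KOSPI_mini', 'NASDAQ_micro', 'SMI', 'EU-DIV30', 'EUROSTX'],
    Bonds=['SHATZ', 'US2', 'EDOLLAR', 'KR3', 'BTP', 'KR10', 'US10', 'BOBL', 'OAT'],
    Ags=['LEANHOG', 'LIVECOW', 'WHEAT', 'CORN'],
    Currencies=['JPY', 'MXP', 'GBP', 'NZD', 'EUR'],
    Vol=['V2X'])

counts = {sector: len([code for code in codes if code in portfolio])
          for sector, codes in sector_map.items()}
print(counts)
# -> {'Metals': 2, 'Energies': 2, 'Equities': 5, 'Bonds': 9, 'Ags': 4, 'Currencies': 5, 'Vol': 1}
```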

We are a little bit overweight bonds perhaps, but that reflects the fact we have quite a few bond markets in the broader universe and many of them are able to take smaller contract sizes. Of course the instrument weights will apportion risk more evenly anyway (I'd do a proper fit rather than the quick and dirty method done here, although the results probably wouldn't be that different).

Different account sizes

Let's run this thing with a few different fund sizes and see what comes out:
A $1 million portfolio
{'GOLD_micro': 8.6, 'NASDAQ_micro': 2.7, 'SHATZ': 81.9, 'US2': 74.4, 
'JPY': 5.0, 'EDOLLAR': 34.2, 'KR3': 47.4, 'CORN': 3.2, 
'CRUDE_W_mini': 4.7, 'LEANHOG': 3.5, 'MXP': 14.7, 'GBP': 3.8, 
'NZD': 5.1, 'BTP': 6.0, 'LIVECOW': 6.7, 'BITCOIN': 10.4, 
'GAS_US_mini': 23.2, 'US10': 12.4, 'WHEAT': 3.4, 'KOSPI_mini': 11.5, 
'SOYBEAN': 2.4, 'OAT': 4.8, 'V2X': 6.1, 'EU-DIV30': 2.5, 'SMI': 1.5,
 'BOBL': 14.2, 'KR10': 9.7, 'COPPER': 1.3, 'FEEDCOW': 4.1,
 'BUND': 2.5, 'SP500_micro': 5.3, 'PLAT': 1.3, 'US20': 1.8, 
'EUR': 4.0, 'EUROSTX': 1.4, 'AUD': 3.2, 'VIX': 0.8}
37 markets. Perhaps not as much of an improvement as you'd have expected - there are diminishing returns to adding markets.

A $100K portfolio
max_instrument_weight = 0.20
{'GOLD_micro': 2.4, 'KOSPI_mini': 1.4, 'SHATZ': 28.8, 'US2': 23.3, 
'JPY': 0.9, 'LEANHOG': 1.1, 'GAS_US_mini': 4.5, 'EDOLLAR': 8.0, 
'KR3': 5.4, 'NASDAQ_micro': 0.8, 'CRUDE_W_mini': 0.9, 'MXP': 2.8, 
'GBP': 0.9, 'BTP': 1.0, 'NZD': 0.7, 'KR10': 1.2}

A $50K portfolio 
max_instrument_weight = 0.33
{'GOLD_micro': 2.8, 'KOSPI_mini': 1.2, 'SHATZ': 9.8, 
'US2': 3.9, 'V2X': 0.9, 'EDOLLAR': 4.5, 'KR3': 4.3,
 'GAS_US_mini': 3.6, 'LEANHOG': 0.7, 'BITCOIN': 0.7, 
'JPY': 1.0, 'BOBL': 1.7, 'MXP': 0.8, 'EU-DIV30': 0.6}

Backtesting: I don't think so

I could very easily backtest the above code, reselecting a group of instruments every year. However, I don't see the point. I'm not expecting it to add performance value compared to a benchmark of just using my current set of instruments, or a randomly chosen set - performance per se isn't one of the things I'm considering. And I wouldn't expect it to do as well as the hypothetical portfolio where I can take unrounded positions (equivalent to having a much larger account size).

Running in production

To run this concept in production requires a few decisions to be made, and things to be set up:

How often do we want to run this process? 

Costs and volatility will change. Liquidity may also change, and I'm in the process of continuously adding potential instruments to my database. New instruments are launched all the time (micro Bitcoin recently, and some new yield curve futures coming this summer, to name just a couple). But constant chopping and changing isn't ideal; perhaps once a year?

How will we get the information to make these decisions? 

  • Liquidity (used as a filter): I do collect volume information, but I would need a process to aggregate this across contracts and combine with risk information.
  • Trading costs: Commissions. Should hopefully be reasonably stable.
  • Trading costs: Slippage. For instruments I already trade, I'd need a process to automate the analysis of bid/ask and execution costs. For others, I'd need to set up a process to regularly collect bid/ask price data.

Other calculations can be pulled out of a backtest, once the above are calculated.

What action to take
Once instruments have been ranked by the process described above, what changes should we make? Should we always trade the top N instruments, or use some kind of buffer (e.g. for a 30-instrument portfolio: only replace a holding once it falls below rank 35, and only add a new instrument once it rises above rank 25 - similar to how index buffering works)?
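A buffered membership rule like that might look like the following sketch (a hypothetical helper, not part of my actual system; `N` is the target portfolio size and `buffer` the hysteresis band):

```python
def buffered_changes(ranked, current, N=30, buffer=5):
    # ranked: all candidate instruments, best first
    # current: the instruments we hold now
    # drop a holding only once it falls below rank N + buffer; add a new
    # instrument only once it rises above rank N - buffer
    rank = {code: i + 1 for i, code in enumerate(ranked)}
    worst_allowed = N + buffer
    to_drop = [code for code in current
               if rank.get(code, worst_allowed + 1) > worst_allowed]
    to_add = [code for code in ranked[:N - buffer] if code not in current]
    return to_add, to_drop

ranked = ['EDOLLAR', 'GOLD_micro', 'SP500_micro', 'AEX']
print(buffered_changes(ranked, current=['GOLD_micro', 'AEX'], N=2, buffer=1))
# -> (['EDOLLAR'], ['AEX'])
```

Instruments sitting in the buffer zone (here, `SP500_micro` at rank 3) are neither added nor dropped, which is what keeps the turnover of the instrument set itself low.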

Should we have stricter rules for instruments that fail the liquidity criteria - remove them immediately?

How to make changes

How should we transition between the old and new set of instruments? For example, should we use <close only> overrides on instruments that are falling out of favour? Should we smoothly change instrument weights and allow the system to do the rest, with buffering reducing trading costs? Should we add new instruments before removing old ones?

The first transition

For now I have the following current portfolio of instruments (in no particular order):
'AEX', 'AUD', 'BOBL', 'BTP', 'BUND', 'CAC', 'COPPER', 'CORN', 'CRUDE_W_mini', 'EDOLLAR', 'EUR', 'GAS_US_mini', 'GBP', 'GOLD_micro', 'JPY', 'KOSPI_mini', 'KR10', 'KR3', 'LEANHOG', 'LIVECOW', 'MXP', 'NASDAQ_micro', 'NZD', 'OAT', 'BITCOIN', 'SHATZ', 'SMI', 'SOYBEAN', 'SP500_micro', 'US10', 'US2', 'US20', 'US5', 'V2X', 'VIX', 'WHEAT'
And I want the following reduced set (in order of preference):
['GOLD_micro', 'KOSPI_mini', 'SHATZ', 'US2', 'JPY', 'LEANHOG', 'MXP', 'GAS_US_mini', 'EDOLLAR', 'CRUDE_W_mini', 'GBP', 'KR3', 'V2X', 'NZD', 'BTP', 'NASDAQ_micro', 'EUR', 'KR10', 'LIVECOW', 'SMI', 'US10', 'BITCOIN', 'EU-DIV30', 'BOBL', 'EUROSTX', 'WHEAT', 'OAT', 'CORN']
In theory that would involve dropping the following instruments:
{'AEX', 'BUND', 'SP500_micro', 'AUD', 'VIX', 'COPPER', 'US20', 'SOYBEAN', 'CAC', 'US5'}
And adding these:
{'EU-DIV30', 'EUROSTX'}
And in the process gradually changing/increasing the instrument weights on other markets.
I'm going to sit on this decision for a little bit longer, whilst I think about the best way to implement this. It may involve a tactical game, waiting for positions to be closed before replacing instruments.


As a retail trader you are unlikely to have the money to trade 200+ futures markets. You probably only need 15 to 30 for adequate diversification, but which 15 to 30? I've shown how to use a systematic method to select markets based on contract size and costs, whilst ignoring pre-cost performance - which isn't sufficiently robust to make these kinds of decisions.
In the next (and final) post in this series I'll consider yet another way of making the best use of small capital: using a dynamic instrument selection method on top of a relatively simple futures trading system.