Saturday, 9 February 2019

Portfolio construction through handcrafting: Empirical tests

This post is all about handcrafting; a method for doing portfolio construction which human beings can do without computing power, or at least with a spreadsheet. The method aims to achieve the following goals:
  • Humans can trust it: intuitive and transparent method which produces robust weights
  • Can be easily implemented by a human in a spreadsheet
  • Can be back tested
  • Grounded in solid theoretical foundations
  • Takes account of uncertainty in data estimates
  • Decent out of sample performance
  • Addresses the problem of allocating capital to assets on a long only basis, or to trading strategies. 
This is the final post in a series on the handcrafting method.
  1. The first post can be found here, and it motivates the need for a method like this.
  2. In the second post I build up the various components of the method, and discuss why they are needed. 
  3. In the third post, I explained how you'd actually apply the method step by step, with code. 
  4. This post tests the method with real data, addressing the questions of robust weights and out of sample performance.
The testing will be done using pysystemtrade. If you want to follow along, get the latest version.

PS apologies for the weird formatting in this post. It's out of my hands...


The Test Data


The test data is the 37 futures instruments in my usual data set, with the following trading rules:
  • Carry
  • Exponentially weighted moving average crossover (EWMAC) 2 day versus 8 day
  • EWMAC 4,16
  • EWMAC 8,32
  • EWMAC 16,64
  • EWMAC 32,128
  • EWMAC 64,256
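(In case you haven't met EWMAC before, here's a minimal sketch of the rule, assuming a pandas price series. The real pysystemtrade implementation also scales and caps the forecast, so treat this as illustrative only.)

import pandas as pd

def ewmac_forecast(price: pd.Series, Lfast: int, Lslow: int) -> pd.Series:
    # fast EWMA minus slow EWMA, normalised by recent daily price volatility
    fast_ewma = price.ewm(span=Lfast).mean()
    slow_ewma = price.ewm(span=Lslow).mean()
    raw_crossover = fast_ewma - slow_ewma
    daily_vol = price.diff().ewm(span=36).std()
    return raw_crossover / daily_vol

# e.g. the 16,64 variation listed above:
# forecast = ewmac_forecast(price, Lfast=16, Lslow=64)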

I'll be using handcrafting to calculate both the forecast and instrument weights. By the way, this isn't a very stern test of the volatility scaling part of the method, since in a trading system everything is assumed to have the same volatility. Feel free to test it with your own data.
The handcrafting code lives here (you've mostly seen this before in a previous post, just some slight changes to deal with assets that don't have enough data) with a calling function added here in my existing optimisation code (which is littered with #FIXME NEEDS REFACTORING comments, but this isn't the time or the place...).



The Competition


I will be comparing the handcrafted method to the methods already coded up in pysystemtrade, namely:
  • Naive Markowitz
  • Bootstrapping
  • Shrinkage
  • Equal weights
All the configuration options for each optimiser will be the default for pysystemtrade (you might want to read this). All optimisation will be done on an 'expanding window' out of sample basis.
from systems.provided.futures_chapter15.estimatedsystem import *
system = futures_system()
system.config.forecast_weight_estimate['method']='handcraft' # change as appropriate
system.config.instrument_weight_estimate['method']='handcraft'  # change as appropriate
del(system.config.instruments)
del(system.config.rule_variations)

system.set_logging_level("on")
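Once the method has been switched, everything used in the rest of this post comes off the system object in the usual pysystemtrade way; a quick sketch of the main calls (each is shown again below where it's actually used):

forecast_wts = system.combForecast.get_forecast_weights("SP500")   # forecast weights for one instrument
instrument_wts = system.portfolio.get_instrument_weights()         # instrument weights
acc = system.accounts.portfolio()                                  # back tested account curve
print(acc.sharpe())                                                # Sharpe Ratio of the whole system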


Evaluating the weights


Deciding which optimisation method to use isn't just about checking profitability (although we will check that in a second). We also want to see robust weights: stable, and without too many zeros.
Let's focus on the forecast weights for the S&P 500 (a not-quite-arbitrary example: the S&P 500 is a cheap instrument, so it can allocate to most of the trading rules; looking at, say, instrument weights would produce a massive, messy plot).
system.combForecast.get_forecast_weights("SP500")

Forecast weights with handcrafting
Pretty sensible weights here, with ~35% in carry and the rest split between the other moving averages. There is some variation over time, as shifts in the correlations move trading rules between groups.

# this will give us the final Portfolio object used for optimisation (change index -1 for others)
# See previous post in this series (https://qoppac.blogspot.com/2018/12/portfolio-construction-through_14.html)

portfolio=system.combForecast.calculation_of_raw_estimated_forecast_weights("SP500").results[-1].diag['hc_portfolio']


# eg to see the sub portfolio tree

portfolio.show_subportfolio_tree()
[' Contains 3 sub portfolios',
 ["[0] Contains ['ewmac16_64', 'ewmac32_128', 'ewmac64_256']"], # slow momentum
 ["[1] Contains ['ewmac2_8', 'ewmac4_16', 'ewmac8_32']"],  # fast momentum
 ["[2] Contains ['carry']"]]  # carry
Makes a lot of sense to me...
Forecast weights with naive Markowitz
The usual car crash you'd expect from naive Markowitz: lots of variation, and weights that aren't remotely robust (at the end it's basically half and half between carry and the slowest momentum).

Forecast weights with shrinkage

Smooth and pretty sensible. This method downweights the faster moving averages a little more than the others; they are more expensive and also don't perform so well in equities.

Forecast weights with bootstrapping
A lot noisier than shrinkage due to the randomness involved, but pretty sensible.

I haven't shown equal weights, as you can probably guess what those are.

Although I’m not graphing them, I thought it would be instructive to look at the final instrument weights for handcrafting:


system.portfolio.get_instrument_weights().tail(1).transpose()
AEX        0.016341
AUD        0.024343
BOBL       0.050443
BTP        0.013316
BUND       0.013448
CAC        0.014476
COPPER     0.024385
CORN       0.031373
CRUDE_W    0.029685
EDOLLAR    0.007732
EUR        0.010737
EUROSTX    0.012372
GAS_US     0.031425
GBP        0.010737
GOLD       0.012900
JPY        0.012578
KOSPI      0.031301
KR10       0.051694
KR3        0.051694
LEANHOG    0.048684
LIVECOW    0.031426
MXP        0.028957
NASDAQ     0.034130
NZD        0.024343
OAT        0.014660
PALLAD     0.013194
PLAT       0.009977
SHATZ      0.057006
SMI        0.040494
SOYBEAN    0.029706
SP500      0.033992
US10       0.005511
US2        0.031459
US20       0.022260
US5        0.007168
V2X        0.042326
VIX        0.042355
WHEAT      0.031373

Let's summarise these:

Ags 17.2%
Bonds 31.8%
Energy 6.1%
Equities 18.3%
FX 11.1%
Metals 6.0%
STIR 0.77%
Vol 8.4%
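(As an aside, here's a minimal sketch of how a summary like that can be put together. The asset class mapping is hypothetical and I've only included a handful of instruments; you'd need to extend it to all 37 for the real thing.)

import pandas as pd

# a few of the final weights from the output above
weights = pd.Series(dict(SP500=0.033992, NASDAQ=0.034130, BUND=0.013448,
                         SHATZ=0.057006, CRUDE_W=0.029685, GAS_US=0.031425))

# hypothetical asset class map; extend to all 37 instruments for the full summary
asset_class_map = dict(SP500="Equities", NASDAQ="Equities", BUND="Bonds",
                       SHATZ="Bonds", CRUDE_W="Energy", GAS_US="Energy")

print(weights.groupby(weights.index.map(asset_class_map)).sum())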

portfolio=system.portfolio.calculation_of_raw_instrument_weights().results[-1].diag['hc_portfolio']
portfolio.show_subportfolio_tree()
[' Contains 3 sub portfolios',  # bonds, equities, other
 ['[0] Contains 3 sub portfolios',  # bonds
  ["[0][0] Contains ['BOBL', 'SHATZ']"],  # german short bonds
  ["[0][1] Contains ['KR10', 'KR3']"],  # korean bonds
  ['[0][2] Contains 3 sub portfolios',  # other bonds
   ["[0][2][0] Contains ['BUND', 'OAT']"],  # european 10 year bonds ex BTP
   ['[0][2][1] Contains 2 sub portfolios',  # US medium and long bonds
    ["[0][2][1][0] Contains ['EDOLLAR', 'US10', 'US5']"],  # us medium bonds
    ["[0][2][1][1] Contains ['US20']"]],  # us long bond
   ["[0][2][2] Contains ['US2']"]]],  # us short bonds
 ['[1] Contains 3 sub portfolios',  # equities and vol
  ['[1][0] Contains 2 sub portfolios',  # European equities
   ["[1][0][0] Contains ['AEX', 'CAC', 'EUROSTX']"],  # EU equities
   ["[1][0][1] Contains ['SMI']"]],  # Swiss equities
  ["[1][1] Contains ['NASDAQ', 'SP500']"],  # US equities
  ["[1][2] Contains ['V2X', 'VIX']"]],  # US vol
 ['[2] Contains 3 sub portfolios',  # other
  ['[2][0] Contains 3 sub portfolios',  # FX and metals
   ['[2][0][0] Contains 2 sub portfolios',  # FX, mostly
    ["[2][0][0][0] Contains ['EUR', 'GBP']"],
    ["[2][0][0][1] Contains ['BTP', 'JPY']"]],
   ["[2][0][1] Contains ['AUD', 'NZD']"],
   ['[2][0][2] Contains 2 sub portfolios',  # Metals
    ["[2][0][2][0] Contains ['GOLD', 'PALLAD', 'PLAT']"],
    ["[2][0][2][1] Contains ['COPPER']"]]],
  ['[2][1] Contains 2 sub portfolios',  # leftovers
   ["[2][1][0] Contains ['KOSPI', 'MXP']"],
   ["[2][1][1] Contains ['GAS_US', 'LIVECOW']"]],
  ['[2][2] Contains 3 sub portfolios',  # ags and crude
   ["[2][2][0] Contains ['CORN', 'WHEAT']"],
   ["[2][2][1] Contains ['CRUDE_W', 'SOYBEAN']"],
   ["[2][2][2] Contains ['LEANHOG']"]]]]
Some very interesting groupings there, mostly logical but a few unexpected (eg BTP, KOSPI). Also instructive to look at the smallest weights:
US10, US5, EDOLLAR, PLAT, EUR, GBP, EUROSTX (used to hedge), JPY
Those are markets I could potentially think about removing if I wanted to. 


Evaluating the profits


As the figure shows the ranking of performance is as follows:

  • Naive Markowitz, Sharpe Ratio (SR) 0.82
  • Shrinkage, SR 0.96
  • Bootstrap, SR 0.97
  • Handcrafted, SR 1.01
  • Equal weighting, SR 1.02
So naive is definitely sub optimal, but the others are pretty similar, with perhaps handcrafting and equal weights a fraction ahead of the rest. This is borne out by the T-statistics from doing pairwise comparisons between the various curves. 

boot = system.accounts.portfolio() ## populate the other values in the dict below appropriately
results = dict(naive=oneperiodacc, hc=handcraft_acc, equal=equal_acc, shrink=shrink, boot=boot)

from syscore.accounting import account_test

types=results.keys()
for type1 in types:
    for type2 in types:
        if type1==type2:
            continue
        print("%s vs %s" % (type1, type2))
        print(account_test(results[type1], results[type2]))
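One way to populate that dict is simply to re-run the backtest once for each optimisation method, along the lines of the sketch below (the method names are my reading of the pysystemtrade config values, so check them against your version; and be warned, each run takes a while):

def account_for_method(method):
    # reuses the futures_system import from the config snippet earlier
    system = futures_system()
    system.config.forecast_weight_estimate['method'] = method
    system.config.instrument_weight_estimate['method'] = method
    del(system.config.instruments)
    del(system.config.rule_variations)
    return system.accounts.portfolio()

oneperiodacc = account_for_method("one_period")
shrink = account_for_method("shrinkage")
boot = account_for_method("bootstrap")
handcraft_acc = account_for_method("handcraft")
equal_acc = account_for_method("equal_weights")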

A T-statistic of around 1.9 would hit the 5% critical value, and 2.3 the 2% critical value:

         Naive Shrink Boot   HC   Equal
Naive
Shrink   2.01
Boot     1.56   0.03
HC       2.31   0.81  0.58
Equal    2.33   0.97  0.93  0.19

Apart from bootstrapping, all the other methods handily beat naive with 5% significance. However the remaining pairwise T-statistics show no significant differences.
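For the avoidance of doubt, the numbers in the table are T-statistics on the difference between pairs of account curves; conceptually something like the rough sketch below, although the account_test function in pysystemtrade is more careful than this:

import numpy as np
from scipy import stats

def rough_account_test(returns_a, returns_b):
    # paired t-test on the difference between two aligned return series
    diff = np.asarray(returns_a) - np.asarray(returns_b)
    t_stat, p_value = stats.ttest_1samp(diff, 0.0)
    return t_stat, p_value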

Partly this is because I’ve constrained all the optimisations in similar ways so they don’t do anything too stupid; for example ignoring Sharpe Ratio when optimising over instrument weights. Changing this would mostly penalise the naive optimisation further, but probably wouldn't change things much elsewhere.

It’s always slightly depressing when equal weights beats more complicated methods, but this is partly a function of the data set. Everything is vol scaled, so there is no need to take volatility into account. The correlation structure is reasonably friendly: 

  • for instrument weights we have a pretty even set of instruments across different asset classes, so equal weighted and handcrafted aren’t going to be radically different, 
  • for forecast weights, handcrafting (and all the other methods) produce carry weights of between 30% and 40% for S&P 500, whilst equal weighting would give carry just 14%. However this difference won’t be as stark for other instruments which can only afford to trade 2 or 3 EWMAC crossovers.


Still, there are many contexts in which equal weights wouldn't make sense.

Incidentally, the code for handcrafting runs pretty fast; only a few seconds slower than equal weights, which of course is the fastest (not that pysystemtrade is especially quick... speeding it up is on my [long] to do list). Naive and shrinkage run a bit slower (as they are doing a single optimisation per time period), whilst bootstrapping is the slowest of all (as it's doing 100 optimisations per time period).





Conclusion



Handcrafting produces sensible and reasonably stable weights, and its out of sample performance is about as good as that of more complicated methods. The test for handcrafting was not to produce superior out of sample performance; all we needed was performance that was indistinguishable from more complex methods. I feel it has passed this test with flying colours, albeit on just this one data set.

So if I review the original motivation for producing this method:

  • Humans can trust it: intuitive and transparent method which produces robust weights (yes, confirmed in this post)
  • Can be easily implemented by a human in a spreadsheet (yes, see post 3)
  • Can be back tested (yes, confirmed in this post)
  • Grounded in solid theoretical foundations (yes, see post 2)
  • Takes account of uncertainty (yes, see post 2)
  • Decent out of sample performance (yes, confirmed in this post)

We can see that there is a clear tick in each category. I’m pretty happy with how this test has turned out, and I will be switching the default method for optimisation in pysystemtrade to use handcrafting.

Friday, 14 December 2018

Portfolio construction through handcrafting: implementation

This post is all about handcrafting; a method for doing portfolio construction which human beings can do without computing power, or at least with a spreadsheet. The method aims to achieve the following goals:
  • Humans can trust it: intuitive and transparent method which produces robust weights
  • Can be easily implemented by a human in a spreadsheet
  • Can be back tested
  • Grounded in solid theoretical foundations
  • Takes account of uncertainty in data estimates
  • Decent out of sample performance
  • Addresses the problem of allocating capital to assets on a long only basis, or to trading strategies. It won't be suitable for a long/short portfolio.

This is the third in a series of posts on the handcrafting method.
  1. The first post can be found here, and it motivates the need for a method like this.
  2. In the second post I build up the various components of the method, and discuss why they are needed. 
  3. In this, the third post, I'll explain how you'd actually apply the method step by step, with code. 
  4. Post four will test the method with real data

This will be a 'twin track' post; in which I'll outline two implementations:
  • a spreadsheet based method suitable for small numbers of assets, where you need to do a one-off portfolio for live trading rather than a repeated backtest. It's also great for understanding the intuition of the method - a big plus point of this technique.
  • a python code based method. This uses (almost) exactly the same method, but can be backtested (the difference is that the grouping of assets is done manually in the spreadsheet based method, but automatically here based on the correlation matrix). The code can be found here; although this will live within the pysystemtrade ecosystem I've deliberately tried to make it as self contained as possible, so you could easily drop it into your own framework.


The demonstration


To demonstrate the implementation I'm going to need some data. This won't be the full blown real data that I'll be using to test the method properly, but we do need *something*. It needs to be an interesting data set, with the following characteristics:
  • different levels of volatility (so not a bunch of trading systems)
  • a hierarchy of 3 levels (more would be too complex for the human implementation, fewer wouldn't be a stern enough test)
  • not so many assets that the human implementation becomes too complex

I'm going to use long only weekly returns from the following instruments: BOBL, BUND, CORN, CRUDE_W, EURODOLLAR, GAS_US, KR10, KR3, US10, US20; from 2014 to the present (since for some of these instruments I only have data for the last 5 years).

Because this isn't a proper test I won't be doing any fancy rolling out of sample optimisation, just a single portfolio.

The descriptive statistics can be found here. The python code which gets the data (using pysystemtrade), is here.

(I've written the handcrafting functions to be standalone; when I come to testing them with real data I'll show you how to hook these into pysystemtrade.)


Overview of the method


Here are the stages involved in the handcrafting method. Note there are a few options involved:
  1. (Optional: if a risk target is used with the automated method) Partition the assets into high and low volatility
  2. Group the assets hierarchically (if step 1 is followed, this will form the top level grouping). This will be done either by (i) an automated clustering algorithm or (ii) human common sense.
  3. Calculate volatility weights within each group at the lowest level, proceeding upwards. These weights will either be equal, or use the candidate matching technique described in the previous post.
  4. (Optionally) Calculate Sharpe Ratio adjustments. Apply these to the weights from step 3.
  5. Calculate diversification multipliers for each group. Apply these to the weights from step 4.
  6. Calculate cash weights using the volatility of each asset.
  7. (Optionally) if a risk target was used with a manual method, partition the top level groups into high and low volatility.
  8. (Optionally) if a risk target was supplied; use the technique outlined in my previous post to ensure the target is hit.
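Seen from the python side, these steps map roughly onto the Portfolio object used later in this post (a sketch of the flow, using the attribute names that appear below):

p = Portfolio(returns, risk_target=0.1)   # steps 1 and 2: partition (if needed) and group
p.show_subportfolio_tree()                # inspect the grouping
vol_weights = p.volatility_weights        # steps 3 to 5: vol weights, SR adjustments, DMs
cash_weights = p.cash_weights             # steps 6 to 8: cash weights and risk targeting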


Spreadsheet: Group the assets hierarchically

A suggested grouping is here. Hopefully it's fairly self explanatory. There could be some debate about whether Eurodollar and bonds should be glued together, but part of doing it this way was to see if the diversification multiplier fixes this potential mistake.


Spreadsheet: Calculate volatility weights

The calculations are shown here.

Notice that for most groups there are only one or two assets, so things are relatively trivial. Then at the top level (level 1) we have three assets, so things are a bit more fun. I use a simple average of correlations to construct a correlation matrix for the top level groups. Then I use a weighted average of two candidate matrices to work out the required weights for the top level groups.

The weights come out as follows:
  • Developed market bonds, which we have a lot of, 3.6% each for a total of 14.4%
  • Emerging market bonds (just Korea), with 7.2% each for a total of 14.4%
  • Energies get 10.7% each, for a total of 21.4%
  • Corn gets 21.4%
  • Eurodollar gets 28.6%


Spreadsheet: Calculate Sharpe Ratio adjustments (optionally)

Adjustments for Sharpe Ratios are shown in this spreadsheet. You should follow the calculations down the page, as they are done in a bottom up fashion. I haven't bothered with interpolating the heuristic adjustments, instead I've just used VLOOKUP to match the closest adjustment row. 



Spreadsheet: Calculate diversification multipliers (DM)

DM calculations are shown in this sheet. DMs are quite low in bonds (where the assets in each country are highly correlated), but much higher in commodities. The final set of changes is particularly striking: note the reallocation from the single instrument rates group (initial weight 30.7%, falling to 24.2%) to commodities (initial weight 29%, rising to 36.5%).
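As a reminder from the previous post, the diversification multiplier for a group is one over the square root of the group's portfolio variance, calculated from the correlation matrix and the volatility weights. A minimal sketch (the spreadsheet may differ in detail, for example in how negative correlations are treated):

import numpy as np

def div_multiplier(corr_matrix, vol_weights):
    # 1 / sqrt(w' C w), with C a correlation matrix and w the volatility weights
    w = np.array(vol_weights)
    C = np.array(corr_matrix)
    return 1.0 / np.sqrt(w.dot(C).dot(w))

# e.g. two assets with correlation 0.25 and equal vol weights:
# div_multiplier([[1, 0.25], [0.25, 1]], [0.5, 0.5])  ~= 1.26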




Spreadsheet: Calculate cash weights

(Almost) finally we calculate our cash weights, in this spreadsheet. Notice the huge weight to low volatility Eurodollar. 


Spreadsheet: Partition into high and low volatility 

(optional: if risk target used with manual method)

If we're using a risk target we'll need to partition our top level groups (this is done automatically with python, but spreadsheet people are allowed to choose their own groupings). Let's choose an arbitrary risk target: 10%. This should be achievable, since the average risk of our assets is 10.6%.

This is the average volatility of each group (calculated here):

Bonds: 1.83%
Commodities: 14.6%
Rates: 0.89%

So we have:

High vol: commodities
Low vol: Rates and bonds

(Not a massive surprise!!)


Spreadsheet: Check risk target is hit, adjust weights if required

(optional: with risk target)

The natural risk of the portfolio comes out at 1.09% (calculated here). Let's explore the possible scenarios:
  • Risk target lower than 1.09%, eg 1%: We'd need to add cash to the portfolio. Using the spreadsheet with a 1% risk target you'd need to put 8.45% of your portfolio into cash; with the rest going into the constructed portfolio.
  • Risk target higher than 1.09% with leverage allowed: You'd need to apply a leverage factor; with a risk target of 10% you'd need a leverage factor of 9.16
  • Risk target higher than 1.09% without leverage: You'd need to constrain the proportion of the portfolio that is allocated to low risk assets (bonds and rates). The spreadsheet shows that this comes out at a 31.4% cash weight, with the rest in commodities. I've also recalculated the weights with this constraint to show how it comes out.
And here are those final weights (to hit 10% risk with no leverage):

weight
BOBL 2.17%
BUND 0.78%
US10 0.44%
US20 0.23%
KR3 7.25%
KR10 1.86%
EDOLLAR 18.67%
CORN 36.67%
CRUDE_W 19.47%
GAS_US 12.45%


Python code


The handcrafting code is here. Although this file will ultimately be dumped into pysystemtrade, it's designed to be entirely self contained so you can use it in your own applications.

The code expects weekly returns, and for all assets to be present. It doesn't do rolling optimisation, or averages over multiple assets. I need to write code to hook it into pysystemtrade, and to achieve these various objectives.

The only input required is a pandas data frame (called returns here) with named columns containing weekly returns. The main object you'll be interacting with is called Portfolio.

Simplest use case, to go from returns to cash weights without risk targeting:

p=Portfolio(returns)
p.cash_weights

I won't document the API or methodology fully here, but hopefully you will get the idea.


Python: Partition the assets into high and low volatility

(If using a risk target, and automated)

Let's try with a risk target of 10%:

p=Portfolio(returns, risk_target=.1)

p.sub_portfolios
Out[575]: [Portfolio with 7 instruments, Portfolio with 3 instruments]

p.sub_portfolios[0]
Out[576]: Portfolio with 7 instruments
p.sub_portfolios[0].instruments
Out[577]: ['BOBL', 'BUND', 'EDOLLAR', 'KR10', 'KR3', 'US10', 'US20']

p.sub_portfolios[1].instruments
Out[578]: ['CORN', 'CRUDE_W', 'GAS_US']


So all the bonds get put into one group, the other assets into another. Seems plausible.

Using an excessively high risk target is a bad idea:

p=Portfolio(returns, risk_target=.3)
p.sub_portfolios
Not many instruments have risk higher than target; portfolio will be concentrated to hit risk target
Out[584]: [Portfolio with 9 instruments, Portfolio with 1 instruments]

This is an even worse idea:

p=Portfolio(returns, risk_target=.4)
p.sub_portfolios
Exception: Risk target greater than vol of any instrument: will be impossible to hit risk target

The forced partitioning into two top level groups will not happen if leverage is allowed, or no risk target is supplied:

p=Portfolio(returns) # no risk target
p.sub_portfolios
Natural top level grouping used
Out[44]: 
[Portfolio with 7 instruments,
 Portfolio with 2 instruments,
 Portfolio with 1 instruments]
p=Portfolio(returns, risk_target=.3, allow_leverage=True)
p.sub_portfolios
Natural top level grouping used
Out[46]: 
[Portfolio with 7 instruments,
 Portfolio with 2 instruments,
 Portfolio with 1 instruments]


Python: Group the assets hierarchically

Here's an example when we're allowing the grouping to happen naturally:
p=Portfolio(returns)
p.show_subportfolio_tree()
Natural top level grouping used
Out[48]: 
[' Contains 3 sub portfolios',
 ['... Contains 3 sub portfolios',
  ["...... Contains ['KR10', 'KR3']"],
  ["...... Contains ['EDOLLAR', 'US10', 'US20']"],
  ["...... Contains ['BOBL', 'BUND']"]],
 ["... Contains ['CRUDE_W', 'GAS_US']"],
 ["... Contains ['CORN']"]]
We have three top level groups: interest rates, energies, and Ags. The interest rate group is further divided into second level groupings by country: Korea, US and Germany. Here's an example when we're doing a partition by risk:

p=Portfolio(returns, risk_target=.1)
p.show_subportfolio_tree()
Applying partition to hit risk target
Partioning into two groups to hit risk target of 0.100000

Out[42]: 
[' Contains 2 sub portfolios',
 ['... Contains 3 sub portfolios',
  ["...... Contains ['KR10', 'KR3']"],
  ["...... Contains ['EDOLLAR', 'US10', 'US20']"],
  ["...... Contains ['BOBL', 'BUND']"]],
 ["... Contains ['CORN', 'CRUDE_W', 'GAS_US']"]]


There are now two top level groups as we saw above.

If you're a machine learning enthusiast who wishes to play around with the clustering algorithm, then the heavy lifting of the clustering algo is all done in this method of the portfolio object:

def _cluster_breakdown(self):
    # sch is scipy.cluster.hierarchy, imported at the top of the module

    X = self.corr_matrix.values
    d = sch.distance.pdist(X)                 # pairwise distances between assets
    L = sch.linkage(d, method='complete')     # hierarchical (complete linkage) clustering

    # play with this line at your peril!!!
    ind = sch.fcluster(L, MAX_CLUSTER_SIZE, criterion='maxclust')

    return list(ind)

However I've found the results to be very similar regardless of the method used.


Python: Calculate volatility weights


p=Portfolio(returns, use_SR_estimates=False)  # turn off SR estimates for now
p.show_subportfolio_tree()
Natural top level grouping used
Out[52]: 
[' Contains 3 sub portfolios',
 ['... Contains 3 sub portfolios',
  ["...... Contains ['KR10', 'KR3']"],
  ["...... Contains ['EDOLLAR', 'US10', 'US20']"],
  ["...... Contains ['BOBL', 'BUND']"]],
 ["... Contains ['CRUDE_W', 'GAS_US']"],
 ["... Contains ['CORN']"]]

Let's look at a few parts of the portfolio. Firstly the very simple single asset Corn portfolio:

# Just Corn, single asset
p.sub_portfolios[2].volatility_weights
Out[54]: [1.0]

The Energy portfolio is slightly more interesting with two assets; but this will default to equal volatility weights:


# Just two assets, so goes to equal vol weights
p.sub_portfolios[1].volatility_weights
Out[55]: [0.5, 0.5]

Only the US bonds (and STIR) portfolio has 3 assets, and so will use the candidate matching algorithm:

# The US bond group is the only interesting one
p.sub_portfolios[0].sub_portfolios[1].corr_matrix
Out[58]: 
          EDOLLAR      US10      US20
EDOLLAR  1.000000  0.974097  0.872359
US10     0.974097  1.000000  0.924023
US20     0.872359  0.924023  1.000000
# Pretty close to equal weighting
p.sub_portfolios[0].sub_portfolios[1].volatility_weights
Out[57]: [0.28812193544790643, 0.36572016685796049, 0.34615789769413313]



Python: Calculate Sharpe Ratio adjustments (optionally)


p=Portfolio(returns) # by default Sharpe Ratio adjustments are on unless we turn them off

Let's examine a simple two asset portfolio to see how these work:


# Let's look at the energies portfolio
p.sub_portfolios[1]
Out[61]: Portfolio with 2 instruments
# first asset is awful, second worse
p.sub_portfolios[1].sharpe_ratio
Out[63]: array([-0.55334564, -0.8375069 ])

# Would be equal weights, now tilted towards first asset
p.sub_portfolios[1].volatility_weights
Out[62]: [0.5399245657079913, 0.46007543429200887]

# Can also see this information in one place
p.sub_portfolios[1].diags
Out[198]: 
                      CRUDE_W    GAS_US
Raw vol (no SR adj)  0.500000  0.500000
Vol (with SR adj)    0.539925  0.460075
Sharpe Ratio        -0.553346 -0.837507 
Portfolio containing ['CRUDE_W', 'GAS_US'] instruments  


Python: Calculate diversification multipliers


p=Portfolio(returns)
Natural top level grouping used

# not much diversification for bonds /rates within each country
p.sub_portfolios[0].sub_portfolios[0].div_mult
Out[67]: 1.0389170782708381  #korea
p.sub_portfolios[0].sub_portfolios[1].div_mult
Out[68]: 1.0261371453175774  #US bonds and STIR
p.sub_portfolios[0].sub_portfolios[2].div_mult
Out[69]: 1.0226377699075955  # german bonds
# Quite decent when you put them together though
p.sub_portfolios[0].div_mult
Out[64]: 1.2529917422729928

# Energies group only two assets but quite uncorrelated
p.sub_portfolios[1].div_mult
Out[65]: 1.2787613327950775

# only one asset in corn group
p.sub_portfolios[2].div_mult
Out[66]: 1.0
# Not used in the code but good to know
p.div_mult
Out[71]: 2.0832290180687183


Python: Aggregate up sub-portfolios


The portfolio in the python code is built up in a bottom up fashion. Let's see how this happens, by focusing on the 10 year US bond.

p=Portfolio(returns)
Natural top level grouping used

First the code calculates the vol weight for US bonds and rates, including a SR adjustment:

p.sub_portfolios[0].sub_portfolios[1].diags
Out[203]: 
                      EDOLLAR      US10      US20
Raw vol (no SR adj)  0.288122  0.365720  0.346158
Vol (with SR adj)    0.292898  0.361774  0.345328
Sharpe Ratio         0.218935  0.164957  0.185952 
 Portfolio containing ['EDOLLAR', 'US10', 'US20'] instruments  


This portfolio then joins the wider bond portfolio (here in column '1' - there are no meaningful names for parts of the wider portfolio - the code doesn't know this is US bonds):

p.sub_portfolios[0].diags.aggregate
Out[206]: 
                                  0         1         2
Raw vol (no SR adj or DM)  0.392114  0.261486  0.346399
Vol (with SR adj no DM)    0.423425  0.162705  0.413870
SR                         0.985267  0.192553  1.185336
Div mult                   1.038917  1.026137  1.022638 
 Portfolio containing 3 sub portfolios aggregate 


The Sharpe Ratios, raw vol and vol weights shown here are for the groups that we're aggregating together. So the raw vol weight on US bonds is 0.26. To see why, look at the correlation matrix:
p.sub_portfolios[0].aggregate_portfolio.corr_matrix
Out[211]: 
          0         1         2
0  1.000000  0.493248  0.382147
1  0.493248  1.000000  0.715947
2  0.382147  0.715947  1.000000
You can see that US bonds (asset 1) are more highly correlated with assets 0 and 2 than those two are with each other, so it gets a lower raw weight. It also has a far worse Sharpe Ratio, so it gets further downweighted relative to the other countries.

We can now work out what the weight of US 10 year bonds is amongst bonds as a whole:

p.sub_portfolios[0].diags

                       BOBL      BUND   EDOLLAR      KR10       KR3      US10  \
Vol wt in group    0.519235  0.480765  0.292898  0.477368  0.522632  0.361774   
Vol wt. of group   0.413870  0.413870  0.162705  0.423425  0.423425  0.162705   
Div mult of group  1.022638  1.022638  1.026137  1.038917  1.038917  1.026137   
Vol wt.            0.213339  0.197533  0.047473  0.203860  0.223189  0.058636   
                       US20  
Vol wt in group    0.345328  
Vol wt. of group   0.162705  
Div mult of group  1.026137  
Vol wt.            0.055971   
 Portfolio containing 3 sub portfolios 





The first row is the vol weight of the asset within its group; we've already seen this calculated. The next row is the vol weight of the group as a whole; again, we've already seen the figures for US bonds calculated above. After that is the diversification multiplier for each group. Finally we can see the volatility weight of US 10 year bonds in the bond group as a whole: equal to the vol weight within the group, multiplied by the vol weight of the group, multiplied by the diversification multiplier of the group; and then renormalised so everything adds up to 1.
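To make that concrete, here's a small sketch which reproduces the 'Vol wt.' row from the other numbers in the table above:

import pandas as pd

instruments = ['BOBL', 'BUND', 'EDOLLAR', 'KR10', 'KR3', 'US10', 'US20']
vol_wt_in_group = pd.Series([0.519235, 0.480765, 0.292898, 0.477368, 0.522632, 0.361774, 0.345328], instruments)
vol_wt_of_group = pd.Series([0.413870, 0.413870, 0.162705, 0.423425, 0.423425, 0.162705, 0.162705], instruments)
div_mult_of_group = pd.Series([1.022638, 1.022638, 1.026137, 1.038917, 1.038917, 1.026137, 1.026137], instruments)

raw = vol_wt_in_group * vol_wt_of_group * div_mult_of_group
print(raw / raw.sum())   # matches the 'Vol wt.' row, e.g. BOBL ~0.2133, US10 ~0.0586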

Finally we're ready to construct the top level group, in which the bonds as a whole is asset '0'. First the correlation matrix:

notUsedYet = p.volatility_weights
p.aggregate_portfolio.corr_matrix
Out[212]: 
          0         1         2
0  1.000000 -0.157908 -0.168607
1 -0.157908  1.000000  0.016346
2 -0.168607  0.016346  1.000000

All these assets, bonds [0], energies [1], and corn [2], are pretty uncorrelated, though bonds might just have the edge (being slightly negatively correlated with the other two):

p.diags.aggregate
Out[208]: 
                                  0         1         2
Raw vol (no SR adj or DM)  0.377518  0.282948  0.339534
Vol (with SR adj no DM)    0.557443  0.201163  0.241394
SR                         1.142585 -0.871979 -0.801852
Div mult                   1.252992  1.278761  1.000000 
 Portfolio containing 3 sub portfolios aggregate 

Now to calculate the final weights:

p.diags

Out[241]: 
                       BOBL      BUND      CORN   CRUDE_W   EDOLLAR    GAS_US  \
Vol wt in group    0.213339  0.197533  1.000000  0.539925  0.047473  0.460075   
Vol wt. of group   0.557443  0.557443  0.241394  0.201163  0.557443  0.201163   
Div mult of group  1.252992  1.252992  1.000000  1.278761  1.252992  1.278761   
Vol wt.            0.124476  0.115254  0.201648  0.116022  0.027699  0.098863   
                       KR10       KR3      US10      US20  
Vol wt in group    0.203860  0.223189  0.058636  0.055971  
Vol wt. of group   0.557443  0.557443  0.557443  0.557443  
Div mult of group  1.252992  1.252992  1.252992  1.252992  
Vol wt.            0.118945  0.130224  0.034212  0.032657   
 Portfolio containing 3 sub portfolios 


We've now got the final volatility weights. Here's another way of viewing them:

# First remind ourselves of the volatility weights
dict([(instr,wt) for instr,wt in zip(p.instruments, p.volatility_weights)])
Out[80]: 
{'BOBL': 0.12447636469041611,
 'BUND': 0.11525384132670763,
 'CORN': 0.20164774158721335,
 'CRUDE_W': 0.11602155610023207,
 'EDOLLAR': 0.027698823230085486,
 'GAS_US': 0.09886319534295436,
 'KR10': 0.11894543449866347,
 'KR3': 0.13022374999090081,
 'US10': 0.034212303586599956,
 'US20': 0.032656989646226771}
The most striking difference from the spreadsheet version is that, by lumping Eurodollar in with the other US bonds, it gets a much smaller vol weight. German and Korean bonds have gained as a result; the energies and Corn are pretty similar.

Python: Calculate cash weights

p=Portfolio(returns)
dict([(instr,wt) for instr,wt in zip(p.instruments, p.cash_weights)])
Natural top level grouping used
Out[79]: 
{'BOBL': 0.21885945926487166,
 'BUND': 0.079116240615862948,
 'CORN': 0.036453365347104472,
 'CRUDE_W': 0.015005426640542012,
 'EDOLLAR': 0.10335586678017628,
 'GAS_US': 0.009421184504702888,
 'KR10': 0.10142345423259323,
 'KR3': 0.39929206844323878,
 'US10': 0.025088747004851766,
 'US20': 0.011984187166055982}


Obviously the less risky assets like 3 year Korean bonds and Eurodollar get a larger cash weight. It's also possible to see how these were calculated from the final volatility weights:

p.diags.cash
Out[199]: 
                  BOBL      BUND      CORN   CRUDE_W   EDOLLAR    GAS_US  \
Vol weights   0.124476  0.115254  0.201648  0.116022  0.027699  0.098863   
Std.          0.018965  0.048575  0.184449  0.257816  0.008936  0.349904   
Cash weights  0.218859  0.079116  0.036453  0.015005  0.103356  0.009421   
                  KR10       KR3      US10      US20  
Vol weights   0.118945  0.130224  0.034212  0.032657  
Std.          0.039105  0.010875  0.045470  0.090863  
Cash weights  0.101423  0.399292  0.025089  0.011984   
 Portfolio containing 10 instruments (cash calculations) 
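The calculation behind that table is straightforward: divide each volatility weight by the asset's own standard deviation, then renormalise. A minimal sketch:

import pandas as pd

def cash_weights_from_vol_weights(vol_weights: pd.Series, stdev: pd.Series) -> pd.Series:
    # divide each vol weight by the asset's own standard deviation, then renormalise
    raw = vol_weights / stdev
    return raw / raw.sum()

# feeding in the 'Vol weights' and 'Std.' rows from p.diags.cash above reproduces the cash weights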


Python: Check risk target is hit, adjust weights if required

(optional: with risk target)

The natural risk of the unconstrained portfolio is quite low: 1.59% (a bit higher than the spreadsheet version, since we haven't allocated as much to Eurodollar)


p=Portfolio(returns)
p.portfolio_std
Natural top level grouping used
Out[82]: 0.015948015324395711


Let's explore the possible scenarios:
  • Risk target lower than 1.59%, eg 1%: We'd need to add cash to the portfolio. 
p=Portfolio(returns, risk_target=.01)

# if cash weights add up to less than 1, must be including cash in the portfolio

sum(p.cash_weights)

Calculating weights to hit a risk target of 0.010000

Natural top level grouping used
Too much risk 0.372963 of the portfolio will be cash
Out[84]: 0.62703727056889502

# check risk target hit
p.portfolio_std
Out[85]: 0.01

With a 1% risk target you'd need to put 37.3% of your portfolio into cash; with the rest going into the constructed portfolio.
  • Risk target higher than 1.59% with leverage allowed, eg 10%
p=Portfolio(returns, risk_target=.1, allow_leverage=True)

# If sum of cash weights>1 we must be using leverage
sum(p.cash_weights)
Calculating weights to hit a risk target of 0.100000
Natural top level grouping used
Not enough risk leverage factor of 6.270373 applied
Out[87]: 6.2703727056889518

# check target hit
p.portfolio_std
Out[88]: 0.10000000000000001

You'd need to apply a leverage factor; with a risk target of 10% you'd need a leverage factor of 6.27
  • Risk target higher than 1.59% without leverage: 
p=Portfolio(returns, risk_target=.1)
Calculating weights to hit a risk target of 0.100000
Not enough risk, no leverage allowed, using partition method
Applying partition to hit risk target
Partitioning into two groups to hit risk target of 0.100000
Need to limit low cash group to 0.005336 (vol) 0.323992 (cash) of portfolio to hit risk target of 0.100000
Applying partition to hit risk target
Partitioning into two groups to hit risk target of 0.100000

# look at cash weights
dict([(instr,wt) for instr,wt in zip(p.instruments, p.cash_weights)])
Out[90]: 
{'BOBL': 0.07548008030352539,
 'BUND': 0.027285547606928903,
 'CORN': 0.3285778602871447,
 'CRUDE_W': 0.19743348662518673,
 'EDOLLAR': 0.035645291049388697,
 'GAS_US': 0.15010566887898191,
 'KR10': 0.034978842111056153,
 'KR3': 0.13770753839879318,
 'US10': 0.0086525875783564771,
 'US20': 0.0041330971606378854}

# check risk target hit
p.portfolio_std
Out[91]: 0.10001663416516968

In this case the code has to constrain the proportion of the portfolio that is allocated to low risk assets (bonds and rates). 
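For completeness, here's a sketch of the scaling arithmetic implied by the first two scenarios above (the partition method used in the third scenario is more involved):

def risk_scaling(natural_risk, risk_target):
    # below natural risk: hold some cash; above it (if leverage allowed): apply a leverage factor
    if risk_target <= natural_risk:
        return dict(cash_fraction=1.0 - risk_target / natural_risk, leverage=1.0)
    return dict(cash_fraction=0.0, leverage=risk_target / natural_risk)

# risk_scaling(0.01595, 0.01)   -> ~37.3% in cash
# risk_scaling(0.01595, 0.10)   -> leverage factor of ~6.27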



What's next


In the next post I'll test the method (in its back testable python format - otherwise (a) the results could arguably be forward looking, and (b) I have now seen more than enough spreadsheets for 2018, thank you very much) against some alternatives. It could take me a few weeks to post this, as I will be somewhat busy with Christmas, university, and book writing commitments!