*handcrafting*: a method for portfolio construction which human beings can use without computing power, or at least with nothing more than a spreadsheet. The method aims to achieve the following goals:

- Humans can trust it: intuitive and transparent method which produces robust weights
- Can be easily implemented by a human in a spreadsheet
- Can be back tested
- Grounded in solid theoretical foundations
- Takes account of uncertainty in data estimates
- Decent out of sample performance
- Addresses the problem of allocating capital to assets on a long only basis, or to trading strategies. It won't be suitable for a long/short portfolio.

This is the third in a series of posts on the handcrafting method.

- The first post can be found here, and it motivates the need for a method like this.
- In the second post I build up the various components of the method, and discuss why they are needed.
- In this, the third post, I'll explain how you'd actually apply the method step by step, with code.
- Post four will test the method with artificial data
- The final post will use real data

This will be a 'twin track' post; in which I'll outline two implementations:

- a spreadsheet based method, suitable for small numbers of assets where you need a one-off portfolio for live trading rather than a repeated backtest. It's also great for understanding the intuition of the method - a big plus point of this technique.
- a Python code based method. This uses (almost) exactly the same method, but can be backtested (the difference is that the grouping of assets is done manually in the spreadsheet based method, but automatically here, based on the correlation matrix). The code can be found here; although this will live within the pysystemtrade ecosystem I've deliberately tried to make it as self contained as possible, so you could easily drop it into your own framework.

## The demonstration

To demonstrate the implementation I'm going to need some data. This won't be the full blown real data that I'll be using to test the method properly, but we do need *something*. It needs to be an interesting data set; with the following characteristics:

- different levels of volatility (so not a bunch of trading systems)
- a hierarchy of 3 levels (more would be too complex for the human implementation; fewer wouldn't be a stern enough test)
- not so many assets that the human implementation becomes unwieldy

I'm going to use long only weekly returns from the following instruments: BOBL, BUND, CORN, CRUDE_W, EURODOLLAR, GAS_US, KR10, KR3, US10, US20; from 2014 to the present (since for some of these instruments I only have data for the last 5 years).

Because this isn't a proper test I won't be doing any fancy rolling out of sample optimisation, just a single portfolio.

The descriptive statistics can be found here. The python code which gets the data (using pysystemtrade), is here.

(I've written the handcrafting functions to be standalone; when I come to testing them with real data I'll show you how to hook them into pysystemtrade.)

## Overview of the method

Here are the stages involved in the handcrafting method. Note there are a few options involved:

- (Optional if using a risk target, and automated): partition the assets into high and low volatility
- Group the assets hierarchically (if step 1 is followed, this will form the top level grouping). This will be done either by (i) an automated clustering algorithm or (ii) human common sense.
- Calculate volatility weights within each group at the lowest level, proceeding upwards. These weights will either be equal, or use the candidate matching technique described in the previous post.
- (Optionally) Calculate Sharpe Ratio adjustments. Apply these to the weights from step 3.
- Calculate diversification multipliers for each group. Apply these to the weights from step 4.
- Calculate cash weights using the volatility of each asset.
- (Optionally) if a risk target was used with a manual method, partition the top level groups into high and low volatility.
- (Optionally) if a risk target was supplied; use the technique outlined in my previous post to ensure the target is hit.

## Spreadsheet: Group the assets hierarchically

A suggested grouping is here. Hopefully it's fairly self explanatory. There could be some debate about whether Eurodollar and bonds should be glued together, but part of doing it this way was to see if the diversification multiplier fixes this potential mistake.

## Spreadsheet: Calculate volatility weights

The calculations are shown here.

Notice that for most groups there are only one or two assets, so things are relatively trivial. Then at the top level (level 1) we have three assets, so things are a bit more fun. I use a simple average of correlations to construct a correlation matrix for the top level groups. Then I use a weighted average of two candidate matrices to work out the required weights for the top level groups.
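The group level correlation step can be sketched in Python. This is a toy illustration with made-up numbers, and `avg_cross_corr` is an illustrative helper, not part of the actual spreadsheet or code:

```python
import numpy as np

# Sketch of the 'simple average of correlations' step: the correlation
# between two groups is taken as the mean of all pairwise correlations
# between the members of each group. Toy numbers, not the real data.
def avg_cross_corr(corr, group_a, group_b):
    return np.mean([corr[i, j] for i in group_a for j in group_b])

corr = np.array([[1.0, 0.9, 0.2, 0.1],
                 [0.9, 1.0, 0.3, 0.2],
                 [0.2, 0.3, 1.0, 0.7],
                 [0.1, 0.2, 0.7, 1.0]])

# correlation between group {0,1} and group {2,3}
rho = avg_cross_corr(corr, [0, 1], [2, 3])
```

Doing this for every pair of groups gives the small correlation matrix used at the top level.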

The weights come out as follows:

- Developed market bonds, which we have a lot of, 3.6% each for a total of 14.4%
- Emerging market bonds (just Korea), with 7.2% each for a total of 14.4%
- Energies get 10.7% each, for a total of 21.4%
- Corn gets 21.4%
- Eurodollar gets 28.6%

## Spreadsheet: Calculate Sharpe Ratio adjustments (optionally)

Adjustments for Sharpe Ratios are shown in this spreadsheet. You should follow the calculations down the page, as they are done in a bottom up fashion. I haven't bothered with interpolating the heuristic adjustments, instead I've just used VLOOKUP to match the closest adjustment row.
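The VLOOKUP-style nearest-row matching can be mimicked in a few lines. Note that the `(relative SR, multiplier)` rows below are purely illustrative placeholders, NOT the actual heuristic table from the previous post:

```python
# Nearest-row lookup mimicking the spreadsheet's VLOOKUP: map a relative
# Sharpe Ratio to a weight multiplier. Table values are placeholders.
ADJUSTMENT_ROWS = [(-0.5, 0.6), (-0.25, 0.8), (0.0, 1.0), (0.25, 1.2), (0.5, 1.4)]

def nearest_adjustment(relative_SR):
    # no interpolation: just take the row with the closest relative SR
    closest_row = min(ADJUSTMENT_ROWS, key=lambda row: abs(row[0] - relative_SR))
    return closest_row[1]
```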

## Spreadsheet: Calculate diversification multipliers (DM)

DM calculations are shown in this sheet. DMs are quite low in bonds (where the assets in each country are highly correlated), but much higher in commodities. The final set of changes is particularly striking; note the reallocation from the single instrument rates group (initial weight 30.7%, falls to 24.2%) to commodities (initial weight 29%, rises to 36.5%).
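For reference, the diversification multiplier of a group is one over the square root of w'Hw, where w are the group's volatility weights and H its correlation matrix. A minimal sketch with toy numbers (`div_multiplier` is an illustrative helper, not the spreadsheet formula itself):

```python
import numpy as np

# Diversification multiplier: DM = 1 / sqrt(w' H w), with w the group's
# volatility weights and H the group's correlation matrix (toy numbers).
def div_multiplier(weights, corr):
    w = np.array(weights)
    H = np.array(corr)
    return 1.0 / np.sqrt(w.dot(H).dot(w))

# highly correlated pair -> DM close to 1; less correlated -> higher DM
dm_high_corr = div_multiplier([0.5, 0.5], [[1.0, 0.9], [0.9, 1.0]])
dm_low_corr = div_multiplier([0.5, 0.5], [[1.0, 0.3], [0.3, 1.0]])
```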

## Spreadsheet: Calculate cash weights

(Almost) finally we calculate our cash weights, in this spreadsheet. Notice the huge weight to low volatility Eurodollar.
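The cash weight calculation itself is simple: divide each volatility weight by the asset's own standard deviation, then renormalise. A sketch with toy numbers:

```python
import numpy as np

# Cash weights: divide each vol weight by the asset's own standard
# deviation, then renormalise so the weights sum to 1 (toy numbers).
def cash_weights(vol_weights, stdevs):
    raw = np.array(vol_weights) / np.array(stdevs)
    return raw / raw.sum()

# equal vol weights, but the low vol asset takes most of the cash
cw = cash_weights([0.5, 0.5], [0.01, 0.10])
```

This is why a low volatility instrument like Eurodollar ends up with such a large cash weight.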

## Spreadsheet: Partition into high and low volatility

#### (optional: if risk target used with manual method)

If we're using a risk target we'll need to partition our top level groups (this is done automatically in Python, but spreadsheet people are allowed to choose their own groupings). Let's choose an arbitrary risk target: 10%. This should be achievable, since the average risk of our assets is 10.6%.

This is the average volatility of each group (calculated here):

- Bonds: 1.83%
- Commodities: 14.6%
- Rates: 0.89%

So we have:

- High vol: commodities
- Low vol: rates and bonds

(Not a massive surprise!)

## Spreadsheet: Check risk target is hit, adjust weights if required

#### (optional: with risk target)

The natural risk of the portfolio comes out at 1.09% (calculated here). Let's explore the possible scenarios:
- Risk target lower than 1.09%, eg 1%: We'd need to add cash to the portfolio. Using the spreadsheet with a 1% risk target you'd need to put 8.45% of your portfolio into cash; with the rest going into the constructed portfolio.
- Risk target higher than 1.09% with leverage allowed: You'd need to apply a leverage factor; with a risk target of 10% you'd need a leverage factor of 9.16
- Risk target higher than 1.09% without leverage: You'd need to constrain the proportion of the portfolio that is allocated to low risk assets (bonds and rates). The spreadsheet shows that this comes out at a 31.4% cash weight, with the rest in commodities. I've also recalculated the weights with this constraint to show how it comes out.

And here are those final weights (to hit 10% risk with no leverage):

- BOBL: 2.17%
- BUND: 0.78%
- US10: 0.44%
- US20: 0.23%
- KR3: 7.25%
- KR10: 1.86%
- EDOLLAR: 18.67%
- CORN: 36.67%
- CRUDE_W: 19.47%
- GAS_US: 12.45%
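The arithmetic behind the three scenarios can be sketched as follows. The numbers won't exactly reproduce the spreadsheet, since 1.09% is a rounded figure:

```python
# Scaling natural portfolio risk to a target (illustrative arithmetic only)
natural_risk = 0.0109   # rounded natural risk of the portfolio

def leverage_factor(natural, target):
    # >1 means leverage is needed; <1 means dilute the portfolio with cash
    return target / natural

def cash_fraction(natural, target):
    # fraction of capital held as cash when the target is below natural risk
    return 1.0 - target / natural

lf = leverage_factor(natural_risk, 0.10)   # roughly the 9.16 quoted above
cf = cash_fraction(natural_risk, 0.01)     # roughly the 8.45% quoted above
```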


## Python code

The handcrafting code is here. Although this file will ultimately be dumped into pysystemtrade, it's designed to be entirely self contained so you can use it in your own applications.

The code expects weekly returns, with all assets present throughout. It doesn't do rolling optimisation, or averaging over multiple assets; I still need to write code to hook it into pysystemtrade and achieve those various objectives.

The only input required is a pandas DataFrame `returns`, with named columns, containing weekly returns. The main object you'll be interacting with is called `Portfolio`.

Simplest use case, to go from returns to cash weights without risk targeting:

```
p = Portfolio(returns)
p.cash_weights
```

I won't document the API or methodology fully here, but hopefully you will get the idea.

## Python: Partition the assets into high and low volatility

#### (If using a risk target, and automated)

Let's try with a risk target of 10%:

```
p = Portfolio(returns, risk_target=.1)
p.sub_portfolios
```

```
Out[575]: [Portfolio with 7 instruments, Portfolio with 3 instruments]

p.sub_portfolios[0]
Out[576]: Portfolio with 7 instruments

p.sub_portfolios[0].instruments
Out[577]: ['BOBL', 'BUND', 'EDOLLAR', 'KR10', 'KR3', 'US10', 'US20']

p.sub_portfolios[1].instruments
Out[578]: ['CORN', 'CRUDE_W', 'GAS_US']
```

So all the bonds get put into one group, the other assets into another. Seems plausible.

Using an excessively high risk target is a bad idea:

```
p = Portfolio(returns, risk_target=.3)
p.sub_portfolios
```

```
Not many instruments have risk higher than target; portfolio will be concentrated to hit risk target
Out[584]: [Portfolio with 9 instruments, Portfolio with 1 instruments]
```

```
p = Portfolio(returns, risk_target=.4)
p.sub_portfolios
```

```
Exception: Risk target greater than vol of any instrument: will be impossible to hit risk target
```

The forced partitioning into two top level groups will not happen if leverage is allowed, or no risk target is supplied:

```
p = Portfolio(returns)  # no risk target
```

```
p.sub_portfolios
Natural top level grouping used
Out[44]:
[Portfolio with 7 instruments,
Portfolio with 2 instruments,
Portfolio with 1 instruments]
```

```
p=Portfolio(returns, risk_target=.3, allow_leverage=True)
p.sub_portfolios
Natural top level grouping used
Out[46]:
[Portfolio with 7 instruments,
Portfolio with 2 instruments,
Portfolio with 1 instruments]
```

## Python: Group the assets hierarchically

We have three top level groups: interest rates, energies, and Ags. The interest rate group is further divided into second level groupings by country: Korea, the US, and Germany. Here's an example where we allow the grouping to happen naturally:

```
p = Portfolio(returns)
p.show_subportfolio_tree()
Natural top level grouping used
Out[48]:
[' Contains 3 sub portfolios',
 ['... Contains 3 sub portfolios',
  ["...... Contains ['KR10', 'KR3']"],
  ["...... Contains ['EDOLLAR', 'US10', 'US20']"],
  ["...... Contains ['BOBL', 'BUND']"]],
 ["... Contains ['CRUDE_W', 'GAS_US']"],
 ["... Contains ['CORN']"]]
```

And here's an example where we partition by risk:

```
p = Portfolio(returns, risk_target=.1)
p.show_subportfolio_tree()
Applying partition to hit risk target
Partitioning into two groups to hit risk target of 0.100000
Out[42]:
[' Contains 2 sub portfolios',
 ['... Contains 3 sub portfolios',
  ["...... Contains ['KR10', 'KR3']"],
  ["...... Contains ['EDOLLAR', 'US10', 'US20']"],
  ["...... Contains ['BOBL', 'BUND']"]],
 ["... Contains ['CORN', 'CRUDE_W', 'GAS_US']"]]
```

There are now two top level groups as we saw above.


If you're a machine learning enthusiast who wishes to play around with the clustering algorithm, then the heavy lifting of the clustering algo is all done in this method of the portfolio object:

```
def _cluster_breakdown(self):
    X = self.corr_matrix.values
    d = sch.distance.pdist(X)
    L = sch.linkage(d, method='complete')

    # play with this line at your peril!!!
    ind = sch.fcluster(L, MAX_CLUSTER_SIZE, criterion='maxclust')

    return list(ind)
```

However I've found the results to be very similar regardless of the method used.

## Python: Calculate volatility weights


```
p=Portfolio(returns, use_SR_estimates=False) # turn off SR estimates for now
p.show_subportfolio_tree()
Natural top level grouping used
Out[52]:
[' Contains 3 sub portfolios',
['... Contains 3 sub portfolios',
["...... Contains ['KR10', 'KR3']"],
["...... Contains ['EDOLLAR', 'US10', 'US20']"],
["...... Contains ['BOBL', 'BUND']"]],
["... Contains ['CRUDE_W', 'GAS_US']"],
["... Contains ['CORN']"]]
```

Let's look at a few parts of the portfolio. Firstly the very simple single asset Corn portfolio:

```
# Just Corn, single asset
p.sub_portfolios[2].volatility_weights
Out[54]: [1.0]
```

The Energy portfolio is slightly more interesting with two assets; but this will default to equal volatility weights:


```
# Just two assets, so goes to equal vol weights
p.sub_portfolios[1].volatility_weights
Out[55]: [0.5, 0.5]
```

Only the US bonds (and STIR) portfolio has 3 assets, and so will use the candidate matching algorithm:

```
# The US bond group is the only interesting one
p.sub_portfolios[0].sub_portfolios[1].corr_matrix
Out[58]:
          EDOLLAR      US10      US20
EDOLLAR  1.000000  0.974097  0.872359
US10     0.974097  1.000000  0.924023
US20     0.872359  0.924023  1.000000

# Pretty close to equal weighting
p.sub_portfolios[0].sub_portfolios[1].volatility_weights
Out[57]: [0.28812193544790643, 0.36572016685796049, 0.34615789769413313]
```

## Python: Calculate Sharpe Ratio adjustments (optionally)

```
p = Portfolio(returns)  # by default Sharpe Ratio adjustments are on unless we turn them off
```

Let's examine a simple two asset portfolio to see how these work:

```
# Let's look at the energies portfolio
p.sub_portfolios[1]
Out[61]: Portfolio with 2 instruments

# first asset is awful, second worse
p.sub_portfolios[1].sharpe_ratio
Out[63]: array([-0.55334564, -0.8375069 ])

# Would be equal weights, now tilted towards first asset
p.sub_portfolios[1].volatility_weights
Out[62]: [0.5399245657079913, 0.46007543429200887]

# Can also see this information in one place
p.sub_portfolios[1].diags
Out[198]:
                      CRUDE_W    GAS_US
Raw vol (no SR adj)  0.500000  0.500000
Vol (with SR adj)    0.539925  0.460075
Sharpe Ratio        -0.553346 -0.837507
Portfolio containing ['CRUDE_W', 'GAS_US'] instruments
```

## Python: Calculate diversification multipliers

```
p = Portfolio(returns)
Natural top level grouping used

# not much diversification for bonds / rates within each country
p.sub_portfolios[0].sub_portfolios[0].div_mult
Out[67]: 1.0389170782708381   # Korea
p.sub_portfolios[0].sub_portfolios[1].div_mult
Out[68]: 1.0261371453175774   # US bonds and STIR
p.sub_portfolios[0].sub_portfolios[2].div_mult
Out[69]: 1.0226377699075955   # German bonds

# Quite decent when you put them together though
p.sub_portfolios[0].div_mult
Out[64]: 1.2529917422729928

# Energies group only two assets but quite uncorrelated
p.sub_portfolios[1].div_mult
Out[65]: 1.2787613327950775

# only one asset in corn group
p.sub_portfolios[2].div_mult
Out[66]: 1.0

# Not used in the code but good to know
p.div_mult
Out[71]: 2.0832290180687183
```

## Python: Aggregate up sub-portfolios

The portfolio in the python code is built up in a bottom up fashion. Let's see how this happens, by focusing on the 10 year US bond.

```
p = Portfolio(returns)
Natural top level grouping used
```

First the code calculates the vol weight for US bonds and rates, including a SR adjustment:

```
p.sub_portfolios[0].sub_portfolios[1].diags
Out[203]:
                      EDOLLAR      US10      US20
Raw vol (no SR adj)  0.288122  0.365720  0.346158
Vol (with SR adj)    0.292898  0.361774  0.345328
Sharpe Ratio         0.218935  0.164957  0.185952
Portfolio containing ['EDOLLAR', 'US10', 'US20'] instruments
```

This portfolio then joins the wider bond portfolio (here in column '1' - there are no meaningful names for parts of the wider portfolio - the code doesn't know this is US bonds):

```
p.sub_portfolios[0].diags.aggregate
Out[206]:
0 1 2
Raw vol (no SR adj or DM) 0.392114 0.261486 0.346399
Vol (with SR adj no DM) 0.423425 0.162705 0.413870
SR 0.985267 0.192553 1.185336
Div mult 1.038917 1.026137 1.022638
Portfolio containing 3 sub portfolios aggregate
```

The Sharpe Ratios, raw vol, and vol weights shown here are for the groups that we're aggregating together. So the raw vol weight on US bonds is 0.26. To see why, look at the correlation matrix:

```
p.sub_portfolios[0].aggregate_portfolio.corr_matrix
Out[211]:
          0         1         2
0  1.000000  0.493248  0.382147
1  0.493248  1.000000  0.715947
2  0.382147  0.715947  1.000000
```

You can see that US bonds (asset 1) are more highly correlated with assets 0 and 2 than those two are with each other. So US bonds get a lower raw weight. They also have a far worse Sharpe Ratio, so they're downweighted further relative to the other countries.

We can now work out what the weight of US 10 year bonds is amongst bonds as a whole:

```
p.sub_portfolios[0].diags

                       BOBL      BUND   EDOLLAR      KR10       KR3      US10      US20
Vol wt in group    0.519235  0.480765  0.292898  0.477368  0.522632  0.361774  0.345328
Vol wt. of group   0.413870  0.413870  0.162705  0.423425  0.423425  0.162705  0.162705
Div mult of group  1.022638  1.022638  1.026137  1.038917  1.038917  1.026137  1.026137
Vol wt.            0.213339  0.197533  0.047473  0.203860  0.223189  0.058636  0.055971
Portfolio containing 3 sub portfolios
```

The first row is the vol weight of the asset within its group; we've already seen this calculated. The next row is the vol weight of the group as a whole; again, we've already seen the figures for US bonds calculated above. After that is the diversification multiplier for each group. Finally we can see the volatility weight of US 10 year bonds in the bond group as a whole; equal to the vol weight within the group, multiplied by the vol weight of the group, multiplied by the diversification multiplier of the group; and then renormalised to add up to 1.
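In toy form, that aggregation is just (all numbers illustrative, not the actual figures above):

```python
import numpy as np

# Final vol weight of each asset = (weight within its group) x (vol weight
# of the group) x (group's diversification multiplier), renormalised so
# everything sums to 1.
groups = [
    # (within-group vol weights, group vol weight, group DM)
    (np.array([0.5, 0.5]), 0.6, 1.25),   # a two asset group
    (np.array([1.0]),      0.4, 1.00),   # a single asset group
]

raw = np.concatenate([w * group_wt * dm for w, group_wt, dm in groups])
final = raw / raw.sum()   # renormalise
```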

Finally we're ready to construct the top level group, in which the bonds as a whole is asset '0'. First the correlation matrix:

```
notUsedYet = p.volatility_weights
p.aggregate_portfolio.corr_matrix
Out[212]:
          0         1         2
0  1.000000 -0.157908 -0.168607
1 -0.157908  1.000000  0.016346
2 -0.168607  0.016346  1.000000
```

All these assets, bonds [0], energies [1], and corn [2] are pretty uncorrelated, though bonds might just have the edge:

```
p.diags.aggregate
Out[208]:
                                  0         1         2
Raw vol (no SR adj or DM)  0.377518  0.282948  0.339534
Vol (with SR adj no DM)    0.557443  0.201163  0.241394
SR                         1.142585 -0.871979 -0.801852
Div mult                   1.252992  1.278761  1.000000
Portfolio containing 3 sub portfolios aggregate
```

Now to calculate the final weights:

```
p.diags
Out[241]:
                       BOBL      BUND      CORN   CRUDE_W   EDOLLAR    GAS_US      KR10       KR3      US10      US20
Vol wt in group    0.213339  0.197533  1.000000  0.539925  0.047473  0.460075  0.203860  0.223189  0.058636  0.055971
Vol wt. of group   0.557443  0.557443  0.241394  0.201163  0.557443  0.201163  0.557443  0.557443  0.557443  0.557443
Div mult of group  1.252992  1.252992  1.000000  1.278761  1.252992  1.278761  1.252992  1.252992  1.252992  1.252992
Vol wt.            0.124476  0.115254  0.201648  0.116022  0.027699  0.098863  0.118945  0.130224  0.034212  0.032657
Portfolio containing 3 sub portfolios
```

We've now got the final volatility weights. Here's another way of viewing them:

```
# First remind ourselves of the volatility weights
dict([(instr, wt) for instr, wt in zip(p.instruments, p.volatility_weights)])
Out[80]:
{'BOBL': 0.12447636469041611,
 'BUND': 0.11525384132670763,
 'CORN': 0.20164774158721335,
 'CRUDE_W': 0.11602155610023207,
 'EDOLLAR': 0.027698823230085486,
 'GAS_US': 0.09886319534295436,
 'KR10': 0.11894543449866347,
 'KR3': 0.13022374999090081,
 'US10': 0.034212303586599956,
 'US20': 0.032656989646226771}
```

The most striking difference from the spreadsheet is that Eurodollar, lumped in with the other US bonds, gets a much smaller vol weight. German and Korean bonds have gained as a result; energies and Corn are pretty similar.

## Python: Calculate cash weights

```
p = Portfolio(returns)
dict([(instr, wt) for instr, wt in zip(p.instruments, p.cash_weights)])
```

```
Natural top level grouping used
Out[79]:
{'BOBL': 0.21885945926487166,
'BUND': 0.079116240615862948,
'CORN': 0.036453365347104472,
'CRUDE_W': 0.015005426640542012,
'EDOLLAR': 0.10335586678017628,
'GAS_US': 0.009421184504702888,
'KR10': 0.10142345423259323,
'KR3': 0.39929206844323878,
'US10': 0.025088747004851766,
'US20': 0.011984187166055982}
```

Obviously the less risky assets like 3 year Korean bonds and Eurodollar get a larger cash weight. It's also possible to see how these were calculated from the final volatility weights:

```
p.diags.cash
Out[199]:
                  BOBL      BUND      CORN   CRUDE_W   EDOLLAR    GAS_US      KR10       KR3      US10      US20
Vol weights   0.124476  0.115254  0.201648  0.116022  0.027699  0.098863  0.118945  0.130224  0.034212  0.032657
Std.          0.018965  0.048575  0.184449  0.257816  0.008936  0.349904  0.039105  0.010875  0.045470  0.090863
Cash weights  0.218859  0.079116  0.036453  0.015005  0.103356  0.009421  0.101423  0.399292  0.025089  0.011984
Portfolio containing 10 instruments (cash calculations)
```

## Python: Check risk target is hit, adjust weights if required

#### (optional: with risk target)

The natural risk of the unconstrained portfolio is quite low: 1.59% (a bit higher than the spreadsheet version, since we haven't allocated as much to Eurodollar)

```
p=Portfolio(returns)
p.portfolio_std
Natural top level grouping used
Out[82]: 0.015948015324395711
```

Let's explore the possible scenarios:
- Risk target lower than 1.59%, eg 1%: We'd need to add cash to the portfolio.

```
p = Portfolio(returns, risk_target=.01)

# if cash weights add up to less than 1, must be including cash in the portfolio
sum(p.cash_weights)
Calculating weights to hit a risk target of 0.010000
Natural top level grouping used
Too much risk 0.372963 of the portfolio will be cash
Out[84]: 0.62703727056889502

# check risk target hit
p.portfolio_std
Out[85]: 0.01
```

With a 1% risk target you'd need to put 37.3% of your portfolio into cash; with the rest going into the constructed portfolio.

- Risk target higher than 1.59% with leverage allowed, eg 10%

```
p = Portfolio(returns, risk_target=.1, allow_leverage=True)

# If sum of cash weights > 1 we must be using leverage
sum(p.cash_weights)
Calculating weights to hit a risk target of 0.100000
Natural top level grouping used
Not enough risk leverage factor of 6.270373 applied
Out[87]: 6.2703727056889518

# check target hit
p.portfolio_std
Out[88]: 0.10000000000000001
```

You'd need to apply a leverage factor; with a risk target of 10% you'd need a leverage factor of 6.27

- Risk target higher than 1.59% without leverage:

```
p = Portfolio(returns, risk_target=.1)
Calculating weights to hit a risk target of 0.100000
Not enough risk, no leverage allowed, using partition method
Applying partition to hit risk target
Partitioning into two groups to hit risk target of 0.100000
Need to limit low cash group to 0.005336 (vol) 0.323992 (cash) of portfolio to hit risk target of 0.100000
Applying partition to hit risk target
Partitioning into two groups to hit risk target of 0.100000

# look at cash weights
dict([(instr, wt) for instr, wt in zip(p.instruments, p.cash_weights)])
Out[90]:
{'BOBL': 0.07548008030352539,
 'BUND': 0.027285547606928903,
 'CORN': 0.3285778602871447,
 'CRUDE_W': 0.19743348662518673,
 'EDOLLAR': 0.035645291049388697,
 'GAS_US': 0.15010566887898191,
 'KR10': 0.034978842111056153,
 'KR3': 0.13770753839879318,
 'US10': 0.0086525875783564771,
 'US20': 0.0041330971606378854}

# check risk target hit
p.portfolio_std
Out[91]: 0.10001663416516968
```

In this case the code constrains the proportion of the portfolio that is allocated to low risk assets (bonds and rates).


## What's next

In the next post I'll test the method (in its back-testable Python format - otherwise (a) the results could arguably be forward looking, and (b) I have now seen more than enough spreadsheets for 2018, thank you very much) against some alternatives. It could take me a few weeks to post this, as I will be somewhat busy with Christmas, university, and book writing commitments!

Very interesting series. I have 2 questions about how to use the code:

1) When loading historical data, how far back do you look?

2) The returns input, you mean pct_change()?

1) I use 5 years, but it doesn't really matter as it's just an example. For correlations I'd probably use between 6 months and 2 years for assets; for strategies, all the available data. For Sharpe Ratios (if used), all the data available. For standard deviations the last month or so of daily returns is ideal, but 6 months or so of weekly returns is fine too.

2) Yes, percentage returns are the input.

Great post, looking forward to seeing more!

Thanks for the post, this is a great series. I tried running it with some different data and have run into an infinite recursion problem. My portfolio looks like:

```
[' Contains 6 sub portfolios',
 ["... Contains ['BUY', 'JPHF', 'PSP']"],
 ['... Contains 3 sub portfolios',
  ["...... Contains ['PBP', 'PUTW']"],
  ["...... Contains ['YYY']"],
  ["...... Contains ['JPHY']"]],
 ["... Contains ['AMU']"],
 ["... Contains ['DIVY']"],
 ["... Contains ['RYN']"],
 ["... Contains ['LAND']"]]
```

and I can get the cash_weights for each of the 6 sub-portfolios individually, but not the whole thing.

Looks like it gets stuck cycling through these 3 Portfolio methods:

```
File "/mnt/c/git/pysystemtrade/syscore/handcrafting.py", line 837, in volatility_weights
  weights_vol = self._calculate_volatility_weights()
File "/mnt/c/git/pysystemtrade/syscore/handcrafting.py", line 657, in _calculate_volatility_weights
  vol_weights = self._calculate_weights_aggregated_portfolio()
File "/mnt/c/git/pysystemtrade/syscore/handcrafting.py", line 586, in _calculate_weights_aggregated_portfolio
  aggregate_weights = aggregate_portfolio.volatility_weights
```

Any idea what's going on? Thanks.

Hmm. Two weird things there. Firstly the disaggregation looks weird - shouldn't contain any groups with a single market. Secondly the code can't deal with that corner case very well. Can you send me your data file in csv format? (rob AT systematicmoney.org)

This should now work for you - I've updated the gist. I'd forgotten to impose the maximum cluster size in the clustering algorithm... schoolboy error.