This is part one of a series of posts about using optimisation to get the best possible portfolio given a relatively small amount of capital.

In this short post I present the idea, and discuss some issues that I need to resolve. It's a bit of a stream of consciousness! It's less of a blog post and more my random jottings on the subject, converted from scribbles to electronic prose. It's a precursor to further posts where I will start designing and testing the method.

## I am sorry for my size

There is a little-known book about the City of London in the 1980s (*The Buck Stops Here*), which contains quite an amusing anecdote. The stockbroker, who has recently been fired, goes for a meal and a drink with a Japanese client:

"His enzymes had let him down again, and he was a bit drunk, in a benign sort of way 'I am sorry, Mr Parton for my size' he kept on muttering. I caught the stares of a few passers-by and wanted to say to them, this man does not mean what you think he means."

Of course the Japanese fund manager is referring to the size of his *fund*, which is relatively modest (and this is why the broker has been canned in the first place: as a specialist in selling European equities to Japanese investors, who prefer to invest domestically or at a push in the US, he is doomed).

My fund, or rather my trading account, is also relatively modest. It's larger than the average retail account, but by no means the multi-billion dollar funds I used to jockey back in the days when I had a proper job.

This is.... unfortunate. Why does it matter? Obviously it means fewer bragging rights in Soho wine bars, but that doesn't bother me (especially as at the time of writing, Soho wine bars are outside table service with NHS track and trace enabled only). No, what bothers me is this:

Compare this with the effect of adding *trading rules* while keeping the number of instruments constant: there the increase is slower, and also begins to show reduced marginal gains. Here we're still getting fairly steady improvements in performance at the 33 instrument mark. If there is an optimal number of instruments (at which point the marginal improvement becomes non-existent), it's clearly much more than 33, or even the 37 or so I've traded with (give or take) since 2014.

**Diversification across instruments is the only free lunch in finance.**

However it isn't actually a free lunch. Every extra instrument you trade will use up capital (this isn't true for trading rules, at least not the way my system is implemented). This problem is most pressing for futures traders, since you can't trade fractions of a futures contract, and most contracts are very large in dollar risk compared to the average person's trading account.

This means that with less capital you can't trade the 400+ or so instruments traded by AHL and other large CTAs. Even if we put aside the OTC instruments and cash equities that these funds trade, and just stick to futures, there are something like 70 additional futures markets I don't already trade which are liquid enough, not massive in size, have cheap data, and don't cost too much. But there is no way I could trade over 100 markets with my capital.

And this is a serious problem for retail traders, which is why I wrote a whole book about how to make the best use of scarce capital (the subject is also discussed at length in my first and second books). Diversification across instruments is the main competitive advantage that large funds have.

So I'm stuck with around 37 instruments, and I can only manage that many because of an ugly hack that I wrote about at some length here.

That ugly hack is worth a brief discussion (though you are welcome to read the post). It relies on the fact that, with some exceptions, a larger **forecast** (my scaled measure of *expected* risk-adjusted return) implies a larger *ex-post* risk-adjusted return. This is something I analysed in more detail in this more recent post.

So in the ugly hack I ignore forecasts that are too small, and then scale up forecasts beyond some threshold more aggressively (to ensure that the scaling properties of the forecast are unchanged). I have to do this in markets where my modest capital is most pressing: those with relatively large contract sizes.
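A minimal sketch of that kind of forecast filter, with purely illustrative threshold and cap values (not my actual parameters), might look like this:

```python
def adjusted_forecast(raw_forecast: float,
                      threshold: float = 6.0,
                      cap: float = 20.0) -> float:
    """Ignore forecasts that are too small, then scale the survivors
    up more aggressively so that a maximum raw forecast still maps to
    the maximum adjusted forecast. Threshold and cap values here are
    illustrative, not the actual system parameters."""
    size = abs(raw_forecast)
    if size < threshold:
        return 0.0
    sign = 1.0 if raw_forecast > 0 else -1.0
    # Map the surviving range [threshold, cap] back onto [0, cap]
    scaled = (size - threshold) * cap / (cap - threshold)
    return sign * min(scaled, cap)
```

The key property is that the mapping preserves monotonicity: a bigger raw forecast still means a bigger adjusted forecast, which matters given the point above that larger forecasts are better.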

The important point here is that larger forecasts are better - hold on to that point.

## Optimisation to the rescue

Now any financial quant worth their salt would read what I've just written and say 'Pff! That's just an optimisation problem'.

'Pff?' I'd reply.

'Mais oui*. All you need to do is take the expected returns and covariance matrix, limit the optimisation weights to discrete values, and press F9**'

** Thanks to their excellent Grande École system, most quants are French*

*** Surprisingly large amounts of the financial system, especially on the sell side, run in Excel*

'But where do I get the expected returns from?'

'Boff! You already have the, how do you say, forecasts? A higher forecast means a higher expected return, does it not?'

'Yes, but there is no obvious mapping... Also aren't optimisations somewhat.... well not robust?'

'Only if handled by an inexperienced Rosbif like yourself. For a suitable fee I can of course help you out....'

Now I can't afford to pay this imaginary quant a fee, and of course she is imaginary, so we'll have to come up with a better solution using a methodology that I understand (no doubt much simpler than what is taught in the hallowed lecture theatres of the École Polytechnique). And the building block we're going to use is **Black-Litterman**.

## A brief idiot's guide to Black-Litterman

Well, Black and Litterman are of course the legendary (and sadly missed) Fischer Black, of Black-Scholes-Merton and Black-Derman-Toy fame, and GSAM legend Bob Litterman. And their model deals with the problem I highlighted above: 'But where do I get the expected returns from?'

And the answer is you get them from an **inverse portfolio optimisation**. You start with a portfolio of weights (let's put aside for the moment the question of where they come from). Then you estimate a covariance matrix. Then you run the classical Markowitz optimisation (find the optimal weights given a vector of expected returns and a covariance matrix, and some risk tolerance or utility function) in reverse so it becomes **find the expected returns given a vector of weights and a covariance matrix**.

BL (as I will say henceforth) used the market portfolio for their starting weights, and hence the resulting implied returns are the 'equilibrium returns': the returns that are expected given that the 'average' (in an aggregate sense) investor must hold the market portfolio by construction.
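In code, the reverse step is a one-liner. Here is a toy sketch; the weights, covariance matrix and risk aversion are all made-up illustrative numbers:

```python
import numpy as np

def implied_returns(w: np.ndarray, cov: np.ndarray,
                    risk_aversion: float) -> np.ndarray:
    """Reverse Markowitz optimisation: find the expected returns that
    would make the given weights optimal, mu = lambda * Sigma * w."""
    return risk_aversion * cov @ w

# Illustrative inputs: two assets with market-cap style weights
w = np.array([0.6, 0.4])
cov = np.array([[0.04, 0.01],
                [0.01, 0.02]])
mu = implied_returns(w, cov, risk_aversion=2.5)  # array([0.07, 0.035])
```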

Once you have your expected returns you can combine them with some **forecasted returns**. Perhaps you want to include the discretionary opinion of your chief economist. Or maybe you've got some kind of systematic model for forecasting returns. In any case you take a weighted average of the original equilibrium returns and your forecasts (so this is Bayesian in character as we shrink our forecasts towards the equilibrium returns). Now with your new vector of expected returns you run **the normal optimisation forward**; using the same covariance matrix you derive a new set of optimal weights.

(The full paper is here)

BL portfolios have some nice properties. If you make no changes at all to the expected returns then you'll recover the original weights (this is a good way to check your code is working!). If you replace them completely, you'll basically have the portfolio implied by your forecasts (which will usually be not very robust at all, with the usual problem of extreme weights). But a blend of the two sets of expected returns, if weighted mostly towards the equilibrium returns, will produce robust portfolios that are tilted away from the market cap weights to reflect our forecasts.
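These properties are easy to demonstrate. The sketch below (continuing with made-up toy numbers) runs the reverse step, checks that the forward optimisation with unchanged expected returns recovers the original weights, then blends in a forecast and re-optimises:

```python
import numpy as np

risk_aversion = 2.5
w_original = np.array([0.6, 0.4])
cov = np.array([[0.04, 0.01],
                [0.01, 0.02]])

# Reverse optimisation: implied 'equilibrium' returns
mu_eq = risk_aversion * cov @ w_original

# Forward optimisation (unconstrained closed form: solve lambda*Sigma*w = mu).
# With unchanged expected returns we recover the original weights exactly.
w_roundtrip = np.linalg.solve(risk_aversion * cov, mu_eq)
assert np.allclose(w_roundtrip, w_original)

# Blend equilibrium returns with a forecast, weighted mostly towards
# equilibrium (the 80/20 shrinkage split is an illustrative choice)
mu_forecast = np.array([0.10, 0.02])   # we like asset one
mu_blend = 0.8 * mu_eq + 0.2 * mu_forecast
w_tilted = np.linalg.solve(risk_aversion * cov, mu_blend)
# w_tilted is modestly tilted towards asset one, away from w_original
```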

I'm a fan of BL because it accounts, to an extent, for the hierarchy of inputs to a portfolio optimisation. Expected returns are the hardest to forecast, and small changes have a big effect on the output. Standard deviations are relatively easy to forecast, and small changes have a small effect on the output. Correlations fall somewhere in the middle. BL effectively assumes we can predict standard deviations and correlations perfectly, but doesn't make the same assumption about expected returns.

But I don't actually use BL for optimisation, mainly because in the kind of problems I'm usually dealing with (eg deciding how to linearly weight a variety of trading rules and instruments) it isn't obvious what the 'market cap portfolio' should be. And I'm not going to use it for its intended purpose here either.

## The brilliant idea

We can use the BL methodology to do something rather cool, interesting, and fun (and completely different from the original intent). We can run the backward optimisation, and then the forward, without making any changes to the expected returns. Instead we make **some other change to the optimisation**. Most commonly this would be the introduction of constraints: a limit on emerging market exposure, or a position size limit, or... and this is relevant... **a discrete position size constraint**.

So the plan looks something like this:

- Run my standard position generation function, which will produce a vector of desired contract positions across instruments, all of which will be non integer. Let's call this the 'original' portfolio weights. The main inputs into this calculation are the forecast, instrument weight (as a proportion of risk capital allocated), current volatility of the instrument, long run target volatility and the instrument diversification multiplier (see here, and search for 'why does expected risk vary')
- Estimate a covariance matrix Σ and a risk aversion coefficient λ
- Using a reverse Markowitz, BL style, calculate the implied expected returns for each instrument, µ. Since this step has no constraints there is a closed form for the reverse optimisation: µ = λΣw
- Run the optimisation forward using µ, Σ, λ, with a constraint that only integer contract positions can be taken.
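Since the reverse step has a closed form and the forward step differs only by the integer constraint, a brute-force sketch of the whole plan fits in a few lines. Everything here is illustrative: a real version would use the actual position generation outputs, a proper covariance estimate, and a much smarter search than exhaustive enumeration:

```python
import itertools
import numpy as np

def integer_forward(mu, cov, risk_aversion, bounds):
    """Forward optimisation constrained to integer contract positions,
    by exhaustive search over mean-variance utility. Only viable for
    toy problems; with many instruments a smarter search is needed."""
    best_w, best_util = None, -np.inf
    for cand in itertools.product(*(range(lo, hi + 1) for lo, hi in bounds)):
        w = np.array(cand, dtype=float)
        util = mu @ w - 0.5 * risk_aversion * w @ cov @ w
        if util > best_util:
            best_w, best_util = w, util
    return best_w

# Illustrative inputs: desired (unrounded) contract positions from the
# standard position generation, and a covariance of per-contract returns
risk_aversion = 2.5
cov = np.array([[0.04, 0.01],
                [0.01, 0.02]])
desired = np.array([0.6, 0.6])

mu = risk_aversion * cov @ desired                  # reverse step
best = integer_forward(mu, cov, risk_aversion,
                       bounds=[(-5, 5), (-5, 5)])   # forward step
```

Note that the answer is not what you'd get from naively rounding each position separately: with these numbers, because the two instruments are positively correlated, the joint optimisation prefers one whole contract in the first instrument and none in the second, rather than a contract in each.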

## The brilliant idea is harder than it first sounds: some small problems

There are a *lot* of unanswered questions here. I've spent a long time thinking about this idea (over 18 months), and it's actually much more complicated than it might first seem.

**The search space is huge: something like 228 instruments.** Anything we can do to reduce the area that has to be searched would be good! For example, I'd be reluctant to put more than 10% of my risk capital in a single instrument. That sets an upper and lower limit on position size.

Another way to shrink the problem would be to mark some instruments as '**reduce only**', for which the maximum position would be the current position (if long; the minimum, if short). This list would be updated automatically as instruments fell below, or suddenly qualified for, my required criteria for volume and costs. There would be no need to eliminate instruments that were 'too big to trade'; this would happen naturally if 10% of risk capital wasn't sufficient to take even a single contract of position.
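Translating the 10% risk cap and the reduce-only flag into bounds for the optimiser might look something like this (a hypothetical helper; the risk-per-contract approximation and parameter names are mine, not the actual system's):

```python
import math

def position_bounds(capital: float, risk_cap: float, price: float,
                    multiplier: float, annual_vol: float,
                    reduce_only: bool = False,
                    current_position: int = 0) -> tuple:
    """Integer position bounds for one instrument. Approximates the
    annualised risk of one contract as price * multiplier * vol, then
    caps the position so at most `risk_cap` of capital is at risk."""
    risk_per_contract = price * multiplier * annual_vol
    max_contracts = math.floor(capital * risk_cap / risk_per_contract)
    if reduce_only:
        # Allowed to cut or close the position, but not to add to it
        return (min(0, current_position), max(0, current_position))
    return (-max_contracts, max_contracts)

# Illustrative numbers: a 500k account, 10% risk cap, price 4000,
# contract multiplier 5, 16% annualised volatility
bounds = position_bounds(500_000, 0.10, 4000, 5, 0.16)  # (-15, 15)
```

An instrument that is 'too big to trade' then drops out automatically: if `max_contracts` comes out as zero, the only feasible position is zero.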

## And some big problems

**Ideally the optimisation would naturally result in a portfolio which has about the same amount of risk as the original. Which is important, because there is information in the amount of risk that the original strategy positions want to take.**

Then there is the question of which correlation matrix to use. Consider an example: suppose we hold US 2 year, US 5 year and S&P 500 futures with equal forecasts, and with instrument weights set as they *should* be given that **historically** the bonds have been relatively uncorrelated with SP500. Also suppose those weights are a result of doing a naive Markowitz optimisation with some specific correlation matrix of trading subsystem returns (not true in practice, but we'll come to that).

If the reverse optimisation uses that same *correlation of trading subsystem returns*, then in theory we'd end up with expected returns that were equal (actually risk-adjusted returns that were equal, but we're ignoring risk and focusing on correlation for now). Which is all fine and correct, since the forecasts are equal.

But now suppose **the current correlations of the instrument returns** of US2, US5 and SP500 are all equal and positive (so the world has changed, and stocks and bonds are now highly correlated). If we were to use *this* correlation matrix in the initial (reverse) optimisation then our implied expected return would be higher for SP500 than it is for US2 and US5 (ignoring risk again). This doesn't seem right.

*Reverse / Forward optimisation: which correlation matrix used*

__A: Subsystem correlation / Subsystem correlation__

__B: Current instrument correlation / Current instrument correlation__

__C: Current instrument correlation / Subsystem correlation__

__D: Subsystem correlation / Current instrument correlation__
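To see why the choice matters, here is a toy calculation (entirely made-up numbers) computing the implied returns for the same positions under two different correlation matrices:

```python
import numpy as np

def implied_returns(w, corr, vols, risk_aversion=2.0):
    """Reverse optimisation mu = lambda * Sigma * w, with Sigma built
    from a correlation matrix and a vector of volatilities."""
    cov = np.outer(vols, vols) * corr
    return risk_aversion * cov @ w

# Equal positions in three instruments with equal volatility
w = np.array([1.0, 1.0, 1.0])
vols = np.array([0.1, 0.1, 0.1])

# Subsystem returns roughly uncorrelated: implied returns all equal,
# consistent with the equal forecasts
mu_subsystem = implied_returns(w, np.eye(3), vols)

# Instrument returns where the first two instruments (think the two
# bond markets) are highly correlated with each other and less so with
# the third: the implied returns now differ across instruments, even
# though nothing about the forecasts has changed
corr_inst = np.array([[1.0, 0.9, 0.3],
                      [0.9, 1.0, 0.3],
                      [0.3, 0.3, 1.0]])
mu_instrument = implied_returns(w, corr_inst, vols)
```

Which of the four combinations above is 'correct' is exactly the open question; the point of the sketch is just that the implied returns, and hence the final integer positions, depend materially on the choice.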

## Comments

This is really interesting, thanks for posting. I wonder what the compute requirements would be if you added a faster moving trading sub-system in addition to your trend follower and then performed a multi-period optimisation...


Could be expensive.

Not if the cost penalty does its job properly.

Of course that doesn't always work in practice.

I meant for your cloud compute bill :)

Mais oui, not mai oui :-)

I've corrected it.

Rob, this is a really good topic; I've struggled for years trying to understand it. I've currently been doing forecast filtering, but am less than happy with it. I am going to start coding up this approach; I don't suppose you have any worked examples to share, or is that for part deux? Thanks once again for the great thought leadership; now all we need is some JGB micro contracts!

Merci! No, I'm currently writing the code for part two. So watch this space.

Thanks Rob as always! I will be implementing mine in Java (long story why Java, but mainly because I haven't got the time to re-implement in Python, although at some point I will have no choice), but happy to collaborate in any way I can, or to cross-check the results with Java as a secondary check.
