Monday, 6 September 2021

Truth and Liebor

This will be a bit different from my normal posts. It's basically some personal reflections on the LIBOR fixing scandal, prompted by having just read this book, written by Stelios Contogoulas.





This post isn't really a book review, although I will say that the book is definitely worth buying. Most of you have probably already read the excellent Spider Network. That is arguably better written than Stelios' book (as it's written by a professional journalist, and as anyone who has read my books knows, ex-traders are not always naturally gifted writers - Nassim Taleb is a black swan in this respect). Stelios' book is less polished, but he still does a good job of hooking you into the narrative, and it gets very exciting towards the end.

More importantly, as far as I am aware Stelios is the only person who has written a book about this scandal from the inside. And his book is very thoughtful and reflective, and his reflection has inspired some personal thoughts of my own.



Three traders


This post is about three people. One of them is Stelios. Another is an Italian by the name of Carlo Palombo. And the third is me.


Stelios


Carlo


Me


What do we have in common? Well, we're all in our forties, and our hair has long since departed our scalps. But more importantly we were all trading interest rate derivatives at Barclays Capital (as the investment banking arm of Barclays bank was known at the time) at the same time: from around September 2002 to February 2004 (when I left the bank). 

In fact, until early 2004 the lives and careers of Stelios and myself followed eerily similar tracks. Stelios of course grew up in Greece, not England, and is three years older than I am, but like me he lived abroad as an expat child. Like me he was interested in computers, and like me he decided a career in IT was not for him (in my case I dropped out after my first year at university, in his case after several years in IT consulting).

We both returned to education a little later in life, attending the University of Manchester at the same time. I was a mature Economics undergraduate, whilst Stelios was doing an MBA. We overlapped by about 18 months but we probably never met, although many of our lectures would have been in the same building.

Stelios was hired by Barclays in early 2002 as an associate after doing an internship (at the same time as I was doing an internship at AHL). When I was being interviewed for a position on the Barclays analyst programme, he had probably just started in the Canary Wharf office (5 North Colonnade - the home of Barclays investment bank then, and now, at least until next year). We were interviewed by some of the same people, a few months apart. 

We were both hired, I suspect, for ulterior motives. Stelios' computing experience meant that he didn't start properly trading for a couple of years, as he was initially tasked with rebuilding the bank's yield curve systems. My instinct is that I was hired because I had the right personality and was a few years older than the other graduates - more on that in a second.

In September 2002 I started on the graduate programme. The programme took in around 75 analysts and associates, covering back, middle and front office. I was one of only two traders. The other was Carlo Palombo.



Derivatives trading at Barclays


Stelios and Carlo were working within a few metres of each other, both on the interest rate swaps desk (which also traded FRAs). I was on the next bank of desks, but no more than 10 metres from each of them. My job was a little fancier, at least in theory. I was working on the exotic rates desk, which confusingly covered both vanilla options (swaptions, caps and floors) as well as actual exotics (bermudans, CMS, PRDMC...). However, like Stelios and Carlo, I was very much a junior trader.

My line manager was the desk MD, a very smart and decent guy who looked like a bouncer. But I reported day to day to the desk's senior trader, who ran the main vega book (options maturing in over a year; there was also a gamma book for shorter options which I eventually took over, plus various traders trading FX, caps/floors, inflation; and we also had an on desk quant / trader for the very fancy stuff). 

A thinly disguised version of this bloke appears in my first book ('Sergei'). He was an extremely unpleasant person to work for. I suspect I was hired - despite not having the PhD everyone else on the desk had - because it was thought that with a few years of work experience I would be able to deal with this character better than a 21-year-old neophyte or fresh-faced PhD. It sort of worked - at least for me; I didn't end up being a glorified coffee boy like most junior traders, as I refused to take any crap.

But Carlo was reporting to a guy called Jay (who traded the short Euro swaps and FRAs), who made my senior trader look like a social worker. He really gave Carlo hell, and the poor guy practically cowered under the tirade of abuse he got if he made even the slightest error. I felt sorry for Carlo, as I was working relatively relaxed hours (7am to 5pm), much less than the other analysts on the IB programme, and also a lot less than Carlo, who practically had to sleep under his desk to keep up with the workload. Interestingly, in Stelios' book he refers to Jay as:

'... very demanding as a person - particularly with juniors - but when he liked someone, he was a great manager and mentor'.

OK. Maybe I just didn't see his good side - perhaps he didn't like me or need to like me, or maybe I'm just a snowflake who was too soft to work on the trading floor. I certainly couldn't have worked on the swaps desk which was much larger than ours, and always seemed to have at least five people yelling abuse at each other. 

Outside of business I knew Carlo reasonably well as there were often nights at the pub or house parties with the other members of the grad programme, but I probably only spoke to Stelios half a dozen times during my time at Barclays. 



The crucial post-it note


We didn't have a huge amount of interaction with 'the delta desk' as we disparagingly called the swaps traders, although we were supposed to do our hedging with them internally, and we also used to occasionally get them to clear up the fixing risk on our books. Sometimes a complex deal would need co-ordination between the desks, but mostly we had a friendly(ish) distant rivalry. We thought the delta traders were a bit simple (how hard could it be to trade swaps and FRAs, compared to bermudan swaptions?), and they probably thought we were a bit lazy and arrogant. As a junior trader from the stuck up exotics desk I tried to avoid the very scary looking senior swaps traders like Jay wherever possible.

One day however we had some large expiries in our book, and the market price was very close to the strike. 

About 15 minutes before the expiry (and fixing time) 'Sergei' leant over to me and in an uncharacteristically quiet voice said 

'Go tell X that we have a large expiry on this morning.' 'X' was a senior swaps trader.

'Oh come on, don't make me walk over there. Why don't I just message or call him?' I moaned, not fancying running the gauntlet of the swaps desk.

'Don't be so f***** stupid. Go over and tell him, face to face,' hissed Sergei in reply. I rolled my eyes.

'For f**** sake,' he muttered, and grabbed a post-it note. 'Just do it. Here is the expiry we have on. I've written it down so you don't forget it. Make sure you get it right. And make sure you bring that post-it note right back here.'

Now I was intrigued. This was more like a spy mission than the normal humdrum business of trading. I wandered over to X (who fortunately was one of the nicer blokes on the swaps desk), and passed the crucial information on.

'We have this expiry today,' I said, and read off the post-it note. X nodded sagely but said nothing. I stood there for a few moments, not sure exactly what was supposed to happen next. He turned back to his screen, which was obviously my cue to leave.

I returned to my desk, and sat down. Sergei held out his hand without looking at me.

'Post-it note,' he snapped. I pulled the scrap of yellow paper out of the pocket I had stuffed it into, and passed it over. I watched as he methodically tore it into tiny pieces, and then put the pieces into his own pocket. Then he turned to me and winked. Belatedly, I realised what had just happened.

Some background information: swaptions (options on swaps) were mostly cash settled against something which you can think of as a bit like a 'Swap Libor' fixing. Like LIBOR, it was calculated daily from an average of figures given by a panel of banks. The swaps desk was responsible for submitting their estimated figures of where swaps were trading at a specific time each morning.

Note here the direct analogy with LIBOR:

  • The swaptions desk will gain / lose if swaps fix in a particular place
    - The swaps desk will gain / lose if LIBOR fixes in a particular place
  • The swaptions desk are not responsible for submitting the swap fix - the swaps desk are
    - The swaps desk are not responsible for submitting the LIBOR fix - the cash desk are
  • To influence the swap fix the swaptions desk will have to speak to the swaps desk
    - To influence the LIBOR fix the swaps desk will have to speak to the cash desk

Now, I am not saying that Sergei was trying to influence the swaps fix that day in favour of our expiry. And indeed, the message I had passed on was not 'We'd like the fix to be higher today, please'. All I had told X was the position that we had on. Of course, X could have easily inferred where we would like the fix to be. And he could have used that to change the rate he submitted.

All in all, it seemed a bit fishy. If this was kosher, why the secrecy? Why didn't Sergei want any electronic or taped record of my conversation with X to exist? Why had he torn up the post-it note, and even been careful enough not to put it in the bin by his desk, but presumably take it home for more secure disposal?

To be clear: I didn't even have the slightest thought that it might be illegal; nothing like this had been covered in either my regulatory exams or in the training the bank had provided. And I'd had no formal training whatsoever on the swap fix, or even the expiry process. Still, it was definitely a step beyond my own moral boundaries. I turned to Sergei and said as confidently as I could:

'I'd rather not do that again if it's okay with you'

He looked at me and smirked. 'Whatever. Now see if you can find a broker to buy us some lunch. I fancy some Ubon today.'

I felt like I'd failed some kind of test, but whatever he thought, I was never asked again. In case you're wondering, I don't remember there being anything 'weird' about the expiry that day, nor do I remember if we ended up in a profitable position. I have no idea whatsoever if X did anything at all, or if he was just being polite and pretending to do us a favour.

And, for what it's worth, I never saw any evidence that any further requests were made by Sergei or anyone else. Perhaps he was just very discreet, perhaps it was a very rare event which I just happened to be part of, or perhaps I'd shocked him into a more virtuous life (although that seems unlikely).
 

What I did next


Over the next few months there were other things that seemed fishy to me, but I couldn't avoid doing most of them. One of them I have talked about for several years now, here, in the newspapers, to the UK parliament, and on TV: the practice of selling embedded derivatives to local authorities and housing associations as part of 'LOBO' loans.

Importantly, there was nothing secretive about the LOBO business: communication was done properly over recorded lines, and there were no post-it notes bandied around. I remember only one exception, which I described in my earlier blog post:

"On this particular deal the commission was so large in percentage terms that it exceeded internal limits. Even the most hard nosed traders on the trading desk were feeling pangs of.... well not guilt perhaps but fear that this kind of thing might one day be written on a blog. But the broker agreed to take half of the commission spread over subsequent deals, so that was okay."

For that trade there was indeed a lot of whispering, and the real commission was never written down or discussed in a recorded setting - not even on a post-it note (it might have been written in biro on someone's hand).

Again, it was clear to me that what was going on was definitely immoral, but I never even considered it might be illegal. And of course, no court has ever found that Barclays (or any other bank) were engaged in illegal activity in relation to the LOBO deals, and there has been no regulatory action. But the banks have 'voluntarily' agreed to 'tear up' many of the LOBO deals and replace them with straightforward loans, often taking significant mark to market losses in the process.

(I remember going to a compulsory course on ethics at Barclays where they told us not to do anything that could end up on the front page of the newspaper, even if it was legal. That amused me no end when I was quoted on the front page of the FT in reference to the LOBO scandal).

The morally grey activities and the stress of working on the sell side all got a bit much for me. I decided in February 2004 to leave Barclays. My MD tried to make me stay; he even broke the rules and told me what my year end bonus would be if I stayed at least until April. I pointed out that I was probably giving up a lot more money in the long run, but this wasn't for me, and I wasn't entirely happy with a lot of the stuff we were doing.

You know the rest if you've followed my blog; I did a couple of years at an economics think tank and then joined AHL in 2006, where I lived happily ever after (at least until 2013, when I left, and now live happily ever after writing stuff for you guys to read).


What Stelios and Carlo did next


What happened to Stelios and Carlo? Well, they both stayed at Barclays, and not long after I left Stelios was allowed to begin trading properly, initially on the sterling FRA book, then subsequently covering USD short end swaps for the London desk. And at some point, both were asked to pass on requests to cash desks to ensure LIBOR and/or EURIBOR fixes reflected the positions in their trading books.

It's worth quoting from Stelios' book:

"One morning, Fred stood up from his chair...  'Come with me, there's someone I want you to meet'

The two of us walked a few rows away on the edge of the trading floor... Sitting there was Peter Johnson... He was an Englishman in his early fifties, already with a long career at Barclays. He was an established, successful, and very senior trader.

'Stelios, this is Peter....' said Fred. 'He is the US cash trader here at Barclays and he's the person responsible for submitting LIBOR rates for the bank. Alex and I will be asking you on occasion to relay some information to him, relating to LIBOR rates and our preference on it. So, all you have to do is to let him know, OK?'

Peter got up... 'Nice to meet you, Stelios. Just let me know whenever you boys need something and I'll do my best to help out' he said."

And thus the die was cast.


The LIBOR scandal


When the rumours about LIBOR first surfaced in 2008 (and ironically, I think it was Tim Bond from Barclays who brought 'lowballing' to everyone's attention), I immediately remembered the incident from five years earlier. My first thought was 'Yes, that's absolutely what would have been happening', and then 'Wait, is that really illegal?'.

The rest is history not worth repeating here; but for Stelios and Carlo it did not end well, as both were prosecuted for LIBOR and EURIBOR fixing respectively.  I won't tell you what happened to Stelios, you can google it if you like or better still read his book. Sadly, Carlo was sentenced to four years in prison, and could be there until 2023 (although hopefully he will qualify for an earlier release).

Several other traders were also found guilty, of which the most high profile was certainly Tom Hayes who was finally released a few months ago.


Why them, and not me?


I'm not going to discuss the rights and wrongs of the scandal here; I'm not going to debate whether any law was actually broken; nor will I tell you how I feel that only relatively junior people got prosecuted whilst their bosses got away with murder. You can read Stelios' book, as he's basically in broad agreement with me on all of these issues.

But there is one point I want to finish with. In Stelios' book he includes this line:

"Try to put yourself in my shoes and think about how you would have acted in my place"

For me this is especially poignant. It really could have been me. I wasn't actually in Stelios' shoes, but I was standing (or rather sitting) just a few metres away. And yet I acted quite differently.

I'd like to think that it's because I have an especially finely tuned moral compass, but if I'm being brutally honest I'm not sure that's the case (and to be fair to Stelios, in my limited personal dealings with him, and in his book, he comes across as a pretty decent guy).

Realistically, if I was in Stelios' shoes, or Carlo's for that matter, I probably would have done what he / they did. After all, we had a lot in common, quite apart from our near parallel career tracks. We had no training whatsoever on the legal or regulatory ramifications of rate fixing. Furthermore, we were working as juniors for domineering bosses who brooked no disagreement, although Stelios and I probably coped better than Carlo.

There are two main reasons why I didn't make the same decisions. Firstly, we were doing jobs that were quite different. Rate fixing had a much bigger impact on the swaps book than on ours (to use some jargon, we were running much smaller delta positions), so seeking to influence fixing rates just doesn't seem to have been such a big part of the job.

And secondly, if you reread the accounts of my brush with rate fixing and Stelios' description, they are quite different. There is none of the furtive nature of Sergei's instructions when you read what Stelios writes. There is no reason for Stelios to suspect that anything fishy is going on. It's just presented as completely normalised behaviour.

I am still not completely sure why Sergei was so secretive, given the practice of adjusting fixes was so commonplace. Perhaps he had some prescience about what was going to happen in the future, perhaps it was for his own amusement as part of the 'test', or perhaps it was just his Russian upbringing.



"Try to put yourself in my shoes and think about how you would have acted in my place"


Thursday, 2 September 2021

The three kinds of (over) fitting

This post is something that I've banged on about in many presentations at several conferences* (most complete slides are here), and in various interviews, but never actually formally described in a blog post. In fact this post has existed in draft form since 2015 (!).

* you know, when you leave your house and listen to someone else speaking. Something that in late 2021 is a distant memory, although I will actually be speaking at an event later this year.

So there won't be new information here if you've been following my work closely, but it's still nice to write it down in one place.

(I'm trying to stick to my self imposed target of one blog post per month, but you will appreciate that I don't always have time for the research involved in producing them - unless it's a by product of something I'm already working on)

Trivially, it's about the fitting of trading systems and the different ways you can screw this up:

  • Explicit (over)fitting
  • Implicit (over)fitting
  • Tacit (over)fitting


What is fitting

I find it hard to believe that anyone reading this doesn't already know this, unless you've accidentally landed here after googling some unrelated search term, but let me define my terms.

The act of fitting a trading system can formally be defined as the process of discovering which combination of trading rule and parameter set(s) will produce the optimal trading system when tested on historic data: a combination I call the trading rule variation. The unspoken assumption of all quant finance is that this variation will also be the optimal system to use in the future.

A trading rule is a specific set of instructions which tells you how to trade; for example something like 'Buy if the N day return is negative, otherwise sell'. In this case the parameter set would consist only of a specific value of N.

Optimality can mean many things, but for the purposes of this post let's assume it's maximising Sharpe Ratio (it isn't that important which measure we choose in the context of our discussion here).

So for this particular example fitting could involve considering alternative values of N, and finding the value which had the highest Sharpe Ratio in an historic backtest. Alternatively, it could also involve trying out different rules - for example 'Sell if the N day return is negative, otherwise buy'. But note that these approaches are equivalent; we could parameterize this alternative set of rules as 'Buy X units if the N day return is negative, otherwise sell X units', where X is either +1 (so we buy on negative returns) or -1 (so we sell). Now we have two parameters, N and X, and our fitting process will try and find the optimal joint parameter values.

Of course there are still numerous rules that we haven't considered here, such as selling if the N hour return is negative, or if the most recent non farm payroll was greater than N, or if there was a vomiting camel chart pattern on the Nth Wednesday in the month. So when fitting we will do so over a given parameter space, which includes the range of possible values for all our parameters. Here the parameter space will be X ∈ {-1, +1} and N ∈ {1, 2, 3, ...} (assuming we have daily closing data). The product of possible values of X and N can loosely be thought of as the 'degrees of freedom' of the fitting process.

All fitting thus involves the choice of some possible trading strategies from a tiny subset of all possible strategies.

The number of units to buy or sell is another question entirely, which I discuss in this series of posts.

Fitting can be done in an automated fashion, purely manually, or using some combination of the two. For example, we could get some backtesting software and ask it to find the optimal values of X and N. Or we could manually test each possible variation. Or we could run the backtesting software once for X=1 (buy if the N day return is negative), and then again for X=-1, each time finding the best value of N. The third option is the most common amongst quant traders.
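To make the automated version concrete, here is a minimal sketch of a grid search over (N, X) for the toy rule above. All the names and the candidate values of N are my own illustrative choices, trading costs are ignored, and the Sharpe Ratio calculation is deliberately simplistic:

```python
import numpy as np

def backtest_sharpe(prices, N, X):
    """Annualised Sharpe Ratio of the rule 'buy X units if the N day
    return is negative, otherwise sell X units'. Illustrative only."""
    daily_returns = np.diff(prices) / prices[:-1]
    n_day_return = prices[N:] / prices[:-N] - 1       # known at the end of each day
    position = np.where(n_day_return < 0, X, -X)
    # apply each day's position to the *next* day's return (no look-ahead)
    strategy_returns = position[:-1] * daily_returns[N:]
    return np.sqrt(252) * strategy_returns.mean() / strategy_returns.std()

def grid_search(prices, N_values=(1, 2, 5, 10, 20), X_values=(-1, 1)):
    """Try every (N, X) combination; return the best by in-sample Sharpe."""
    results = {(N, X): backtest_sharpe(prices, N, X)
               for N in N_values for X in X_values}
    return max(results, key=results.get), results
```

Whatever (N, X) this returns is only the best fit on the data it was shown; the rest of this post is about why its out of sample performance will be worse.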


What is overfitting and why is it bad

Consider the following:

Hastie et al (2009) “The Elements of Statistical Learning” Springer. Figure 2.11


How does this relate to the fitting of trading systems? Well, we can think of 'prediction error' as 'Sharpe Ratio on an inverted scale' such that a low value is good. And 'model complexity' is effectively the degrees of freedom of the trading strategy.

What is the graph telling us? Well first consider the 'training sample' - the set of data we used to do the fitting on - the dirty red line. As we add complexity we will get a better performing trading strategy (in expectation). In fact it's possible to create a trading strategy with zero prediction error, and thus infinite Sharpe Ratio, if the degrees of freedom are sufficiently large (in a hand waving way, if the complexity in the strategy is equal to the amount of entropy in the data). 

How? Well consider a trading strategy which has the form 'Buy X*units if it's January', 'Buy X*units if it's February'.... If we fit this on past data it's going to do pretty well. Now let's make it even more complex: 'Buy X* units if it's January 3rd 2015', 'Buy X* units if it's January 4th 2015' .... (where January 3rd 2015 is the first day of our price history). This will perfectly predict every single day in the backtest, and thus have infinite Sharpe Ratio.

(More mathematically, if we fit a sufficiently high degree polynomial to the price data, we can get a perfect fit)
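As a toy illustration of that limiting case, here is a 'strategy' with one free parameter per day - the position held on that day - fitted and evaluated on the same data. It is profitable on every single day in sample, and of course completely useless out of sample:

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 250)     # one year of made-up daily returns

# One degree of freedom per data point: hold +1 on up days, -1 on down days.
positions = np.sign(returns)

strategy_returns = positions * returns
assert (strategy_returns >= 0).all()   # a 'perfect' backtest: no losing days
```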

On the out of sample (dirty green) line notice that we always do worse (in expectation) than on the red line. That's because we'll never do as well in predicting a different data set from the one we trained / fitted our model on. Also notice that the gap between the red and the green line grows as the model gets more complex. The more closely our model fits the backtest period, the less likely it is that it will be able to predict a novel future.

This means that the green line has a minimum error (~maximum Sharpe Ratio) where we have the optimal amount of complexity (~degrees of freedom). Anything to the right of this point is overfitting (also known as curve fitting).

Sadly, we don't get paid based on how well we predict the in sample data. We get paid for predicting out of sample performance: for predicting the future. And this is much harder! And the Sharpe Ratios will be lower! 

At least in theory! In practice, if you're an academic then you get paid for publishing papers with nice results: papers that predict the past. If you're working for a quant hedge fund then you may be getting paid for coming up with nice backtests that also predict the past. And even as a humble independent trader, we get a kick out of a nice backtest. So for this reason it's very easy to be drawn towards trying to make the in sample line look as good as possible: which we'll do by making the model more complicated.

Basically: our incentives make us prone to overfitting and towards confounding the red and the green lines.



Explicit fitting


We're now ready to discuss the three kinds of (over)fitting.

The first is explicit fitting. It's what most people think of as fitting. The basic idea being that you get some kind of automated algo to select the best possible set of parameters. This could be very easy: a grid search for example that just tries every possible strategy variation. Or it could be much more complex: some kind of fancy AI technique like a neural network. 

The good news about explicit fitting is that it's possible to do it properly. By which I mean we can:
 
  • Restrict ourselves to fewer degrees of freedom
  • Enforce a realistic separation between in and out of sample data in the backtest (the 'no time machine' rule)
  • Use robust fitting techniques to avoid wandering into the overly complex overfitting end of the figure above.

Of course it's also possible to do explicit fitting badly (and plenty of people do!), but at least it's possible to avoid overfitting if you're careful enough.


Fewer degrees of freedom


Consider a more realistic example of a moving average crossover trading rule (MAC), which can be defined using two parameters A and B: signal = MA_A - MA_B, where MA_x is a moving average with lookback x days, and A ≠ B. Note that if A<B then this will be a momentum rule, whereas if A>B it will be a mean reversion rule. We assume that A and B can take any values in the range 1 to 256 (where 256 is roughly the number of business days in a year); anything longer than this would be an 'investment' rather than a 'trading' strategy.

If we try and fit all 65,280 possible values of A and B individually for each instrument we trade then we're very likely to overfit. We can reduce our degrees of freedom in various ways:

  • Restrict A<B [so just momentum]
  • Set B = k.A; fit k first, then fit A  [I do this!]
  • Restrict A and B to be in the set {1,2,4,8,16,32, ... 256}  [I do this!]
  • Use the same A, B for all instruments in a given asset class [discussed here]
  • Use the same A,B for all instruments [perhaps after accounting for costs]

Notice that this effectively involves making fitting decisions outside of the explicit fitting... I discuss this some more later. But for now you can note that it's possible to make these kinds of decisions without using real data at all.
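To get a feel for how much these restrictions shrink the search space, here is a quick count before and after applying a few of them. The ratio k=4 and the specific doubling sequence of A values are arbitrary illustrative choices:

```python
# Full grid: every A != B with both in 1..256 gives 65,280 variations
full_grid = [(A, B) for A in range(1, 257)
                    for B in range(1, 257) if A != B]

# Restricted grid: A from a doubling sequence, B = k*A with a single
# pre-chosen ratio k. Since k > 1 this also enforces A < B (momentum only).
k = 4
A_values = [1, 2, 4, 8, 16, 32, 64]
restricted_grid = [(A, k * A) for A in A_values if k * A <= 256]

print(len(full_grid), len(restricted_grid))   # 65280 versus just 7
```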


No time machine


By 'no time machine', I mean that a parameter set should only be tested on a period of data if it has been fitted using only data that was available before the start of the testing period.

So for example if we fit from 2000 - 2020, and then test on the same period, then we're cheating - we couldn't have done this without a time machine. If we fit from 2000-2010, and then test from 2011 - 2020; then that's okay. But if we then do a classic ML technique and subsequently fit from 2011-2020 to test from 2000-2010 then we've cheated.

There are two honest options:

  • An expanding window; first we fit using data for 2000 (assuming a year gives us enough data to fit with; if we're doing a robust fit that would be fine) and test that model in the year 2001; then we fit using 2000 and 2001, and test that second model in 2002..... then we fit using 2000 - 2019, and then test in the year 2020.
  • A rolling window. Say we want to use a maximum of 10 years to fit our data, then we would proceed initially as for an expanding window until we get to .... we fit using 2000 - 2009 and test in the year 2010, then we fit using 2001 - 2010 and test in the year 2011.... then finally we fit using 2010-2019 and then test in the year 2020. 

In practice the choice between expanding and rolling windows is a tension between using as much data as possible (to reduce the chances that we overfit to a small sample), and the fact that markets change over time. A medium speed trend follower that needs decades worth of data to fit will probably want to use an expanding window: they are exploiting market effects that are relatively low Sharpe Ratio (high entropy in the data) but will also hopefully not go away. An HFT shop will want to use a rolling window, with a duration of the order of a few months: they are looking for high SR effects that will be quickly degraded once the competition finds out about them.
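The two honest schemes above can be sketched as a single split generator; the function name and interface here are my own invention:

```python
def walk_forward_splits(years, max_window=None):
    """Yield (fit_years, test_year) pairs obeying the 'no time machine'
    rule: each test year lies strictly after all of its fitting data.
    max_window=None gives an expanding window; an integer gives rolling."""
    for i in range(1, len(years)):
        start = 0 if max_window is None else max(0, i - max_window)
        yield years[start:i], years[i]

years = list(range(2000, 2021))
expanding = list(walk_forward_splits(years))
rolling = list(walk_forward_splits(years, max_window=10))

# Final expanding split: fit on 2000-2019, test on 2020
assert expanding[-1] == (list(range(2000, 2020)), 2020)
# Final rolling split: fit on 2010-2019 only, test on 2020
assert rolling[-1] == (list(range(2010, 2020)), 2020)
```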


A robust fitting technique 


A robust fitting technique is one which accounts for the amount of entropy in the data; basically it will not over reach itself based on limited evidence that one parameter set is better than another.  

Consider for example the following:

A and B are the parameters for a MAC model trading Eurodollar futures. The best possible combination sits neatly in the centre of this plot: A=10, B=20 (a trend following model of medium speed). The Z-axis compares this optimum with all other values shown in the plot; a high value (yellow) indicates the optimum is significantly better than the relevant point.

I have removed all values below 2.0, which roughly corresponds to statistical significance. The large white area covers all possible values of A and B that can't be distinguished from the optimum. Even though we have over 30 years of data here, there is enough entropy that we can only rule out all the mean reversion systems (top triangle of the plot), and the faster momentum models (wedge at top left).

Contrast this with the picture for the Mexican Peso:


Here I only have a few years of data. There is almost no evidence to suggest that the optimum parameter set (which lies at the bottom right of the plot) is any better than almost any other set of parameters. 

A simple example of robust fitting is the method I use myself: I construct a number of different parameter variations and then allocate weights to them. 

This is now a portfolio optimisation problem, a domain where there are plenty of available techniques for robust fitting (my favourite is discussed at length in the posts that begin here). We can do this in a purely backward looking fashion (not breaking the 'no time machine' rule). A robust fitting technique will allocate equally to all considered variations where there is too much entropy and insufficient evidence (in the form of heterogeneous correlation matrices, different cost levels, or differing pre-cost Sharpe Ratios) that any is worth allocating more to.

But when there is compelling evidence available it will tilt its allocation towards more diversifying, cheaper, and higher performing rule variations. It is usually a tilt rather than a wholesale reallocation, since there is rarely enough information to prove that one trading rule variation is better than all the others.
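A minimal sketch of this 'tilt, not wholesale reallocation' behaviour, using naive shrinkage towards equal weights (the weights and shrinkage factor are invented for illustration; my actual method, discussed in the posts linked above, is more sophisticated):

```python
import numpy as np

def shrink_towards_equal(optimised_weights: np.ndarray, shrinkage: float) -> np.ndarray:
    # shrinkage = 1.0 -> ignore the optimiser entirely (pure equal weights)
    # shrinkage = 0.0 -> trust the in-sample optimiser entirely
    n = len(optimised_weights)
    equal = np.full(n, 1.0 / n)
    return shrinkage * equal + (1 - shrinkage) * optimised_weights

# The in-sample optimiser loves one variation; heavy shrinkage turns that
# into a gentle tilt rather than a concentrated bet
raw = np.array([0.70, 0.20, 0.10])
tilted = shrink_towards_equal(raw, shrinkage=0.75)
# tilted is approximately [0.425, 0.3, 0.275]: still favours the first
# variation, but only modestly
```

Note the weights still sum to one after shrinkage, since this is just a convex combination of two valid weight vectors.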



Implicit fitting


We can now think about the second form of fitting: implicit fitting. Implicit fitting occurs when you make any decision having seen the results of testing with both in and out of sample data.

Implicit fitting comes in degrees of badness. From worst to least bad, examples of implicit fitting could include:

  • Run a few different backtests with different parameter values. Pick the one you like the best. Basically this is explicit in sample fitting, done manually. As an example, consider what I wrote earlier:  "Or we could run the backtesting software once for X=1 (buy if N day return is negative), and then again for X=-1, each time finding the best value of N." This is implicit fitting.
  • Run an explicitly fitted backtest, then modify the parameter space (e.g. restricting A<50) before running it again
  • Run a proper backtest, then modify the trading rule in some way before running it again (again, with explicit fitting, so you can pat yourself on the back). If this improves things, keep the modified rule.
  • Run a series of backtests, changing the fitting hyper parameters until you get a result you like. Examples of hyper parameters include expanding window lookbacks, shrinkage on robust Bayesian fitting, deciding whether to fit on a per instrument or per asset basis, and all kinds of wonderful things if you're doing fancy AI.
  • Run a series of backtests, changing some 'non core' parameters until you get a result you like. Examples include the volatility estimation lookback on your risk scaling, or the buffer window.
  • Run a single backtest to try out an idea. The idea doesn't work, so you forget about it completely.
You can probably see why these are all 'cheating': we're basically making use of a time machine that we wouldn't have in live trading. So for the last example, what we really ought to do is have a 'fund level' backtest in which every single idea we've ever considered is stored, and gets a risk allocation at the start of our testing period (which is then modified as the backtest fitting learns more about the historic performance of the model). Poor ideas will not appear in our 'live' model (assuming there is sufficient evidence by the end of the backtest to rule them out), but it will mean that our historic 'fund level' account curve won't be inflated by only ever having good ideas within it.

Other ways to deal with this also rely on knowing how many backtests you have run for a given idea; they include correcting your significance level for the number of trials you have done (which I don't like, since it treats a major case of parameter cheating the same as a tiny hyper parameter tweak), and testing on multiple paths to catch especially egregious over fitting (something like CPCV).
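For reference, the simplest version of that trials correction is a plain Bonferroni adjustment (which, as I say, I don't much like, precisely because it counts all trials equally):

```python
def bonferroni_threshold(alpha: float, n_trials: int) -> float:
    # To keep the overall chance of a false discovery at alpha, each of the
    # n_trials backtests must clear a stricter per-trial threshold
    return alpha / n_trials

# One backtest needs p < 0.05; twenty backtests of the same idea need p < 0.0025
single = bonferroni_threshold(0.05, 1)    # 0.05
twenty = bonferroni_threshold(0.05, 20)   # 0.0025
```

The objection is obvious from the code: a wholesale re-run of the parameter grid and a tiny tweak to a volatility lookback both increment `n_trials` by one, despite being very different sins.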

But ultimately, you should know when you are doing implicit fitting. Try not to do it! As much as possible, if something needs fitting (and most things don't) fit in a proper explicit robust out of sample fashion. 



Tacit fitting


Barbara is a quant trader. She's read all about explicit and implicit fitting. She decides to fit a MAC model to capture momentum. First she restricts the parameter space using artificial data (as I discuss here):

  • Restrict A<B [so just momentum]
  • Set B = 4A [using artificial data]
  • Restrict A to be in the set {1,2,4,8,16,32,64}  [using artificial data]
  • Drop values of A that are too expensive for a given instrument [using artificial data]
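These restrictions are easy to write down explicitly; a sketch, where the per-instrument cost cut-off is a hypothetical illustration:

```python
# B is always 4A, so every pair is a momentum rule (A < B); candidate speeds
# are spaced by factors of two (both restrictions derived from artificial data)
CANDIDATE_A = [1, 2, 4, 8, 16, 32, 64]

def parameter_grid(slowest_affordable_a: int) -> list:
    # Hypothetical cost rule: anything faster than slowest_affordable_a is
    # assumed to be too expensive to trade for this instrument
    return [(a, 4 * a) for a in CANDIDATE_A if a >= slowest_affordable_a]

# A cheap instrument keeps all seven variations; a costly one only the slow ones
cheap = parameter_grid(1)      # seven (A, B) pairs
costly = parameter_grid(16)    # [(16, 64), (32, 128), (64, 256)]
```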

Then she fits a series of risk weights using a robust out of sample expanding window with real data, pooling data across all instruments. Barbara is pleased with her results and goes ahead to trade the strategy.

The question is this: has Barbara used a time machine? Surely not!

In fact she has. Consider the first decision that she made:

  • Restrict A<B [so just momentum]
Could Barbara have made this decision without a time machine? Had she really been at the start of her backtest data (which we'll assume goes back to the beginning of financial market data; for the sake of argument let's say that's 1900), would she have known that momentum is more likely to be profitable than mean reversion (at least for the sort of assets and time scales that I tend to focus on, as does Barbara)? Strictly speaking the answer is no. Barbara only knows that momentum is better because of one or more pieces of tacit knowledge. Most likely:

  • She's done this backtest before  (perhaps at another shop where they were less strict about overfitting)
  • And/ or her boss has done this backtest before, and told her to fit a momentum model
  • And/ or she saw a conference presentation where someone said that momentum works 
  • ... She read a classic academic paper on the subject
  • ... Her Uber driver to the airport was an ex pit trader who favoured momentum
  • She is one of my students
  • She's read all of my books
None of this information would have been available to Barbara in 1900. By restricting A<B she's massively inflating her backtested performance over what would have been really possible had the backtest software realistically discovered over time that momentum was better. It's also possible that she will miss out on some profitable trading strategies just because she isn't looking for them (for example, some models of mean reverting A>B seem to be profitable for small A). 

Solving the problem of tacit fitting is very hard. Here are some possible ideas:

  • Widen the parameter space and fit in the wider space (so don't restrict A<B in this simple example). Of course that will result in more degrees of freedom, so you will need to be far more careful with using a robust fitting technique.
  • Use some kind of fancy neural network or similar to fit a highly general model. Even with modern computational power it is unrealistic to fit a model that would be sufficiently general to avoid any possibility of tacit fitting (for example, if you only feed such a model daily price data, then you've arguably made a tacit decision that daily prices can predict future returns).
  • Hire people who know nothing about finance (and once they've learned, kill or brainwash them. You can't just fire them - they'll tell people your secrets!). This is surprisingly common amongst top quant funds (the hiring of ignorant people, not the killing and brainwashing).


And finally....




And if you want to get fancy, read this book.

Now go away, and overfit no more.

Friday, 2 July 2021

Talking to the dead / simple heuristic position selection / small account problems - part four / EPIC FAIL #2

 Over the last few posts I've been grappling with the difficulties of trading futures with a retail sized account. I've tried a couple of things so far - a complex dynamic optimisation (here and here) where I try and optimise the portfolio every day in the knowledge that I can only take integer positions, and then a simpler static approach where I try to pick the best fixed set of instruments to trade given my account size - and then trade them.

In this post I return to a dynamic approach (choosing the best positions to hold each day from a very large set of instruments), but this time I'm going to use much simpler heuristic methods. I use the term heuristic to mean something you could explain to an elderly relative: let's call them Auntie Barbara.

I used to have an Auntie Barbara, but she died a long time ago. If there is an afterlife, and if they have internet there, and if she subscribes to this blog: Hi!




I've written this post to be fairly self contained (I can't really expect Auntie Barbara to read all the previous posts, she will be too busy playing tennis with Marilyn Monroe or something) and also a bit simpler than the previous three to follow.



The setup


Here's the setup again. I currently have a universe of 48 futures markets I'd like to trade (in practice I'm adding new instruments every few days, and in an ideal world there would be around 150 I'd like to trade if I could). If I backtest their performance it looks great (this is just with the set of trading rules in chapter 15 of my book, 3 EWMAC variations plus carry; but I do allow the instrument weights to be optimised):



That's a Sharpe Ratio of 1.18, pretty good for two trading rules (ewmac and carry). Oh the power of diversification...
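To see why diversification is so powerful, recall the textbook result for N equally weighted assets with identical standalone Sharpe Ratios and uniform pairwise correlation rho (the numbers below are purely illustrative, not fitted to this backtest):

```python
import math

def portfolio_sharpe(standalone_sr: float, n: int, rho: float) -> float:
    # SR_p = SR * sqrt(N / (1 + (N - 1) * rho)); with rho = 0 this is SR * sqrt(N)
    return standalone_sr * math.sqrt(n / (1 + (n - 1) * rho))

# 48 instruments, each with a humble standalone SR of 0.3: correlation caps
# the benefit, but the portfolio SR is still roughly double the standalone figure
sr_correlated = portfolio_sharpe(0.3, 48, 0.25)    # about 0.58
sr_uncorrelated = portfolio_sharpe(0.3, 48, 0.0)   # about 2.08
```

Notice that with positive correlation the benefit of adding markets levels off: as N grows, the multiplier approaches sqrt(1/rho) rather than growing without bound.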

Not only does it make money, it also (on average) has good risk targeting. Here's the rolling annualised standard deviation (which comes in at 22.2% on average, slightly under the target):




Auntie Barbara (AB): "Great! You always were a little smart alec. Can I get back to my jacuzzi now? I've got James Dean and Heath Ledger waiting for me."

* Auntie is communicating with me from the spirit world via telnet, hence the Courier typeface

Sorry Auntie, I cheated slightly there. That's the performance if I can take fractional futures positions, or equivalently what I could do with many millions of dollars. 

This is what it looks like if I trade it with $100K (about £80K: this particular FX rate is roughly unchanged since my Auntie died)

I normally use $500K for these tests - but I'm trying to make the results starker.

AB "Why does it start going wrong, weirdly, not long after I've died? Are you saying this is my fault?"


Not at all! No: to begin with there are only a few instruments in the data. Then as more are added, we struggle to take positions in every instrument due to rounding. We end up with many instruments that have no position at all; the positions we do end up making (or losing) money from just happen to be those with relatively small contract sizes. 

So the portfolio becomes more concentrated, and in expectation (and also in reality here), has worse performance. It also undershoots its risk target due to all that 'wasted' capacity in the instruments which can't take a position. There are many instruments here that we are just collecting data for, but can't hope to ever take a position in.

Now look at the rolling realised standard deviation again:


We're systematically undershooting, especially in more recent years when I have a lot more instruments in the dataset. The risk is also 'lumpier', reflecting the close to binary nature of the system.


AB "Hang on, I've just read your last couple of posts again. Or tried to. What happens if you do some kind of fancy dynamic optimisation on your positions each day?"

That doesn't work and is way too complicated.

AB "And what if you just select a group of markets and trade with those?"

Well if I use the 16 instruments I identified in my last post  as suitable for a $100K account I get these results:


The fewer-markets portfolio is handicapped by having later starting data, but if I account for that:


AB "When does that data start now?"

The 11th May 1974

AB "Ah - that's your birthday. Coincidence?"

Well actually the data starts on 22nd April 1974, but that's close enough.

That feels slightly like cheating since they're identified using some forward looking information, but if I selected any 16 instruments on a rolling basis using any vaguely sensible methodology I'd expect on average to get similar results.

Basically we make up some of the ground on the full 40+ instrument portfolio compared to the rounded situation, but we never quite manage it (although the green curve looks as good, it's actually got a lower SR and underperforms in more recent years as we get more and more instruments in the full portfolio). In expectation 16 instruments, no matter how carefully chosen, will underperform 50; never mind 150.



The simplest possible approach?


AB "Well it's obvious what you should do"

Is it?

AB "Do you remember when you were a boy, and you'd invite all your friends to your birthday parties?"

I'm 47, Auntie Barbara. I'm not 100% sure what I did last Thursday.

AB "Well just bear with me then. Suppose you had 50 friends, and you could only invite 16 to your party. What would you do?"

I'd.... well I'd pick my favourite 16 friends (this is hypothetical! What kind of person has fifty 'friends'?).

AB "Now suppose you had a birthday party every single day. What would happen?"

Well... I suppose I'd pick whoever were my favourite 16 friends on that day. But, with respect, what on earth (sorry, insensitive), what the hell (worse!), what in heaven does this have to do with the problem at hand?

AB "Hasn't the penny dropped yet? I thought you were a smart alec."

OK it has finally dropped. What I need to do is just hold positions in the 16 instruments that have the strongest absolute forecast on that day.

AB "Someone give the boy a medal"



I choose to ignore that. Let's see some code:


# Relies on pandas (pd), numpy, datetime and copy; plus pysystemtrade objects:
# PositionSizing, portfolioWeights, progressBar, arg_not_supplied, and the
# output / diagnostic cache decorators

class newPositionSizing(PositionSizing):

    @output()
    def get_subsystem_position(self, instrument_code: str) -> pd.Series:
        all_positions = self.get_pd_df_of_subsystem_positions()
        return all_positions[instrument_code]

    @diagnostic()
    def get_pd_df_of_subsystem_positions(self) -> pd.DataFrame:
        all_forecasts = self.get_all_forecasts()
        list_of_dates = all_forecasts.index

        list_of_positions = []
        previous_days_positions = portfolioWeights()
        p = progressBar(len(list_of_dates))
        for passed_date in list_of_dates:
            positions = self.get_subsystem_positions_for_day(passed_date,
                                                             previous_days_positions)
            list_of_positions.append(positions)
            previous_days_positions = copy(positions)
            p.iterate()

        p.finished()

        df_of_positions = pd.DataFrame(list_of_positions)
        df_of_positions.index = list_of_dates

        return df_of_positions

    def get_subsystem_positions_for_day(self,
                                        passed_date: datetime.datetime,
                                        previous_days_positions: portfolioWeights = arg_not_supplied) -> portfolioWeights:

        if previous_days_positions is arg_not_supplied:
            previous_days_positions = portfolioWeights()
        forecasts = self.get_forecasts_for_day(passed_date)

        initial_positions_all_capital = self.get_initial_positions_for_day_using_all_capital(passed_date)

        positions = calculate_positions_for_day(previous_days_positions=previous_days_positions,
                                                forecasts=forecasts,
                                                initial_positions_all_capital=initial_positions_all_capital)
        list_of_instruments = self.parent.get_instrument_list()
        positions = positions.with_zero_weights_for_missing_keys(list_of_instruments)

        return positions

    def get_initial_positions_for_day_using_all_capital(self,
                                                        passed_date: datetime.datetime) -> portfolioWeights:
        all_positions = self.get_all_initial_positions_using_all_capital()
        all_positions_on_day = all_positions.loc[passed_date]

        return portfolioWeights(all_positions_on_day.to_dict())

    def get_forecasts_for_day(self, passed_date: datetime.datetime) -> portfolioWeights:
        all_forecasts = self.get_all_forecasts()

        todays_forecasts = all_forecasts.loc[passed_date]

        return portfolioWeights(todays_forecasts.to_dict())

    @diagnostic()
    def get_all_forecasts(self) -> pd.DataFrame:
        instrument_list = self.parent.get_instrument_list()
        forecasts = [self.get_combined_forecast(instrument_code)
                     for instrument_code in instrument_list]

        forecasts_as_pd = pd.concat(forecasts, axis=1)
        forecasts_as_pd.columns = instrument_list
        forecasts_as_pd = forecasts_as_pd.ffill()

        return forecasts_as_pd

    @diagnostic()
    def get_all_initial_positions_using_all_capital(self) -> pd.DataFrame:
        instrument_list = self.parent.get_instrument_list()
        positions = [self.get_initial_position_using_all_capital(instrument_code)
                     for instrument_code in instrument_list]

        positions_as_pd = pd.concat(positions, axis=1)
        positions_as_pd.columns = instrument_list
        positions_as_pd = positions_as_pd.ffill()

        return positions_as_pd

    @diagnostic()
    def get_initial_position_using_all_capital(self, instrument_code: str) -> pd.Series:
        self.log.msg(
            "Calculating subsystem position for %s" % instrument_code,
            instrument_code=instrument_code,
        )

        initial_position = self.get_volatility_scalar(instrument_code)

        return initial_position


This code actually contains some future proofing, in that it is written for path dependence in positions - which we're not actually going to use yet. 



def calculate_positions_for_day(previous_days_positions: portfolioWeights,
                                forecasts: portfolioWeights,
                                initial_positions_all_capital: portfolioWeights):

    ## Get risk budget per market
    risk_budget_per_market = proportionate_risk_budget(forecasts)
    maximum_positions = int(1.0 / risk_budget_per_market)
    idm = min(maximum_positions ** .35, 2.5)
    idm_with_risk = risk_budget_per_market * idm

    initial_positions = signed_initial_position_given_risk_budget(initial_positions_all_capital,
                                                                  forecasts=forecasts,
                                                                  risk_budget=idm_with_risk)

    list_of_tradeable_instruments = tradeable_instruments(initial_positions=initial_positions,
                                                          forecasts=forecasts)

    current_instruments_with_positions = []

    ## Sort markets by absolute strength of forecast
    list_of_instruments_strongest_forecast_first = \
        sort_list_of_instruments_by_forecast_strength(forecasts=forecasts,
                                                      instrument_list=list_of_tradeable_instruments)

    ## Iterate from strongest to weakest, adding positions until the risk budget runs out
    for instrument_to_add in list_of_instruments_strongest_forecast_first:
        if len(current_instruments_with_positions) < maximum_positions:
            current_instruments_with_positions.append(instrument_to_add)
        else:
            ## No risk budget remaining: halt
            break

    new_positions = fill_positions_from_initial(current_instruments_with_positions=current_instruments_with_positions,
                                                initial_positions=initial_positions)

    return new_positions


Most of that should be self explanatory. The 'initial' position (perhaps badly named) is the position the system would want to take if we put all of our trading capital into that single instrument. We then scale that by a risk budget, which is equivalent to an 'instrument weight' that here is just 1/N (N is the number of assets we're currently trading), with a lower limit of 6.25% (to ensure we have no more than 16 positions; this value can be tweaked depending on your capital), and an IDM calculated as N^0.35 (note if all subsystems had zero correlation this would be N^0.5, so this is a reasonably conservative approximation), with my normal upper limit of IDM=2.5.


def proportionate_risk_budget(forecasts: portfolioWeights):
    market_count = market_count_in_forecasts(forecasts)
    proportion = 1.0 / market_count

    use_proportion = max(proportion, 1.0 / 16)

    return use_proportion
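The N^0.35 approximation for the IDM mentioned above can be sanity checked in a couple of lines:

```python
def idm_approx(n_positions: int) -> float:
    # N ** 0.35 is deliberately more conservative than the zero-correlation
    # ideal of N ** 0.5, since subsystem returns are positively correlated
    return min(n_positions ** 0.35, 2.5)

idm_8 = idm_approx(8)     # about 2.07, versus 8 ** 0.5 = 2.83 if uncorrelated
idm_16 = idm_approx(16)   # 2.5: the usual cap binds at the full 16 positions
```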

Now a 'tradeable' instrument is one with a non-NaN forecast, but also one whose initial position is at least a single contract. No point wasting risk capital on a position that isn't at least one contract.

AB "No point inviting a kid to the party who can't come. That's a waste of an invitation."


Indeed.

def tradeable_instruments(initial_positions: portfolioWeights,
                          forecasts: portfolioWeights):
    ## We don't open up new positions in non tradeable instruments, but we may
    ## maintain positions in existing ones

    valid_forecasts = instruments_with_valid_forecasts(forecasts)
    possible_positions = instruments_with_possible_positions(initial_positions)

    valid_instruments = list(set(possible_positions).intersection(set(valid_forecasts)))

    return valid_instruments


def instruments_with_valid_forecasts(forecasts: portfolioWeights) -> list:
    valid_keys = [instrument_code
                  for instrument_code, forecast_value in forecasts.items()
                  if _valid_forecast(forecast_value)]
    return valid_keys


def _valid_forecast(forecast_value: float):
    if np.isnan(forecast_value):
        return False
    if forecast_value == 0.0:
        return False
    return True


def instruments_with_possible_positions(initial_positions: portfolioWeights) -> list:
    valid_keys = [instrument_code
                  for instrument_code, position in initial_positions.items()
                  if _possible_position(position)]
    return valid_keys


def _possible_position(position: float):
    if np.isnan(position):
        return False
    if abs(position) < 1.0:
        return False
    return True

Let's have a gander at what this thing is doing:




I've zoomed in to the end of this plot, which shows positions for Eurodollar at various stages. The blue line shows what position we'd have on without position rounding, and with a fixed capital weight of 6.25% (equal weight across 16 instruments) multiplied by the IDM (2.5 here). The orange line - which is mostly on the blue line - shows the position we'd have on without rounding, once we've applied the 'You need to have one of the 16 strongest forecasts to come to the party' rule (I need a catchier name).

So for example between March and mid April this goes to zero, as the forecast weakens.

 Finally the green line shows the rounded position, once I've applied my usual buffering rule. You can see that's mostly a rounded version of the orange line.




OK. It's not great, although the last 10 years is pretty good. Also the vol targeting is somewhat poor:




... coming in at an average of 12% a year. 




Horrible path dependence



Let's turn our attention first to the poor performance. Some of that is due to costs, which go up from around 10bp of SR in the large and reduced benchmarks to 26bp of SR. As I've said many times before, pre-cost performance is (to an extent) random, but costs are predictable. High costs are no surprise when a forecast going from being ranked 17th best to 16th best will result in a trade; and then possibly the same position being closed the next day.


AB "It seems unfair to kick someone out of the party, just because they've gone from being your 15th to 16th favourite friend. Maybe you should let kids stay until they are really not your friends anymore."



OK, let's try it. I propose the following rule (bear in mind that my forecasts are scaled such that a forecast of +10 is an average long):
  • If we have a position, and the absolute forecast is more than 5, then hang on to it.
  • If we don't have a position, and the absolute forecast is more than 5, then try to open a new position, starting with the instruments with the highest forecasts:
  • If we already have the maximum number of positions open, then:
    • For instruments that have open positions, starting with the lowest forecast close the position and replace it with the new instrument.
    • Do not close a position if the absolute forecast is more than 5. 
    • Once all possible positions (absolute forecast<5) have been closed, do not open any new positions

So:
  • Absolute forecasts greater than 10:
    • Existing position: won't be closed
    • New position: probably will be opened
  • Absolute forecasts between 5 and 10:
    • Existing positions: won't be closed
    • New positions: may be opened
  • Forecasts less than 5:
    • Existing positions: may be closed


def calculate_positions_for_day(previous_days_positions: portfolioWeights,
                                forecasts: portfolioWeights,
                                initial_positions_all_capital: portfolioWeights):

    risk_budget_per_market = proportionate_risk_budget(forecasts)
    maximum_positions = int(1.0 / risk_budget_per_market)
    idm = min(maximum_positions ** .35, 2.5)
    idm_with_risk = risk_budget_per_market * idm

    initial_positions = signed_initial_position_given_risk_budget(initial_positions_all_capital,
                                                                  forecasts=forecasts,
                                                                  risk_budget=idm_with_risk)

    list_of_tradeable_instruments = tradeable_instruments(initial_positions=initial_positions,
                                                          forecasts=forecasts)

    current_instruments_with_positions = from_portfolio_weights_to_instrument_list(previous_days_positions)

    ## Instruments with absolute forecast less than 5, or non tradeable
    ## (could be removed); ordered with the weakest forecast last
    list_of_removable_instruments = removable_instruments_with_positions_weakest_forecasts_last(
        current_instruments_with_positions,
        forecasts=forecasts)

    ## Sort markets by absolute strength of forecast
    list_of_instruments_with_no_position_strongest_forecast_first = \
        instruments_with_no_position_strongest_forecast_first(
            list_of_tradeable_instruments=list_of_tradeable_instruments,
            forecasts=forecasts,
            current_instruments_with_positions=current_instruments_with_positions)

    ## Iterate from strongest to weakest forecast
    for instrument_to_add in list_of_instruments_with_no_position_strongest_forecast_first:
        ## If we already have a position, keep it on - it wouldn't be in this list
        if len(current_instruments_with_positions) < maximum_positions:
            ## No position on, and risk budget remaining: add a position
            current_instruments_with_positions.append(instrument_to_add)
            continue
        elif len(list_of_removable_instruments) > 0:
            ## No risk budget remaining: close the position with the weakest
            ## forecast in the 'could be removed' group, and replace it
            instrument_to_remove = list_of_removable_instruments.pop()
            current_instruments_with_positions.remove(instrument_to_remove)
            current_instruments_with_positions.append(instrument_to_add)
            continue
        else:
            ## No markets remain with current positions in the 'could be
            ## removed' group: halt
            break

    new_positions = fill_positions_from_initial(current_instruments_with_positions=current_instruments_with_positions,
                                                initial_positions=initial_positions)

    return new_positions

def from_portfolio_weights_to_instrument_list(positions: portfolioWeights):
    instrument_list = [instrument_code
                       for instrument_code, position in positions.items()
                       if _valid_position(position)]
    return instrument_list


def _valid_position(position: float):
    if np.isnan(position):
        return False
    if position == 0.0:
        return False

    return True


def removable_instruments_with_positions_weakest_forecasts_last(current_instruments_with_positions: list,
                                                                forecasts: portfolioWeights):
    instruments_with_weak_forecasts = instruments_with_weak_or_non_existent_forecasts(forecasts)
    instruments_with_positions_and_weak_forecasts = list(
        set(current_instruments_with_positions).intersection(instruments_with_weak_forecasts))

    instruments_with_positions_and_weak_forecasts_weakest_forecast_last = \
        sort_list_of_instruments_by_forecast_strength(forecasts,
                                                      instruments_with_positions_and_weak_forecasts)

    return instruments_with_positions_and_weak_forecasts_weakest_forecast_last


def instruments_with_weak_or_non_existent_forecasts(forecasts: portfolioWeights) -> list:
    weak_forecasts = [instrument_code
                      for instrument_code, forecast_value in forecasts.items()
                      if _weak_forecast(forecast_value)]
    return weak_forecasts


def _weak_forecast(forecast_value: float):
    if np.isnan(forecast_value):
        return True
    # FIXME SHOULD COME FROM SYSTEM: HARD CODING IS THE DEVIL'S WORK
    if abs(forecast_value) < 5.0:
        return True
    return False


def sort_list_of_instruments_by_forecast_strength(forecasts: portfolioWeights,
                                                  instrument_list) -> list:

    tuples_to_sort = [(instrument_code,
                       _get_forecast_sort_key_given_value(forecasts[instrument_code]))
                      for instrument_code in instrument_list]
    sorted_tuples = sorted(tuples_to_sort, key=lambda tup: tup[1], reverse=True)
    list_of_instruments = [x[0] for x in sorted_tuples]

    return list_of_instruments


def _get_forecast_sort_key_given_value(forecast_value: float):
    if np.isnan(forecast_value):
        return 0.0
    return abs(forecast_value)


def instruments_with_no_position_strongest_forecast_first(forecasts: portfolioWeights,
                                                          current_instruments_with_positions: list,
                                                          list_of_tradeable_instruments: list):

    tradeable_instruments_setted = set(list_of_tradeable_instruments)
    tradeable_instruments_setted.difference_update(current_instruments_with_positions)
    instruments_with_no_position = list(tradeable_instruments_setted)
    list_of_instruments_with_strong_forecasts = instruments_with_strong_forecasts(forecasts)

    list_of_instruments_with_strong_forecasts_and_no_position = \
        list(set(instruments_with_no_position).intersection(set(list_of_instruments_with_strong_forecasts)))

    sorted_instruments = sort_list_of_instruments_by_forecast_strength(
        forecasts=forecasts,
        instrument_list=list_of_instruments_with_strong_forecasts_and_no_position)

    return sorted_instruments


def instruments_with_strong_forecasts(forecasts: portfolioWeights) -> list:
    strong_forecasts = [instrument_code
                        for instrument_code, forecast_value in forecasts.items()
                        if _strong_forecast(forecast_value)]
    return strong_forecasts


def _strong_forecast(forecast_value: float):
    if np.isnan(forecast_value):
        return False
    # FIXME SHOULD COME FROM SYSTEM
    if abs(forecast_value) < 5.0:
        return False
    return True

That improves things a little; the cost comes down to 20bp of SR. But that's still a lot - about double what it is in the benchmark cases.

Let's restrict the universe of instruments we can consider adding to those with forecasts over 10, rather than over 5. Then we have:

  • Absolute forecasts greater than 10:
    • Existing position: won't be closed
    • New position: probably will be opened
  • Absolute forecasts between 5 and 10:
    • Existing positions: won't be closed
    • New positions: won't be opened
  • Forecasts less than 5:
    • Existing positions: may be closed
This creates a 'no trade zone' for forecasts between 5 and 10.

.... and makes almost no difference, lowering the costs by just 1bp of SR.

Clearly I could play with these boundaries until I got a nicer result, but this reeks of implicit fitting and I feel the gap is just too large.
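For the record, this second set of thresholds boils down to a small decision function. This sketches the band logic only - the full code above also enforces the maximum position count and contract rounding, and the handling of exact boundary values here is my choice:

```python
def action_for(has_position: bool, abs_forecast: float) -> str:
    # Hysteresis: the bar to open (10) is higher than the bar to close (5),
    # so a forecast drifting in the 5-10 'no trade zone' generates no orders
    if has_position:
        return "keep" if abs_forecast >= 5 else "may close"
    return "may open" if abs_forecast >= 10 else "no trade"

# A forecast of 7 keeps an existing position, but won't open a new one
existing = action_for(True, 7.0)   # "keep"
new = action_for(False, 7.0)       # "no trade"
```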


Some other things we could try


There are more complicated things we could do here, for example considering diversification when adding potential instrument positions, allocating the risk budget by asset class or instrument cluster, or perhaps a more sophisticated approach to costs.... but I think we'd just end up in the bad old world of complex dynamic optimisation that I narrowly escaped from in the second post.


Conclusion


I feel this particular dead horse has been flogged enough. There is no easy way to get around the problem of having insufficient capital to trade loads and loads of futures markets. Any kind of dynamic optimisation, either by simple ranking (this post), or complex formula (posts 1 and 2), just isn't very effective, and involves making the nice simple straightforward trading system very ugly indeed.

By far the simplest approach is to sensibly choose some subset of those markets, and use those as your static set of instruments as I did in post #3 of this series. This also happens to be the best performing option in a backtest. For the $500K of capital that I have the effect on performance is fairly minimal in any case.

Yes, there will be FOMO if an instrument I don't own shows a seriously good trend, but I will just have to live with that.

Things are clearly tougher if you only have $100K or less, but then as my third book points out maybe you should be trading other leveraged instruments.

My personal 'to do' list now consists of tactically reweighting my portfolio towards the 28 instruments I found to be optimal for my account size here, and putting into place the technology to allow regular (annual?) reviews of my set of instruments.

Thanks for your help Auntie B.

AB "You're welcome. And I hope for your sake that the Jacuzzi is still warm."