It's for this reason that only 2 out of the 75 posts I've published on this blog have been about trading rules (this one on trend following and carry; and this one on my 'breakout' system). But... if I look at my inbox, or blog comments, or my thread on elitetrader.com, the most common request is for me to "write about X"... where X is some trading rule I may have casually mentioned in passing that I use, but haven't written about.
So I have mixed feelings writing this post (in which the metaphorical kimono will be completely opened - there are no more secret trading rules hiding inside my system). I'm hoping that this will satisfy the clamour for information about the other trading rules that I run. Of course it's also worth adding these rules to my open source python project pysystemtrade, since I hope that will eventually replace the legacy system I use for my own trading, and I won't want to do that unless I have a complete set of trading rules that matches what I currently use.
But I'd like to (re-)emphasise that there is much, much, much more to successful systems trading than throwing every possible trading rule into your back test and hoping for the best. Adding trading rules should be your last resort once you have a decent framework, and have done as much instrument diversification as your capital can cope with.
Pre-requisites: Although there is some messy pysystemtrade python code for this post here, you don't need to use it. It will however be helpful to have a good understanding of my existing trading rules: Carry and EWMAC (Exponentially Weighted Moving Average Crossover), which you can glean from my first book or this post - most of the rules I discuss here are built upon those two basic ideas.
PS You'll probably notice that I won't talk in detail about how you'd develop a new trading rule; but don't panic, that's the subject of this post.
Short volatility
I'm often asked "What do you think your trading edge is?" A tiresome question (don't ask it again if you want to stay in my good books). If I have any 'edge' it's that I've learned, the hard way, the importance of correct position sizing and sticking to your trading system. My edge certainly doesn't lie in creating novel trading rules.
Instead the rules I use all capitalise on well known risk factors: momentum and carry for example. You'll sometimes see these called return factors but you don't get return without risk. Of course we all have different risk tolerances, but if you are happy to hold positions that the average investor finds uncomfortably risky, then you'll earn a risk premium (at least it will look like a premium if you use standard measures of risk when doing your analysis). A comprehensive overview of the world of return factors can be found in this excellent book or in this website.
One well known risk factor is the volatility premium. Simply put investors are terrified of the market falling, and bid up the price of options. This means that implied volatility (effectively the price of volatility implied by option prices) will on average be higher than expected realised volatility.
How can a systematic futures trader earn the volatility premium? You could of course build a full blown options trading strategy, like my ex AHL colleague. But this is a huge amount of work. A much simpler way is to just sell volatility futures (the US VIX, and European V2TX); in my framework that equates to using a constant forecast of -10, or what I call in my book the "no rule" trading rule (note that because of position scaling we'll still take smaller positions when the volatility of volatility is higher, and vice versa).
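For concreteness, here is a minimal sketch of what such a constant forecast rule might look like in Python. The function name is mine, not from pysystemtrade, and this is just an illustration of the idea rather than the production code:

```python
import pandas as pd

def short_vol_bias(price: pd.Series, forecast: float = -10.0) -> pd.Series:
    # The "no rule" trading rule: a constant short forecast aligned to
    # the price index. Position scaling elsewhere in the framework
    # divides by instrument vol, so positions still shrink when the
    # vol of vol rises.
    return pd.Series(forecast, index=price.index)
```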
And here is a nice picture showing a backtest of this rule:
"With hindsight Rob realised that starting his short vol strategy in late 2007 may not have been ideal timing...." |
Earning this particular premium isn't for the faint hearted. You will usually earn a consistent return, with occasional, horrific, drawdowns. This is what I call a negative skew / insurance selling strategy. Indeed, based on monthly returns the skew of the above is a horror show: -0.664. This isn't as bad as the underlying price series, because vol scaling helps improve skew, but it's still pretty ugly (on the S&P 500 the same strategy has a much nicer skew of 0.36).
It is a good complement to the positive skew trend following rules that form the core of my system (carry is broadly skew neutral, depending on the asset class). For various reasons I don't recommend using the first contract when trading vol futures (in my data the back adjusted price is based on holding the second contract). One of these good reasons is that the skew is really, really bad on the first contract.
But... we already have trend following and carry in vol? Do we need a short bias as well?
I already include the VIX, and V2TX, in my trend following and carry strategies. That means to an extent I am already earning a volatility premium.
How come? Well imagine you're holding the first VIX contract, due to expire in a month's time. The price of that (implied vol) will be higher than the current level of the VIX (which I'll call, inaccurately, spot vol), reflecting the desire of investors to pay up for protection against volatility in the next month. As the contract ages the price will drift down to spot levels, assuming nothing changes: a rolldown effect on futures prices. That's exactly what the carry strategy is designed to capture.
This isn't exactly the same as the implied versus spot vol premium; but it's very closely related.
Now consider trend following. Assuming you use back adjusted futures prices, then in an environment where spot vol doesn't move, but in which there is negative rolldown for the reasons described above, the back adjusted price will drift downwards. This will create a trend in which the trend following strategy will want to participate.
Arguably trend following and carry are actually better than being short vol, since they are reactive to changing conditions. In 2008 a short vol strategy would have remained stubbornly short in the face of rapidly rising vol levels. But trend following would have ended up going long vol (eventually, depending on the speed of the rule variation). Also in a crisis the vol curve tends to invert (further out vol becoming cheaper than nearer vol) - in this situation a carry strategy would buy vol.
The vol curve tends to invert in a crisis
So.... what happens if I throw carry and trend following back into the mix? Using the default optimisation method in pysystemtrade (Bayesian shrinkage) the short biased signal gets roughly a 10% weight (sticking to just VIX and V2X). That equates to an improvement in Sharpe Ratio on the overall account curve of the two vol futures of just 0.03, a difference that isn't statistically significant. And the skew gets absolutely horrific.
So... is this worth doing? I'll discuss this general issue at the end of the post. But on the face of it, using trend following and carry on vol futures might be a better way of capturing the vol premium than just a fixed short bias. Using all three could of course be even better.
An aside: What about other asset classes?
An excellent question is why we don't incorporate a bias to other asset classes that are known to earn a risk premium; for example long equities (earning the equity risk premium) or long bonds (earning the term premium)?*
* I'm not convinced that there is a risk premium in commodities; at best these might act as an inflation hedge, but without a positive expected return. It's not obvious what premium you'd earn in FX, or which way round you should be to earn it.
This might make sense if all your capital was in systematic futures trading (which I don't recommend - it's extremely difficult to earn a regular income purely from trading). But I, like most people, own a chunk of shares and ETFs which nicely cover the equity and bond universe (and which pay relatively steady dividends which I'm happy to earn an income from). I don't really need any more exposure to these traditional asset classes.
And of course the short vol strategy has a relationship with equity prices; crashes in equities normally happen alongside spikes in the VIX / V2X (I deliberately say relationship here rather than correlation, since the relationship is highly non linear). Having both long equity and short vol in the same portfolio is effectively loading up massively on short black swan exposure.
Relative carry
The next rule I want to consider is also relatively simple - it's a relative version of the carry rule that I describe in my book and which is already implemented in pysystemtrade. As the authors of this seminal paper put it:
"For each global asset class, we construct a carry strategy that invests in high-carry
securities while short selling low-carry instruments, where each instrument is weighted
by the rank of its carry"
Remember that the original carry forecast is quite noisy; to deal with that we need to smooth it. In my own system I use a fixed smooth of 90 business days (as many futures roll quarterly) for both absolute and relative carry.
Mathematically the relative carry measure for some instrument x will be:
Rx_t = Cx_t - median(Ca_t, Cb_t, ...)
Where Ca_t is the smoothed carry forecast for some instrument a, Cb_t for instrument b and so on; where a,b, c....x are all in the same asset class.
Note - some people will apply a further normalisation here to reflect periods when the carry values are tightly clustered within an asset class, or when they are further apart - the normalisation will ensure a consistent expected cross sectional standard deviation for the forecast. However this is leveraging up on weak information - not usually a good idea.
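In pandas the calculation above might look something like this. A sketch only: it assumes you already have a dataframe of smoothed carry forecasts, one column per instrument, all within the same asset class (the column names would be whatever your own data uses):

```python
import pandas as pd

def relative_carry(smoothed_carry: pd.DataFrame) -> pd.DataFrame:
    # smoothed_carry: one column of smoothed carry forecasts per
    # instrument, all in the same asset class
    asset_class_median = smoothed_carry.median(axis=1)
    # Rx_t = Cx_t - median(Ca_t, Cb_t, ...)
    return smoothed_carry.sub(asset_class_median, axis=0)
```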
This rule isn't super brilliant by itself. Here it is, tested using the full set of futures in my dataset:
It clearly underperforms its cousin, absolute carry. More interestingly though, the predictors look to be doing relatively different things (the correlation is much lower than you might expect, at around 0.6), and the optimisation actually gives the relative carry predictor around 40% of the weight when I run a backtest with only these two predictors.
Lobbing together a backtest with both relative and absolute carry, the Sharpe ratio improves from 0.508 to 0.524 (monthly returns, annualised). Again hardly an earth shattering improvement, but it all helps.
Normalised momentum
Now for something completely different. Most trading rules rely on the idea of filtering the price series to capture certain features (the other school of thought within the technical analysis camp is that one should look for patterns, which I'm less enthusiastic about). For example, an EWMAC trend following rule is a filter which tries to see trends in data. Filtering is required because price series are noisy, and a lot of that noise just contributes to potentially higher trading costs rather than giving us new information.
But there is another approach - we could normalise the price series to make it less noisy, and then apply a filter to the resulting data. The normalised series is cleaner, and so the filters have less work to do.
The normalisation I use is the cumulative normalised return. So given a price series P_0, P_1 ... P_T the normalised return is:
R_t = (P_t - P_[t-1]) / sigma(P_0 .... P_t)
Where sigma is a standard deviation estimate using data up to time t. Also, to avoid really low vol or bad prices screwing things up, I apply a cap of 6.0 on the absolute value of R_t. Then the normalised price on any given day t will be:
N_t = R_1 + R_2 + R_3 + .... R_t
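Here is a minimal pandas sketch of the calculation. I'm assuming an exponentially weighted vol estimate with roughly a one month span (which is what I use elsewhere; see the comments below); the exact span and min_periods choices here are illustrative:

```python
import pandas as pd

CAP = 6.0  # cap on absolute normalised returns

def normalised_price(price: pd.Series) -> pd.Series:
    daily_change = price.diff()
    # Vol estimate: exponentially weighted standard deviation of price
    # changes; swap in your own estimator if you prefer
    vol = daily_change.ewm(span=22, min_periods=10).std()
    norm_return = (daily_change / vol).clip(-CAP, CAP)  # R_t, capped
    return norm_return.cumsum()                         # N_t
```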
NOTE: For scholars of financial history, I've personally never seen this trading rule used elsewhere - it's something I dreamed up myself about three years ago. However it comes under the "too simple not to have been already thought of" category, so I expect to see comments pointing out that this was invented by some guy, or gal, in 1952. If nobody does then I will not feel too embarrassed to call this "Carver's Normalised Momentum".
Perceptive readers will note:
- You probably shouldn't use normalised prices to identify levels since the level of the price is stripped out by the normalisation.
- These price series will not show exponential growth; the returns will be roughly normal rather than log normal. This is a good thing since over long horizons using prices that show exponential growth tends to screw up most filters since they don't know about exponential growth. Over relatively short horizons however it makes no difference.
- Simple returns calculated using the change in normalised price can be directly compared and aggregated across different instruments, asset classes and time periods; something that you can't do with ordinary prices. We'll use this fact later.
Rather boringly I am now going to apply my favourite EWMAC filter to these normalised price series, although frankly you could apply pretty much anything you like to them.
Minor point: The volatility normalisation stage of an EWMAC calculation [remember, it's (ewma_fast - ewma_slow) / volatility] isn't strictly necessary when applied to normalised price series, which will have a constant expected volatility; but it's more hassle to take it out, so I leave it in here.
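As a sketch, continuing from the normalised price function above (the 16/64 speeds are purely illustrative, not a recommendation):

```python
import pandas as pd

def ewmac(norm_price: pd.Series, fast: int = 16, slow: int = 64) -> pd.Series:
    # (ewma_fast - ewma_slow) / volatility, applied to the normalised price
    raw = norm_price.ewm(span=fast).mean() - norm_price.ewm(span=slow).mean()
    # Daily vol of the normalised price; roughly constant by construction,
    # but retained for consistency with the standard rule
    vol = norm_price.diff().ewm(span=22, min_periods=10).std()
    return raw / vol
```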
Normalised momentum
Performance wise there isn't much to choose between normalised momentum and standard EWMAC on the actual price; but these things aren't perfectly correlated, and that can only be a good thing.
Aggregate momentum
It's generally accepted that momentum doesn't work that well on individual stocks. It does however sort of work on industries. And it is relatively better again when applied to country level equity indices.
I have an explanation for this. The price of an individual equity is going to be related to the global equity risk premium, plus country specific, industry specific, and idiosyncratic firm specific factors. The global equity risk premium seems to show pretty decent trends. The other factors less so; and indeed by the time you are down to within industries mean reversion tends to dominate (though you might call it the value factor, which if per share fundamentals are unchanged amounts to the same thing).
Value type strategies then tend to work best when we're comparing similar assets, like equities in the same country and industry; also because accounting ratios are more comparable across two Japanese banks, than across a Japanese bank and a Belgian chocolate manufacturer. There is a more complete expounding of this idea in my new book, to be released later this year.
So trading equity index futures then means we're trying to pick up the momentum in global equity prices through a noisy measurement (the price of the equity index) with a dollop of mean reverting factor added on top.
If you follow this argument to its logical conclusion, then the best place to see momentum will be at the global asset class level*. There we will have the best measure of the underlying risk factor, without any pesky mean reversion effects getting in the way.
* A future research project is to go even further. I could for example create super asset classes, like "all risky assets" [equities, vol, IMM FX which are all short USD in the numeraire, commodities...?] and "all safe assets" [bonds, precious metals, STIR, ...]. I could even try and create a single asset class using some kind of PCA analysis to identify the single most important global factor.
How do we measure momentum at the asset class level? This is by no means a novel idea (see here) so there are plenty of suggestions out there. We could use benchmarks like MSCI world for equities, but that would involve dipping into another data source (and having to adjust because futures returns are excess returns, whilst MSCI world is a total return); and it's not obvious what we'd use for certain other asset classes. Instead I'm going to leverage off the idea of normalised prices and normalised returns which I introduced above.
The normalised return for an asset class at time t will be:
RA_t = median(Ra_t, Rb_t, Rc_t, ...)
Where Ra_t, Rb_t are the normalised returns for the individual instruments within that asset class (eg for equities that might include SP500 futures, EUROSTOXX and so on). You could take a weighted average, using market cap or your own risk allocations to each instrument, but I'm not going to bother with weights and will just take the unweighted median, as above.
Then the normalised price for an asset class is just:
NA_t = RA_1 + RA_2 + RA_3 + .... RA_t
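As a rough pandas sketch of this aggregation (the instrument columns are whatever is in your own dataset):

```python
import pandas as pd

def asset_class_normalised_price(norm_returns: pd.DataFrame) -> pd.Series:
    # norm_returns: capped normalised returns R_t, one column per
    # instrument in the asset class (eg SP500, EUROSTOXX, ...)
    ra = norm_returns.median(axis=1)  # RA_t, the cross sectional median
    return ra.cumsum()                # NA_t, asset class normalised price
```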
Next step is to apply a trend following filter to the normalised price... yes why not use EWMAC?
Minor point of order: it's definitely worth keeping the volatility normalisation part of EWMAC here, because the volatility of NA is not constant even when the volatility of each Na, Nb... is. If equities become less correlated then the volatility of NA will fall, and vice versa; and as more assets are added to the data basket and diversification increases, again the volatility of NA will fall. Indeed NA should have an expected volatility that is lower than the expected volatility of any of Na, Nb...
Having done that we have a forecast that will be the same for all instruments in a particular asset class.
If I compare this to standard, and normalised, momentum:
... again performance wise not much to see here, but there is clearly diversification despite all three rules using EWMAC with identical speeds!
Cross sectional within assets
So we can improve our measure of momentum using aggregated returns across an asset class. This works because the price of an instrument within an asset class is affected by the underlying latent momentum of the global asset class, plus a factor that is mostly mean reverting. Won't it also make sense then to trade that mean reversion? In concrete terms: if, for example, the NASDAQ has been outperforming the DAX, shouldn't we bet on that outperformance no longer continuing?
Mathematically then, if NA_t is the normalised price for an asset class, and Nx_t is the normalised price for some instrument within that asset class, then the amount of outperformance (or if you prefer, Disequilibrium) over a given time horizon (tau, t) is:
Dx_t = [Nx_t - Nx_tau] - [NA_t - NA_tau]
Be careful of making t - tau too large: remember the slightly different properties of Nx and NA; the former has constant expected vol, whilst the latter will, by construction, have lower and time varying vol. But also be careful of making it too small - you need sufficient time to estimate an equilibrium. A value of around six months probably makes sense.
And my personal favourite measure of mean reversion is a smooth of this out-performance:
- EWMA(Dx_t, span)
Where EWMA is the usual exponentially weighted moving average; this basically ensures we don't trade too much whilst betting on the mean reversion. The minus sign is there to show mean reversion is expected to occur (I prefer this explicit reminder, rather than reversing the stuff inside Dx).
Using my usual heuristic, finger in the air, combined with some fake data I concluded that a good value to use for the EWMA span was one quarter of the horizon length, t - tau.
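Putting these pieces together, a minimal sketch under my assumptions (128 business days as roughly six months, and a smoothing span of a quarter of the horizon):

```python
import pandas as pd

def mean_reversion_forecast(nx: pd.Series, na: pd.Series,
                            horizon: int = 128) -> pd.Series:
    # Dx_t = [Nx_t - Nx_tau] - [NA_t - NA_tau], with tau = t - horizon
    dx = (nx - nx.shift(horizon)) - (na - na.shift(horizon))
    # Smooth, and flip the sign: we are betting on mean reversion
    return -dx.ewm(span=horizon // 4).mean()
```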
Here is an example for US 10 year bond futures. First of all the normalised prices:
Blue is US 10 year normalised price. Orange is the normalised price for all bond futures.
US 10 year bond future normalised price minus bond asset class normalised price
Notice how the system first bets strongly on mean reversion occurring during the taper tantrum, but then re-estimates the equilibrium and cuts its bet. With any mean reversion system it's important to have some mechanism to avoid catching a falling knife; whether it be something simple like this, a formal test for a structural break, or a stop loss mechanism (also note that forecast capping does some work here).
What about performance? You know what - it isn't great:
Performance across all my futures markets of the mean reversion rule
BUT this is a really nice rule to have, since by construction it's strongly negatively correlated with all the trend following rules we have (in case you have lost count, there are now four: original EWMAC, breakout, normalised momentum, and aggregate momentum; with just two carry rules - absolute and relative; plus the odd one out - short volatility). Rules that are negatively correlated are like buying an insurance policy: you shouldn't expect them to be profitable (because insurance companies make profits in the long run), but you'll be glad you bought them if your car is stolen.
In fact I wouldn't expect this rule to perform very well, since plenty of people have found that cross sectional momentum works sort of okay in some asset classes (read this: thank you my ex-colleagues at AHL) and this is doing the opposite (sort of). But strong negative correlation means we can afford to have a little slack in accepting a rule that isn't stellar in isolation (a negatively correlated asset with a positive expected return can be used to create a magic money machine).
Note: This rule is similar in spirit to the "Value" measure defined for commodity futures in this seminal paper (although the implementation in the paper isn't cross sectional). To reconcile this it's worth noting that momentum and value mostly operate on different time frequencies - in the paper the value measure is based on 5 year mean reversion [I use 6 months], whilst the authors use a 12 month measure for momentum [roughly congruent to my slowest variation].
Summary
Does adding these rules improve the performance of a basic trend following (EWMAC on price) plus carry strategy? It doesn't (I did warn you right at the start of the post!), but is it still worth doing? I use a variation of Occam's Razor when evaluating changes to my trading strategy. Does the change provide a statistically significant improvement in performance? If not, is it worth the effort? (By the way, I make exceptions for simplifying and instrument diversifying changes when applying these rules.)
I'd expect there to be a small improvement in performance, given these rules are diversifying, and given that there isn't enough evidence to suggest that these rules are better or worse than any of my existing rules. But in practice it actually comes out with slightly worse performance; although not with a statistically significant difference.
But I don't care. I have a Bayesian view that the 'true' Sharpe Ratio of the expanded set of rules is higher; even if one sample (the actual backtest) comes out slightly different, that doesn't dissuade me. I'm also a bit wary of relying on just one form of momentum rule to pick up trends in the future, even if it has been astonishingly successful in the past. I'd rather have some diversification.
Note that if I had dropped any of the 'dud' rules, like mean reversion, I'd be guilty of implicit in sample [over]fitting. Instead I choose to keep them in the backtest, and let the optimisation downweight them insofar as there was statistically significant evidence they weren't any good.
The new rules have less of a long bias to assets that have gone up consistently in the backtest period; so arguably they have more 'alpha' though I haven't formally judged that.
Although on the face of it there is no compelling case for adding all these extra rules, I'm prepared to make an exception. Although I don't like making my system more complex without good reason, there is complexity, and there is complexity. I would rather have (a) a relatively large number of simple rules combined in a linear way, with no fancy portfolio construction, than (b) a single rule which has an insane number of parameters and is used to determine expected returns in a full blown Markowitz optimisation.
So I'm going to be keeping all these numerous rule variations in my portfolio.
Rob, you have a link to a PDF file on your local disk in the article (AHL analysis of cross-sectional momentum).
Also in asset class normalised return section there seems to be a confusion between Nx_t and Rx_t:
"Where Na_t, Nb_t are the normalised returns for the individual instruments within that asset class ...".
Fixed. Thanks Alex.
Hi Rob,
The link is still broken. I mean this one: file:///home/rob/Downloads/Man_AHL_Analysis_Dissecting_Investment_Strategies_in_the_Cross_Section_and_Time_Series_English_01-12-2015.pdf
... fixed again! Thanks again.
Hi Rob,
Wonderful work again.
One thing I noticed was all your signals are price driven. Do you have any suggestions as to fundamental factors that can be used for different asset classes? Or promising fundamental factors that you've researched/seen?
Thank you again!
Here are some I've seen or used myself: Macro factors like GDP, interest rates, inflation and unemployment
Specifically equities: Bottom up valuation factors (PE, Dividend yield, PB etc)
Bonds and rates: forward rates from the yield curve
GAT
Hi Rob,
Thanks a lot for bringing knowledge to people, you're doing a great service to a lot of us!
I wanted to ask you about momentum in stocks. You've mentioned that momentum works better the higher in the hierarchy you go, and that it almost does not work at the individual stock level; therefore it would be interesting to know your opinion about this strategy, which was presented at the latest QuantCon by Jack R. Vogel (details here on page 2: https://www.alphaarchitect.com/assets/pdf/Quantitative_Momentum_philosophy_final.pdf )?
The gist is basically you buy and hold large and mid cap stocks of operational companies with "best-quality" momentum in the last year excluding last month and re-balance this every quarter.
I don't know that strategy so I don't feel qualified to comment on it (there is insufficient detail in the paper to form an opinion). I also note that there is a difference between cross sectional momentum (which is what most academic papers are about - and this one?) and absolute momentum (what I was concerned with in that part of the post).
Why are trading rules not important? Does this not completely depend on the correlation amongst your trading rules? Granted, if you trade ma x-over, adding breakouts or momentum is almost futile. But adding carry makes a meaningful difference, right? Doesn't that justify the search for more rules that have low correlation with the existing strategies?
I always assume that that's what the 99 PhDs at the larger CTAs do (apart from implementation research).
Adding new trading rules has diminishing returns. Yes, adding carry to a basic momentum system adds a lot of juice. Subsequently adding another 10 ways of trading momentum or carry is going to improve things a little, but not very much. Then you're going to be struggling to find things that add significant value. What tends to happen is that you struggle more and more to realise the theoretical gains from diversification through adding things that on paper are 70, 80, 90% correlated.
I'm not as convinced as others that adding new CTA style trading rules is the way to make money. Diversifying across instruments and across styles is more valuable (i.e. if you're a CTA then adding a single equity market neutral model is a good move. This in fact is what a lot of PhDs are doing).
I would argue that for strategies with very limited capacity (e.g., HFT), adding new rules is more important than position sizing.
I'd disagree. I think diversification across multiple instruments is more important in HFT.
Rob, one more question for you.
If I remember right, in your book you mention that we should aim for a granularity of at least +/-4 futures contracts. If the contract size is too large for us to do this, then you recommend trading fewer instruments.
Is this implicitly saying that it is better to have more granularity within a rule, rather than more diversification among instruments? I'm just curious whether you've found that to be the case in backtesting, e.g. is it better to have 5 instruments and high granularity in your forecast, or 20 instruments, but only be able to trade +- 1 contract each?
https://qoppac.blogspot.co.uk/2016/03/diversification-and-small-account-size.html
Hi Rob,
Thank you for this great post.
I like your approach with strategy forecast standardization (e.g. -20 to +20). Do you think that _any_ hedge fund/asset manager/bank strategy can be expressed in those terms (e.g. -20 to +20)? Do you see any limitations of this approach?
Best,
Max
In theory any forecast can be standardised. It's most problematic for highly non Gaussian forecasts.
What are your thoughts on the "101 Formulaic Alphas" by the guys at WorldQuant (Millennium)? They claim to have 4 million of these things, each with a Sharpe > 2.5, turnover < 40% and drawdown < 10%. They also want them to be "intuitive". The problem is how do you evaluate 4 million alphas for intuition? Or 4 million alphas with modest correlations to each other? I've tested the majority of their alphas on their own WebSim platform and every single one is considered "Inferior". I obviously conduct research differently (I don't even know if I want 4 million alphas) but when someone as successful as they are gives you hints about how they do things it's worth looking into. Just wanted to get your thoughts on their work and things like this!
Can't wait to purchase the new book by the way!
I haven't read the paper so I've little idea of what is going on there.
Pure speculation on my part, but you could have 101 intuitive ideas and then create 4 million variations of those by just running through all the sensible parameters in the space. This also makes it easier to measure correlations (you could do it in a hierarchical sense).
"Each with a sharpe > 2.5 and turnover < 40% and drawdown < 10%"
Obviously they're trading at a much higher frequency than I am, since I have nothing that can achieve that... It's impossible to tell if this is really what they're doing since they don't actually publish the trading rules!!
Hello:
Are there more details (maybe with code) about this topic in your book?
Nothing on these specific rules. All the python code you need is linked to.
Hi Rob,
Wonderful new book - I will be writing a review for it!
A question on volatility scaling at the strategy level. Let's say I have only one strategy and I would like to target 15% volatility. However, the strategy does better when it becomes more volatile. So by simply volatility adjusting based on recent volatility, performance on a risk adjusted basis gets hurt. How would you reconcile this?
Thank you!
"volatility adjusting based on recent volatility performance on a risk adjusted basis gets hurt"
Yes, but I don't vol adjust based on recent performance. The vol targeting for a *strategy* is done based on its entire history. When forecasts are high expected vol will also be higher. And you'd expect to make more money then (stronger signal).
True. The problem is the first half of the strategy has vol of around 10% then the second half has vol of around 6%. So I think using an expanding window might be misleading.
Hi Rob, I am trying to get my head around a separate carry signal in the trading system which I am attempting to devise in the image of yours. However, I am grappling with the idea that carry is already inherently built into the TF signal. To use this simplified example to illustrate, let's say the path of the 1st nearby mth for some asset is 100 -> 99 -> 98 and that of the 'spot' price is 102 -> 101 -> 100. The spot return is -1 per month and carry is +2 per mth. Let's say we are trading the 1st nearby and rolling at expiry; the prices we observe for the contracts we are in are: 100 -> 101 -> roll to 99 -> 100 -> roll to 98. When we stitch and backadjust the 1st nearby price we get a continuous price going UP from 96 -> 97 -> 98 (mthly spot return of -1 with the mthly carry return +2 giving us a net return of +1 per mth). If we were to look at a simple momentum forecast this would be +ve entirely because of the carry element, even though the spot price is falling (the carry component on the rolls overwhelms the TF element). However, when dealing with MA crossover is it the case that some of the carry effects will cancel out, as we are subtracting one MA from another? Even so, there might still be some upward bias to the TF signal if the carry is greater in the shorter lookback than in the longer lookback. This led me to think that carry is in some sense already built into the TF signal for a futures TF system. If this is the case, I wonder if I need to take care to avoid 'doubling up' carry? Perhaps the difference in holding periods might resolve this conundrum, but I am not entirely sure.
You're right that carry is built into the backadjusted price. This is a dilemma that I've thought about before. So for example you might think it's logical to remove carry from your backadjusted price and then trend follow the resulting price; and then have a separate carry signal. It's a difficult call and very hard to distinguish the performance if you do this; since inevitably you end up allocating more to the separate carry signal because the 'spot' trend following doesn't do as well. In practice it probably makes no difference; but be aware that your 'true' allocation to carry is higher than you might realise.
ReplyDeleteThank you Rob,
It's very good to get your insight on this. I actually see the problem as being less acute when applying your suggested EWMACs to generate the TF signal. It seems to me that the amount of 'extra' carry in the system might be +ve or -ve depending on a second order measure of carry, or the 'delta' between the EWMA carry over the short lookback and the long lookback.
If so, it would be tempting to adjust the carry forecast by this delta amount, but that would just make the carry forecast more volatile which I suppose defeats the purpose of smoothing carry, in which case, as you seem to be saying, we might as well live with it.
Are these observations correct?
Hi Rob,
Thank you for your great insight, I really liked your book. You mentioned that a staunch systematic trader should aim to adjust their positions once they have entered into a trade, based on volatility, forecast and price inertia. While I understand such distinctions for certain trading rules like mean reversion (confidence and forecast based on indicators or standard deviations), I was wondering how one might go about implementing similar adjustments to a strategy like EWMAC.
For instance, a crossover is a crossover. Perhaps different forecast weights can be placed on a crossover depending on indicated price movements, or something along those lines. But what about after the trade has occurred? For example, if the fast and the slow ema diverge from the initial crossover, what would that entail for a trader as to whether he should accumulate or decrease his position sizing on the portfolio position?
Thank you once again!
"For example, if the fast and the slow ema diverge from the initial crossover, what would that entail for a trader as to whether he should accumulate or decrease his position sizing on the portfolio position?"
Exactly that. If the fast EMA is higher than the slow (so we are long), and the gap grows we would get longer. If the gap shrinks we would reduce our position.
My next blog post will be on this subject.
Following the methodology for combining trading systems outlined in Leveraged Trading, suppose I combined a trend following forecast with a carry forecast 50/50. If I scale the average trend following forecast to 10 across instruments, the average trend following score for any particular instrument may be a bit higher or lower than 10, but won't be far off. The same is not true for the carry forecasts since even on a volatility adjusted basis some instruments have much more structural carry than average and some have much less. For example, a carry score on many of the metals never gets much above 5 and is usually much lower. So, isn't the carry score "wasted" on these instruments with lower structural carry. Gold, for example, will have an average trend following score of around 10 but an average carry score of something like 2. Am I thinking about this correctly? Is it worth making some sort of adjustment, like upweighting the trend following score for instruments with low structural carry? That seems like gross over-fitting but doing nothing would result in metals and other instruments having a much lower average combined forecast than other instruments. Thank you for any thoughts!
This is a well known 'thing' with all forecasting rules, but it especially applies to carry and slower momentum. If you want every instrument in your portfolio to have, on average in your backtest, an average position of zero then yes, you will need to do some kind of adjustment. There are different ways of doing this; personally I would rescale forecasts so that rather than targeting the average absolute value, they instead target a mean of zero and standard deviation of 10. You can do this at the total forecast level or for individual forecasts, and in a nice backward looking way. However you may take the view that you want to be overweight instruments with higher structural carry or higher momentum, since this has historically been an indication of higher performance. Since I take this view, I don't apply such adjustments.
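(For what it's worth, a sketch of the backward looking rescaling described above; the min_periods choice is illustrative:)

```python
import pandas as pd

def rescale_forecast(raw_forecast: pd.Series,
                     min_periods: int = 250) -> pd.Series:
    # Backward looking (expanding) estimates of mean and standard
    # deviation, so no future information leaks into the backtest
    mean = raw_forecast.expanding(min_periods=min_periods).mean()
    stdev = raw_forecast.expanding(min_periods=min_periods).std()
    # Target a mean of zero and a standard deviation of 10
    return 10.0 * (raw_forecast - mean) / stdev
```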
Hi Rob,
What variation of Occam's razor do you employ to evaluate changes to your trading strategy? Is it an information-theoretic criterion?
Thank you for all of your insights.
I haven't the foggiest what you are talking about.
Hi Rob,
This is another great post, and I really liked the 'Systematic Trading' book.
Just one quick question: for the market neutral strategy (i.e. cross sectional momentum or mean reversion) you get a forecast value for each instrument, then you take a position of (forecast value * unit risk per instrument)/10 * "risk allocation per instrument". My point is that, due to the risk allocation, we don't get what it has to be (market neutral or dollar neutral). So should we apply different risk allocations per instrument for cross sectional and time series strategies?
Yeah, if I'd had £1 for every time I was asked this question I'd have about £100 :-)
The way to think about it is like this: we are trying to predict the performance of a given instrument. And one way to do that is to see if it is overvalued or undervalued versus its peers, using mean reversion / RV type systems; whilst another way is to do so in absolute terms, eg momentum.
We decide how best to predict the performance of an individual system by combining lots of different ways of predicting that instrument. We don't care what those different ways are, we're just interested in the linear combination of weights that will give us the best expected out of sample performance.
Once we have predicted the performance of each individual instrument then we worry about creating a portfolio of little trading subsystems that each trade one instrument. And that is another problem where we set the risk allocation per instrument to get the best performance, not caring about how we've forecasted those instrument returns, whether there is relative value or absolute value within each subsystem.
Of course you can do it as a joint problem, but separating it this way is a lot easier.
Thank you Rob and sorry for late reply.
It sounds clear that we apply the usual risk allocation to those trading strategies, whether time-series or cross-sectional.
One more thing I struggle with is how to measure the cross section; it could be the median, or some ranking system.
Let's say we get (20,20,10,8,6,-20) for the absolute carry measure. Applying the median approach, we get (11,11,1,-1,-3,-29), and the -29 measurement is somewhat over our maximum forecast value (from your book). So would you just clip the value between -20 and 20, or scale down the relative carry measure by (20/29) to keep the distances between relative carry values? I know there isn't a single answer as to which algo to use, but I would like to clarify which approach is more sensible and robust. Thank you in advance!
I'd do the median averaging before the forecast scaling. That makes it less likely that we'd get values over 20. Depending on the kind of system I'm running, if it's relative carry within a bigger directional system I'd just cap extreme values. If it was a 'pure' relative value system I'd rescale everything to ensure the zero average was maintained.
Hi Rob,
Thank you for sharing all this content so generously!
I have a question regarding trading rules as I am currently working through “Systematic Trading” to design my own system to trade commodities and FX (my core portfolio is stocks/bonds/gold and already has some trendy, e.g. vol targeting, and mean-reversionary stuff to it so I am deliberately concentrating on assets that are “hard” to buy and hold).
With respect to commodities, I have seen some evidence that seasonality patterns exist and have predictive power. Have you ever considered seasonality rules in your trading? Guess it wouldn't make a lot of sense outside commodities, where one could expect supply and demand pressure to follow typical harvesting seasons and weather patterns. Such rules could be a diversifying addition to momentum rules. Consider, e.g., momentum rules that extrapolate an observed negative trend although this trend is really due to a negative seasonality that should rather be expected to turn positive over the upcoming weeks and months.
Am I wasting my time going down a rabbit hole here?
Best, Moritz
Yes... will be in my next book
Been going through the back catalogue of posts, your blog is an absolute goldmine Rob! Very motivated to put together a futures strategy and run it side by side with a microcap factor strategy. Just ordered Systematic Trading.
Looking forward to seeing what you write about seasonality.
Hi Rob,
I recently finished your latest podcast interview with the author of the "Rise of Carry." I got to wondering if you would ever scale back the weight of the carry rule in your system. I know the carry rule has the highest sharpe of all of the rules in the system and you've said if you were forced to trade a single rule it would be carry (I'm not taking a Jerry Parker perspective that it should be entirely excluded). I am only wondering if you might reduce the "risk" by reducing the weighting. Perhaps there is a way to do it systematically? Maybe one could use a (very) long term measure of volatility in markets.
Thanks,
Chris
Hi Chris
What reasons could there be to scale back carry? What piece of exogenous information would we use to tell us that our carry/momentum allocation should be different in the near future?
The reason might be the reversal of conditions that gave "rise to carry." Your second question is what I'm struggling with. During the interview I kept thinking of the charts that Chris Cole references in his initial Dragon portfolio paper that we are at a secular low in both volatility and price trend. But, using either of those measures to scale back carry seems like a bad idea. My other thought was to use some measure of central bank accommodation, though this is not well fleshed out. In any case, I was just curious if you'd considered making any changes based on that interview.
Chris
No I haven't changed anything.
FWIW there is some research on this question in my new book, which I'm currently writing.
I look forward to reading it!
Hi Rob, thanks for the post.
You mentioned the following in your post:
“the other school of thought within the technical analysis campus is that one should look for patterns, which I'm less enthusiastic about”
Can you recommend any books/resources on creating trading rules using patterns? Do you believe this will add diversification to the set of trading rules?
No because they're utter bollocks.
Hi Rob,
How do you know whether to add a trading rule to a portfolio? Is there some threshold correlation you use? E.g. only add a new trading rule if it has less than x % max correlation to existing rules
Similar question when you have a trading rule and you have multiple variations of it (e.g. different lookback windows). Which ones should you choose (assuming you don’t just cherry pick based on backtest performance)?
Hi Rob, may I ask you a question about aggregate momentum and non-synchronous data?
I have only daily close price data for instruments, but each instrument may have different trading hours, and close prices may not be realised simultaneously.
Is it OK to calculate normalised returns for those instruments in the same asset class, and use your formula RA_t = median(Ra_t, Rb_t, Rc_t, ...), in this situation?
Thanks for any thoughts, and looking forward to your new book!
Well I do this. I think as long as you're trading slowly enough, and as long as you lag fills by one day to be conservative, it's okay.
Thanks, that makes sense!
Hi Rob,
I've been working on a system using your books ST for the framework and SP for the handcrafted portfolio allocations. My exposure will be through stocks and ETFs. I was originally planning to get home country equities exposure through individual companies, then ETFs for the rest, like what you suggest in SP. So my equities allocation would look something like 11 US large cap stocks (1 per sector), 11 US mid caps, a US small cap ETF, then ETFs for various other regions. However, after reading this post where you say that momentum is less effective at more specific levels (especially the individual stock level), I'm not sure if that's a good idea. The rules I have so far are momentum (EWMAC and breakout) and carry. So would you suggest not splitting it up and just going with ETFs instead? Or maybe individual stocks can do fine when we apply an aggregate momentum rule to them in addition to the time series momentum rules of EWMAC and breakout? Thanks.
I think the idea of an aggregate momentum rule makes sense, eg only be long the single stocks when the index is going up. It's something I use myself.
I see. So in that case would you suggest keeping a small allocation to EWMAC and breakout for the rule diversification benefit? Or would you say the idiosyncratic risk at the individual stock level completely overwhelms individual momentum and we should completely replace those rules with aggregate momentum?
It's up to you. I already have a futures system, so I already have exposure to those rules elsewhere. But if I was only trading stocks, sure, I'd probably use a mixture of rules, both idiosyncratic and cross asset, just with more of a weight to the cross asset than I'd use at the index level.
Makes sense. Thanks Rob!
Hi Rob. If I understand correctly, for normalised momentum you calculate normalised returns by dividing price changes by the rolling volatility of prices (calculated since the start of the time series). Is that correct, or do you use a fixed lookback to define price volatility?
Thank you
I just use my normal estimation of volatility, which is exponential with a span of a month.
It makes sense, since using a volatility estimate with too long a span would create issues with series where price levels today are very different from prices a few years ago (take for example stitched excess return series for gas or hogs). Thanks again
Dear Mr. Carver,
I am currently utilizing your systematic trading framework to develop a strategy based on fundamental, mostly non-price based data. My approach involves converting raw data into z-scores using an expanding rolling window, ensuring that future data doesn't influence the calculation of the current week's z-score.
Given that most of my data follows non-Gaussian distributions, my z-score thresholds for buying and selling are typically above zero. I am currently struggling with transforming these z-scores into a continuous forecast scale ranging from -20 to 20.
My current methodology employs binary rules where forecasts are set to 1 or -1 based on specific z-score thresholds, with all in-between values assigned to 0. This approach, however, doesn't utilize the full potential of a continuous scale.
I am considering the use of a min-max scalar for this transformation, but I'm concerned about the distortions it might introduce due to the non-normal distribution of my data. Could you provide insights or suggestions on how best to convert these z-scores into a continuous, scaled forecast? Specifically, how can I refine my approach to generate more nuanced forecasts that accurately reflect the varying strengths of the signals derived from my fundamental indicators?
Thank you for your guidance and insights.
Best regards,
Mathias
To deal with non Gaussian forecasts is trivial: measure the quantile (Q) of your forecast versus history, between 0% and 100%. Then you can either use (Q-50)/2.5 as a forecast (which results in a uniform forecast distribution), or map Q onto a Gaussian with mean 0 and std dev 10, so Q=50 maps to a forecast of 0*, Q=10 maps to -12.8, Q=25 maps to -6.7, and so on. If you do this I don't see why you would need thresholds. (* If a Z score of 0 is where you want to be neutral, not long or short, then you can easily adjust your distribution to achieve this.)
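(A rough sketch of that mapping, assuming daily data and scipy; the expanding rank here is simple but slow, and min_periods is my choice:)

```python
import pandas as pd
from scipy.stats import norm

def quantile_forecast(raw: pd.Series, gaussian: bool = True) -> pd.Series:
    # Q: backward looking percentile of today's value versus all
    # history to date, between 0 and 100
    q = raw.expanding(min_periods=100).apply(
        lambda x: 100.0 * (x <= x[-1]).mean(), raw=True)
    q = q.clip(1.0, 99.0)  # avoid infinite forecasts at the extremes
    if gaussian:
        # Map Q onto a Gaussian with mean 0 and std dev 10:
        # Q=50 -> 0, Q=25 -> -6.7, Q=10 -> -12.8
        return pd.Series(10.0 * norm.ppf(q / 100.0), index=raw.index)
    # Alternative: a uniform forecast distribution
    return (q - 50.0) / 2.5
```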
Your suggestions on using quantile measurements and mapping them onto a uniform or Gaussian distribution are indeed helpful. However, I have encountered an additional complexity in my model that I would like your advice on.
Within my framework, a z-score of 0 does not necessarily imply a neutral position. Instead, we determine neutrality by identifying the optimal positive and negative thresholds that maximize a specific performance metric for a specific period; this is how we fit the thresholds. Interestingly, we've observed that the midpoint of these optimal thresholds tends to be slightly positive, likely due to a predominance of bull markets in our data's historical timeframe (commodity futures).
Given this scenario, how would you suggest we adjust the transformation of our z-scores to accommodate this shifted midpoint for neutrality? Is there a method to recalibrate the quantile measurement or the mapping process to reflect a neutrality point that is not zero but a slightly positive number? Any guidance on how to integrate this aspect into the forecast scaling process would be greatly appreciated.
Thank you again for your time and expertise, i am learning a lot by reading your book and my interest in the topic has been greatly enhanced.
So you are deliberately adding an extra parameter to your model to introduce a deliberate long bias? (Obviously I think this is a very stupid idea and a complete waste of time) If you use my method then Q=50 will be neutral, eg your models will be trained to be long half the time. Just work out the Q point where you get your neutral Z score (which if it was my model would be Z=0), and shift all the Q points before doing your mapping.
Given that a z-score of 0 represents the mean and not the median of a distribution, and considering the non-Gaussian nature of our data, would you recommend moving away from the use of z-scores altogether? Is there a more appropriate statistical method or transformation that would better capture the central tendency or neutrality point in our specific dataset? Thanks again for your guidance!
Yes, you could just take a quantile of the underlying thing that you are currently transforming into a Z score.
Dear Mr. Carver,
I came across this discussion and found myself facing a similar challenge. I was wondering if I could seek your guidance on this matter.
Let's consider a scenario where I have a price series, such as Nasdaq, exhibiting a clear trend. In this situation, the mean/median of any variable I analyze for that time frame tends to show a bullish signal.
My question is, what approach would you recommend to identify this "neutral point", which I can then use as a reference for the Q point?
I guess the underlying question is: Is it necessary to have the model where 50% of the time it's long as you mentioned in the response to Mathias?
I've attempted various methods to identify the "neutral point", including backtesting entry points and optimizing for specific metrics, but I'm concerned about the robustness of these approaches.
Your insights would be greatly appreciated. Thank you for your time and expertise.
Rgrds
Felipe
Let's say you are trading momentum using a moving average crossover. You could say that when the crossover is neutral you have a neutral position. This means you will have a long bias in the backtest if the asset tended to go up. Personally, I am fine with this. Or you could demean the forecast using a long run mean for that asset. This will remove the long bias, if you are bothered about that. "I've attempted various methods to identify the 'neutral point', including backtesting entry points and optimizing for specific metrics" - yeah, this is all overfitted bollocks.