Comments — This Blog is Systematic (Rob Carver)

Sorry about the above. I was trying to implement a similar setup in my own (far, far, far simpler) Python scripts. In the end I simply exponentially weighted the mean() and std(), created the Sharpe, then ranked. I did see a consistent effect across three different systems that ranked 90+ TAA (ETF) strategies, and two systems that used a variety of equity systems (many with decade-plus out-of-sample periods).

For fun I also tried to expand on your tests here. There appeared to be a small Sharpe bump from applying sr_equalize False to forecasts only and not instruments (an idea you alluded to at the end), and a significant bump from separately testing the full rule set with these limited instruments, and the full instrument set with these limited rules - not surprising, of course.
-- A, 2024-03-19

I assume for simplicity all the managers are targeting the same vol (which, if they are all CTAs, I would imagine is about 10%, given you end up with 6.4% vol on the portfolio). One starting point is this post https://qoppac.blogspot.com/2022/02/exogenous-risk-overlay-take-two.html where you can see that 'normal risk' for my system varies between 0.5 and 1.5 times the risk target. That is expected risk; realised risk will vary a bit more, and it will also vary more for a CTA that is a purer trend follower with less diversification.
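The "0.5 to 1.5 times the risk target" range quoted above suggests a simple way to monitor divergence: track realised vol as a multiple of the target and only act when the ratio leaves an agreed band. A minimal sketch, where the 35-day EWMA span and the 1.5x trigger are illustrative assumptions, not recommendations:

```python
import numpy as np
import pandas as pd

DAYS_IN_YEAR = 256

def risk_ratio(daily_returns: pd.Series, target_vol: float = 0.25,
               span: int = 35) -> pd.Series:
    # Realised annualised vol from an EWMA of daily returns,
    # expressed as a multiple of the annual vol target
    realised = daily_returns.ewm(span=span).std() * np.sqrt(DAYS_IN_YEAR)
    return realised / target_vol

# Hypothetical daily returns from a portfolio targeting 25% annualised vol
rng = np.random.default_rng(0)
rets = pd.Series(rng.normal(0.0, 0.25 / np.sqrt(DAYS_IN_YEAR), 1000))
ratio = risk_ratio(rets)
breaches = ratio > 1.5   # days outside the 'normal' 0.5x-1.5x band
```

With real data you would feed in the portfolio's actual daily P&L series and set the band to your own tolerance.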
62 months isn't really enough to give you an indication of whether, say, 3 times risk is typical for a given manager.
-- Rob Carver, 2024-03-17

Rob, I have read this post and the chapters/sections on volatility targeting in AFTS and ST. I would like to apply these principles to a non-correlated portfolio of 6 cross-margined SMAs managed by CTA hedge fund managers. I have 62 months of common historical monthly data for each manager's after-fee performance. From the covariance matrix, I derive a 7.7% annual expected CAGR on volatility of 6.4% on notional. Looking at individual drawdown and margin/equity histories and adding some cushion, I am comfortable with a leverage ratio in the cross-margined SMA of 2.38%. I intend to apply your principles to this portfolio and set a volatility limit of 25% on trading capital annually. My question is: how much daily or monthly divergence from the annual 25% vol limit would you accept before acting to limit the trading levels of the individual managers?
-- Chris Huber, 2024-03-16

Great post as usual Rob, thanks so much. I've been playing around with this as well. I generated the unweighted performance of each trading rule; for the fast trading rules I filtered out the markets that were too expensive (so it only reflects performance on markets that were viable to trade), then ran those weekly returns through your full handcrafting code included in pysystemtrade, and below is what popped out.
Interesting results, and some interesting divergences from the manually handcrafted weights. I believe your full handcrafting script already includes Sharpe (along with correlations) as a weighting criterion, similar to the Sharpe-only test here, no?

accel16: 0.02581
accel32: 0.00000
accel64: 0.00000
assettrend16: 0.07681
assettrend2: 0.00684
assettrend32: 0.00822
assettrend4: 0.00129
assettrend64: 0.00745
assettrend8: 0.01186
breakout10: 0.00475
breakout160: 0.00029
breakout20: 0.00667
breakout320: 0.05110
breakout40: 0.00606
breakout80: 0.01203
carry10: 0.00082
carry125: 0.10106
carry30: 0.00274
carry60: 0.02006
momentum16: 0.00184
momentum32: 0.02651
momentum4: 0.00121
momentum64: 0.00255
momentum8: 0.00398
mrinasset1000: 0.15134
normmom16: 0.01894
normmom2: 0.00121
normmom32: 0.03793
normmom4: 0.00138
normmom64: 0.01305
normmom8: 0.00953
relcarry: 0.00000
relmomentum10: 0.09064
relmomentum20: 0.10128
relmomentum40: 0.00223
relmomentum80: 0.00759
skewabs180: 0.01640
skewabs365: 0.07140
skewrv180: 0.03902
skewrv365: 0.05809

Tree (3 sub-portfolios at the root):

[0]
  [0][0]
    [0][0][0]: relmomentum40, relmomentum80
    [0][0][1]
      [0][0][1][0]: breakout80, momentum16, normmom16
      [0][0][1][1]: assettrend16
      [0][0][1][2]: accel64
    [0][0][2]
      [0][0][2][0]
        [0][0][2][0][0]: breakout160, momentum32, normmom32
        [0][0][2][0][1]: assettrend32
      [0][0][2][1]: momentum64, normmom64
      [0][0][2][2]: assettrend64, breakout320
  [0][1]
    [0][1][0]: assettrend8, momentum8, normmom8
    [0][1][1]: breakout40
    [0][1][2]: accel32
  [0][2]
    [0][2][0]: assettrend2, breakout10, normmom2
    [0][2][1]
      [0][2][1][0]: assettrend4, momentum4, normmom4
      [0][2][1][1]: breakout20
    [0][2][2]: accel16
[1]
  [1][0]
    [1][0][0]: carry10, carry30, carry60
    [1][0][1]: carry125
    [1][0][2]: relcarry
  [1][1]: relmomentum10, relmomentum20
  [1][2]
    [1][2][0]: skewabs180, skewabs365
    [1][2][1]: skewrv180, skewrv365
[2]: mrinasset1000
-- The Black Seam, 2024-03-14

I tried to understand the code and what's happening in pysystemtrade, but had a bit of trouble. Is this right?
1) Taking an expanding window of returns over the rolling standard deviation to calculate Sharpe (*not* a rolling window of returns).
2) Applying the ew_lookback to the above (i.e. 15 years).
3) Adjusting weights depending on confidence.
-- A, 2024-03-14

Wow, thank you very much. Quite a bit of work went into this paper/post!
-- A, 2024-03-13

Yeah, it's 'handcrafting' but not done by hand. You're welcome to look at the code - it's horrible though: https://gist.github.com/robcarver17/58b3668407fdbd05954c34373c63d9ed
-- Rob Carver, 2024-03-13

With regard to actual implementation: you are using your hand-crafted weights and then applying a formula to exponentially weight a Sharpe ratio adjustment over time?
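If I've read the three steps described above correctly, the calculation might look something like this sketch - a guess at the shape of it, not the actual pysystemtrade code, and the 35-day vol window is an assumed parameter:

```python
import numpy as np
import pandas as pd

DAYS_IN_YEAR = 256

def smoothed_sharpe(returns: pd.Series, vol_days: int = 35,
                    ew_lookback: int = 15 * DAYS_IN_YEAR) -> pd.Series:
    # (1) expanding mean of returns (all history to date, *not* rolling)...
    mean = returns.expanding().mean()
    # ...over a rolling standard deviation of returns
    std = returns.rolling(vol_days).std()
    sharpe = (mean / std) * np.sqrt(DAYS_IN_YEAR)
    # (2) apply the ~15-year exponential lookback to the Sharpe series
    return sharpe.ewm(span=ew_lookback).mean()

# Hypothetical daily returns
rng = np.random.default_rng(1)
rets = pd.Series(rng.normal(0.001, 0.0001, 2000))
sm = smoothed_sharpe(rets)
```

Step (3) in the comment - scaling the weight adjustment by confidence - would then consume this series downstream.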
Is this somehow applied to, and does it adjust, the dataframe that forecast_weights_for_instruments() pulls (starting from handcrafted weights set in a config file)? Would you consider releasing the formula and code used? Thank you!
-- A, 2024-03-13

Very interesting post, sir. Quick question: is there any statistical 'incompleteness' or hazard from using p-values as opposed to, say, confidence intervals for comparing SR to alpha in your returns estimates?
-- Chad B, 2024-03-10

Thank you for your help. I went back to my program with statsmodels and your advice. It showed significance vs 60:40, but when comparing against the best individual strategies there was no significance and very low power, with both the expanding and rolling windows (even though I was aggregating 12 out of 96). I think I basically over-engineered sorting an Excel sheet by Sharpe and pretending I had picked a few of the best back in 1970.
-- A, 2024-03-08

"...is it simply the number of price points that account for it being okay for high frequency traders to alter signals in only weeks, but not for us to use 48 months?" To be precise, it's the amount of information - broadly speaking, the number of decisions you are making.
So if I were to test my strategy on tick data, that wouldn't give me more information, since my holding period is still ~1 month. Also, statistical significance grows with the square root of time.
-- Rob Carver, 2024-03-08

Another mistake: 252 days * 4 years is only 1,008.
-- A, 2024-03-07

Thank you. I've seen the same return degradation you found in futures in equity selection and TAA (ETF) asset-class rotation, and was hoping to also find a solution.

I'm being dense here, though - is it simply the number of price points that accounts for it being okay for high-frequency traders to alter signals in only weeks, but not for us to use 48 months? So for 3 weeks a high-frequency trader would have 7,200 price points with minute data, whereas 48 months only accounts for 1,460 price points using daily data?
-- A, 2024-03-07

48/96 months is *way* too short to get statistical significance.
-- Rob Carver, 2024-03-07

To clarify: you're using Sharpe on an expanding window to adjust weights?
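The "significance grows with the square root of time" point above can be made concrete with the usual first-order approximation, SE(SR) ≈ 1/sqrt(T years), so the t-statistic after T years is roughly SR × sqrt(T):

```python
import math

def t_stat(sharpe: float, years: float) -> float:
    # First-order approximation: an annualised SR estimate has standard
    # error ~ 1/sqrt(years), so the t-statistic is ~ SR * sqrt(years)
    return sharpe * math.sqrt(years)

def years_for_significance(sharpe: float, z: float = 1.96) -> float:
    # Years of data needed before SR * sqrt(T) clears a two-sided
    # 5% critical value: solve SR * sqrt(T) = z, so T = (z / SR)^2
    return (z / sharpe) ** 2

print(t_stat(0.5, 4))               # 48 months of SR 0.5: t = 1.0
print(years_for_significance(0.5))  # ~15.4 years needed
```

Which is why 48 or 96 months is far too short for a standalone Sharpe ratio in the region of 0.5.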
It is interesting that there was no benefit to adjusting instrument weightings this way!

What if you used a rolling 48- or 96-month window for rules? I've done some testing on systems that showed a benefit to selecting the best subsystems based on this criterion - though not for futures. Unfortunately they still had severe look-back bias, as the systems were created after the test data...
-- A, 2024-03-07

Huh, that's a great way to approach the question! Definitely a different result than I expected.

My naive expectations would be 1) minimal correlation (which you found with 60/40 long-only and long-term trend), 2) similar means. So I'd maximise Sharpe by doing a 50/50 risk allocation to minimise overall volatility (with equal volatility targets, I guess that's 50/50 risk or cash).

I guess I can interpret the results here as a sign that long-term trend simply had a notably superior Sharpe to long-only 60/40 in your data sample?
-- Grant Lincoln, 2024-02-29

Hey, is this book helpful for traders in other markets? I am from India and trade on the NSE.
-- Madhu Bansal, 2024-02-21

"Application to Sparse Data: My current project involves data with a weekly frequency, but I only have around 600 weeks' worth of data." (Pedantically, this isn't sparse data but data with limited history - not quite the same thing.)

"Given the relatively limited dataset, how would you suggest adjusting the frequency of recalculating forecast weights?" I wouldn't.

"Are there any recommendations for setting the window size for the expanding window and the number of bootstrap runs in this context?" No again. I'd use something like max(1 year, available data) for the window size, and as many bootstraps as possible without killing your CPU (though with smaller window sizes you will find you don't need as many bootstraps, as there are fewer unique combinations of samples).
-- Rob Carver, 2024-02-20

"Optimization Frequency: From the analysis presented, it seems that the optimizer's output weights are kept static on a yearly basis. Could you share more about the rationale behind this decision?" It's very unlikely that we would get any interesting new information with less than an additional year of data, and refitting more often slows things down a lot.
In reality I fit these weights less frequently than annually - almost never.
-- Rob Carver, 2024-02-20

"Would it make sense to increase the length of each Monte Carlo simulation (monte_length) proportionally to the size of the expanding window?" Yes; in fact I would use the rule monte_length = max(length of window, 1 year).
-- Rob Carver, 2024-02-20

Hi Rob,
I hope you're doing well.
I've been exploring the portfolio optimization techniques outlined in your work, specifically the bootstrapping-with-replacement approach combined with an expanding window. This has led me to a couple of questions about the methodology and its application.

Monte Carlo length adjustment: In the context of using an expanding window for bootstrapping, would it make sense to increase the length of each Monte Carlo simulation (monte_length) proportionally to the size of the expanding window? This adjustment could potentially capture more of the evolving data characteristics over time. I'm curious about your perspective on this approach.

Optimization frequency: From the analysis presented, it seems that the optimizer's output weights are kept static on a yearly basis. Could you share more about the rationale behind this decision? Was the frequency of running the optimizer determined by empirical analysis, theoretical considerations, or a discretionary choice?

Application to sparse data: My current project involves data with a weekly frequency, but I only have around 600 weeks' worth of data. Given the relatively limited dataset: how would you suggest adjusting the frequency of recalculating forecast weights? Are there any recommendations for setting the window size for the expanding window and the number of bootstrap runs in this context?

Thank you for your time and for sharing your expertise!
-- Mathias, 2024-02-20

"Which do you recommend?" - what do you mean by 'which'?
FX exposure for futures is only on the margin (see the chapter in my latest book), and I don't bother hedging it (see also the discussion in Smart Portfolios, chapter 2).
-- Rob Carver, 2024-02-08

As part of my risk parity portfolio, I am interested in rolling a long-only diversified global bond futures position. Which do you recommend? Would you hedge the FX? Would you set up an algo for this, or do it manually?
-- Chris Huber, 2024-02-08

Let's say you are trading momentum using a moving average crossover. You could say that when the crossover is neutral, your position is neutral. This means you will have a long bias in the backtest if the asset tended to go up; personally, I am fine with this. Or you could demean the forecast using a long-run mean for that asset, which will remove the long bias if that bothers you. "I've attempted various methods to identify the 'neutral point', including backtesting entry points and optimizing for specific metrics" - yeah, this is all overfitted bollocks.
-- Rob Carver, 2024-01-25

Dear Mr. Carver,
I came across this discussion and found myself facing a similar challenge, and wondered if I could seek your guidance on this matter.

Let's consider a scenario where I have a price series, such as the Nasdaq, exhibiting a clear trend. In this situation, the mean/median of any variable I analyse over that time frame tends to show a bullish signal.

My question is: what approach would you recommend to identify this "neutral point", which I can then use as a reference for the Q point? I suppose the underlying question is: is it necessary to have a model where 50% of the time it's long, as you mentioned in your response to Mathias?

I've attempted various methods to identify the "neutral point", including backtesting entry points and optimizing for specific metrics, but I'm concerned about the robustness of these approaches.

Your insights would be greatly appreciated. Thank you for your time and expertise.

Regards,
-- Felipe, 2024-01-25
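The demeaning Rob describes in his reply above - subtracting an asset's long-run mean forecast so a persistently trending asset no longer carries a built-in long bias - can be sketched as follows. The EWMA crossover forecast here is just an illustrative stand-in, and the spans are assumptions:

```python
import numpy as np
import pandas as pd

def ewmac_forecast(price: pd.Series, fast: int = 16, slow: int = 64) -> pd.Series:
    # Moving average crossover, scaled by the vol of daily price changes
    raw = price.ewm(span=fast).mean() - price.ewm(span=slow).mean()
    vol = price.diff().ewm(span=35).std()
    return raw / vol

def demeaned_forecast(price: pd.Series) -> pd.Series:
    forecast = ewmac_forecast(price)
    # Subtract the long-run (expanding) mean of the forecast, so an asset
    # that mostly trended up no longer has a permanent long bias
    return forecast - forecast.expanding().mean()

# A hypothetical persistently up-trending price series
rng = np.random.default_rng(2)
price = pd.Series(np.cumsum(rng.normal(0.1, 1.0, 3000)) + 100.0)
raw_fc = ewmac_forecast(price)     # long-biased on average
adj_fc = demeaned_forecast(price)  # long-run bias stripped out
```

Using the expanding mean keeps the demeaning free of forward-looking information, at the cost of a noisy estimate early in the sample.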