tag:blogger.com,1999:blog-261139923818144971.comments2024-08-10T14:29:32.014+01:00This Blog is SystematicRob Carverhttp://www.blogger.com/profile/10175885372013572770noreply@blogger.comBlogger2750125tag:blogger.com,1999:blog-261139923818144971.post-47093655607095592072024-07-31T20:31:52.138+01:002024-07-31T20:31:52.138+01:00OK, that's helpful, and I can see now my idea for ke...OK, that's helpful, and I can see now that my idea for keeping separate BAP series is unnecessary.<br /><br />Separate question: in the Appendix of AFTS you recommend, for position sizing, the BAP Change/Currently Held % method for standard deviation, and, for risk-adjusting forecasts, the daily price changes estimate of standard deviation (not annualized). In this blog post, however, you recommend the price change method for position sizing (I think)? I was confused by this follow-up: "Hence the position is actually proportional to the standard deviation in price difference terms. We can either estimate this directly, or as the equation suggests recover it from the standard deviation in percentage terms, which we then multiply by the current futures price." So σ(d) = σ% (from BAP change/Currently Held) * Price? I think this means we are indifferent as to which of the two methods we use to position size, as long as we convert the % method to price terms in your position sizing formula? <br />Chris Huberhttps://www.blogger.com/profile/17087493850315455650noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-74996584694749957622024-07-22T07:44:48.189+01:002024-07-22T07:44:48.189+01:00Nope. You can forward adjust if you're worried...Nope. You can forward adjust if you're worried about forward-looking bias. 
But a well-designed trading system that doesn't use absolute price levels will be indifferent between these two methods.Rob Carverhttps://www.blogger.com/profile/10175885372013572770noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-4557877202068572132024-07-22T00:00:41.119+01:002024-07-22T00:00:41.119+01:00I've been thinking and working on this since I...I've been thinking and working on this since I wrote it a few days ago. With back-adjusted prices, every historical price is changed at each roll. I think that requires that each roll have its own isolated back-adjusted price series history. This is the only setup I can think of to avoid the lookahead bias created by a continuous back-adjusted price series that extends beyond the backtest trade dates. How else could we create a proper EWMAC calculation on continuous pricing? How else could we measure volatility at any point in the series with back-adjusted data, using the methods in the blog entry and book, which rely on back-adjusted prices? I wrote a script that back-adjusts prices at each roll interval separately, which creates a unique back-adjusted series per roll, going back to the beginning of data availability for a given futures contract. If I am thinking about this incorrectly, please let me know.Chris Huberhttps://www.blogger.com/profile/17087493850315455650noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-37432050045677328422024-07-19T17:43:23.931+01:002024-07-19T17:43:23.931+01:00I'm guessing this approach isn't applicabl...I'm guessing this approach isn't applicable for long time-frame backtesting or for measuring vol over long spans? It is not clear to me how we can use back-adjusted prices generated years in advance of the first testing period to compute standard deviation with differences in back-adjusted prices in the numerator and (then-held contract) current prices in the denominator. Let's take Soybeans as an example. 
Back-adjusting prices from 8/31/2023 all the way back to 9/20/85 generates a starting back-adjusted price of -$77.5 on 9/20/85. The current price at that time was $536.50. Given the disparity, we cannot hope to get an accurate measure of volatility for the early years if we use the difference in back-adjusted prices in the numerator and the current price in the denominator. Nor can we use this method to get long-term estimates of volatility, because the further back one goes, the greater the potential for price disparity. How does your model deal with this in long-term backtesting?Chris Huberhttps://www.blogger.com/profile/17087493850315455650noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-54635675715142643602024-07-17T19:22:47.655+01:002024-07-17T19:22:47.655+01:00Thanks Rob. I looked at that by going back to June...Thanks Rob. I looked at that by going back to June, and the discrepancy is more exaggerated.<br /><br />I suspect that your data probably carried on to the September 8th roll to December, and so the August 31st data is already back-adjusted. I'm asking for some help on elitetrader. Chris Huberhttps://www.blogger.com/profile/17087493850315455650noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-87137565516054899132024-07-17T16:47:24.397+01:002024-07-17T16:47:24.397+01:00Do the stitching at a roll pointDo the stitching at a roll pointRob Carverhttps://www.blogger.com/profile/10175885372013572770noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-89139515267859041312024-07-17T15:52:59.146+01:002024-07-17T15:52:59.146+01:00I'm not sure how to deal with pricing data dis...I'm not sure how to deal with pricing data discrepancies to update back-adjusted pricing. As an example, the last price entry in the pysystemtrade csv for micro e-minis is 8/31/2023 20:00, price = 4524.0, so we are presumably in the September contract when your data ends. Barchart reports the September 2023 intraday 20:00 8/31/2023 price = 4521.25. 
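To make the stitching discussion above concrete ("do the stitching at a roll point"), here is a minimal sketch of Panama-style back-adjustment. This is illustrative only, not code from pysystemtrade; the function name `panama_stitch` and the toy data are invented. It also shows why the early adjusted levels can drift far from the prices actually traded (even negative, as in the Soybeans example), and why percentage volatility is best measured with adjusted price differences in the numerator and the unadjusted price of the contract held at the time in the denominator.

```python
import pandas as pd

def panama_stitch(contracts, roll_dates):
    """Panama back-adjustment: at each roll, shift the entire earlier
    history by the gap between the new and old contract, so the stitched
    series has no artificial jumps at roll points.

    contracts:  list of pd.Series, one per contract, in chronological order
    roll_dates: roll date for each adjacent pair of contracts
    """
    pieces = []
    cumulative_gap = 0.0
    # Work backwards: the final (currently held) contract stays unadjusted
    for i in range(len(contracts) - 1, -1, -1):
        held = contracts[i]
        if i < len(contracts) - 1:
            roll = roll_dates[i]
            # Price gap between the contract rolled into and the one left
            cumulative_gap += contracts[i + 1].loc[roll] - contracts[i].loc[roll]
            held = held.loc[:roll].iloc[:-1]  # drop the overlap at the roll
        pieces.append(held + cumulative_gap)
    return pd.concat(reversed(pieces))

# Toy example: two contracts overlapping on the roll date
near = pd.Series([100.0, 101.0, 102.0],
                 index=pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03"]))
far = pd.Series([104.0, 105.0, 106.0],
                index=pd.to_datetime(["2024-01-03", "2024-01-04", "2024-01-05"]))
stitched = panama_stitch([near, far], [pd.Timestamp("2024-01-03")])
# The earlier history is shifted up by the roll gap of 2.0

# Percentage returns for vol estimation: *adjusted* price differences
# divided by the *unadjusted* price of the contract actually held
held_price = pd.concat([near.iloc[:-1], far])
pct_returns = stitched.diff() / held_price.shift(1)
```

Because only differences of the stitched series are used, a trading system that avoids absolute price levels is unaffected by how far the adjusted levels drift over the decades.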
In fact, all of the data from the September contract is inconsistent with your csv. Why would this be, as we have not rolled yet? Should I ignore these discrepancies when finding the seam to continue the stitching, or DIY the entire history?Chris Huberhttps://www.blogger.com/profile/17087493850315455650noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-37196350972356983942024-07-16T19:52:41.772+01:002024-07-16T19:52:41.772+01:00I'm building my data library (quite the projec...I'm building my data library (quite the project, and probably pushing back my hope to implement a manual trading program anytime soon) but I have a question about using the pysystemtrade .csv repository of pricing data. Much of the pricing data ends in August of 2023. In the Micro e-minis csv, for example, it has intraday data going back to about the end of 2013. <br /><br />My plan is to use the Panama method to back-adjust the pricing with up-to-date Barchart data. I don't see any flaws in approaching it this way, as we simply adjust backwards from the end of your time series library using newer data. If there are any issues, please let me know (I'm aware of the negative pricing issue and your approach to Price vs Percentage SD that you have posted about). I plan on keeping the intraday library in the spirit of what you had started. Do you recommend gathering as much intraday data as possible, or doing what you have done and collecting 14:30 - 23:00, which is presumably the end of the trading day? I'm not clear why you did it in odd intervals and EOD.Chris Huberhttps://www.blogger.com/profile/17087493850315455650noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-92164683300239655532024-07-13T19:44:23.225+01:002024-07-13T19:44:23.225+01:00I want to eventually fire most of my CTA managers ...I want to eventually fire most of my CTA managers that would be redundant to the pysystemtrade strategies and go full DIY when I am ready. 
Among several considerations, one that gives me some minor pause is the dispersion in results. There is some comfort in an ensemble approach to reduce TE relative to an index like SG Trend or the broader CTA index, but in the end the goal is to build uncorrelated strategies with positive expected returns relative to long-only market risk factors. The goal is not to track an ensemble index in and of itself. Pysystemtrade clearly does this, and historically more effectively than the benchmarks, which are burdened by fees. Your work, and your willingness to share your years of experience and mastery, from the high-level philosophy and strategy all the way down to the granular details of execution and refinement, are unique and truly impressive. Your books, the blog, and the GitHub repository for pysystemtrade are indispensable resources for those of us on the path. I hope that I can work methodically and diligently to take control of my portfolio by mastering the tools you have provided. My plan is to first fully understand the trading rules from AFTS and the tactics from Part Six; integrate the signal generation from the trading rules with the risk sizing and other tactics to develop a mental model of execution; and backtest it myself to develop facility with the tools, data, and process. I would then want to implement a manual approach and eventually automate. It's a daunting process, but certainly possible for anyone with a working understanding of modern portfolio management and its pitfalls, the futures markets, spreadsheets, a willingness to develop the coding skills (with the help of AI), and the time and energy to see it through. <br /><br />I don't think I have ever come across, in all my years of finance, a teaching of this scale, scope, depth, and rigor, all with accessible practicality, utility, and elegance in the consistency of its application. 
Your parents must be very proud.Chris Huberhttps://www.blogger.com/profile/17087493850315455650noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-53495451205073325952024-07-11T15:41:40.042+01:002024-07-11T15:41:40.042+01:00This sounds a lot like my relative trend system in...This sounds a lot like my relative trend system in AFTS, where I normalise the returns of the index and the asset on a daily basis, depending on my current vol estimate for each, and then look at the cumulation of those returns.Rob Carverhttps://www.blogger.com/profile/10175885372013572770noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-32375296025665934672024-07-11T12:28:22.016+01:002024-07-11T12:28:22.016+01:00Hi Robert,
I have read both Systematic Trading and...Hi Robert,<br /><br />I have read both Systematic Trading and Advanced Futures Trading Strategies and really appreciate both of them. Thank you for sharing your thoughts and experiences.<br /><br />I mostly trade stocks and want to implement some ideas from your books into my research and trading. Currently, I am focusing on the ratio/quota between a stock and a broad index, and how to adjust the ratio for risk/volatility in the share to find stocks that generate "alpha". This is where I am having some trouble; I can't figure out how to think about and calculate this properly, and hope you have some good insights.<br /><br />Let's assume that the standard deviation of the stock's daily returns over the past year is 1.5%, and the index has a standard deviation of 1.0%. This means the stock typically (over the past year) moves 1.5 times the index. If one day the index increases by 2.0% and the stock by 3.0%, the risk/volatility-adjusted change would be 1 ((3.0%/1.5)/2.0%). In this case, the stock's outperformance is driven by higher beta/volatility/risk. I aim to buy stocks that show a risk-adjusted price change above 1 and short the stocks below 1.<br /><br />This calculation is straightforward as long as both the stock and the index move in the same direction. However, I am unsure how to handle situations when they do not. For example, if the stock is up 1.2% and the index is down 0.8%, or vice versa, how should one think about and adjust for risk/volatility on those days? Should such performance be seen as positive/negative alpha? Have you looked into this or have any insights? I would greatly appreciate your thoughts on this.<br /><br />Best regards,<br />HugoHugohttps://www.blogger.com/profile/16326108262598236847noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-72261045989448110102024-07-01T16:04:21.990+01:002024-07-01T16:04:21.990+01:00Not obvious to a dolt!
I thought that because it was accessing the dynamic system functions to run, perhaps there was already a position optimisation.Ahttps://www.blogger.com/profile/07815695560953002699noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-58969628754569777332024-06-13T07:18:14.529+01:002024-06-13T07:18:14.529+01:00optimised_portfolio() is after optimisation, portf...optimised_portfolio() is after optimisation, portfolio() is before... I would have thought that was obviousRob Carverhttps://www.blogger.com/profile/10175885372013572770noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-66402639829897966372024-06-12T22:36:02.026+01:002024-06-12T22:36:02.026+01:00When backtesting rob_system for different notional...When backtesting rob_system for different notional account values, do the simulated stats from system.accounts.optimised_portfolio().percent.stats(), or simply system.accounts.portfolio().percent.stats(), already account for the optimisedPositions? Ahttps://www.blogger.com/profile/07815695560953002699noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-90165664668970951522024-05-31T10:04:26.822+01:002024-05-31T10:04:26.822+01:00No clearly if they are basically the same instrume...No, clearly if they are basically the same instrument there is no point. I would go for the lowest cost that meets liquidity thresholds; as per my report https://github.com/robcarver17/reports/blob/master/Duplicate_markets_reportRob Carverhttps://www.blogger.com/profile/10175885372013572770noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-30593108985827894932024-05-30T23:50:37.557+01:002024-05-30T23:50:37.557+01:00Ahh I see. But do you think it would be worthwhile...Ahh, I see. But do you think it would be worthwhile for me to have, for example, both the China A50 contracts from SGX and HKFE in the DO? 
Or should I just stick to the one that has the lowest trading cost?Onghttps://www.blogger.com/profile/14212230848439065151noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-37316222278356372352024-05-30T16:55:48.968+01:002024-05-30T16:55:48.968+01:00" For example if I have Arabica Coffee contra..." For example if I have Arabica Coffee contracts from ICE/US, is there a benefit to adding Robusta Coffee contracts from ICE/London?" Yes, I do this. Unlike other optimisations, DO is pretty robust to very high correlations since it isn't inverting the matrix.Rob Carverhttps://www.blogger.com/profile/10175885372013572770noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-74001751270602444492024-05-30T16:52:27.046+01:002024-05-30T16:52:27.046+01:00Hi Rob, what do you think of adding instruments th...Hi Rob, what do you think of adding instruments that are very similar and highly correlated, but not necessarily identical or interchangeable, into a dynamic optimisation? For example if I have Arabica Coffee contracts from ICE/US, is there a benefit to adding Robusta Coffee contracts from ICE/London? Also, do you think there is a benefit to adding futures that have daily data from multiple exchanges? For example, if I have Gold futures daily data from COMEX, would there be a benefit from adding Gold futures from JPX or HKFE into the dynamic optimisation? Thanks in advance, Rob! Love reading all your write-ups on your blog!Onghttps://www.blogger.com/profile/14212230848439065151noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-49366333449293340912024-05-29T17:00:32.789+01:002024-05-29T17:00:32.789+01:00Hello Robert,
I'm reading your book "Adv...Hello Robert,<br /><br />I'm reading your book "Advanced Futures Trading Strategies" (it's very good, thanks), but I couldn't see how the 31.6% in Table 134 is computed as the average(?) performance for the Jumbo assets.<br /><br />So I downloaded the code from https://gitfront.io/r/user-4000052/iTvUZwEUN2Ta/AFTS-CODE.git (unlike the other commentators, I easily found the correct password!). However, I see that chapter27.py assumes a version of the function calculate_position_series_given_variable_risk_for_dict which has an extra argument (current_prices) compared to the arguments in the function definition in chapter4.py.<br /><br />Therefore Python produces an error:<br /><br />TypeError: calculate_position_series_given_variable_risk_for_dict() got an unexpected keyword argument 'current_prices'<br /><br />I wonder if you added an extra argument to this function, but forgot to update the definition in the repository.<br /><br />Also, chapter27.py doesn't print any output, nor are the variables it sets imported anywhere else (as far as I can tell), so I wondered if some parts of the code are missing?<br /><br />Thanks again for the great book!<br />Mark Ryanhttps://www.blogger.com/profile/01255263160501334258noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-79405163375595302732024-05-17T14:03:40.060+01:002024-05-17T14:03:40.060+01:00Yes, similar to a z score. Everything ends up in s...Yes, similar to a z score. Everything ends up in scale free spaceRob Carverhttps://www.blogger.com/profile/10175885372013572770noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-16952751187197029702024-05-17T13:55:11.029+01:002024-05-17T13:55:11.029+01:00Why does scaling things to an absolute value of 10...Why does scaling things to an absolute value of 10 make them summable? 
Does it work because it puts all signals on the same scale (like a z-score)?Matthttps://www.blogger.com/profile/16122419489436306940noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-33651468451496783672024-05-17T08:46:40.096+01:002024-05-17T08:46:40.096+01:00Actually it's 10. And there's nothing spec...Actually it's 10. And there's nothing special about 10; any arbitrary number would do. It means I can add up forecasts from different trading rules.Rob Carverhttps://www.blogger.com/profile/10175885372013572770noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-20212585191847971502024-05-16T21:07:36.286+01:002024-05-16T21:07:36.286+01:00Hey Rob - 
Could you explain why you scale everyth...Hey Rob - <br />Could you explain why you scale everything to have an average absolute value of 1?<br />Thanks!Matthttps://www.blogger.com/profile/16122419489436306940noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-30814739466384578732024-05-15T10:32:27.994+01:002024-05-15T10:32:27.994+01:00NoNoRob Carverhttps://www.blogger.com/profile/10175885372013572770noreply@blogger.comtag:blogger.com,1999:blog-261139923818144971.post-57395664646435068842024-05-15T09:59:16.400+01:002024-05-15T09:59:16.400+01:00Hi Rob,
Thank you for your, as always, very infor...Hi Rob,<br /><br />Thank you for your, as always, very informative post. <br /><br />I'm curious what you think about novel portfolio optimization models like Mean Variance Skewness Kurtosis Efficient optimization (MVSKE).<br /><br />This method captures more moments of the return distribution, but I'm uncertain about its practical value compared to its computational complexity.<br /><br />Do you think the additional insights from considering skewness and kurtosis are worth the extra effort?<br /><br />Best regards,<br />NikolayНиколай Цветковhttps://www.blogger.com/profile/14090523135676864120noreply@blogger.com
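On the MVSKE question above: purely as an illustration of what such an optimisation considers (this is a toy sketch, not a recommendation and not code from any of Rob's books; the function name and the moment-preference weights lam, gamma, and kappa are invented), a four-moment objective extends mean-variance utility with a reward for positive skew and a penalty for excess kurtosis:

```python
import numpy as np

def four_moment_utility(returns, w, lam=2.0, gamma=1.0, kappa=0.5):
    """Toy MVSK objective: mean - lam*variance + gamma*skew
    - kappa*excess kurtosis of the portfolio return series.
    lam/gamma/kappa are arbitrary stand-ins for investor preferences.

    returns: (T, N) array of asset returns; w: (N,) portfolio weights.
    """
    port = returns @ w                       # portfolio return series
    mu = port.mean()
    var = port.var()
    sd = np.sqrt(var)
    skew = ((port - mu) ** 3).mean() / sd ** 3
    excess_kurt = ((port - mu) ** 4).mean() / sd ** 4 - 3.0
    return mu - lam * var + gamma * skew - kappa * excess_kurt

# Deterministic sanity check: a symmetric two-point return series has
# zero mean, zero skew, and excess kurtosis of -2 (thinner tails than
# a normal), so the utility is 0 - 2*1e-4 + 0 - 0.5*(-2) = 0.9998
rets = np.array([[0.01], [-0.01]])
u = four_moment_utility(rets, np.array([1.0]))
```

The practical caveat, which may explain why such models are rarely worth the complexity: sample skewness and especially kurtosis are estimated with very large errors, so the extra moment terms in the objective are often dominated by estimation noise rather than genuine information.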