My first book: "Systematic Trading"






I am the author of "Systematic Trading", which was published by Harriman House in 2015.

(See here for information about "Smart Portfolios", my second book.)

For more information: http://www.systematicmoney.org/systematic-trading/
To buy, I'd prefer it if you went to the publisher's page: https://harriman-house.com/systematic-trading

There is also a Japanese edition, available here.

I'd prefer it if you didn't buy the book on Amazon. Get it from the publishers. Is this a moral stand against their tax-dodging, employee-exploiting business? It can be if you like. Though coincidentally I also get a larger royalty if you buy direct.


(Naturally I'd rather you bought the book on Amazon than not at all. Of course if you do buy the book from Amazon, then please review it. Be nice.)



Reviews

 

Perry Kaufman


"A remarkable look inside systematic trading never seen before, spanning the range from small to institutional traders. This isn't only for algorithmic traders, it's valuable for anyone needing a structure - which is all of us. Carver explains how to properly test, apply constant risk, size positions and portfolios, and my favorite, his "no rule" trading rule, all explained with scenarios. Reading this will benefit all traders." - Perry Kaufman, author of Trading Systems and Methods, 5th Edition (Wiley, 2013)



Brenda Jubin (Reading the markets)


"The days of Richard Dennis and his “turtles” with their alleged 100% per year profit are long gone, but their mystique lives on...

Robert Carver is more modest—and more realistic. At the same time he has more to offer the investor or trader who has a spark of creativity and intellectual curiosity. Systematic Trading: A Unique New Method for Designing Trading and Investing Systems (Harriman House, 2015) is a thoughtful, and thought-provoking, journey through the process of creating modular rule-based portfolios.

... (Carver) isn’t just some ordinary Joe with a computer and a bunch of back-testing software. He has clearly thought about what makes a good systematic trader and a good systematically-driven portfolio. We can be grateful that he decided to share his insights with us." - Reading the markets (longer review - read more here)

 

Steve Le Compte (CXO Advisor)


"In summary, investors will likely find Systematic Trading a rational and practical approach to building diversified, risk-managed investment/trading portfolios. The book offers quantified examples throughout." CXO Advisor (longer review - read more here)


Amazon reviews


amazon.com: 9 reviews, 4.9/5 (read the full reviews here)
amazon.co.uk: 5 reviews, 4.7/5 (read the full reviews here)




46 comments:

  1. Hello, thank you for your excellent blog and book. I was reading your book, and you mention backtesting on randomly generated data that contains different lengths of trends. How do you generate that data? Can you give an example of how you would use it?

    Replies
    1. Easy. Start with a sawtooth waveform with some amplitude. Set the period of the waveform to double the length of trend you want. Then difference the waveform to get returns. To each daily return add some Gaussian noise with mean zero and some volatility. The ratio of the noise volatility to the amplitude of the wave is inversely proportional to the signal:noise ratio. A low ratio means you have a bit of noise and lots of clear trends, and vice versa. Then you cumulate your returns to get a price series.

      Generate a bunch of these, varying the period length and signal:noise ratio. Then run your trend following rules over them (Monte Carlo, lots of runs). You can then discover what the optimal trend length (in this stylised world) is for a given trend following speed.

      You can also look at the relationship between signal:noise and profit (hint: more noise = less profit!) although that's less interesting.

      PS If you liked my book please put a positive review on Amazon, I'd really appreciate it.

    2. Hi Rob, I appreciate the reply. Sorry for the double post; I thought I had posted the original in the wrong section, as this refers to the book.

      I have been using a different method to generate random data: at first an Ornstein–Uhlenbeck process, which is more similar to your method. However, I was wondering about your thoughts on using the returns of the actual instrument(s) you are looking to trade, and then bootstrapping those with replacement. That way you are selecting returns from the sample distribution you are actually going to trade in the end, although it does break down any autocorrelation relationships. So sometimes I chunk them into 5 or 10 piece blocks, which may keep some of that autocorrelation. What do you think of this method?

      I like your more stylized method, as it gives you absolute control over every parameter of the data, from error to trend amplitude. I will definitely give it a go.

    3. You are right that you need to block bootstrap to keep the autocorrelation. The blocks need to be long enough relative to the length of trend you are looking to analyse (perhaps 3x longer). This means blocks of several weeks or months, or even a couple of years.

      It depends on what you are trying to do. The artificial data is good for getting a feel for what effect your indicator picks up on. You can use it to calibrate: to make sure that if you want to pick up 1 month trends, it is doing that. But it won't tell you whether that effect exists in real life.

      Using a block bootstrap for fitting is better than many other fitting methods (as long as you do it with an expanding window, of course) [after all, this is how I allocate portfolio weights]. So you'd fit the parameter you wanted in many different random draws, and take an average of the parameter values.
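      For concreteness, here is a minimal Python sketch of a block bootstrap over daily returns; the block length and number of draws are illustrative assumptions, not values from the book.

      import numpy as np

      def block_bootstrap(returns, block_len, n_draws, rng=None):
          """Draw bootstrapped return series from `returns` (a 1-D array),
          sampling contiguous blocks with replacement so the autocorrelation
          within each block is preserved."""
          if rng is None:
              rng = np.random.default_rng()
          n = len(returns)
          n_blocks = int(np.ceil(n / block_len))
          draws = []
          for _ in range(n_draws):
              starts = rng.integers(0, n - block_len, size=n_blocks)
              sample = np.concatenate([returns[s:s + block_len] for s in starts])
              draws.append(sample[:n])  # trim back to the original length
          return draws

      # Example: ~3 month blocks of daily data, roughly 3x a one-month trend
      daily_returns = np.random.default_rng(0).normal(0, 0.01, 2500)  # stand-in data
      samples = block_bootstrap(daily_returns, block_len=63, n_draws=100)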

      But again I'm not a huge fan of fitting to real data.

    4. Thanks again for the answer Rob. I am working out how to generate the data using the sawtooth method. I am using a modulo function (%) to generate the sawtooth wave, but the return series that comes out of it is very spiky, and then flat with plateaus. I am not sure this is the type of series I would use to then fit the data. I was thinking of using a triangular wave that has up and down movements, as opposed to just the single up move of a sawtooth wave. Sorry for all the pestering, but I am very curious about the details of your methodology.

      Is it possible for you to give an example or blog post showing how to generate the data and then fitting a sample rule to it? I appreciate your responses. Thank you.

    5. Sorry, I wrote sawtooth, but what I actually meant was triangular (https://upload.wikimedia.org/wikipedia/commons/b/bb/Synthesis_triangle.gif).

      Here's a quick example with the noise added; trend length 10 units

      https://docs.google.com/spreadsheets/d/1bkgMhH71-DQy9VxJKrCH5WquWNGZON8zBnNtHul1XOg/edit?usp=sharing

      By the way, you'd expect to see a return series (for the trend following rule, once passed over the randomly generated price series) which is flat with spikes; that's kind of what positive skew looks like.
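      In code rather than a spreadsheet, the construction might look like this minimal Python sketch (the amplitude and noise volatility are illustrative assumptions):

      import numpy as np

      def make_price_series(n_days, trend_len, amplitude=1.0, noise_vol=0.5, seed=None):
          """Triangle wave (period = 2 x trend_len) differenced to returns,
          plus zero-mean Gaussian noise, then cumulated back into a price."""
          rng = np.random.default_rng(seed)
          t = np.arange(n_days + 1)
          x = t / (2.0 * trend_len)  # position within the wave, in periods
          tri = amplitude * (2 * np.abs(2 * (x - np.floor(x + 0.5))) - 1)
          returns = np.diff(tri) + rng.normal(0, noise_vol, n_days)
          return np.cumsum(returns)

      # Trend length 10 units, as in the spreadsheet example above
      price = make_price_series(n_days=500, trend_len=10, noise_vol=0.25, seed=1)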

    6. I will be toying with this Excel sheet; thank you for the clear example. Would you use this data to fit any rule (carry & trend following)? You mention 5 styles of strategies. Do you change the random data you fit them on? How do the 5 different styles affect the result? I am not clear what the 5 styles are.

      On that same note, would this data be viable for mean reversion strategies?

      I imagine that if you are doing carry or any type of multi-legged trade, this type of data may not be viable? Although you could simulate 2 price series, have them be highly correlated, and then take the spread of that. Or I suppose just assume the spread is random and use it as is.

      As always, I appreciate your answers. Thank you.

    7. Derek

      I have 8 trading rules which I style bucket into 'breakout', 'momentum', 'carry', 'mean reversion' and 'long only' (the 'no rule' rule).

      To be clear we're not really 'fitting' when we use this kind of fake data. We're seeing what type of rule variation will do best, or worst, given a particular stylised trend length. We can also find out what correlations are likely to be, and have a good idea of trading costs.

      We don't make a judgement about what length trends will appear in the future, or whether our stylised trend data is realistic or not (eg is the signal:noise ratio about right). Thus we get no information about pre-cost expected returns, but then it's very difficult to have statistically significant information about these in a real back test.

      For any kind of rule trying to pick up trends this makes sense, so 'momentum' and 'breakout'. For other rules, perhaps less so.

      It wouldn't make any sense to use it for carry, because as you say you need a multi-legged price series. I am struggling to think of a simple way to use fake data to meaningfully calibrate a carry model. The same applies to 'long only' (not that there is any calibration to do there).

      For mean reversion (in an absolute time series sense, rather than between two instruments) it would make sense - by construction these fake price series show trends at one frequency and mean reversion at another, slower, frequency (which might not be what you want, but you can easily add together triangle waveforms of different frequency and amplitude to get something more realistic, like fast mean reversion and slower trends).

      For mean reversion between two (three, four, ...) instruments you could generate two (three, four, ...) correlated (and/or cointegrated) return streams plus noise.
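      A minimal sketch of that last idea, assuming a shared random walk plus an AR(1) (Ornstein-Uhlenbeck style) mean-reverting spread; all parameters are illustrative:

      import numpy as np

      def make_cointegrated_pair(n_days, spread_half_life=20, spread_vol=0.1,
                                 common_vol=1.0, seed=None):
          """Two price series sharing a random walk, with a mean-reverting
          spread between them."""
          rng = np.random.default_rng(seed)
          common = np.cumsum(rng.normal(0, common_vol, n_days))
          phi = 0.5 ** (1.0 / spread_half_life)  # AR(1) coefficient from half-life
          spread = np.zeros(n_days)
          for t in range(1, n_days):
              spread[t] = phi * spread[t - 1] + rng.normal(0, spread_vol)
          return common + 0.5 * spread, common - 0.5 * spread

      a, b = make_cointegrated_pair(2500, seed=2)  # a - b mean reverts around zero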

      Hope this makes sense
      Rob

    8. Hi Rob,

      I have been working with your data generator and have finally got one in R that can generate as much data, of as many different types, as I want, which is great.

      But I referred to your book and am still confused as to how I am supposed to select the parameter sets of a system based on this data.

      Let's say I am using the same EWMA system, and I have generated a set of 100 random time series using different trend periods and noise standard deviations.

      I then take my EWMA system and run through all possible parameter sets for the lengths: say, short EWMA from 1 to 100, with the long EWMA at some ratio such as 4 x the short EWMA.

      Then, after running the series of 100 backtests (for the 100 parameter sets of short to long EWMA) on this single block of time series, I look at which parameter sets generate the least correlated returns relative to each other. This would mean that they are taking different types of entries/exits with respect to each other, or at least are attempting to capture different trend lengths.

      So now I repeat the above steps, say, 100 more times, each with a different Monte Carlo generated time series set, and continue to run the same procedure as above.

      I then average the correlations across the 100 runs for each of the 100 parameter sets, and pick the 5 parameter sets with the most negative correlations.

      I hope that makes sense. I think I am close to what you describe in your book and what we have discussed earlier. But once I implemented it I was not as confident.

      Once you get to the parameter sweeping and the Monte Carlo of additional time series, how are you selecting?

      Are you making one type of series at a time, say a 30 day period with 0.5 noise standard deviation, and then running the parameter sweep on that? That would show which parameter set is best suited to picking up that specific type of trend/noise ratio.

      At this point I can generate so much data that I want to narrow my process down to a more concrete set of steps, then create some easily understood results, or a method to determine the optimal 5 parameter sets.

      Thank you for your time.

      -Derek
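      For what it's worth, here is one way the sweep described above could be organised, as a minimal Python sketch. The EWMA crossover construction and the correlation averaging are assumptions based on Derek's description, and make_price_series is the hypothetical generator sketched earlier in the thread.

      import pandas as pd

      def ewmac_returns(price, fast):
          """Daily P&L of an EWMA crossover (fast vs 4x fast), traded as a
          position proportional to yesterday's forecast."""
          p = pd.Series(price)
          forecast = p.ewm(span=fast).mean() - p.ewm(span=4 * fast).mean()
          return (forecast.shift(1) * p.diff()).dropna()

      fasts = [2, 4, 8, 16, 32, 64]   # candidate fast lookbacks
      n_runs = 100
      corr_sum = None
      for seed in range(n_runs):
          price = make_price_series(n_days=500, trend_len=10, noise_vol=0.25, seed=seed)
          rets = pd.DataFrame({f: ewmac_returns(price, f) for f in fasts})
          corr_sum = rets.corr() if corr_sum is None else corr_sum + rets.corr()
      avg_corr = corr_sum / n_runs    # then pick the least correlated variations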

    9. In Part 4 of your book (the staunch system trader in practice, Ch 15) and in Ch 8, I am still very confused about the part on selecting forecast weights using the bootstrap method.

      I have fitted the EWMA rules using bootstrapped random data, fitting them with the MarkoSolver / optimize_over_periods method.

      When I apply the bootstrap method using optimize_over_periods, am I supplying that function with the actual returns given by those EWMA rules backtested on real data?

      Could you please elaborate on the process of bootstrapping to get the forecast weights? Thank you. It's been stumping me for weeks now.

      -Derek

    10. "When I apply the bootstrap method using optimize_over_periods, am I supplying that function with the actual returns given by those EWMA rules backtested on real data?"

      Yes exactly. The elements of your portfolio are the different rules, the returns are the returns given by running those rules on different instruments.
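      This is not the actual MarkoSolver / optimize_over_periods code; just a minimal sketch of the idea: bootstrap days of rule returns, find max-Sharpe weights on each draw, and average the weights.

      import numpy as np

      def bootstrapped_weights(rule_returns, n_draws=100, sample_len=256, rng=None):
          """rule_returns: (n_days x n_rules) array of backtested daily returns,
          one column per trading rule variation."""
          if rng is None:
              rng = np.random.default_rng()
          n, k = rule_returns.shape
          weights = []
          for _ in range(n_draws):
              idx = rng.integers(0, n, size=sample_len)  # sample days with replacement
              sample = rule_returns[idx]
              mu = sample.mean(axis=0)
              sigma = np.cov(sample, rowvar=False)
              w = np.linalg.solve(sigma, mu)             # unconstrained max-Sharpe
              w = np.clip(w, 0, None)                    # no negative rule weights
              if w.sum() == 0:
                  w = np.ones(k)                         # fall back to equal weights
              weights.append(w / w.sum())
          return np.mean(weights, axis=0)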

    11. I see. So for the forecast weights you backtest and get the return series of each of the parameter sets for the EWMA rules, then apply that to the optimize_over_periods function to get your portfolio weights, which are really your forecast weights.

      I was actually doing this with the forecast values themselves and coming out with results similar to equal weights, or very near that. I am guessing that's because they are just basically random values from -20 to 20. But the weights were "viable", or at least looked OK, so I was a bit confused.

      Really appreciate the fast answer.

  2. Hi Rob,
    I just finished your book, really enjoyed it, thanks for the effort you've put in!
    Although, one question is really bothering me. The idea of volatility targeting runs all through the book, and it's a sensible idea (if I happened to understand it correctly :) ): balance everything based on its riskiness (instrument weights in a portfolio, in the case of using SRs; position-capital allocation when deciding position sizes...). And it kind of plays nicely with trend-following/asset-allocation systems, where you're betting on a continuation of the same-direction price movement, so more volatility in this one-directional movement means worse performance.
    But can this approach also be used with mean-reverting StatArb systems? You mention that you have a relative value component in your system, but frankly I do not understand how you would apply some of these principles to a mean-reverting spread. For example, one of the implications of volatility targeting is that when your instrument becomes more volatile you cut down its position's capital, and vice versa. But a more volatile mean-reverting instrument is actually a good thing (more volatility - higher profit), so it does not really make sense to reduce capital allocation in this case, or does it?

    Thank you in advance.

    Replies
    1. This comment has been removed by the author.

    2. Dmitry

      Yes you can use vol targeting with relative value systems, and I have done so.

      The 'instrument' will be something like a portfolio of, say, Apple and Google, e.g. 1*AAPL - B*GOOG.

      The 'price' will be the same, and the 'volatility' easy to calculate.

      Suppose for simplicity the mean, equilibrium, value of the price is zero.

      Now suppose the price becomes positive. We want to put a position on. What risk does this have? Don't forget volatility is a symmetric measure: it thinks there is an equal chance of the price returning to zero, or the price moving higher. The forecast is asymmetric, and says you have a higher chance of the price going to zero. Together these give you the correctly sized position.

      Imagine a spread with low volatility, but which moved (smoothly) to large deviations from zero. You should have a massive position on since your risk is low, but there is a long way for the price to move back and hence lots of profit to be made. Note that whilst the price has been deviating the system would be making losing trades (repeatedly catching the knife).

      A more 'stat arb' price series with small frequent deviations and high volatility would have smaller positions on. However if the thing is close to perfectly mean reverting it would make profits on almost every trade.

      Note that the former system will have a lower Sharpe ratio than the latter, but the latter will have smaller positions on. That doesn't mean the position scaling is wrong; just that the latter instrument is inherently more predictable and has more 'juice' in it.
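      As a minimal sketch of that position scaling for a spread 'instrument' (the vol lookback and scaling conventions here are illustrative assumptions, not the book's exact formulas):

      import numpy as np
      import pandas as pd

      def spread_position(price_a, price_b, hedge_ratio, forecast, daily_cash_vol_target):
          """Volatility-sized position in the spread 1*A - hedge_ratio*B.
          The vol estimate is symmetric; the asymmetric view lives in the
          forecast, scaled so +/-10 is an average-strength signal."""
          spread = pd.Series(price_a) - hedge_ratio * pd.Series(price_b)
          price_vol = spread.diff().ewm(span=36).std().iloc[-1]  # $ per day per unit
          avg_position = daily_cash_vol_target / price_vol       # units at forecast 10
          return avg_position * forecast / 10.0

      rng = np.random.default_rng(3)
      a = np.cumsum(rng.normal(0, 1, 500)) + 100   # stand-in prices
      b = np.cumsum(rng.normal(0, 1, 500)) + 100
      pos = spread_position(a, b, hedge_ratio=1.0, forecast=10, daily_cash_vol_target=1000)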

      Note this is all about position scaling (chapter 10). Capital allocation is an entirely separate issue (chapter 11 for instruments, chapter 8 for trading rule variations). It's important not to confound these two subjects.

      A portfolio optimisation that used backtest results (like a bootstrap) would probably put less money in the smoother, slower, mean reversion than in the fast system. Just beware that scaling up the positions in the latter will be heavily leveraging a negative skew trade.

      Rob

    3. Thanks a lot for the detailed answer!
      So at the portfolio optimisation step (with bootstrap) the slow spread will get a smaller weight (less capital to trade in general) because of its lower SR, and the fast/volatile one will get more capital because its Sharpe is higher. But at the time of an actual position entry (assuming each instrument has only one similar rule, for simplicity), the slow spread will get more capital than the fast one, because it's less volatile... Still, the second part does not sit fully well with me, because we're depriving the "juicier" instrument of capital. For example, assume the first spread deviated up from 0 to +$2 in 2 days, and the second spread also deviated from 0 to +$2 but did it in one day; the second one will become more volatile and get less capital to enter the short position than the first one. Are you saying that this is actually reasonable because the higher volatility of the second one "predicts" a higher possibility of the price continuing to go against us (further up), because it's a symmetric measure, while in the case of the first one (because its current volatility is lower) there's a lower chance that it will continue to go up, so it's less risky? (sorry if I am not making sense :) )

      Another not directly related question: when I was reading the parts about the staunch trader, I could not completely comprehend the following: after all the multipliers, weights and standardisations are applied, how fully/effectively will the system be using its total trading capital? I think we do expect that at different times some instruments (subsystems) will be encroaching on the capital "pre-assigned" to other instruments by the initial portfolio optimisation, correct? If that's true, then will the system ever be "starving" because some instruments hogged all available capital and left nothing for the others, or will the whole system of correlations, weights and checks always balance itself so that "everyone will get something"? Maybe another way to put this question is: what's the normal expected percentage of the capital that's "in" (considering your leverage is quite limited)?

    4. Dmitry,
      "Are you saying that it's actually reasonable because that higher volatility of the second one "predicts" higher possibility of the price continuing to go against us (further up) because it’s a symmetric measure, when in case of the first one (because it's current volatility is lower) there's a lower chance that it will continue to go up, so it's less risky? (sorry if I am not making sense :) )."

      Yes, that is exactly what I am saying. Even if you have an amazing trading rule, on a day to day basis you are exposed mostly to symmetric risk. So for example suppose you had a trading rule with a Sharpe ratio of 2.0 (which, as you know, I personally wouldn't believe). On a day to day basis there is only about a 56% chance you will make a positive return, and a 44% chance you will lose money. For a Sharpe ratio of 1.0 it's only a 52% chance of being positive. So it's appropriate to use a symmetric risk measure even if you think your forecast is amazing.
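      The arithmetic behind those probabilities, assuming Gaussian daily returns (the exact figures depend on rounding and the day count used):

      from scipy.stats import norm

      def daily_win_prob(annual_sharpe, days_per_year=256):
          """P(positive daily return) for Gaussian returns at a given Sharpe ratio."""
          return norm.cdf(annual_sharpe / days_per_year ** 0.5)

      print(daily_win_prob(2.0))  # ~0.55
      print(daily_win_prob(1.0))  # ~0.52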

      "how fully\effectively the system will be using it's total trading capital? I think we do expect that at different times some instruments(subsystems) will be encroaching on the capital "pre-assigned" to other instruments by the initial portfolio optimization, correct? If that's true, then will the system ever be "starving" because some instruments hogged all available capital and left nothing for the others, or the whole system of correlations, weights and checks will always balance itself so that "everyone will get something" ? Maybe another way to put this question is what's the normal expected percentage of the capital that's "IN" (considering your leverage is quite limited)."

      Well, we're using derivatives in that part, so it's more appropriate to think about whether we'll be starved of margin rather than capital. On average in my own futures system (which runs at a 25% annualised volatility target) I use about 20% of my capital as margin. So it isn't a problem, even if you use the maximum recommended target and are running at the maximum possible forecasts.

      It's perhaps better to think about this problem in a 'cash' portfolio like that of the asset allocating investor. In that section I show how to calculate the maximum volatility target given the volatility of the underlying instruments, assuming the portfolio is 90% invested (to allow some room to increase positions in instruments if their volatility falls).

      If we ignore the volatility of the instruments, then the key input into this is the instrument diversification multiplier. If that is very high then your realisable volatility will be lower. That is the check and balance effect at an instrument level.

      (note another reason not to use low vol instruments - they consume too much capital)

      The asset allocating investor example assumes a fixed forecast of 10. However, if you use dynamic trading rules with a 'cash' system you can't do that. The most conservative thing would be to do the same calculations for maximum possible volatility using the maximum forecast of 20. That would mean on average you'd be using only 45% of your capital (90% x 10/20). But there would never be a 'starvation' problem.

      In practice it's unlikely that all your instruments will hit a forecast of 20 at the same time. You could do the same calculation with a forecast of 18, or perhaps check in the backtest to see what the maximum total forecast was.

      If you then subsequently get an exceptionally high average forecast across your portfolio then you would be close to running out of capital. But that should be rare.

    5. Rob, thanks for your answers (it's so cool to talk to a real book author :) )
      So for example, if our situation is somewhat in the middle (between futures and static allocation):
      For a dynamic system, let's say we have 100,000 of cash capital, and the broker is allowing us to borrow another 100,000. Should we start our calculations of the annualised cash volatility target and other values using 90% of the total leveraged amount (2 x 100k x 0.9 = 180k), or is that not the best way to do it? In general, our goal here is several-fold: we do not want our system to starve, but we also want the capital to work as much as possible, i.e. to have, say, 70% (?) of the leveraged amount (200k x 0.7 = 140k) invested on average; and we do not want to get margin calls when too many trades go against us at the same time (that we should be guarding against in real time...). It's probably a slightly different/bigger problem, but maybe you could just point to a direction to go...

    6. "For a dynamic system, we have let's say 100 000 of cash capital, and the broker is allowing us to borrow another 100 000. Should we start our calculations of annualized cash volatility target and other values using 90% of the total leveraged amount (2 x 200k x 0.9=180k) or it's not the best way to do it? In general, our goal here is several-fold: we do not want our system to starve, but we also want the capital to work as much as possible, i.e. to have, say, 70% (?) of the leveraged amount(200k*0.7=140k) invested on average, as well as we do not want to get margin-calls when too many trades go against us at the same time (from that we should be guarding in real-time..). It's probably a slightly different\bigger problem, but maybe you could just point to a direction to go.. "

      Yes: if you do the 'Leverage factor calculation' in the sheet here https://docs.google.com/spreadsheets/d/105iRLlsarHx4PWJM5A6Wy5xumyziaq3dzPteVplu0ZM/edit?usp=sharing but change the desired leverage to 140% (and obviously update the rest of the sheet with what you are trading), then you'll achieve what you want.

      Then you will have a system with an average leverage as required.

      Is 70% appropriate? Well, as long as you're doing the right rescaling of capital with losses, then you could survive a 30% 'gap' (a fall in account value before you got a chance to rebalance), which, unless your annualised vol target is very high, is probably safe. You might want to backtest to get a feel for your margin of safety, if you can.

    7. Thanks Rob, I'll definitely try that. But the link appears to be broken at the moment, I cannot access this: https://docs.google.com/spreadsheets/d/105iRLlsarHx4PWJM5A6Wy5xumyziaq3dzPteVplu0ZM/edit?usp=sharing

    8. Ok, now it's working, thanks!

  3. Hi Rob. I've just been working through your volatility calculation spreadsheet from Chapter 10 of your book. All the calculations make sense apart from the 25 day moving average volatility column. Correct me if I'm wrong, but shouldn't the calculations start from cell reference H38 and then reference the previous 25 days rather than 24 (cells D14:D38)?
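    For anyone reproducing the spreadsheet outside Excel, a 25 day moving average volatility in pandas looks roughly like this (a sketch with stand-in prices, not the book's exact cell layout). Note that 25 returns require 26 price observations, which is the off-by-one the question is about.

    import numpy as np
    import pandas as pd

    price = pd.Series(np.cumsum(np.random.default_rng(0).normal(0, 1, 100)))  # stand-in
    returns = price.diff()
    vol_25d = returns.rolling(window=25).std()  # first valid value needs 25 returns,
                                                # i.e. 26 price observations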

  4. Hi Robert,

    Just wanted to write a couple of questions/comments on your excellent book, but it says I am not to exceed 4,096 characters in the comments section. How can I send them to you by mail / author's page...?

    Thanks and please keep up your good work

    Best
    Christof

    Replies
    1. The best way is to connect with me on LinkedIn, then send an email.

  5. Hi Robert,

    just wanted to post some comments/questions on your excellent book, but it says I need to restrict myself to 4,096 characters. How can I send them to you by mail, author's page, etc.?

    Thanks and best regards,
    Christof

  6. Your link to the Harriman page for your book is broken.

    Replies
    1. Fixed. Thanks very much for pointing that out.

  7. Hi Robert,
    I currently invest about half of my retirement plan in Gary A's Dual Momentum strategy, and half in Meb Faber's GTAA Aggressive 3 strategy. These strategies have very long backtested performance of 16-20% returns, with about 25% max drawdowns. Plus, I only need to check in about once a month to reallocate. Do you think your staunch system trader strategy would have significantly better performance? If so, I could be willing to devote the time it takes for daily updates and dealing with the increased complexity of your system. However, if I could only expect a small bump in performance from your strategy, I would probably stick with what I have. For what it's worth, I have a very high risk tolerance; in fact, I am trying to figure out an economical way to leverage my current strategies. I would be comfortable with 50% or greater drawdown for a boost of 2 or 3 percent in returns. Any advice is appreciated, and congrats on an exceptional book.
    Peter

    Replies
    1. Hi Peter. I don't know either of those strategies very well. There are a number of reasons why it's plausible that the staunch system trader system *could* be better than another strategy:

      - correct position management eg positions adjusted by volatility,
      - more diversified momentum indicator
      - another uncorrelated indicator - carry
      - use of futures (allows leverage, cheaper)
      - more diversified over different asset classes

      Any one of these things would probably improve your performance.

      My book, as you've hopefully realised, isn't about selling a specific system; it's there to teach you what does and does not make a good system. One of the key determinants of a good system is that it's something you can stick to. It sounds like you've found something you can stick to, so if that's halfway decent...

      I probably shouldn't say this, but if the two systems you describe aren't doing anything stupid they might be worth sticking with. I'm hoping that the book has helped give you the critical thinking techniques to work out if these systems are sensible.

      Also hopefully chapter 9 in particular will help you leverage them safely.

      Alternatively, maybe you could think about adapting elements of these existing strategies.

      Spoiler - I'm writing a book that is much more suitable for long only investments with minimal reallocations...

    2. Can't wait for the next book! Chapter nine is helpful for determining leverage, but my options for employing it are limited in a retirement account. Since I can't use margin, and because I don't feel comfortable with options, I guess the only route would be futures, but a lot of the instruments in my strategies don't have corresponding futures. I use mostly ETFs, and I think there are only futures for the standard indices like the S&P 500, not for the more obscure ones, like the emerging markets index.

  8. Hi Rob, you state in your book that one should avoid changing one's volatility target. Suppose I have a very high risk tolerance and a 25 year investing horizon, and I am using your chapter 15 system. Might it make sense to continuously update my annual volatility target to match my latest backtested Sharpe ratio? If I go through a losing streak, my Sharpe will decrease, reducing my volatility target as it does. I would think this is the closest you could get to Kelly optimal. Sure, I would overshoot and undershoot at times, but over such a long time horizon I feel it would even out in the end.

  9. I really would advise against this, in the strongest possible terms. Let's say you're starting with a 30 year backtest. Adding data through live trading wouldn't make much difference to the overall SR. Also, even with a 30 year backtest, your backtested SR is only an estimate, and an estimate with huge uncertainty. That's why I advise using only a small fraction of full Kelly, and limiting your annual risk to the point where your backtested SR probably wouldn't be affecting your risk target. Finally, you're never going to hit your target risk anyway, since volatility isn't perfectly predictable. So this is a lot of effort for little return. If your system returns are mean reverting you'll lose money doing this (see my blog post on trading the account curve). And you're going to incur additional trading costs. There are probably other reasons not to do this, but those should be enough to dissuade you.

    Replies
    1. Thanks for the response, Rob. What maximum fixed volatility would be reasonable for the above? Maybe 50%, to match the conservatively projected 0.5 Sharpe? Or should I really try to stick with a max of half Kelly, even though I have such a long horizon and high risk tolerance?

    2. Half Kelly is the absolute maximum you should use.

    3. Thanks, Rob. I ran your chapter 15 Python system on 8 instruments, and my backtest shows a Sharpe of 0.80. In your post on small account sizes and diversification, you state that a Sharpe of 0.61 can be expected for 8 instruments. Given my high risk tolerance, should I set annual volatility to 40% (half of 0.80) or 30%?

    4. Read chapter 9 again. You also need to apply a correction factor to reflect the fact that a backtested SR will be overstated.

    5. Makes sense: if I apply your recommended adjustment of 75% of the backtested Sharpe, I come to 0.60. So that confirms a maximum reasonable volatility of 30%.
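      In other words, with the 75% haircut and the half-Kelly rule discussed above:

      backtested_sharpe = 0.80
      adjusted_sharpe = 0.75 * backtested_sharpe  # haircut for overstatement: 0.60
      max_vol_target = adjusted_sharpe / 2        # half-Kelly: 30% annualised vol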

  10. Hi Robert, and congratulations on an exceptional read!
    Just a quick question, as I am not 100% sure what your assumption is about the Gaussian normal distribution.

    Do you generally assume that daily *price changes*, or daily *percentage changes* follow such a distribution?

    Thank you!

  11. Hi Rob,

    Your book is very interesting. Congrats on a very structured approach!

    I have one quick question for you. I see that you recommend exiting a strategy using a stop loss with a given X. When you are backtesting a strategy, do you use such an exit method? I had a quick look at your code on GitHub but I could not find any reference to X for backtesting purposes.

    Thank you!

    Replies
    1. That's a specific method I recommend for traders who want separate entry and exit rules (especially when the entry is discretionary) with discrete trades. But I don't use such a system myself; instead I use a continuously changing forecast (described in chapter 15 of the book).

  12. Hi Rob,

    I really liked your book and would now like to apply your framework to my strategy (and hopefully, along the way, automate it).
    I would like your input on the instrument block. As I trade exchange futures spreads, 1% of one instrument is not really meaningful, as spreads can have huge variation in percentage terms...

    Would you have any suggestions, by any chance, on how to define the instrument block? I was thinking of possibly using the roll yield.

    Thank you!

    Replies
    1. I think you're not really asking about how to define a block (which will probably be a single futures contract, long + short) but about the price of a block.

      "As I trade exchange futures spreads, 1% of one instrument is not really meaningful as spreads can have huge variation in percentage terms..."

      And of course spreads can go negative.
      Anyway, you can just add a constant to the price when working out the volatility; it will still work.

      Consider the worked example at the end of chapter ten. Now let's do it for eurodollar calendar spreads.

      Price: Dec 17 / Dec 18 is currently at about $0.30

      Price volatility: the volatility of the spread. Let's say it's $0.02 a day. Now we add $100 to the price ($0.30), giving us $100.30. The % volatility is around 0.02 / 100.3 = 0.01994% per day.

      Instrument block: long 1 contract / short next contract

      Block value: how much do we gain or lose from a 1% move in the price? The price is $100 + spread. A 1% move would be a move in the spread of 100.3 * 1% = 1.003. This would cost us 1.003 * $2,500 = $2,507.50.

      Instrument currency volatility = block value * price volatility = $2,507.50 * 0.01994 = $50 per day.

      Then the rest of the formula works as normal.

      In your case it probably makes sense to find this value using the simplified formula that doesn't use % volatility at all. Instead, start with the spread volatility (0.02 points per day) and multiply by the value of a 1 unit spread move ($2,500). You get the same answer: $50.
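      The same worked example as a quick Python calculation (a $2,500 point value per calendar spread assumed, as above):

      spread_price = 0.30       # Dec 17 / Dec 18 spread, $
      price_vol = 0.02          # daily volatility of the spread, $
      point_value = 2500        # $ per full point move
      constant = 100.0          # arbitrary constant so % vol is meaningful

      adjusted_price = spread_price + constant           # 100.30
      pct_vol = 100 * price_vol / adjusted_price         # 0.01994 (% per day)
      block_value = adjusted_price * 0.01 * point_value  # $2,507.50 per 1% move
      instr_ccy_vol = block_value * pct_vol              # $2,507.50 * 0.01994 = ~$50/day

      # Simplified route, skipping percentages entirely:
      instr_ccy_vol_direct = price_vol * point_value     # 0.02 * $2,500 = $50/day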

    2. Thank you for coming back to me.
      I have done something similar to your last example (using the points per day) and it seems to work.
      Your first example would give different values depending on the constant that you choose. If you choose 100, it seems to always give a similar result to the second example.
      Why did you choose 1% of the price for the instrument block? Would it work if you defined the instrument block as 1% of the trading capital (and size accordingly)?

    3. The reason is that most people are used to measuring price moves in percentage terms. In fact, if you look at the calculation, the price itself cancels out. So you can use the volatility calculated directly in price differences, rather than as a %.

      "Would it work if you defined the instrument block as 1% of the trading capital (and size accordingly)?"

      No.
