Friday 29 January 2016

Correlations, Weights, Multipliers.... (pysystemtrade)

This post serves three main purposes:

Firstly, I'm going to explain the main features I've just added to my python back-testing package pysystemtrade; namely the ability to estimate parameters that were fixed before: forecast and instrument weights; plus forecast and instrument diversification multipliers.

(See here for a full list of what's in version 0.2.1)

Secondly I'll be illustrating how we'd go about calibrating a trading system (such as the one in chapter 15 of my book); actually estimating some forecast weights and instrument weights in practice. I know that some readers have struggled with understanding this (which is of course entirely my fault).

Thirdly there are some useful bits of general advice that will interest everyone who cares about practical portfolio optimisation (including both non-users of pysystemtrade and non-readers of the book). In particular I'll talk about how to deal with missing markets, the best way to estimate portfolio statistics, pooling information across markets, and generally continue my discussion about using different methods for optimising (see here, and also here).

If you want to, you can follow along with the code, here.


Key


This is python:

system.forecastScaleCap.get_scaled_forecast("EDOLLAR", "carry").plot()


This is python output:

hello world

This is an extract from a pysystemtrade YAML configuration file:

forecast_weight_estimate:
   date_method: expanding ## other options: in_sample, rolling
   rollyears: 20

   frequency: "W" ## other options: D, M, Y

Forecast weights


A quick recap



The story so far: we have some trading rules (three variations of the EWMAC trend following rule, and a carry rule), which we're running over six instruments (Eurodollar, US 10 year bond futures, Eurostoxx, MXP USD fx, Corn, and European equity vol; V2X).

We've scaled these (as discussed in my previous post) so that they have the correct scaling, which means both of these are directly comparable:

system.forecastScaleCap.get_scaled_forecast("EDOLLAR", "carry").plot()

Rolldown on STIR usually positive. Notice the interest rate cycle.

system.forecastScaleCap.get_scaled_forecast("V2X", "ewmac64_256").plot()

Notice how we moved from 'risk on' to 'risk off' in early 2015

Notice the massive difference in available data - I'll come back to this problem later.
 
However, having multiple forecasts isn't much good; we need to combine them (chapter 8). So we need some forecast weights. This is a portfolio optimisation problem. To be precise, we want the best portfolio built out of things like these:
Account curves for trading rule variations, US 10 year bond future. All pretty good....


There are some issues here then which we need to address.

An alternative which has been suggested to me is to optimise the moving average rules separately, and then as a second stage optimise the moving average group and the carry rule. This is similar in spirit to the handcrafted method I cover in my book. Whilst it's a valid approach, it's not one I cover here, nor is it implemented in my code.


In or out of sample?


Personally I'm a big fan of expanding windows (see chapter 3, and also here), but feel free to try different options by changing the configuration file elements shown here.

forecast_weight_estimate:
   date_method: expanding ## other options: in_sample, rolling
   rollyears: 20

   frequency: "W" ## other options: D, M, Y
Also, the default is to use weekly returns for optimisation. This has two advantages: firstly, it's faster; secondly, correlations of daily returns tend to be unrealistically low (because, for example, of different market closes when working across instruments).
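If you want to see what that downsampling looks like outside the package, here's a minimal sketch using made-up data and ordinary pandas resampling (the variable names are my own, not pysystemtrade internals):

import numpy as np
import pandas as pd

dates = pd.date_range("2015-01-01", periods=260, freq="B")
daily_returns = pd.Series(np.random.normal(0, 0.01, len(dates)), index=dates)
weekly_returns = daily_returns.resample("W").sum()  ## corresponds to frequency: "W"
print(weekly_returns.head())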


Choose your weapon: Shrinkage, bootstrapping or one-shot?


In my last couple of posts on this subject I discussed which method one should use for optimisation (see here, and also here, and also chapter four).

I won't reiterate the discussion here in detail, but I'll explain how to configure each option.

Bootstrapping

This is my favourite weapon, but it's a little ..... slow.


forecast_weight_estimate:
   method: bootstrap
   monte_runs: 100
   bootstrap_length: 50
   equalise_means: True
   equalise_vols: True



We expect our trading rule p&l streams to have the same standard deviation of returns (that's what forecast scaling gives us), so we shouldn't need to equalise vols; it's a moot point whether we do or not. Equalising means will generally make things more robust. With more bootstrap runs, and perhaps a longer length, you'll get more stable weights.
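To make the idea concrete, here's a rough sketch of what bootstrapping weights involves. This is illustrative only - the real pysystemtrade optimiser does a proper optimisation on each draw, whereas I've used a naive inverse-variance stand-in to keep the sketch short:

import numpy as np
import pandas as pd

def one_period_weights(sample):
    ## naive stand-in optimiser: inverse variance weights, just to keep this runnable
    inv_var = 1.0 / sample.var()
    return (inv_var / inv_var.sum()).values

def bootstrap_weights(returns, monte_runs=100, bootstrap_length=50):
    ## resample the return history with replacement, optimise each sample, average the weights
    all_weights = [one_period_weights(returns.sample(bootstrap_length, replace=True))
                   for _ in range(monte_runs)]
    return np.mean(all_weights, axis=0)

rule_returns = pd.DataFrame(np.random.normal(0.001, 0.01, size=(520, 3)),
                            columns=["ewmac16_64", "ewmac64_256", "carry"])
print(bootstrap_weights(rule_returns))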

Shrinkage


I'm not massively keen on shrinkage (see here, and also here) but it is much quicker than bootstrapping. So a good work flow might be to play around with a model using shrinkage estimation, and then for your final run use bootstrapping. It's for this reason that the pre-baked system defaults to using shrinkage. As the defaults below show I recommend shrinking the mean much more than the correlation.


forecast_weight_estimate:
   method: shrinkage
   shrinkage_SR: 0.90
   shrinkage_corr: 0.50
   equalise_vols: True
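For the avoidance of doubt, shrinkage just means pulling noisy estimates towards a prior. A minimal sketch of the idea (my own illustration, not the code in pysystemtrade):

import numpy as np

def shrink(estimate, prior, factor):
    ## factor=1 means ignore the estimate entirely; factor=0 means no shrinkage
    return factor * prior + (1 - factor) * estimate

est_corr = np.array([[1.0, 0.8], [0.8, 1.0]])
prior_corr = np.eye(2)                              ## here: shrink towards zero off-diagonals
print(shrink(est_corr, prior_corr, factor=0.5))     ## shrinkage_corr: 0.50

est_SR = np.array([0.6, 0.2])
prior_SR = np.full(2, est_SR.mean())                ## shrink Sharpe ratios towards their average
print(shrink(est_SR, prior_SR, factor=0.9))         ## shrinkage_SR: 0.90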


Single period


Don't do it. If you must do it then I suggest equalising the means, so the result isn't completely crazy.

forecast_weight_estimate:
   method: one_period
   equalise_means: True
   equalise_vols: True




To pool or not to pool... that is a very good question



One question we should address is: do we need different forecast weights for different instruments, or can we pool our data and estimate them together? Or, to put it another way, does Corn behave sufficiently like Eurodollar to justify giving them the same blend of trading rules, and hence the same forecast weights?

forecast_weight_estimate:
   pool_instruments: True ##
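Under the hood, pooling just means stacking the (weekly) rule returns from every instrument into one long history before optimising. A tiny sketch with made-up numbers:

import pandas as pd

corn_rule_returns = pd.DataFrame({"ewmac16_64": [0.010, -0.020], "carry": [0.000, 0.010]})
edollar_rule_returns = pd.DataFrame({"ewmac16_64": [0.002, 0.001], "carry": [0.003, 0.000]})

## one set of forecast weights is then estimated from this stacked frame
pooled = pd.concat([corn_rule_returns, edollar_rule_returns], axis=0, ignore_index=True)
print(pooled)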

One very significant factor in making this decision is actually costs. However, I haven't yet included the code to calculate their effect, so for the time being we'll ignore them, even though they do make a significant difference. Because of the choice of three slower EWMAC rule variations this omission isn't as serious as it would be with faster trading rules.

If you use a stupid method like one-shot then you probably will get quite different weights. However, more sensible methods will account better for the noise in each instrument's estimate.

With only six instruments, and without costs, there isn't really enough information to determine whether pooling is a good thing or not. My strong prior is to assume that it is. Just for fun here are some estimates without pooling.

from matplotlib.pyplot import show, title
from systems.provided.futures_chapter15.estimatedsystem import futures_system

system=futures_system()
system.config.forecast_weight_estimate["pool_instruments"]=False
system.config.forecast_weight_estimate["method"]="bootstrap"
system.config.forecast_weight_estimate["equalise_means"]=False
system.config.forecast_weight_estimate["monte_runs"]=200
system.config.forecast_weight_estimate["bootstrap_length"]=104

system=futures_system(config=system.config)

system.combForecast.get_forecast_weights("CORN").plot()
title("CORN")
show()









Forecast weights for corn, no pooling

system.combForecast.get_forecast_weights("EDOLLAR").plot()
title("EDOLLAR")
show()



Forecast weights for eurodollar, no pooling

Note: Only instruments that share the same set of trading rule variations will see their results pooled.
 

Estimating statistics


There are also configuration options for the statistical estimates used in the optimisation; so, for example, should we use exponentially weighted estimates? (This makes no sense for bootstrapping, but for the other methods it is a reasonable thing to do.) Is there a minimum number of data points before we're happy with our estimate? Should we floor correlations at zero (short answer - yes)?


forecast_weight_estimate:
 

   correlation_estimate:
     func: syscore.correlations.correlation_single_period
     using_exponent: False
     ew_lookback: 500
     min_periods: 20     
     floor_at_zero: True

   mean_estimate:
     func: syscore.algos.mean_estimator
     using_exponent: False
     ew_lookback: 500
     min_periods: 20     

   vol_estimate:
     func: syscore.algos.vol_estimator
     using_exponent: False
     ew_lookback: 500
     min_periods: 20     


Checking my intuition


Here's what we get when we actually run everything with some sensible parameters:

system=futures_system()
system.config.forecast_weight_estimate["pool_instruments"]=True
system.config.forecast_weight_estimate["method"]="bootstrap" 
system.config.forecast_weight_estimate["equalise_means"]=False
system.config.forecast_weight_estimate["monte_runs"]=200
system.config.forecast_weight_estimate["bootstrap_length"]=104


system=futures_system(config=system.config)

system.combForecast.get_raw_forecast_weights("CORN").plot()
title("CORN")
show()

Raw forecast weights pooled across instruments. Bumpy ride.
Although I've plotted these for corn, they will be the same across all instruments. Almost half the weight goes into carry; this makes sense since it's relatively uncorrelated with the other rules (half is what my simple optimisation method - handcrafting - would put in). Hardly any (about 10%) goes into the medium-speed trend following rule; it is highly correlated with the other two rules. Out of the remaining variations the faster one gets a higher weight; this is the law of active management at play, I guess.

Smooth operator - how not to incur costs changing weights


Notice how jagged the lines above are. That's because I'm estimating weights annually. This is kind of silly; I don't really have tons more information after 12 months; the forecast weights are estimates - which is a posh way of saying they are guesses. There's no point incurring trading costs when we update these with another year of data.

The solution is to apply a smooth:

forecast_weight_estimate:
   ewma_span: 125
   cleaning: True
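To see what the smooth does, here's a minimal sketch (made-up weights and an ordinary pandas EWMA, not the internal pysystemtrade code): weights that step-change once a year drift gradually instead.

import numpy as np
import pandas as pd

dates = pd.date_range("2010-01-01", "2014-12-31", freq="B")
raw = pd.DataFrame(index=dates, columns=["ewmac16_64", "carry"], dtype=float)
for year, w in [(2010, 0.60), (2011, 0.55), (2012, 0.45), (2013, 0.50), (2014, 0.40)]:
    mask = raw.index.year == year
    raw.loc[mask, "ewmac16_64"] = w       ## weights jump once a year...
    raw.loc[mask, "carry"] = 1.0 - w

smoothed = raw.ewm(span=125).mean()       ## ...then drift slowly towards the new values
print(smoothed.loc["2012-01":"2012-06"].iloc[::21])   ## monthly snapshots around a change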


Now if we plot forecast_weights, rather than the raw version, we get this:

system.combForecast.get_forecast_weights("CORN").plot()
title("CORN")
show()



Smoothed forecast weights (pooled across all instruments)
There's still some movement; but any turnover from changing these parameters will be swamped by the trading the rest of the system is doing.



Forecast diversification multiplier


Now we have some weights we need to estimate the forecast diversification multiplier; so that our portfolio of forecasts has the right scale (an average absolute value of 10 is my own preference).


Correlations


First we need to get some correlations. The more correlated the forecasts are, the lower the multiplier will be. As you can see from the config options we again have the option of pooling our correlation estimates.


forecast_correlation_estimate:
   pool_instruments: True 

   func: syscore.correlations.CorrelationEstimator ## function to use for estimation. This handles both pooled and non pooled data
   frequency: "W"   # frequency to downsample to before estimating correlations
   date_method: "expanding" # what kind of window to use in backtest
   using_exponent: True  # use an exponentially weighted correlation, or all the values equally
   ew_lookback: 250 ## lookback when using exponential weighting
   min_periods: 20  # min_periods, used for both exponential, and non exponential weighting





Smoothing, again


We estimate correlations, and weights, annually. Thus as with weightings it's prudent to apply a smooth to the multiplier. I also floor negative correlations to avoid getting very large values for the multiplier.


forecast_div_mult_estimate:
   ewma_span: 125   ## smooth to apply
   floor_at_zero: True ## floor negative correlations
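The calculation itself is simple enough to sketch: with forecast weights w and a (floored) correlation matrix H, the multiplier is 1 / sqrt(w' H w), and the resulting time series then gets the EWMA smooth. Made-up numbers below; the real estimates come from the config above:

import numpy as np

w = np.array([0.45, 0.10, 0.20, 0.25])        ## forecast weights (carry plus three ewmac speeds)
H = np.array([[1.0, 0.1, 0.1, 0.1],
              [0.1, 1.0, 0.9, 0.6],
              [0.1, 0.9, 1.0, 0.8],
              [0.1, 0.6, 0.8, 1.0]])          ## forecast correlation matrix
H = np.clip(H, 0.0, None)                     ## floor_at_zero: True
fdm = 1.0 / np.sqrt(w.dot(H).dot(w))
print(fdm)                                    ## about 1.4 here, before smoothing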


system.combForecast.get_forecast_diversification_multiplier("EDOLLAR").plot()
show()




system.combForecast.get_forecast_diversification_multiplier("V2X").plot()
show()

Forecast Div. Multiplier for Eurodollar futures
Notice that when we don't have sufficient data to calculate correlations, or weights, the FDM comes out with a value of 1.0. I'll discuss this more below in "dealing with incomplete data".


From subsystem to system


We've now got a combined forecast for each instrument - the weighted sum of trading rule forecasts, multiplied by the FDM. It will look very much like this:

system.combForecast.get_combined_forecast("EUROSTX").plot()
show()

Combined forecast for Eurostoxx. Note the average absolute forecast is around 10. Clearly a choppy year for stocks.


Using chapters 9 and 10 we can now scale this into a subsystem position. A subsystem is my terminology for a system that trades just one instrument. Essentially we pretend we're using our entire capital for just this one thing.


Going pretty quickly through the calculations (since you're either familiar with them, or you just don't care):

system.positionSize.get_price_volatility("EUROSTX").plot()
show()

Eurostoxx price volatility (daily, in % terms). A bit less than 1% a day in 2014, a little more exciting recently.

system.positionSize.get_block_value("EUROSTX").plot()
show()


Block value (value of 1% change in price) for Eurostoxx.


system.positionSize.get_instrument_currency_vol("EUROSTX").plot()
show()




Eurostoxx instrument currency volatility: volatility in euros per day, per contract


system.positionSize.get_instrument_value_vol("EUROSTX").plot()
show()







Eurostoxx instrument value volatility: volatility in base currency ($) per day, per contract



system.positionSize.get_volatility_scalar("EUROSTX").plot()
show()




Eurostoxx vol scalar: Number of contracts we'd hold in a subsystem with a forecast of +10




system.positionSize.get_subsystem_position("EUROSTX").plot()
show()

Eurostoxx subsystem position
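Pulling those steps together, here's the chain of arithmetic as a sketch (made-up numbers; the definitions follow chapters 9 and 10):

combined_forecast = 10.0            ## average absolute value is 10 by construction
block_value = 35.0                  ## value of a 1% price move, per contract (EUR)
price_vol_pct = 1.2                 ## daily price volatility, in %
fx_rate = 1.1                       ## instrument currency -> base currency

instr_currency_vol = block_value * price_vol_pct      ## daily vol per contract, in EUR
instr_value_vol = instr_currency_vol * fx_rate        ## daily vol per contract, in base currency
daily_cash_vol_target = 3906.0                        ## from trading capital and % vol target
vol_scalar = daily_cash_vol_target / instr_value_vol  ## contracts held at an average forecast of +10
subsystem_position = vol_scalar * combined_forecast / 10.0
print(vol_scalar, subsystem_position)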



Instrument weights


We're not actually trading subsystems; instead we're trading a portfolio of them. So we need to split our capital - for this we need instrument weights. Oh yes, it's another optimisation problem, with the assets in our portfolio being subsystems, one per instrument.


import pandas as pd

instrument_codes=system.get_instrument_list()

pandl_subsystems=[system.accounts.pandl_for_subsystem(code, percentage=True)
        for code in instrument_codes]

pandl=pd.concat(pandl_subsystems, axis=1)
pandl.columns=instrument_codes

pandl.cumsum().plot()
show()

Account curves for instrument subsystems
Most of the issues we face are similar to those for forecast weights (except pooling - you don't have to worry about that any more). But there are a couple more annoying wrinkles we need to consider.



Missing in action: dealing with incomplete data


As the previous plot illustrates we have a mismatch in available history for different instruments - loads for Eurodollar, Corn, US10; quite a lot for MXP, barely any for Eurostoxx and V2X.

This could also be a problem for forecasts, at least in theory, and the code will deal with it in the same way.

Remember when testing out of sample I usually recalculate weights annually. Thus on the first day of each new 12 month period I face having one or more of these beasts in my portfolio:
  1. Assets which weren't in my fitting period, and aren't used this year
  2. Assets which weren't in my fitting period, but are used this year
  3. Assets which are in some of my fitting period, and are used this year
  4. Assets which are in all of the fitting period, and are used this year
Option 1 is easy - we give them a zero weight.

Option 4 is also easy; we use the data in the fitting period to estimate the relevant statistics.

Option 2 is relatively easy - we give them a "downweighted average" weight. Let me explain. Let's say we have two assets already, each with a 50% weight. If we were to add a further asset we'd allocate it an average weight of 33.3%, and split the rest between the existing assets. In practice I want to penalise new assets, so I only give them half their average weight. In this simple example I'd give the new asset half of 33.3%, or 16.66%.
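Here's that arithmetic as a short sketch (my own illustration, not the pysystemtrade implementation):

def downweighted_average_weights(weights, missing, penalty=0.5):
    ## weights: estimated weights (zero where missing); missing: True where no estimate exists
    n = len(weights)
    avg = 1.0 / n                                        ## equal-weight share of the enlarged portfolio
    new_w = [penalty * avg if m else w for w, m in zip(weights, missing)]
    ## rescale the assets we could estimate so the whole thing still sums to one
    used = sum(w for w, m in zip(new_w, missing) if not m)
    spare = 1.0 - sum(w for w, m in zip(new_w, missing) if m)
    return [w if m else w * spare / used for w, m in zip(new_w, missing)]

## two existing assets at 50% each, plus one new asset with no history:
print(downweighted_average_weights([0.5, 0.5, 0.0], [False, False, True]))
## roughly [0.4167, 0.4167, 0.1667] - the newcomer gets half of 33.3%, as described above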

We can turn off this behaviour, which I call cleaning. If we do we'd get zero weights for assets without enough data.


instrument_weight_estimate:
   cleaning: False
 


Option 3 depends on the method we're using. If we're using shrinkage or one period, then as long as there's enough data to exceed the minimum number of periods (default 20 weeks) we'll have an estimate. If we haven't got enough data, then it will be treated as a missing weight; we'd then use downweighted average weights (if cleaning is on), or give the absent instruments a zero weight (with cleaning off).

For bootstrapping we check to see if the minimum period threshold is met on each bootstrap run. If it isn't then we use average weights when cleaning is on. The less data we have, the closer the weight will be to average. This has a nice Bayesian feel about it, don't you think? With cleaning off, less data will mean weights will be closer to zero. This is like an ultra conservative Bayesian.



If you don't get this joke, there's no point in me trying to explain it (Source: www.lancaster.ac.uk)


Let's plot them


We're now in a position to optimise, and plot the weights:

(By the way because of all the code we need to deal properly with missing weights on each run, this is kind of slow. But you shouldn't be refitting your system that often...)

system.config.instrument_weight_estimate["method"]="bootstrap" ## speed things up
system.config.instrument_weight_estimate["equalise_means"]=False
system.config.instrument_weight_estimate["monte_runs"]=200
system.config.instrument_weight_estimate["bootstrap_length"]=104

system.portfolio.get_instrument_weights().plot()
show()


Optimised instrument weights
These weights are a bit different from equal weights, in particular the better performance of US 10 year and Eurodollar is being rewarded somewhat. If you were uncomfortable with this you could turn equalise means on.


Instrument diversification multiplier


Missing in action, take two


Missing instruments also affects estimates of correlations. You know, the correlations we need to estimate the diversification multiplier. So there's cleaning again:


instrument_correlation_estimate:
    cleaning: True


I replace missing correlation estimates* with the average correlation, but I don't downweight it. If I downweighted the average correlation the diversification multiplier would be biased upwards - i.e. I'd have too much risk on. Bad thing. I could of course use an upweighted average; but I'm already penalising instruments without enough data by giving them lower weights.

* where I need to, i.e. options two and three
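As a sketch of what that replacement looks like (illustrative only; the real logic lives in syscore.correlations):

import numpy as np

corr = np.array([[1.0, 0.6, np.nan],
                 [0.6, 1.0, np.nan],
                 [np.nan, np.nan, 1.0]])      ## third instrument has too little history
off_diag = corr[np.triu_indices_from(corr, k=1)]
avg_corr = np.nanmean(off_diag)               ## average of the correlations we could estimate
corr[np.isnan(corr)] = avg_corr               ## used as-is, with no downweighting
print(corr)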

Let's plot it



system.portfolio.get_instrument_diversification_multiplier().plot()
show()


Instrument diversification multiplier


And finally...


We can now work out the notional positions - taking the subsystem positions, weighting them by instrument weight, and multiplying by the instrument diversification multiplier.
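The final step is just the product of those three things; as a sketch, with made-up numbers:

subsystem_position = 3.2        ## contracts, from the EUROSTX subsystem
instrument_weight = 0.15        ## share of capital allocated to EUROSTX
idm = 1.8                       ## instrument diversification multiplier
notional_position = subsystem_position * instrument_weight * idm
print(notional_position)        ## about 0.86 contracts, before rounding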


system.portfolio.get_notional_position("EUROSTX").plot()
show()


Final position in Eurostoxx. The actual position will be a rounded version of this.


End of post


No quant post would be complete without an account curve and a Sharpe Ratio.

And an equation. Bugger, I forgot to put an equation in.... but you got a Bayesian cartoon - surely that's enough?
 

print(system.accounts.portfolio().stats())

system.accounts.portfolio().cumsum().plot()

show()



Overall performance. Sharpe ratio is 0.53. Annualised standard deviation is 27.7% (target 25%)

Stats: [[('min', '-0.3685'), ('max', '0.1475'), ('median', '0.0004598'),
('mean', '0.0005741'), ('std', '0.01732'), ('skew', '-1.564'),
('ann_daily_mean', '0.147'), ('ann_daily_std', '0.2771'),
('sharpe', '0.5304'), ('sortino', '0.6241'), ('avg_drawdown', '-0.2445'),
('time_in_drawdown', '0.9626'), ('calmar', '0.2417'),
('avg_return_to_drawdown', '0.6011'), ('avg_loss', '-0.011'),
('avg_gain', '0.01102'), ('gaintolossratio', '1.002'),
('profitfactor', '1.111'), ('hitrate', '0.5258')]]

This is a better output than the version with fixed weights and diversification multiplier that I've posted before; mainly because a variable multiplier leads to a more stable volatility profile over time, and thus a higher Sharpe Ratio.


153 comments:

  1. Rob, again thank you for the article.
    I am curious, is there a easy way to feed your system directly from Quandl instead of legacyCSV files?

    ReplyDelete
  2. Getting data from quandl python api is very easy. The hard thing is to produce the two kinds of data - stitched prices (although quandl do have this) and aligned individual contracts for carry. So the hard bit at least for futures trading is writing the piece that takes raw individual contracts and produces these two things.

    This is on my list to do...

    ReplyDelete
  3. I had a few Q's on above:

    OPTIMISATION

    When you optimise to assign weights to rules, what do you do in your OWN system:
    1. i) do you optimise the weights for each trading rule based on each instrument individually, so each trading rule has a different weight depending on the instrument, or ii) do you optimise the weights for trading rules based on pooled data across all instruments?
    2. if the answer above is ii) how do you assign the WEIGHTS TO THE INSTRUMENTS when you pool them in the optimisation to determine the WEIGHTS TO THE TRADING RULES? Are the instrument weights determined in a prior optimisation before assigning weights to trading rules? Is your process to first optimise the weights assigned to each instrument, and after this is done you pool the instruments based on these weights to optimise the for the weights for each trading rule?


    FORECAST SCALARS

    When we calculate average forecast scalars, what do you personally do:
    1. do you calculate the median or arithmetic average?
    2. in order to calculate the average, do you personally pool all the instruments, or do you take the average forecast from each instrument individually?

    Apologies for the caps, could not find any other way to add emphasis.

    ReplyDelete
    Replies
    1. "1. i) do you optimise the weights for each trading rule based on each instrument individually, so each trading rule has a different weight depending on the instrument, or ii) do you optimise the weights for trading rules based on pooled data across all instruments?"

      Number (ii) but in the presence of different cost levels (code not yet written).


      "2. if the answer above is ii) how do you assign the WEIGHTS TO THE INSTRUMENTS when you pool them in the optimisation to determine the WEIGHTS TO THE TRADING RULES? Are the instrument weights determined in a prior optimisation before assigning weights to trading rules? Is your process to first optimise the weights assigned to each instrument, and after this is done you pool the instruments based on these weights to optimise the for the weights for each trading rule?"

      No, if you look at the code it is just stacking all the returns from different instruments. This means they are equally weighted, but actually implicitly higher weights are given to instruments with more data history.

      "FORECAST SCALARS

      When we calculate average forecast scalars, what do you personally do:
      1. do you calculate the median or arithmetic average?"

      median

      "2. in order to calculate the average, do you personally pool all the instruments, or do you take the average forecast from each instrument individually?"

      Pool.

      Rob

      Delete
    2. Hi Rob, Thanks for your answer above. I am unclear as to i) when in the process the instrument weights are calculated and ii) how these are calculated. Are you able to explain this?

      Delete
    3. The instrument weights are calculated when they're needed; after combining forecasts (chapter 8) and position scaling (chapters 9 and 10).

      As to how, it's just portfolio optimisation (of whatever specific kind you prefer; though I use bootstrapping on an expanding out of sample window). The assets in the portfolio are the returns of the trading subsystems, one for each instrument.

      Rob

      Delete
  4. Sorry Rob, I am still trying to wrap my head around this. So to confirm, the instrument weights are determined in a SEPARATE optimisation that is INDEPENDENT from the optimisation of the weights assigned to trading rules? So two separate optimisations?

    ReplyDelete
    Replies
    1. Yes. The forecast weights optimisation has to be done first; then subsequent to that you do one for the instrument weights.

      (of course it's feasible to do it differently if you like.... but I find it easier to do this way and that's what in the book and the code)

      Delete
  5. OK, this is clear in my mind now. Thank you!

    ReplyDelete
  6. Hi Rob,

    Can you perhaps write a blog post about how the Semi Automated trader could develop scaled forecasts? In the book, the examples of CFD bets (not available to those of us in the US) is very helpful, but what if we like the way in which your signals fluctuate from moderately strong to stronger?

    ReplyDelete
    Replies
    1. The instrument you're trading is irrelevant. It's just a matter of translating your gut feel into a number between -20 (strong sell) and +20 (strong buy). I'm not sure that's something I can blog about. Or have I misunderstood the question?

      Delete
  7. Right, that makes sense. Perhaps i'm just not fully understanding. Based on the walk-through examples in the book for the Semi-automatic trader using CFD's, the signals aren't combined or anything fancy like that. Like you said, its just a matter of translating gut feel into an integer.

    I just wanted to know if it were possible for the discretionary trader to develop a weighted combined forecast, similar to the staunch systems trader. One of the most attractive features of your system is the fact that the signal generation is done for you on a routine basis.

    Based on my limited understanding, it seemed like the semi-automatic trader is limited to explicit stop losses and arbitrary binary trading.

    ReplyDelete
    Replies
    1. Oh sure you can combine discretionary forecasts. If you post your email I'll tell you how (not a secret but a bit long for a comment). I moderate posts so I won't publish the one with your email in it.

      Delete
  8. Hi Robert,

    I've a question about forecast weights.

    At first, more theoretical...
    I want to use bootstrapping to determine the forecast weights. I think it's best to calculate separate forecast weights for each element because the cost/instrument can vary substantially per instrument. Also in my opinion it's important to take into account the trading costs for the calculation of the forecast weights, because a fast trading system will generate a lot of trading costs (I work with CFD's) and I think a lower participation in the combined forecast for the faster system will be better.
    Do you agree with this ideas ?

    Now more practical...
    My idea is to calculate a performance curve for each trading rule variation for each instrument and use this performance curves for bootstrapping.

    Is the following method correct :
    1. Daily calculation per instrument en per trading rule variation
    - calculate scaled forecast
    - calculate volatility scaler
    - calculate number of contracts
    - calculate profitloss (including trading costs)
    - create accountcurve

    2. use bootstrapping method per instrument using all the account curves for all used trading rule variations. The result should be the forecast weights per instrument (subsystem)

    Is this the correct way ?

    Thank you
    Kris

    ReplyDelete
    Replies
    1. Yes, definitely use trading costs to calculate weights, and if costs vary a lot between instruments then do them separately.

      The method you outline is correct.

      pysystemtrade will of course do all this; set forecast_correlation_estimate["pool_instruments"] to false

      Delete
    2. Hi Robert,

      Thank you for the confirmation.

      Kris

      Delete
  9. I was listening to Perry Kaufman podcast on Better System trader, and he said that true volatility adjustment doesn't work for stocks.

    The argument is that because stock has low leverage and if you trading a stock with low volatility you will need to invest a lot of money to bring that volatility to mach other stock and you may not have enough money to do that. Another option is to reduce to position of the other stocks but then you not using all the money.

    What he suggested is to dividing equal investment by stock price.

    I wonder that your thoughts on this?

    ReplyDelete
    Replies
    1. Generally speaking I think volatility adjustment works for any asset that has reasonably predictable / continuously adjusting volatility. There's nothing bad about stocks, except maybe illiquid penny rubbish, that makes them bad for vol sizing.

      BUT really low volatility is bad in any asset class.

      I discuss the problems of trading anything with really low volatility in my book. Essentially you should avoid doing it. If you haven't got leverage then as Perry says it will consume too much capital. If you have got leverage then you'll open yourself up to a fat tailed event.

      It also leads to higher costs.

      Delete
  10. I have two questions:

    1.) I may have missed somewhere if you mentioned it, but how do you manage hedging currencies? It seems like your trading in pounds, so for instance how do you hedge contracts denominated in AUD?

    2.) What is your margin to equity? This is something I keep hearing about. For instance backtesting a few different strategies and running the margins in CME database shows a margin to equity of about 35% when I am targeting 15% vol. This seems high compared to other managed futures strategies that say about 15% margin to equity and have higher volatility(even while trading more markets than I). Any thoughts would be more than appreciated!!

    ReplyDelete
    Replies
    1. You don't need to hedge futures exposure, just the margin and p&l. My policy is straightforward - to avoid building up excessive balances in any currency.

      My margin:equity is also around 35%, but on 25% volatility. I agree that your margin sounds rather high.

      Delete
    2. Thank you!! Would you mind providing just a simple example of how the currency hedging works?

      Also, I'm trading markets similar to yours and I can't see my margin to equity being correct, would you agree?

      Delete
    3. I buy an S&P future@2000. The notional value of the contract is 200x50 = $100K. I need to post $6K margin. I convert say £4K GBP to do this.

      Scenario a) Suppose that GBPUSD changes such that the rate goes from 1.5 to 2.0. I've lost £1K since my margin is worth only £3K. But I'm not exposed to losses on the full 100K.

      Scenario b) Suppose the future goes to 2200 with the fx rate unchanged. I've made $50 x 200 points = $10,000. I sweep this back home to GBP leaving just the initial margin. I now have $10K in GBP; i.e. £6,666 plus $6K margin.

      Scenario c) Suppose the future goes to 2200. I've made $10,000. I don't sweep and GBPUSD goes to 2.0. I've now got the equivalent of £5,000 in profits and £3,000 in margin. I've lost £1,666 plus the losses on my margin as in scenario (a).

      I agree your margin does sound very high.

      Delete
    4. Ahh I see. Thank you very much, very helpful to me! Your response is greatly appreciated.

      Thank you for your work and love the book!

      Delete
  11. I hope you don't mind questions!

    You say you have a 10% buffer around the current position(i.e if the weight at rebalance is 50% and the target is 45%, you keep it at 50% because it is within 10%). However, what if you have a situation where the position changes from, say, +5% to -4%? This is within the 10% buffer but the signs have changed, what do you do with your position?

    ReplyDelete
    Replies
    1. Gotcha. So you'd leave it at 5%?

      Delete
    2. This is an interesting question. What is the reasoning for a 10% buffer(like why not 5% or 15%)?. Two scenarios pop into my head.

      a.) Lets say you have 10 commodities and each are 9% above their optimal weights. That would leave you 90% more exposed to commodities than your model would call for. Obviously your willing to accept this risk(unless you have some other limits in place). Or say all the 10 commodities have optimal weights of -5% and your current position in all commodities is 4%. You should be -50% short commodities but instead your 40% long commodities.

      b.) With the 10% buffer there is room for path dependency. Taking the example above, if you establish positions in 10 commodities in January and the signals give them 4% weights each and say commodities dont move around much for, say 3-months, you end up being long 40% commodities for those three months. On the other hand, say you establish positions in February and the signals for all commodities is -5% and don't move around a lot for 3 months. You are now -50% short for a few months(2 overlapping with the January scenario). Certainly you can say well they didn't move much so overall the difference might not be that important. But in real time, say in March, we obviously don't know the next few months will be less volatile we just know we're either 40% long commodities or -50% short commodities.

      These are just a few thoughts. Obviously you can mitigate some of the problem by setting exposure limits and such. But the path dependency scenario would still be there(especially with larger buffers).

      Obviously I'm biased towards a smaller buffer. By how much I'm not sure that's why I'd love to get your thoughts on the matter!

      Or say you have 11 strategies equally weighted trading one market. Presumably each strategy is profitable after some conservative measure of costs without any buffering. If 10 of the strategies show now changes in weights and 1 strategy is now 100% long(9% increase in position) you'd be ignoring that valuable signal.

      Would love your thoughts!

      Delete
    3. Theoretically the optimal buffer size depends only on costs. Higher costs means a larger buffer. Lower costs means a smaller buffer. I use a constant buffer size purely to make life simpler.

      Delete
    4. Did you estimate the buffer purely on a backtest? eg tried a bunch of different buffers for a give set of costs and settled on 10%?

      Delete
    5. Also I apologize I should have referenced your book first. I wasn't fully aware that you calculate the buffer using actual contracts, not percents as I thought.

      The example in your book is: the upper buffer is 133.52 * 1.1 = 147 contracts.

      Using this logic then, if I have 1 contract, the upper buffer is 1.1 contracts. So if my optimal number of contracts goes above 1.1(or 1.5 pretty much) then I make a trade. So if my optimal number is 1.6 I just trade one contract?

      Also, in the example I just gave, isnt the impact different for contracts of varying values. So 1 more contract in JGB is $1m vs 1 more contract in TYA being $125k?

      I find this aspect interesting, greatly appreciate your thoughts.

      Delete
    6. ESHKD
      "Did you estimate the buffer purely on a backtest? eg tried a bunch of different buffers for a give set of costs and settled on 10%?"

      In fact there are well known analytical methods for deriving the optimal buffer size (we should always avoid fitting when we can), and 10% is a pretty conservative value which I chose to avoid the complexity of including yet another table in my book (it's what I use in my own system, regardless of cost, so this is a simplification I'm personally fine with).

      "Using this logic then, if I have 1 contract, the upper buffer is 1.1 contracts. So if my optimal number of contracts goes above 1.1(or 1.5 pretty much) then I make a trade. So if my optimal number is 1.6 I just trade one contract?"

      Correct.


      "Also, in the example I just gave, isnt the impact different for contracts of varying values. So 1 more contract in JGB is $1m vs 1 more contract in TYA being $125k?"

      Yes, but (a) it's contract risk that is important not notional size, and (b) there clearly isn't anything we can do about this!

      Delete
    7. Gotcha. It still appears more right to me to set the buffer around % allocations(as opposed to contracts). So obviously I'm missing something. I don't want to exhaust you so thank you for your response!

      Delete
    8. Any resources you may have or know of on the subject would be greatly appreciated though!!

      Delete
  12. Hi Rob,
    If you don't mind me asking, are your log-scale equity curve charts in base 'e' or base 10?
    Thanks

    ReplyDelete
    Replies
    1. Neither. They are cumulated % curves. So "5" implies I've made 500% return on capital if I hadn't done any compounding. A log curve of compounded returns would look the same but have some different scale.

      Delete
  13. Also, from what I have read, it seems your instrument and rule weights are only updated each time a new instrument enters your system, so you hardcode these weights in your own config; however, these weights do incrementally change each day as you apply a smooth to them. How can one set this up in pysystemtrade? I understand how you hardcode the weights in the config, but how do I apply a smooth to them in pysystemtrade? Or is this done automatically if I included e.g., 'instrument_weight_estimate: ewma_span: 125' in the config?

    ReplyDelete
    Replies
    1. At the moment the code doesn't support this. However I think it makes sense to smooth "fixed" weights as instruments drift in and out, so I'll include it in the next release.

      Delete
    2. Now added to the latest release. Parameter is renamed instrument_weight_ewma_span: 125 (and same for forecasts). Will apply to both fixed and estimated weights. Set to 1 to turn off.

      Delete
    3. Hi Rob,
      Can you provide some further details on how to use fixed weights (that I have estimated), yet apply a smooth to them? I've been unable to use 'instrument_weight_ewma_span' to filfill this purpose... Thanks!

      Delete
    4. http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.ewma.html

      Delete
    5. Hi Rob,
      I’ve not been clear. I understand how EWMA works and the process of smoothing.
      The problem I am having is that I am using weekly bootstrapping to estimate instrument weights. However, each day when I run pysystemtrade the calculated instrument weights can vary significantly day to day due to the nature of bootstrapping. This leads to situations where e.g., pysystemtrade would have generated a trade yesterday when I was running it (which I would have executed), but when I run it today the instrument weight estimates may have changed enough due to the bootstrapping so that the trade that was generated and executed yesterday does not show up as a trade that was generated yesterday today. This makes me less trusting of the backtested performance, as the majority of trades that were historically generated but excluded after resampling are losing trades.
      I only sample the market once a day generally (so that repeated sampling of the market overwriting the current day’s mark is not an issue).
      I would like to use the bootstrapping to estimate the weights ANNUALLY and apply the smooth to adjust between last year’s calculated weight, and today’s. But if I am using fixed weights (after having estimated via bootstrapping) by setting them as fixed in the config, there are no longer two data points to smooth between as I have only one fixed estimate in the config.
      How can I insert an historical weight for last year and a new fixed weight this year (by fixing it in the config) and smooth between them?

      Delete
    6. "I am using weekly bootstrapping to estimate instrument weights...." I think this is a little... well if I'm being honest I think its insane.

      Okay so to answer the question for backtesting I use one config, and then for my live trading system I use another config which has fixed weights. Personally I run these seperately for different reasons, the backtest to get an idea of historical performance, the 'as live' backtest with fixed weights to compare against what I'm currently doing and for actual trading.

      There is no configurable way of mixing these, so you'd need to write some code that takes the estimate bootstrapped weights and then replaces them with fixed weights after a certain date.

      Delete
    7. Thanks for the reply. I had applied the same method for the instrument weights as for the forecast weights. You'd mentioned above:
      "Also the default is to use weekly returns for optimisation. This has two advantages; firstly it's faster. Secondly correlations of daily returns tend to be unrealistically low (because for example of different market closes when working across instruments)."
      Why would the default for forecast weights be weekly but not for instrument weights?
      Thanks!

      Delete
    8. Oh sorry I misunderstood. You are using WEEKLY RETURNS to estimate instrument weights: that's fine. I thought you were actually redoing the bootstrapping every week.

      Delete
    9. Hi Rob,
      just on top of this interesting post, 1. do you have any insights on what's the best frequency to update forecast and subsystem weights using bootstrap? 2.if i understand correctly, in live trading, the bootstrap weights(both forecast and subsystem) are updated in discrete way, then why there is smooth method applied in the backtest which might not be the best estimation of live trading?

      Delete
    10. I don't update my weights at all in live trading, but it's more appropriate to do so in simulation since there are markets coming into the system, also to begin with there isn't much data to work off. The smoothing is there so that there aren't massive transaction costs when the weights change.

      Delete
    11. Thanks Rob. Interesting, so that you don't do bootstrap even after having, say, a whole new year data on Live trading? Then what's the point of rolling-window backtest....The optimizing process in live trading doesn't take new data and giving up out of date market conditions.I am bit lost here....

      Delete
    12. You need to have some weights in the backtest, and the only honest way of doing this is to do a backward looking window. Makes sense to update this in the backtest, as there isn't much data to begin with. But at some point the weights are pretty stable and there is almost no value in changing them. So that's why I don't bother in live trading.

      Delete
  14. Hi Rob,
    If I wanted to apply a trading rule to one instrument, say ewmac8_32 just to Corn, and another trading rule to another instrument, say ewmac32_128 just to US10, and combine them into a portfolio so that I could get the account statistics, how could I do that? The typical method of creating systems obviously applies each trading rule to each market.

    I suspect that this would have to be done at the TradingRule stage such that a TradingRule (consisting of the rule, data, and other_args) would be constructed for the 2 cases above. However, I'm having trouble passing the correct "list" of data to the TradingRule object. And, if that is possible, what would need to be passed in for the "data" at the System level i.e. my_system=System([my_rules], data)? I suspect that if all this is possible, it could also be done with a YAML file correct? Thank you so much for any advice and pointing me in the right direction!

    ReplyDelete
    Replies
    1. Easy. The trading rule object should contain all rules you plan to use.

      If using fixed weights:

      YAML:
      forecast_weights:
        CORN:
          ewmac8_32: 1.00
        US10:
          ewmac32_128: 1.00

      Python:
      config.forecast_weights=dict(CORN=dict(ewmac8_32=1.0), US10=dict(ewmac32_128=1.0))


      If using estimated weights:

      YAML:
      rule_variations:
        CORN:
          - "ewmac8_32"
        US10:
          - "ewmac32_128"

      Python:
      config.rule_variations=dict(CORN=["ewmac8_32"], US10=["ewmac32_128"])

      (In this trivial example you wouldn't need to estimate, but you could specify multiple rules of different kinds to do so)

      Delete
  15. Thank you, will test this out!

    ReplyDelete
  16. Hi Rob,
    Thank you for an excellent book. I am trying to rewrite some parts of your system in a different language (broker doesn't support python) and add live trading. However I got a bit stuck while I was trying to reproduce the calculations of volatility scalar. For some reason when I request system.positionSize.get_volatility_scalar("CORN") I receive just a series of NaNs, but the subsystem position is somehow calculated. Don't really understand why is that happening

    ReplyDelete
    Replies
    1. Can you please raise this is an issue on github and include all your code so I can reproduce the problem

      https://github.com/robcarver17/pysystemtrade/issues/new

      Delete
    2. Rob,

      I guess I solved the problem. The issue was the DEFAULT_DATES was set up to December 2015, while data in legacycsv was up to May 2016. So the fx_rate USD-USD wasn't defined after December 2015 causing all the problems.

      Thank you for the fast response, I'm still getting familiar with GitHub.

      Delete
  17. Hi Rob, I tried to reproduce forecast weight estimation with pooling, and bootstrap, using this code

    from matplotlib.pyplot import show, title
    from systems.provided.futures_chapter15.estimatedsystem import futures_system

    system=futures_system()
    system.config.forecast_weight_estimate["pool_instruments"]=True
    system.config.forecast_weight_estimate["method"]="bootstrap"
    system.config.forecast_weight_estimate["equalise_means"]=False
    system.config.forecast_weight_estimate["monte_runs"]=200
    system.config.forecast_weight_estimate["bootstrap_length"]=104


    system=futures_system(config=system.config)

    system.combForecast.get_raw_forecast_weights("CORN").plot()
    title("CORN")
    show()

    The output came out different than your results,

    https://dl.dropboxusercontent.com/u/5114340/tmp/weights.png
    https://dl.dropboxusercontent.com/u/5114340/tmp/weights.log

    Did I have to configure somethings through YAML as well as Python code? It seemed like the code above was enough.

    Thanks,

    ReplyDelete
    Replies
    1. Maybe you have changed the config, because when I ran the lines you suggested I got the right answer (subtly different perhaps because of randomised bootstrapping, and because I've introduced costs). Try refreshing to the latest version and make sure there are no conflicts.

      Delete
  18. One coding question for the correlation matrix - Chapter 15 example system. With this code,

    http://goo.gl/2caO1K

    I get 0.89 for E$-US10 correlation, Table 46 in Systematic Trading says 0.35. I understand ST table combines existing numbers for that number, but the difference seems too big. Maybe I did something wrong in the code? I take PNL results for each instrument, and feed it all to CorrelationEstimator.

    Thanks,

    ReplyDelete
    Replies
    1. Another code snippet, this one is more by the book,

      goo.gl/txN63u

      I only left two instruments, EDOLLAR and US10, included two EWMACs and one carry, with equal weights on each instrument. I get 0.87 for correlation.

      Delete
    2. Yes, 0.35 is for an average across all asset classes. It's arguable whether STIR and bonds are different asset classes; which is why I grouped them together in the handcrafted example in chapter 15. Clearly you'd expect the US 10 year rate and Eurodollar futures to be closely correlated.

      Delete
    3. Thanks! Yes, I had the feeling these two instruments were closely correlated, just was not sure if my calculation was off somehow. Great. And since, according to ST, E$ and US10 are from different geographies that is a form of diversification, and the Ch 15 portfolio has a positive SR, so we're fine.

      Delete
  19. Dear Rob,

    where can I find information on how to calculate account curves for trading rule variations from raw forecasts?

    Do I assume I use my whole trading capital for my cash volatility target to calculate position size and then return, or should i pick certin % volatility target assuming ("guessing") in advanve a certain sharp ratio i'm planing to achieve on my portfolio?

    Thanks,
    Peter

    ReplyDelete
    Replies
    1. The code assumes we use some abstract notional capital and volatility target (you can change these defaults). Or if you use weighted curves https://github.com/robcarver17/pysystemtrade/blob/master/docs/userguide.md#weighted-and-unweighted-account-curve-groups it will give you the p&l as a proportion of your total capital.

      Delete
    2. Hi Rob, I am getting tangled up in how the weighted curve groups work, specifically accounts.pandl_for_all_trading_rules. Been over the user guide several times and still don't get it, so some questions:-
      - when I look at accounts.portfolio().to_frame and get the individual instrument component curves, they all sum up nicely to accounts.portfolio().curve()
      - when I look at accounts.pandl_for_all_trading_rules().to_frame() the individual rule curves look like they are giving a percentage (in the chart 15 config, curve rises from 0 -> 400 ish
      - I am guessing this is a percentage of notional capital, so I am dividing this by 100 and multiplying by notional
      - however I cannot get even close to accounts.portfolio.curve()
      - the shape looks very similar, the numbers differ from the portfolio curve by a suspiciously stable factor of 1.38
      - you point out in your user guide that "it will look close to but not exactly like a portfolio account curve because of the non linear effects of combined forecast capping, and position buffering or inertia, and rounding"
      - however I still cannot get them close even when I configure buffering to 0 and capping to high (1000)
      - clarifying questions:-
      - is the output of pandl_for_all_trading_rules().curve() in fact the percentage of notional capital or do I have that wrong?
      - when you say (user guide, panel_for_trading_rule) "The total account curve will have the same target risk as the entire system. The individual curves within it are for each instrument, weighted by their contribution to risk." what exactly do you mean by contribution to risk? Are we now talking about a percentage of the systems target volatility? (20% or 50K in this configuration)
      I appreciate any insights you can give here.

      Delete
    3. Can you send me a private mail?

      Delete
  20. Hi Rob,

    I'm searching for the historical data on the websites you mentioned in the book. I'm looking to the six instruments you also use in this post. On Quandl I can find continuous contracts but this use rollover method at contract expiry and there is no price adjustment. I'm wondering if this is good enough to backtesting because the effective rolling is total different then the (free) data from Quandl. Also with the premium subscription there are a limited methods for rolling. For example : if we roll corn futures in the summer and working only on december contracts, I think this is not possible with quandl (and I think also other data providers like CSIData.com). I'm thinking to write my own rolling methods myself. Is this a good idea and is it necessary to do this (=time consuming). How do you handle this problem ?

    Kris

    ReplyDelete
    Replies
    1. I wrote my own rollover code. Soon I'll publish it on pysystemtrade. In the meantime you can also get my adjusted data: https://github.com/robcarver17/pysystemtrade/tree/master/sysdata/legacycsv

      Delete
    2. Thank you so much for the link Rob! Very usefull for me. I can use this data to do my own calculations with my own program (written in VB.NET)
      What do you do with the gaps: fill it by the previous day values to have a value for each day so all dataseries are in sync ? Or skip the line with the result that the dataseries are not in sync ?

      Delete
    3. If I'm calculating rolldown which is PRICE - CARRY I first work it out for each day, so I'll have occasional Nans. I then forward fill the values, although not too early as I use the value of the forecast to work out standard deviations for scaling purposes, and premature forward filling will reduce the standard deviation.

      Delete
    4. OK, that's also the way I do. Calculate the forecasts on the raw data (so with gaps). Afterwards fill it to bring all instruments in sync so it's much easier to calculate PL.

      An other question about the legacycsv files from github : when I look for example V2X, I see the latest prices not exactly match with the individual contracts from Quandl. Am I missing something ?

      For example :
      file V2X_price.csv at 2016-07-01 : 25,4
      file V2X_carrydata.csv at 2016-07-01 : 25,4 and contract expiry is 201608
      (this 2 files matches so that's OK and I know you get the values from the august contract)
      If I go to the Quandl website and take this particular contract (https://www.quandl.com/data/EUREX/FVSQ2016-VSTOXX-Futures-August-2016-FVSQ2016) then I see the settlement for 2016-07-01 value 25.7

      Also checked this for Corn and this has also a small deviation. I suppose you use backwards panama ?

      What's the reason for this small deviations ?

      Delete
    5. The data from about 2.5 years ago isn't from quandl, but from interactive brokers.

      Delete
  21. This comment has been removed by the author.

    ReplyDelete
  22. Hi, Rob! I'm struggling with forecast correlation estimates used for fdm calculation, could you plz explain what is ew_lookback parameter and how exactly you calculate ewma correlations?



    E.g. With pooled weekly returns i use first ew_lookback = 250 data points to calculate ewma correlations, then expand my window to 500 data points and calculate correlations on this new set using 500 ewma e.t.c? Why use 250 and not t 52 if use weekly returns?

    Thank you!

    ReplyDelete
    Replies
    1. These are the defaults: frequency: "W"
      date_method: "expanding"
      ew_lookback: 500

      An expanding window means all data will be used.

      Yes the ew_lookback of 500 implies a half life of ~10 years on the exponential weighting. If you think that is too long then of course reduce it. Personally I don't see why correlations should change that much and I'd rather have a longer estimate.

      Delete
    2. So ew_lookback just specifies my decay factor which i then use for all the data points?

      How do i go about pooling? e.g. I have asset1 with history from 2010 to 2016 (10 trading rules and variations returns) and asset2 from 2008 to 2016 (10 trading rules and variations returns), do i just stack forecast returns to get total of 14 years of data and calculate correlations 10 x 10 on all of the data or what?

      I'm confused

      Delete
    3. Yes pooled returns are stacked returns https://github.com/robcarver17/pysystemtrade/blob/master/syscore/pdutils.py df_from_list is the critical function.

      Delete
  23. Hello, after looking through the python code, I wonder how you came up with the adj_factor for costs when estimating forecast weights? via simulation? THANKS!

    ReplyDelete
    Replies
    1. Please tell me which file you are looking at and the line number please.

      Delete
    2. syscore/optimisation line 322
      # factors .First element of tuple is SR difference, second is adjustment
      adj_factors = ([-.5, -.4, -.3, -.25, -.2, -.15, -.1, -0.05, 0.0, .05, .1, .15, .2, .25, .3, .4, .5],
                     [.32, .42, .55, .6, .66, .77, .85, .94, 1.0, 1.11, 1.19, 1.3, 1.37, 1.48, 1.56, 1.72, 1.83])


      def apply_cost_weighting(raw_weight_df, ann_SR_costs):
      """
      Apply cost weighting to the raw optimisation results
      """

      # Work out average costs, in annualised sharpe ratio terms
      # In sample for vol estimation, but shouldn't matter much since target vol
      # should be the same

      avg_cost = np.mean(ann_SR_costs)
      relative_SR_costs = [cost - avg_cost for cost in ann_SR_costs]

      # Find adjustment factors
      weight_adj = list(
      np.interp(
      relative_SR_costs,
      adj_factors[0],
      adj_factors[1]))
      weight_adj = np.array([list(weight_adj)] * len(raw_weight_df.index))
      weight_adj = pd.DataFrame(
      weight_adj,
      index=raw_weight_df.index,
      columns=raw_weight_df.columns)

      return raw_weight_df * weight_adj
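      A hypothetical usage sketch (the rule names and cost numbers are made up; note the ann_SR_costs are negative numbers, i.e. Sharpe ratio penalties, which is what the sign convention of the table implies):

      # Three rules with equal raw weights; the cheapest rule gets nudged up,
      # the most expensive gets nudged down
      raw_weights = pd.DataFrame(np.full((3, 3), 1.0 / 3),
                                 index=pd.date_range("2015-01-01", periods=3),
                                 columns=["ewmac16_64", "ewmac64_256", "carry"])
      print(apply_cost_weighting(raw_weights, [-0.10, -0.03, -0.01]))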

  24. Hello Rob,
    Would you consider making the ewma_span period for smoothing your forecast weights a variable instead of a fixed value, perhaps with some additional logic to detect the different volatility 'regimes' that are seen in the market? Or maybe such a notion is fair, but this is the wrong place to apply it, and it should be applied at the individual instrument level or in strategy scripts?

    Replies
    1. No, this smacks of overfitting. Put such evil thoughts out of your head. The point of the smooth is to reduce turnover on the first of January each year, not to make money.

    2. (goes to the blackboard to write "I will not overfit" 50 times)...sorry, I've read your statements on overfitting more than once, but had a lapse in memory when this question popped into my thick skull. Thanks for your response.

  25. Hi Robert,

    For the diversification multiplier you mention using exponential weighting. Where or how do you implement this? On the returns, or on the deviations of the returns from the expected returns (so just before the calculation of the covariances)? Or maybe somewhere else?

    Can you give me some direction?

    Thanks

    Kris

    Replies
    1. No, on the actual multiplier. It's calculated from correlations, updated annually. Without a smooth it would be jumpy on the 1st January each year.
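      In case it helps, the multiplier itself is 1 / sqrt(w'Hw), using the forecast weights w and the correlation matrix H of forecast values (with negative correlations floored at zero); the smooth is then applied to that series. A minimal sketch, with illustrative variable names:

      import numpy as np

      def diversification_multiplier(weights, corr):
          # 1 / sqrt(w' H w), with H the correlation matrix floored at zero
          w = np.asarray(weights)
          H = np.clip(np.asarray(corr), 0.0, None)
          return 1.0 / np.sqrt(w @ H @ w)

      # annual_estimates would be a pd.Series of yearly values; the ewma_span
      # smooth is then applied to that series after forward-filling to daily:
      # annual_estimates.reindex(daily_index, method="ffill").ewm(span=125).mean()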

    2. OK, but in this article I found two different parameters referring to exponential weighting:

      - under 'Forecast Diversification Multiplier' --> 'correlation' I found "using_exponent: True # use an exponentially weighted correlation, or all the values equally"

      - under 'Forecast Diversification Multiplier' --> 'Smoothing again' I found "ewma_span: 125 ## smooth to apply"

      I am a little bit confused about the two parameters. I understand that the second parameter (smoothing again) is to smooth the jump on the 1st of January each year.

      But what about the first parameter (correlation)? I thought that you use some kind of exponential weighting for calculating the correlations, but maybe I'm wrong? Sorry, but it is not so clear to me.

      Kris

    3. Sorry, yes I use exponential weighting a lot. With respect to the first, yes I calculate correlations using an exponential estimator: http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.ewmcorr.html

    4. Thanks for this tip!

      I always try to write my own code (I don't like depending on other people's code) and in any case I don't see how I can use the pandas libraries from VB.NET.

      But I've found the functions here :
      https://github.com/pandas-dev/pandas/blob/master/pandas/window.pyx --> EWMCOV

      and here :
      https://github.com/pandas-dev/pandas/blob/v0.19.2/pandas/core/window.py#L1576-L1596 --> corr
      So I can analyse how they do the stuff and write it in VB.NET.

      Kris

    5. I see that the ewm.corr function returns a list of correlations for each date, and not a correlation matrix.
      For the classic corr function the result is a matrix of correlation coefficients.

      In your code (https://github.com/robcarver17/pysystemtrade/blob/ba7fe7782837b0df0dea83631da19d98a1d8c84f/syscore/correlations.py#L173) I see you only take the latest value from the ewm.corr output for each year.
      I would expect that we must take some kind of average of all the correlation values for a pair to calculate the correlation coefficient for that pair. Can you clarify this? Thanks.

      Kris

    6. ewm.corr returns rolling correlations; each element in the list is already an exponentially weighted average of correlations. Since I'm doing the rolling through time process myself I only need the last of these elements.

    7. OK, but in your simulations you work with an expanding window and do the calculations yearly based on weekly data. If we use an EWM span of 125 it means the rolling correlations go back only roughly 3 years (125 * 5 days). So if, for example, the total period is 1990-2016, is the last element of the last calculation (1990-2016) then a correct estimate of the correlation over the whole period, given that data before about 2012 is effectively 'ignored'?

      Maybe it's then faster to work with a rolling out-of-sample frame to do these calculations?

      Or is my idea on this not correct ?

      Kris

    8. Well, 92% of the weight on the correlations will be coming from the last 3 years. So yes, you could speed this up by using a rolling out-of-sample window, although the results will be slightly different. 5 years would be better, as this gets you up to 99%.
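      If you want to reproduce those figures, a quick back-of-the-envelope check (assuming the span of 125 being discussed, applied to weekly observations):

      # fraction of exponential weight falling on the most recent N weeks,
      # for an EWMA with span S: alpha = 2 / (S + 1), fraction = 1 - (1 - alpha)**N
      span = 125
      alpha = 2.0 / (span + 1)
      for years in (3, 5):
          n_weeks = 52 * years
          print(years, round(1 - (1 - alpha) ** n_weeks, 3))   # ~0.92 and ~0.98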

  26. Rob, in your legacy.csv modules, some specific futures have the "price contract" as the "front month" (closest contract), like Bund, US20 & US10, etc.; meanwhile, others such as Wheat, gas, crude, etc. have the "carry contract" as the front month. Is this by design?

    Replies
    1. Yes. You should use a nearer month for carry if you can, and trade further out, but this isn't possible in bonds, equities or FX. See appendix B.

  27. Hi Rob,

    Thank you so much for your book. It is very educational. I was trying to understand more about trading rule correlations in "Chapter 8: Combined Forecasts". You mentioned back-testing the performance of trading rules to get correlations.

    Could you share a bit more insights on how you get the performance of trading rules, please?
    (1) Do you set buy/sell threshold at +/- 10? meaning that no position held when signal is [-10,10], only 1 position held when signal is [10,20] and [-20,-10] and 2 positions held when signal is at -20/+20?
    (2) Trading cost is considered? (I think the answer is yes.)
    (3) You enter a buy trade, say at signal=10. When do you signal to exit the trade? When signal<10 or signal=0?

    or you use dynamic positions, meaning the position varies with signal all the time.

    Another question regarding optimisation:
    In the formula f*w - lambda*w*sigma*w' used to estimate weights:
    (1) Is f the rules' Sharpe ratio calculated using the rules' historical performance pooled from all instruments, or just the Sharpe of the rule for the instrument we are looking at?
    (2) How do you define lambda? = 0.0001? If so, is it always 0.0001?

    Sorry if those two questions had been asked before.

    Thanks,
    Deano

    Replies
    1. To get the performance of a trading rule you run through the position sizing method in the book allocating 100% to a given trading rule.

      1) No, that isn't how the system works at all. Read the rest of the book before asking any more questions.
      2) yes - again this in discussed later in the book
      3) No, I use continuous positions. You need to read chapter 7 again as you don't seem to have quite got the gist.

      f*w - lambda*w*sigma*w'

      I don't think I've ever used this formula in my book, or on my blog, so I can't really explain it.

  28. Rob,

    Is one way to estimate correlations with nonsynchronous trading to run correlations on rolling 3-day returns over a lookback of 60 days? (which I know is much shorter than yours)

    Replies
    1. If you're using daily closing prices to calculate returns across different markets then nonsynchronous trading is clearly an issue.

      BUT NEVER, EVER, EVER USE ROLLING RETURNS!!!!! They will result in your volatility estimate being understated.

      Instead use non overlapping 3 day returns eg P_3 - P_0, P_6 - P_3, P_9 - P_6 where P_t is the price on day t.

      As for whether 3 days is enough, well even 2 days would help a bit with nonsynchronous data, although 3 days is better, and 5 days (which is what I use, i.e. weekly returns if you're in business day space) is better still.

      On a system trading at the kind of speed I trade at, using 60 days' worth of correlations probably is too short a period, since it isn't long enough to give you a stable estimate. It's also only 20 observations if you're using 3-day non-overlapping returns (it's even worse for weekly returns of course, only 12 observations). Your estimate will be very noisy.
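      As a sketch in pandas (function name illustrative):

      import pandas as pd

      def non_overlapping_returns(price, step=3):
          # price: a daily pd.Series; gives P_3 - P_0, P_6 - P_3, ... with no overlap
          return price.iloc[::step].diff().dropna()

      # weekly returns can be done the same way with step=5, or by resampling:
      # price.resample("W").last().diff().dropna()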

  29. Thank you! The idea actually came from the Betting Against Beta paper by the guys at AQR. They say they use overlapping (or rolling) 3-day log returns to calculate correlations over 120 trading days, to control for non-synchronous trading.

    I think it is safe to say you're disagreeing with their approach?

    Replies
    1. The guys at AQR know what they are doing and almost without exception are smarter than me. So it would be surprising if they'd done something crazy.

      Using overlapping returns is generally frowned upon (e.g. in OLS regression; essentially this is a bias versus variance problem). Using overlapping returns artificially increases the data you have (you only really have a completely new observation every 3 days) and that benefit must come at a cost.

      For correlations it *might* be okay; I rarely work with overlapping returns so I don't know their properties well enough to say whether they are fine or not. My intuition and a few minutes of playing with Excel suggest they will be fine with zero autocorrelation, but maybe not if autocorrelation comes in.

      But I don't see the point in using two types of returns - one to calculate vol, one to calculate correlation (in most software you work out a single covariance matrix which embodies both things).

    2. Yes, I agree. That was also my initial intuition. I compared the weekly non-overlapping approach with the overlapping 3-day approach over the same time-frame, and for the markets that are synchronous the correlation estimates are very similar. More importantly, for example, when I ran it on the Hang Seng the rolling 3-day approach was quite close to the weekly approach. So obviously there are some slight differences but, as you say, the approach doesn't seem like something crazy.

      Your response is much appreciated.

    3. Hi Rob, what is your view on using simple returns (weekly in your case) to compute correlations vs log returns (as per the AQR paper)? Why do you suppose some choose log returns; do you see any significant difference between the two? I found this http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1586656 on the topic, but am still not seeing the fundamental reason to use one vs the other.
      Appreciate your thoughts as always.
      Appreciate your thoughts as always.

    4. To be honest I haven't given this much thought. I can see why it will affect the calculation of volatility, but it's not obvious to me how it affects correlation.

  30. Good morning, Rob.
    When I run your ch_15 system with the default configs, trading rules, etc. unmodified, the stages run fine. If I substitute the legacy CSV files of several instruments with *intraday* 1-minute bars in the same 2-column format, both for the '_price' file and '_carrydata' (expiration months spaced from the current date as you showed in the legacy versions), spanning 5 days each, and re-run the system changing nothing except reducing the instrument_list entries, I get the error from line 530 (get_notional_position) of /systems/portfolio.py: "No rules are cheap enough for CRUDE_W with threshold of 1.300 SR units! Raise threshold (...), add rules, or drop instrument."
    I raised the threshold from the original 0.13 to 1.3, and in other tests as high as 100 (a ridiculous value of course, just testing...), with the same result. It seems I'm overlooking a simple principle of the system, but I can't figure out why, given the trading rules were left the same. Can you offer a pointer?

    Replies
    1. The answer to your question(s) is that I have not tested the system with non daily data so there is no guarantee it will work. I am pretty sure there are several places where I assume the data is daily (and you have unearthed one of them); some data is always resampled daily whilst others are not, so there could be some issues with mismatching and what not.

      The legacy data with multiple snapshots per day is an oversight (my code should resample to daily before burning the legacy data - that might not have happened on the version of the data you are using) - and indeed it may be causing some slightly unpredictable results in the last few years.

      In summary I would need to do a lot of testing before I was confident the code would work with non daily data, so I really wouldn't trust it for those purposes yet.

  31. Also, since many of the instruments in the legacy data have a lot of days near the final years of the records with more than one recorded value per day, it seems that using new CSVs with intraday data would be feasible in short order, provided the period of several calculations in other stages is changed to recognise minute-scale periodicity instead of days, no?
    Sorry in advance for my hasty monologue...

  32. P.S. for example, does the diversification multiplier need to be modified for interpreting 1-minute periods instead of sampling at end-of-day? What about volatility scaling floors currently set with daily period?

    Replies
    1. Just briefly looking at the intra-day posts in this thread, I can say I spent a considerable amount of time last year making the program intra-day compatible (I was using tradestation data). It's been some time since I looked at this project, but I can refer you to where I left off (I believe it works, but I haven't had anybody review my work): https://github.com/anthonywise/pysystemtrade/tree/tscompare

      Hope this helps

    2. Thank you, Gainz! I see that you have an additional folder in 'sysdata' and the 'price' CSVs are 15-minute entries. Can you comment on the adaptation scheme in the stages of the workflow? For example, looking at the /systems/rawdata.py script, only daily prices are mentioned. Did you go that far with changing the code to recognizing intraday, but left the names as 'daily', or is the actual adaptation still needed?

  33. Dear Mr. Carver,

    Most of all I always appreciate you for sharing detailed & practical knowledge of quantitative trading.

    I have a few questions while reading your book & blog posts.

    I am trying to develop a trend following trading system with ETFs using the framework in your book.

    The trading system is long-only and constrained by a leverage limit (100%).

    Under the constraints, what is the best way to use your framework properly?

    Are there any changes in calculating forecast scalars, forecast weights, FDM, IDM, etc.?


    My thought is...

    Solution 1.
    - Maintain all the procedures in your framework as if I could go long/short and had no leverage limit. (Suppose that I have a 15% target vol.)

    - When I calculate positions for trading, I just assign zero position for negative forecasts.

    And if the sum of long positions exceeds my capital I scale down the positions so that the portfolio is not leveraged.


    Solution 2.
    - Forecast scalar:
    No change. I calculate forecasts and scale them(-20 ~ + 20).

    - Forecast weights, Correlation:
    For each trading rule:
    + Calculate portfolio returns of pooled instruments according to the forecasts.
    + Replace returns for negative forecasts with zeros (zero position instead of short).
    + Scale down the returns for positive forecasts when the sum of long positions exceeds my capital.
    + Returns of trading rules are used when bootstrapping or calculating correlations.
    + Forecast weights are optimized using these returns.
    - FDM:
    + Calculate FDM based on forecast weights and correlations among the forecasts as your framework.
    + Calculate the historical participation (= sum(long positions)/my capital) using the new rescaled forecasts and forecast weights.
    + Check the median participation over the back-tested period.
    + If it exceeds 100%, scale down the FDM so that the portfolio does not take too much risk.

    Frankly speaking I don't know what the right way is. Neither way seems quite proper. Maybe it is because of my lack of understanding.

    Would you give any advice?

    I am really looking forward to your 2nd book. Thanks for reading.

    Best regards,

    Michael Kim

    Replies
    1. Solution 1 is correct. Chapter 14 actually discusses exactly this problem so it might be worth (re) reading it.

    2. I have read chapter 14 again and it was helpful.

      I should have checked the book before I asked.

      Thanks for reply.

  34. This comment has been removed by a blog administrator.

  35. Hi Rob, I have a question with regard to setting up data prior to optimising weights using bootstrapping. If we follow your advice, forecast returns are already standardised across instruments by dividing by, say, 36-day EWMA vol. However, I understand from the above example that it makes sense also to equalise vols and means. I assume the vol_equaliser fn does this by rescaling the time series of returns so that all the forecast distributions are virtually the same over the entire series (i.e. have identical Sharpes). The weights you derive would presumably be those of a min variance portfolio and therefore rely on a solution based entirely on the correlations between the returns. Is the above correct? I assume you recommend the same procedure for bootstrapping subsystem weights (i.e. equalise means and vol). Now when using pooled data for forecasts, my thinking is fuzzier: is it advisable not to equalise means or vol?

    Replies
    1. You're correct that the solution relies entirely on correlations in this case (in fact it's the optimal Sharpe, maximum return and the minimum variance solution).

      "Now when using pooled data for forecasts, my thinking is fuzzier: is it advisable not to equalise means or vol?"

      No, you should still equalise them. Basically the logic in all cases is this. Vol targeting equalises expected vol, but not realised vol, which will still be different. If realised vol goes into the optimiser then it will have an effect on the weights, which we don't want. To take an extreme example, if you have an instrument with a very short data history which happens to have a very strong forecast in that period, then its estimated vol will be unrealistically high and it will see its weights downgraded unless we equalise vols.

  36. Hi Rob, thanks for this. Just so I can get my head around your answer a bit better, a question on terminology: are 'estimated' vol and 'realised' vol the same thing, and equal to the vol used for standardisation (i.e. rolling historic 36d EWM vol)? As I understand it the two inputs into the optimisation you do are correlations and mean returns. So are you saying that if we relied merely on vol standardisation (using recent realised vol) then a period of high vol for an instrument with a short data history but high forecasts would lower the forecasts and their corresponding weights? I am failing to make the connection between high forecasts and the high price vol which is used for standardisation. I am sorry if I have completely missed the point.

    On a related point, and I should have asked this earlier: on page 289 of your book you recommend that prior to optimisation we should ensure 'returns have been vol normalised'. I assume this is the same as the 'equalisation' that you refer to in this post and not the same as standardisation (btw the term volatility normalised is in bold, so perhaps your publishers might consider putting a reference in the glossary for future editions before your book becomes compulsory reading for our grandkids).

    Replies

    1. There is rolling historical 36d EWMA vol used for position sizing. Realised vol is what actually happens in the future.

      When doing optimisation we use a different estimated vol - the vol of returns over a given period (the entire backtest if using an expanding window).

      The vol used for vol standardisation is the estimated vol of the p&l of the instrument subsystem returns. If all forecasts were identical, and we could predict vol perfectly, then this would automatically be the same for all instruments (because when we scale positions for instrument subsystems we target the same volatility per unit forecast). The fact that estimated vol isn't a perfect forecast adds some randomness to this; a randomness that will be problematic for very short data histories. More problematic again is that for instruments with short histories they will have different forecasts. An instrument with an average forecast of 10 over a year compared to another with an average of 5 will have twice the estimated volatility over that backtest period. But that is an artifact of the different forecast level - it doesn't mean we should actively downweight the higher forecast instrument.

      I agree I could have been stricter with the terminology I use about equalisation, normalisation and standardisation. There are at least three things going on, which are subtly different:

      a) many forecasts involve dividing something by estimated vol to get a forecast that is proportional to Sharpe ratio (normalisation)
      b) position scaling involves using estimated vol and assuming it is a good forecast of future vol
      c) equalisation of standard deviation estimates when doing optimisation, for all the reasons I've discussed (see the sketch below)
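      A minimal sketch of (c), assuming weekly returns and an arbitrary 16% vol target (only relative vols matter for the weights; equalising means is a separate, optional step):

      import pandas as pd

      def equalise_vols(weekly_returns, ann_target=0.16):
          # rescale each column so every asset has the same annualised standard
          # deviation over the window being optimised
          ann_vol = weekly_returns.std() * (52 ** 0.5)
          return weekly_returns * ann_target / ann_vol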

  37. Hi Rob, it's possible I worded my original question poorly,

    I will try to be more systematic, so please bear with me. To recap on my understanding:

    1.'vol normalisation' is what you do when standardising forecasts. This is typically done by dividing by rolling 36 day EWMA * current price
    2. 'vol standardisation' is what you do when standardising subsystems. I would again use rolling 36 day EWM vol (times block value, etc) for this
    3. 'vol equalisation' is what you do prior to optimisation to scale the returns over the entire (expanding) window so over this window they have the same volatility

    4. Assuming the above is correct, a subsystem position for a carry or EWMAC variation is proportional to expected return/vol^2 (which coincidentally seems to be proportional to the optimal Kelly scale - although not for the breakout rule).

    5. When I said originally 'Now when using pooled data for forecasts, my thinking is fuzzier: is it advisable not to equalise means or vol?', to be clearer I was trying to ask whether it makes sense to equalise vols prior to optimising forecast weights when pooling (not whether to equalise vols when optimising subsystem weights, if we had pooled forecasts previously). In a previous post ('a little demonstration of portfolio optimisation') you do an asset vol 'normalisation' which I believe is the same as the 'equalisation' discussed here (scale the whole window, although not done on an expanding window), but I got the impression for forecasts that the normalisation is handled as above and this took care of the need for further equalisation (for forecasts at least).

    I must admit, I had always thought that if you want to use only correlations and means to optimise then intuitively you should equalise vols in the window being sampled (because to quote you this reduces the covar matrix to a correlation matrix). However I had somehow accepted the fact that normalising forecasts by recent vol ended up doing something similar (also from reading some comments by you about not strictly needing to equalise vols for forecasts, etc). But I guess a different issue arises when pooling short histories?

    In summary, assuming you deciphered my original question correctly, are you saying it is still important to equalise vols of forecast as the 'realised' variance of forecast returns are proportional to the the level of the forecast (so a forecast of 10 would have twice the variance of a forecast of 5), causing the optimiser to downweight elevated forecasts, which is a problem when pooling short data histories? By equalising vols over the entire window being used for optimisation, we end up removing this effect? If that is what you are saying then I promise to go away and think about this much more deeply.

    Thanks again for taking the time.

  38. OK, I think I get it. I really had to have a think and run some simulations, but as far as I can tell there seem to be two effects in play here. The arithmetic returns from applying a rule to an instrument are the product of two RVs: instrument returns and forecasts. Assuming independence between these RVs, the variance can be shown to be a function of their first two moments. Over sufficiently long periods, these moments across different instruments are equal (they asymptotically converge). However over shorter periods there may be divergence (i.e. different averages, different vols), which will violate the assumption of equal vols required to be able to run the optimiser using correlations only. As far as I can tell there is also a more subtle effect, which arises from the fact that forecasts and instrument returns are not independent (EWMAC 2,8 and daily returns when using random Gaussian data have a correlation of 45%). This inconveniently introduces covariance terms into the calculation of vol. However, in the cross section of a single rule applied across different instruments over sufficiently long periods of time, the covariance terms should have an equal effect across all instruments. Again over short periods there may be divergence. This divergence in small samples from the assumed distribution of the population is presumably why it is sensible to equalise vols before optimising. Am I on the right track?

    BTW please feel free to delete my earlier comment.

    Replies
    1. You've got it. Couple of interesting points from what you've said though:

      - the fact that a forecast for carry comes out as mu/sigma is a nice property, but in general we only require that raw forecasts are PROPORTIONAL to mu/sigma. So some further correcting factor may be necessary (the forecast scalar)

      - in terms of multiple rules; obviously if weights added up to 1 and rules were 100% correlated you'd end up with a joint forecast with exactly the same properties as the individual forecasts. In theory the forecast diversification multiplier deals with this problem; with the caveat that it's backward looking and based on historic correlations so it only works in expectation.

    2. Many thanks Rob. The more I dig into your system, the more I appreciate the serious thought which has gone into engineering it. A couple of follow-up observations. From my comparatively low vantage point, I also see that from a practical point of view certain simplifications are desirable (at very little cost to robustness). Would you say that is the case with the FDM? Strictly, if it made a big difference, should we be adjusting for reduced portfolio volatility of forecast returns rather than forecasts themselves? Also, wrt FDMs would you pool all instrument data whether or not they share the same rule variations after costs?

    3. Patrick.

      Essentially the question is: what is the difference between the correlation of forecast RETURNS and the correlation of forecast VALUES? If returns are more correlated than values, the FDM will be too high (producing a trading rule forecast that has too much volatility). And vice versa. In fact the answer isn't obvious and depends very much on the properties of the signal and the underlying market.

      At the end of the day I think it's more intuitive to state the FDM in terms that it's ensuring the final combined forecast VALUE has the correct scaling properties. FDM would have to be way off for me to drop this point of view. And it usually is pretty close - in fact any effect is dominated by the problem that volatility and correlations aren't stable enough, which means that trying to hit a volatility target is an imprecise science at best.

      "Would I pool all instrument data if they didn't share the same rule variations" - ideally yes, but it's very complicated to do this - essentially the problem being to average a series of correlation matrices with some overlapping and non overlapping elements, and produce a well defined matrix at the end. There are techniques to do this, but it strikes me as overkill to bother...

  39. Hi Rob,

    Do you have the breakdown of subsystem signals for the Eurostoxx? You never get short in 2015, only less long? It looks like the market heads down quite a bit. Is this because of the carry signal dwarfing the trend signal? Optically, I can't line up the forecast weights with the chart.

    Thanks!

    Replies
    1. from systems.provided.futures_chapter15.basesystem import futures_system
      from matplotlib.pyplot import show

      system = futures_system(log_level="on")
      a=system.combForecast.get_all_forecasts("EUROSTX")
      b=system.combForecast.get_forecast_weights("EUROSTX")
      c=a*b
      c.plot()
      https://drive.google.com/file/d/0B2xHDlIRSeeXZXZjenU1QlRaQkk/view?usp=sharing

  40. Understood. And thank you again, Rob.

  41. Hi Rob, do you have any perspective on the (non)usefulness of using a different volatility measure than stddev for Sharpe (e.g. CVaR), consistent with a 95% CVaR as part of a 'Modified Sharpe'? It seems this would have more appeal when one is more concerned about the tails of non-normal returns, especially with volatility products like VIX.

    Replies
    1. It is appealing, but it depends on what your priority is. I am interested in expected risk being close to realised risk. Measuring realised risk with something that only uses the tails is subject to more parameter uncertainty so it's harder to know if you're doing it right. But if you're trading something highly non normal (like XIV) with too much leverage (like ... XIV) then it's worth doing

  42. Dear Rob, thanks for sharing your work in your books and via the website. I have a question regarding the weights of the trading rules. I am interested in seeing how closely the handcrafted weights would compare to the minimum variance portfolio weights using correlations (and assuming the same volatility for each rule). I have been trying to take some examples of correlation tables (representing a possible set of trading rules) where I have assigned correlations ranging between 0 and 1, typically 0.5 to 0.75. I then calculate the minimum variance weights by inverting the covariance matrix (or correlation matrix, as vols are the same). When I calculate the weights I find several have a negative weight. This doesn't make sense in the context of rules, as we would not 'short' a trading rule - instead we would discard it or reverse it. Is there an easy way to adjust for the constraint of needing all weights >= zero? Many thanks, Simon

    Replies
    1. Depends on what optimiser you are using, but most should allow you to set positive weights as a constraint. If you're assuming same vol and same sharpe ratio then I think (correct me if I am wrong) you will only get negative weights if you have negative correlations, so you could also try flooring these at zero.

    2. I did it in Excel by inverting the correlation matrix using MINVERSE. It seems strange, but the correlations were all positive, yet some of the weights were negative. For example, I think with three assets, all with standard deviation of 0.1, and correlations of AvB=0.85, AvC=0.25 and BvC=0.5 I calculate that A & C have positive weights (69% and 57%) and B has a negative weight (-26%). I have the constraint that the sum of weights = 100%, but I haven't worked out yet if I can incorporate minimum weights into the matrix approach.

    3. I guess you could try using goal seek instead of MINVERSE, but I'm not sure if it works with constraints. Sorry, optimisation in Excel is a pretty ugly thing that I have always tried to avoid doing.
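      Outside Excel this is straightforward with a constrained optimiser. A sketch in Python using the correlation matrix from your example (long-only constraint; with equal vols the covariance matrix is just a scaled correlation matrix):

      import numpy as np
      from scipy.optimize import minimize

      corr = np.array([[1.0, 0.85, 0.25],
                       [0.85, 1.0, 0.50],
                       [0.25, 0.50, 1.0]])

      def port_variance(w):
          return w @ corr @ w

      n = corr.shape[0]
      start = np.repeat(1.0 / n, n)
      bounds = [(0.0, 1.0)] * n                                      # no shorting a rule
      constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]

      result = minimize(port_variance, start, bounds=bounds, constraints=constraints)
      print(result.x)   # B's negative weight is floored at zero; A and C pick up the slack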

  43. This comment has been removed by a blog administrator.

  44. Robert, I bought your book recently and it's so good I can't put it down.

    Hope I'm not too late to the dance for this blog post.

    At present I'm trying to replicate the code to produce the final graph showing the Sharpe returns.

    I get a similar graphing pattern: https://ibb.co/QHYyMqt

    But notice the x-axis has 1e6. Large numbers like this also exist in the code output:

    [[('min', '-9.796e+04'), ('max', '2.721e+04'), ('median', '86.45'), ('mean', '134.6'), ('std', '3899'), ('skew', '-2.784'), ('ann_mean', '3.445e+04'), ('ann_std', '6.239e+04'), ('sharpe', '0.5522'), ('sortino', '0.6348'), ('avg_drawdown', '-5.859e+04'), ('time_in_drawdown', '0.9573'), ('calmar', '0.1773'), ('avg_return_to_drawdown', '0.588'), ('avg_loss', '-2465'), ('avg_gain', '2566'), ('gaintolossratio', '1.041'), ('profitfactor', '1.113'), ('hitrate', '0.5167'), ('t_stat', '3.439'), ('p_value', '0.000586')], ('You can also plot / print:', ['rolling_ann_std', 'drawdown', 'curve', 'percent', 'cumulative'])]

    Notice Sharpe ratio looks right but ann_std and other values are massive numbers.

    1. Is there a full code example that shows this particular Sharpe graph and I've been stupid and missed it?
    2. If not, is this likely due to me being unable to configure the system.config.forecast_correlation_estimate["func"] setting? [I get error AttributeError: module 'syscore' has no attribute 'correlations' if I uncomment that config line]
    3. If not, Is it something else I'm missing from my code?

    I know you're a very busy man, so I won't hold it against you if you don't have time to respond.

    But I'm so very close that any help, even a sentence or two response that points me in the right direction would be great.

    Here is my code:

    from matplotlib.pyplot import show, title
    from systems.provided.futures_chapter15.estimatedsystem import futures_system
    import syscore

    system=futures_system()

    system.config.forecast_weight_estimate["pool_instruments"]=True
    system.config.forecast_weight_estimate["method"]="bootstrap"
    system.config.forecast_weight_estimate["equalise_means"]=False
    system.config.forecast_weight_estimate["monte_runs"]=200
    system.config.forecast_weight_estimate["bootstrap_length"]=104
    system.config.forecast_weight_estimate["ewma_span"]=125
    system.config.forecast_weight_estimate["cleaning"]=True

    system.config.forecast_correlation_estimate["pool_instruments"]=True
    # system.config.forecast_correlation_estimate["func"]=syscore.correlations.CorrelationEstimator
    system.config.forecast_correlation_estimate["frequency"]="W"
    system.config.forecast_correlation_estimate["date_method"]="expanding"
    system.config.forecast_correlation_estimate["using_exponent"]=True
    system.config.forecast_correlation_estimate["ew_lookback"]=250
    system.config.forecast_correlation_estimate["min_periods"]=20

    system.config.forecast_div_mult_estimate["ewma_span"]=125
    # system.config.forecast_div_mult_estimate["floor_at_zero"]=True

    system=futures_system(config=system.config)

    print(system.accounts.portfolio().stats())

    system.accounts.portfolio().cumsum().plot()

    show()

    Replies
    1. These are in money terms, not %.

      system.accounts.portfolio().percent().stats()

      .... will give you more reasonable answers.

  45. Hi Rob,
    What might be a legitimate way to deal with unequal-length and/or asynchronous time series, specifically for correlation calculations? My brief search yielded the notion of truncating the longer series to match the shorter one.

    This could be applicable to cross-correlations for say NQ-100 versus DAX or Hang Seng Index futs, but also for markets on the same continent but different session hours & durations (e.g. ICE Coffee vs WTI Crude).

    Replies
    1. Unequal length: yes, you have to truncate.
      Asynchronous: use a lower frequency. I use weekly returns.

  46. Just bought your book about a week ago. Really useful stuff, Rob. Really appreciate it. Just getting started with trading. Hopefully I will make some things work out.

    Thanks!!!

  47. Hi Rob! I just have a question about chapter 15. I am using your spreadsheets (https://www.systematicmoney.org/systematic-trading-resources) but I have one big question. Maybe it is something silly but I can't figure it out.

    When you talk about "Each point is worth (c)" in the trading diary I don't know where this comes from. In your book it just says: <**For Euro Stoxx a point move in the futures price costs 10 Euros>.
    But 1% of a future worth 3370 is 33.70...
    And then I don't understand why the point value (c) in the trading diary is constant between October 2014 and December 2014 in the spreadsheet, while the price (b) changes.

    Thank you very much

    Replies
    1. "https://www.barchart.com/futures/quotes/FXU20/overview" it says '10 times index'. Basically if the index moves by 1 point (eg from 3370 to 3371) you will earn or lose 10 euros. Each future has a different multiplier and they are (nearly always) constant. However the value of a 1% move in the index (which will be point value * price * 1%) will change as the price changes.

  48. Hi Rob, have you experimented with conditional probability (e.g. Bayesian) as a substitute for standard correlation measures?

    I've read elsewhere that "Correlation causes many conceptual misinterpretations, especially related to causal structures."

    Replies
    1. Hi Chad. I've played with stuff like this https://core.ac.uk/download/pdf/162459222.pdf but not used it in anger. I doubt it would be much use in my standard framework, where the correlation matrix is the returns of trading strategies, which are relatively stable and pretty linear. It will probably fare better if the correlation matrix is of underlying returns, as you'd use in a classical portfolio optimisation.

  49. Above, when you are weighting the different signals (carry and lookbacks), are you accounting for both Sharpe and correlation or just correlation?
    It would seem that the correlations would favor the shortest and longest lookback since they are more diversifying but in theory the middle lookback should have a higher Sharpe because that should be the hump of a curve with Sharpe falling off as you approach mean reversion as the lookback gets too short or too long.

    Replies
    1. Hi Michael. The hump effect you describe is spot on; and is also confused somewhat by costs which make the faster lookbacks look worse. In the plots I did indeed include the mean as well as correlations. However my thinking on this subject has developed some more, so it might be worth reading the series of posts:
      https://qoppac.blogspot.com/2018/12/portfolio-construction-through.html (first post, read the series)
      Follow ups:
      https://qoppac.blogspot.com/2019/12/new-and-improved-sharpe-ratio.html

      https://qoppac.blogspot.com/2020/11/improving-use-of-correlations-in.html

  50. Hi Rob, I have a question about estimating forecast weights with bootstrapping. We need an expanding window performance curve for each rule variation for each instrument, right? (with some abstract notional capital, volatility target, etc.) And first we calculate forecasts, then scale and limit them to -20 ~ +20.
    My question is: how do we get the forecast scalars? If we use all the sample data to estimate forecast scalars, then backtest on that same sample, it seems like we are using something from the future, because we wouldn't actually know these "latest" forecast scalars at the beginning of the backtest.
    Because we limit forecasts to -20 ~ +20, the performance curve would change if we used a different forecast scalar. I know that forecast scalars seem to change little over the years, so maybe it's not a big problem?
    Apologies if I misunderstood. Thank you!

    Replies
    1. I don't bootstrap forecast scalars, I just estimate them using an expanding window of all historic data.
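      As a rough sketch of what that means (not the exact pysystemtrade code, which can also pool across instruments), targeting an average absolute forecast of 10 and using only data available at each date:

      def expanding_forecast_scalar(raw_forecast, target_abs=10.0):
          # raw_forecast: a pd.Series of the unscaled forecast; the expanding mean
          # of |forecast| only uses history up to each date, so no forward looking
          return target_abs / raw_forecast.abs().expanding().mean()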

    2. I should have checked your code and blog before I asked.
      I checked the function get_scaled_forecast and found that forecast_scalar is a time series; we only use the scalar that was available at that time, avoiding forward looking.
      Thanks for your reply!

  51. Hi Rob, I really appreciate that you're still replying to a blog written six years ago. So if you can see my comment, here is one question:

    Am I understanding correctly that in the combined forecast stage you are using two different correlations?:
    1) For the forecast weights calculation, be it bootstrapping or shrinkage, you use the correlation matrix calculated from the PERFORMANCE curves; and
    2) For the fdm calculation, you use the correlation matrix calculated from the FORECAST values.
    (the capitalization is supposed to be highlighting, instead of yelling, sorry about that)

    Thank you very much!

  52. Hi Rob,

    I'm not sure if you are still monitoring this space but regardless, I really appreciate your sharing your wisdom with us here. I've been reading your book, Systematic Trading, for the last couple of weeks with great admiration for your work.

    I'm struggling with calculating the forecast weights with bootstrapping. I've managed to generate scaled forecasts for my trading rules. Now I'm at the stage of combining forecasts.

    My question is:
    i) Do I use p&l for each trading rule and try to optimise the profit in order to calculate forecast weights?
    ii)If I use bootstrapping with pooling, for each run, do I calculate different set of weights per instrument that I'm pooling and get the average of these weights on each run and get a final average in the end to calculate the final weights?

    Your help would be much appreciated.
    Many thanks.

    Replies
    1. i) yes
      ii) You could do that (A), or you could bootstrap the weights for each instrument and then take an average (B), or you could do (C), a mixture of these: a 'run' is a single set of returns for a single instrument, and you take the average of those runs, so you're basically selecting both instruments and data each time. Different methods have pros and cons; they will or will not give more weight to instruments with more data, and they are more or less robust. I can't fit a full discussion here - there's another book in the works in a few years that might cover it... but I use C.
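      A sketch of method C (purely illustrative, not the library code; the optimise argument stands in for whatever single-sample optimiser you use):

      import numpy as np

      def pooled_bootstrap_weights(instrument_returns, optimise, n_runs=100, seed=None):
          # instrument_returns: dict of {instrument: DataFrame of rule returns}
          # optimise: function mapping a DataFrame of returns to a weight vector
          rng = np.random.default_rng(seed)
          names = list(instrument_returns)
          runs = []
          for _ in range(n_runs):
              inst = rng.choice(names)                           # pick an instrument...
              rets = instrument_returns[inst]
              rows = rng.integers(0, len(rets), size=len(rets))  # ...and a bootstrap sample
              runs.append(optimise(rets.iloc[rows]))
          return np.mean(runs, axis=0)                           # average over runs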

    2. Thanks a lot for your reply and I'm looking forward to buying your new book once you release it.
      I am planning to read the Advanced Futures Trading Strategies as soon as I get a good grasp on Systematic Trading.

      Just to double check I understand the logic correctly:
      i) When I use p&l for each trading rule to calculate forecast weights, I don't use the forecasts(and position sizing) of trading rules and just use fixed position based on entry and exit signals of each rule. Is that correct?
      ii) In bootsrapping with expanding window method, if the bootstrap length=100, do you pick random 100 continuous days from year 1 of your data in the first run and then another 100 continious random days from year 1 + 2 from your data on the next run? Is that logic is correct?

      Many thanks again Rob, I really appreciate your help.

    3. It would be much easier for me if you joined my ET thread here: https://www.elitetrader.com/et/threads/fully-automated-futures-trading.289589/ It's rather clunky to comment here, and there is a community of people who can answer your questions if I'm busy. Anyway, the answers are: (1) No, you should use the position sizing and forecasts. (2) No; for expanding window bootstrapping I'd actually pick 256 days with replacement after one year, 512 after two years, and so on.
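      A sketch of that sampling scheme (purely illustrative):

      import numpy as np

      def expanding_bootstrap_draws(n_obs, block=256, seed=None):
          rng = np.random.default_rng(seed)
          draws = []
          for end in range(block, n_obs + 1, block):
              # after y years: draw y*block observations, with replacement,
              # from everything seen so far (indices 0 .. y*block - 1)
              draws.append(rng.integers(0, end, size=end))
          return draws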

