Friday, 14 November 2025

Wordle (TM) and the one simple hack you need to pass funded trader challenges

An unusual (but quick) mid-month post; as this is a live issue I thought I'd publish it whilst it's relevant.

There has been some controversy on X/Twitter about 'pay to play' prop shops (see this thread and this one) and in particular Raen Trading. It's fair to say the industry has a bad name, and perhaps this is unfairly tarnishing what may pass for good actors in this space. It's also perhaps fair to say that many of those criticising these firms, including myself, aren't as familiar with that part of the trading industry and our ignorance could be problematic. 

But putting all that aside, a question I thought I would try and answer is this: how hard is it to actually pass one of these challenges? As a side effect, it will also tell us what the optimal vol target is to use if we're taking part in one of these challenges. Hence the clickbait article heading. I know from experience this will open me up to having to filter out 500 spam comments a day, but f*** it. 

As well as modelling Raen, I also model a much dodgier challenge later in the post, from another company which I will name only as prop firm #2. Finally I close with some generic and unquantified thoughts on the subject. 

Standalone Python code here. You can play with this to model other firms' challenges.

TLDR: 

  • Raen: you have a reasonable chance of passing their first round challenge, and you should use a vol target of [scroll down to find out!] to maximise your chances.
  • Prop firm #2 and most of the 'industry': use a very long bargepole, I can lend you mine
  • I remain skeptical of pay to play

As to what any of this has to do with the word game Wordle (TM), read on to find out.


IMPORTANT: This is not an endorsement of Raen. I have no association with them and I remain skeptical of this entire industry. Their CEO reached out to me after this blogpost was initially published, confirmed that my understanding of the challenge parameters was correct, and gave me permission to use the firm's name. I made one small correction to the post as a result of that contact.


The (relatively) good guys 

The rules of the Raen challenge are this:

  • You must make 20%
  • You can't lose more than 2% in a single day. There is no maximum trailing drawdown. So if you lose 1.99% every day forever, you're still in the game.
  • You must trade for at least 30 trading days before passing the challenge
  • It costs $300 a month to do the challenge. This isn't exactly what Raen charges (they charge a little more), but a rounder number makes it easier to back out how many months we expect the challenge to take from the cost per month. I assume this is paid at the start of the month.
Note: this is just the 1st stage of the challenge. The rules for the 2nd stage are much more nebulous, but to be fair there are no charges for those. Like I said, this prop firm appears to be amongst the relatively good guys. 
 
I've also got these parameters:
  • 256 business days a year, 22 business days a month (it's actually more like 21, but again this higher figure will make the prop firm look good)
  • Random gaussian returns generated with no autocorrelation. This is extremely kind as it ignores the chance of fat tails that are somewhat common in finance.
  • If we get stopped out we try again, which means restarting the challenge from scratch. There are no reset fees. I assume that a reset doesn't affect the timing of monthly fees (I can't find the answer to this question on the website, but it must be the case, as otherwise resetting would effectively be free: your best strategy would be to keep making massive bets every day, since you would pass eventually and only ever pay the first month's fee).
  • We give up if we can't pass after trying for a year (there are no time limits in the challenge, but this speeds up the computation and seems like reasonable behaviour).
  • I assume there are no other limits which make it hard to hit a given risk target. This is unlikely to be a constraint except for suboptimally high vol targets.
There are two clear variables we are missing: the expected Sharpe Ratio, and the vol, both required to generate the gaussian returns. The former is assumed to be exogenous (a question to answer is how hard are these challenges to pass - if you need a SR of 4 to pass them that suggests they are probably too hard), whilst the latter we can optimise for. Note that due to the drawdown and self imposed time limit the optimal vol target won't be equal to the usual Kelly optimal. In fact this subject is intellectually interesting as well as topical since it's the first time I've looked at optimisation with a drawdown/time constraint. 

I run this as a bootstrap exercise, with two things to optimise for: (a) minimising the median cost, and (b) maximising the probability of being funded before we give up.
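For concreteness, here is a minimal sketch of a single simulated challenge under the assumptions above. This is not the standalone code linked earlier; the function and parameter names are my own, and I read the 30 day minimum as applying within an attempt.

import numpy as np

DAYS_YEAR, DAYS_MONTH = 256, 22   # business day conventions from this post

def run_challenge(sharpe, ann_vol, target=0.20, daily_stop=0.02,
                  min_days=30, max_months=12, seed=None):
    # Returns (passed, months of fees paid); restarts from scratch after a
    # daily loss worse than the stop, and gives up after max_months
    rng = np.random.default_rng(seed)
    daily_mean = sharpe * ann_vol / DAYS_YEAR
    daily_sd = ann_vol / DAYS_YEAR ** 0.5
    cum_return, days_this_attempt = 0.0, 0
    for day in range(max_months * DAYS_MONTH):
        daily_return = rng.normal(daily_mean, daily_sd)
        if daily_return <= -daily_stop:
            cum_return, days_this_attempt = 0.0, 0   # stopped out; no reset fee
            continue
        cum_return += daily_return                   # simple, non compounded
        days_this_attempt += 1
        if cum_return >= target and days_this_attempt >= min_days:
            return True, day // DAYS_MONTH + 1       # fees paid monthly in advance
    return False, max_months

# e.g. bootstrap SR 1.5 at 15% vol:
runs = [run_challenge(1.5, 0.15) for _ in range(1000)]
prob_pass = np.mean([passed for passed, _ in runs])
median_cost = np.median([months for _, months in runs]) * 300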

OK so two simple graphs then. Each has a different line for each SR, and the x-axis is the vol target we are running at. The y-axis on graph one is the cost, with a minus sign so we have the natural thing of a high y-axis being good. On graph two the y-axis is the probability of passing before we give up, again obviously high y-axis is good.


Median cost of getting to stage two, lines are SR, x axis is annual vol target, y axis is cost (bigger minus numbers are higher costs)

Note that for SR/vol combinations where the chance of succeeding within a year is less than 50%, the median cost will equal the monthly cost * 12. This is the case for SR < 1.5



Probability of getting to stage two, lines are SR, x axis is annual vol target, y axis is probability of success



What conclusions can we draw from this?
  • The optimal vol target depends on your SR and whether you are focusing on costs or probability*
  • To get a greater than 50% chance of passing we need an expected SR of 1.5 or higher. 
  • The expected median cost with optimal vol is going to be $2,000 for a SR of 1.5, which you can get down to $1,500 if you are the next RenTech (SR of 3). 
  • The expected median time to pass is going to be about 7 months for a SR of 1.5 or about 5 months if you are the next RenTech
* Experts will recognise the vol target choice as the Wordle (TM) starting word problem (yes we finally got there). The best starting word for Wordle will depend on whether you are maximising your probability of winning, or trying to minimise the number of guesses you make. Similarly, are we trying to maximise our chance of passing the challenge, or minimising our likely cost? They are not quite the same thing.

The optimal vol looking at costs is around 15 - 20%. Looking at probability of passing, it's around 12% for very high SR traders, and more like 22% for low SR traders. Basically if you're crap you have to take a bit more risk to have a chance. If you're good you can chill. Given we're assuming Gaussian returns I'd be tempted to mark these figures down a bit, although note that for high SR traders using less than optimal vol is quite harmful (very steep lines) whilst using more than optimal is less painful (this is completely at odds with Kelly of course).

Since nobody knows what their SR is, I'd suggest using 15% as a vol target. If you are incredible that is slightly more than optimal, but you still have an 80% chance of passing. If you are less incredible it may be slightly less than optimal, but then you have no business passing this challenge anyway.


The not so good guys: prop firm #2


Here is an example of another firm's level 1 challenge. I won't name them, but they are currently on the first page of Google results for the search term "trading prop challenge", so that narrows it down. This firm has several challenge tiers in the futures space; I've chosen the lowest, but all the conditions are the same, just with different notional capital and $ costs. 

The rules of the challenge are this:

  • You must make 6%
  • The maximum drawdown is 4%; trailing based on daily balances.
  • If you lose more than 2% in a day, well, basically you're stopped out at 2% but the challenge doesn't end. So your max loss in a day is 2%. In practice it would be slightly more because of slippage, but let's be generous here.
  • There is a one time activation fee of $130 (again not exact figures, but ballpark).
  • You have to do the challenge in 30 days. It costs $100 to start each challenge. If you want to extend the challenge by 30 days it costs another $100. This equates to a monthly fee of $100, so we'll model it like that.
  • If you need to reset (start again because you've gone boom) it's $80. This is on top of the monthly cost since it doesn't reset the number of days to zero before you have to pay a monthly fee again.
  • There are optional data fees we will ignore, because there are enough fees here already.
Now, it's worth saying that there are many other terms and conditions that make firm #2 much dodgier and less likely to fund you or give you your profit share once funded (of course we're assuming firm #1 sticks to their word as well); but we're purely here to model the challenge itself.
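If you want to adapt the earlier simulation sketch to these rules, the main structural change is the 4% trailing drawdown test on daily balances (the fee schedule is just different constants). A hypothetical helper, with names of my own invention:

import numpy as np

def breached_trailing_drawdown(daily_balances, max_dd=0.04):
    # daily_balances are cumulative returns, so 0.04 means a 4% drawdown;
    # True if the account ever falls more than max_dd below its running peak
    balances = np.asarray(daily_balances)
    running_peak = np.maximum.accumulate(balances)
    return bool(np.any(running_peak - balances > max_dd))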

Here are the graphs:


Median cost of getting to stage two, lines are SR, x axis is annual vol target, y axis is cost (bigger minus numbers are higher costs)


Probability of getting to stage two, lines are SR, x axis is annual vol target, y axis is probability of success

This is not what I had expected. I had expected the challenge to be much harder, so the firm could keep collecting the fees. But this challenge is easy to pass: just use a vol target of more than 25%. Basically you get to flip a coin a couple of times, and sooner or later it will turn up heads. This strategy will work even if you are a losing trader (SR -0.5) as shown. The only benefit of being a better trader is that you will pass quicker and thus pay less. 

This is an incredibly badly designed challenge. It rewards higher volatility. It doesn't discriminate at all between good and bad traders. 

So either (i) there are other conditions in the (very hard to find) small print that in practice make the challenge hard to pass, or (ii) it's a deliberate strategy to allow almost anyone to get to the next stage. The biggest red flag is that trading with this particular firm is sim only even after you have passed the challenge. They don't want to make the initial challenge too hard; they want you as a paying customer ASAP. And people who use too much vol are ideal customers for bucket shops.


The prop firm's view

Of course what we're not doing here is looking at things from the prop firm's point of view. The challenge is designed to answer the question: "is this potential trader any good or just lucky?". At least that is if you are assuming they are genuinely looking for good traders. Which prop firm #2 definitely isn't, so let's focus on Raen.

The main shortcoming of these challenges is that 30 days, or even a year, is a wholly insufficient time to determine if anyone has any skill, unless they are very highly skilled indeed. And again, to be fair, the initial challenge of Raen is purely a screening exercise that will essentially tell you (a) if someone has a vague idea of how to manage risk and avoid a 2% daily drawdown and (b) is either very good (SR somewhere over 1) or just very lucky.

Someone who shoots for a vol target that is too high will almost certainly fail. However there is still a chance of a crap trader being lucky. But hopefully the second stage will weed them out. So we aren't too worried about type 1 errors.

However even relatively highly skilled traders (say SR 1 to 1.5) will only have a coinflip chance of passing. So there is still quite a big chance of a type 2 error and missing out on the next Nav Sarao*. Perhaps that's okay. They're probably only interested in people with a SR of over 2 anyway, where the passing percentage for a year will be over 60% if they use optimal vol. Of course I'm assuming all these people have several thousand dollars to stump up for a year's worth of monthly fees. There will be many who don't, and who therefore also miss out on potentially being funded even if they are good traders. So I would say the possibility of a type 2 error is quite high.

* This famous gentleman, who for all his faults was an incredibly successful futures prop trader even when he wasn't breaking the law.
 
I'd say on balance that Raen's challenge is relatively well designed given all the caveats. It's simple, its difficulty rating feels about right, and the 2% daily drawdown acts as a simple anti muppet filter. The fact there is an optimal vol is satisfying. It would be interesting to see their internal numbers on how many people pass the first and then the second challenge, and then go on to become good traders. That would tell us what their type 1 error actually is. 


But is this all really a good idea? Some unquantified and unqualified opinions

Putting aside the statistical debate, is this all really a good thing? For the traders trying out, or for the firms themselves (again, assuming they are genuine)? There are many red flags in this industry. Having to pay to be considered for a 'job' is always bad (although Raen's CEO clarified to me that they also accept applicants who haven't passed the challenge, presumably with some kind of filter on experience); the fact that many places are purely bucket shops where you trade against the broker is awful (again, not Raen). Frankly the whole thing makes my stomach churn, but I'm trying to be as fair as possible here and put emotions aside.

The world of trading has changed an awful lot. In this post the founder of Raen says their shop is for people who would never have the opportunity to get into Jane Street (JS). But JS is looking for people with a very particular set of skills to do a certain kind of trading, which you can't do unless you have the sort of resources JS has. 

Yes we can argue that the Jane Street filter is too strict (though they hired SBF, a man who did not understand how to size trading positions, so maybe not strict enough), but it's pretty silly to pretend that Jane Street would be interested in hiring the sort of people who have the ability to be point and click futures traders. It's really not the same at all.

Raen apparently has ex JS people working there and they are 'very successful'. I am sure they are. I'm also sure that they're almost certainly not point and click traders either. But is it really realistic to replicate JS by hiring a completely different set of people, without any of the filters JS uses to get specific sets of skills, using a totally different process from what JS uses, and without most of JS's resources; and then sit them next to ex JS traders from whom they will presumably absorb brilliance by osmosis?

Basically, if for some reason you are trying to be the next JS, why are you using a hiring process which is clearly for point and click traders? There are no references to APIs that I can see on any of these challenge websites, so I assume it's point and click they are looking for.

So, is the world of point and click prop traders too inaccessible? It's probably more accessible than it was 20 years ago from an IT and cost perspective. But admittedly, if you're not trading costly and dodgy retail assets, and want to trade futures, then no, you can't really do this with the $3,000 or so you'd need to pass even a good trading challenge. The $100k of (notional? real?) money you get from Raen is the bare minimum I suggest in my book. You would need less in equities, though to day trade eg US equities under the PDT rule you need $25k (for now). 

But from my perspective, the whole point and click futures industry seems very... niche. The vast majority of professional traders now are basically quants, or heavily supported by quants, and/or using data other than the charts and order books fancied by the dozen monitor setups of the cliched point and click trader. It's an area of the market that really is very efficient and where the vast majority of point and click humans can't compete even if supported by execution algos, which is why I deliberately trade much... more... slowly. 

In fact I'd say there are now significantly more people employed by the likes of JS than by genuine and profitable point and click firms. 

So we're talking about getting access to a relatively tiny industry that is frankly a bit quaint and probably still shrinking. I can understand why many people want to get into it though. Who wouldn't want to gamble for a living, make millions of dollars a year, in a job which requires no qualifications (no PhD in astrophysics needed here!), which so many films and YouTube videos have glamorised, and which either requires almost no work or where hard work and effort will be rewarded (depending on which video you watch). 

I can believe that there are a very small number of people who have pointed and clicked for so long, that they really do have an ability to 'feel' a particular market very well, they can glance at an L2 order book and see patterns, they know which news and statistics to focus on, they know what other markets to look at, they know how to manage risk and size positions, they have built execution algos that enable them to compete not with HFT but certainly in the sub one day area... and they are certainly better traders than me or my systems. 

As to how you would select such people, I do not know. They are not my people. Personally I am very skeptical as to whether there really are people who can sit at a computer having never traded futures before except maybe in a simulator, with no training or market experience, and have some innate trading ability that enables them to have a high probability of passing a trading challenge with the sort of SR that would make most hedge funds weep with envy, and also that those challenges are the best way of being able to tell that someone has that innate ability. 

But once again, I'm not in this industry so what do I know. 

Summary

Raen: not a bad initial screening, and a more than 50% chance of a pass with a SR above 1.5. But it will cost you more than $300. Budget for several thousand bucks and use a vol target of around 15% to optimise your chances.

Unnamed prop firm #2 that I googled, and most of this industry: stay away, for god's sake.

Pay to play: morally dubious IMHO

Random futures traders having some sort of innate talent that can be found in this way: I doubt it

Tuesday, 11 November 2025

Is predicting vol better worth the effort and does the VIX help?

 I'm a vol scaler.

There I've said it. Yes I adjust my position size inversely to vol. And so should you. 

But to do this well we need to be able to predict future vol, where the 'future' here is roughly how long we expect to hold our positions for.

Some people spend a lot of effort on this. They use implied vol from options, high(er) frequency data, GARCH or stochastic vol models. Other people don't spend a lot of effort on this. They look at the vol from the last month or so, and use that. I'm somewhere in the middle (though biased massively towards simplicity); I use an exponentially weighted moving average of recent vol combined with a much slower average.

An obvious question with any research effort is this: is the extra effort worth it? If we were trading options, then sure it would be. But we're not.

In this post I answer that 'is it worth spending time on this nonsense' question and look at the actual improvements we can gain from moving from the most rudimentary vol forecasting to the slightly more complex stuff I do. I also see if we can use a simple indicator of future volatility - the VIX - to improve things further. This was suggested by someone on Twitter(X). 


Is it worth predicting vol better?

I've mentioned this experiment a few times in the past, but I don't think I have ever blogged about it. Basically you run two backtests: one with your normal historic vol estimation, and the other with perfect foresight, equal to the ex-post vol over the next 30 days. The latter represents the theoretical best possible job we could do if we really worked hard at forecasting vol. We can't do any better than a crystal ball. 

Then you check out the improvement. If vol is worth forecasting, there will be a big improvement in performance.

[This is a 'workhorse' test simulation with 100 liquid futures and 4 signals: 40% carry, and 20% in each of EWMAC 16, 32 and 64]

We begin with the simplest possible predictor of vol, a backward looking standard deviation estimate with an infinite window. Essentially this is a fixed vol estimate without any in sample estimation issues. We then compare that to the perfect foresight model.
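In pandas terms the two bookends look something like this: a sketch, assuming a daily returns Series called returns, and annualising with the 256 day year from earlier.

import numpy as np

ANNUALISER = np.sqrt(256)

def infinite_window_vol(returns):
    # backward looking: the std dev of everything seen so far
    return returns.expanding().std() * ANNUALISER

def perfect_foresight_vol(returns, horizon=30):
    # cheating: the realised std dev over the *next* horizon days
    return returns[::-1].rolling(horizon).std()[::-1] * ANNUALISER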

Let's begin by looking at what the vol outcome is like. This is the one month rolling vol estimate (the realised vol of the strategy returns); clearly foresight does a better job of vol targeting.


Above are the cumulated returns. That sure looks like a decent improvement and as the vol of perfect foresight is lower it's better than it looks. It's a half unit improvement in SR points, from 0.76 to 1.24. The skew has dropped off from over 1.0 monthly to 0.12, but you know from my previous posts that small dip in skew won't be enough to destroy the huge CAGR advantage given by this sort of SR premium. The sortino is much better, more than double. 

So the short answer is yes, it's worth predicting vol better. Let's see how close we can get.


What size window

The obvious thing to do is to shorten our estimation window from forever to something a little shorter. Here is a graph I like to show people:

The x-axis shows the window size for a historic vol estimator in business days. The y-axis shows the R squared regressing the realised vol for a given future time period against the estimator / predictor of future vol. We're looking for the point on the x-axis that maximises R squared. Each line is a different length of future time period. So for example, to get the best prediction of vol one month ahead (about 21 business days) we look at the purple line for 21 days, and we can see this peaks at around 25 days. 

This is also the highest R squared overall. We are better at predicting vol one month ahead than at other horizons, and to do so we should use the previous one month of vol (actually slightly more than a month). 

We don't do quite as well predicting shorter periods, and it looks like we might need slightly less data to predict eg 5 day vol. We do worse predicting longer periods, and it looks like we need more data: for 365 day ahead vol, the best R squared is obtained somewhere between 40 days (around 2 months) and 100 days (around 5 months). 

Note: these are good R squareds! In my last post a monthly holding period with an R squared of 0.1 would give us a SR of over 1, which is good. Here we are seeing R squareds of over 0.30, which equates to a SR of nearly 2. That is very good - if we were as good at predicting returns as we are at predicting vol, our SR would be nearly two!
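Each point on those lines comes from a calculation like this sketch (same assumed daily returns Series; the variable names are mine):

import pandas as pd
import statsmodels.api as sm

def r_squared_for_window(returns, window, ahead=21):
    trailing_vol = returns.rolling(window).std()
    # realised vol over the next `ahead` days, not including today
    future_vol = returns[::-1].rolling(ahead).std()[::-1].shift(-1)
    both = pd.concat([trailing_vol, future_vol], axis=1, keys=["x", "y"]).dropna()
    fit = sm.OLS(both["y"], sm.add_constant(both["x"])).fit()
    return fit.rsquared

# one line of the graph: {w: r_squared_for_window(returns, w) for w in range(5, 150, 5)}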

With that in mind, let's go from an infinite lookback to a 25 business day lookback and see what happens.

First the rolling vol:

We can already see a fair improvement from the spikiness of the benchmark. How about the returns?

It looks like we are doing better than the benchmark and are competitive with foresight. However some of this is higher vol; our SR is 1.03, which still falls short of the 1.24 of the perfect foresight model, though it is obviously much better than the infinite lookback benchmark.

To recap:

Infinite previous vol                  SR 0.76
One month simple rolling vol           SR 1.03
Perfect foresight                      SR 1.24


From simple to exponential moving average

Now let's be a little fancier and go to an EWM of vol rather than a simple equally weighted measure. This might not get us a better forecast of vol, but it should be smoother. A 36 day span in the pandas EWM function has the same half life as a 25 day SMA.
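In pandas that's a one liner (a sketch, same assumed returns series; the half life arithmetic is shown for the record):

import numpy as np

ewm_vol = returns.ewm(span=36).std() * np.sqrt(256)   # annualised EWM vol

# half life of an EWM: alpha = 2 / (span + 1), half life = ln(0.5) / ln(1 - alpha)
alpha = 2 / (36 + 1)
half_life = np.log(0.5) / np.log(1 - alpha)           # ~12.5 days for span=36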

As before, here's the vol targeting, which is now almost identical:


And for profits....


Again we aren't quite vol matched, but EWM does in fact add a small increment in SR of 0.04 units. Around a quarter of that modest bump comes from lower costs (a saving of around 24 bp a year). 


Infinite previous vol                  SR 0.76
One month simple rolling vol           SR 1.03
One month EWM rolling vol              SR 1.06
Perfect foresight                      SR 1.24


I already looked at this in my book AFTS, but if we combine the standard EWM vol above with a very long run average (10 years) of the same vol, we get another small bump. This is the vol measure I use myself.
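As a sketch, with 2,560 business days standing in for ten years (the 70/30 weights are the ones from AFTS, but treat them as illustrative):

import numpy as np

short_vol = returns.ewm(span=36).std() * np.sqrt(256)
long_run_vol = short_vol.rolling(2560, min_periods=256).mean()   # ~10 year average
blended_vol = 0.7 * short_vol + 0.3 * long_run_vol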


Introducing the VIX

We are still some way short of getting close to perfect foresight vol. So let's do something else, for fun. We know that implied vol should be a good predictor of future vol; accounting for the well known vol premium (we get paid for being short gamma, hence implied is persistently higher than expected future vol).

Here's the simple rolling 25 day standard deviation measure for the S&P 500, and the VIX:

Note: I would like to thank Paul Calluzzo for pointing out a stupid mistake I had made in the first version of this post

A couple of things to notice. Firstly, the vol premium is larger after 2008 due to a general level of scaredy-cat-ness, and it seems to have narrowed somewhat in the last few years. Over the last few years there have been a lot of dumb retail people selling vol and pushing the price down! 

Secondly, it looks like the VIX tracks rather than predicts increases in risk, at least for those unexpected events which cause the biggest spikes. Which suggests its predictive power will be somewhat limited.

If we regress future vol on historic vol plus the VIX, the VIX coefficient is 0.14 and the historic vol comes in at 0.71. That suggests historic vol does most of the explaining, with the VIX not adding much to the party. I get similar results if I put the vol premium (VIX - historic vol) plus historic vol into the regression to reduce potential colinearity. 
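The regression is a couple of lines with statsmodels (a sketch, assuming three aligned daily Series: future_vol, the realised vol over the next month; hist_vol, the trailing 25 day estimate; and vix):

import pandas as pd
import statsmodels.api as sm

data = pd.concat([future_vol, hist_vol, vix], axis=1,
                 keys=["future", "hist", "vix"]).dropna()
fit = sm.OLS(data["future"], sm.add_constant(data[["hist", "vix"]])).fit()
print(fit.params)   # this post reports hist ~0.71 and vix ~0.14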

Summary

There are significant performance benefits to be gained from forecasting vol well even in a directional system that doesn't trade optionality. Over half of those benefits can be captured by just using the right amount of lookback on a simple historical estimate. Further complexity can probably improve vol targeting but is unlikely to lead to significant performance improvements. Finally, the VIX is not especially helpful in predicting future volatility; mostly this is explained pretty well by historic vol.



Saturday, 1 November 2025

R squared and Sharpe Ratio

Here's some research I did whilst writing my new book (coming next year, and aimed at relatively inexperienced traders). Imagine the scene. You're a trader who produces forecasts (a scaled number which predicts future risk adjusted returns, or at least you hope it does) and who wants to evaluate how good you are. After all, you've read Carver, and you know you should use your expected Sharpe Ratio to determine your risk target and cost budget.

But you don't have access to cutting edge backtesting software, or even dodgy home brew backtesting software like my own pysystemtrade; instead you just have Excel (substitute your own favourite spreadsheet, god knows I certainly don't use the Micros*it product myself). You're not enough of a spreadsheet whizz to construct a backtest, but you can just about manage a linear regression. But how do we get a Sharpe Ratio from a regression?

If that is too much of a stretch for the typical reader of this blog, instead imagine that you do fancy yourself as a bit of a data scientist, and naturally you begin your research by regressing your risk adjusted returns on your forecasts to identify 'features' (I'm given to understand this is the way these people speak) before going near your backtester, because you've read Lopez De Prado.

Feels like we're watching a remake of that classic scene in Good Will Hunting doesn't it "Of course that's your contention. You're a first year data scientist. You just finished some financial economist, Lopez De Prado prob'ly, and so naturally that's what you believe until next month when you get to Rob Carver and get convinced that momentum is a risk factor. That'll last until sometime in your second year..."

But you're wondering whether an R squared of 0.05 is any good or not. Unlike the Sharpe Ratio, where you know that 1 is good, 2 is brilliant, and 3 means you are either the next RenTech or, more likely, you've overfitted.

So I thought it would be 'fun' to model the relationship between these two measures of performance. Also, like I said, it's useful for the book. Which is very much aimed at the tech novice trader rather than the data scientist, but I guess the data scientist can just get the result for free from this blogpost as they're unlikely to buy the book.

There are three ways we can do this. We can use a closed form formula, we can use random data, or we can use actual data. I'm going to do all three: partly to verify the formula works in the real world, and partly to build some intuition with clean random data before looking at messy real results.

There is code here; you'll need pysystemtrade to run it though.

Edit notes: I'd like to thank LacertaXG1 and Vivek Rao for reminding me that a closed form formula exists for this problem.


Closed form formula

From the book known only as G&K we have one of my favourite laws, LAM - the law of active management. This is where the famous 'Sharpe Ratio (actually Information Ratio, but we're amongst friends) is proportional to sqrt active bets' comes from, a result we use in both portfolio size space (the IDM for a portfolio of N uncorrelated assets ought to be sqrt N) and in time space (for a given success rate the SR for a trading strategy with holding period T will be sqrt 2 times better if we halve our holding period). 

Anyway, under LAM at an annual holding period an R squared of 0.01 equates to an IC/SR of 0.10. We'd expect the same R squared to result in a SR sqrt(256) = 16 times higher, i.e. 1.6, at a daily holding period.
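Spelling out the arithmetic, for a holding period of N business days (this is the same closed form that reappears at the end of the post):

SR = IC * sqrt(bets per year) = sqrt(R squared) * sqrt(256/N) = 16 * sqrt(R squared / N)

Let's see how well this is borne out by the data.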


Real data and forecasts

This is the easiest one. We're going to get some real forecasts, for things like carry, momentum. You know the sort of thing I do. If not, read some books. Or if you're a cheapskate, the rest of this blog. And we get the price of the things the forecasts are for. And because I do indeed have fancy backtesting software I can measure the SR for a given forecast/price pairing*. 

* to do this we need a way of mapping from forecast to positions, basically I just do inverse vol position scaling with my standard simple vol estimate which is roughly the last month of daily returns, and the overall forecast scaling doesn't really matter because we're not interested in the estimated coefficients of the regression just the R squared.

And because I can do import statsmodels in python, I can also do regressions. What's the regression I do? Well, since forecasts are for predicting future risk adjusted returns, I regress:

(price_t+h - price_t)/vol_estimate_t = alpha + beta * (forecast_t) + epsilon_t 

Where t is time index, and h is the forecast horizon in calendar days, which I measure simply by working out the forecast turnover (by counting the typical frequency of forecast sign changes from negative to positive in a year), and then dividing 365 by the turnover. 

Strictly speaking we should remove overlapping periods, as that will inflate our R squared, but as long as we consistently don't remove overlapping periods then our results will be fine.

Beta we don't care about as long as it's positive (it's some arbitrary scaling factor that will depend on the size of h and the forecast scaling), and alpha will be any bias in the forecast which we also don't care about. All we care about is how well the regression fits, and for that we use R squared. 

Note: We could also look at the statistical significance of the beta estimate, but that's going to depend on the length of time period we have. I'd rather look at the statistical significance of the SR estimate once we have it, so we'll leave that to one side. 
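Putting the footnote and the regression together, each scatter point in the plots below comes from something like this sketch (names are mine; for simplicity h is treated as a number of rows in the daily series rather than calendar days):

import pandas as pd
import statsmodels.api as sm

def forecast_regression(price, vol_estimate, forecast, h):
    # future risk adjusted return, regressed on the forecast
    future_ra_return = (price.shift(-h) - price) / vol_estimate
    both = pd.concat([future_ra_return, forecast], axis=1, keys=["y", "x"]).dropna()
    fit = sm.OLS(both["y"], sm.add_constant(both["x"])).fit()
    return fit.rsquared, fit.params["x"] > 0   # we only care that beta is positive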

Anyway we end up with a collection of SR and the counterpart R squared for the relevant regression. Which we'll plot in a minute, but let's get random data first.


Random data

This is the slightly harder one. To help out, let's think about the regression we're going to end up running:

(price_t+h - price_t)/vol_estimate_t = alpha + beta * (forecast_t)  + epsilon_t 

And let's move some stuff around:

 (forecast_t)  

     = ((1/beta)*(price_t+h - price_t)/vol_estimate_t) 

      - (alpha/beta) - (epsilon_t/beta) 

If we assume that alpha is zero, and we're not bothered about arbitrary beta scaling, then we can see that:

 (forecast_t)  

     = ((price_t+h - price_t)/vol_estimate_t) + noise

This means we can do the following:
  • Create a random price series, compounded gaussian random is fine, and scaling doesn't matter
  • Measure its backward looking vol estimate
  • Work out the future risk adjusted price return at any given point for some horizon, h
  • Add noise to it (as a multiple of the gaussian standard deviation)
  • Voila! As the French would say. We have a forecast! (Or nous avons une prévision! as the French would say)
We now have a price, and a forecast. So we can repeat the exercise of measuring a SR and doing a regression from which we get the R squared. And we'll get the behaviour we expect; more noise equals lower SR and a worse R squared. We can run this bad boy many times for different horizons, and also for different levels of noise.
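Here's that recipe as a sketch (noise_sd is the multiple of the gaussian standard deviation; the names are mine):

import numpy as np
import pandas as pd

def make_price_and_forecast(n=2500, h=21, noise_sd=5.0, seed=None):
    rng = np.random.default_rng(seed)
    price = pd.Series(rng.normal(0.0, 1.0, n)).cumsum() + 1000   # scaling doesn't matter
    returns = price.diff()
    vol_estimate = returns.rolling(25).std()                     # backward looking vol
    future_ra_return = (price.shift(-h) - price) / vol_estimate
    forecast = future_ra_return + rng.normal(0.0, noise_sd, n)   # add noise: voila
    return price, forecast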


Results

Without any further ado, here are some nice pictures. We'll start with the fake data. Each of the points on these graphs is the mean SR and R squared from 500 random price series. The x-axis is a LOG scale for R squared: 10^-2 is 0.01 and so on, you know the drill. The y axis is the SR. No logging. The titles are the forecast horizons in business days, so 5 days is a week, etc etc.

As we're trading quickly, we get pretty decent SR even for R squared that would make you sad. An R squared of 0.01, which sounds rubbish, gives you a SR of around 0.7. 

Here's around a monthly holding period:


Two months:


Three months:


Six months:

And finally, one year:



Right, so what are the conclusions? There is some fun intuition here. We can see that an R squared of 0.01 equates to a SR of 0.1 at an annual holding period, as the theory suggests. It's also clear that an R squared of 0.1, which is very high for financial data, isn't going to help that much if your holding period is a year: your SR will still only be around 0.30. Whereas if you're trading fifty times faster, around once a week, it will be around 2.30 with an R squared of 0.1. The ratio between these two numbers (7.6) is almost exactly equal to the square root of fifty (7.1), and this is no accident; our results are in line with the law of active management, which is a nice touch.

Neatly, an R squared of 1 equates exactly to a SR of 1 at a one year holding period.

Now how about some real results? Here we don't know what the forecast horizon is; instead we measure it from the forecast. This does mean we won't have neat graphs for a given horizon, but we can do each graph for a range of horizons. And we don't have to make up the forecast by reversing the regression equation, we just have forecasts already. And the price, well of course we have prices.

Important note! Unlike with fake data, where we're unlikely to lose money on average, with real data we can lose money. So we remove all the negative SR before plotting.

Here's for a horizon of about 5 days:

No neat lines here; each scatter point represents an instrument and trading rule (probably mostly fast momentum). Remember this from earlier for the 5 day plot with fake data: "An R squared of 0.01, which sounds rubbish, gives you a SR of around 0.7". You can see that is still true here. And also the general shape is similar to what we'd expect; a gentle upward curve. We just have more really low SR, and (sadly!) fewer higher SR than in the fake data.

About two weeks:

About a month:

About two months:

About three months:

About six months... notice things are getting sparser:

And finally, about a year:
There is very little to go on here, but an R squared of 0.1, which before gave a SR of 0.3, isn't a million miles away at 0.5. In general I'd say the real results come close to confirming the fake results.


Summary

Both data scientists and neophyte traders alike can use the fake data graphs to get a SR without doing a backtest. Do your regression at some forecast horizon for which a fake data graph exists. Don't remove overlapping periods. If the beta is negative then you're losing money. If the beta is positive then you can look up the SR implied by the R squared.

You can also use any graph, and then correct the results for LAM. For example, if you want the results for 1 day, then you can use the results for 5 days and multiply the SR by sqrt(5). But perhaps you want a closed form solution. So here is one, assuming 256 business days in a year:

The SR for N days holding period is equal to 16 * sqrt(R squared / N)
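Or as a trivial bit of Python, with the two sanity checks from earlier:

import numpy as np

def sr_from_r_squared(r_squared, n_days):
    return 16 * np.sqrt(r_squared / n_days)

sr_from_r_squared(0.01, 256)   # 0.1 at an annual holding period
sr_from_r_squared(0.01, 1)     # 1.6 at a daily holding period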