Saturday, 1 November 2025

R squared and Sharpe Ratio

 Here's some research I did whilst writing my new book (coming next year, and aimed at relatively inexperienced traders). Imagine the scene. You're a trader who produces forecasts (a scaled number which predicts future risk adjusted returns, or at least you hope it does) and who wants to evaluate how good you are. After all, you've read Carver, and you know you should use your expected Sharpe Ratio to determine your risk target and cost budget.

But you don't have access to cutting edge backtesting software, or even dodgy home brew backtesting software like my own pysystemtrade; instead you just have Excel (substitute your own favourite spreadsheet, god knows I certainly don't use the Micros*it product myself). You're not enough of a spreadsheet whizz to construct a backtest, but you can just about manage a linear regression. But how do we get a Sharpe Ratio from a regression?

If that is too much of a stretch for the typical reader of this blog, instead imagine that you do fancy yourself as a bit of a data scientist, and naturally you begin your research by regressing your risk adjusted returns on your forecasts to identify 'features' (I'm given to understand this is the way these people speak) before going anywhere near your backtester, because you've read Lopez De Prado.

Feels like we're watching a remake of that classic scene in Good Will Hunting doesn't it "Of course that's your contention. You're a first year data scientist. You just finished some financial economist, Lopez De Prado prob'ly, and so naturally that's what you believe until next month when you get to Rob Carver and get convinced that momentum is a risk factor. That'll last until sometime in your second year..."

But you're wondering: is an R squared of 0.05 any good or not? Unlike the Sharpe Ratio, where you know that 1 is good, 2 is brilliant, and 3 means you are either the next RenTech or, more likely, you've overfitted.

So I thought it would be 'fun' to model the relationship between these two measures of performance. Also, like I said, it's useful for the book. Which is very much aimed at the tech novice trader rather than the data scientist, but I guess the data scientist can just get the result for free from this blogpost as they're unlikely to buy the book.

There are two ways we can do this. The easy way and the hard way; just joking. We're going to do it the easy way. The hard way would involve tedious maths, and nobody wants that. The easy way involves nice pictures.

Actually there genuinely are two (easy) ways we can do this. We can use random data. Or we can use real data and forecasts. I'm going to do both then combine the results.

There is code here; you'll need pysystemtrade to run it though.


Real data and forecasts

This is the easiest one. We're going to get some real forecasts, for things like carry and momentum. You know the sort of thing I do. If not, read some books. Or if you're a cheapskate, the rest of this blog. And we get the price of the things the forecasts are for. And because I do indeed have fancy backtesting software, I can measure the SR for a given forecast/price pairing*.

* To do this we need a way of mapping from forecast to positions. Basically I just do inverse vol position scaling with my standard simple vol estimate, which is roughly the last month of daily returns. The overall forecast scaling doesn't really matter, because we're not interested in the estimated coefficients of the regression, just the R squared.
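As a rough sketch of that forecast-to-position mapping (this is illustrative, not the actual pysystemtrade code; the function name and the 25 business day EWMA span are my assumptions):

```python
import numpy as np
import pandas as pd

def inverse_vol_position(forecast: pd.Series, price: pd.Series,
                         vol_window: int = 25) -> pd.Series:
    """Turn a forecast into a position via inverse vol scaling.

    A vol_window of ~25 business days is roughly 'the last month of
    daily returns'. Any constant scaling factor is irrelevant for
    the R squared we care about later.
    """
    daily_returns = price.diff()
    # backward looking vol estimate of daily price changes
    vol_estimate = daily_returns.ewm(span=vol_window).std()
    return forecast / vol_estimate
```

Doubling the forecast doubles the position, and halving the vol doubles it too, which is all that inverse vol scaling means here.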

And because I can do import statsmodels in python, I can also do regressions. What's the regression I do? Well, since forecasts are for predicting future risk adjusted returns, I regress:

(price_t+h - price_t)/vol_estimate_t = alpha + beta * (forecast_t) + epsilon_t 

Where t is the time index, and h is the forecast horizon in calendar days, which I measure simply by working out the forecast turnover (by counting the typical frequency of forecast sign changes from negative to positive in a year), and then dividing 365 by the turnover.
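Here's one way to code up that turnover-based horizon measurement; a sketch under my own assumptions (the function name is made up, and it will blow up if the forecast never changes sign):

```python
import numpy as np
import pandas as pd

def forecast_horizon_days(forecast: pd.Series) -> float:
    """Estimate the forecast horizon in calendar days.

    Turnover = typical number of sign changes from negative to
    positive per year; horizon = 365 / turnover.
    """
    sign = np.sign(forecast)
    # count flips from negative to positive
    flips = int(((sign.shift(1) < 0) & (sign > 0)).sum())
    years = (forecast.index[-1] - forecast.index[0]).days / 365.25
    turnover_per_year = flips / years
    return 365.0 / turnover_per_year
```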

Strictly speaking we should remove overlapping periods, as they will inflate our R squared, but as long as we are consistent about not removing overlapping periods, our results will be fine.

Beta we don't care about as long as it's positive (it's some arbitrary scaling factor that will depend on the size of h and the forecast scaling), and alpha will be any bias in the forecast which we also don't care about. All we care about is how well the regression fits, and for that we use R squared. 

Note: We could also look at the statistical significance of the beta estimate, but that's going to depend on the length of time period we have. I'd rather look at the statistical significance of the SR estimate once we have it, so we'll leave that to one side. 

Anyway, we end up with a collection of SRs and the counterpart R squared from each relevant regression. We'll plot those in a minute, but let's get random data first.


Random data

This is the slightly harder one. To help out, let's think about the regression we're going to end up running:

(price_t+h - price_t)/vol_estimate_t = alpha + beta * (forecast_t)  + epsilon_t 

And let's move some stuff around:

 (forecast_t)

     = ((1/beta)*(price_t+h - price_t)/vol_estimate_t)

      - (alpha/beta) - (epsilon_t/beta)

If we assume that alpha is zero, and we're not bothered about arbitrary beta scaling, then we can see that:

 (forecast_t)  

     = ((price_t+h - price_t)/vol_estimate_t) + noise

This means we can do the following:
  • Create a random price series; compounded Gaussian random is fine, and scaling doesn't matter
  • Measure its backward looking vol estimate
  • Work out the future risk adjusted price return at any given point, for some horizon h
  • Add noise to it (as a multiple of the Gaussian standard deviation)
  • Voila! As the French would say. We have a forecast! (Or: nous avons une prévision!)
We now have a price, and a forecast. So we can repeat the exercise of measuring a SR and doing a regression from which we get the R squared. And we'll get the behaviour we expect; more noise equals lower SR and a worse R squared. We can run this bad boy many times for different horizons, and also for different levels of noise.
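The bullet points above translate into code along these lines; again a sketch, with the 25 day vol span, the date range, and the function name all being my own choices:

```python
import numpy as np
import pandas as pd

def make_random_price_and_forecast(n_days: int, horizon: int,
                                   noise_multiplier: float, seed: int = 0):
    """Random walk price plus a forecast built by adding noise to
    the future risk adjusted return."""
    rng = np.random.default_rng(seed)
    idx = pd.date_range("2000-01-01", periods=n_days, freq="B")
    # create a random price series; scaling doesn't matter
    price = pd.Series(np.cumsum(rng.normal(0, 1, n_days)), index=idx)
    # backward looking vol estimate
    vol_estimate = price.diff().ewm(span=25).std()
    # future risk adjusted price return over the horizon
    fwd_risk_adj = (price.shift(-horizon) - price) / vol_estimate
    # add noise as a multiple of the standard deviation... voila, a forecast
    noise = rng.normal(0, noise_multiplier * fwd_risk_adj.std(), n_days)
    forecast = fwd_risk_adj + noise
    return price, forecast
```

More noise gives a forecast less correlated with future returns, hence a lower SR and a worse R squared, as the text says.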


Results

Without any further ado, here are some nice pictures. We'll start with the fake data. Each of the points on these graphs is the mean SR and R squared from 500 random price series. The x-axis is a LOG scale for R squared: 10^-2 is 0.01 and so on, you know the drill. The y-axis is the SR. No logging. The titles are the forecast horizons in business days, so 5 days is a week, etc etc.

As we're trading quickly, we get pretty decent SR even for R squared that would make you sad. An R squared of 0.01, which sounds rubbish, gives you a SR of around 0.7. 

Here's around a monthly holding period:


Two months:


Three months:


Six months:

And finally, one year:



Right, so what are the conclusions? There is some fun intuition here. It's clear that an R squared of 0.1, which is very high for financial data, isn't going to help that much if your holding period is a year. Your SR will still only be around 0.30. Whereas if you're trading fifty times faster, around once a week, it will be around 2.30. The ratio between these two numbers (7.6) is almost exactly equal to the square root of fifty (7.1), and this is no accident; our results are in line with the law of active management, which is a nice touch.
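A quick back-of-the-envelope check on that last claim:

```python
import math

# SRs read off the fake data graphs at an R squared of 0.1
sr_weekly = 2.30   # roughly weekly holding period
sr_yearly = 0.30   # roughly yearly holding period

ratio = sr_weekly / sr_yearly    # how much better is trading 50x faster?
prediction = math.sqrt(50)       # law of active management: SR scales with
                                 # the square root of trading speed
print(round(ratio, 2), round(prediction, 2))  # → 7.67 7.07
```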

Now, how about some real results? Here we don't know what the forecast horizon is; instead we measure it from the forecast. This does mean we won't have neat graphs for a given horizon, but we can do each graph for a range of horizons. And we don't have to make up the forecast by reversing the regression equation; we just have forecasts already. And the price, well of course we have prices.

Important note! Unlike with fake data, where we're unlikely to lose money on average, with real data we can lose money. So we remove all the negative SR before plotting.

Here's for a horizon of about 5 days:

No neat lines here; each scatter point represents an instrument and trading rule (probably mostly fast momentum). Remember this from earlier for the 5 day plot with fake data: "An R squared of 0.01, which sounds rubbish, gives you a SR of around 0.7". You can see that is still true here. And also the general shape is similar to what we'd expect; a gentle upward curve. We just have more really low SR, and (sadly!) fewer higher SR than in the fake data.

About two weeks:

About a month:

About two months:

About three months:

About six months... notice things are getting sparser:

And finally, about a year:
There is very little to go on here, but an R squared of 0.1, which with fake data gave a SR of 0.3, isn't a million miles away here at 0.5. In general I'd say the real results come close to confirming the fake ones.


Summary

Data scientists and neophyte traders alike can use the fake data graphs to get a SR without doing a backtest. Do your regression at some forecast horizon for which a fake data graph exists*. Don't remove overlapping periods. If the beta is negative then you're losing money. If the beta is positive then you can look up the SR implied by the R squared.


* Incidentally, it doesn't matter if the forecast horizon isn't super accurate. Data science type people will probably do their regressions one day ahead, or maybe two days ahead if they're worried about lookahead bias bleeding into things. They're assuming that forecasts are realised monotonically in the future, so basically ignoring any weird return autocorrelation conditional on forecasts (eg a positive momentum forecast over the next three months might also predict mean reversion in the next few days). You can still use the graph for shorter periods even if your forecast is slower. A slow forecast will naturally have a lower R squared if you run the regression at a faster frequency, and hence a lower SR.

