Regardless of Method, Prediction Models Basically Agree

A number of blog posts comparing the various prediction models have been going around lately. If you haven’t seen them, you should check out Alexej Behnisch’s piece comparing the models (the post where I got the data for this piece), and Constantinos Chappas’s work at StatsBomb doing a “poll of models.”

In his post, Alexej pointed out how many similarities there were between the different models’ predictions, and highlighted some of the major differences between them. However, something I’ve noticed in the past is that the middle of the table is basically one big tie, so predicting Southampton for 10th vs. 13th doesn’t necessarily mean there’s much of an actual difference between the models. My most recent heat map of probabilities might help illustrate what I mean:

Week 20-3 Heat Map

You can look at the brightness of the colors and almost see the clear boxes. The table has basically separated itself into five tiers: the top 2, the next 5 (contenders for Europe), the next 7 (mid-table malaise), the next 4 (partly safe with a chance of relegation), and the bottom 2 (getting ready for life in the Championship). One of the things Alexej’s article points out is the differences in 3rd-5th place predictions, but if you look at the probabilities in my model those spots are roughly tied. The mid-week results could easily see those three teams completely switch, and switch again after the weekend’s fixtures.[1]
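As an aside on where probabilities like these come from: a finishing-position heat map is usually built by simulating the remaining fixtures many times and tallying where each team ends up. Here’s a minimal sketch of that idea in Python. The team names, points, and match probabilities are all invented for illustration, and this isn’t necessarily the exact machinery behind my model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical mid-table teams: (current points, [P(win), P(draw), P(loss)]).
# These numbers are made up for illustration.
teams = {
    "Team A": (27, [0.38, 0.30, 0.32]),
    "Team B": (26, [0.36, 0.30, 0.34]),
    "Team C": (26, [0.35, 0.28, 0.37]),
}
matches_left = 18
n_sims = 10_000

position_counts = {name: np.zeros(len(teams)) for name in teams}

for _ in range(n_sims):
    # Simulate each team's remaining fixtures (3 pts for a win, 1 for a draw).
    finals = {
        name: pts + rng.choice([3, 1, 0], size=matches_left, p=probs).sum()
        for name, (pts, probs) in teams.items()
    }
    # Rank by simulated final points (ties broken arbitrarily here).
    for pos, name in enumerate(sorted(finals, key=finals.get, reverse=True)):
        position_counts[name][pos] += 1

# Probability of each finishing position -- note how close the rows end up.
for name, counts in position_counts.items():
    print(name, (counts / n_sims).round(3))
```

With underlying strengths this close, the finishing-position probabilities come out nearly tied, which is exactly the mid-table effect in the heat map above.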

So the question I’m interested in is: how similar are the various models’ predictions? In statistical terms, how well do the different models’ predicted point totals correlate with each other? The answer is: incredibly highly. In fact, they correlate so highly that I checked and re-checked my analysis against the original data several times because I didn’t believe it. Here’s a plot of the data to show you just how highly they correlate.
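If you want to run the same check on your own numbers, the core of the analysis is just a pairwise Pearson correlation. Here’s a minimal sketch in pandas; the model names come from this piece, but the point totals below are invented placeholders, not anyone’s actual predictions:

```python
import pandas as pd

# Hypothetical end-of-season point predictions for five clubs.
# Column names are real models from the post; the numbers are made up.
preds = pd.DataFrame({
    "Soccermetric": [84, 78, 70, 65, 52],
    "MC_of_A":      [86, 77, 71, 63, 50],
    "Market":       [85, 79, 69, 64, 51],
})

# Pairwise Pearson correlations between the models' predictions.
print(preds.corr(method="pearson").round(3))
```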

Correlation of Major Models

To see the correlation between two models, line up the name on the bottom row with the name on the left side. The lower diagonal shows a scatterplot for each pair of models, and all of the scatterplots show basically a 1:1 relationship with almost no variance from that line. This indicates a high correlation, and the exact number for each pair is given in the upper diagonal. The lowest correlation is between my model (Soccermetric) and Michael Caley’s (MC_of_A), and even that is 0.978, which is incredibly high. All of the major models here have basically the same predicted values for end-of-season points.
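For anyone who wants to build this kind of matrix themselves, here’s one way to do it with seaborn’s PairGrid, reusing the hypothetical `preds` DataFrame from the sketch above. This is my reconstruction of the plot’s layout, not the code that produced the actual figure:

```python
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

def annotate_corr(x, y, **kwargs):
    """Write the Pearson correlation in the center of an upper-diagonal cell."""
    r = np.corrcoef(x, y)[0, 1]
    plt.gca().annotate(f"r = {r:.3f}", xy=(0.5, 0.5),
                       xycoords="axes fraction", ha="center", va="center")

# `preds` is the DataFrame of predicted points from the earlier sketch.
g = sns.PairGrid(preds)
g.map_lower(plt.scatter, s=15)   # scatterplots below the diagonal
g.map_upper(annotate_corr)       # correlation values above the diagonal
g.map_diag(sns.histplot)         # each model's distribution on the diagonal
plt.show()
```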

With so little variation, there’s not much else to be squeezed out of these data, but I wanted to present one more comparison because it’s a question that has been on my mind. There are basically two types of models out there: pre-season prediction models and in-season models.[2] They use dramatically different data, so I’ve always been curious to see how the two types of models match up. I highlighted the cells comparing models that “match” (both in-season or both pre-season) in blue, and the cells comparing mis-matched models (one pre-season, one in-season) in red. The results are below.

Matched vs. Mis-Matched Model Correlations

There’s certainly no statistically distinguishable difference between the two groups. Mis-matched models (red cells) tend to have slightly lower correlations than the matched pairs (blue cells), but we’re looking at a comparison of roughly 0.98 to 0.99, so I don’t think it’s worth drawing any conclusions from. The similarity between all the models is striking to me, and at least by week 20 the in-season models seem to have roughly the same predictions as the pre-season models. This may be my bias as a pre-season modeler, but in my mind that speaks highly of the pre-season models.
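For completeness, here’s roughly how that blue/red comparison works, again reusing the hypothetical `preds` DataFrame from earlier. The pre-season/in-season labels below are assumptions for illustration only (I counted market data as in-season, per the footnote); check each model’s methodology before relying on them:

```python
import itertools
import numpy as np

# Classify each model as pre-season ("pre") or in-season ("in").
# These labels are illustrative assumptions, not definitive classifications.
model_type = {
    "Soccermetric": "pre",
    "MC_of_A": "in",
    "Market": "in",
}

corr = preds.corr()  # pairwise correlations from the earlier sketch

matched, mismatched = [], []
for a, b in itertools.combinations(model_type, 2):
    r = corr.loc[a, b]
    # A pair is "matched" when both models are the same type.
    (matched if model_type[a] == model_type[b] else mismatched).append(r)

print("matched mean r:   ", np.mean(matched).round(3))
print("mismatched mean r:", np.mean(mismatched).round(3))
```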

The moral of the story is that despite using such different inputs, these models produce roughly the same predictions, correlating at 0.97 or better. More importantly, they correlate with market expectations at a similar level, which either means the stats conform to the wisdom of crowds or that gamblers are listening to our models and betting accordingly. At the end of the day, look at the model inputs and pick the model you’re most comfortable with, but it looks like whichever one you pick, you’ll see roughly the same outcomes. It speaks well of the type of work being done online, and it’s an exciting time to be following soccer analytics.





  1. This isn’t a criticism of the piece – Alexej’s article makes an important point that there can be a huge difference in where a team finishes within these groups, and nowhere is this more true than in the 3rd-5th place spots.
  2. I didn’t know how to classify the EuroClubIndex because it uses both historical and current-season data, so I omitted it from this analysis. I counted market data as in-season data because the markets update every week with new information.
