ELO ratings have become one of the more popular ways of predicting soccer results – basically, people took the chess ELO rating system and transferred the same logic over to soccer. The work at Club ELO is one of the most ambitious, impressive soccer projects out there, and they deserve all the credit they’ve been getting recently. It’s an effective, simple-to-calculate, straightforward way of comparing teams.
There are two ways to judge the success of a model: its simplicity and its effectiveness. The original IRT model underpinning my predictions is fairly straightforward, although the new SVM is much more complex. The second (more important) comparison is how accurately the models predict results. So I’m going to benchmark my model against the Club ELO model throughout this season – I’m starting with the EPL, but I’m hoping to run the model on the other major soccer leagues as well. Week 1’s results are in the table below.
| Game | Murphy's SVM Predictions | Club ELO Predictions |
| --- | --- | --- |
| Manchester United v. Tottenham | 0.55 | 0.54 |
| Bournemouth v. Aston Villa | 0.42 | 0.26 |
| Norwich City v. Crystal Palace | 0.42 | 0.30 |
| Everton v. Watford | 0.34 | 0.24 |
| Leicester City v. Sunderland | 0.54 | 0.45 |
| Chelsea v. Swansea | 0.19 | 0.20 |
| Arsenal v. West Ham United | 0.03 | 0.09 |
| Newcastle United v. Southampton | 0.38 | 0.29 |
| Stoke City v. Liverpool | 0.24 | 0.34 |
| West Bromwich Albion v. Manchester City | 0.47 | 0.59 |
Each model assigns a predicted probability to each of the three outcomes (Win/Draw/Loss), and each model earns points equal to the probability it assigned to the outcome that actually occurred. For example, if a model said an outcome was 30% likely and that outcome happened, it earned 0.30 points.
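The scoring rule above can be sketched in a few lines of Python. The probability tables below are illustrative placeholders, not the actual week-1 numbers (the post only lists one probability per game):

```python
# Sketch of the scoring rule: a model earns the probability it assigned
# to the outcome that actually happened, summed across all games.

def score_model(predictions, results):
    """Sum the probability each prediction assigned to the actual result.

    predictions: list of dicts mapping outcome code -> probability,
                 where "H" = home win, "D" = draw, "A" = away win
    results: list of actual outcome codes, one per game
    """
    return sum(pred[actual] for pred, actual in zip(predictions, results))

# Hypothetical example: two games, a home win followed by a draw.
preds = [
    {"H": 0.55, "D": 0.25, "A": 0.20},
    {"H": 0.42, "D": 0.30, "A": 0.28},
]
actual = ["H", "D"]
print(round(score_model(preds, actual), 2))  # 0.55 + 0.30 = 0.85
```

A perfect-confidence model would earn 1.00 per game, so a weekly total out of 10 games caps at 10 points.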
This week was a rough one for both models – there were a lot of upsets (Chelsea’s draw and Arsenal’s loss stand out), so neither score was particularly high.
ELO performed a little better, earning 3.30 points compared to my SVM model’s 3.06. ELO was better on 6/10 games while I was better on 4/10, though we were effectively even on many of them. ELO seems to assign a higher probability to the draw in most games; the Chelsea and Everton draws cost me quite a few points, and my model was surprisingly low on Man City’s win over West Brom.
*EDIT: I think I transposed some of the columns in my initial predictions, and I got the numbers completely wrong. My model performed slightly better (3.59 points to 3.30 for ELO), and was better on 6/10 games compared to 4/10 for ELO. I’ve updated the table and future posts.
On the other hand, the SVM made up a lot of ground by picking Leicester City over Sunderland, and it liked West Ham’s chances to win a lot more than ELO did. Overall, I expect to pick up ground on upsets and lose ground on draws, which I’m happy with.