So the model didn’t perform as well in Week 31 as it did in Week 30. In fact, it did about as well as “chance”: its most likely outcome occurred 3 times, its second most likely outcome occurred 3 times, and its third most likely outcome occurred 4 times.
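For context, a pure chance baseline would pick uniformly among the three possible outcomes (home win, draw, away win), getting each match right with probability 1/3. Over a 10-match week that works out to about 3.3 correct picks, which is roughly what the model's top picks managed. A quick sanity check:

```python
# "Chance" baseline: three possible outcomes per match
# (home win / draw / away win), guessed uniformly at random.
n_matches = 10      # a full Premier League week
p_correct = 1 / 3   # probability a uniform guess is right

expected_correct = n_matches * p_correct
print(round(expected_correct, 2))  # → 3.33 correct picks by chance
```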
So what happened? First, the correct predictions: Chelsea unsurprisingly won (the model REALLY likes Chelsea at home against anyone), and Swansea over Hull City wasn’t much of a surprise either. The model also did well on the marquee game of the weekend, Arsenal v. Liverpool. That’s actually a pretty tough game to pick: Liverpool seems to have returned to form after winning 12 of 13, and after a slow start Arsenal has been playing its best football in five years.
Objectively, the 2nd-most-likely-outcome picks were pretty solid. I think intuition would have had Tottenham as a much bigger favorite over Burnley, so predicting a 1/3 chance of a draw seems pretty reasonable. I’m actually really pleased with the QPR win over West Brom: I didn’t see the betting line, but I have to think QPR was a significant underdog on the road against a decent West Bromwich Albion team. Giving them a 29% chance of winning and a 45% chance of a draw is more generous than I would have been.
Manchester United represents a specific challenge for my model: I posted that it seemed off that they were given only a 33% likelihood to win at home against Aston Villa, and my instinct was right. The model consistently underrates Manchester United, likely due to their uninspiring early-season form. It’s doing so again this weekend at home against a slumping Manchester City, but we’ll see what happens.
Finally, the complete misses. Leicester City over West Ham and Crystal Palace over Man City have to be considered pretty big upsets (although Palace has looked good recently). I’m not sure there’s anything I can do about results like that, and the model does quantify just how unlikely they were.
Everton v. Southampton and Sunderland v. Newcastle have to be considered bad misses. Southampton has been in good form this year and Everton is obviously underperforming, but giving Everton an 8% chance to win at home was still a miss. Sunderland v. Newcastle was a tough one, but 13% for Sunderland to win at home seems an underestimation of their chances.
I’ve made some changes to the model this week: specifically, I’ve added covariates to the prediction model. I tried a few different specifications that included recent form, transfer spending, and goals scored/goals allowed, and the best model in terms of predictive power and parsimony simply predicted total points as a function of goals for and goals allowed. Transfer spending added only a little predictive power, and, surprisingly, so did recent form (I was actually concerned recent form would be too collinear with the outcome variable, but that turned out not to be true), so for now I’m sticking with just the two explanatory variables. This summer and fall will be devoted to using player data to predict outcomes, so those variables will likely enter the model at some point as well. A quick check shows the new model gives Manchester United a little more credit: it gave them a 38% win likelihood last weekend and something like a 42% chance to draw. The win was still only the 2nd most likely outcome, but it’s closer, so I’m happy with it for now.
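The covariate model described above can be sketched as an ordinary least-squares fit of total points on goals for and goals allowed. This is a minimal sketch of the approach, not the actual model, and the team figures below are invented purely for illustration:

```python
import numpy as np

# Illustrative sketch: predict a team's total points from
# goals for (GF) and goals allowed (GA) via least squares.
# These numbers are made up, not real league data.
GF = np.array([70, 55, 48, 62, 40, 50], dtype=float)
GA = np.array([30, 40, 50, 35, 55, 38], dtype=float)
points = np.array([76, 54, 44, 64, 33, 51], dtype=float)

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(GF), GF, GA])
coef, *_ = np.linalg.lstsq(X, points, rcond=None)
intercept, b_gf, b_ga = coef

# Predict points for a hypothetical team with 58 GF and 42 GA.
pred = intercept + b_gf * 58 + b_ga * 42
print(round(float(pred), 1))
```

The fitted coefficients behave as you'd expect, with goals for contributing positively to predicted points and goals allowed negatively; the point prediction could then feed whatever outcome-probability step the model uses.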
Looking forward to seeing this week’s outcomes and how well the model does. I like most of the picks, so we’ll see what happens.