Week 24 EPL Model Comparisons

This is the third week of my in-season results-based model (TAM), and I wanted to continue comparing its performance to MOTSON’s. My goal is, with enough data, to learn as much as I can about the advantages and disadvantages of the two approaches, how they might complement each other, and how I can improve my predictions for next season. You can see last week’s post here, and I’ll be updating every week. Below is a table with each model’s modal category and the actual result.

Game | MOTSON | TAM (In-Season) | Actual Result
Norwich v. Tottenham | Tottenham | Draw | Tottenham
West Ham v. Aston Villa | West Ham | West Ham | West Ham
Leicester City v. Liverpool | Leicester City/Draw (equal) | Leicester City | Leicester City
Crystal Palace v. Bournemouth | Crystal Palace | Bournemouth | Bournemouth
Arsenal v. Southampton | Arsenal | Arsenal | Draw
Sunderland v. Man City | Man City | Man City | Man City
Man United v. Stoke City | Man United | Draw | Man United
West Brom v. Swansea City | Draw | Draw | Draw
Watford v. Chelsea | Chelsea | Draw | Draw
Everton v. Newcastle | Everton | Everton | Everton

MOTSON did well this week, getting 7/10 games correct, and the TAM stepped up, also getting 7/10 correct. MOTSON missed on Crystal Palace, Arsenal, and Chelsea, while the TAM missed on Tottenham, Man United, and Arsenal. So what can we learn from these games?
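For anyone who wants to reproduce the tallies, here’s a minimal sketch of how the scoring works: take each model’s modal category (the outcome with the highest forecast probability), compare it to the actual result, and count the matches. The data layout and helper function below are just for illustration, not the models’ actual code.

```python
# Illustrative sketch only: the layout and names here are mine, not the models' code.

def modal_pick(probs):
    """Return the outcome with the highest forecast probability."""
    return max(probs, key=probs.get)

# e.g. modal_pick({"Home": 0.48, "Draw": 0.27, "Away": 0.25}) -> "Home"

# Week 24 picks and results, straight from the table above. MOTSON's Leicester
# pick was a tie between Leicester City and Draw; it's scored as Leicester City
# here, matching how it was counted in the post.
week24 = [
    # (fixture, MOTSON pick, TAM pick, actual result)
    ("Norwich v. Tottenham",          "Tottenham",      "Draw",           "Tottenham"),
    ("West Ham v. Aston Villa",       "West Ham",       "West Ham",       "West Ham"),
    ("Leicester City v. Liverpool",   "Leicester City", "Leicester City", "Leicester City"),
    ("Crystal Palace v. Bournemouth", "Crystal Palace", "Bournemouth",    "Bournemouth"),
    ("Arsenal v. Southampton",        "Arsenal",        "Arsenal",        "Draw"),
    ("Sunderland v. Man City",        "Man City",       "Man City",       "Man City"),
    ("Man United v. Stoke City",      "Man United",     "Draw",           "Man United"),
    ("West Brom v. Swansea City",     "Draw",           "Draw",           "Draw"),
    ("Watford v. Chelsea",            "Chelsea",        "Draw",           "Draw"),
    ("Everton v. Newcastle",          "Everton",        "Everton",        "Everton"),
]

motson_correct = sum(motson == actual for _, motson, _, actual in week24)
tam_correct = sum(tam == actual for _, _, tam, actual in week24)
print(f"MOTSON: {motson_correct}/10, TAM: {tam_correct}/10")  # MOTSON: 7/10, TAM: 7/10
```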

Once again, MOTSON overestimates Chelsea. There’s not much new to learn here since we’ve seen this several times already. I still don’t have any good ideas for how I could have modeled this pre-season, and maybe it’s just a statistical anomaly. Either way, it’s a known issue with the model, and the in-season results model seems to correct for it fairly well.

Both models missing on Arsenal is a bit surprising, and this may just be a legitimate low-probability event. When two models built from different inputs predict the same outcome and both get it wrong, an upset is probably the simplest explanation.

TAM’s misses on Tottenham and Man United are surprising. I would have picked both of those teams as favorites, especially Man United at home. Maybe Spurs have struggled to convert away fixtures into wins, which would explain that prediction, but predicting a draw for United at home against Stoke is a weird one to me. I’m not sure why the model leaned that way; it’s worth thinking about more.

MOTSON’s miss on Crystal Palace v. Bournemouth is a tough one. I would have picked Palace to win, but TAM got this one right, so apparently the data are better than both my intuition and the pre-season model here. I don’t have any real insight into this result, but it’s worth noting in case a pattern emerges that gives me an opportunity to improve the model next year.

Overall, MOTSON still leads with 18 correct picks to TAM’s 15. Both models had good weeks, and I’m curious to see whether the TAM improves as it gets more data while MOTSON keeps the same inputs from last year. The in-season results model did much better this week than in the past two, so I’m curious whether that’s just because the results were more predictable this week or because it’s actually improving.
