This is the third week of my in-season results based model (TAM), and I wanted to continue comparing its performance to MOTSON’s. My goal is, with enough data, to learn as much as I can about the advantages and disadvantages of the two approaches, how they might complement each other, and how I can improve my predictions for next season. You can see last week’s post here, and I’ll be updating every week. Below is a table with each model’s modal category and the actual result.
| Game | MOTSON | TAM (In-Season) | Actual Result |
| --- | --- | --- | --- |
| Norwich v. Tottenham | Tottenham | Draw | Tottenham |
| West Ham v. Aston Villa | West Ham | West Ham | West Ham |
| Leicester City v. Liverpool | Leicester City/Draw (equal) | Leicester City | Leicester City |
| Crystal Palace v. Bournemouth | Crystal Palace | Bournemouth | Bournemouth |
| Arsenal v. Southampton | Arsenal | Arsenal | Draw |
| Sunderland v. Man City | Man City | Man City | Man City |
| Man United v. Stoke City | Man United | Draw | Man United |
| West Brom v. Swansea City | Draw | Draw | Draw |
| Watford v. Chelsea | Chelsea | Draw | Draw |
| Everton v. Newcastle | Everton | Everton | Everton |
MOTSON did well this week, getting 7/10 games correct, and the TAM stepped up, also getting 7/10 correct. MOTSON missed on Crystal Palace, Arsenal, and Chelsea, while the TAM missed on Tottenham, Man United, and Arsenal. So what can we learn from these games?
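The week’s tallies are easy to verify from the table. Here’s a quick sketch that counts each model’s correct modal-category picks (team names transcribed from the table above; MOTSON’s split Leicester City/Draw category is counted as correct for either outcome, matching how I score it in the text):

```python
# Each tuple is (MOTSON pick, TAM pick, actual result), one row per game.
# The "/" in a pick denotes a split modal category (equal probabilities).
results = [
    ("Tottenham", "Draw", "Tottenham"),
    ("West Ham", "West Ham", "West Ham"),
    ("Leicester City/Draw", "Leicester City", "Leicester City"),
    ("Crystal Palace", "Bournemouth", "Bournemouth"),
    ("Arsenal", "Arsenal", "Draw"),
    ("Man City", "Man City", "Man City"),
    ("Man United", "Draw", "Man United"),
    ("Draw", "Draw", "Draw"),
    ("Chelsea", "Draw", "Draw"),
    ("Everton", "Everton", "Everton"),
]

def correct(pick: str, actual: str) -> bool:
    # A split modal category counts if either of its outcomes occurred.
    return actual in pick.split("/")

motson_score = sum(correct(m, a) for m, _, a in results)
tam_score = sum(correct(t, a) for _, t, a in results)
print(motson_score, tam_score)  # 7 7
```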
Once again, MOTSON overestimates Chelsea. There’s not much to learn here because we’ve already learned this a bunch of times – I still don’t have any good ideas for how I could have modeled this pre-season, and maybe this is just a statistical anomaly. Either way, it’s a known issue with the model that seems to be corrected fairly well by the in-season results model.
Both models missing on Arsenal is a bit surprising, and this may just be a genuinely low-probability event. When two disparate models predict the same outcome and both get it wrong, that’s often the most plausible explanation.
TAM’s misses on Tottenham and Man United are surprising. I would have picked both of those teams to be favorites, especially with Man United at home. Maybe Spurs have struggled to convert road fixtures into wins, which would explain that prediction, but United drawing at home against Stoke is a weird one to me. Not sure why that is – worth thinking about more.
MOTSON’s miss on Crystal Palace v. Bournemouth is a tough one – I would have picked Palace to win, but TAM got this one right so apparently the data are better than my intuition and the pre-season model here. I don’t have any real insight here, but it’s important to take notice of this in case a pattern arises that would give me an opportunity to improve the model next year.
Overall MOTSON still leads with 18 correct picks over TAM’s 15. Both models had good weeks, and I’m curious to see if the TAM model improves as it gets more data while MOTSON keeps the same inputs from last year. In-season results did much better this week than the past two, so I’m curious to see if this is just because the results were more predictable this week or if it’s improving.