Flashback to grade school, and one of the first things we all learned about probability is that each trial is independent of the previous ones. If you flip a coin and it comes up heads 9 times in a row, the odds of it coming up heads the 10th time are exactly the same as they were in each of the first 9: 51%.^{1}

This is the law of independent trials: results at time (t-1) don’t affect the probability of a given outcome at time (t). The coin doesn’t know that it just came up heads, and nothing about the fact that it came up heads changes the future probability of it coming up heads. The trials are independent, therefore the outcomes are independent.

We think of shots the same way in expected goal models – a shot with an xG of 0.4 has a 40% chance of going in, and a shot with an xG of 0.1 has a 10% chance of going in. Therefore, a team basically has to take 4 shots with an xG of 0.1 to equal one shot with an xG of 0.4.^{2} In this case, what does a rational team do? Unless you can somehow consistently generate more than four 0.1 xG shots in the time it takes to work one more pass into a 0.4 xG chance, you should always look for that pass, because your expected goals will be higher. Long shots are incredibly low value and should never be taken by a rational player.
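The arithmetic behind that trade-off is quick to sketch. A minimal example, using only the 0.1 and 0.4 values from above:

```python
# Compare one 0.4 xG shot with four 0.1 xG shots.
xg_high = 0.4
xg_low = 0.1
n_low = 4

# Expected goals are identical by construction:
eg_high = xg_high          # 0.4
eg_low = n_low * xg_low    # 4 * 0.1 = 0.4

# But the probability of scoring at least once is not quite the same,
# because the four low-quality shots can "waste" goals on the same attempt:
p_high = xg_high                      # 40%
p_low = 1 - (1 - xg_low) ** n_low     # 1 - 0.9^4 ≈ 34.4%

print(f"Expected goals: {eg_high:.2f} vs {eg_low:.2f}")
print(f"P(at least one goal): {p_high:.1%} vs {p_low:.1%}")
```

Same expected goals either way, but the single high-quality shot is slightly more likely to produce at least one goal – the caveat footnote 2 flags.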

However, this only looks at the offensive side of things. Now we turn to the defense. If I’m a smart defender who knows this is the attacker’s dominant strategy, I back off and let anyone who wants to shoot from outside the penalty area. They might buy a lottery ticket once in a while, but in the long run I’ll be better off closing down the 0.4-and-higher xG opportunities to make sure they never happen. Even if my team gets half (or even a third) as many shots as my opponent, if they’re 4 times the quality I will win more games than I lose.

Now we have a new problem: if I’m a defender leaving strikers unmarked outside the penalty area so they can shoot at will, the expected goals value of those shots increases. I don’t know of any measures that take defense into account^{3}, so the 0.1 value is calculated under the assumption that the team is mounting some sort of defense. Long shots score 10% of the time given a reasonable defense, but if shooters are wide open then you could probably assume a higher xG value for long shots. Closer shots score 40% of the time given a defense that isn’t entirely camped in the six yard box ready to frustrate incoming players, but if that’s the case then they probably deserve a lower xG value. This is what formal theorists call a “dynamic equilibrium”: the observed values reflect both sides playing their best possible strategies.^{4}
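To make the equilibrium idea concrete, here’s a toy two-strategy game. Every payoff number below is invented purely for illustration – the only claim is structural: each side’s best mix of strategies depends on what the other side is doing.

```python
# Toy mixed-strategy game between attacker and defender.
# ALL payoff numbers are made up for illustration only.
# Entries are the attacker's expected goals for one possession:
#
#                      defender packs box   defender presses out
#   shoot from range         0.15                 0.05
#   work ball into box       0.10                 0.20
long_box, long_out = 0.15, 0.05
close_box, close_out = 0.10, 0.20

# Attacker mixes so the defender is indifferent between her two options:
p_long = (close_out - close_box) / ((long_box - long_out) + (close_out - close_box))

# Defender mixes so the attacker is indifferent between his two options:
q_box = (close_out - long_out) / ((long_box - close_box) + (close_out - long_out))

# Equilibrium value: attacker's xG per possession when both mix optimally.
value = long_box * q_box + long_out * (1 - q_box)

print(f"Attacker shoots from range {p_long:.0%} of the time")
print(f"Defender packs the box {q_box:.0%} of the time")
print(f"Equilibrium value: {value:.3f} xG per possession")
```

With these made-up payoffs, the attacker shoots from range half the time and the defender packs the box three-quarters of the time. Realistic numbers would produce a far more lopsided mix, but the structure is the point: in equilibrium, some long shots get taken precisely because they pin the defense in place.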

The point here is that the different strategies matter, and that the strategies employed for the first shot change the results of the second shot. A 0.4 xG shot is only a 0.4 xG shot because defenders have to provide some sort of defense to the 0.1 xG shots that happened earlier (or could have potentially happened earlier). But if a team never shoots from that distance, or has zero quality from that distance, then we never reach the dynamic equilibrium. At this point, it becomes logical to take some low xG long shots to “keep the other team honest” and open up the higher xG close shots later.

When you’re looking at xG maps, look at the whole picture. How did the smaller xG shots affect the higher xG shots later in the game? Or the opposite: did a number of high xG shots affect the value of the low xG shots later? A team can’t live on a diet of high xG shots alone, and it becomes optimal to take a handful of low xG shots to open up more high value ones later.^{5} Not all xG is created equal, and when you look at the maps, think about the second-order value of the lower xG shots.

- Seriously – a professor at Stanford found that there’s no such thing as a “fair coin.” http://news.stanford.edu/pr/2004/diaconis-69.html and the full study is available here: http://statweb.stanford.edu/~susan/papers/headswithJ.pdf ↩
- This isn’t *quite* the same probability: the math for at least one goal with four shots of 0.1 is 1-(0.9*0.9*0.9*0.9), or about 34%, but it’s close enough for my purposes here. See Danny Page’s excellent treatment of the topic if you’re interested in all the math behind this idea ↩
- Readers: correct me if I’m wrong here. I’m not as familiar with the inner workings of all the different models out there as many of you are. ↩
- In political science, the dynamic equilibrium argument is important to the study of campaigns: advertising has no net effect in presidential elections because both sides are running so many ads they cancel each other out. But if only one candidate went on TV, presumably things would be different. ↩
- Forgive me for not looking it up, but there is research out there that argues that it’s optimal to take penalty kicks to the player’s weaker side a percentage of the time to keep the goalkeeper from always going to the strong side. I think the numbers were like 75-25% strong/weak. This is the same idea: take some weaker shots to open up the stronger ones later. ↩