(Editor's Note: Yuri Gelfat is an avid fan of the Winnipeg Jets, and if you're a regular of #JetsTwitter then you're probably somewhat familiar with him and have maybe even interacted with him a time or two. Yuri has an affinity for numbers and often cites advanced analytics in his talk about the Jets, but he also has an appreciation for combining what we can learn from analytics with what we see with our own eyes that the numbers sometimes don't account for, or in some cases might contradict. He reached out to me a while ago and asked if he could share his thoughts and go into more detail about "Expected Goals," or "xG" as most of us refer to it. I highly encourage everyone to give Yuri a follow, and I thank him for sharing a great breakdown of and insight into something that I myself am still learning about on a weekly basis. -Art)
You see many fans and media members use the term "Expected Goals Share" (xGF%), and we all hear "goals above expected" and "saves above expected" all the time.
I've read a lot of write-ups about different xG models, and there is no simple explanation of how the xG values are assigned or generated. To be honest, there's a reason for that: this stuff is not simple, and it's based on advanced mathematics, statistics and data-analysis methods.
However, I'm here to try and explain how these folks create (they call it "build") their models and then produce and assign an xG value for every unblocked shot in a game.
So let’s make it clear first:
- xG value is the statistical probability of an unblocked shot becoming a goal.
- xG is literally the most direct way to measure shot quality that exists, given the current Play-By-Play data.
- This PBP data is made available to the public by the NHL, but it has its inaccuracies, inconsistencies and omissions. Still, that is the data all public xG models use, so nothing is perfect, obviously. These errors are also random, i.e. there isn't a specific type of mistake that happens the same way very often. That means that in a large sample, the mistakes should mostly wash out (e.g. for every shot listed as too close, there's one listed as too far).
- There are three public models that provide real-time xG values to the public:
– Money Puck (Creator: Peter Tanner)
– Natural Stat Trick (Creator: Brad Timmins)
– Evolving-hockey (Creators: Josh and Luke Younggren)
So all of the above you already knew; now how the heck do they actually do that? Here is what I've learned:
First they take a huge data set, about 70% of all shots taken in the NHL over the 2007-2018 seasons, and put it in a table of rows and columns. They call it the training data set.
Each row is an instance, meaning it's an identified unblocked shot. (Note: They can't use blocked shots because the NHL frustratingly lists the location where the shot was blocked, not where it was taken from.)
Each column is a feature of that shot, meaning all kinds of attributes of that particular shot (instance). Things like: the names of the player and goalie involved in the shot, the angle of the shot, the distance, and how long the players had been on the ice when the shot was taken. They also include observations of things that aren't shots, since time between events is important as well, and add each as another column (feature of the shot).
I will call this table matrix X.
The last column of that table is the target feature vector, which in our case records whether that shot was a goal or not, usually encoded as 1 for yes and 0 for no.
I will call that column vector Y.
Such a table can consist of more than a million instances (shots), with about a hundred features (attributes) for each shot, all from the publicly available NHL PBP data.
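To make this concrete, here's a miniature, totally made-up version of that table sketched in Python. Every number and name here is invented for illustration; a real training set has over a million rows and about a hundred columns.

```python
# A miniature version of the training table: each row (dict) is one
# unblocked shot (an "instance"), each key is a feature (a column).
# All values here are invented for illustration.
shots = [
    {"distance_ft": 12, "angle_deg": 5,  "shot_type": "Wrist",    "goal": 1},
    {"distance_ft": 45, "angle_deg": 30, "shot_type": "Slap",     "goal": 0},
    {"distance_ft": 8,  "angle_deg": 15, "shot_type": "Backhand", "goal": 0},
    {"distance_ft": 60, "angle_deg": 40, "shot_type": "Wrist",    "goal": 0},
]

# Matrix X: every column except the target.
X = [{k: v for k, v in s.items() if k != "goal"} for s in shots]
# Target vector Y: 1 if the shot was a goal, 0 if not.
Y = [s["goal"] for s in shots]

print(X[0])  # features of the first shot
print(Y)     # [1, 0, 0, 0]
```

The split at the end is the whole point: X holds what was known about each shot, Y holds what happened.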
Then, using statistical modeling (usually in Python or R, programming languages), they create a mathematical algorithm (typically logistic regression or gradient boosting, the most common approaches because they tend to give the best "out of the box" results relative to the time it takes to design them) to determine, across instances with similar features (attributes), how often the vector Y value ended up being 1.
Put simply: what percentage of shots with the same variables were goals?
And that’s your xG value.
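Leaving the heavy statistics aside, the calculation the algorithm automates is, in spirit, something like this toy sketch: find all past shots with similar features and see what fraction became goals. The data is invented and it crudely buckets on a single feature, where a real model weighs ~100 features at once with proper statistical machinery.

```python
# Toy historical data: (distance in feet, was it a goal). Invented numbers.
history = [(10, 1), (11, 0), (12, 1), (10, 0), (48, 0), (50, 0), (47, 0), (11, 1)]

def naive_xg(distance, tolerance=3):
    """Fraction of past shots from a similar distance that became goals.
    A real model does this over ~100 features at once, with smoothing,
    instead of crude bucketing on one feature."""
    similar = [goal for d, goal in history if abs(d - distance) <= tolerance]
    return sum(similar) / len(similar)

print(naive_xg(10))  # 3 of 5 similar close-range shots were goals -> 0.6
print(naive_xg(49))  # 0 of 3 similar long-range shots were goals -> 0.0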
When you take 10 or 20 shots, you can probably look at those shot features/attributes yourself and say which ones kept showing up in shots that led to goals, but when we talk about half a million shots you need an algorithm to do it for you.
After that they can categorize (classify) which features carried the most weight in a shot becoming a goal.
Here is the list, sorted by importance (from the Money Puck model):
- Shot Distance From Net
- Time Since Last Game Event
- Shot Type (Slap, Wrist, Backhand, etc)
- Speed From Previous Event
- Shot Angle
- East-West Location on Ice of Last Event Before the Shot
- If Rebound, difference in shot angle divided by time since last shot
- Last Event That Happened Before the Shot (Faceoff, Hit, etc)
- Other team’s # of skaters on ice
- East-West Location on Ice of Shot
- Man Advantage Situation
- Time since current Powerplay started
- Distance From Previous Event
- North-South Location on Ice of Shot
- Shooting on Empty Net
This code is your xG model, which is basically a sophisticated formula or algorithm with multiple variables: you enter all these variables (the shot attributes above) and it assigns the chance of the shot going in, becoming a goal.
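As an illustration only, a logistic-regression-style version of that formula looks something like the sketch below. The coefficients are completely made up; in a real model they are learned from the training data, and there are far more of them.

```python
import math

# A logistic-regression-style xG formula with made-up coefficients,
# just to show the shape of the final model: weighted features pushed
# through a sigmoid to produce a probability between 0 and 1.
WEIGHTS = {"distance_ft": -0.08, "angle_deg": -0.02, "is_rebound": 0.9}
INTERCEPT = -0.5

def xg(shot):
    z = INTERCEPT + sum(WEIGHTS[name] * value for name, value in shot.items())
    return 1 / (1 + math.exp(-z))  # sigmoid: maps any number into (0, 1)

close_rebound = {"distance_ft": 8, "angle_deg": 10, "is_rebound": 1}
long_wrister = {"distance_ft": 55, "angle_deg": 25, "is_rebound": 0}

print(round(xg(close_rebound), 2))  # a high-danger chance
print(round(xg(long_wrister), 2))   # a low-danger chance
```

Notice the negative weights on distance and angle: the further out and the sharper the angle, the lower the xG, which matches the importance list above.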
Q: How do you know if it works?
A: They test it.
They take the remaining 30% of the instances (shots) and their features (attributes), hundreds of thousands of them, from the 2007-2018 seasons, and let their code predict the outcome, the vector Y values.
So 70% of the available data went to Train, and the other 30% went to Test.
Since they know the actual outcome of each shot, they compare the code's predicted results to the real outcomes (as actually recorded by the NHL). They can now tell how often the code was accurate in predicting a goal.
They call this the accuracy of the model (often measured by an AUC value and/or log loss).
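Here's a hypothetical sketch of that 70/30 split and a log-loss check, using fabricated shot data in place of real PBP records and a deliberately crude "model" (average goal rate of nearby training shots):

```python
import math
import random

# Sketch of the 70/30 train/test split and a log-loss evaluation.
# The (distance, goal) data is fabricated: closer shots score more often.
random.seed(0)
distances = [random.uniform(5, 60) for _ in range(1000)]
data = [(d, 1 if random.random() < max(0.02, 0.3 - d / 250) else 0)
        for d in distances]

split = int(len(data) * 0.7)
train, test = data[:split], data[split:]   # 70% train, 30% test

# Crude stand-in "model": average goal rate of training shots within 10 feet.
def predict(distance):
    similar = [g for d, g in train if abs(d - distance) <= 10]
    return sum(similar) / len(similar) if similar else 0.05

# Log loss on the held-out 30%: lower is better, and it punishes the
# model for being confidently wrong.
eps = 1e-9
log_loss = -sum(
    g * math.log(max(predict(d), eps)) + (1 - g) * math.log(max(1 - predict(d), eps))
    for d, g in test
) / len(test)
print(round(log_loss, 3))
```

The key discipline is that the model never sees the 30% test shots during training, so its score there is an honest estimate of how it will do on brand-new games.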
For these public models, the accuracy of predicting whether a shot becomes a goal is between 76.7% and 79.9% in these tests. Based on Maurice's comments and other sources, commercial models show an accuracy of about 85%, which is not perfect either.
So when new data, new shot information, becomes available from the NHL, they run it through the code and show you an xG value. But it will only be about an 80% accurate prediction, on top of all the inaccuracies of shot registration by the NHL, of how probable it was that the shot would become a goal.
For example, when the model says a shot had a 2% (0.02) xG value, in reality the chances were somewhere between 1.6% and 2.4%, and it was still possible for any given shot like that to go in.
Another example: let's say a shot was from in close and the model shows a 40% chance of scoring, a 0.40 xG value. In reality it was somewhere between a 32% and 48% chance, but the model can't say exactly because it's missing information/attributes/features like: was the goalie screened, was it a cross-crease pass for a "back door one-timer," or where the goalie and opposing players were located. At the same time, commercial (non-public) models will have some of those additional features and will narrow the range, saying that this shot had between a 34% and 46% chance of becoming a goal.
Please keep in mind that the best goal scorers in the league do not need chances higher than 60% (a 0.60 xG value in public models) to score most of their goals.
Q: But if it's Patrik Laine shooting, surely the chance must be higher than when Luca Sbisa shoots?
A: Not exactly. Many smart people are still trying to add the shooter's talent to the feature tables, but the classifier keeps telling them that it doesn't matter. So more work is required to determine how to measure shooter talent and add it to each shot's (instance's) attributes (features).
Based on all the above, when you see that one team had 1.35 xG and the other 2.08 xG after a game, which is actually the sum of the expected goal values of each team's shots, you can confidently tell which team had more dangerous shots, or better combined chances to score.
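That team total is literally just addition over the game's unblocked shots; for instance, with invented per-shot xG values chosen to land on those example totals:

```python
# Team xG is just the sum of the xG values of each of its unblocked shots.
# These per-shot values are invented to add up to the example totals.
team_a = [0.03, 0.12, 0.40, 0.05, 0.08, 0.67]  # one xG value per shot
team_b = [0.55, 0.61, 0.44, 0.20, 0.18, 0.10]

print(round(sum(team_a), 2))  # -> 1.35
print(round(sum(team_b), 2))  # -> 2.08
```

Note that team A's total is carried almost entirely by one 0.67 chance, while team B piled up several medium-danger shots: same kind of comparison the totals let you make at a glance.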
Hope it helped you understand where these expected goals numbers are coming from.