The (Markov) chain of NFL drives

An NFL game is nothing more than a series of drives, each with an outcome (e.g., touchdown, fumble, punt) that can represent the current state of the game.  Consecutive drives thus represent transitions from one state to another.  These transitions are interesting since they might reveal various patterns.  For instance, we are all aware of how costly committing turnovers is.  However, can this be captured by the transitions between drives?  For example, due to increased momentum, the transition from a drive that ended with a turnover to one that leads to a score might be more probable than the transition from a punt drive to a scoring one.

A very convenient way to describe the state transitions of a system is through Markov chains!  A Markov chain is nothing more than a network capturing the various states of a system and the transitions between them.  In our case the system is an NFL game and the state is the outcome of the last drive.  Hence, states can be “Touchdown”, “Field goal”, “Punt”, “Fumble”, etc.  We also have a sink node, i.e., a node that is the terminal state of the system, namely “End of Game”, and a source node, “Start of Game”.  The transition probabilities can be estimated from play-by-play data.  Indeed, we used data from the 2009-2015 seasons and built the corresponding Markov chain (see the following figure).

[Figure: The Markov chain of NFL drive outcomes]

The nodes in the network above represent the various states of an NFL game, while the directed edges represent the transitions between the various states. The weight of each edge corresponds to the transition probability and this is further captured in the width of the edge. The same information can be presented alternatively through the transition matrix:

[Figure: The transition matrix of the Markov chain]
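To make this concrete, here is a minimal sketch of how such a transition matrix can be estimated from play-by-play data.  The state list and the toy drive sequences below are simplified placeholders (the real chain also includes "Start of Game", "End of Game", safeties, missed field goals, etc.), but the counting logic is the same: tally every consecutive pair of drive outcomes and normalize each row.

```python
import numpy as np

# Simplified set of drive-outcome states (the actual chain has more).
STATES = ["TD", "FG", "Punt", "INT", "Fumble"]
IDX = {s: i for i, s in enumerate(STATES)}

def transition_matrix(drive_sequences):
    """Estimate transition probabilities from lists of consecutive drive outcomes."""
    counts = np.zeros((len(STATES), len(STATES)))
    for game in drive_sequences:
        for prev, cur in zip(game, game[1:]):
            counts[IDX[prev], IDX[cur]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalize each row; leave rows with no observed transitions at zero.
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Toy example with two short "games" of drive outcomes
games = [["Punt", "TD", "Punt", "FG"], ["INT", "TD", "Punt", "Punt"]]
P = transition_matrix(games)
print(P[IDX["Punt"]])  # probabilities of each outcome following a punt
```

With real play-by-play data one would replace the toy sequences with the per-game drive outcomes from the 2009-2015 seasons.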

Of these states, the only scoring ones for the offense are TD and FG.  However, some of the states representing offensive turnovers can potentially be scoring states for the defense (e.g., a pick-six interception).  In the following figure we plot the probability of an offensive turnover leading to a defensive score (i.e., a touchdown).

[Figure: Probability of an offensive turnover leading to a defensive touchdown]

As we can see, a blocked punt or FG has the highest chance of being returned for a touchdown by the defense.  However, these are also the states least likely to occur in the first place; hence the large confidence intervals associated with them.

How can all this be of any use though?  By identifying the steady state of this Markov chain, one can estimate various quantities such as the expected outcome of a game.  The steady state solution p_i of a Markov chain represents the long-run probability of the system being in state i.  At a high level, it describes a random walker over the Markov “network”, where transitions between states happen with probabilities equal to the corresponding entries of the transition matrix.  For example, the steady state solution of the above Markov chain for an NFL game is as follows:

[Figure: Steady state probabilities of the NFL drive Markov chain]
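The steady state can be computed with a few lines of code.  The sketch below uses power iteration to find the distribution pi satisfying pi @ P = pi; it assumes the chain is ergodic (for the NFL chain this means, e.g., wiring "End of Game" back to "Start of Game" so the walker keeps moving).  The 3-state matrix is a toy example, not the article's chain.

```python
import numpy as np

def steady_state(P, tol=1e-12, max_iter=10_000):
    """Stationary distribution pi with pi @ P = pi, via power iteration.
    Assumes P is row-stochastic and the chain is ergodic."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)  # start from the uniform distribution
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            break
        pi = nxt
    return pi

# Toy 3-state row-stochastic matrix
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
pi = steady_state(P)
print(pi)  # long-run probability of each state
```

Equivalently, pi is the left eigenvector of P for eigenvalue 1; power iteration is just the simplest way to find it.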

Using these steady state probabilities one can for example estimate the expected number of total points in an NFL game by summing up the expected number of points X for each drive:

E[X] = n_D \cdot \sum_i p_i \cdot x_i

where x_i is the number of points scored in state i (e.g., 3 for a field goal, 2 for a safety, etc.) and n_D is the expected number of drives per game (23.6 as per our data).  Using the above equation we obtain E[X] = 43.9, while using the actual results from the games we get a value of 45.1.  The two quantities cannot be deemed statistically different (p-value > 0.7 for the corresponding t-test)!
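As a quick numerical illustration of the formula, the snippet below plugs in hypothetical steady-state probabilities (placeholders, not the exact values from the figure above; a TD is counted as 7 assuming a made extra point):

```python
# Expected total points per game: E[X] = n_D * sum_i p_i * x_i
points = {"TD": 7, "FG": 3, "Safety": 2}         # points for scoring states
pi = {"TD": 0.20, "FG": 0.10, "Safety": 0.005,   # hypothetical steady state
      "Punt": 0.40, "Turnover": 0.295}
n_drives = 23.6                                  # expected drives per game (from our data)

expected_points = n_drives * sum(pi.get(s, 0.0) * x for s, x in points.items())
print(round(expected_points, 1))  # 40.4 under these toy probabilities
```

Non-scoring states contribute zero, so only the scoring states enter the sum.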

Of course, one might ask what the value is of using a Markov chain to estimate the expected number of total points in a game.  There are many other ways to do this, and the most straightforward is to simply analyze the game data directly.  However, the point is that Markov chains are a powerful tool in the arsenal of a computational sports analyst and can also be used for prediction.  For example, by building team-specific Markov chains one could simulate (at a high level) upcoming games and predict the score/winner of a game.
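Such a simulation is just a random walk on the chain.  The sketch below uses a small hypothetical team-specific transition table (the probabilities and the simplified state set are illustrative, not estimated from the data) and draws one drive outcome after another, accumulating points:

```python
import random

random.seed(0)

# Hypothetical team-specific transition probabilities (each row sums to 1).
P = {
    "Punt":     {"TD": 0.19, "FG": 0.12, "Punt": 0.49, "Turnover": 0.20},
    "TD":       {"TD": 0.21, "FG": 0.12, "Punt": 0.47, "Turnover": 0.20},
    "FG":       {"TD": 0.20, "FG": 0.12, "Punt": 0.48, "Turnover": 0.20},
    "Turnover": {"TD": 0.25, "FG": 0.13, "Punt": 0.42, "Turnover": 0.20},
}
POINTS = {"TD": 7, "FG": 3, "Punt": 0, "Turnover": 0}

def simulate_game(n_drives=24, start="Punt"):
    """Random walk over the chain; returns total points scored."""
    state, total = start, 0
    for _ in range(n_drives):
        outcomes, probs = zip(*P[state].items())
        state = random.choices(outcomes, weights=probs)[0]
        total += POINTS[state]
    return total

scores = [simulate_game() for _ in range(10_000)]
mean_score = sum(scores) / len(scores)
print(mean_score)  # Monte Carlo estimate of expected total points
```

Simulating both teams' chains (and alternating possessions) would give a score and winner prediction rather than just a points total.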

Furthermore, we can analyze the network to identify interesting patterns such as the ones mentioned at the beginning of the article.  For example, we compare the INT->TD and Punt->TD transitions.  The probability of an INT->TD transition is 0.247 with a 95% confidence interval of [0.233, 0.261], while that of a Punt->TD transition is 0.187 with a 95% confidence interval of [0.181, 0.193].  This means that a team is approximately 6 percentage points more likely to score a TD after forcing a turnover (an interception in the case we examined) than after a drive that follows an opponent's punt (p-value < 0.05).  That's another way to see why forcing turnovers is good, while committing them is bad.
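For readers who want to reproduce this kind of comparison, here is a sketch using the standard normal approximation for proportions.  The transition counts below are hypothetical round numbers chosen only to roughly match the article's rates, not the actual counts from the 2009-2015 data:

```python
import math

def prop_ci(successes, trials, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return p - half, p + half

def two_prop_z(p1, n1, p2, n2):
    """Pooled two-proportion z statistic for H0: p1 == p2."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical sample sizes; rates taken from the article.
lo, hi = prop_ci(865, 3500)        # ~0.247 INT->TD rate
z = two_prop_z(0.247, 3500, 0.187, 15000)
print(lo, hi)  # confidence interval for the INT->TD rate
print(z)       # |z| > 1.96 means significant at the 5% level
```

With samples this large the z statistic is far above 1.96, consistent with the p-value < 0.05 reported above.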
