
What we know about the theory of optimal strategy in basketball

January 8, 2016

A few months ago I was asked by Mark Glickman to write a book chapter about optimal strategy in basketball.  (Mark is a senior lecturer in the Department of Statistics at Harvard, and is currently in the process of setting up a sports analytics laboratory.)

That was a sort of daunting task, so I recruited Matt Goldman as a co-author (Matt is currently working in the chief economist’s office at Microsoft, and has done some great work on optimal behavior in basketball), and together we wrote a review that you can read here.  The chapter will appear in the upcoming Handbook of Statistical Methods for Design and Analysis in Sports, one of the Chapman & Hall/CRC Handbooks of Modern Statistical Methods.

Of course, it’s likely that you don’t want to read some academic-minded book chapter.  Luckily, however, I was given the opportunity to write a short summary of the chapter for the blog Nylon Calculus (probably the greatest of the ultra-nerdy basketball blogs right now).

You can read it here.

A few things you might learn:

  • An optimal team is one where everyone’s worst shots have the same quality.
  • An optimal strategy does not generally lead to the largest expected margin of victory.
  • NBA players are shockingly good at their version of the Secretary Problem.

(Long-time readers of this blog might see some familiar themes from these blog posts.)

 

The value of improved offensive rebounding

January 5, 2016

In case you haven’t been paying attention to this sort of thing, the best NBA writer right now is Zach Lowe.  (Even if he has been forced by the tragic Grantland shutdown to post his columns on the super-obnoxious ESPN main site).

In a column today, Zach considered the following question: How valuable is it to try and get offensive rebounds?

Obviously, getting an offensive rebound has great value, since it effectively gives your team another chance to score.  But if you send too many players to “crash the boards” in pursuit of a rebound, you leave yourself wide open to a fast-break opportunity by the opposing team.

So there is some optimization to be done here.  One needs to weigh the benefit of increased offensive rebounding against the cost of worse transition defense.

In recent years, the consensus opinion in the NBA seems to have shifted away from a focus on offensive rebounding and towards playing it safe against fast-breaks.  In his analysis, however, Zach toys with the idea that some teams might find a strategic advantage to pursuing an opposite strategy, and putting a lot of resources into offensive rebounding.

This might very well be true, but my suspicions were raised when Zach made the following comments:

There may be real danger in banking too much on offensive rebounds. And that may be especially true for the best teams. Good teams have good offenses, and good offenses make almost half their shots. If the first shot is a decent bet to go in, perhaps the risk-reward calculus favors getting back on defense. This probably plays some role in explaining why good teams appear to avoid the offensive glass: because they’re good, not because offensive rebounding is on its face a bad thing.

Bad teams have even more incentive to crash hard; they miss more often than good teams!

Zach is right, of course, that a team with a high shooting percentage is less likely to get an offensive rebound.  But it is also true that offensive rebounds are more valuable for teams that score more effectively.  For example, if your team scores 1.2 points per possession on average, then an offensive rebound is more valuable to you than it is to a team that only scores 0.9 points per possession, since the rebound is effectively granting you one extra possession.

Put another way: both the team that shoots 100% and the team that shoots 0% have no incentive to improve their offensive rebounding.  The first has no rebounds to collect, and the second has nothing to gain by grabbing them.  I might have naively expected that a team making half its shots, as Zach mentions in his comment above, has the most incentive to improve its offensive rebounding!

So let’s put some math to the problem, and try to answer the question: how much do you stand to gain by improving your offensive rebounding?  By the end of this post I’ll present a formula to answer this question, along with some preliminary statistical results.

 

The starting point is to map out the possible outcomes of an offensive possession.  For a given shot attempt, there are two possibilities: make or miss.  If the shot misses, there are also two possibilities: a defensive rebound, or an offensive rebound.  If the team gets the offensive rebound, then they get another shot attempt, as long as they can avoid committing a turnover before the attempt.  Let’s say that a team’s shooting percentage is p, their offensive rebound rate is r, and the turnover rate is t.

Mapping out all these possibilities in graphical form gives a diagram like this:

[Figure: the tree of possible outcomes for a single shot attempt]

The paths through this tree (left to right) that end in x’s result in zero points.  The path that ends in o results in some number of points.  Let’s call that number v (it should be between 2 and 3, depending on how often your team shoots 3’s).  The figures written in italics at each branch represent the probabilities of following the given branch.  So, for example, the probability that you will miss the first shot and then get another attempt is (1-p) \times r \times (1-t).

Of course, once you take another shot, the whole tree of possibilities is repeated.  So the full diagram of possible outcomes is something like this:

[Figure: the full possession tree, with the single-attempt tree repeating after every offensive rebound]

Now there are many possible sequences of outcomes.  If you want to know the expected number of points scored, which I’ll call F, you just need to sum up the probabilities of ending at each of the green circles and multiply by v.  This is

F = p v \times \left[ 1 + (1-p) r (1-t) + \left( (1-p) r (1-t) \right)^2 + \cdots \right] = \frac{p v}{1 - (1-p) r (1-t)}

(A little calculus was used to get that last line.)

So now if you want to know how much you stand to gain by improving your offensive rebounding, you just need to look at how quickly the expected number of points scored, F, increases with the offensive rebound rate, r.  This is the derivative dF/dr, which I’ll call the “Value of Improved Offensive Rebounding”, or VIOR (since basketball nerds love to make up acronyms for their “advanced stats”).  It looks like this:

\textrm{VIOR} = \frac{dF}{dr} = \frac{p v (1-p)(1-t)}{\left[ 1 - (1-p) r (1-t) \right]^2}

Here’s how to interpret this stat: VIOR is the number of points per 100 possessions by which your scoring will increase for every percentage point of improvement in the offensive rebound rate r.
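
If you want to play with these formulas yourself, here is a minimal Python sketch of the calculation.  (The team numbers at the bottom are invented for illustration; they are not any real team’s stats.)

```python
def expected_points(p, r, t, v):
    """F = p*v / (1 - (1-p)*r*(1-t)): the geometric series over
    repeated offensive rebounds, summed in closed form."""
    return p * v / (1 - (1 - p) * r * (1 - t))

def vior(p, r, t, v):
    """dF/dr.  Since r is a fraction, dF/dr * 0.01 is the gain in points
    per possession for each percentage point of r -- which is the same
    number read as points per 100 possessions per percentage point."""
    return p * v * (1 - p) * (1 - t) / (1 - (1 - p) * r * (1 - t)) ** 2

# A made-up team: hits 48% of its shots, rebounds 25% of its misses,
# turns the ball over 14% of the time, scores 2.2 points per made shot.
p, r, t, v = 0.48, 0.25, 0.14, 2.2

print(expected_points(p, r, t, v))  # about 1.19 points per possession
print(vior(p, r, t, v))             # about 0.60 points/100 poss. per % point
```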

Of course, VIOR only tells you how much the offense improves, and thus it cannot by itself tell you whether it’s worthwhile to improve your offensive rebounding.  For that you need to understand how much your defense suffers for each incremental improvement in offensive rebounding rate.  That’s a problem for another time.

But still, for curiosity’s sake, we can take a stab at estimating which current NBA teams would benefit the most, offensively, from improving their offensive rebounding.  Taking some data from basketball-reference, I get the following table:

[Table: NBA teams sorted by VIOR score, along with the team statistics used to compute it]

In this table the teams are sorted by their VIOR score (i.e., by how much they would benefit from improved offensive rebounding).  Columns 2-5 list the relevant statistics for calculating VIOR.

The ordering of teams seems a bit scattered, with good teams near the top and the bottom of the list, but there are a few trends that come out if you stare at the numbers long enough.

  1. First, teams that have a lower turnover rate tend to have a higher VIOR.  This seems somewhat obvious: rebounds are only valuable if you don’t turn the ball over right after getting them.
  2. Teams that shoot more 3’s also tend to have a higher VIOR.  This is presumably because shooting more 3’s allows you to maintain a high scoring efficiency (so that an additional possession is valuable) while still having lots of missed shots out there for you to collect.
  3. The teams that would most benefit from improved offensive rebounding are generally the teams that are already the best at offensive rebounding.  This seems counterintuitive, but it comes out quite clearly from the logic above.  If your team is already good at offensive rebounding, then grabbing one more offensive rebound buys you more than one additional shot attempt, on average (see the short derivation just after this list).
    (Of course, it is also true that a team with a high offensive rebounding rate might find it especially difficult to improve that rate.)
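
To see where point 3 comes from: in the model above, the expected number of shot attempts in a possession is a geometric series,

\textrm{attempts} = 1 + q + q^2 + \cdots = \frac{1}{1-q}, \qquad q = (1-p) r (1-t),

and its sensitivity to the rebound rate, d(\textrm{attempts})/dr = (1-p)(1-t)/(1-q)^2, grows as r grows.  Each additional rebound can itself chain into further rebounds, and that chaining is stronger for teams that already rebound well.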

[Figure: VIOR plotted against offensive rebound rate for NBA teams]

 

Looking at the league-average level, the takeaway is this: an NBA team generally improves on offense by about 0.62 points per 100 possessions for each percentage point increase in its offensive rebound rate.  This means that if NBA teams were to improve their offensive rebounding from 23% (where it is now) to 30% (where it was a few years ago), they would generally score about 4.3 points more per 100 possessions.
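
As a quick sanity check, here is that league-average arithmetic in Python.  (The inputs are rough, assumed league-average values chosen for illustration, not exact league statistics.)

```python
# Rough, assumed league-average inputs: shooting percentage, offensive
# rebound rate, turnover rate, and points per made shot.
p, r, t, v = 0.50, 0.23, 0.13, 2.3

q = (1 - p) * r * (1 - t)
vior = p * v * (1 - p) * (1 - t) / (1 - q) ** 2

print(vior)              # about 0.62 points per 100 possessions per % point
print(vior * (30 - 23))  # about 4.3 points per 100 possessions for 23% -> 30%
```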

So now the remaining question is this:  are teams saving more than 4.3 points per 100 possessions by virtue of their improved transition defense?

 


Footnotes:

  1. There are, of course, plenty of ways that you can poke holes in the logic above.  For example: is the shooting percentage p really a constant, independent of whether you are shooting after an offensive rebound or before?  Is the turnover rate t a constant?  The offensive rebound rate r?  If you want to allow for all of these things to vary situationally, then you’ll need to draw a much bigger tree.

    I’m not saying that these kinds of considerations aren’t important (this is a very preliminary analysis, after all), only that I haven’t thought deeply about them.

  2. Much of the logic of this post was first laid out by Brian Tung after watching game 7 of the 2010 NBA finals, where the Lakers shot 32.5% but rebounded 42% of their own misses.
  3. I know I’ve made this point before, but I will never get over how useful Calculus is.  I use it every day of my life, and it helps me think about essentially every topic.

Field theory of swords

December 10, 2015

Note: The following is the last in a series of blog posts that I contributed to the blog ribbonfarm.  The series focused on qualitative descriptions of particles and fields, and this one finally brings the topic to a human level.  You can go here to see the original post, which contains some additional discussion in the comments.

 

I don’t mean to brag, but if you’ve been following this sequence of posts on the topic of particles and fields, then I’ve sort of taught you the secret to modern physics.

The secret goes like this:

Everything arises from fields, and fields arise from everything.


Go ahead.
You can indulge in a good eye-roll over the new-agey sound of that line.
(And over the braggadocio of the author.)

But eye-rolling aside, that line actually does refer to a very profound idea in physics. Namely, that the most fundamental object in nature is the field: a continuous, space-filling entity that has a simple mathematical structure and supports “undulations” or “ripples” that act like physical particles. (I offered a few ways to visualize fields in this post and this post.) To me, it is the most mind-blowing fact of modern physics that what we call particles are really just “ripples” or “defects” on some infinite field.

But the miraculousness of fields isn’t just limited to fundamental particles. Fields also emerge at much higher levels of reality, as composite objects made from the motion of many active and jostling things. For example, one can talk about a “field” made from a large collection of electrons, atoms, molecules, cells, or even people. The “particles” in these fields are ripples or defects that move through the crowd. It is one of the miracles of science that essentially any sufficiently large group of interacting objects gives rise to simple collective excitations that behave like independent, free-moving particles.

Maybe this discussion seems excessively esoteric to you.  I can certainly understand that objection. But the truth is that the basic paradigm of particles and fields is so generic and so powerful that one can apply it to just about any level of nature.

So we might as well use it to talk about something awesome.

Let’s talk about swords.

Read more…

A minor memory of Leo Kadanoff

October 31, 2015

Einstein once famously said that “imagination is more important than knowledge.”

But when you’re a graduate student or postdoc struggling to make a career in physics, that quote rarely feels true. Instead, you are usually made to feel like productivity and technical ability are the qualities that will make or break your career.

But Leo Kadanoff was someone who made that quote feel true.

Leo Kadanoff, one of the true giants in theoretical physics during the last half century, passed away just a few days ago. Kadanoff’s work was marked by its depth of thought and its relentless creativity. I’m sure that over the next week many people will be commemorating his life and his career.

But I thought it might be worth telling a brief story about my own memories of Leo Kadanoff, however minor they may be.

 

During the early part of 2014, I had started playing with some ideas that were well outside my area of expertise (if, indeed, I can be said to have any such area). I thought these projects were pretty cool, but I was tremendously unconfident about them. My lack of confidence was actually pretty justified: I was highly ignorant about the fields to which these projects properly belonged, and I had no reason to think that anyone else would find them interesting. I was also working in the sort-of-sober environment of the Materials Science Division at Argonne National Laboratory, and I was afraid that at any moment my bosses would tell me to shape up and do real science instead of nonsense.

In a moment of insecure hubris (and trust me, that combination of emotions makes sense when you’re a struggling scientist), I wrote an email to Leo Kadanoff. I sent him a draft of a manuscript (which had already been rejected twice without review) and asked him whether he would be willing to give me any comments. The truth is that the work really had no connection to Kadanoff, or to any of his past or present interests. I just knew that he was someone who had wide-ranging interests and a history of creativity, and I wasn’t sure who else to write to.

His reply to me was remarkable. He told me that he found the paper interesting, and that I should come give a seminar and spend a day at the University of Chicago. I quickly took him up on the offer, and he slotted me into his truly remarkable seminar series called “Computations in Science.” (“The title is old,” he said, “inherited from the days when that title would bring in money. We are closer to ‘concepts in science’ or maybe ‘all things considered.’ ”)

When I arrived for the seminar, Kadanoff was the first person to meet me. He had just arrived himself, by bike. Apparently at age 77 he still rode his bike to work every day. We had a very friendly conversation for an hour or so, in which we talked partly about science and partly about life, and in which he gave me a brief guide to theater and music in the city of Chicago. (At one point I mentioned that my wife was about to start medical residency, which is notorious for its long and stressful hours. He sympathized, and said “I am married to a woman who has been remarkably intelligent all of her life, except during her three years of residency.”)

When it came time for the talk to start, I was more than a little nervous.  Kadanoff stood in front of the room to introduce me.

“Brian is a lot like David Nelson …” he began.

[And here my eyes got wide. David Nelson is another giant in theoretical physics, well-respected and well-liked by essentially everyone. So I was bracing myself for some outrageous compliment.]

“… he grew up in a military family.”

I don’t think that line was meant as a joke. But somehow it put me in a good mood, and the rest of the day went remarkably well. The seminar was friendly, and the audience was enthusiastic and critical (another combination of emotions that goes very well together in science). In short, it was a beautiful day for me, and I basked in the atmosphere that surrounded Kadanoff in Chicago. It seemed to me a place where creativity and inquisitiveness were valued intensely, and I found it immensely energizing and inspiring.

Scientists love to tell the public about how their work is driven by the joy of discovery and the pleasure of figuring things out. But rarely does it feel so directly true as it did during my visit to the University of Chicago.

 

On the whole, the truth is that I didn’t know Leo Kadanoff that well.  My interactions with him didn’t extend much beyond one excellent day, a few emails, and a few times where I was in the audience of his talks. But when Kadanoff was around, I really felt like science and the profession of scientist lived up to their promise.

It’s pretty sad to think that I will probably never get that exact feeling again.

 

Take a moment, if you like, and listen to Kadanoff talk about his greatest work. It starts with comic books.

The miracle of Fermi liquid theory

October 29, 2015

Note: the following is the fourth in a series of five blog posts that I was invited to contribute to the blog ribbonfarm on the topic of particles and fields.  You can go here to see the original post, which contains some discussion in the comments.

 

Let’s start with a big question: why does science work?

Writ large, science is the process of identifying and codifying the rules obeyed by nature. Beyond this general goal, however, science has essentially no specificity of topic.  It attempts to describe natural phenomena on all scales of space, time, and complexity: from atomic nuclei to galaxy clusters to humans themselves.  And the scientific enterprise has been so successful at each and every one of these scales that at this point its efficacy is essentially taken for granted.

But, by just about any a priori standard, the extent of science’s success is extremely surprising. After all, the human brain has a very limited capacity for complex thought. We humans tend to think (consciously) only about simple things in simple terms, and we are quickly overwhelmed when asked to simultaneously keep track of multiple independent ideas or dependencies.

As an extreme example, consider that human thinking struggles to describe even individual atoms with real precision. How is it, then, that we can possibly have good science about things that are made up of many atoms, like magnets or tornadoes or eukaryotic cells or planets or animals? It seems like a miracle that the natural world can contain patterns and objects that lie within our understanding, because the individual constituents of those objects are usually far too complex for us to parse.

You can call this occurrence the “miracle of emergence”.  I don’t know how to explain its origin. To me, it is truly one of the deepest and most wondrous realities of the universe: that simplicity continuously emerges from the teeming of the complex.

But in this post I want to try and present the nature of this miracle in one of its cleanest and most essential forms. I’m going to talk about quasiparticles.

Read more…

The simple way to solve that crocodile problem

October 11, 2015

This past week, certain portions of the internet worked themselves into a tizzy over a math problem about a crocodile.

Specifically, this problem:

[Figure: the crocodile problem as it appeared on the exam]

The problem was written into this year’s Higher Maths exam in Scotland, and has since been the source of much angst for Scottish high schoolers and many Twitter jokes for everyone else.

 

As with most word problems, I’m sure that what confounded people was making the translation between a verbal description of the problem and a set of equations.  The actual math problem that needs to be solved is pretty standard for a calculus class.  It just comes down to finding the minimum of the function T(x) (which is one of the things that calculus is absolutely most useful for).  In practical terms, that means taking the derivative dT/dx and setting it equal to zero.
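
For the record, that direct route goes like this (using the function T(x) from the exam, which is quoted in full below):

T(x) = 5\sqrt{36 + x^2} + 4(20 - x), \qquad \frac{dT}{dx} = \frac{5x}{\sqrt{36 + x^2}} - 4 = 0 \;\Rightarrow\; 25 x^2 = 16(36 + x^2) \;\Rightarrow\; x = 8.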

But it turns out that there is a more clever way to solve the problem that doesn’t require you to know any calculus or take any derivatives.  It has fewer technical steps (and therefore comes with a smaller chance of screwing up your calculation somewhere along the way), but more steps of logical thinking.  And it goes like this.

 

The problem is essentially asking you to find the path of shortest time for a crocodile moving from one point to another.  If the crocodile were just walking on land, this would be easy: the quickest route is always a straight line.  The tricky part is that the crocodile has to move partly on land and partly on water, and the water section is slower than the land section.

If you want to know the speed of the crocodile on land and in the water, you can pretty much read them off directly from the problem statement.  The problem gives you the equation T(x) = 5\sqrt{36 + x^2} + 4(20-x).  The quantity 20 - x in that equation represents the path length for the on-land section, and the quantity \sqrt{36 + x^2} is the path length for the water section.  (That square root is the length of the hypotenuse of a right triangle with side lengths x and 6; apparently the river is 6 meters wide.)  Since \textrm{time} = \textrm{distance} / \textrm{velocity}, this means that the on-land speed is (10/4) m/s and the in-water speed is (10/5) m/s  (remember that the problem measures time in tenths of a second rather than seconds, not that it matters for the final answer).

That was just interpreting the problem statement.  Now comes the clever part.

The trick is to realize that the problem of “find the shortest time path across two areas with different speed” is not new.  It’s something that nature does continually whenever light passes from one medium to another:

[Figure: a beam of light bending as it passes into and out of a piece of glass]

I’m talking, of course, about Fermat’s principle: any time you see light go from one point to another, you can be confident that it took the shortest time path to get there.  And when light goes from one medium to another one where it has a different speed, it bends.  (Like in the picture above: light moves slower through the glass, so the light beam bends inward in order to cross through the glass more quickly.)

The bending of light is described by Snell’s law:

\sin(\theta_1)/\sin(\theta_2) = v_1/v_2,

where v_1 and v_2 are the speeds in regions 1 and 2, and \theta_1 and \theta_2 are the angles that the light makes with the surface normal.

Since our crocodile is solving the exact same problem as a light ray, it follows that its motion is described by the exact same equation.  Which means this:

[Figure: the crocodile’s path bending at the river’s edge, just like a refracted light ray]

\sin(\theta_r)/\sin(\theta_l) = v_r/v_l

Here, v_l = (10/4) m/s is the crocodile’s speed on land, and v_r = (10/5) m/s is its speed in the river.  The on-land path runs straight along the river bank, so \theta_l = 90^\circ and \sin(\theta_l) = 1.  Meanwhile, \sin(\theta_r) = \text{opposite}/\text{hypotenuse} = x/\sqrt{36 + x^2}.

So in the end the fastest path for the crocodile is the one that satisfies

x/\sqrt{36 + x^2} = 4/5.

If you solve that equation (square both sides and rearrange), you’ll get the correct answer: x = 8 m.
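
If you’d like to verify both routes numerically, here is a small Python sketch.  It needs nothing beyond the standard library; it just scans a fine grid of crossing points x and checks the Snell condition at the minimum.

```python
import math

def T(x):
    """Total travel time, in tenths of a second, from the exam's formula."""
    return 5 * math.sqrt(36 + x**2) + 4 * (20 - x)

# Brute-force minimization over the point x where the crocodile
# leaves the water (0 to 20 meters, in millimeter steps).
xs = [i / 1000 for i in range(20001)]
x_best = min(xs, key=T)

print(x_best, T(x_best))                   # 8.0 m, T = 98 tenths of a second
print(x_best / math.sqrt(36 + x_best**2))  # sin(theta_r) = 0.8 = 4/5
```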

So knowing some basic optics will give you a quick solution to the crocodile problem.

This happy coincidence brings to mind a great Richard Feynman quote: “Nature uses only the longest threads to weave her patterns, so each small piece of her fabric reveals the organization of the entire tapestry.”  It turns out that this particular tapestry had both light rays and crocodiles in it.


By the way, this post has a very cool footnote.  It turns out that ants frequently have to solve a version of this same problem: they need to make an efficient trail from the anthill to some food source, but the trail passes over different pieces of terrain that have different walking speeds.

And, as it turns out, ants understand how to follow Fermat’s principle too!  (original paper here)

What would you teach if you could teach absolutely anything?

September 28, 2015

Suppose someone told you the following:

You are invited to teach a class to a group of highly-motivated high school students.  It can be about absolutely any topic, and can last for as little as 5 minutes or as long as 9 hours.

What topic would you choose for your class?

 

As it happens, this is not just a hypothetical question for me at the moment.  In November MIT is hosting its annual MIT Splash event, and the call for volunteer teachers is almost exactly what is written in the quote above.  Students, staff, and faculty from all over MIT are invited to teach short courses on a topic of their choosing, and the results are pretty wild.

A few of my favorite courses from last year:

  • The History of Video Game Music
  • How to Create a Language
  • Cryptography for People Without a Computer
  • Build a Mini Aeroponic Farm
  • Calculating Pi With a Coconut
  • Advanced Topics in Murder

 

So now I would like to turn to you, dear blog readers, for help.

What should I teach about?  Please let me know, in the comments, what you think about either of these two questions:

  1. If you were a high school student, what kind of class would you want to go to?
  2. If you were in my position, what kind of class would you want to teach?

 

The two ideas that come to mind immediately are:

  • Quantum Mechanics with middle school math

Use Algebra 1-level math to figure out answers to questions like: What is wave/particle duality? How big is an atom? How do magnets work?  What is quantum entanglement?

  • The Math Behind Basketball Strategy

Learn about some of the difficult strategic decisions that basketball teams are faced with, and see how they can be described with math.  Then solve a few of them yourself!

Imagining yourself as a high school student, which of those two sounds better to you?  Any suggestions for alternative ideas or refinements?

 

UPDATE:  You can find my courses listed on the MIT Splash catalog here.  Thanks for all your helpful comments, everyone!  This should be a lot of fun.
