Part 1: Introduction
You may have seen a few websites analyzing coaches' decisions recently - Bill Belichick's famous failed 4th-down gamble pushed the topic into mainstream visibility, but footballcommentary.com has been around for a while, and Advanced NFL Stats is more recent.
The entire idea behind both of those websites is simple: you have a decision between two or more options - we'll stick with two, and call them A and B. Take all possible outcomes of A and compute their probability-weighted average. Do the same for B. If A's average outcome is better than B's, choose A; otherwise choose B.
Pretty simple, huh?
One problem: this ignores the distribution of the outcomes of A and B - that is, it ignores the risk involved. In the long run, the expected value will tend to win out. But games, seasons, and coaches' careers aren't infinite - which means it's not enough to ask "what's the best average expected outcome?" You really have to analyze the risk involved somehow.
Almost all of the 'decision questions' in football pit a risky option against a safe one. Should the coach go for it on 4th and 2, or punt? Punts are relatively safe - it's a fair simplification to say that a punt is always better than a failed 4th-down conversion.
That leaves two choices: take the safe option (punting) and gain a little, or risk that gain for a shot at a lot more.
Brian at Advanced NFL Stats basically puts it this way: going for it is a 60% chance of having a 100% chance to win the game, and a 40% chance of having a 53% chance of winning the game. Punting results in a 70% chance to win the game.
So, let's rescale things to make it more obvious, and subtract 53% (the lowest win probability) from all the numbers. The question then becomes: "should I take a guaranteed 17% improvement in win probability, or a 60% chance at a 47% improvement (with a 40% chance of no improvement at all)?" Obviously the expected-value choice is to go for it. However, look at the two positive outcomes: in one case you win 100% of the time, and in the other you win 70% of the time. Those are both pretty good positions to be in. Now look at the negative outcome: you win 53% of the time. Basically a coin flip.
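To make the arithmetic concrete, here's a quick sketch (in Python, using the probabilities from the example above) of the expected-value comparison:

```python
# Win probabilities from the 4th-down example above.
P_CONVERT = 0.60   # chance the 4th-down attempt succeeds
WP_SUCCESS = 1.00  # win probability after converting
WP_FAILURE = 0.53  # win probability after failing
WP_PUNT = 0.70     # win probability after punting

# Expected win probability of each option.
ev_go = P_CONVERT * WP_SUCCESS + (1 - P_CONVERT) * WP_FAILURE
ev_punt = WP_PUNT

print(f"go for it: {ev_go:.3f}, punt: {ev_punt:.3f}")
# Going for it wins on expected value (0.812 vs 0.700) --
# but that average hides the 40% chance of ending up at a coin flip.
```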
It's reasonable to believe that a coach might look at that and say "you know, winning 70% of the time and ~100% of the time aren't that different, but winning 53% of the time sucks." Let's put it a different way: winning 70% of the time gets you 11 wins over a 16-game season, and a good freaking chance you're in the playoffs. Winning 100% of the time? OK, you're obviously in the playoffs. Winning 53% of the time? 8 wins and you're going home.
What I'm suggesting is that coaches might view their improving odds with diminishing returns, and not see that much of a difference between 70% and 100%. Football is a low-scoring sport, with just a few high-leverage plays - taking the 'safe' decision too often isn't all that crazy.
The interesting thing - to me, at least - is that you could try to figure out exactly how coaches *do* view winning percentages. How valuable is it to a coach to move his winning percentage from 90% to 95%? What about from 50% to 55%? Even though those are equal five-point changes, I doubt most coaches would care much about the former if much risk were involved. Can we measure this?
Easy - instead of starting from the assumption that coaches don't know the percentages and are simply making ill-informed decisions, assume that they *do* know the percentages, and try to find a function u(WP%) that rescales the winning percentages so that the 'safe' decision - the one they actually take - comes out better.
That function - the u(WP%)? It's usually called a utility function.
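As a sketch of what that search might look like - the exponential utility form u(p) = 1 - e^(-a*p) and the risk-aversion parameter a are my own illustrative assumptions, not anything established here - we can scan for the smallest a at which punting beats going for it in expected *utility*, using the same numbers as the example above:

```python
import math

# Win probabilities from the 4th-down example.
P_CONVERT, WP_SUCCESS, WP_FAILURE, WP_PUNT = 0.60, 1.00, 0.53, 0.70

def u(p, a):
    """Illustrative exponential utility of a win probability p.
    Larger a = more risk-averse (stronger diminishing returns on WP%)."""
    return 1.0 - math.exp(-a * p)

def prefers_punt(a):
    """Does a coach with risk aversion a prefer the safe option?"""
    eu_go = P_CONVERT * u(WP_SUCCESS, a) + (1 - P_CONVERT) * u(WP_FAILURE, a)
    return u(WP_PUNT, a) > eu_go

# Scan risk-aversion levels: near a = 0 the utility is almost linear,
# so the coach goes for it; somewhere past that, punting looks better.
threshold = next(a / 100 for a in range(1, 2000) if prefers_punt(a / 100))
print(f"punting becomes 'rational' around a = {threshold:.2f}")
```

Under these made-up assumptions, a single risk-aversion number is enough to flip the decision - which is exactly the kind of reverse-engineering the utility-function approach suggests.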
As it turns out, it doesn't take a lot to do this. I'll show that in a future post, using Belichick's decision, which is pretty ideal for this.