Hayes, Adelman, and Morey: Taking Stats Seriously

I'm going to leave the official recap to someone (Xiane?) who actually saw the second half of the game last night. Instead, I'd like to talk about a discussion that took place between myself and a few commenters over at Clips Nation. If you didn't see the game, the Rockets battled the Clippers closely for three quarters, occasionally pulling ahead but never able to totally break away, until finally the LA offense (and defense) collapsed late in the fourth to create a blowout.

So, anyways, the discussion... Basically, I responded to two things: (1) the statement (typical of most observers at this point) by Steve Perrin that Chris Kaman had a significant height advantage over Chuck Hayes and so should be able to dominate on offense, and (2) Steve's question about whether we should really believe that Daryl Morey has achieved what he has via stats, or whether the Rockets' success came from traditional scouting and so could have happened with any "traditional" team. The commenters and I went back and forth about this for a while, but I've had a few hours to let it stew in my head, and I think I can consolidate my argument better here (not that anyone really gives a shit). My point is fairly simple: we must take stats (and the statistical revolution) seriously, and I mean this in two ways.

On the first point, Chuck Hayes is a terrific post defender. I'm not going to pretend that this is something that is only noticed by stats gurus. The announcers in the Conference Semifinals noticed this last year, and I think there's a general recognition that Chuck is, at the very least, a good defender.

And yet we routinely hear sportswriters, commentators, announcers, opposing fans, and opposing players (including Chris Kaman last night) state that there is no way someone "only" 6'6" tall could guard "7-footer #822." And then we see those guys get shut down. Yes, there are exceptions, but Hayes has, for the most part this year, done a fantastic job against opposing bigs. Yet even after this happens, we continue to hear the same rhetoric. What is most galling to me is that many Rockets fans, including many who view Morey's work very favorably, often repeat these words.

(I should make an aside here: this is not to say that Hayes is somehow the perfect defender. Chuck's skills are largely limited to single-man defense, and his defense of the basket is limited to stripping the ball from opponents - something he's very good at - and taking charges. Yao can step into the lane and defend the paint against all opponents, but Chuck can't do this. Such is life.)

Now, here is what I mean when I say we should take stats seriously. In some sense, this attitude comes from a failure to really believe what the statistics say. Adjusted Plus-Minus and a variety of other metrics have consistently shown Chuck to be one of the best defensive players in the league over his career, and they really do mean that. They do not mean it in some nuanced way. They simply mean that, when Chuck is on the court, the Rockets defend better than when he is sitting on the bench. And so, if one is tempted to say that Chuck will have trouble with Greg Oden, Andrew Bynum, or even guys like Gasol or Dirk (everyone is going to have trouble with Dirk and Gasol, though I'd bet smaller, quicker players like Chuck have an easier time against them than most, and I think that was borne out in the playoffs last year), that temptation ultimately stems from a failure to take what the statistics say seriously.
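To make that on/off idea concrete, here's a toy sketch of the raw comparison those metrics start from. The stint data is invented, and real Adjusted Plus-Minus models are regressions that also account for who else is on the floor - this is only the simplest version of the claim:

```python
# A toy version of the on/off split that plus-minus style metrics start from.
# All stint data here is invented for illustration.

def def_rating(points_allowed, possessions):
    """Points allowed per 100 defensive possessions (lower is better)."""
    return 100.0 * points_allowed / possessions

# Each stint: (points_allowed, defensive_possessions, player_on_floor)
stints = [
    (18, 22, True),
    (25, 21, False),
    (12, 15, True),
    (30, 26, False),
]

on_pts = sum(p for p, _, on in stints if on)
on_poss = sum(q for _, q, on in stints if on)
off_pts = sum(p for p, _, on in stints if not on)
off_poss = sum(q for _, q, on in stints if not on)

print("Defensive rating with him on the floor: %.1f" % def_rating(on_pts, on_poss))
print("Defensive rating with him on the bench: %.1f" % def_rating(off_pts, off_poss))

# Adjusted Plus-Minus goes further - it regresses out who else was on the floor
# for both teams - but the underlying claim is the same one stated above:
# with him out there, the team gives up fewer points per possession.
```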

I was most struck by this phenomenon in another area: Bill Simmons' tirade against Bill Belichick's decision to go for it on 4th-and-2 against the Colts a few weeks ago. Here we have a guy who has been something of an unofficial champion for Daryl Morey and his work not taking what the statistics say seriously. 4th-down conversion rates and expected payoff were some of the first work done in advanced football statistics (indeed, a discussion of it was included in my freshman microeconomics textbook), and so we have pretty good data about when coaches should go for it and when they should punt. Turns out, most coaches punt too much (though not Belichick, who was the first to adopt the stat gurus' advice on this). And, as it turns out, the numbers say that the Patriots did the right thing when they went for it:

Combine all these variables and what do we have? According to a formula called "Expected Win Probability When Going For It," Pattani believed that the Patriots had an 80.5 percent chance of winning the game. By punting, they had a 79.0 percent chance of winning. So my argument (made on Monday's podcast) that Bill Belichick should have "played the percentages and punted" was technically wrong. Barely. Belichick did play the percentages if you took those percentages at face value.

I am not disputing the numbers or the methods for achieving them. But by Monday night, based on various columns and message boards (as well as e-mails to my reader mailbox), you would have thought Belichick was a genius for blowing the game. He played the percentages! It wasn't as crazy as it looked! By this logic, Belichick also should have held a loaded pistol to his head on the sideline, spun the chamber and tried to shoot himself like Chris Walken in "The Deer Hunter." If those 1-in-6 odds came through and he succeeded, we could have said, "Hey, he played the percentages: 83.6666 percent of the time, you don't die in that situation! You can't blame him for what happened!"

Let's disregard Simmons' (embarrassingly bad) analogy and instead focus on what's more interesting about this statement. "I am not disputing the numbers," he says. And yet the numbers apparently do not matter in Simmons' final analysis. What matters is that Belichick "obviously" made the wrong move - "obviously" because it goes against Simmons' assumptions about the way the game is supposed to be played.

The "Belichick made the right move" argument was nearly as dense. In the biggest game of the regular season, when a football coach tries something that -- and this is coming from someone who watches 12 hours of football every Sunday dating back to elementary school -- I cannot remember another team doing on the road in the last three minutes of a close game, that's not "gutsy." It's not a "gamble." It's not "believing we can get that two yards." It's not "revolutionary." It's not "statistically smart." It's reckless. It's something that should happen only in video games, and only when you and your roommate are both high.

Simmons then launches into another tirade about APM statistics and Tyrus Thomas (er, Tim Thomas) - and I won't dispute Simmons' characterization of Thomas as a "one-man swine flu." Suffice it to say that the rest is a pained argument about context in statistics. (Yes, context is important, and that's why the best statistics attempt to account for it. But I get the feeling that if the stats said "4th-and-2 after coming out of a timeout on the road and facing a bad defense after your team is melting down: go for it, baby!" Simmons would still say the same thing.)

Here's the issue: the stats say that the payoff was higher by going for it - they really do say that - and to disregard them because they contradict the assumptions you've made about football since childhood is to use statistics merely as a way to confirm your own biases. If we're going to accept some of what advanced stats have to say, we should be willing to accept what those same stats say about more general things. Yes, context matters, and if the statistical analysis keeps going against the grain and contradicting perceived experience, then something may be awry - but we should not jump to that conclusion just because the expected-win tables contradict what your high school football coach or John Madden told you about winning games.
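The arithmetic behind that claim is simple enough to sketch. The component probabilities below are placeholders I made up for illustration - they are not the actual inputs to Pattani's model - but they show how an "expected win probability" number like the quoted 80.5 percent is just a weighted average over what can happen on the play:

```python
# Hypothetical inputs, chosen only to illustrate the structure of the calculation.
p_convert = 0.60       # rough chance of converting 4th-and-2
wp_if_convert = 1.00   # convert and New England effectively runs out the clock
wp_if_fail = 0.50      # fail and Indy still has to drive ~30 yards and score
wp_if_punt = 0.72      # punt and Indy has to drive ~70 yards in two minutes

ewp_go = p_convert * wp_if_convert + (1 - p_convert) * wp_if_fail
ewp_punt = wp_if_punt

print("Expected win probability, going for it: %.1f%%" % (100 * ewp_go))   # 80.0%
print("Expected win probability, punting:      %.1f%%" % (100 * ewp_punt)) # 72.0%

# With ESPN's carefully estimated inputs the gap was much smaller (80.5 vs. 79.0),
# which is exactly why "played the percentages" is the right description either way.
```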

So what does that mean for Chuck and the Rockets? Simple: Chuck really is a good defender. That's really all there is to it. And when someone says that a 7-footer should be able to rise up and shoot over Chuck, then we should immediately know that he is either not looking at the advanced numbers or else is not taking what they say seriously, using them only when convenient.

The second point is a little tougher to address. Are stats really what has made this Rockets team relatively successful? Yes and no.

"No," in the sense that Daryl Morey is not simply a good statistician - he is pretty good at making deals, negotiating with free agents, etc. He can do all the other things (besides identifying talent) that one needs to do as a GM (in other words, actually acquiring talent after you've identified it).

On another level (and this was something - understandably - harped on by Clippers fans), Rick Adelman is a great coach, despite his - IMO undeserved - rep around here as "Coach Sleepy." He knows what he is doing, and the front office and players clearly trust him. But that's partly because (as he is so quick to point out), Morey gives him players he can trust. Were Adelman in Dunleavy's situation (admittedly a shithole of his own design), I seriously doubt the Clippers would be on the playoff track or anything. Coaches matter, but they don't matter nearly as much as the people actually, you know, playing the game. So, yes, Adelman is a part of the Rockets' success, but he does not explain most of it, let alone all of it.

The main charge leveled at the "Statistical Revolution" by the commenters was that none of these moves is particularly revolutionary in and of itself. I want to look at that more closely.

We're conditioned, perhaps by the experience of the Stats Revolution in baseball, to expect a great deal of resistance from the NBA establishment toward this sort of thinking. I think, however, that the baseball experience was, for whatever reason, somewhat unique. Maybe it was unique because of the sort of people who own baseball clubs and comment on baseball teams, but in any case baseball statistics developed very differently from basketball stats. Sabermetrics started with Bill James doing all of his statistics by himself, placing an ad in The Sporting News, and sending out binders of his abstracts to the people who responded. There was resistance to baseball stats from the start - enough that teams didn't start using computers to look at basic statistics until the mid-80s, and even then it was only (if I remember right) the White Sox, who employed STATS Inc. Perhaps because baseball stat geeks went against the established "small ball" recipe for success, they encountered a lot of hostility - but it's not as if their view was unheard of. It was pioneered by Branch Rickey, used by Earl Weaver with great success, and so on.

When you get down to it, though, a successful "Moneyball" team in 2002 wasn't that revolutionary. Anybody could see that OBP was important, right?

Well, yes. But you still had people (many people, including announcers, GMs, and managers) criticizing players for walking (shit, I heard it in 2005 about Morgan Ensberg). Getting on base was always seen as a good thing, but most people didn't understand how good it was.

So, let's look at basketball. Perhaps basketball stats have faced less resistance because the only "revolutionary" strategic claim from the stat gurus has been "mid-range shots are for suckers," and that wasn't particularly revolutionary. Or maybe it's because of the sort of people who own basketball teams compared to baseball teams (I think it's less of an "old boys" club). Really, all that people like Morey, Pelton, Pritchard, Presti, etc. have said is:

  1. Efficiency is the right way to evaluate team defense and offense.
  2. Defense matters a lot more than most people think it does.

That's it, basically. What's so revolutionary about that?
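To be concrete about what point 1 even means: "efficiency" here is just points per 100 possessions rather than points per game. A minimal sketch, using the standard box-score possession estimate and invented team totals:

```python
def possessions(fga, orb, tov, fta):
    """Standard box-score estimate of a team's possessions."""
    return fga - orb + tov + 0.44 * fta

def rating(points, poss):
    """Points scored (or allowed) per 100 possessions."""
    return 100.0 * points / poss

# Invented totals for a single game
pts_for, pts_against = 98, 92
poss = possessions(fga=80, orb=10, tov=14, fta=25)

print("Possessions:      %.1f" % poss)
print("Offensive rating: %.1f" % rating(pts_for, poss))
print("Defensive rating: %.1f" % rating(pts_against, poss))

# A slow team can give up very few points per game and still play bad defense;
# per-possession ratings strip the pace out, which is the whole point of item 1.
```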

For one, this is (as was the case with OBP in baseball) an issue of emphasis. Just as people in 1979 recognized that it was important to get on base, people in basketball recognized that these things were important - they just didn't value them to the extent that the statistically-oriented GMs do. That's why the fact that Shane Battier gets starter minutes, when most coaches and GMs would place him in a reserve role, is so interesting. Same goes for Chuck Hayes. And it's easy to forget that the prevailing attitude about starting Aaron Brooks last season (or, for that matter, drafting him) was fairly negative. The Rockets were "packing it in."

Now, you can say that these decisions could be made by GMs other than Morey, Pritchard, etc. I'd agree with that - the Grizzlies gave Battier a relatively large deal and made him a starter, after all.

But - and here's the rub, as it were - would all of these decisions be made by typical front offices?

No, I think not, or else a lot more teams would look like the Rockets, Blazers, or even the Thunder. Instead, we see most teams employ several players who are basically useless, good benches are seen more as a nice addition than an absolute must, and guys like Ben Gordon get star-level contracts. Players like Battier and Hayes are the exception more than the rule.

Beyond this practical concern, there's a more basic one lurking around: How do we know stats work? And here, again, the proof is in repetition.

(Again, an aside: much of this "debate" seems silly to me. The argument is not really over whether or not stats "work"; it's whether stats describe reality, and I think they obviously do describe real things reasonably accurately. And we're going to get more accurate at this sort of work next year, as more data is incorporated into models and theories are revised. Still, the proof of whether or not stats describe reality is in whether or not they can accurately predict the future.)
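To put that test in code: here's a hedged sketch, with invented numbers, of what "accurately predict the future" means in practice - a stat-based projection earns its keep by beating a naive baseline, year after year:

```python
# (projected_wins, actual_wins) - all numbers invented for illustration
projections = {
    "Team A": (55, 53),
    "Team B": (48, 51),
    "Team C": (30, 26),
    "Team D": (41, 44),
}

def mean_abs_error(pairs):
    pairs = list(pairs)
    return sum(abs(pred - actual) for pred, actual in pairs) / len(pairs)

stat_mae = mean_abs_error(projections.values())
# Naive baseline: predict a .500 season (41 wins) for everybody
naive_mae = mean_abs_error((41, actual) for _, actual in projections.values())

print("Stat projection error: %.1f wins per team" % stat_mae)
print("Naive baseline error:  %.1f wins per team" % naive_mae)

# One good year could be luck. If the projection keeps beating the baseline
# season after season, that repetition is the evidence that the stats are
# describing something real.
```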

According to BPro, there are currently eight teams employing statistical consultants in their front offices in some capacity. Obviously, some are more involved with analysis than others: I'd say the Rockets, Thunder, and Blazers form the "big three" of statistical work, but we know teams like the Nuggets, Celtics, Spurs, and Mavericks utilize statistical analysis as well. Of those eight teams, only one (OKC) didn't win at least 50 games.

Think about that for a minute. If, as I said, the burden of proof is in predictive capacity, and predictive capacity is inherently tied to wins, I think we have clear evidence that stats are describing reality.

"But," opponents will say, "maybe stats aren't what's really getting this done."

Now, let me say that I am in no way a strict empiricist. I'd categorize my philosophical outlook as "Kantian" - empirical science is a great boon to humanity, but we must be conscious of the limitations of our perception. Hume's internal critique of science is valid. We cannot see causal connections.

But, as Hume notes, even though we can't see the causal connection between eating and not starving to death, we'd be fools not to eat.

If we keep seeing these conjunctions - stats-oriented front offices win more games than "traditional" ones, they make better decisions more often, etc. - then perhaps we should recognize that these GMs and front offices are doing something different and important, and that "something" is advanced statistical analysis.

And that means we have to take these stats seriously (on both of the levels I described). We must understand that statistical analysis better predicts player contributions than "gut feeling," and so we need to take statistics seriously by recognizing their ability to accurately describe the world. And, as a result of that, we must not shy away from what those statistics say simply because they clash with the basketball discourse we bring to the table.

I suppose this is a long way of writing "Chuck Hayes really is good and Daryl Morey really does know what he's doing because the Rockets win games," but such is the nature of the debate, I suppose.