Tuesday, March 18, 2014

On the NCAA Tournament and Its Seeds

There's a lot of frustration among Cat fans, Louisville fans -- and Wichita State fans -- over how the NCAA seeded the Midwest Region.  Louisville fans expected a one seed -- they were given a four.  UK fans were expecting a six seed -- they were given an eight.  And Wichita State fans, who thought they had earned a relatively easy draw with their 34-0 record, now face the possibility that the Shockers will have to beat the last two national champions just to reach the Elite Eight.

How did this happen?  As usual, the NCAA won't explain exactly how it reached its decision.  But Sporting News quoted Ron Wellman, the chairman of the Selection Committee, as saying that "[t]he factor that probably hurt Kentucky as much as anything is that they had two wins against tournament teams, both of which occurred in December."  He also said that "when we looked at the entire body of work of Louisville versus everyone else on the board, we felt that they were slotted appropriately."

Before saying anything else, I should point out that there are obvious problems with each of these statements:

1.  Kentucky has three wins against NCAA Tournament teams, one of which was in January when they beat Tennessee.  (Oh, and San Diego State also had only three wins against Tournament teams.  But it's seeded fourth in the West.)

2.  As Sporting News concedes, Wellman's statement about Louisville isn't really an argument at all -- it's just an assertion with no evidence.

In the rest of the article, Wellman throws out other arguments when asked about other teams:

On why SMU didn't get in:  "Their nonconference strength of schedule was ranked number 302 out of 350 teams eligible for the tournament.  Their overall strength of schedule was ranked 129; 129 would have been by far the worst at large strength of schedule going into the tournament."

On why N.C. State did get in:  "The positive factor for N.C. State was that they had three wins against top 50 teams away from home."

Now you can get into an argument over each of these factors.  But that would be a mistake.  Because this isn't 1983 anymore.  We have computers, and we have lots of data, and we have lots of smart people who have spent a huge amount of time trying to analyze college basketball.  And you know what people like Ken Pomeroy and Nate Silver and the folks at ESPN who developed the Basketball Power Index didn't do?  They didn't base their systems on principles like "How many times did you beat NCAA Tournament teams?"  "How many times did you beat top 50 teams on the road?" or "What was your record against top 50 teams?"  In fact, most sophisticated analysis of college basketball recognizes that there is a big element of chance that can affect the outcome of any one particular game.

And they're right.  Let's take NCSU and its road wins.  In the first place, this appears to be another mistake -- as far as I can tell, the Wolfpack had only two road wins against teams in the top 50 of the RPI.  On December 18, they beat Tennessee 65-58 in Knoxville, and on March 3 they beat Pittsburgh 74-67 at Pittsburgh.  (NCSU beat Syracuse in the ACC Tournament, but that was hardly a road game.)  Furthermore, these two wins were obviously flukey.  The Vols went 3-24 from three-point range when they played the Wolfpack, while NCSU's win over Pitt was driven by T.J. Warren's second-best game of the year -- he went for 42 points on 16-22 shooting from the field.  Back in January, when the same two teams met in Raleigh, he scored only 23 points, and the Wolfpack lost by 12.

And that's why serious analysts don't rely on the results of a single game, or even a few games, to measure a team.  You need a system that does a better job of weeding out the inconsistencies.  For example, 12 days after UT lost to NCSU, the Vols hosted Virginia.  This time, UT was on fire -- the Vols went 11-18 from behind the arc.  They beat Virginia by 35.  Does that mean that NCSU is better than UVA?  Of course not, as shown by UVA's ACC championship season (which featured a 31-point win over NCSU).
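The swing from Tennessee's 3-of-24 night against NCSU to its 11-of-18 night against Virginia sounds like it must mean something, but a quick simulation shows how far raw chance alone can move a single game's shooting.  This is just an illustrative sketch -- the 35% "true" shooting percentage here is an assumed figure, not either team's actual number:

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

def threes_made(attempts, pct):
    """Simulate one game's made threes for a team whose true percentage is pct."""
    return sum(random.random() < pct for _ in range(attempts))

# A team that truly shoots 35% from three, over 1,000 simulated 24-attempt games:
games = [threes_made(24, 0.35) for _ in range(1000)]
print(min(games), max(games))  # ice-cold and red-hot games both show up by luck alone
```

Even with an identical underlying team, the simulated results range from a 3-of-24-style disaster to an 11-of-18-style explosion -- which is exactly why one box score tells you so little.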

From years of looking at data like this, most analysts have concluded that the best way to study basketball is to focus on how efficient a team is on both offense and defense, and then to recognize that there will be significant inconsistencies from game to game that can reflect nothing more than bad luck.  As far as I can tell, all of the most sophisticated efforts to study college basketball are built around these basic assumptions.  So hearing the head of the NCAA Selection Committee refer to outdated concepts like "top 50 wins" or "wins against NCAA tournament teams" is very discouraging.

Why doesn't the Committee use a more sophisticated approach?  We can't know for sure.  But I think it's worth reflecting on the potential winners and losers if the NCAA were to make a change.

If the NCAA would just adopt a system -- any coherent system that could be studied and understood -- fans, players, and coaches would all be winners.  No longer would we have to spend weeks guessing about what was going to happen -- the standards would be clear, and teams would know what they had to do and whether they had done it or not.  But on the other hand, the Committee members would be big losers.  In fact, if you had a good system, you would only need a committee to do things like make sure that you didn't have teams from the same conference meeting in the first few rounds, or that BYU wouldn't have to play on a Sunday -- just housekeeping issues, really.  And where would the fun be in that?  What about the joy of poking through briefing books full of (mostly irrelevant) data, and playing around with the matchups, and knowing stuff before everyone else?

The important thing to remember is this:  so long as the NCAA doesn't have any system, the members of the Committee can make the seeds come out pretty much any way they want.  And that seems to be the way they want it.

