Last month, we asked who (or what) was best at predicting March Madness winners: experts, crowds, or algorithms? Now that Kentucky and its unibrowed hero Anthony Davis have emerged as victors, and the streets of America are strewn with torn-up brackets, which methodology reigned supreme?
Robots, amateurs, and experts can all pat themselves on the back, as the majority of each group accurately predicted a win for Kentucky. This was no fluke either, with the Wildcats winning each game by an average of nearly 12 points. Interestingly, despite their dominance, Kentucky’s win defied two pieces of conventional roundball wisdom: A. No. 1 overall seeds rarely win (it’s only happened twice in the nine years since the committee started handing out that distinction); B. Experience wins championships (Kentucky’s five leading scorers on the season were all freshmen and sophomores).
The relative success rates of each predictive method begin to vary once you look past the title game. First, let’s consider the algorithmic approaches. On one hand, teams like Memphis, Wichita St., and Missouri were all favored by computer models, and yet suffered early exits. But these calculations were made by comparing each team against the field as a whole, and not necessarily against the squads they were slated to play.
When algorithmic approaches took individual matchups into consideration, however, the computers fared much better. Scott Turner, who has a PhD in Artificial Intelligence from UCLA, explored the success of single-game computer-based predictions on his blog, Net Prophet, pinpointing weaknesses in both Memphis and Missouri, at least when compared to their first-round opponents. Of course, the lesson here isn’t that we should’ve known to pick 15-seeded Norfolk State over 2-seeded Missouri (before this tournament, 15-seeds had only won four times in 27 years), but simply to think twice before penciling Missouri into the Final Four based on overall statistical strength (as I mistakenly did in my bracket). Incidentally, Turner’s bracket, compiled entirely by a computer with no human input as part of the annual Machine March Madness competition, ranked 37,251st in ESPN’s Tournament Challenge going into the final game, which isn’t bad considering over 6 million brackets were submitted.
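To make the distinction concrete, here is a minimal sketch of the two approaches. The ratings and scale below are invented for illustration, and this is not Turner’s actual model or any published system: a field-strength method simply ranks teams by rating, while a matchup-based method converts the same ratings into a head-to-head win probability (here, a simple logistic curve), which makes the upset risk in a specific pairing explicit.

```python
import math

# Hypothetical power ratings -- illustrative numbers only, not real 2012 data.
ratings = {"Missouri": 91.5, "Norfolk State": 72.0}

def win_probability(rating_a, rating_b, scale=10.0):
    """Logistic win probability for team A beating team B.

    The difference in ratings is squashed through a logistic curve,
    so equal teams get 0.5 and large gaps approach (but never reach) 1.0.
    """
    return 1.0 / (1.0 + math.exp(-(rating_a - rating_b) / scale))

# Field-strength view: rank by rating alone; Missouri looks like a lock.
overall_favorite = max(ratings, key=ratings.get)

# Matchup view: the same ratings expressed head-to-head, which shows the
# upset probability is small but far from zero.
p_upset = win_probability(ratings["Norfolk State"], ratings["Missouri"])
```

The point isn’t the particular numbers; it’s that a ranking by overall strength hides exactly the quantity (a nonzero single-game upset probability) that a matchup model surfaces.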
What about the experts? As we said last month, expert brackets tend to be littered with upsets, just not always the correct ones. Some of the trendiest upset picks from the CBS Sports camp included Long Beach State, Davidson, and Memphis, all of whom lost early. Like the film critic who heralds a movie no one’s ever heard of as the best film of the year, it only makes sense that someone who lives and breathes college basketball five months out of the year would make some unconventional picks. Just don’t let them take your bracket down with them.
As for the crowds, they tend to place their trust in the Selection Committee’s decisions by calling very few upsets, which worked brilliantly in 2008 when all four No. 1 seeds made it to the Final Four, and disastrously in 2006 when none of them did. But if there’s one thing the public’s collective 2012 bracket made clear, it’s this: If you’re going to pick upsets, don’t go for the popular ones. Of the mere five upsets that participants in CBS Sports’ National Bracket did bet on, only one panned out (North Carolina State over San Diego State).
The conclusion here is one we should all be used to hearing by now: The robots are winning. While expert picks tend to be too esoteric, and fan picks tend to be too safe, computer-based methods, particularly when taking individual matchups into consideration, strike a strong balance between picking favorites and calling educated upsets. After all, computers aren’t trying to look smarter than everyone else; they’re just trying to win.
[Image: Flickr user jspatchwork]