# Experts, Amateurs, Or Algorithms: Who Or What’s Best At Predicting Oscars?

## Every year, there’s a long list of people making Oscar predictions, from esteemed critics to… not so esteemed critics. This year, a growing number of data and social media forecasters weighed in. You may have predicted that algorithms were the safest bet, but the numbers are more complicated.

As was generally anticipated, the mostly silent throwback film, The Artist, was the big winner at Sunday night’s Academy Awards. But when it came to making accurate Oscar predictions, which group reigned victorious: the writers or the robots?

There was a huge amount of data-based conjecture leading up to this year’s Oscars, from Organic’s social-media-driven analysis covered here last week to Harvard freshman Ben Zauzmer’s linear algebra approach.

But do these systems actually work when it comes to predicting outcomes? Who, or what, should you rely on if you want to win your 2013 Oscar pool: Ebert or algorithms?

Not surprisingly, it depends on which critic (or dataset) you’re using. While no mathematical method we found could top the New York Times’ Melena Ryzik, who correctly guessed 20 of 24 possible winners, the bots were far more consistent than most of the humans. For example, while few would doubt the cinema knowledge of Ryzik’s colleague A.O. Scott, he scored only 12 out of 24. The New Yorker’s Richard Brody didn’t fare much better, correctly predicting 14 winners.

While individual critics provided wildly mixed results, they had a more solid showing when their picks were considered in the aggregate. That’s how Gold Derby, a website launched by Los Angeles Times alum Tom O’Neil, calculated its predictions, applying an algorithm that draws on expert opinions. This year, the site went 18 for 24.
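The core idea behind aggregating expert picks can be sketched in a few lines. This is an illustrative simplification, not Gold Derby’s actual algorithm (which also weights experts by track record); the picks below are hypothetical:

```python
from collections import Counter

def aggregate_picks(expert_picks):
    """For each category, take the pick named most often across
    experts as the consensus prediction (simple majority vote)."""
    consensus = {}
    for category, picks in expert_picks.items():
        consensus[category] = Counter(picks).most_common(1)[0][0]
    return consensus

# Hypothetical picks from three experts in two categories:
picks = {
    "Best Picture": ["The Artist", "The Artist", "Hugo"],
    "Best Actor": ["Dujardin", "Clooney", "Dujardin"],
}
print(aggregate_picks(picks))
# {'Best Picture': 'The Artist', 'Best Actor': 'Dujardin'}
```

The intuition is the same as any wisdom-of-crowds scheme: individual experts’ idiosyncratic misses tend to cancel out when their votes are pooled.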

But can Oscar winners be forecast without any reliance on human predictions? That’s what Zauzmer of Harvard University sought to do. His formula calculated the correlation, over the past 10 years, between winners of contests decided before the Oscars, like the Golden Globes and the BAFTAs, and the eventual Oscar winners. He then applied those factors to this year’s nominees, also taking into account scores from the review-aggregator sites Rotten Tomatoes and Metacritic, which rate films on quality but make no Oscar predictions. His final tally was an impressive 19/24.
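In spirit, this kind of model scores each nominee as a weighted combination of precursor signals and picks the top scorer. The sketch below is a rough illustration of that idea, not Zauzmer’s actual formula: the weights stand in for his historically derived correlations, and all the numbers are made up:

```python
def oscar_score(signals, weights):
    """Weighted sum of a nominee's precursor signals.
    In a real model the weights would come from historical
    correlations between each signal and Oscar outcomes."""
    return sum(weights[name] * value for name, value in signals.items())

# Hypothetical weights and signals (award wins as 0/1, review scores scaled 0-1):
weights = {"golden_globe": 0.5, "bafta": 0.3, "rotten_tomatoes": 0.1, "metacritic": 0.1}
nominees = {
    "The Artist": {"golden_globe": 1.0, "bafta": 1.0, "rotten_tomatoes": 0.97, "metacritic": 0.89},
    "Hugo": {"golden_globe": 0.0, "bafta": 0.0, "rotten_tomatoes": 0.94, "metacritic": 0.83},
}
predicted = max(nominees, key=lambda n: oscar_score(nominees[n], weights))
print(predicted)  # The Artist
```

The key design point is that no human opinion enters the calculation: every input is either an earlier award result or an aggregate review score.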

The success of Zauzmer’s method lies in the fact that it mimics the way many critics, including Roger Ebert, already make their predictions: by looking at the results of earlier award shows that season (Ebert guessed only the major categories, but went 9 for 10). Since that analysis is already rooted in numbers, it makes sense that a math formula could perform at least as well as any human. Furthermore, formulas don’t let personal preference muddy the waters, which can consciously or subconsciously skew human projections, especially when the human is as passionate about movies as critics are.