
Experts, Amateurs, Or Algorithms: Who Or What's Best At Predicting Oscars?

Every year, there’s a long list of people making Oscar predictions, from esteemed critics to… not so esteemed critics. This year, a growing number of data and social media forecasters weighed in. You may have predicted that algorithms were the safest bet, but the numbers are more complicated.

As was generally anticipated, the mostly silent throwback film, The Artist, was the big winner at Sunday night’s Academy Awards. But when it came to making accurate Oscar predictions, which group reigned victorious: the writers or the robots?

There was a huge amount of data-based conjecture leading up to this year’s Oscars, from Organic’s social-media-driven analysis covered here last week to Harvard freshman Ben Zauzmer’s linear algebra approach.

But do these systems work in terms of predicting actual outcomes? Who, or what, should you rely on if you want to win your 2013 Oscar pool—Ebert or algorithms?

Not surprisingly, it depends on which critic (or which data) you’re consulting. While no mathematical method we found could top the New York Times’ Melena Ryzik, who correctly guessed 20 out of 24 possible winners, the bots were far more consistent than most of the humans. For example, while few would doubt the cinematic knowledge of Ryzik’s colleague A.O. Scott, he scored only 12 out of 24. The New Yorker’s Richard Brody didn’t fare much better, correctly predicting just 14 winners.

While individual critics provided wildly mixed results, they had a more solid showing when their picks were considered in the aggregate. That’s how Gold Derby, a website launched by Los Angeles Times alum Tom O’Neil, calculated its predictions, applying an algorithm that draws on expert opinions. This year, the site went 18 for 24.
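Gold Derby hasn’t spelled out its formula here, so treat the following as a rough sketch rather than the site’s actual method: a short Python snippet showing how a slate of expert picks could be combined by weighted vote. The expert names, weights, and picks are invented purely for illustration.

    from collections import Counter

    def aggregate_picks(expert_picks, weights=None):
        """Combine per-category picks from several experts into one consensus slate.

        expert_picks: {expert: {category: predicted_winner}}
        weights: optional {expert: weight}, e.g. based on past accuracy.
        """
        weights = weights or {name: 1.0 for name in expert_picks}
        tallies = {}  # category -> weighted vote count per nominee
        for expert, picks in expert_picks.items():
            for category, nominee in picks.items():
                tallies.setdefault(category, Counter())[nominee] += weights[expert]
        # The consensus pick in each category is the nominee with the most weighted votes.
        return {category: votes.most_common(1)[0][0] for category, votes in tallies.items()}

    # Invented experts and picks, purely for illustration.
    picks = {
        "expert_a": {"Best Picture": "The Artist", "Best Actor": "Jean Dujardin"},
        "expert_b": {"Best Picture": "The Artist", "Best Actor": "George Clooney"},
        "expert_c": {"Best Picture": "Hugo", "Best Actor": "Jean Dujardin"},
    }
    print(aggregate_picks(picks))
    # {'Best Picture': 'The Artist', 'Best Actor': 'Jean Dujardin'}

The appeal of pooling picks this way is the same as in any wisdom-of-the-crowd exercise: one critic’s blind spot tends to get outvoted.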

But can Oscar winners be forecast without any input from human predictions? That’s what Harvard’s Zauzmer set out to do. His formula calculated how strongly winners of earlier contests like the Golden Globes and the BAFTAs (which are handed out before the Oscars) have correlated with eventual Oscar winners over the past 10 years. He then applied those factors to this year’s nominees, also folding in scores from the review-aggregator sites Rotten Tomatoes and Metacritic, which rate films on quality but make no Oscar predictions. His final tally was an impressive 19/24.
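Zauzmer’s exact coefficients aren’t published in this piece, but the general shape of such a model is simple: give each precursor signal a weight (ideally derived from how well it has tracked the Oscars in past years) and pick the nominee with the highest weighted score. The weights and numbers below are placeholders, not his.

    def score_nominee(signals, weights):
        """Weighted sum of a nominee's precursor signals.

        signals: {signal: value}, e.g. 1/0 for precursor-award wins, 0-100 review scores.
        weights: {signal: weight}, ideally estimated from past years' agreement
                 between each precursor and the eventual Oscar winner.
        """
        return sum(weights.get(name, 0.0) * value for name, value in signals.items())

    # Placeholder weights and signal values -- not Zauzmer's actual figures.
    weights = {"golden_globe": 0.3, "bafta": 0.4, "rotten_tomatoes": 0.002, "metacritic": 0.002}
    nominees = {
        "The Artist": {"golden_globe": 1, "bafta": 1, "rotten_tomatoes": 97, "metacritic": 89},
        "Hugo":       {"golden_globe": 0, "bafta": 0, "rotten_tomatoes": 94, "metacritic": 83},
    }

    predicted = max(nominees, key=lambda n: score_nominee(nominees[n], weights))
    print(predicted)  # The Artist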

The success of Zauzmer’s method lies in the fact that it mimics how many critics, Roger Ebert among them, already make their predictions: by looking at the results of the year’s earlier award shows (Ebert guessed only the major categories, but went 9 for 10). Since that analysis is rooted in numbers anyway, it makes sense that a math formula could perform at least as well as any human. Formulas also keep personal preference from muddying the waters; such preferences can consciously or subconsciously skew human projections, especially when the human is as passionate about movies as critics tend to be.

Finally, there were numerous social media analyses measuring positive or negative sentiment toward Oscar-nominated films on Twitter and Facebook. These were the least accurate predictors, probably because the demographics of Oscar voters (overwhelmingly male and white) don’t exactly reflect the moviegoing public. While The Artist still prevailed as Best Picture in most social media surveys, the tweeting public showed support for recognizable names like Martin Scorsese, George Clooney, and Brad Pitt in the Director and Actor categories over Michel Hazanavicius and Jean Dujardin, the guys who won. We should, however, credit the social media world for correctly calling Meryl Streep for Best Actress over the fashionable critics’ pick, Viola Davis.
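Those social media forecasts weren’t described in enough detail to reproduce, but at their core they reduce to a tally like the sketch below, which assumes the genuinely hard part (labeling each post as positive or negative) has already been done; the sample posts are made up.

    def net_sentiment(posts):
        """posts: list of (film, label) pairs, with label in {'pos', 'neg'}.
        Returns each film's positive-minus-negative mention count."""
        scores = {}
        for film, label in posts:
            scores[film] = scores.get(film, 0) + (1 if label == "pos" else -1)
        return scores

    # Toy labeled posts, purely illustrative.
    sample = [("The Artist", "pos"), ("The Artist", "pos"), ("Hugo", "pos"), ("Hugo", "neg")]
    scores = net_sentiment(sample)
    print(max(scores, key=scores.get))  # The Artist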

Oscar conjecture is far from a perfect science. Oscar voters exhibit predictable tendencies again and again, but those tendencies often clash within a single category, as in this year’s Best Actress race, which pitted an acting legend (Streep) against a rising star (Davis), two types the Academy historically loves. But even if humans and algorithms will never predict every Oscar winner correctly, that won’t stop them from trying (audiences need something more than Billy Crystal laughing at his own jokes to sustain them for three hours).

And if you think algorithmic projection is big now, just wait: Data junkies are going to have a field day next month when March Madness rolls around.
