Two college seniors got the election right when almost everyone else got it wrong

Sejla Palic and Mary Beth Benzing, St. Lawrence University students who were among very few in a national statistics competition to correctly predict Donald Trump's Election Day victory. Photo courtesy of Tara Freeman, St. Lawrence University

Mary Beth Benzing and Sejla Palic knew that Donald Trump would be elected president on Tuesday. In fact, they predicted it.

The two seniors at St. Lawrence University in New York took part in Prediction 2016, a contest from the American Statistical Association for high school and college students to analyze publicly available data and predict the election’s winner.

Benzing and Palic’s final analysis, which they submitted on Oct. 30, had Trump winning 36 states, including Michigan and Wisconsin, and tying in New Hampshire. Trump ended up winning 30 states, enough to secure the election and to put Benzing and Palic among the tiny group of statistical prognosticators who got the final result right.

The students said they created a model that relied heavily on information gleaned from the 2012 presidential general election and primaries.


“Everyone was putting weight on 2016 polling, but we used a lot of data from 2012 because we thought it was most relevant,” Benzing said Wednesday. “Putting a little history in there made for a better prediction.”
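The students’ exact model isn’t described in detail, but the idea of anchoring a forecast on the previous cycle can be sketched as a simple weighted blend of a state’s 2012 margin and its 2016 polling average. The weights, the toy margins and the predict_state helper below are hypothetical illustrations, not their actual method.

```python
# Hypothetical sketch of blending 2012 results with 2016 polling.
# The weights and toy margins are illustrative only, not the
# students' actual model.

def predict_state(margin_2012, polling_margin_2016, w_history=0.6):
    """Blend a state's 2012 margin (Republican minus Democratic, in points)
    with its 2016 polling margin. Positive output -> predicted Republican win."""
    return w_history * margin_2012 + (1 - w_history) * polling_margin_2016

# Toy inputs: (2012 margin, 2016 polling margin) for made-up states.
toy_states = {
    "State A": (-0.5, -3.0),
    "State B": (8.0, 2.0),
    "State C": (-7.0, -6.0),
}

for name, (m12, p16) in toy_states.items():
    blended = predict_state(m12, p16)
    winner = "R" if blended > 0 else "D" if blended < 0 else "tie"
    print(f"{name}: blended margin {blended:+.1f} -> {winner}")
```

Putting more weight on w_history is one way to express the students’ stated preference for 2012 data over 2016 polling; the 0.6 figure here is simply an assumption for the sketch.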

Ahead of Election Day, the students knew their prediction was an outlier. Of the 193 individual and group entrants to Prediction 2016 from high schools and colleges across the country, just two had Trump emerging victorious.

“We didn’t necessarily think we’d be right, but the numbers were just what our model was saying,” said Palic, a double-major in statistics and psychology.

As the results came in Tuesday night, “we were quite shocked,” Palic said. And neither was particularly pleased.


“Basically I would have preferred for it to go the other way,” said Benzing, a statistics and economics double-major. “But the results are what they are.”

For Palic, an international student who is Muslim, the result made her “a little scared.” But, she added, “the people have voted and there’s not much we can do about it.”

The students’ statistics professor, Michael Schuckers, was pleased they predicted the correct result, but he worries about the overall performance by trained statisticians in this year’s election.

“Certainly one of the thoughts that I had was that statisticians had generally done a poor job of predicting the outcomes,” he wrote in an email. “As a group, we’ll have to reconsider how we model these sorts of data, though I’m not sure there were many non-statisticians who did any better.”


As many students who took part in the contest found out Tuesday night, statistics is a tricky business.

Yashelle Hunte, a junior at Magruder High School in Rockville, Maryland, averaged the results of several polls over a five-day period to come up with an election prediction.

Her submission had Hillary Clinton winning the popular vote by a whopping eight percentage points. Watching the real numbers come in on Tuesday was “kind of shocking,” Hunte said.

“A lot of states that I thought were going to be one color turned out to be the other,” she said. “I found out that polls are very wrong. People tell the pollsters how they’re going to vote, but it’s not set in stone. They go out to vote and change their minds.”
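A polling average of the kind Hunte describes can be sketched in a few lines: collect every poll released in a recent window and average the candidates’ shares. The poll numbers below are made up for illustration.

```python
# Hypothetical sketch of a simple polling average: take every poll from
# a five-day window and average the candidates' shares. These figures
# are invented for illustration, not Hunte's actual inputs.
from statistics import mean

recent_polls = [
    # (Clinton %, Trump %) from hypothetical polls in the window
    (46.0, 42.0),
    (47.5, 41.0),
    (45.0, 43.5),
    (48.0, 40.5),
]

clinton_avg = mean(p[0] for p in recent_polls)
trump_avg = mean(p[1] for p in recent_polls)
print(f"Clinton {clinton_avg:.1f}% vs. Trump {trump_avg:.1f}% "
      f"(lead {clinton_avg - trump_avg:+.1f} points)")
```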

Ronald Wasserstein, the American Statistical Association’s executive director, was thrilled that so many students embraced the inaugural event.

“Part of the project was digging into a complicated problem and figuring out what types of data could be used and which type of data was more useful than others,” he said. “Some used polling aggregators, some went right to exact polls. Others were looking at trends or past elections to try and predict party turnout. It was all across the board.”

William Christensen, head of statistics at Brigham Young University, served as an advisor for this year’s contest, and he said he was surprised at the level of sophistication many of the students displayed given their limited amount of statistics training.

“I was pleased to see how much students seem to engage not just in the statistical aspects of it, but the politics and historical aspects of the problem,” he said. “And even though some of the methods they use might not translate precisely to business or government or education, the tools they are developing are very much important and translatable.”

On Tuesday, before any of the election results were known and before so many predictions were proven incorrect, Wasserstein offered an explanation of election forecasting that would cheer the hearts of beleaguered statisticians everywhere.

“Let’s take a prediction that says there’s 75 percent certainty that Clinton will win. And then Trump wins. Was that estimate wrong? It wasn’t wrong necessarily,” he said. “The fact is that it gave a one in four chance for Trump to win. It’s just that as human beings we have a tendency to think that if there’s only a 30 percent chance of rain and it rains that the forecast was wrong. And that’s just not the case.”
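His point is easy to check with arithmetic: a forecast that gives one candidate a 75 percent chance implies the other side should still win about one election in four. A quick simulation, using illustrative numbers only, makes that concrete.

```python
# Illustration of Wasserstein's point: a 75 percent forecast is not
# "wrong" when the 25 percent outcome happens. Over many hypothetical
# elections, the underdog should win roughly one in four of them.
import random

random.seed(0)
trials = 100_000
favorite_prob = 0.75  # the forecast's stated probability

underdog_wins = sum(random.random() >= favorite_prob for _ in range(trials))
print(f"Underdog won {underdog_wins / trials:.1%} of {trials:,} "
      "simulated elections (expected: about 25%)")
```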