What Allan Lichtman’s failed presidential prediction teaches us about data

Barr Moses
3 min read · Nov 12, 2024

Before last week, Dr. Allan Lichtman had accurately forecasted 9 of the last 10 U.S. presidential elections.

Heralded as the Nostradamus of presidential predictions, Lichtman has become something of a national celebrity over the past few decades with his 13 Keys to the White House, a model he built in 1981 with famed geophysicist Vladimir Keilis-Borok that applies quantitative methods to predict the winner of U.S. presidential elections.

Earlier this year, Lichtman forecasted that incumbent Vice President Kamala Harris would win the 2024 Presidential Election against former President and third-time Republican nominee Donald Trump.

However, this time around, Lichtman — and his 13 Keys — were wrong.

Politics aside, Lichtman’s forecasting failure raises an important question about statistics and data science: namely, how do we plan for the variables we don’t expect?

Fortunately, we have Allan Lichtman on deck for IMPACT 2024 to share his side of the story — and what we can all learn from it. A few days after the election, I had the pleasure of chatting with Lichtman in advance of his conversation at IMPACT. Here’s what he had to say.

Predictive analytics is a process of thinking in bets

“As political scientists…we have to base our judgments on empirical analysis, and historical precedents can be shattered. But, the problem for forecasters is that you can’t predict this in advance.”

Last year, we welcomed Annie Duke, former professional poker player and decision strategist, to the IMPACT stage to discuss the relationship between gambling and statistics. Of course, election forecasting isn’t exactly like gambling, but it shares some interesting parallels that resonated with Lichtman — namely Annie’s philosophy of “thinking in bets.”

When a model gets it right a few times, it can be easy to believe the same model will get it right again. This is sometimes called the “hot hand fallacy” — the belief that a streak of past outcomes will simply continue. Of course, this is a dangerous game to play at the blackjack table — and it can be even more dangerous in our organizations. According to Lichtman, there are always variables that are outside our control — and even the most battle-tested approaches require constant scrutiny.
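To put a rough number on why a short track record deserves scrutiny: none of these figures appear in the article, but as a back-of-the-envelope sketch, if we (very simplistically) treat each two-party race as a coin flip, we can ask how often a forecaster with no skill at all would still match a 9-of-10 record by luck alone:

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Binomial probability of getting k or more calls right out of n
    under pure chance (success probability p per call)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A coin-flipping "forecaster" matches a 9-of-10 record about 1% of the time.
print(round(p_at_least(9, 10), 4))  # 0.0107
```

Roughly one such forecaster in a hundred — so a strong streak is real evidence of skill, but with only ten trials it is far from proof, which is exactly why Lichtman argues for ongoing scrutiny rather than blind trust.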

Models need to be tested before they can be trusted

“If a model doesn’t work, the first step to understanding why is to try to figure out what’s different between now and the last time you used it, what these new variables are, and monitor the situation that follows. Because I have a model that’s worked for 160 years, I have a basis for telling you what’s different this year, but human behavior is not entirely predictable.”
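Lichtman’s advice — compare now against the last time the model worked, and monitor what’s new — can be sketched as a simple drift check. Everything below (the signal, the numbers, the threshold) is hypothetical, chosen only to illustrate the idea:

```python
from statistics import mean, stdev

def flag_drift(reference: list[float], current: list[float],
               threshold: float = 2.0) -> tuple[bool, float]:
    """Flag an input whose current average has drifted from the reference
    period, measured in reference standard deviations (a simple z-score)."""
    mu, sigma = mean(reference), stdev(reference)
    z = abs(mean(current) - mu) / sigma
    return z > threshold, round(z, 2)

# Hypothetical model input: share of voters reached by traditional media.
reference = [0.82, 0.79, 0.84, 0.80, 0.81]  # earlier election cycles
current = [0.55, 0.52, 0.58]                # this cycle
drifted, score = flag_drift(reference, current)
print(drifted, score)  # True — this input no longer looks like the past
```

When a check like this fires, the model’s historical basis for that variable is suspect — the cue to investigate before trusting the next prediction.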

So, what really happened with Lichtman’s prediction for the 2024 election? While he stopped short of admitting that his 13 Keys might be outdated, he conceded that his model didn’t take into account digital disinformation campaigns and other 21st century phenomena. Of course, we’re barely scratching the surface! There’s plenty more to tell — and Lichtman is excited to share.

Tune in to IMPACT: The Data Observability Summit this Thursday at 9 a.m. PST to hear more of Allan’s election story — and what we can all learn from it — as well as incredible sessions with other industry leaders on the importance of data quality, what’s next for AI, and how you can deliver more accurate insights at scale.

RSVP: https://impactdatasummit.com/?utm_source=linkedin&utm_medium=social&utm_campaign=barrnewsletter

Stay reliable,
Barr Moses

Written by Barr Moses

Co-Founder and CEO, Monte Carlo (www.montecarlodata.com). @BM_DataDowntime #datadowntime
