Why Prediction Markets are Better at Predicting Covid than Public Health Experts

What are the predictions for Covid? Prediction markets like Polymarket are the best way to know what is actually going to happen.

9 min read · July 21, 2022

Throughout the COVID-19 pandemic, prediction markets such as Polymarket, where traders put real money on the line, have consistently delivered more accurate forecasts for future COVID case numbers than expert models such as those used by the WHO, the CDC, or Imperial College London.

How Did Scientists and Public Health Experts Handle the COVID-19 Pandemic?

According to a recent study by the Pew Research Center, only “29% of U.S. adults say they have a great deal of confidence in medical scientists to act in the best interests of the public, down from 40% who said this in November 2020.” While this shift in attitudes is often attributed to the spread of medical misinformation and conspiracy theories on social media, there are plenty of reasons why medical scientists and public health experts might have lost credibility with the general public over the course of the COVID-19 pandemic.

To begin with, scientists and public health officials hardly ever spoke with one voice. This made it difficult to simply “follow the science,” because different scientists said different things, and laypeople often had to choose among many competing, and sometimes contradictory, sets of recommendations and predictions.

Bet on the future of Covid cases on Polymarket

Experts and public health authorities repeatedly revised their recommendations on how to prevent the spread of COVID-19, how to treat it, and when to get tested. At first, the Centers for Disease Control and Prevention (CDC) directed ordinary people to forgo masks, only to reverse course completely a few weeks later, urging states to require people to wear them in all public settings. In the first year of the pandemic, the CDC changed the language it used to describe the likelihood of airborne transmission, which ranged from “probably” to “possibly” to “most commonly,” a total of four times.

Expert predictions about the future trajectory of the COVID-19 pandemic, in terms of future caseloads and hospitalizations, as well as the effectiveness of the vaccines eventually rolled out against it, were all over the place. They were often wrong, often wildly so, and sometimes in very different ways.

Visual from an April 2020 New York Times report on the contrasting visions painted by the COVID-19 forecasts generated by early models.

Pandemics are notoriously difficult to forecast, and COVID-19 was no exception

“Predicting the trajectory of a novel emerging pathogen is like waking in the middle of the night and finding yourself in motion-but not knowing where you are headed, how fast you are traveling, how far you have come, or even what manner of vehicle conveys you into the darkness,” writes University of Texas biologist Claus O. Wilke. The last half century has seen the emergence of a number of viral pathogens (think Ebola, the “Hong Kong” flu of 1968, swine flu, avian flu) that some experts predicted would result in untold casualties worldwide. In each of those cases, the epidemic was contained before it could reach the catastrophic proportions those predictions foretold.

In the Summer of 2009, as cases of the H1N1 swine flu mounted, the U.K.’s Chief Medical Officer Sir Liam Donaldson warned of a “worst case scenario” in which 65,000 people would be killed by the H1N1 virus. His best case scenario placed that number at 3,100. By the time the outbreak was contained, some four months later, a mere 457 people had died of the virus.

PRO TIP: Think you know which way Covid cases are going? Bet on your belief on Polymarket right now.

Predictions of viral caseloads are also often unreliable. A prediction model might accurately forecast the number of cases of a virus on a given future date, but, in the absence of accessible testing, seem inaccurate nonetheless. In the first year of the pandemic in the United States, demand for COVID tests often exceeded supply, and many infections went undetected. Many people who never experienced COVID symptoms would discover that they’d had an infection well after the fact, when laboratory tests revealed that they had COVID-19 antibodies.

Conflicting Predictions

Early expert predictions of the severity of the COVID-19 pandemic tended to fall at one of two extremes. Either they foretold a catastrophic, “doomsday” scenario in which infection was widespread and people were left to die as hospitals were flooded with patients they hadn’t the capacity to treat, or they characterized the threat of the virus as relatively minor and temporary.

Let’s take a look at two arrestingly different predictions that were issued within one day of each other, just as the COVID-19 pandemic began to gain momentum.

On March 26th, 2020 the Institute for Health Metrics and Evaluation (IHME) at the University of Washington issued what were, at the time, the very first long-term COVID-19 forecasts by a widely respected research institution. It estimated that the cumulative death toll in the United States would likely not surpass 162,000, and that the virus would be fully contained by the end of July.

Just one day later, as the United States became the first country in the world to surpass 100,000 confirmed cases, Dr. Ezekiel Emanuel, a medical ethicist at the University of Pennsylvania, made an alarming claim during an appearance on MSNBC’s “Morning Joe”: that the total number of COVID-19 infections would surpass 100 million in four weeks’ time.

On April 27th, 2020, exactly four weeks after Emanuel had made his prediction, approximately 913,800 confirmed cases of COVID-19 had been reported in the United States. (A paltry 2.7 million had been reported worldwide.)

The Limitations of Epidemiological Modeling

Most epidemiological forecasting models use some highly refined derivation of what is called ‘SIR’ modeling, in which individuals within the simulation are classified as either ‘susceptible,’ ‘infectious,’ or ‘recovered.’ This canonical epidemiological framework has been a primary part of the epidemiological toolkit for over one hundred years.
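The basic idea can be sketched in a few lines of Python. This is a minimal illustration with made-up parameter values (a transmission rate of 0.3 and a recovery rate of 0.1 per day), not a model fitted to COVID-19 or any real outbreak:

```python
# Minimal SIR model advanced with simple Euler time steps.
# beta (transmission rate) and gamma (recovery rate) are illustrative
# values, not estimates for any real pathogen.

def sir_step(s, i, r, beta=0.3, gamma=0.1, dt=1.0):
    """Advance susceptible/infectious/recovered population fractions by one step."""
    new_infections = beta * s * i * dt
    new_recoveries = gamma * i * dt
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

s, i, r = 0.999, 0.001, 0.0  # start with 0.1% of the population infected
peak = 0.0
for day in range(365):
    s, i, r = sir_step(s, i, r)
    peak = max(peak, i)

print(f"peak infectious share: {peak:.1%}, final recovered share: {r:.1%}")
```

Note that in this classic formulation ‘recovered’ is an absorbing state: once an individual leaves the infectious compartment, the model cannot represent reinfection at all.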

So why did these models fail to predict COVID numbers? In short, because most of them were developed before scientists had a working knowledge of how the virus actually behaved, and many continued to run simulations well after the virus had undergone multiple mutations.

Most of these models rule out the possibility of reinfection. Most people reading this today likely know people who have been infected with COVID two, or even three times.

In many cases, there was an almost complete lack of information about the models that researchers used to yield these predictions.

The COVID-19 pandemic was the first major epidemiological event to see the widespread application of artificial intelligence and machine learning to produce predictions. Many of these models were sufficiently complex that they constituted what computer scientists call “black boxes”: they are not straightforwardly interpretable to humans and cannot be independently verified. Whether such models can be subject to peer review is the subject of ongoing debate.

All prediction markets work the same way, whether they are for sports, politics, or COVID-19 caseloads.

Experts face a variety of pressures to create biased forecasts.

Expert predictions are often influenced by the imperatives of the institutions from which they derive their funding, or of which they often act as public representatives.

One notable example is Dr. Deborah Birx, Donald Trump’s coronavirus response coordinator, who recently testified before a congressional committee investigating the Trump administration’s pandemic response. According to Birx, White House officials repeatedly asked her to deliberately change or withhold parts of the guidelines she regularly issued to state and local health officials; she estimated that they found fault with her reports approximately 25% of the time.

Bet on Your Predictions on Polymarket

Even as Birx was under pressure from the White House to alter her COVID-19 predictions and prevention guidelines, she and her task force were repeatedly accused of “falsely increasing case counts” by the very same people demanding those revisions. In other words, the demand that she falsely manipulate her reports was couched in the claim that her initial reports had already been manipulated.

When Emanuel took to cable news to warn of an impending explosion of COVID cases, he hardly bothered to pretend that his intention wasn’t to stoke fear, framing his prediction as a retort to “politicians and people in Washington who don’t want the country to panic,” perhaps an oblique reference to Birx and her colleagues.

A September 2020 report by McKinsey, issued in the thick of the first wave of lockdowns, warned that the pandemic would force one in four U.S. women to “downshift their careers or leave the workforce completely,” a prediction that did not, by any measure, come to pass. “Contrary to many accounts,” wrote Harvard economist Claudia Goldin in a retrospective analysis of COVID-19’s economic impact on women, “women did not exit the labor force in large numbers, and they did not greatly decrease their hours of work.” (Women were affected disproportionately by the pandemic in other ways: working mothers tended to carry out the lion’s share of supervising and caring for children during remote learning, but they added this on top of their normal work hours rather than reducing paid work.) It’s worth noting that the McKinsey report was commissioned and co-published by LeanIn.Org, the nonprofit founded by former Facebook executive Sheryl Sandberg, whose stated mission is to promote women’s participation in the workforce.

Predictions that portend massive upheavals, like Emanuel’s or McKinsey’s, are sometimes described as “fearmongering.” They’re distorted not because of pressure from external sources, but because they’re designed to push people toward certain behaviors. These kinds of predictions are arguably less harmful than those which drastically underestimate the extent of a calamity, because they compel people to act out of an abundance of caution and adopt recommended measures of hygiene, which in turn can dramatically curb the virus’ spread. Or, in the case of the predicted mass resignation of female workers, employers might proactively create policies meant to ease the double burden of work and parenting. The strong persuasive power of doomsday prophecy might create the very conditions necessary to prevent doom from actually occurring.

Experts, like other content creators, must compete for readers’ attention: the predictions most likely to be shared widely are deliberately extreme, lend themselves to sensational news articles, and/or conform to a certain group’s preexisting biases. Built into many of the most high-profile expert predictions is the acknowledgement that other, existing predictions are already heavily distorted, which tacitly admits that if their own analysis is inaccurate, it is in the service of neutralizing earlier predictions that overshot in the opposite direction, as in a game of tug-of-war.

Experts are also sometimes reluctant to dramatically revise their predictions when new evidence comes to light, because doing so means publicly admitting that they were wrong. They tend to be process-focused and slow to respond to new information, traits poorly suited to predicting something like COVID, which mutated and spawned new strains faster than scientists could understand them.

Prediction markets do not have this problem. A prediction market might undergo multiple reversals, adapting to an actively unfolding situation, before it resolves to its final outcome. When health experts reverse themselves, they run the risk of hurting their professional reputations or of appearing to the public as fickle or even deceptive.

Why prediction markets provide better information than experts

Prediction markets are not a replacement for public health experts, but because they represent the collective analysis of thousands of traders with real money on the line, they serve as an important clearinghouse for synthesizing the best information available.
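Concretely, a binary prediction market prices a Yes share that pays $1 if the event occurs, so the share price can be read directly as the market’s implied probability. Here is a minimal sketch in Python, using hypothetical numbers rather than actual Polymarket quotes:

```python
# Reading a binary prediction-market price as a probability.
# All prices here are hypothetical, not actual Polymarket quotes.

def implied_probability(yes_price: float) -> float:
    """A Yes share pays $1 if the event occurs, so its dollar price
    is the market's implied probability of the event."""
    if not 0.0 <= yes_price <= 1.0:
        raise ValueError("price must be between $0 and $1")
    return yes_price

def expected_profit_per_share(yes_price: float, your_probability: float) -> float:
    """Expected profit from buying one Yes share if your probability
    estimate is right and the market's is wrong."""
    return your_probability * 1.0 - yes_price

# Suppose the market prices an outcome at $0.70 (implied 70%),
# but you believe the true probability is 85%:
print(implied_probability(0.70))                        # 0.7
print(round(expected_profit_per_share(0.70, 0.85), 2))  # 0.15
```

Traders who disagree with the prevailing price profit when they turn out to be right, and their buying or selling moves the price toward the better estimate; that feedback loop is the incentive structure described above.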

Alongside Birx’s congressional testimony, transcripts of a number of intra-departmental emails from the Summer of 2020 were also released. In an email addressed to colleagues on August 13th, 2020, Birx wrote that if caseloads continued to surge, there would be “300k dead by December.”

The U.S. surpassed 300,000 deaths attributed to COVID-19 on December 16th. In her private correspondence, Birx’s prediction of COVID-19 deaths by that specific future date was remarkably accurate; she made no similar statement in any public forum. By the time it reached the public as official statements, her scientific expertise had often been twisted by the demands of politically motivated parties within the institutions she worked for.

If traders believe that certain experts are producing biased predictions, they can move the market in the direction of what they believe is the more likely outcome. Traders on Polymarket correctly predicted that Omicron would surge to become the dominant Covid strain, that a federal emergency use authorization would be granted for a COVID-19 vaccine before 2021, that new COVID cases in the United States would surpass 100,000 in a single day before January 1, 2022, and that, as promised, 225 million doses of COVID-19 vaccines would be administered by Joe Biden’s 100th day in office.

Interested in helping produce more accurate Covid-19 predictions? Trade now on Polymarket.

Originally published at https://blog.polymarket.com on July 21, 2022.