Competing Pandemic Projections Driving You Mad?

Apr 28, 2020 | News

As we think about the influence of computerized epidemiological models on our lives these days, we might think back to the old joke, “If you’re a hammer, the whole world looks like a nail.”

The hammer, of course, is big data, and we humans are the nails. That is, since we’ve created a digital layer on top of us—all the data, all the computers, all the artificial intelligence—then the epidemiological models are just more variations on the digital theme. After all, we’ve made millions of models for consumer behavior, for the weather, for the stock market, for sports, and just about anything else—so why not for Covid-19?

And now we’re seeing that these new virus models have gone, well, viral in our political culture.

One of these models comes from the Institute for Health Metrics and Evaluation (IHME) at the University of Washington. The site is user-friendly; it takes nothing away from its scientific rigor to say that it plays a bit like a video game. Indeed, it has greatly influenced the Trump administration, helping it shift from a mostly laissez-faire stance to a more serious approach—hence the president declared a “national emergency” on March 13.

Another model, less accessible, but perhaps even more influential, comes from Imperial College London. After weeks of behind-the-scenes briefings, on March 16 the college released a paper suggesting that the U.K. could suffer 500,000 deaths, and the United States, two million deaths. That got people’s attention.

Prime Minister Boris Johnson, who had previously been mulling the value of passive “herd immunity,” suddenly got active. On March 23, he ordered a nationwide lockdown. (Of course, the fact that Johnson was soon hospitalized with Covid-19 also made an impression.)

Plenty of other models and forecasts, too, are vying for attention; The Wall Street Journal reports that some 1,000 modeling papers have been published about the malady.

So yes, without a doubt, these models, and their modelers, have become political players; few politicians wish to be on the wrong side of such tech prestige. To be sure, most pols are immune to the intellectual charms of experts, and yet at the same time, they are not immune to the political weight of mass death.

Yet there’s just one thing that makes it hard for politicians to walk the right line: The models don’t seem to be particularly accurate. Or, to put it another way, they don’t agree with each other, such that if one is right, another one must be wrong.

For instance, the IHME model has been criticized as unduly optimistic. Sally Cripps, a statistician at the University of Sydney who led a team examining IHME’s projections, told Stat News on April 17 that its predictions “have been highly inaccurate.” She added, “It performs poorly even when it predicts the number of next-day deaths: The true number of next-day deaths has been outside the 95 percent intervals 70 percent of the time.”
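To see what that coverage statistic amounts to, here is a minimal sketch in Python, using made-up interval bounds and death counts rather than IHME's actual output. The point is only the bookkeeping: tally how often the observed next-day deaths fall outside the model's 95 percent prediction interval.

```python
# Illustrative only: hypothetical prediction intervals and observed deaths,
# not actual IHME figures. The logic, not the numbers, is the point.

# Each tuple is (lower bound, upper bound) of a model's 95% interval for
# next-day deaths; `observed` holds what actually happened on those days.
intervals = [(900, 1100), (950, 1150), (1000, 1200), (980, 1180), (1020, 1220)]
observed  = [1300, 1250, 1050, 1400, 1500]

# Count the days on which the actual toll landed outside the interval.
misses = sum(1 for (lo, hi), actual in zip(intervals, observed)
             if actual < lo or actual > hi)

coverage_failure_rate = misses / len(observed)
print(f"Outside the 95% interval {coverage_failure_rate:.0%} of the time")

# A well-calibrated model should fall outside its own 95% interval only
# about 5% of the time; Cripps's team reported a figure closer to 70%.
```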

Indeed, IHME has come under heavy fire, perhaps because its optimism is seen as some sort of gift to Trump. (IHME is funded by the Gates Foundation, an outfit not known for its Trumpophilia.) On April 25, Politico made that critique more explicit, headlining, “How overly optimistic modeling distorted Trump team’s coronavirus response.” The piece quoted Gregg Gonsalves, an epidemiologist at Yale: “The IHME model is an odd duck in the pool of mathematical models. I fear the White House is looking for data that tells them a story they want to hear, and so they look to the model with the lowest projection of death.”

In response, IHME director Christopher Murray defended his team’s work: “We’re willing to make a forecast. Most academics want to hedge their bets and not be found to ever be wrong.” Murray then continued, saying something revealing about the whole modeling biz: “We’re orders of magnitude more optimistic [than other models].”

Here we might pause to note that an order of magnitude is a ten-fold change. So when Murray says that his model is “orders of magnitude” away from other models, he’s asserting that its projections differ from theirs by a factor of ten, a hundred, or more. And that’s quite a difference.

We can quickly see: Something’s gotta give—that is, if the models vary by 1,000 percent or more, then they can’t all be right.
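To make that arithmetic concrete, here is a toy calculation, with invented projections used purely for illustration, showing how a gap of one order of magnitude between two models translates into percentage terms.

```python
import math

# Invented projections, purely to illustrate the "orders of magnitude" arithmetic.
model_a = 60_000    # hypothetical optimistic projection of deaths
model_b = 600_000   # hypothetical pessimistic projection

ratio = model_b / model_a                # 10.0, i.e. a ten-fold gap
orders_of_magnitude = math.log10(ratio)  # 1.0 order of magnitude
percent_difference = (ratio - 1) * 100   # 900% higher, loosely "1,000 percent"

print(f"ratio: {ratio:.1f}x, "
      f"{orders_of_magnitude:.1f} orders of magnitude, "
      f"{percent_difference:.0f}% apart")
```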

On the other hand, the Imperial College model has been criticized for being unduly pessimistic. White House health adviser Deborah Birx directly addressed the model on March 26: “When people start talking about 20 percent of a population getting infected, it is very scary but we don’t have data that matches that based on the experience . . . There’s no . . . reality on the ground where we can see that 60 to 70 percent of Americans are going to get infected in the next eight to 12 weeks.”

We could go on with the modeling tit for tat. And of course, we should stipulate that, strictly speaking, models aren’t supposed to be flat predictions; instead, they express a degree of contingency. That’s an important point to bear in mind, even as, of course, click-bait media naturally seizes on projected death totals, providing little or no context or elaboration.

As of now, the only thing we know for sure is that the U.K. has not suffered 500,000 deaths, but, rather, about 20,000. Nor has the U.S. suffered two million deaths, but, rather, about 55,000. Those are both horrendous death tolls, to be sure—but we don’t need a model to tell us that. Models should be useful tools, not panic devices.

Not surprisingly, disease models have garnered plenty of critics, some of them well credentialed. One of the best credentialed is John Ioannidis, a professor at Stanford’s School of Medicine; in mid-March, Ioannidis wrote a notably contrarian piece for Stat News, arguing that the “evidence fiasco” meant that vital policy decisions were being made on the basis of “utterly unreliable” data.

On April 24, critiquing the Imperial College model, Ioannidis told The Wall Street Journal, “They used inputs that were completely off in some of their calculation. If data are limited or flawed, their errors are being propagated through the model. . . . So if you have a small error, and you exponentiate that error, the magnitude of the final error in the prediction or whatever can be astronomical.”

Once again, Ioannidis is the opposite of a data Luddite; as he said, “I love models. I do a lot of mathematical modeling myself. But I think we need to recognize that they’re very, very low in terms of how much weight we can place on them and how much we can trust them. . . . They can give you a very first kind of mathematical justification to a gut feeling, but beyond that point, depending on models for evidence, I think it’s a very bad recipe.”

Ioannidis was echoed by Jeffrey Shaman, coauthor of Columbia University’s coronavirus model, who said to Politico, “You can’t oversell the models, and you have to view them within the correct context.” Shaman further warned against making projections based “on a highly fluid situation for which the information is woefully incomplete.”

Others agree: Keith Neal, an epidemiology professor at the University of Nottingham, told The Wall Street Journal, “Any model that gets within 50 percent of the actual result has done well.” Yet another critic, Scott Atlas, formerly at Stanford’s School of Medicine, now at the Hoover Institution, wrote in The Hill, “Let’s stop underemphasizing empirical evidence while instead doubling down on hypothetical models.”

We can give the last word to Anthony Fauci—a healthcare legend long before he started advising the White House on the current crisis—who told The New York Times, “All models are just models. When you get new data, you change them.”

So we can see: Maybe it’s not such a good idea to put too much credence in virus models.

Still, such is the repute of computers that even though the models vary to the point of randomness, they retain their clout. Indeed, with apologies to Richard Weaver, author of the 1948 classic, Ideas Have Consequences, the models, too, are having consequences. Consequences, that is, not just for our physical health, but also for our economic health—and the economy, as we know, feeds back on physical wellbeing.

So it’s little wonder that a societal reaction to model-induced lockdowns is brewing, and not just in Trumpy Tea Party places, but also in sea-blue California. To put that another way, when the hammering from the models feels oppressive, the “nails” start hammering back.

Indeed, if we think back to our Greek mythology, we can see that being sickened or killed by Covid-19 is the Scylla, while economic wasting is the Charybdis. Not a happy prospect either way, and one has to wonder: Are the models actually adding objective value to the discussion? Or are they just the latest digital fad, perhaps sheathing one or another pre-existing political agenda?

Oh, and one other thing: Every debate we’re having about the coronavirus is likely to be replayed in regard to climate change. Yet one big difference, of course, is that while the virus models mostly look ahead at the next month, or the next year, the climate models look ahead for decades, even centuries. And if the virus models are off—even by orders of magnitude—about the near future, what should we conclude about climate models that purport to anticipate the far future?

