“Utterly Unreliable” Data
March 23, 2020
FROM Stanford Professor John Ioannidis, professor of medicine, biomedical data science, statistics, and epidemiology and population health, writing last Tuesday:
If we had not known about a new virus out there, and had not checked individuals with PCR tests, the number of total deaths due to “influenza-like illness” would not seem unusual this year. At most, we might have casually noted that flu this season seems to be a bit worse than average. The media coverage would have been less than for an NBA game between the two most indifferent teams.
Some worry that the 68 deaths from Covid-19 in the U.S. as of March 16 will increase exponentially to 680, 6,800, 68,000, 680,000 … along with similar catastrophic patterns around the globe. Is that a realistic scenario, or bad science fiction? How can we tell at what point such a curve might stop?
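The catastrophic sequence cited here is simply repeated multiplication by ten. A few lines of Python make the extrapolation explicit (the five steps shown are an assumption matching the numbers in the passage, not part of the original argument):

```python
# Exponential extrapolation from the passage: starting at 68 deaths,
# each step multiplies the count by 10.
deaths = 68
trajectory = [deaths * 10**k for k in range(5)]
print(trajectory)  # [68, 680, 6800, 68000, 680000]
```

The question Ioannidis raises is precisely whether this kind of fixed-multiplier extrapolation is warranted, and what data would let us see where the curve bends.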
The most valuable piece of information for answering those questions would be to know the current prevalence of the infection in a random sample of a population and to repeat this exercise at regular time intervals to estimate the incidence of new infections. Sadly, that’s information we don’t have.
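The exercise described, estimating prevalence from a random sample and incidence from repeated surveys, can be sketched in a few lines. The sample size, positive counts, and two-week interval below are hypothetical, chosen only to illustrate the calculation:

```python
import math

def prevalence(positives, n):
    """Point estimate and 95% normal-approximation CI for prevalence
    from a simple random sample of size n."""
    p = positives / n
    se = math.sqrt(p * (1 - p) / n)
    return p, (p - 1.96 * se, p + 1.96 * se)

# Hypothetical surveys: 3,000 people sampled each time, two weeks apart.
p1, ci1 = prevalence(30, 3000)   # survey 1: 1.0% prevalence
p2, ci2 = prevalence(90, 3000)   # survey 2: 3.0% prevalence

# Crude incidence: change in prevalence per week between surveys.
incidence_per_week = (p2 - p1) / 2
print(p1, p2, incidence_per_week)  # 0.01 0.03 0.01
```

This is the information Ioannidis says we lack: without such surveys, neither the current prevalence nor the rate of new infections can be estimated.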
In the absence of data, prepare-for-the-worst reasoning leads to extreme measures of social distancing and lockdowns. Unfortunately, we do not know if these measures work. School closures, for example, may reduce transmission rates. But they may also backfire if children socialize anyhow, if school closure leads children to spend more time with susceptible elderly family members, if children at home disrupt their parents' ability to work, and more. School closures may also diminish the chances of developing herd immunity in an age group that is spared serious disease.