The Generalized Logistic Function and Pandemic Modeling

The logistic function was introduced by Pierre François Verhulst to represent exponential growth that eventually levels off. To do this, he chose the simplest correction he could think of: each additional “birth” knocks the growth rate down by an equal amount.

Exponential growth:  \frac{dP}{dt} = rP

Logistic growth:  \frac{dP}{dt} = rP(1-\frac{P}{K})
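
To see the difference concretely, here’s a minimal sketch that integrates both equations numerically with scipy. The values of r, K, and the starting population are invented for illustration.

```python
# Minimal sketch: exponential vs. logistic growth, integrated numerically.
# r, K, and P0 are illustrative values, not taken from any real population.
import numpy as np
from scipy.integrate import solve_ivp

r, K, P0 = 0.3, 1000.0, 10.0

def exponential(t, P):
    return r * P                    # dP/dt = rP

def logistic(t, P):
    return r * P * (1 - P / K)      # dP/dt = rP(1 - P/K)

t_eval = np.linspace(0, 40, 200)
exp_sol = solve_ivp(exponential, (0, 40), [P0], t_eval=t_eval)
log_sol = solve_ivp(logistic, (0, 40), [P0], t_eval=t_eval)

print(f"exponential at t=40: {exp_sol.y[0, -1]:,.0f}")  # keeps exploding
print(f"logistic at t=40:    {log_sol.y[0, -1]:,.0f}")  # levels off near K
```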

But what if the leveling off happens faster? Or slower? What if population growth really slows down after the first few generations? What if it only levels off when environmental resources really get strained?

OK, no problem: we can use basically any formula to moderate this exponential growth. The “generalized logistic function” adds a power to the logistic function that’s basically a shrug of the shoulders and a degree of flexibility. “Go ahead,” it says, “do whatever you want. As long as it starts exponential, approaches the carrying capacity, and is shaped like a nice ‘S’.”

Generalized Logistic growth: \frac{dP}{dt} = rP(1 - (\frac{P}{K})^n)

This function can tolerate a bit of funkiness, compared to the vanilla logistic. Note how with these parameters there’s a bit of a weird asymmetry as it approaches the carrying capacity.

https://www.desmos.com/calculator/stk54ocp5e
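
If Desmos isn’t handy, here is a rough equivalent in Python. The exponent n is the new knob; n = 1 recovers the ordinary logistic, and the other values below are just illustrative choices to show how the approach to the carrying capacity changes.

```python
# Rough sketch: how the exponent n reshapes the generalized logistic curve.
# dP/dt = rP(1 - (P/K)^n); all parameter values here are made up.
import numpy as np
from scipy.integrate import solve_ivp

r, K, P0 = 0.3, 1000.0, 10.0

def generalized_logistic(t, P, n):
    return r * P * (1 - (P / K) ** n)

t_eval = np.linspace(0, 60, 300)
for n in (0.5, 1.0, 3.0):
    sol = solve_ivp(generalized_logistic, (0, 60), [P0],
                    t_eval=t_eval, args=(n,))
    # Time at which the population first reaches half the carrying capacity:
    t_half = t_eval[np.searchsorted(sol.y[0], K / 2)]
    print(f"n = {n}: reaches K/2 around t = {t_half:.1f}")
```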

This generalized logistic function might better fit some S-shaped data. It adds another parameter, which is to say it introduces another degree of freedom. This amounts to wiggle room for researchers who use it. But without any particular reason to think the function should be one way or the other, it all amounts to guessing.
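
To make the wiggle-room point concrete, here’s a small sketch with synthetic data (all numbers invented): even when the data comes from a plain logistic plus noise, the generalized curve’s extra exponent lets it hug the noise a little more closely.

```python
# Sketch of the extra degree of freedom: fit a logistic and a Richards-type
# generalized logistic to the same noisy data, generated from a plain logistic.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    return K / (1 + np.exp(-r * (t - t0)))

def generalized(t, K, r, t0, n):
    # The exponent n is the extra parameter; n = 1 gives back the logistic.
    return K / (1 + np.exp(-r * (t - t0))) ** (1 / n)

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 60)
data = logistic(t, 1000, 0.25, 30) + rng.normal(scale=15, size=t.size)

p_log, _ = curve_fit(logistic, t, data, p0=[900, 0.2, 25])
p_gen, _ = curve_fit(generalized, t, data, p0=[900, 0.2, 25, 1.0], maxfev=10000)

def rss(f, p):
    return np.sum((data - f(t, *p)) ** 2)

print(f"logistic residual sum of squares:    {rss(logistic, p_log):.0f}")
print(f"generalized residual sum of squares: {rss(generalized, p_gen):.0f}")
```

The generalized fit will essentially never look worse, because setting n = 1 recovers the plain logistic; a slightly better fit here tells you nothing about whether the extra parameter means anything.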

It’s this sort of guessing that can sometimes get you in trouble as a mathematical modeler.

Early on in the COVID-19 pandemic there was a clear need for information about the virus. What sort of spread should governments expect? Were hospitals at risk of overcrowding? How much death might the world be facing?

In the confusing weeks of March and April 2020, one group of mathematical modelers filled the void and gained prominence above all others: IHME, the Institute for Health Metrics and Evaluation. They created a clear, accessible website that made predictions as to when the United States would experience shortages of hospital beds and ventilators.

But before long, the predictions of IHME came under fire from the epidemiological community. The headlines didn’t pull any punches: “Influential Covid-19 model uses flawed methods and shouldn’t guide U.S. policies, critics say.”

What were the issues? The article names a few. But the fundamental problem was this: IHME was just fitting curves to data, and any curve would do.

At the start of the pandemic, the IHME group was using a bit of software called “CurveFit” to make predictions. As you might guess, the software tries to find functions that best fit the given data. So far so good! The IHME group began with the generalized logistic function and searched for the curve in that family that best matched the existing COVID data.

Then, their plans changed. “We first tried building the analysis using the sigmoidal function,” they write; “we then discovered that the ERF error function provided a better fit to the data.”
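
For what it’s worth, the two shapes aren’t even very different. Here’s a tiny comparison of the logistic S-curve and an erf-based one; the 1.702 scaling is the standard constant for lining a Gaussian CDF up with a logistic curve, and everything else is illustrative.

```python
# Two S-shapes side by side: the logistic curve and a scaled error function.
# With the standard 1.702 rescaling, the Gaussian CDF and the logistic
# sigmoid are nearly indistinguishable by eye.
import numpy as np
from scipy.special import erf

t = np.linspace(-4, 4, 9)
logistic_s = 1 / (1 + np.exp(-t))                     # logistic S-curve
erf_s = 0.5 * (1 + erf(t / (1.702 * np.sqrt(2))))     # erf-based S-curve

for ti, a, b in zip(t, logistic_s, erf_s):
    print(f"t = {ti:+.1f}   logistic = {a:.3f}   erf-based = {b:.3f}")
```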

Did the ERF error function fit the existing data better? I’m sure it did. But what does this function have to do with population growth? Absolutely nothing, is the answer. It’s just a shape. And for the researchers, that’s what the generalized logistic function was as well: just a shape, not an attempt to capture the underlying dynamics of a new viral epidemic. More from the April article:

“[IHME] doesn’t even try to model the transmission of disease, or the incubation period, or other features of Covid-19 … It doesn’t try to account for how many infected people interact with how many others, how many additional cases each earlier case causes, or other facts of disease transmission that have been the foundation of epidemiology models for decades.”

The message here for mathematical modelers seems to be that there is more to modeling than fitting the data. Unless we have some inkling of why a model should have the shape that it does, we should have absolutely no confidence that a fit is anything but a fluke. Then again, there’s another message here too: modeling is hard. The last paragraph of that article praises more conventional epidemiological models for their more sensible predictions:

“A different, data-driven model from researchers at the University of Washington predicts ‘about 1 million cases in the U.S. by the end of the epidemic, around the first week in June, with new cases peaking in mid-April,’ said UW applied mathematician Ka-Kit Tung, who led the work.”

By early June about 14 million people had been infected with the novel coronavirus. IHME is hardly the only modeling group whose predictions veered wildly from reality, and it’s hard to blame anyone for that. It’s never going to be easy to make predictions about unprecedented times.
