MS01 - MEPI-01

Scenario Modeling to Inform Public Policymaking (Part 1)

Monday, July 14 at 10:20am


Organizers:

Zhilan Feng (National Science Foundation), John W Glasser (US Centers for Disease Control and Prevention)

Description:

Models describe observations, underlying processes, or both. Their utility depends on how well they reproduce observations, whether those to which they have been fit or others. In descriptive modeling, the parameters of functions with desirable properties are adjusted to minimize discrepancies between predictions and observations. The parameters of mechanistic models, by contrast, cannot be estimated from the observations to which their predictions will be compared. Why not? Mechanistic models are hypotheses about the processes giving rise to observations, and fitting their parameters is tantamount to assuming that those processes have been modeled correctly. Hypotheses are tested by comparing their predictions with independent observations. In public health, dynamic models purporting to describe the processes by which pathogens are transmitted among human hosts are simulated under counterfactual conditions to inform policy. Unless the underlying processes have been modeled correctly, their predictions are unreliable, and the only way to know whether the predictions of mechanistic models are reliable is to compare them with accurate independent observations. The operating characteristics of surveillance systems are rarely considered, even when known, and different periods in time series from such systems are not independent. Averaging the predictions of ad hoc ensembles of such models does not solve the problem; their undeniable merits notwithstanding, neither do Bayesian methods.
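
To illustrate the distinction drawn above, the sketch below (not part of the session materials) fits a descriptive logistic curve to synthetic case counts by least squares and, separately, simulates an SEIR model whose parameters are taken to come from external sources rather than from the counts themselves; only the latter can be tested against those same observations. All function names, parameter values, and data here are hypothetical.

```python
# Minimal sketch: descriptive curve fitting vs. a mechanistic simulation
# with independently sourced parameters. All numbers are made up.
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import solve_ivp

t = np.arange(0, 60)  # days
observed = (900 / (1 + np.exp(-0.2 * (t - 30)))
            + np.random.default_rng(1).normal(0, 20, t.size))

# Descriptive model: a logistic curve fit directly to the observations.
def logistic(t, K, r, t0):
    return K / (1 + np.exp(-r * (t - t0)))

(K, r, t0), _ = curve_fit(logistic, t, observed, p0=[1000, 0.1, 25])
print("fitted logistic parameters:", K, r, t0)

# Mechanistic model: SEIR with parameters taken from independent sources
# (here, hypothetical literature values), NOT fit to `observed`.
beta, sigma, gamma, N = 0.5, 1 / 3, 1 / 7, 10_000

def seir(t, y):
    S, E, I, R = y
    inc = beta * S * I / N
    return [-inc, inc - sigma * E, sigma * E - gamma * I, gamma * I]

sol = solve_ivp(seir, (0, 60), [N - 10, 0, 10, 0], t_eval=t)
cumulative_infections = N - sol.y[0]  # compare with `observed`
```

Agreement (or disagreement) between `cumulative_infections` and the observations is a test of the mechanistic hypothesis; the fitted logistic cannot be tested against the data it was tuned to reproduce.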



John W Glasser

The US Centers for Disease Control and Prevention (CDC)
"Validating a SARS-CoV-2 transmission model"
During the COVID-19 pandemic, we endeavored to keep pace with evolving understanding of biological phenomena that might affect SARS-CoV-2 transmission by modifying SEIR metapopulation models structured by age, location, or strain. With probabilities of infection on contact and initial conditions from a serial, cross-sectional survey of antibodies to nucleocapsid protein among commercial laboratory clients throughout the United States, and all but one other parameter from the literature, our age- and location-structured model reproduced seroprevalence remarkably well, both from this survey and from another nationwide survey of antibodies to spike as well as nucleocapsid protein among blood donors. Because fitted parameters are conditional on model formulae and other parameter values, we recommend that mechanistic modelers base their parameters on first principles, estimate them from accurate independent observations, or source them from the primary rather than the modeling literature. In this talk, I will describe our descriptive model of seroprevalence by age and time, and then our calculation, from first principles, of the age-specific forces of infection, attack rates, and, given information from a contact study, probabilities of infection on contact. Because those parameters were not estimated by fitting our transmission model to any observations, others could use them too.
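
The following sketch is a hypothetical illustration, not the authors' code, of the kind of first-principles calculation the abstract describes: given a smooth descriptive fit of seroprevalence over time for one age group, it derives a force of infection and an attack rate, and then, given an assumed contact rate and prevalence of infectiousness among contacts, a probability of infection on contact. All inputs are placeholders.

```python
# Minimal sketch: force of infection, attack rate, and per-contact
# infection probability from a fitted seroprevalence curve (hypothetical).
import numpy as np

t = np.arange(0, 301, 30.0)               # days since survey start
# Hypothetical fitted seroprevalence P(t) for one age group.
P = 0.4 * (1 - np.exp(-0.006 * t))

# Force of infection: lambda(t) = P'(t) / (1 - P(t)), the hazard of
# seroconversion among those still seronegative.
dPdt = np.gradient(P, t)
lam = dPdt / (1 - P)

# Attack rate over the interval: 1 - exp(-integral of lambda dt),
# with the integral approximated by the trapezoid rule.
integral = np.sum(0.5 * (lam[1:] + lam[:-1]) * np.diff(t))
attack_rate = 1 - np.exp(-integral)

# Probability of infection on contact, assuming lambda = q * c * prev,
# with c contacts/day (from a contact study) and prev the prevalence of
# infectious persons among contacts (both assumed here).
c, prev = 10.0, 0.01
q = lam.mean() / (c * prev)
print(round(attack_rate, 3), round(q, 4))
```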



Wendy S Parker

Virginia Tech
"Testing the adequacy-for-purpose of dynamical models"
Dynamical models, especially mechanistic ones, are often viewed as “hypotheses” about the workings of a target system. Such hypotheses, however, are often known to be false from the outset, insofar as models are known to involve various simplifications, idealizations, and omissions. A more coherent perspective instead views scientific models as representational tools, the evaluation of which is concerned with their adequacy or fitness for particular purposes of interest. Adopting this perspective, stringent testing is still an aim of model evaluation, but what is ultimately tested is not the model itself, but a hypothesis about its adequacy- or fitness-for-purpose. Ideally, model evaluation is carried out such that, if the model is inadequate for the purpose of interest, then the testing procedure is very likely to reveal that inadequacy.



Michael Y. Li

University of Alberta
"Why do models calibrated with data need to be validated?"
Mechanistic models based on dynamical systems theory are natural for making predictions. There is a general belief that, because these models are constructed from the best available science, they are inherently valid. But are they? Reliable quantitative predictions depend on both the model structure (the mechanisms incorporated) and the model parameters: models with the same structure but different parameter values can make very different finite-time quantitative predictions, so parameter values are critical for accuracy. When the parameters of an epidemic model with the trusted SEIAR structure are estimated by fitting model outputs to COVID-19 data, and the fit is excellent, does that mean the calibrated model is validated and can be trusted for scenario analysis and for making recommendations to public health decision makers? I will use examples to show that epidemic models calibrated to data are prone to three failures: (1) they fail cross-validation, (2) they suffer from over-fitting, and (3) they over-project the final size. I will discuss some of the underlying reasons for these failures. I will also present a study estimating the proportion of the population infected with COVID-19, using identified-case data for model training and reserving seroprevalence data for model validation.
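
The sketch below shows, under assumed parameters and synthetic data, the kind of train/validate split described above; it is not the speaker's model or code. An SEIAR-type model is calibrated to reported cases, and the calibrated model's implied cumulative infections are then compared with a held-out seroprevalence value.

```python
# Minimal sketch: calibrate an SEIAR-type model to reported cases, then
# check the calibrated model against held-out seroprevalence. The model
# form, parameter names, and all numbers are assumptions for illustration.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

N = 1_000_000
t_obs = np.arange(0, 120, 7.0)            # weekly reporting times

def seiar(t, y, beta, p_sympt, sigma=1/3, gamma=1/7):
    S, E, I, A, R = y                     # A: asymptomatic/unreported
    inc = beta * S * (I + A) / N
    return [-inc, inc - sigma * E,
            p_sympt * sigma * E - gamma * I,
            (1 - p_sympt) * sigma * E - gamma * A,
            gamma * (I + A)]

def reported_cases(theta):
    beta, p_sympt = theta
    sol = solve_ivp(seiar, (0, t_obs[-1]), [N - 50, 0, 50, 0, 0],
                    t_eval=t_obs, args=(beta, p_sympt))
    return p_sympt * (N - sol.y[0])       # assumed cumulative reported infections

# "Observed" reported cases (synthetic here); in practice, surveillance data.
cases = reported_cases([0.4, 0.3]) * np.random.default_rng(0).normal(1, 0.05, t_obs.size)

fit = least_squares(lambda th: reported_cases(th) - cases, x0=[0.3, 0.5],
                    bounds=([0.05, 0.01], [2.0, 1.0]))
beta_hat, p_hat = fit.x

# Validation step: compare predicted cumulative infections (reported plus
# unreported) with a held-out seroprevalence estimate at day 120.
sol = solve_ivp(seiar, (0, 120), [N - 50, 0, 50, 0, 0], t_eval=[120.0],
                args=(beta_hat, p_hat))
predicted_seroprev = (N - sol.y[0][-1]) / N
held_out_seroprev = 0.12                  # assumed survey value
print(predicted_seroprev, held_out_seroprev)
```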



Marie Betsy Varughese

Institute of Health Economics
"Real-time Validation of Model Projections of Seasonal Influenza in Alberta"
Modelling efforts during the COVID-19 pandemic highlighted the challenges of making accurate projections and of validating them. The difficulty, or near impossibility, of accurately predicting the peak time and other epidemic indicators using standard mathematical models with constant rate parameters has been noted previously in the literature. This talk will describe an age-stratified Susceptible-Infectious-Recovered (SIR) deterministic model of influenza transmission dynamics in Alberta. We will describe our validation approach and compare the accuracy of model projections under different assumptions about case detection when calibrating to surveillance data from 2016 to 2019. In addition, we will present more recent real-time influenza model projections of cases and hospitalizations for the 2023-2024 respiratory virus season and discuss how we present these findings to decision and policy makers.
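
For readers unfamiliar with the structure described, the sketch below shows one generic way to write an age-stratified SIR model with a contact matrix; the age groups, contact rates, and parameter values are placeholders, not the Alberta model.

```python
# Minimal sketch: age-stratified SIR with a contact matrix (placeholder values).
import numpy as np
from scipy.integrate import solve_ivp

ages = ["0-17", "18-64", "65+"]
N = np.array([900_000.0, 2_800_000.0, 700_000.0])   # population by age group
C = np.array([[8.0, 4.0, 1.0],                      # contacts/day, rows = susceptible age
              [4.0, 6.0, 2.0],
              [1.0, 2.0, 3.0]])
q, gamma = 0.03, 1 / 4                              # infection prob./contact, recovery rate

def sir(t, y):
    S, I, R = y[:3], y[3:6], y[6:]
    lam = q * C @ (I / N)                           # age-specific force of infection
    return np.concatenate([-lam * S, lam * S - gamma * I, gamma * I])

I0 = np.array([10.0, 30.0, 5.0])
y0 = np.concatenate([N - I0, I0, np.zeros(3)])
sol = solve_ivp(sir, (0, 180), y0, t_eval=np.arange(0, 181, 7))
weekly_infectious = sol.y[3:6]                      # output to compare with surveillance
```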



SMB2025: Annual Meeting of the Society for Mathematical Biology, 2025.