CT03 - MFBM-01

MFBM Subgroup Contributed Talks

Friday, July 18 at 2:30pm


Silvia Berra

IRCCS Ospedale Policlinico San Martino, Genova, Italy
"In-silico modeling of simple enzyme kinetics: from Michaelis-Menten to microscopic rate constants"
Chemical Reaction Networks (CRNs) provide a powerful framework for modeling interactions between multiple chemical species within complex biological pathways. Their dynamics can be described by the mass-action law, representing variations over time in species concentrations through large systems of ODEs. The study of CRNs modeling signaling mechanisms within single cells has been successfully applied to provide insight into oncogenic pathways, enabling more predictive cancer models and improved therapeutic strategies [1,2,3]. Developing a CRN requires selecting the involved species, defining their interactions, and identifying key parameters such as microscopic reaction rates. This talk presents a method to retrieve these rates, particularly in the context of enzyme kinetics. We consider a simple model, where an enzyme E binds to a substrate S, forming a complex C that dissociates into a product P while regenerating E. This process is governed by four ODEs with three reaction rates: forward k_f, reverse k_r, and catalytic k_cat, typically hard to determine experimentally. A common simplification leads to the Michaelis-Menten (MM) model, where enzyme kinetics is characterized by a single ODE depending on two measurable parameters: the Michaelis constant K_M, and the maximum reaction rate V_max. These parameters may be expressed as functions of microscopic rates and are more accessible experimentally. This talk addresses the inverse problem of estimating k_f and k_r from K_M and V_max. A computational algorithm for solving this problem is presented and analyzed, along with an estimate of the reconstruction accuracy, and numerical simulations demonstrating its potential for refining kinetic models in biological and biomedical research.
[1] Sommariva et al., J. Math. Biol., 82(6): 55, 2021.
[2] Berra et al., J. Optim. Theory Appl., 200(1): 404-427, 2024.
[3] Sommariva et al., Front. Syst. Biol., 3: 1207898, 2023.
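The standard forward relations behind this inverse problem are K_M = (k_r + k_cat)/k_f and V_max = k_cat * E0, with E0 the total enzyme concentration. A minimal round-trip sketch of these relations is below; note that K_M and V_max alone fix k_cat but only the combination k_r = k_f*K_M - k_cat, so one rate (here k_f) must be supplied. This is a simplified illustration of the textbook map, not the algorithm presented in the talk.

```python
def mm_parameters(k_f, k_r, k_cat, E0):
    """Map microscopic rates to macroscopic MM parameters (K_M, V_max)."""
    K_M = (k_r + k_cat) / k_f
    V_max = k_cat * E0
    return K_M, V_max

def microscopic_rates(K_M, V_max, E0, k_f):
    """Invert the map above for a given (assumed) k_f.
    Requires k_f * K_M >= k_cat so that k_r is nonnegative."""
    k_cat = V_max / E0
    k_r = k_f * K_M - k_cat
    return k_r, k_cat

# Round trip: microscopic rates -> (K_M, V_max) -> microscopic rates
k_f, k_r, k_cat, E0 = 2.0, 3.0, 1.0, 0.5
K_M, V_max = mm_parameters(k_f, k_r, k_cat, E0)   # (2.0, 0.5)
k_r2, k_cat2 = microscopic_rates(K_M, V_max, E0, k_f)
print(K_M, V_max, k_r2, k_cat2)  # 2.0 0.5 3.0 1.0
```

The non-uniqueness visible here (any k_f with k_f*K_M >= k_cat is consistent with the same K_M and V_max) is what makes the reconstruction-accuracy analysis mentioned in the abstract necessary.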



Ismaila Muhammed

Khalifa University
"Data-driven Construction of Reduced Size Models Using Computational Singular Perturbation Method."
Most biological systems involve multiple underlying spatial or temporal scales, so reduced-order models are needed to capture and analyze their essential dynamics. However, traditional model reduction techniques, such as Computational Singular Perturbation (CSP), rely on the availability of the governing dynamical equations, which are often unknown in biomedical applications where only data are available. To address this limitation, we propose a data-driven CSP framework that integrates Sparse Identification of Nonlinear Dynamics (SINDy) and neural networks to extract time-scale-separated models directly from data. Our approach is validated on the Michaelis-Menten enzyme kinetics model, a well-established multiscale system, by identifying reduced models for the standard Quasi-Steady-State Approximation (sQSSA) and the reverse Quasi-Steady-State Approximation (rQSSA). When the full model cannot be identified by SINDy due to noise, we use neural networks to estimate the Jacobian matrix, allowing CSP to determine the regions where reduced models are valid. We further analyze the Partial Equilibrium Approximation (PEA) case, where the dynamics span both the sQSSA and rQSSA regimes, requiring the dataset to be split in order to accurately identify region-specific models. The results demonstrate that, in the presence of noise, SINDy struggles to identify the full model from data with underlying multiple-timescale evolution, but remains effective for identifying reduced models when the datasets are partitioned correctly.
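The SINDy step referenced above can be illustrated with the classic sequential thresholded least squares (STLSQ) iteration: regress derivatives onto a candidate function library, zero out small coefficients, and refit. The toy system and threshold below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def stlsq(Theta, dxdt, threshold=0.1, n_iter=10):
    """Sequential thresholded least squares: the core SINDy regression.
    Theta: (n_samples, n_library) candidate-function matrix.
    dxdt:  (n_samples,) measured/estimated derivatives."""
    Xi, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        big = ~small
        if big.any():  # refit only the surviving library terms
            Xi[big], *_ = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)
    return Xi

# Noiseless toy data from the assumed model dx/dt = -2x + 0.5x^2
x = np.linspace(0.1, 3.0, 200)
dxdt = -2.0 * x + 0.5 * x**2
Theta = np.column_stack([x, x**2, x**3])  # candidate library {x, x^2, x^3}
Xi = stlsq(Theta, dxdt)
print(Xi)  # ~[-2.0, 0.5, 0.0]: the spurious x^3 term is pruned
```

With noisy derivatives the same iteration can prune true terms or retain spurious ones, which is the failure mode the abstract addresses by falling back to a neural-network Jacobian estimate for CSP.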



John Vastola

Harvard University
"Bayesian inference of chemical reaction network parameters given reaction degeneracy: an approximate analytic solution"
Although chemical reaction networks (CRNs) provide performant and biophysically plausible models for explaining single-cell genomic data, inference of reaction network parameters in this setting usually assumes available data points can be viewed as independent samples from a steady state distribution. Less is known about how to perform efficient parameter inference in the case that there is a continuous-time data stream, which adds complexity like nontrivial correlations between samples from different times. In the continuous-time setting, one has two natural questions: (i) given a set of reactions that could plausibly explain the observed data stream, what are reasonable estimates of the associated reaction rate parameters? and (ii) what is the minimal set of reactions necessary to explain the data? Both questions can be formalized as Bayesian inference problems, with the former concerning the inference of a model-dependent parameter posterior, and the latter concerning 'structure' inference. If one can assume each possible reaction has a different stoichiometry vector, there is a well-known analytic solution to both problems; if reactions can have the same stoichiometry vector (i.e., there is reaction degeneracy), both problems become substantially more difficult, and no analytic solution is known. We present the first approximate analytic solution to both problems, which is valid when the number of observations becomes sufficiently large. In its regime of validity, this solution allows its user to avoid expensive likelihood computations that can involve summing over an exponentially large number of terms. We discuss interesting consequences of this solution, like the fact that 'simpler' models with fewer reactions are preferred over more complex ones, and the fact that the parameter posteriors of non-identifiable models are strongly prior-dependent.
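For the non-degenerate case the abstract refers to, the analytic solution is classical: under complete continuous-time observation of a mass-action CRN, the likelihood factorizes over reactions, and a Gamma(a, b) prior on each rate c_j is conjugate, giving posterior Gamma(a + n_j, b + G_j), where n_j counts firings of reaction j and G_j is the time integral of its propensity base along the observed path. A minimal sketch, assuming n_j and G_j have already been extracted from the trajectory:

```python
def gamma_posterior(a, b, n_j, G_j):
    """Conjugate posterior for a mass-action rate c_j observed in
    continuous time: Gamma(a + n_j, b + G_j). Returns the posterior
    shape, rate, and mean.
    n_j: number of firings of reaction j in the observed path.
    G_j: integral over time of the reaction's propensity base g_j(x_t)."""
    a_post = a + n_j
    b_post = b + G_j
    return a_post, b_post, a_post / b_post

# Example: 40 observed firings, integrated base 20.0, vague Gamma(1, 0.01) prior
a_post, b_post, mean = gamma_posterior(1.0, 0.01, 40, 20.0)
print(a_post, b_post, mean)  # 41.0 20.01 ~2.05
```

It is exactly this per-reaction factorization that breaks under reaction degeneracy: when two reactions share a stoichiometry vector, an observed jump cannot be attributed to either one, and the likelihood acquires the combinatorial sums the talk's approximation avoids.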



Adelle Coster

School of Mathematics & Statistics, UNSW, Sydney Australia
"Cellular protein transport: Queuing models and parameter estimation in stochastic systems"
Real-world systems, especially in biology, exhibit significant complexity and inherent limitations in observability. What methods can enhance our understanding of the mechanisms underlying their functionality? Additionally, how can we develop and test explanatory models within a stochastic environment? Evaluating the effectiveness of these models requires quantitative measurements of the disparity between model outputs and observed data. While mean-field, deterministic models have well-established approaches for such assessments, stochastic systems, particularly those constrained by multiple data types, need carefully designed quantitative comparison methods. Methods for inferring the parameters of stochastic models generally require analytical forms of the model solutions, large data sets, summary statistics, or assumptions on the distribution of model outputs. These approaches can be limiting if one wishes to preserve the information in the variability of the data but does not have sufficient data to reliably fit distributions or determine robust statistics. We present a hierarchical approach to develop a distance measure for the direct comparison of model output distributions to experimentally observed distributions, avoiding any assumptions about distributions and the need to choose summary statistics. Our distance measure allows for constraining the model with multiple experiments, not necessarily of the same type, such that each experiment constrains some, or all, of the model parameters. We use this distance for parameter estimation with our queuing model of intracellular GLUT4 translocation. We will explore some practical considerations when using the distance for parameter inference, such as the effects of model output sampling and experimental error. Fitting the queuing model to data allowed us to uncover a possible mechanism of GLUT4 sequestration and release in response to insulin. Authors: Brock D. Sherlock and Adelle C.F. Coster
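A simple member of the same family of sample-to-sample comparisons, shown here only for orientation and not the authors' hierarchical measure, is the empirical 1-Wasserstein distance between equal-size samples, which likewise avoids distributional assumptions and summary statistics:

```python
def wasserstein1_equal(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size samples:
    the mean absolute difference of the sorted values. Zero iff the
    empirical distributions coincide; no parametric form is assumed."""
    if len(xs) != len(ys):
        raise ValueError("samples must have equal size")
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Hypothetical model outputs vs. observed values (illustrative numbers)
model_out = [0.2, 0.9, 0.4, 0.7]
observed = [0.1, 0.8, 0.5, 0.6]
print(wasserstein1_equal(model_out, observed))  # ~0.1
```

Comparing full output distributions this way, rather than means or variances, is what preserves the variability information the abstract emphasizes; the authors' measure additionally handles multiple heterogeneous experiments.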



Clark Kendrick Go

Collaborative Analytics Group, Department of Mathematics, Ateneo de Manila University
"Exploring Mathematical Techniques in Collective Behaviour and Decision Making in Animal Groups"
Collective behaviour in animal groups consists of coordinated movements and interactions among members that aim to achieve a common goal. Whether these goals concern the allocation of resources or defence against predators, collective behaviour appears to be largely a group activity initiated by one member, known as the leader. In the absence of high-resolution spatio-temporal data, various qualitative studies offer a glimpse of how leader-follower interactions take place. For example, Nagy et al. studied the average delay in response when pigeons change direction in flight, and Bourjade et al. studied the first mover and the subsequent order of movements in Przewalski's horses. Furthermore, various studies on collective motion in the animal kingdom offer mathematical models and infer how interactions and decision making take place. Important questions arise during an event of coordinated motion in animals. During such an event, do individuals move according to a certain set of natural rules? Or do certain patterns form due to the influence of a leader? How is this influence measured? Finally, how is influence transferred to other members of the group? In this study, we discuss the role of information theory in quantitatively uncovering leader-follower relationships in a horse group. Specifically, we introduce global and local transfer entropy, applied to a harem of horses. We will discuss their definitions and how these key concepts are used to support causation in events. We will then discuss some important implications of how this technique can be used to analyse collective motion where data are scarce.
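Transfer entropy from Y to X measures how much knowing Y's past reduces uncertainty about X's next state beyond X's own past. A generic plug-in estimator for discretized trajectories with history length 1 is sketched below; the synthetic driver-follower series is an assumption for illustration, and the talk's global/local variants are more refined.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy T_{Y->X} in bits, history
    length 1: sum over observed (x_{t+1}, x_t, y_t) of
    p(x_{t+1}, x_t, y_t) * log2[ p(x_{t+1}|x_t, y_t) / p(x_{t+1}|x_t) ]."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_{t+1}, x_t, y_t)
    pairs_xx = Counter(zip(x[1:], x[:-1]))          # (x_{t+1}, x_t)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))         # (x_t, y_t)
    singles = Counter(x[:-1])                       # x_t
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_with_y = c / pairs_xy[(x0, y0)]
        p_cond_without_y = pairs_xx[(x1, x0)] / singles[x0]
        te += p_joint * log2(p_cond_with_y / p_cond_without_y)
    return te

# Synthetic leader-follower pair: x copies y with a one-step lag,
# so information flows from y to x and T_{Y->X} is positive.
y = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0]
x = [0] + y[:-1]
print(transfer_entropy(x, y))  # positive: y's past predicts x's next state
```

The asymmetry of this quantity (T_{Y->X} versus T_{X->Y}) is what lets it suggest a direction of influence, i.e. a candidate leader, from movement data alone.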



Annual Meeting for the Society for Mathematical Biology, 2025.