Award details

Next generation approaches to connect models and quantitative data

Reference BB/R000816/1
Principal Investigator / Supervisor Professor Ruth Baker
Co-Investigators / Co-Supervisors
Institution University of Oxford
Department Mathematical Institute
Funding type Research
Value (£) 297,282
Status Completed
Type Research Grant
Start date 01/01/2018
End date 01/03/2021
Duration 38 months

Abstract

Using mathematical models to assist in the design and interpretation of biological experiments is becoming increasingly important in biomedical and life sciences research; yet fundamental questions remain unresolved about how best to integrate experimental data within mathematical modelling frameworks to provide useful predictions. Novel mathematical, statistical and computational tools are needed to provide a standardised pipeline that enables experimental data to be used effectively in the development of models, and in model parameterisation and selection. One key challenge in using mathematical modelling to interpret biological experiments is the question of how to integrate multiplex, multi-scale quantitative data generated in experimental laboratories to improve our understanding of a specific biological question. A standard protocol that encompasses the design of experiments targeted towards parameterising models, the validation of specific model hypotheses, and the inference of underlying mechanisms from quantitative data is lacking. A significant reason for this is that, for the kinds of models that are required to interrogate phenomena in the modern life sciences, the calibration of models using quantitative data poses a formidable set of challenges. The models generally contain many parameters, and it is hard to obtain relevant data covering all the aspects of interest or importance to describe the system dynamics. In addition, the data that are collected usually have multiple, generally poorly characterised, sources of noise and uncertainty. Conventional statistical approaches either reach their limits or fail for such complex and, increasingly, high-dimensional problems. Here we seek to address precisely this point and develop a complementary suite of approaches that will enable scientists in the modern life and biomedical sciences to estimate model parameters and perform model selection for complex, multi-scale, and agent-based models.

Summary

Simple mathematical models have been remarkably successful in helping us understand key processes in biology. Traditionally, the utility of models has been to test biological hypotheses by encoding extremely simple descriptions of the biology in a mathematical framework. Mathematical analysis and computer simulation are then used to test whether qualitative predictions of the model match experimental observations. However, biology has advanced to the stage where experimental researchers can generate stunning images of cells and tissues at a level of resolution previously only dreamt of. Being able to visualise, for example, the dynamics of individual mRNAs and proteins over time means that we can now generate extremely sophisticated hypotheses for how large gene regulatory networks, cells, and tissues function. As a result, the mathematical models we develop to test biological hypotheses are quickly growing in size and complexity. In particular, so-called agent-based models have become a popular tool in the modern life sciences. These allow the modeller to, for example, follow the fates and interactions of individual cells and, at the same time, include the effects of gene regulation and signalling. For these agent-based models to be truly useful, for them to direct experimental efforts or even, eventually, replace the need for some experiments, we need to calibrate them using quantitative data. This simply stated need, however, poses a formidable set of challenges for the modelling community: (i) the models have many parameters that must be estimated; (ii) the data are complex, of multiple different types, and rarely, if ever, are all the relevant cells or proteins measured or tracked; (iii) the data are obscured by noise that is both intrinsic to the measured processes and introduced during the experiments. The proposed research will generate new mathematical and computational tools to overcome these challenges.
It will enable scientists in the modern life and biomedical sciences to calibrate models, then select the most appropriate model(s), and hence distinguish between competing biological hypotheses. To make sure they are relevant for biology, these new tools will be developed whilst investigating key biological questions. To ensure that the tools are available for re-use and extension by other researchers in the field, all of our computational codes and resources will be made freely available.
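The award text does not prescribe a particular calibration algorithm, but the three challenges above (many parameters, partial observations, poorly characterised noise) are commonly tackled with likelihood-free methods. As a purely illustrative sketch, the following hypothetical example applies rejection-based approximate Bayesian computation (ABC) to estimate a single rate parameter of a toy stochastic birth process standing in for an agent-based simulator; all function names, the prior, and the tolerance are assumptions for illustration, not part of the award.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_birth_process(rate, n_steps=50):
    """Toy stochastic model: cumulative counts of birth events in
    unit-time windows (a stand-in for an agent-based simulation)."""
    return np.cumsum(rng.poisson(rate, size=n_steps))

def summary(trajectory):
    """Summary statistic: final population size."""
    return trajectory[-1]

# 'Observed' data generated with a known rate, so recovery can be checked.
true_rate = 2.0
observed = summary(simulate_birth_process(true_rate))

# ABC rejection sampling: draw candidate rates from the prior, simulate,
# and accept draws whose simulated summary lies within a tolerance of
# the observed summary. Accepted draws approximate the posterior.
prior_draws = rng.uniform(0.0, 5.0, size=20000)
tolerance = 5.0
accepted = [r for r in prior_draws
            if abs(summary(simulate_birth_process(r)) - observed) <= tolerance]

posterior_mean = float(np.mean(accepted))
```

In practice the summary statistics, tolerance, and prior dominate the quality of the inference, and for genuinely high-dimensional agent-based models more sophisticated schemes (e.g. sequential variants) are needed; this sketch only illustrates the basic simulate-compare-accept loop.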

Impact Summary

Economy: The use and analysis of data carry both social and economic costs. First, ineffective use of data generated at the expense of public funding is a waste of resources. Secondly, whenever animals are involved in research - as is routinely the case in immunology, developmental and stem cell biology, and physiology - we have to ensure that the 3R principles (replacement, reduction, refinement) are adhered to. The methodologies developed as part of this project will provide a direct means to mitigate these issues, by ensuring that experiments are designed to collect the appropriate data to answer specific questions.

Society: In terms of healthcare, we increasingly rely on diverse sets of data and their integration in order to make or plan concrete interventions in the lives of patients or, in public health, make regulations that affect large parts of the population. It is essential to the decision- and policy-making processes that we understand how to integrate and interpret these diverse data sets using mathematical and statistical models and techniques. In addition, in the medium-to-long term, for personalised medicine to become a reality requires us to understand how to efficiently and accurately integrate and interpret patient-specific, multiplex, quantitative data using theoretical approaches. The proposed research will bring the UK research community further towards a unified pipeline for interfacing mathematical models with quantitative data.

Knowledge: It is now almost the norm, particularly in high profile journals, for publications from modern life sciences research groups to include a model that integrates biological hypotheses and validates them using experimental data. Rarely, however, are these models properly calibrated using quantitative data. A key reason for this is that conventional statistical approaches often reach their limits, or fail, for the complex and high-dimensional problems posed in attempting to calibrate the (increasingly) large and complex models now in routine use. The scientific advances that will be made as part of the proposed project will provide the relevant tools and techniques to overcome these issues. To ensure maximum impact, all computational algorithms and code for the technologies generated during this project will be made freely available for re-use and extension by the research community.

People: The next generation of researchers working at the interface of theoretical and experimental life sciences will require new skills: the ability to calibrate and interrogate complex models using multiplex, quantitative data in order to generate new insights and predictions. To this end, this project will train two postdoctoral research associates in developing and applying computational statistics approaches to estimate model parameters and perform model selection for complex, multi-scale, and agent-based models in the life and biomedical sciences.
Committee Research Committee C (Genes, development and STEM approaches to biology)
Research Topics Systems Biology
Research Priority X – Research Priority information not available
Research Initiative X – not in an Initiative
Funding Scheme X – not Funded via a specific Funding Scheme