BBSRC Portfolio Analyser
Award details
Mesoscale structural biology using deep learning
Reference
BB/T011823/1
Principal Investigator / Supervisor
Dr Susan Cox
Co-Investigators / Co-Supervisors
Institution
King's College London
Department
Randall Division of Cell and Molecular Biophysics
Funding type
Research
Value (£)
149,413
Status
Completed
Type
Research Grant
Start date
01/07/2021
End date
30/09/2022
Duration
15 months
Abstract
We will create a deep-learning-based method to allow a 3D model consisting of a number of points to be fitted to a large number of 2D images of the structure under consideration. For each image in the dataset, the rotation of the 3D model that allows the best fit to the data will be found. The model will then be optimised to minimise the total error. This will allow the optimisation of the sample model without assuming particular symmetry constraints. The architecture of the deep learning network that extracts the pose information will comprise an encoding section that predicts a rotation, a differentiable renderer, and a loss function that takes an input and output image as its arguments. To allow the system to converge on the correct structure, input and output images will be heavily blurred initially (i.e. a large Gaussian will be used to render them from the point data), and the blur will be decreased as the model is optimised. Since real biological structures a few hundred nanometres in size often exhibit some variation in structure, we will allow a limited affine transformation for each image to model small amounts of flex and distortion.
We will test on simulations of different structures to better understand what type of data this system will perform well on. In particular, the impact of localisation precision and the labelling rate will be tested by varying both systematically. This will inform our treatment of experimental data, since the data can be filtered for higher localisation precision at the cost of a lower labelling rate. The performance on experimental data will be tested on localisation microscopy data of the centriole from the Manley lab at EPFL, with the performance of the method being cross-checked against images of the same proteins imaged with expansion microscopy combined with structured illumination microscopy.
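The sketch below illustrates, under stated assumptions, the kind of pipeline the abstract describes: a small convolutional encoder that predicts a rotation for each 2D image, a differentiable renderer that draws a Gaussian at each projected point of a learnable 3D point model, and an image-space loss. All names, layer sizes and the quaternion pose parameterisation are illustrative assumptions rather than the funded implementation; the Gaussian width sigma stands in for the coarse-to-fine blur schedule, and the per-image affine flex term is omitted for brevity.

```python
# Illustrative sketch only (PyTorch): layer sizes, the quaternion pose
# parameterisation and all names are assumptions, not the project's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG = 64        # rendered image size in pixels (assumed)
N_POINTS = 200  # number of points in the 3D model (assumed)

class PoseEncoder(nn.Module):
    """Encoding section: predicts a rotation (unit quaternion) from one 2D image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * (IMG // 8) ** 2, 4),
        )

    def forward(self, img):                        # img: (B, 1, IMG, IMG)
        return F.normalize(self.net(img), dim=-1)  # unit quaternion, (B, 4)

def quat_to_matrix(q):
    """Convert unit quaternions (B, 4) to rotation matrices (B, 3, 3)."""
    w, x, y, z = q.unbind(-1)
    return torch.stack([
        1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y),
        2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x),
        2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y),
    ], dim=-1).reshape(-1, 3, 3)

def render_gaussians(points_2d, sigma, img_size=IMG):
    """Differentiable renderer: a Gaussian of width sigma at each projected point.

    points_2d: (B, N, 2) coordinates in [-1, 1]; sigma in the same units.
    """
    coords = torch.linspace(-1.0, 1.0, img_size, device=points_2d.device)
    yy, xx = torch.meshgrid(coords, coords, indexing="ij")
    grid = torch.stack([xx, yy], dim=-1)                      # (H, W, 2)
    d2 = ((grid[None, None] - points_2d[:, :, None, None]) ** 2).sum(-1)
    return torch.exp(-d2 / (2 * sigma ** 2)).sum(dim=1)       # (B, H, W)

# Learnable 3D point model, started from a random distribution of points.
model_points = nn.Parameter(torch.randn(N_POINTS, 3) * 0.3)
encoder = PoseEncoder()
optimiser = torch.optim.Adam([model_points, *encoder.parameters()], lr=1e-3)

def step(batch_imgs, sigma):
    """One optimisation step: predict poses, render, compare to the data."""
    q = encoder(batch_imgs)                                   # (B, 4)
    R = quat_to_matrix(q)                                     # (B, 3, 3)
    rotated = torch.einsum("bij,nj->bni", R, model_points)    # (B, N, 3)
    rendered = render_gaussians(rotated[..., :2], sigma)      # project by dropping z
    loss = F.mse_loss(rendered, batch_imgs.squeeze(1))        # image-space loss (placeholder)
    optimiser.zero_grad(); loss.backward(); optimiser.step()
    return loss.item()
```

In use, sigma would start large and be reduced over the course of the optimisation, with the experimental images re-rendered from the localisation data at the matching blur width so that input and output images are blurred consistently.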
Summary
There are many structures in the cell which are thought to be the same (or almost the same) every time they form. Examples include the nuclear pore complex and the centriole. Structures on a lengthscale from around 30 nm to a micron can be imaged by a form of fluorescence microscopy called localisation microscopy, where the position of each individual fluorophore is found to high precision. The localisation microscopy methods which are simplest to analyse and least likely to produce artifacts create images where the 3D structure is projected down onto a 2D image. This means that it is difficult to deduce what the 3D structure is. There are a number of other microscopy techniques, particularly cryo-electron microscopy, which have faced similar challenges. In general, this is approached by putting images into a number of classes which are then averaged to improve signal to noise, and a model is then optimised to fit all of the information.
However, there is a property of localisation microscopy which means that we can take a different approach, which has the potential to fit the data much better. In localisation microscopy the position of each individual fluorophore is found, and the image of the sample is then reconstructed by displaying a Gaussian at the location of each fluorophore. This means that the system used to display the data can easily be created as a differentiable renderer (i.e. a system of display where the first derivative at each point can be calculated). We will use this property to create a deep-learning-based optimisation system which will generate an optimised 3D model of points to describe a dataset with many 2D images of the structure. The model will start off as a random distribution of points. At each stage of the optimisation the model will be compared to all the 2D images, and for each of them the angle which produces the best fit to the data will be found. The model will then be changed and the process repeated, gradually optimising the model to fit the data. The final result will be a 3D model which incorporates all the information from the different 2D images. This is an unusual application of deep learning: instead of training a network which will be useful for people to use directly, the training of the network itself leads to the creation of the final model.
Since we are fitting to each individual image, it will not be necessary to perform averaging of the images to improve the signal to noise. For relatively large structures such as the ones we are considering, this is an advantage because the structures are likely to flex or deform to some extent, and averaging would therefore wash out structure. In contrast, we can build deformation into our model and will therefore recover an accurate structure even if there are slight variations between different instances of the structure.
We will test the performance of our method on simulations and experimental data. Simulations will allow us to assess the impact that experimental effects will have on our results. In particular, there is an uncertainty associated with the localisation of each fluorophore, and a certain proportion of the proteins are either not labelled or not detected. The method will then be tested on experimental datasets of different centriole proteins, each with several thousand images of individual centrioles. Since this is not enough to train a deep learning network, we will carry out data augmentation, in which each image is shifted slightly and rotated in the x,y plane to create new images. This artificially creates more data and assists the network in learning small shifts and rotations. The results of fitting to experimental data will be compared to images of the same structures imaged using another super-resolution microscopy technique in which the sample is embedded in a gel that is then expanded. This will allow us to be confident that our method is able to reproduce real structure from experimental data.
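As a hedged illustration of the augmentation step described above (small random in-plane rotations and shifts applied to each image), the following PyTorch snippet uses affine_grid and grid_sample; the function name and the ranges of the random angle and shift are assumptions chosen for illustration, not values from the project.

```python
# Illustrative augmentation sketch (PyTorch): small random in-plane rotation
# and shift per image. Ranges and the function name are assumed for illustration.
import math
import torch
import torch.nn.functional as F

def augment(imgs, max_angle_deg=10.0, max_shift=0.05):
    """imgs: (B, 1, H, W). Returns randomly rotated and shifted copies."""
    b = imgs.shape[0]
    angle = (torch.rand(b) * 2 - 1) * math.radians(max_angle_deg)
    shift = (torch.rand(b, 2) * 2 - 1) * max_shift            # in [-1, 1] image units
    cos, sin = torch.cos(angle), torch.sin(angle)
    theta = torch.zeros(b, 2, 3)
    theta[:, 0, 0], theta[:, 0, 1], theta[:, 0, 2] = cos, -sin, shift[:, 0]
    theta[:, 1, 0], theta[:, 1, 1], theta[:, 1, 2] = sin,  cos, shift[:, 1]
    grid = F.affine_grid(theta, list(imgs.shape), align_corners=False)
    return F.grid_sample(imgs, grid, align_corners=False)
```

Calling, for example, torch.cat([imgs, augment(imgs)], dim=0) would double the effective dataset size; in practice several augmented copies of each experimental image could be generated.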
Impact Summary
The initial impact is expected to be seen in an improved ability to reconstruct 3D structures from sets of 2D localisation microscopy images. We will initially target proteins of the centriole for reconstruction, with other possible targets being the nuclear pore complex and clathrin-coated pits. These are all systems of high biomedical importance, and in the longer term the greatest impact of this project is likely to be enabling and accelerating new biomedical research.
The basic approach we are developing could also be of much wider interest. We have already made initial contact with Professor Helen Saibil, a prominent member of the EM community, who thought that the method could be of considerable interest to those developing EM software. More generally, the method could also have applications for other types of fluorescence microscopy, since many techniques have much worse resolution in z than in xy, meaning that in effect each image is a projection. For continuous structures (i.e. anything except small points), our approach could be the basis of a method to reproduce the 3D structure of the sample at better than the resolution limit.
We have links to a number of microscopy companies, including Nikon. When the algorithm is developed we will approach microscope companies with a view to entering into a dialogue about how their systems might be used to acquire data suitable for our method, and potentially with regard to interest in a commercial version of the algorithm. Microscope companies stand to benefit from our work because it would extend the experiments that could be carried out on their systems and give users greater confidence in their results. In turn, their users will benefit because we will be able to advise on changes to the hardware and software of the system which would optimise performance.
The post-doctoral researcher employed on the project will receive training in Python programming with PyTorch, neural network architecture and testing procedures, image analysis, and super-resolution microscopy. With regard to the deep learning component of the project, the approach taken here is highly unusual in the context of microscopy, where most uses of deep learning take fairly standard approaches to classification or to the creation of images using generative adversarial networks. In contrast, here we are using computer vision/deep learning approaches at the cutting edge of the engineering field, using the operation of the neural network itself as a tool with which to perform the model optimisation. Both advanced microscopy and deep learning are rapidly growing areas, with many jobs being created and a shortage of people with in-depth training. We anticipate that the training and experience the postdoc acquires over the course of the project will be highly beneficial in enabling their future career choices.
Committee
Not funded via Committee
Research Topics
Structural Biology, Technology and Methods Development
Research Priority
X – Research Priority information not available
Research Initiative
Tools and Resources Development Fund (TRDF) [2006-2015]
Funding Scheme
X – not Funded via a specific Funding Scheme