Award details

Brainwide neural populations supporting multisensory decision-making

Reference BB/T016639/1
Principal Investigator / Supervisor Professor Matteo Carandini
Co-Investigators / Co-Supervisors Dr Philip Coen
Institution University College London
Department Institute of Ophthalmology
Funding type Research
Value (£) 507,874
Status Current
Type Research Grant
Start date 01/03/2021
End date 29/02/2024
Duration 36 months

Abstract

To make optimal actions, the brain typically needs to integrate information from different sensory modalities. The neural basis of this integration is only partially understood. The underlying neuronal populations can be distributed widely and sparsely, and understanding their activity requires brainwide recordings at single-neuron resolution during multisensory behavior. These experiments are now possible thanks to two advances: a new audiovisual localization task for head-fixed mice, and next-generation Neuropixels probes.

Objective 1: Role of primary sensory cortices in multisensory decision-making. We will test the hypotheses that even primary sensory areas are multisensory, and that their interaction is relevant for behavior. Alternatively, they may share behavioral but not multisensory signals. We will simultaneously record from large populations in auditory and visual cortex. We will then use analyses that characterize sensory and behavioral signals, as well as the signals communicated between the two populations.

Objective 2: Cortical transformation of audiovisual signals into decisions and actions. We have identified a region of frontal cortex that is required for audiovisual decision-making. Do neurons in this region carry audiovisual signals, and how are these signals combined and transformed from earlier unisensory signals? We will answer these questions by performing simultaneous recordings from early sensory regions and frontal cortex and analyzing the results using techniques similar to those in Objective 1.

Objective 3: Brainwide map of audiovisual decision-making. We will use enhanced Neuropixels probes to generate the first brainwide map of audiovisual processing, comprising ~100,000 neurons. This map will allow us to test longstanding hypotheses about the neural basis of multisensory behavior. It will reveal the flow of audiovisual information throughout the brain and may even reveal hitherto unknown regions of audiovisual integration.

Summary

To represent the external world and make appropriate decisions, brains need to combine information from different sensory modalities. This process is ubiquitous and vital, whether for a predator, its prey, or a pedestrian trying to cross the street safely. But despite the prevalence of multisensory decisions in natural environments, we remain largely ignorant about where and how multimodal streams of information are combined in the brain. This is partly because multisensory behaviors are difficult to recreate robustly in a laboratory environment, and partly because multisensory decisions involve neurons dispersed across a wide set of brain regions, and it's technically challenging to record from all of them. However, with new developments in rodent behavior and recording technology, we are now able to tackle the neural mechanisms underlying multisensory decision-making with unprecedented efficacy. Our lab recently developed a multisensory behavioral task for mice in which they turn a wheel to indicate whether a stimulus appeared on the left or right. The stimuli can be auditory, visual, or a combination of the two. We specifically designed this behavioral task to be compatible with new electrophysiology devices called Neuropixels probes, which we helped develop. These probes allow us to record from hundreds of neurons anywhere in the brain. By combining these two developments, we will answer longstanding questions about multisensory decision-making. We can also create the first brainwide map to trace the auditory and visual signals as they propagate through the brain and evolve into the mouse's decision and action.

Our first objective focuses on the role of early sensory regions in multisensory decisions. Historically, certain regions of cortex were considered unisensory: they represented a single sensory modality, like vision or audition. However, several recent studies have claimed that these regions respond to multiple modalities, and that the first stages of audiovisual integration happen in these areas. We will record large neural populations simultaneously in two of these areas, the primary visual and auditory cortices, in behaving mice. With these recordings, we can conclusively test whether these regions contain multisensory information and, if so, whether this information guides the behavior of the mouse.

Our second objective focuses on how auditory and visual information is combined in multisensory regions. It's been proposed that multisensory brain regions mix visual and auditory information such that some neurons respond to only one sensory modality, some respond to neither, and a smaller fraction respond to both. However, this hypothesis was based on a region of the brain that we now suspect isn't required for audiovisual decisions. We therefore plan to record simultaneously from a brain region in frontal cortex that we know to be required for the behavior, together with earlier sensory regions. Through this experiment, we will understand how information is transformed between early and late regions in the multisensory decision-making pathway, and determine how auditory and visual signals are combined.

Our final objective is our most ambitious: to create a brainwide map of audiovisual signals while mice perform the behavioral task. This map will comprise ~100,000 neurons from regions across the brain, something that would have been unachievable just a few years ago. This map will be invaluable for two reasons. First, it will establish which regions of the brain have the potential to represent the mouse's choice, because these regions must contain multisensory neurons. Second, we expect to identify previously unexplored regions of multisensory integration, providing exciting new avenues of research. Together, these experiments combine new developments in behavioral neuroscience and electrophysiology to gain unprecedented insights into the mechanisms underlying multisensory integration and decision-making.

Impact Summary

The proposed work is at the level of fundamental science, and its main impact is in the generation of essential knowledge about how the brain works. But we expect the findings to have far-reaching implications on a longer timescale, particularly in the following domains.

(1) "Understanding and treating disease". Atypical audiovisual integration is commonly observed in a number of mental disorders, including autism spectrum disorder and schizophrenia. Individuals with these conditions typically exhibit extended integration windows (i.e. they combine auditory and visual stimuli from separate sources) and don't display the usual improvements in reaction time and reliability when responding to multisensory stimuli. These deficits extend to speech recognition, where sounds are typically combined with lip movements, indicating a possible link to difficulties with social communication. Mouse models of these conditions exist, but differences in audiovisual task performance or in the underlying neural mechanisms have never been examined. The proposed project will create the tools needed to develop this area of research. By using the audiovisual task we have developed and recording neural activity in the brain regions we will identify, the multisensory integration properties of these disease models can be examined. This may help to explain atypical integration in human patients and, in the longer term, could lead to new treatment programs.

(2) "Development of artificial intelligence". Deep neural networks are being used for a vast array of projects, from analyzing health data to beating humans at computer games. However, human brains still handily beat the machines at multisensory integration and sensory processing, whether identifying objects in an image or recognizing speech. The visual system has been a major source of inspiration for image recognition research, with deep neural networks modelled after the hierarchical structure of visual cortex. But despite the prevalence and expertise of human multisensory integration, the neural mechanisms of this process haven't had a similarly significant impact on machine learning. This is primarily because we still don't understand how multisensory neural networks in the brain are structured. The proposed research will answer this question and inspire the design of future deep neural networks. The kinds of computations we discover in the brain could one day be part of an autonomous vehicle or improve the text-to-speech capabilities of mobile phones.

(3) "Systems approaches to the biosciences". The project falls under the priority area "Systems approaches to the biosciences" because we are characterizing a complex biological process with unprecedented scope and accuracy. The system involves different hierarchical components, with roles ranging from sensory processing of auditory and visual signals to integrated multisensory decisions. These components interact in complex ways, and they will be studied during behavior (decisions, actions, arousal). Our final model of the system will capture the different components and their interactions so that it can be used to guide future studies of multisensory integration. To this end, we will build on strong collaborations with bioscientists and computational neuroscientists (e.g. the International Brain Laboratory, comprising more than 20 labs).
Committee Research Committee A (Animal disease, health and welfare)
Research Topics Neuroscience and Behaviour
Research Priority X – Research Priority information not available
Research Initiative X – not in an Initiative
Funding Scheme X – not funded via a specific Funding Scheme