Award details

Allocentric and egocentric representations of sound space in auditory cortex

Reference BB/R004420/1
Principal Investigator / Supervisor Dr Jennifer Bizley
Co-Investigators / Co-Supervisors Dr Stephen Town
Institution University College London
Department Ear Institute
Funding type Research
Value (£) 522,404
Status Completed
Type Research Grant
Start date 01/12/2017
End date 30/11/2020
Duration 36 months

Abstract

The location of a sound source is not represented in the cochlea but is instead computed by the auditory brain from localisation cues that are defined relative to the head: differences in the timing and level of the sound signal at the two ears and monaural spectral cues. However, perceptually we can locate a sound both relative to ourselves and within the world. Previous physiological studies of the neural coding of sound location have not been able to determine whether spatial tuning is egocentric (head-centered) or allocentric (world-centered), as investigations have been performed in static listeners. We recently demonstrated that egocentric and allocentric representations can be disambiguated by measuring spatial tuning in freely moving subjects. With this approach we described both egocentric and allocentric representations in primary auditory cortex. This proposal aims to extend these findings to address a number of outstanding questions by recording multi-dimensional spatial receptive fields in an arena incorporating a novel speaker grid. Unlike a speaker ring, which provides a very sparse and biased sampling of space, a speaker grid will enable tuning to be systematically measured to estimate the size, shape and position of allocentric receptive fields. By changing visual cues, entrance points and arena shape/size we will determine how allocentric tuning varies across different environments. The grid arrangement will allow us to gain unique insights into the influence of distance on egocentric receptive fields. We will test the hypothesis that the prevalence of allocentric tuning increases through the auditory 'where' processing stream. We will train animals in allocentric and egocentric localisation tasks to determine the coordinate frames of non-human auditory spatial cognition. We will then record spatial receptive fields during behaviour to determine the influence of both long-term training and short-term attentional modulation on receptive field structure.
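As a minimal sketch of the coordinate-frame logic underlying this approach (an illustrative example only, assuming 2D tracking of head position and orientation in the arena; the function and variable names are hypothetical, not part of the proposal), a speaker's fixed world-centred (allocentric) position can be converted into a head-centred (egocentric) angle and distance as follows:

import numpy as np

def egocentric_from_allocentric(speaker_xy, head_xy, head_angle_rad):
    """Convert a speaker's world-centred (allocentric) position into a
    head-centred (egocentric) angle and distance, given the tracked head
    position and orientation at sound onset."""
    dx, dy = np.asarray(speaker_xy, dtype=float) - np.asarray(head_xy, dtype=float)
    world_angle = np.arctan2(dy, dx)                                   # source bearing in arena coordinates
    ego_angle = np.angle(np.exp(1j * (world_angle - head_angle_rad)))  # rotate into head frame, wrap to (-pi, pi]
    distance = np.hypot(dx, dy)                                        # source distance from the head
    return ego_angle, distance

# The same speaker, fixed in the world, is directly ahead of or directly behind
# the listener depending only on where the head points when the sound is played.
print(egocentric_from_allocentric((1.0, 0.0), (0.0, 0.0), 0.0))      # directly ahead
print(egocentric_from_allocentric((1.0, 0.0), (0.0, 0.0), np.pi))    # directly behind

Applied to every sound presentation, this mapping yields the egocentric angle and distance alongside the allocentric speaker position, which is what allows receptive fields to be estimated in either coordinate frame from the same recording.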

Summary

We can recognise and localise the many sounds that make up our everyday environment. For example, when sitting outside a café we can hear the rumble of a plane overhead, the pop music coming from the shop across the street, the chink of cutlery from the table behind us and the voice of the friend to our right who we're having coffee with. However, the cochlea, which encodes the frequency composition of sounds, does not provide information about where a sound comes from. Instead sound location must be computed in the brain by comparing the signals arriving at the two ears. Auditory space thus must be constructed from measurements relative to the head - sounds from our left will reach our left ear sooner than the right ear and will be louder in the left ear than the right because of the shadow cast by our head.

Nonetheless, we can intuitively describe sound location in several ways: relative to ourselves (the toddler is giggling on my left) or relative to the world (the clock is chiming on the fireplace). Indeed, as we move around the world our perception of sound sources in the world remains stable - if we turn our heads towards a ringing phone we don't perceive the phone rotating around us; rather we are aware that we are turning towards it. This stability is remarkable because the sounds arriving at the ears are very different before and after we move - if I turn my head 180 degrees, a sound that originally arrived first at my left ear may arrive at my right ear first after turning. To achieve this, the brain must distinguish changes in sounds caused by our own movement and compensate to make the world stable. Perceptual stability and our intuitive experience of sound location in the world suggest that the brain can represent both head-centered and world-centered space.

We recently developed new methods to determine whether spatial sensitivity in the brain is head-centered (egocentric) or world-centered (allocentric). In previous studies of spatial processing, subjects are held static at the centre of a ring of speakers while neural activity is recorded. Because the subject never moves, the sound location relative to the head and to the world are always the same, so it is impossible to tell which space a neuron represents. In our study, however, subjects freely explored an environment while turning and moving their heads, so that sound location relative to the head and the world could differ and we could determine whether neural coding was head- or world-centered. For the first time we proved what most scientists had assumed: that many neurons in auditory cortex represent sound location relative to the head and are thus egocentric. But surprisingly, we also discovered a smaller population of allocentric neurons that could represent sound location in the world regardless of head position or direction. In this project we seek to better understand how egocentric and allocentric neurons are organised in the brain and how they contribute to sound perception. We will use a grid of speakers to map in unprecedented detail how space is represented by egocentric and allocentric neurons. We will also ask how animals experience sound location by training animals to perform either a head-centered or a world-centered sound localization task. We will then compare neurons when animals are actively performing tasks or passively listening, to understand how attention changes spatial processing.
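A minimal simulation makes the logic of this dissociation concrete (the speaker angles and head directions below are illustrative values, not experimental data): for a static listener the head-centred and world-centred source angles are perfectly confounded, whereas for a freely moving listener they become independent and can be compared against neural activity separately.

import numpy as np

rng = np.random.default_rng(0)
wrap = lambda a: np.angle(np.exp(1j * a))       # wrap angles to (-pi, pi]

# Illustrative sound presentations from speakers at fixed world-centred angles
world_angle = rng.uniform(-np.pi, np.pi, size=1000)

# Static listener: the head never moves, so the head-centred angle is simply a
# copy of the world-centred angle -- the two coordinate frames are confounded.
ego_static = wrap(world_angle - 0.0)

# Freely moving listener: head direction differs from trial to trial, so the
# head-centred and world-centred angles can be dissociated.
head_direction = rng.uniform(-np.pi, np.pi, size=1000)
ego_moving = wrap(world_angle - head_direction)

print(np.corrcoef(world_angle, ego_static)[0, 1])   # 1.0  (frames indistinguishable)
print(np.corrcoef(world_angle, ego_moving)[0, 1])   # ~0   (frames dissociated)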
Finally, we will investigate how experience shapes the brain's representation of sound source location by comparing neurons in animals trained on each task with those in untrained animals. The ability to localise a sound in space is crucial for survival and communication: when listening to one voice against a noisy background, our use of spatial cues significantly improves our ability to pick the target voice out from that background. Understanding how the brain represents auditory space may influence the design of hearing aids and inspire improvements in speech recognition.

Impact Summary

The goal of this project is to answer fundamental questions about how the brain represents a sound's location in the world. While researchers have addressed this question for several decades, it is only with the combination of high-channel-count wireless recordings and computer-vision approaches to head tracking that we can determine in which coordinate frames spatial tuning exists in the brain. Understanding the neural mechanisms of sound localization will provide knowledge for engineers working on signal-processing devices for auditory prostheses, including hearing aids and cochlear and midbrain implants. Technology currently exists to equip implants with multiple microphone arrays that should, in theory, facilitate better sound source separation and selection, but their utilization is limited by the requirement to 'steer' such devices. A number of research groups are working on ways to intelligently steer selective source amplification using EEG-based measures of neural activity in auditory cortex. However, these methods all assume a head-centered representation in auditory cortex (i.e. that increased activity in the right hemisphere indicates a source of interest to the left of the head). This approach will only work if the dominant spatial representation is egocentric: understanding to what extent this assumption is valid is a crucial step in determining the likelihood of success. There is also evidence that hearing aid users can benefit from gyroscopic information that introduces movement-related cues for listeners, illustrating a practical effort to recreate the conditions necessary for allocentric hearing. Similar approaches may benefit cochlear implant users and together illustrate the biomedical relevance of understanding auditory spatial processing.
Technology beneficiaries will include communications companies and electrical engineers, for whom a better understanding of how to extract one signal from many may lead to developments in machine listening. Biologically inspired methods of world-centered sound localisation have broader applications across science and engineering. One aspect of particular relevance to robotics experts is the ability to maintain a representation of location in the world across movement - in humans this is known as perceptual stability, but the problem is just as applicable to artificial intelligence.
Spatial cognition and its neural mechanisms are critical for normal hearing and for therapies targeting hearing loss and the auditory symptoms of neuropsychiatric and neurodevelopmental disorders. Failure to localize sounds effectively is diagnostic for Auditory Processing Disorder (APD) in children, and individuals with autism and schizophrenia show abnormal sound localization. Audiologists, clinicians and patient groups may thus benefit from an advanced understanding of spatial sound processing and from future testing to determine whether auditory spatial dysfunction results from impaired utilization of localization cues or from a failure to transform egocentric into allocentric space.
Our project will also impact a wide range of academic beneficiaries: this is basic neuroscience research that will provide key insights into how the auditory brain segregates sounds into their component sources. Such knowledge will be key for researchers working within auditory neuroscience and, more broadly, the sensory and cognitive fields of neuroscience and psychology.
Committee Research Committee A (Animal disease, health and welfare)
Research Topics Neuroscience and Behaviour
Research Priority X – Research Priority information not available
Research Initiative X – not in an Initiative
Funding Scheme X – not Funded via a specific Funding Scheme