Award details

Identifying the signal in the noise: a systems approach for examining invariance in auditory cortex

Reference BB/H016813/2
Principal Investigator / Supervisor Dr Jennifer Bizley
Co-Investigators /
Co-Supervisors
Institution University College London
Department Ear Institute
Funding type Research
Value (£) 384,541
Status Completed
Type Research Grant
Start date 10/10/2011
End date 09/10/2014
Duration 36 months

Abstract

Failure to understand speech in a noisy environment is one of the principal complaints of the hearing impaired. Our remarkable ability to recognize and understand speech across many different speakers, voice pitches and listening conditions likely depends on the auditory brain extracting those acoustic cues which provide reliable information about a particular sound feature, in order to form a neural representation which robustly identifies the stimulus regardless of task-irrelevant 'nuisance' variables. This proposal employs a systems neuroscience approach to examine how auditory cortex forms invariant neural representations of vowel identity, enabling a listener to differentiate vowels such as /ae/ from /ih/ irrespective of voice pitch or location in space, and how this is maintained despite changes in the background noise environment. Three specific questions will be addressed: (1) How (with what neural code) and where (in which cortical field) do neurons support invariant perception of vowel identity? (2) Which cortical fields are necessary for an animal to perform invariant vowel recognition? (3) What role does visual information play in helping us to 'hear better'? By simultaneously measuring spiking activity, local field potentials and neural oscillations in trained ferrets performing a vowel identification task, we will examine the incidence and location of invariant vowel timbre encoding. We will explore the neural codes which might support invariant coding, and use reversible inactivation techniques to examine causal relationships between activity in specific cortical fields and vowel discrimination behaviour. Lastly, we will examine how visual information is integrated within auditory cortex in order to aid listening in difficult conditions. Whilst multisensory integration has been documented in early sensory cortices, very few studies have sought to correlate physiological measures with simultaneous assessment of any multisensory behavioural advantage.

Summary

We are able to recognize and understand speech across many different speakers, voice pitches and listening conditions. However, the acoustic waveform of a sound (e.g. the vowel 'ae') will vary considerably depending on the individual speaker, and the 'ae' may be embedded in a cacophony of other, background sounds in our often noisy environments. Despite this, we have no difficulty recognizing an 'ae' as an 'ae', suggesting that the brain is capable of forming a representation of the vowel sound which is invariant to these 'nuisance' variables. For vowel sounds, the timbre, or vowel identity, is determined by the spectral envelope. Filtering by the mouth, lips and tongue results in energy peaks, or 'formants', in the spectrum, and it is the location of these formants which differentiates vowel sounds from one another. Thus, the fact that we can discriminate 'ae' from 'ih' irrespective of the gender, age or accent of a speaker suggests that we form an invariant representation of the formant relations, independently of the fundamental frequency, room reverberations, or spatial location, in both quiet and noisy conditions. The aim of this research program is to discover where and how such invariant representations arise in the central auditory system and how they are maintained in noisy environments. Forming invariant representations is one of the greatest challenges for sensory systems, and understanding where and how such representations are read out is crucial for the design of any neuroprosthetic device. Our research uses ferrets, because their hearing spans a very similar range of frequencies to our own. Moreover, ferret vocalizations share many similarities with human vowel sounds. Ferrets rapidly learn to discriminate vowel sounds, and we are able to record the activity of their nerve cells whilst they perform such listening tasks.
By probing the circumstances under which the ferret is able to discriminate vowel sounds, and measuring the neural activity, we can look for where in the auditory brain invariant vowel representation might occur. The second part of this project involves reversibly silencing individual brain areas by cooling them. The principle of this technique is much the same as using an ice pack to cool pain neurons in a bruised piece of skin. Small 'cryoloops' are implanted above auditory cortex in trained animals. This technique allows us to test whether particular brain areas are causally involved in vowel discrimination. The final part of this project investigates the role of visual information in auditory perception. It is well known that seeing a person's mouth movements while they talk to you enhances your ability to understand them - especially if you are listening in a very noisy room. When trying to pick out a quiet sound in a noisy background, knowing when the sound is likely to occur also enhances your ability to correctly identify it. It has recently been shown that visual information is integrated into the very earliest auditory cortical areas. However, quite how this visual information shapes our auditory perception is unknown. The work in this proposal seeks to examine how visual information helps a trained animal to identify vowel sounds more accurately, whilst simultaneously examining how the visual stimulus influences the behaviour of neurons in auditory cortex. Inappropriate integration of auditory and visual information is postulated to underlie schizophrenic symptoms, and understanding how informative visual stimuli influence auditory cortical activity will provide valuable insight into how sensory integration occurs in the healthy brain. Hearing-impaired individuals most frequently suffer from an inability to effectively identify speech in noisy environments. 
Understanding how neurons are able to represent vowel identity robustly across a variety of listening conditions and noise environments will enhance hearing aid and cochlear implant design.

Impact Summary

The proposed work is a fundamental neuroscience research project and will produce key insights into the functional organisation of mammalian sensory pathways and the processing of sounds by biological systems. Discriminating speech sounds in a noisy environment is one of the principal complaints of hearing-impaired listeners. Understanding the neural processing which underlies this ability in normal-hearing brains will offer key insights into how to better design signal processors in cochlear implants and hearing aids. Similar advantages will be afforded to communication technologies; we need only consider the very substantial shortcomings of even the most advanced artificial speech recognition systems to be reminded of the remarkable sophistication of the auditory system. Collaboration with ENT surgeons and audiologists, both within and beyond Oxford University, ensures that our work maintains a clinical focus. Dr Hartley, a clinician-scientist, has recently developed a ferret cochlear implant model, meaning that the neural processing insights we gain from our efforts to relate perception to neural firing can be used to generate testable hypotheses which can be implemented within this animal model. We maintain active collaborations with the growing number of sensory neuroscience groups, both in the UK and USA, who use ferrets as an animal model. Through regular dialogue we will continue to share new data, techniques and methods to the benefit of all. An example of such collaborative endeavour is the ferret brain atlas, which is being developed by neuroanatomist Dr Susanne Radtke-Schuller (based in Munich), in collaboration with the University of Maryland-based group of Dr Shihab Shamma, and the Oxford group. 
This atlas is the first of its kind for this species and will provide a free online resource containing cytoarchitectonic data and high-resolution structural MRI scans, available to assist the increasing number of scientists who are working with ferrets. We increase the impact of our work by participating in Deafness Research UK's public outreach programs. In the past year we have had a number of school work-experience students, and a Nuffield bursary student, spend time in the lab in an attempt to encourage more school leavers to consider pursuing a career in biomedical science. We maintain an up-to-date website which details our most recent work as well as our research goals, and includes routes through which members of the public can contact us. The research in this proposal addresses fundamental neuroscience issues with little immediate commercial exploitation potential. Its benefits will be in enhancing our scant knowledge of how the healthy brain operates to process sounds. This knowledge will be made available, through publication and dissemination at international meetings, to engineers and others who will be able to apply what we learn about neural coding to improving the signal processing capabilities of cochlear implants, hearing aids and telecommunication devices. Whilst cochlear implants have been tremendously successful, they are only an option for those individuals with an intact auditory nerve. A knowledge of the neural coding mechanisms within auditory cortex will guide the stimulation strategies and design of neural prostheses which target auditory centres in the brainstem or midbrain.
Committee Closed Committee - Agri-food (AF)
Research Topics Neuroscience and Behaviour
Research Priority X - Research Priority information not available
Research Initiative X - not in an Initiative
Funding Scheme X - not Funded via a specific Funding Scheme