Award details

Selective Attention: How does Neural Response Modulation in Auditory Cortex Enable Auditory Scene Analysis?

Reference BB/N001818/1
Principal Investigator / Supervisor Dr Jennifer Bizley
Co-Investigators / Co-Supervisors
Institution University College London
Department Ear Institute
Funding type Research
Value (£) 527,364
Status Completed
Type Research Grant
Start date 01/04/2016
End date 30/09/2019
Duration 42 months

Abstract

Our goal is to understand how active listening shapes neural responses in auditory cortex (AC), and to determine whether and when feedback connections from non-primary to primary areas facilitate selective attention. Real-world hearing is made challenging by the presence of multiple competing sound sources, so listeners must direct their attention to a source of interest while ignoring others. Recent studies, utilising imaging techniques or ECoG recordings in humans, have demonstrated that neural activity in non-primary AC predominantly represents attended sound sources, yet little is known about the physiological mechanisms that facilitate this. In this proposal we seek to determine how single-cell responses in AC are shaped by current task demands. We will record from the AC of animals actively discriminating speech sounds and trained to report the occurrence of a target word. By employing different variants of the same paradigm we will determine how attentional mechanisms influence stimulus representation. These variants include a single stimulus stream in silence, conditions in which there is a competing stream of masking noise, and a selective-attention task in which animals discriminate one of two competing speech streams. We will test the hypothesis that spatial and feature-based attentional mechanisms have different auditory cortical loci. We will also assess whether changes to single-neuron receptive fields are best summarised as gain changes. Attention-related changes in sensory representations are thought to result from feedback connections from secondary to primary auditory cortical areas. We will address this hypothesis by determining the behavioural consequences of inactivating each of the secondary and primary auditory cortical areas, and by selectively targeting feedback projections (while leaving feedforward processing intact) during behaviour using spatially and temporally precise optogenetic neural silencing.
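For readers unfamiliar with the term, the gain-change hypothesis mentioned above is commonly formalised by contrasting a multiplicative rescaling of a neuron's tuning curve with an additive offset; the notation below is an illustrative sketch rather than a model specified in the proposal:

% Illustrative notation only (not from the proposal): f(s) is the neuron's
% baseline tuning to stimulus s, g an attention-dependent gain factor,
% and b an additive offset.
\[
  r_{\text{attend}}(s) \;=\; g\, f(s) \quad \text{(multiplicative gain change)}
  \qquad \text{vs.} \qquad
  r_{\text{attend}}(s) \;=\; f(s) + b \quad \text{(additive shift)}
\]

Under the gain account, attention rescales responses without changing the shape of the receptive field; this is the distinction the proposed receptive-field analyses are designed to test.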

Summary

Listening to a conversation in a crowded room is one of the greatest challenges that the auditory system faces, and the most common cause of complaint for many of the 10 million people in the UK who suffer hearing loss. For example, to fully appreciate the piece of gossip that your friend is telling you in a restaurant, you must be able to separate his voice from the voices of other people, the clatter of glasses, and the music playing in the background. Your brain is able to 'select' the voice of your friend over all these other sounds - perhaps on the basis of where he is standing, or the pitch of his voice. Normal-hearing listeners achieve this feat effortlessly, although engineers have yet to create a machine that can match such signal separation in noisy backgrounds.

In this proposal we try to understand how the neural machinery of the brain is able to extract sounds of interest while ignoring others. Our work focuses on a brain area called the auditory cortex, an area that is thought to be necessary for listening in complex situations like the one described above. Our goal is to understand how the responses of neurons in auditory cortex represent multiple competing sounds, and how those responses can be shaped in order to best represent sounds according to the listener's current demands.

To this end, we will train animals in a series of listening tasks that enable us to impose different demands on auditory cortex. Animals will listen for a target word amongst a series of non-target words. In some cases they will do this in silence, in others in the presence of background noise. In further variations they will listen to two streams of speech, each from a different talker and a different location, and be asked to selectively attend to one talker over the other (equivalent to trying to listen to your friend while ignoring the loud man behind him). We will record from neurons in auditory cortex while animals perform these tasks in order to understand how the different task requirements change the way in which sounds evoke neural activity. Auditory cortex is made up of multiple, hierarchically organised areas that are thought to perform different functions, and we will determine whether areas early in this hierarchy are affected by attention differently from those in higher areas.

In the second part of this project we will use a technique called optogenetics to selectively silence neural activity in particular regions of auditory cortex. We will test the hypothesis that different areas of auditory cortex facilitate different sorts of attention - for example, separating sounds according to their location in space, as opposed to the pitch or timbre of a particular talker's voice. Finally, we will determine whether feedback from higher auditory areas to primary auditory areas is essential for active listening. This work would represent a fundamental advance in our knowledge of these 'feedback' projections and the role they play in active listening.

Our work has the potential to enable the development of more sophisticated, biologically inspired signal processing devices for hearing aids and cochlear implants - both of which perform poorly in many real-world listening conditions. Listeners whose hearing is assessed as normal on an audiogram can still struggle to listen in noisy situations - a problem that is particularly acute in aged listeners. Problems in processing complex sounds underlie Central Auditory Processing Disorder, and disorders of attention are thought to underpin a variety of developmental and neuropsychiatric conditions, including autism, attention deficit hyperactivity disorder, and dementia. Understanding the neural mechanisms in the healthy brain responsible for engaging attention to select sources in a sound scene will lay the foundation for understanding, and potentially treating, conditions in which these mechanisms are impaired.

Impact Summary

Hearing loss affects more than 300 million people worldwide and as such is the most common sensory disorder. Even people with mild hearing loss consider understanding speech in the presence of competing sounds to be a challenge. Hearing loss has profound social and economic implications, which will only be compounded by an ageing population and an increasing prevalence of hearing loss in younger listeners. Currently, hearing aids and cochlear implants can compensate (albeit poorly) for hearing loss at the level of the ear. While a degraded signal reaching the brain (as in the case of hearing loss) will have a clear negative impact upon listening in complex situations, problems in successfully engaging the neural mechanisms that segregate or select a source from a mixture will also cause difficulties in listening in noise. Even normal-hearing listeners differ in their ability to process complex sounds, and a significant proportion of this variability is explained by how effectively listeners can employ auditory attention. However, very little is known about how central mechanisms contribute to listening in healthy adults, let alone in those with impaired hearing or in ageing listeners, for whom declining central processing and/or cognitive function may additionally contribute. Impaired central mechanisms are known to underlie perceptual impairments in children with Central Auditory Processing Disorder (CAPD). Of the children presenting at audiological clinics, 5% receive a diagnosis of CAPD, which is characterised by a normal audiogram but impaired performance in complex listening tasks such as sound localisation or speech-in-noise perception. In some cases CAPD patients exhibit marked cortical abnormalities, highlighting the importance of a better understanding of the mechanisms by which auditory cortex facilitates complex listening.

A better knowledge of the principles used by the brain to separate and select sources will inform engineers working on signal processing for auditory prostheses, including hearing aids and cochlear and midbrain implants. Technology currently exists to equip implants with multi-microphone arrays that should in theory facilitate better source separation and selection, but their utilisation is limited by the requirement to 'steer' such devices. Our work could lead to the development of brain-computer interfaces capable of detecting cortical signatures associated with an attended auditory object. Such signals could potentially be read out non-invasively and used in conjunction with multi-microphone arrays for source steering. Furthermore, understanding how the brain processes sounds is pivotal for designing and optimising devices that are biologically compatible at a computational level. Technology beneficiaries will include communications companies and electrical engineers, for whom a better understanding of how to extract one signal from many may lead to developments in machine listening. In particular, biologically inspired signal decomposition techniques could find broader applications in fields of science and engineering where multiple signal sources must be identified from a single input signal.

We are keen to promote our science to the public: the PI actively participates in public engagement work to communicate our science to a variety of audiences and will encourage the PDR to do so. We will be able to deliver impact within the duration of the grant by disseminating information to academic beneficiaries and to lay audiences. Our impact upon telecommunications companies, hearing prosthesis manufacturers and clinical beneficiaries will be on a longer timescale, but the Ear Institute offers the appropriate links with industry to enable this when the time comes. UCL has a proactive and effective media office, with which we will work to ensure the impact of our work extends beyond the academic community.
Committee Research Committee A (Animal disease, health and welfare)
Research Topics Neuroscience and Behaviour
Research Priority X – Research Priority information not available
Research Initiative X – not in an Initiative
Funding Scheme X – not funded via a specific Funding Scheme