Award details

Evaluating probabilistic inferential models of learnt sound representations in auditory cortex

Reference BB/X013391/1
Principal Investigator / Supervisor Professor Maneesh Sahani
Co-Investigators / Co-Supervisors Professor Jennifer Linden
Institution University College London
Department Gatsby Computational Neuroscience Unit
Funding type Research
Value (£) 202,118
Status Current
Type Research Grant
Start date 15/02/2023
End date 14/02/2025
Duration 24 months

Abstract

What are the computational principles that underlie the formation of perceptual representations? Animals and some artificial agents build internal representations of their sensory environments, which they use to guide and inform cognition and action. From both evolutionary and engineering standpoints, good representations are those that facilitate flexible and adaptive behaviour, but direct feedback about actions in the form of reinforcement or supervision is rare in nature. Thus, for animals at least, good internal representations may predominantly be shaped by unsupervised learning based on statistical regularities in sensory input---and indeed many experiments reveal changes in both behaviour and neural representations with passive exposure to altered sensory statistics, especially during early development. It is very likely that data-efficient learning in artificial agents will also ultimately depend on developing effective unsupervised algorithms. We will apply three state-of-the-art unsupervised inferential approaches---structured variational autoencoders, contrastive predictive coding, and recognition-parametrised models---to learn models of acoustic environments. The outputs of these models on probe sounds will be evaluated against auditory receptive field models and novel Neuropixels high-density multielectrode recordings of responses to naturalistic sounds from auditory cortical areas. We will explore changes in representation that are induced by exposure to modified sound ensembles during development, using the inferential models to design synthetic sounds that should drive maximal representational change, and then using the resulting changes in cortical representation to assess the computational similarities between the biological and artificial networks. Understanding the statistical principles that organise biological perception is likely to lead to better representational learning in AI, without the need for labelled or augmented data sets.
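The abstract names contrastive predictive coding as one of the unsupervised inferential approaches to be applied to acoustic data. As a purely illustrative sketch (not the project's code), the snippet below shows a minimal contrastive predictive coding objective (InfoNCE loss) over spectrogram frames in PyTorch; all names (FrameEncoder, cpc_infonce_loss, the 64-bin mel input, the GRU context network) are assumptions for the example, not details taken from the award.

```python
# Minimal, illustrative CPC/InfoNCE sketch over audio spectrogram frames.
# Hypothetical names and dimensions; not the funded project's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameEncoder(nn.Module):
    """Maps each spectrogram frame to a latent vector z_t."""
    def __init__(self, n_mels=64, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_mels, 256), nn.ReLU(),
                                 nn.Linear(256, z_dim))
    def forward(self, x):              # x: (batch, time, n_mels)
        return self.net(x)             # (batch, time, z_dim)

def cpc_infonce_loss(z, context_rnn, predictor, k=4):
    """Predict z_{t+k} from a causal context c_t; latents from other
    batch items at the same offset act as negatives (InfoNCE)."""
    c, _ = context_rnn(z)              # causal context: (batch, time, c_dim)
    c_t = c[:, :-k, :]                 # contexts that have a valid k-step target
    z_pos = z[:, k:, :]                # true future latents
    pred = predictor(c_t)              # predicted future latents
    b, t, _ = pred.shape
    # Score each prediction against the same-time latents of every batch item.
    logits = torch.einsum('btd,ntd->btn', pred, z_pos)      # (batch, time, batch)
    targets = torch.arange(b, device=z.device).view(b, 1).expand(b, t)
    return F.cross_entropy(logits.reshape(b * t, b), targets.reshape(b * t))

# Example usage on random stand-in "spectrograms" (8 clips, 100 frames, 64 mel bins).
encoder = FrameEncoder()
context_rnn = nn.GRU(input_size=128, hidden_size=128, batch_first=True)
predictor = nn.Linear(128, 128)
x = torch.randn(8, 100, 64)
loss = cpc_infonce_loss(encoder(x), context_rnn, predictor, k=4)
loss.backward()
```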

Summary

Humans, animals, and some artificial intelligence (AI) systems can all build internal representations of their sensory environments that guide and inform their actions. From both evolutionary and engineering standpoints, good representations are those that facilitate flexible and adaptive behavioural outcomes. Training of AI systems often involves providing feedback about outcomes (reinforcement or supervised learning). However, direct feedback about behavioural outcomes is rare in nature. Thus, for animals at least, good internal representations may predominantly be shaped by unsupervised learning from statistical regularities in sensory input. Indeed, many experiments have shown that neural representations and behaviour in animals can be changed by passive exposure to altered sensory environments, especially during early or adolescent development. It is very likely that data-efficient learning in AI systems will also ultimately depend on effective unsupervised learning algorithms. Our goal in this project is to understand the computational principles underlying unsupervised learning of sensory representations in biological systems, and how those computational principles relate to recent advances in unsupervised learning algorithms for AI systems. We will apply state-of-the-art unsupervised inferential approaches to learn probabilistic models of acoustic environments, and evaluate the fidelity with which those models can reproduce neural recordings in the auditory cortex from animals raised in routine and altered acoustic environments. Understanding the statistical principles that organise biological perception is likely to lead to better representational learning in AI systems, without the need for reinforcement or supervision. Conversely, algorithms for efficient, flexible representational learning explored in AI systems will help to elucidate the computational principles governing learning in biological systems.
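The summary describes evaluating how faithfully the learned models can account for neural recordings from auditory cortex. One common way to make such a comparison, shown here only as an assumed illustration (the award does not specify the analysis), is to fit a cross-validated linear map from model latents to recorded responses and score held-out variance explained; `model_latents` and `spike_counts` below are hypothetical stand-in arrays.

```python
# Illustrative model-to-neural comparison via cross-validated ridge regression.
# Hypothetical data and names; not the project's actual evaluation pipeline.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_probes, z_dim, n_units = 500, 128, 60
model_latents = rng.standard_normal((n_probes, z_dim))     # model output per probe sound
spike_counts = rng.poisson(2.0, size=(n_probes, n_units))  # stand-in neural responses

ridge = RidgeCV(alphas=np.logspace(-2, 4, 13))
scores = [
    cross_val_score(ridge, model_latents, spike_counts[:, u], cv=5, scoring='r2').mean()
    for u in range(n_units)
]
print(f"median held-out R^2 across units: {np.median(scores):.3f}")
```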
Committee Not funded via Committee
Research Topics X – not assigned to a current Research Topic
Research Priority X – Research Priority information not available
Research Initiative Supporting research in cognitive computational neuroscience [2022]
Funding Scheme X – not funded via a specific Funding Scheme