Award details

Anisotropic retinal circuits for processing of colour and space in nature

Reference: BB/R014817/1
Principal Investigator / Supervisor: Professor Tom Baden
Co-Investigators / Co-Supervisors:
Institution: University of Sussex
Department: Sch of Life Sciences
Funding type: Research
Value (£): 744,621
Status: Completed
Type: Research Grant
Start date: 01/08/2018
End date: 31/07/2021
Duration: 36 months

Abstract

All sensory systems are specialised to best serve an animal's sensory-ecological niche. In vision, many retinas feature pronounced anatomical asymmetries aligned with different positions in visual space. However, how these anisotropies translate into functional differences in retinal processing, and in the retina's output to the brain from different parts of the eye, remains poorly understood. Using 2-photon in-vivo imaging of retinal neurons, we recently found that larval zebrafish invest more neuronal hardware in computing chromatic image content in their lower visual field, and field work confirmed that this is where most chromatic information exists in their natural habitat. However, not only colour but also other image properties vary with elevation. In the shallow freshwaters inhabited by zebrafish, the ground is always near and is the main source of spatial detail in this underwater visual world. Higher-frequency detail also arises above the horizon, mainly driven by floating debris on the water surface. Accordingly, we hypothesise that not only chromatic circuits but also circuits dealing with spatial detail are arranged anisotropically in the zebrafish eye. However, since the tiny zebrafish eye offers little room for further neuronal expansion, any additional investment in one set of circuits invariably comes at the cost of others. Accordingly, we will establish how zebrafish trade off the need to process colour and spatial detail in different parts of the visual field. We will also test if and how these circuits change as the animal grows and attains new visual capabilities and requirements. Besides its direct impact on sensory neuroscience, a better understanding of how the retinas of animals anisotropically arrange computational circuits dealing with specific image content can potentially benefit a wide range of applications, ranging from retinal implants to computer vision and the design of "intelligent" camera systems.

Summary

In vision, a constant stream of light patterns that vary in space, time and colour drives electrical activity in millions of photoreceptor neurons in our retinas. Depending on the colour and shape of this light, different sets of photoreceptors are activated to form a camera-like image. However, to send this information to the brain, it must be transmitted through the optic nerve. Much like a regular video cable, the amount of information this nerve can carry is limited. In humans, the optic nerve has roughly the information capacity required to drive a pixel-by-pixel UHD TV picture at video rates, yet across its entire visual field the human retina samples the world about 100x more finely still, meaning that only around 1% of all pixels could be sent to the brain. This is why we need a retina.

Instead of wiring each photoreceptor directly to the brain, the retina compares the signals across groups of neighbouring photoreceptors in a series of pre-processing steps to compress the transmitted image. For example, if thousands of neighbouring photoreceptors signal an image region of clear blue sky, there is no need to send thousands of copies of this information to the brain - one will do. How the retina achieves this, and many other computations, is an area of active research that can potentially benefit a wide range of applications, ranging from medicine to computer vision and the design of "intelligent" camera systems.

Like in humans, the eyes of all vertebrates, such as mice, birds or fish, have an optic nerve with a retina as its input. However, depending on the animal, and on the position in visual space, the type of information that needs to be sent to the brain varies dramatically. For example, a mouse needs to excel at spotting dark spots in the sky, such as the silhouette of a predatory bird. As predatory birds never attack from below, this specialised computation is only required in half of the eye.
In contrast, for a deep-sea fish it may be essential to detect faint luminescence signals emanating from other animals against the backdrop of the pitch-black ocean in any direction. The need for different types of retinal computation has driven specialisations in the way the retinas of different animals are organised. Together, these present a vast resource for driving our understanding of how our senses work, how brains evolve, and how important information in images can be efficiently detected. We will use the highly visual zebrafish to study how retinal circuits positioned in different parts of this animal's eye differ from one another to best extract key information in the zebrafish's visual world. Zebrafish inhabit the shallow freshwaters of the Indian subcontinent. In this underwater world, the visual field in front of and below the animal tends to contain a lot of colour, and we recently found that zebrafish invest more neurons in circuits computing colour to survey their lower visual field. In contrast, the upper visual field is dominated by light-dark contrasts, and so zebrafish invest more neurons in detecting bright and dark edges. However, not only colour but also the spatial detail available for detecting shapes varies between the upper and lower visual field. In shallow water, you are never far from the ground, and this is where most spatial detail is to be explored using vision. Accordingly, we will now study whether, as for colour, retinal circuits computing spatial detail are predominantly set up to survey the ground - and if so, how they overlap with circuits computing colour. After all, there is only so much space for neurons in the zebrafish's tiny eye, and some functions may have to give way to make space for others.

Studying which colour and space computations are implemented in different positions of the zebrafish's eye will shed new light on how sensory systems can be optimised to preferentially transmit the information that matters to the user.
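The redundancy-reduction idea sketched above (summarising many near-identical photoreceptor signals with a single value before transmission) can be illustrated with a toy spatial-pooling sketch. The grid size, block size and activation values below are purely illustrative, not measurements from the project:

```python
# Toy illustration of retina-like compression by spatial pooling:
# a patch of near-identical "clear blue sky" photoreceptor signals
# is summarised by one value before transmission down the optic nerve.

def block_average(image, block):
    """Downsample a 2D grid by averaging non-overlapping block x block tiles."""
    h, w = len(image), len(image[0])
    pooled = []
    for r in range(0, h, block):
        row = []
        for c in range(0, w, block):
            tile = [image[i][j]
                    for i in range(r, min(r + block, h))
                    for j in range(c, min(c + block, w))]
            row.append(sum(tile) / len(tile))
        pooled.append(row)
    return pooled

# An 8x8 patch of identical "sky" activations: 64 values in, 1 value out.
sky = [[0.5] * 8 for _ in range(8)]
summary = block_average(sky, 8)
print(summary)  # [[0.5]] - a 64-fold reduction for this uniform patch
```

Real retinal pre-processing is far richer than uniform averaging (centre-surround antagonism, many parallel channels), but the bandwidth saving follows the same logic.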

Impact Summary

Potential beneficiaries of this research include:

Academia: Scientists working in systems neuroscience, and in particular those interested in the senses, will benefit from insights into how a complete sensory system can tune its circuits to best serve its particular sensory niche. A direct link between natural input statistics and their neuronal representation at different processing stages presents a key angle for our understanding of sensory systems in general, and is likely to stimulate active debate and further research in the field. Moreover, colleagues specifically interested in colour vision and/or sensory ecology will benefit from insights into how the zebrafish's tetrachromatic retina is functionally organised to process chromatic information - this type of data is currently available for only a few model species, and never in the systematic manner planned for the proposed project. Finally, colleagues in computer vision and theoretical neuroscience will benefit from access to the novel datasets to be recorded, which will include the responses of tens of thousands of neurons to a defined set of stimuli as well as image and video data of these neurons' natural input statistics. Currently, to my knowledge, there exists no similarly comprehensive and connected dataset of sensory processing and its natural input.

Industry and Medicine: Rapid advances in our understanding of retinal computations over the past decade have led to unprecedented performance of computational models capable of predicting real neuronal firing patterns in response to visual stimuli. This capability is fundamental to our ability to programme retinal implants aimed at restoring vision in the blind. These chips usually take light as their input, perform simple on-chip calculations aimed at mimicking retinal function, and then electrically stimulate the nerve fibres that survive in the degenerated eye and project to the brain. The more accurate the model of retinal function, the better these chips can mimic the real computations performed by the healthy retina, and ultimately the more natural the version of human vision they restore. However, current models of retinal function, though excellent at mimicking neuronal responses to simple stimuli such as spots of light, struggle to accurately reflect retinal function as stimulus complexity increases, e.g. to include natural images. This is what the proposed research aims to address. By measuring the chromatic and spatial response dimensions that drive retinal circuits positioned in different parts of the eye, we will provide a rich dataset for honing generalised models of retinal processing that acknowledge neuronal receptive-field substructure, ultimately delivering more accurate predictions of real retinal function when viewing natural scenes. In tandem, an increased understanding of how retinal circuits reflect large-scale statistical asymmetries in their natural input promises to inspire engineers and computer scientists to implement similar functional asymmetries in existing and novel imaging technology.

General public: Improved understanding of the sense of vision, and of how neuronal circuits adapt and evolve under changing environmental pressures.
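As a hedged illustration of the kind of predictive model referred to above, the classic linear-nonlinear (LN) baseline filters the stimulus through a receptive field and rectifies the result. The weights, gain and stimuli below are toy values chosen for illustration, not fitted to any data from this project:

```python
# Minimal linear-nonlinear (LN) model sketch: a linear receptive-field
# filter followed by half-wave rectification yields a predicted firing rate.
# All numbers here are illustrative toy values.

def ln_model_rate(stimulus, receptive_field, gain=10.0):
    """Predicted firing rate: linear filter, then a rectifying nonlinearity."""
    drive = sum(s * w for s, w in zip(stimulus, receptive_field))
    return gain * max(0.0, drive)  # negative drive is rectified to zero

# Toy centre-surround receptive field over a 5-pixel patch.
rf = [-0.5, -0.5, 1.0, -0.5, -0.5]

print(ln_model_rate([0, 0, 1, 0, 0], rf))  # 10.0 (light on the centre)
print(ln_model_rate([1, 1, 1, 1, 1], rf))  # 0.0 (uniform field cancels out)
```

Such simple models capture responses to spots of light well but, as noted above, struggle with natural scenes; richer, receptive-field-substructure-aware models trained on datasets like the one proposed target exactly that gap.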
Committee: Research Committee A (Animal disease, health and welfare)
Research Topics: Neuroscience and Behaviour
Research Priority: X - Research Priority information not available
Research Initiative: X - not in an Initiative
Funding Scheme: X - not Funded via a specific Funding Scheme