A service of Penn’s Neuroscience Graduate Group

Neural mechanisms of visual search: How does the brain help us find our keys?

or, technically,
Signals in inferotemporal and perirhinal cortex suggest an untangling of visual target information [See the original abstract on PubMed]

Authors: Marino Pagan, Luke S. Urban, Margot P. Wohl, Nicole C. Rust

Brief prepared by: Diana Lynn Xie
Brief approved by: Elelbin Amanda Ortiz and Noam Roth
Section Chief: Yunshu Fan
Date posted: April 20, 2017
Brief in Brief (TL;DR)

What do we know: Visual search is the act of looking for a specific object, such as our keys. To do that, the brain must combine what we are currently seeing (the shapes in front of our eyes) with what we are looking for (the shape of the keys) and determine whether they match.

What don’t we know: We don't know how what we're looking at (visual signals) and what we're looking for (memory signals) are combined to tell us that we have found something. Where do these signals come from? How do they combine? And when does this combination result in our 'recognition' that we have found what we're looking for?

What this study shows: The scientists figured out that a brain region called inferotemporal cortex, or IT, contains combined information about visual and memory signals. They also found that another brain region, called perirhinal cortex or PRH, takes these signals from IT and converts them into a more easily readable format to tell whether there's a match between what we see and what we're trying to find.

What we can do in the future because of this study: We can examine how the connections between IT and PRH combine and reformat information and how we can manipulate those connections to make information processing better or worse.

Why you should care: To this day, the brain still outperforms computers at many tasks we perform naturally, such as searching for an object or identifying what we see. The brain performs these tasks so efficiently and accurately that we hardly even notice. Understanding how the brain processes visual information, forms a perception, and achieves a goal can inspire engineers to design better algorithms for computers that perform visual tasks.

Brief for Non-Neuroscientists

When we look for a specific object (a target) in our environment, we must hold it in working memory during the search and recognize it when we see it. In the brain, this happens when the 'working memory signal' matches the 'visual signal'. Because different brain areas have specialized functions, these memory and visual signals initially reside in different parts of the brain. Where and how do they converge to tell us that we have found our target? The authors found evidence that working memory signals and visual signals come together in the inferotemporal cortex (IT), a late-stage visual region. However, this combination of signals is 'tangled' (only nonlinearly decodable) and therefore difficult for the brain to 'read'. So, in a further step, this combined memory-visual signal is relayed to a brain region called the perirhinal cortex (PRH), where it is 'untangled' (made linearly decodable) so that a 'match' signal can be detected.

Brief for Neuroscientists

During visual search, finding and recognizing a specific target (i.e. knowing that the currently viewed image matches the sought target) requires combining working memory signals about what the target is with visual signals about what is currently being viewed. The authors recorded neural activity in both the perirhinal cortex (PRH) and the inferotemporal cortex (IT). Analyzing these data with computational and machine learning methods, they found that working memory signals and visual signals are present in both regions, along with signals reflecting their combination -- i.e. match signals. In IT, these memory-visual match signals were decodable from population activity only with nonlinear methods, whereas in PRH, match signals were present in individual cells and were therefore linearly decodable. These results support a model in which the combined signals first arrive in the ventral visual pathway and IT and are then relayed to PRH, where they are transformed into a format in which matches and distractors can be distinguished more readily (a linearly-decodable format). By 'untangling' this target match information, PRH serves as an information processor for its inputs from IT, which may lack the topography to independently untangle the visual information it receives from the visual system.
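The tangled-versus-untangled distinction can be illustrated with a toy sketch (a hypothetical illustration, not the authors' actual analysis). A target match depends conjunctively on what is seen and what is sought: when 'seen' and 'sought' identities are coded separately, "match" is an XOR-like function of the two, which no linear readout can solve. Adding a conjunctive feature (as a downstream area like PRH might compute) makes the same labels linearly decodable:

```python
# Toy sketch of "tangled" vs "untangled" match information.
# All names and the setup are hypothetical: 'seen' and 'sought' are
# binary object identities; a trial is a "match" when they agree.

def perceptron(data, epochs=200):
    """Train a simple linear perceptron; return accuracy on the training set."""
    n = len(data[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # 0 when correct, +/-1 otherwise
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err
    correct = sum(
        (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
        for x, y in data
    )
    return correct / len(data)

# "Tangled" representation: only the raw (seen, sought) pair.
# "Match" (seen == sought) is an XOR-like label, not linearly separable.
tangled = [((s, t), int(s == t)) for s in (0, 1) for t in (0, 1)]

# "Untangled" representation: add a conjunctive feature (seen AND sought).
# Now a linear readout of "match" exists.
untangled = [((s, t, s * t), y) for (s, t), y in tangled]

print(perceptron(tangled))    # at most 0.75 -- no linear boundary exists
print(perceptron(untangled))  # 1.0 -- match is now linearly decodable
```

The point of the sketch is only the format change: the label is identical in both cases, but a linear decoder succeeds only after the conjunctive transformation, mirroring the IT-to-PRH reformatting the authors describe.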

Site design by Peter Dong and M. Morgan Taylor.