
Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

Overview of attention for article published in Frontiers in Psychology, January 2015
Mentioned by
1 X user

Citations
24 Dimensions

Readers on
22 Mendeley
Published in
Frontiers in Psychology, January 2015
DOI 10.3389/fpsyg.2014.01457
Authors

Stephen Grossberg, Karthik Srinivasan, Arash Yazdanbakhsh

Abstract

How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.

X Demographics

The data shown below were collected from the profile of 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 22 Mendeley readers of this research output.

Geographical breakdown

Country         Count   As %
United States       1     5%
France              1     5%
Unknown            20    91%

Demographic breakdown

Readers by professional status   Count   As %
Student > Master                     6    27%
Researcher                           6    27%
Student > Doctoral Student           3    14%
Student > Ph. D. Student             1     5%
Lecturer > Senior Lecturer           1     5%
Other                                2     9%
Unknown                              3    14%
Readers by discipline                   Count   As %
Psychology                                  9    41%
Neuroscience                                3    14%
Computer Science                            2     9%
Agricultural and Biological Sciences        2     9%
Engineering                                 2     9%
Other                                       1     5%
Unknown                                     3    14%
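
For reference, the "As %" column in these tables appears to be each reader count divided by the 22 Mendeley readers in total, rounded to the nearest whole percent. A minimal sketch of that calculation (the dictionary mirrors the discipline table above; the variable names are illustrative, not Altmetric's code):

```python
# Illustrative only: recompute the "As %" column from the raw reader counts.
readers_by_discipline = {
    "Psychology": 9,
    "Neuroscience": 3,
    "Computer Science": 2,
    "Agricultural and Biological Sciences": 2,
    "Engineering": 2,
    "Other": 1,
    "Unknown": 3,
}

total_readers = sum(readers_by_discipline.values())  # 22 Mendeley readers

for discipline, count in readers_by_discipline.items():
    share = round(100 * count / total_readers)  # e.g. 9/22 -> 41%
    print(f"{discipline}: {count} ({share}%)")
```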
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 14 January 2015.
All research outputs: #20,249,662 of 22,778,347 outputs
Outputs from Frontiers in Psychology: #24,000 of 29,687 outputs
Outputs of similar age: #296,773 of 353,651 outputs
Outputs of similar age from Frontiers in Psychology: #369 of 400 outputs
Altmetric has tracked 22,778,347 research outputs across all sources so far. This one is in the 1st percentile – i.e., 1% of other outputs scored the same or lower than it.
So far Altmetric has tracked 29,687 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.5. This one is in the 1st percentile – i.e., 1% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 353,651 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 400 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
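
The percentile figures above follow the "same or lower" convention stated in the text: the output's Attention Score is compared against the scores of a comparison set (all outputs, outputs from the same journal, or outputs of similar age). A minimal sketch of that definition, using made-up peer scores rather than Altmetric's real data or implementation:

```python
# Illustrative sketch: percentile as the share of peer outputs whose
# Attention Score is the same as or lower than this output's score.
# The peer scores below are hypothetical; this output's score of 1 is
# taken from the page above.
def percentile_same_or_lower(score, peer_scores):
    """Percentage of peers scoring the same as or lower than `score`."""
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100 * at_or_below / len(peer_scores)

peers = [0, 0, 1, 1, 2, 3, 5, 8, 12, 40]  # hypothetical comparison set
print(percentile_same_or_lower(1, peers))  # -> 40.0 for this made-up sample
```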