
Why vision is not both hierarchical and feedforward

Overview of attention for article published in Frontiers in Computational Neuroscience, October 2014

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

3 X users

Citations

28 Dimensions

Readers on

143 Mendeley
Title
Why vision is not both hierarchical and feedforward
Published in
Frontiers in Computational Neuroscience, October 2014
DOI 10.3389/fncom.2014.00135
Authors

Michael H. Herzog, Aaron M. Clarke

Abstract

In classical models of object recognition, basic features (e.g., edges and lines) are first analyzed by independent filters that mimic the receptive field profiles of V1 neurons. In a feedforward fashion, the outputs of these filters are fed to filters at the next processing stage, which pool information across several filters from the previous level, and so forth at subsequent stages. Low-level processing determines high-level processing, and information lost at lower stages is irretrievably lost. Models of this type have proven very successful in many areas of vision, but they have failed to explain object recognition in general. Here, we present experiments showing, first, that, similar to demonstrations by the Gestaltists, figural aspects determine low-level processing (as much as the other way around). Second, performance on a single element depends on all the other elements in the visual scene: small changes in the overall configuration can lead to large changes in performance. Third, grouping of elements is key. Only if we know how elements group across the entire visual field can we determine performance on individual elements, challenging the classical stereotypical filtering approach that lies at the heart of most vision models.
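The filter-then-pool cascade the abstract argues against can be made concrete with a minimal sketch. The 2x2 "edge" kernels, the pooling size, and all names below are illustrative assumptions, not the authors' model; the point is only that each stage is computed solely from the previous one, so anything the pooling step discards is gone for good.

```python
import numpy as np

def convolve2d_valid(image, kernel):
    """Naive 2-D valid cross-correlation, for illustration only."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling: detail discarded here cannot be
    recovered by any later stage of a purely feedforward cascade."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Stage 1: independent oriented "edge" filters (V1-like receptive fields)
horizontal = np.array([[1.0, 1.0], [-1.0, -1.0]])
vertical = horizontal.T

image = np.random.rand(16, 16)
stage1 = [np.abs(convolve2d_valid(image, k)) for k in (horizontal, vertical)]

# Stage 2: pool each filter map, then combine across filters.
# Stage 2 is fully determined by stage 1; no figural or grouping
# information can flow back down.
stage2 = sum(max_pool(r) for r in stage1)
print(stage2.shape)
```

In such a model, the response to any single element is fixed by local filter outputs alone, which is exactly the assumption the experiments in the paper challenge.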

X Demographics

The data shown below were collected from the profiles of 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 143 Mendeley readers of this research output.

Geographical breakdown

Country          Count    %
United States        4    3%
Germany              3    2%
Switzerland          2    1%
Spain                2    1%
United Kingdom       1   <1%
Chile                1   <1%
Australia            1   <1%
Canada               1   <1%
Unknown            128   90%

Demographic breakdown

Readers by professional status        Count    %
Student > Ph.D. Student                  39   27%
Researcher                               26   18%
Student > Master                         20   14%
Professor > Associate Professor           7    5%
Professor                                 6    4%
Other                                    22   15%
Unknown                                  23   16%

Readers by discipline                 Count    %
Neuroscience                             31   22%
Psychology                               29   20%
Agricultural and Biological Sciences     16   11%
Computer Science                         16   11%
Engineering                               6    4%
Other                                    16   11%
Unknown                                  29   20%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 22 April 2021.
All research outputs: #14,205,797 of 22,772,779 outputs
Outputs from Frontiers in Computational Neuroscience: #691 of 1,341 outputs
Outputs of similar age: #134,990 of 260,348 outputs
Outputs of similar age from Frontiers in Computational Neuroscience: #18 of 30 outputs
Altmetric has tracked 22,772,779 research outputs across all sources so far. This one is in the 35th percentile – i.e., 35% of other outputs scored the same or lower than it.
So far Altmetric has tracked 1,341 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.2. This one is in the 44th percentile – i.e., 44% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 260,348 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 45th percentile – i.e., 45% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 30 others from the same source and published within six weeks on either side of this one. This one is in the 36th percentile – i.e., 36% of its contemporaries scored the same or lower than it.
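The percentile figures above all use the same "scored the same or lower" convention, which is simple to compute. A minimal sketch, using made-up scores rather than Altmetric's actual data:

```python
def percentile_rank(score, all_scores):
    """Percent of outputs scoring the same as or lower than `score`,
    matching the 'same or lower' convention used in the text above."""
    same_or_lower = sum(1 for s in all_scores if s <= score)
    return 100.0 * same_or_lower / len(all_scores)

# Hypothetical Attention Scores for ten tracked outputs.
scores = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(percentile_rank(2, scores))  # → 40.0 (4 of 10 scored 2 or lower)
```

Comparing only against outputs published within six weeks of this one, as the age-adjusted figures do, just restricts `all_scores` to that contemporaneous subset before ranking.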