
A Hierarchical Predictive Coding Model of Object Recognition in Natural Images

Overview of attention for article published in Cognitive Computation, December 2016

About this Attention Score

  • Good Attention Score compared to outputs of the same age and source (78th percentile)

Mentioned by

  • 2 X users

Citations

  • 53 (Dimensions)

Readers on

  • 130 (Mendeley)

Title
A Hierarchical Predictive Coding Model of Object Recognition in Natural Images

Published in
Cognitive Computation, December 2016

DOI
10.1007/s12559-016-9445-1

Authors
M. W. Spratling

Abstract

Predictive coding has been proposed as a model of the hierarchical perceptual inference process performed in the cortex. However, results demonstrating that predictive coding is capable of performing the complex inference required to recognise objects in natural images have not previously been presented. This article proposes a hierarchical neural network based on predictive coding for performing visual object recognition. This network is applied to the tasks of categorising hand-written digits, identifying faces, and locating cars in images of street scenes. It is shown that image recognition can be performed with tolerance to position, illumination, size, partial occlusion, and within-category variation. The current results, therefore, provide the first practical demonstration that predictive coding (at least the particular implementation of predictive coding used here; the PC/BC-DIM algorithm) is capable of performing accurate visual object recognition.
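As context for the abstract, the inference step of PC/BC-DIM has been described in Spratling's earlier papers as a pair of simple multiplicative update rules iterated between prediction neurons and divisive error neurons. The sketch below is a minimal single-stage illustration of those rules in NumPy; the weight normalisations, epsilon values, and iteration count are assumptions for illustration only, not the implementation or architecture used in this paper.

```python
import numpy as np

def dim_inference(x, W, n_iter=100, eps1=1e-6, eps2=1e-3):
    """One PC/BC-DIM stage (illustrative sketch, not the paper's code).

    x : (m,)   non-negative input vector
    W : (n, m) non-negative weights, one row per prediction neuron
    Returns the prediction-neuron activations y after n_iter updates.
    """
    # Normalisations as described in the DIM literature (assumed here):
    # feedforward rows sum to 1; feedback weights are the transpose,
    # with each row of W rescaled so its maximum is 1.
    Wn = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    V = (W / np.maximum(W.max(axis=1, keepdims=True), 1e-12)).T
    y = np.zeros(W.shape[0])
    for _ in range(n_iter):
        r = V @ y                  # top-down reconstruction of the input
        e = x / (eps2 + r)         # element-wise divisive prediction error
        y = (eps1 + y) * (Wn @ e)  # multiplicative update of predictions
    return y

# Toy usage: two "template" weight rows; the input matches the first.
W = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
x = np.array([1.0, 1.0, 0.0, 0.0])
print(dim_inference(x, W))  # approx [1, 0]: the first template "wins"
```

In a hierarchy, the prediction activations y of one stage would serve as the input to the next stage; the paper's multi-stage architecture and the details of its object-recognition experiments go beyond this sketch.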

X Demographics

The data shown below were collected from the profiles of the 2 X users who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for the 130 Mendeley readers of this research output.

Geographical breakdown

Country          Count   As %
United States        1    <1%
Germany              1    <1%
Unknown            128     98%

Demographic breakdown

Readers by professional status   Count   As %
Student > Ph.D. Student             32    25%
Researcher                          23    18%
Student > Bachelor                  12     9%
Student > Master                    11     8%
Student > Doctoral Student           7     5%
Other                               20    15%
Unknown                             25    19%

Readers by discipline                   Count   As %
Neuroscience                               20    15%
Computer Science                           20    15%
Psychology                                 17    13%
Engineering                                 9     7%
Agricultural and Biological Sciences        9     7%
Other                                      21    16%
Unknown                                    34    26%

Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 17 June 2018.

Comparison group                                     Rank           Out of
All research outputs                                 #16,868,818    24,801,176 outputs
Outputs from Cognitive Computation                   #165           435 outputs
Outputs of similar age                               #268,699       431,799 outputs
Outputs of similar age from Cognitive Computation    #3             14 outputs
Altmetric has tracked 24,801,176 research outputs across all sources so far. This one is in the 21st percentile – i.e., 21% of other outputs scored the same or lower than it.

So far Altmetric has tracked 435 research outputs from this source. They receive a mean Attention Score of 2.6. This one is in the 49th percentile – i.e., 49% of its peers scored the same or lower than it.

Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 431,799 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 29th percentile – i.e., 29% of its contemporaries scored the same or lower than it.

We can also compare this research output to the 14 others from the same source that were published within six weeks on either side of this one. It has done well, scoring higher than 78% of these contemporaries.
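To make the "same or lower" percentile phrasing concrete, here is a small illustrative computation; the 14 scores below are hypothetical, since the per-output scores behind these comparisons are not listed on this page.

```python
# Hypothetical attention scores for 14 contemporaneous outputs from
# one source (made-up values for illustration only).
scores = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 9]
this_score = 1

# Percentile = share of outputs scoring the same as or lower than this one.
same_or_lower = sum(s <= this_score for s in scores)
percentile = 100 * same_or_lower / len(scores)
print(f"{percentile:.0f}% of contemporaries scored the same or lower")
```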