
Visual dictionaries as intermediate features in the human brain

Overview of attention for an article published in Frontiers in Computational Neuroscience, January 2015

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Above-average Attention Score compared to outputs of the same age and source (62nd percentile)

Mentioned by

5 X users

Citations

14 Dimensions

Readers on

41 Mendeley
Title
Visual dictionaries as intermediate features in the human brain
Published in
Frontiers in Computational Neuroscience, January 2015
DOI 10.3389/fncom.2014.00168
Authors

Kandan Ramakrishnan, H. Steven Scholte, Iris I. A. Groen, Arnold W. M. Smeulders, Sennay Ghebreab

Abstract

The human visual system is assumed to transform low-level visual features into object and scene representations via features of intermediate complexity. How the brain computationally represents intermediate features is still unclear. To further elucidate this, we compared the biologically plausible HMAX model and the Bag of Words (BoW) model from computer vision. Both computational models use visual dictionaries, candidate features of intermediate complexity, to represent visual scenes, and both have proven effective in automatic object and scene recognition. The models differ, however, in how they compute visual dictionaries and in their pooling techniques. We investigated where in the brain, and to what extent, human fMRI responses to a short video can be accounted for by multiple hierarchical levels of the HMAX and BoW models. Brain activity of 20 subjects obtained while viewing a short video clip was analyzed voxel-wise using a distance-based variation partitioning method. Results revealed that both HMAX and BoW explain a significant amount of brain activity in the early visual regions V1, V2, and V3. However, BoW accounts for brain activity more consistently across subjects than HMAX. Furthermore, the visual dictionary representations of HMAX and BoW explain a significant portion of brain activity in higher areas that are believed to process intermediate features. Overall, our results indicate that, although both HMAX and BoW account for activity in the human visual system, BoW seems to more faithfully represent neural responses in low- and intermediate-level visual areas of the brain.
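
The abstract hinges on dictionary-based encodings of visual input. As a rough illustration only (the random stand-in descriptors, dictionary size, and normalization below are assumptions for the sketch, not the authors' settings), a Bag of Words representation can be built in two steps: cluster local descriptors into a visual dictionary with k-means, then pool each image into a normalized histogram of visual-word assignments.

```python
# Minimal Bag of Words (BoW) sketch: learn a visual dictionary from local
# descriptors, then pool each image into a histogram of visual words.
# Descriptor extraction, dictionary size k, and normalization are
# illustrative assumptions, not the exact settings used in the paper.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for local descriptors (e.g., SIFT-like patch descriptors),
# one array per image: each row is a 128-dimensional descriptor.
images = [rng.normal(size=(200, 128)) for _ in range(10)]

# 1. Visual dictionary: cluster all descriptors into k "visual words".
k = 50
all_descriptors = np.vstack(images)
dictionary = KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_descriptors)

# 2. Pooling: encode each image as a normalized histogram of word assignments.
def bow_histogram(descriptors, dictionary, k):
    words = dictionary.predict(descriptors)           # nearest visual word per descriptor
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()                          # sum-to-one normalization

features = np.stack([bow_histogram(d, dictionary, k) for d in images])
print(features.shape)  # (10, 50): one k-dimensional BoW vector per image
```

Feature vectors like these, computed per video frame, are the kind of model representation that can then be related voxel-wise to fMRI responses; HMAX differs chiefly in how its dictionary of templates is obtained and in its use of max pooling rather than histogramming.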

X Demographics

The data shown below were collected from the profiles of the 5 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 41 Mendeley readers of this research output.

Geographical breakdown

Country            Count   As %
United States          2     5%
United Kingdom         1     2%
Chile                  1     2%
France                 1     2%
Unknown               36    88%

Demographic breakdown

Readers by professional status          Count   As %
Researcher                                 13    32%
Student > Master                            8    20%
Student > Ph.D. Student                     7    17%
Other                                       3     7%
Student > Doctoral Student                  2     5%
Other                                       7    17%
Unknown                                     1     2%

Readers by discipline                   Count   As %
Psychology                                  8    20%
Computer Science                            8    20%
Neuroscience                                7    17%
Agricultural and Biological Sciences        4    10%
Engineering                                 4    10%
Other                                       5    12%
Unknown                                     5    12%
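
The "As %" column is each count taken as a share of the 41 Mendeley readers, rounded to the nearest whole percent. A short sketch reproducing the professional-status column from the figures in the table above:

```python
# Reproduce the "As %" column: each count divided by the 41 Mendeley readers,
# rounded to the nearest whole percent (counts copied from the table above).
readers_by_status = [
    ("Researcher", 13),
    ("Student > Master", 8),
    ("Student > Ph.D. Student", 7),
    ("Other", 3),
    ("Student > Doctoral Student", 2),
    ("Other", 7),
    ("Unknown", 1),
]
total = sum(count for _, count in readers_by_status)  # 41
for status, count in readers_by_status:
    print(f"{status}: {count} ({round(100 * count / total)}%)")
```
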
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 28 February 2015.
All research outputs: #13,423,088 of 22,785,242 outputs
Outputs from Frontiers in Computational Neuroscience: #574 of 1,341 outputs
Outputs of similar age: #186,145 of 379,803 outputs
Outputs of similar age from Frontiers in Computational Neuroscience: #13 of 35 outputs
Altmetric has tracked 22,785,242 research outputs across all sources so far. This one is in the 39th percentile – i.e., 39% of other outputs scored the same or lower than it.
So far Altmetric has tracked 1,341 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.2. This one has received more attention than average, scoring higher than 54% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 379,803 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 49th percentile – i.e., 49% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 35 others from the same source and published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 62% of its contemporaries.
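
The percentile wording used above ("49% of its contemporaries scored the same or lower") corresponds to a same-or-lower percentile rank within the comparison cohort. A minimal sketch of that calculation, using made-up cohort scores rather than real Altmetric data:

```python
# Percentile rank as described above: the share of outputs in the comparison
# cohort whose Attention Score is the same as or lower than this output's.
# The cohort scores below are hypothetical, for illustration only.
def percentile_rank(score, cohort_scores):
    same_or_lower = sum(1 for s in cohort_scores if s <= score)
    return 100.0 * same_or_lower / len(cohort_scores)

cohort = [0, 1, 1, 2, 2, 3, 5, 8, 13, 25]            # hypothetical contemporaries
print(f"{percentile_rank(2, cohort):.0f}th percentile")  # 50th for this toy cohort
```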