
Attentional Bias in Human Category Learning: The Case of Deep Learning

Overview of attention for an article published in Frontiers in Psychology, April 2018

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

4 X users
1 Redditor

Citations

5 Dimensions

Readers on

35 Mendeley
Title
Attentional Bias in Human Category Learning: The Case of Deep Learning
Published in
Frontiers in Psychology, April 2018
DOI 10.3389/fpsyg.2018.00374
Authors

Catherine Hanson, Leyla Roskan Caglar, Stephen José Hanson

Abstract

Category learning performance is influenced by both the nature of the category's structure and the way category features are processed during learning. Shepard (1964, 1987) showed that stimuli can have structures with features that are statistically uncorrelated (separable) or statistically correlated (integral) within categories. Humans find it much easier to learn categories having separable features, especially when attention to only a subset of relevant features is required, and harder to learn categories having integral features, which require consideration of all available features and integration of all the relevant category features satisfying the category rule (Garner, 1974). In contrast to humans, a single hidden layer backpropagation (BP) neural network has been shown to learn both separable and integral categories equally easily, independent of the category rule (Kruschke, 1993). This "failure" to replicate human category performance appeared to be strong evidence that connectionist networks were incapable of modeling human attentional bias. We tested the presumed limitations of attentional bias in networks in two ways: (1) by having networks learn categories with exemplars that have high feature complexity, in contrast to the low-dimensional stimuli previously used, and (2) by investigating whether a Deep Learning (DL) network, which has demonstrated human-like performance in many different kinds of tasks (language translation, autonomous driving, etc.), would display human-like attentional bias during category learning. We were able to show a number of interesting results. First, we replicated the failure of BP to differentially process integral and separable category structures when low-dimensional stimuli are used (Garner, 1974; Kruschke, 1993). Second, we show that using the same low-dimensional stimuli, Deep Learning (DL), unlike BP but similar to humans, learns separable category structures more quickly than integral category structures.
Third, we show that even BP can exhibit human-like learning differences between integral and separable category structures when high-dimensional stimuli (face exemplars) are used. We conclude, after visualizing the hidden unit representations, that DL appears to extend initial learning due to feature development, thereby reducing destructive feature competition by incrementally refining feature detectors throughout later layers, until a tipping point (in terms of error) is reached, resulting in rapid asymptotic learning.
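The abstract's first result (a shallow BP network learning separable and integral low-dimensional structures with equal ease) can be illustrated with a minimal sketch. This is not the paper's actual simulation: the stimulus construction, network size, learning rate, and loss function below are all illustrative assumptions in the spirit of the Garner/Kruschke tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-dimensional stimuli: a "separable" category hinges on a
# single feature dimension, while an "integral" category requires combining
# both dimensions to satisfy the category rule.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y_separable = (X[:, 0] > 0).astype(float)            # attend to one feature
y_integral = (X[:, 0] + X[:, 1] > 0).astype(float)   # integrate both features

def train_bp(X, y, hidden=8, epochs=500, lr=0.5):
    """Single-hidden-layer backpropagation network with sigmoid units,
    trained by full-batch gradient descent on mean squared error."""
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden); b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    losses = []
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                      # hidden activations (n, hidden)
        p = sig(h @ W2 + b2)                      # output activations (n,)
        losses.append(float(np.mean((p - y) ** 2)))
        dp = 2.0 * (p - y) * p * (1.0 - p) / n    # error signal at the output
        W2 -= lr * (h.T @ dp); b2 -= lr * dp.sum()
        dh = np.outer(dp, W2) * h * (1.0 - h)     # backpropagated hidden error
        W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(axis=0)
    return losses

sep = train_bp(X, y_separable)
integ = train_bp(X, y_integral)
print(f"separable: {sep[0]:.3f} -> {sep[-1]:.3f}")
print(f"integral:  {integ[0]:.3f} -> {integ[-1]:.3f}")
```

With low-dimensional stimuli like these, the shallow network drives the error down on both structures; comparing the two loss curves epoch by epoch is the kind of analysis the paper uses to ask whether a network, like humans, finds the separable structure easier.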

X Demographics
The data shown below were collected from the profiles of 4 X users who shared this research output.
Mendeley readers
The data shown below were compiled from readership statistics for 35 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 35 100%

Demographic breakdown

Readers by professional status Count As %
Student > Master 6 17%
Researcher 6 17%
Student > Bachelor 5 14%
Student > Ph.D. Student 4 11%
Student > Postgraduate 3 9%
Other 6 17%
Unknown 5 14%
Readers by discipline Count As %
Psychology 10 29%
Neuroscience 7 20%
Computer Science 5 14%
Engineering 2 6%
Social Sciences 2 6%
Other 3 9%
Unknown 6 17%
Attention Score in Context
This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 16 August 2018.
All research outputs: #13,346,498 of 23,026,672 outputs
Outputs from Frontiers in Psychology: #12,619 of 30,283 outputs
Outputs of similar age: #164,523 of 327,953 outputs
Outputs of similar age from Frontiers in Psychology: #346 of 593 outputs
Altmetric has tracked 23,026,672 research outputs across all sources so far. This one is in the 41st percentile – i.e., 41% of other outputs scored the same or lower than it.
So far Altmetric has tracked 30,283 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.5. This one has received more attention than average, scoring higher than 56% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 327,953 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 48th percentile – i.e., 48% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 593 others from the same source and published within six weeks on either side of this one. This one is in the 40th percentile – i.e., 40% of its contemporaries scored the same or lower than it.
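The percentile figures above follow from the rank-of-total pairs quoted on this page. A minimal sketch of the arithmetic (the exact tie handling is Altmetric's own, which is presumably why the computed values run a point or two above the published 41st/48th/40th percentiles):

```python
def percentile_from_rank(rank: int, total: int) -> float:
    """Percentage of outputs scoring the same as or lower than this one,
    given a 1-based rank where rank 1 is the highest-scoring output."""
    return 100.0 * (total - rank) / total

# Rank/total pairs quoted on this page (tie handling ignored)
for label, rank, total in [
    ("all research outputs", 13_346_498, 23_026_672),
    ("outputs of similar age", 164_523, 327_953),
    ("similar age, same source", 346, 593),
]:
    print(f"{label}: ~{percentile_from_rank(rank, total):.0f}th percentile")
```

The untied estimate for all research outputs comes out near 42%, against the published 41st percentile; when many outputs share the same score, counting ties as "scored the same or lower" versus excluding them shifts the result by a few tenths of a percent or more.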