
How Well Can Saliency Models Predict Fixation Selection in Scenes Beyond Central Bias? A New Approach to Model Evaluation Using Generalized Linear Mixed Models

Overview of attention for article published in Frontiers in Human Neuroscience, October 2017

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (61st percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

  • X: 6 users

Citations

  • Dimensions: 29 citations

Readers on

  • Mendeley: 58 readers
Title
How Well Can Saliency Models Predict Fixation Selection in Scenes Beyond Central Bias? A New Approach to Model Evaluation Using Generalized Linear Mixed Models
Published in
Frontiers in Human Neuroscience, October 2017
DOI 10.3389/fnhum.2017.00491
Pubmed ID
Authors

Antje Nuthmann, Wolfgang Einhäuser, Immo Schütz

Abstract

Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead ("central bias"). This problem is further exacerbated in the context of model comparisons, because some, but not all, models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a-priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox "GridFix" available.
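
The abstract describes a GLMM with a fixed effect for the central bias, a fixed effect for the saliency model's prediction, and crossed by-subject and by-item (scene) random effects. The sketch below illustrates what such a model might look like in Python; it is not the GridFix toolbox's own interface, and the authors' analyses may use a different GLMM implementation. The column names (`fixated`, `central_bias`, `saliency`, `subject`, `scene`) and the file name are hypothetical placeholders for whatever a GridFix-style export actually contains.

```python
# Minimal sketch of a mixed-effects logistic regression of the kind described in
# the abstract, fitted with statsmodels. This is NOT the GridFix API; it only
# assumes a long-format table with one row per scene region and observer:
#   fixated        1 if the region received at least one fixation, else 0
#   central_bias   predictor encoding the region's proximity to the image center
#   saliency       the saliency model's prediction for the region
#   subject, scene grouping factors for the crossed random effects
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

data = pd.read_csv("gridfix_output.csv")  # hypothetical export file

# Fixed effects: central bias plus the saliency prediction, so the saliency
# coefficient reflects predictive power above and beyond the central bias.
fixed_effects = "fixated ~ central_bias + saliency"

# By-subject and by-item (scene) random intercepts as variance components.
random_effects = {
    "subject": "0 + C(subject)",
    "scene": "0 + C(scene)",
}

model = BinomialBayesMixedGLM.from_formula(fixed_effects, random_effects, data)
result = model.fit_vb()  # variational Bayes approximation
print(result.summary())
```

Under this setup, comparing saliency models could amount to fitting the same structure with each model's predictor (or with several predictors entered together) and inspecting the corresponding coefficients, which is one way to read the abstract's claim that models can be compared directly.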

X Demographics

The data shown below were collected from the profiles of 6 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 58 Mendeley readers of this research output.

Geographical breakdown

  • Unknown country: 58 readers (100%)

Demographic breakdown

Readers by professional status
  • Student > Ph. D. Student: 15 (26%)
  • Student > Master: 8 (14%)
  • Student > Bachelor: 7 (12%)
  • Researcher: 6 (10%)
  • Student > Doctoral Student: 5 (9%)
  • Other: 6 (10%)
  • Unknown: 11 (19%)

Readers by discipline
  • Psychology: 18 (31%)
  • Neuroscience: 8 (14%)
  • Computer Science: 6 (10%)
  • Agricultural and Biological Sciences: 2 (3%)
  • Economics, Econometrics and Finance: 1 (2%)
  • Other: 6 (10%)
  • Unknown: 17 (29%)
Attention Score in Context

This research output has an Altmetric Attention Score of 4. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 08 November 2017.
  • All research outputs: #8,006,543 of 24,226,848 outputs
  • Outputs from Frontiers in Human Neuroscience: #3,350 of 7,440 outputs
  • Outputs of similar age: #125,994 of 333,352 outputs
  • Outputs of similar age from Frontiers in Human Neuroscience: #82 of 149 outputs
Altmetric has tracked 24,226,848 research outputs across all sources so far. This one has received more attention than most of them and is in the 66th percentile.
So far Altmetric has tracked 7,440 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.8. This one has received more attention than average, scoring higher than 54% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 333,352 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 61% of its contemporaries.
We can also compare this research output to the 149 others from the same source that were published within six weeks on either side of it. This one is in the 44th percentile, i.e., 44% of its contemporaries scored the same as or lower than it.