
Bayesian cross‐validation for model evaluation and selection, with application to the North American Breeding Bird Survey

Overview of attention for article published in Ecology, July 2016

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by: 5 X users
Citations: 36 (Dimensions)
Readers: 34 (Mendeley)
Published in: Ecology, July 2016
DOI: 10.1890/15-1286.1
Authors: William A Link, John R Sauer

Abstract

The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in the context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion (BPIC) and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion (WAIC). We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.
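
The abstract describes WAIC as a computationally convenient approximation to Bayesian cross-validation. As a rough illustration only (not the authors' code), the standard WAIC calculation from a matrix of pointwise posterior log-likelihoods might look like the sketch below; the input array `log_lik`, with one row per posterior draw and one column per observation, is an assumed format.

```python
import numpy as np
from scipy.special import logsumexp

def waic(log_lik):
    """Compute WAIC from pointwise log-likelihoods.

    log_lik: array of shape (S, N) -- S posterior draws, N observations,
             holding log p(y_i | theta_s) for draw s and observation i.
    Returns WAIC on the deviance scale (-2 * elpd) plus its components.
    """
    S, N = log_lik.shape
    # lppd: log pointwise predictive density, averaging likelihoods over draws
    lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))
    # p_waic: effective number of parameters, the posterior variance of the
    # pointwise log-likelihood summed over observations
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    elpd_waic = lppd - p_waic
    return {"waic": -2.0 * elpd_waic, "lppd": lppd, "p_waic": p_waic}

# Purely illustrative usage with simulated draws:
# log_lik = np.random.randn(4000, 250) * 0.1 - 1.0
# print(waic(log_lik))
```
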

X Demographics

The data shown below were collected from the profiles of 5 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 34 Mendeley readers of this research output.

Geographical breakdown

Country          Count   As %
United Kingdom       1     3%
United States        1     3%
France               1     3%
Unknown             31    91%

Demographic breakdown

Readers by professional status    Count   As %
Researcher                           11    32%
Student > Ph.D. Student               9    26%
Student > Master                      4    12%
Student > Postgraduate                2     6%
Student > Doctoral Student            1     3%
Other                                 2     6%
Unknown                               5    15%

Readers by discipline                          Count   As %
Agricultural and Biological Sciences              16    47%
Environmental Science                             10    29%
Biochemistry, Genetics and Molecular Biology       1     3%
Nursing and Health Professions                     1     3%
Earth and Planetary Sciences                       1     3%
Other                                              0     0%
Unknown                                            5    15%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 22 March 2016.
All research outputs: #14,083,834 of 24,549,201 outputs
Outputs from Ecology: #5,142 of 6,823 outputs
Outputs of similar age: #185,367 of 358,908 outputs
Outputs of similar age from Ecology: #44 of 79 outputs
Altmetric has tracked 24,549,201 research outputs across all sources so far. This one is in the 41st percentile – i.e., 41% of other outputs scored the same or lower than it.
So far Altmetric has tracked 6,823 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 13.3. This one is in the 23rd percentile – i.e., 23% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 358,908 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 47th percentile – i.e., 47% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 79 others from the same source and published within six weeks on either side of this one. This one is in the 44th percentile – i.e., 44% of its contemporaries scored the same or lower than it.
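
For readers who want to reproduce the rough arithmetic behind these percentile statements, a simple rank-based calculation is sketched below. This is an approximation only: Altmetric's own percentiles account for tied scores and the snapshot date, so the published figures can differ from this naive calculation by a point or two.

```python
def percentile_from_rank(rank, total):
    """Approximate percentile: share of tracked outputs ranked at or below this one.

    rank 1 = highest Attention Score. Ties and snapshot timing mean the
    published Altmetric percentiles may not match this exactly.
    """
    return 100.0 * (total - rank) / total

# Using the figures quoted above (illustrative only):
print(round(percentile_from_rank(14_083_834, 24_549_201)))  # ~43, vs. the reported 41st percentile
print(round(percentile_from_rank(185_367, 358_908)))        # ~48, vs. the reported 47th percentile
print(round(percentile_from_rank(44, 79)))                   # ~44, matching the reported 44th percentile
```
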