
Experimental investigation of false positive errors in auditory species occurrence surveys

Overview of attention for article published in Ecological Applications, July 2012

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (86th percentile)
  • High Attention Score compared to outputs of the same age and source (94th percentile)

Mentioned by

2 blogs

Citations

80 Dimensions

Readers on

143 Mendeley
Title
Experimental investigation of false positive errors in auditory species occurrence surveys
Published in
Ecological Applications, July 2012
DOI 10.1890/11-2129.1
Authors

David A. W. Miller, Linda A. Weir, Brett T. McClintock, Evan H. Campbell Grant, Larissa L. Bailey, Theodore R. Simons

Abstract

False positive errors are a significant component of many ecological data sets, which, in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment in which observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine whether targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in ability from novice to expert, who recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors, and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls, with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently but were not related to the dominant spectral frequency of the call. Our results corroborate other work demonstrating that false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to report only detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor of among-observer variation in observation error rates.
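
The headline error rates in the abstract come from comparing each observer's reported detections against the known broadcast composition of every trial. Below is a minimal sketch of that tally for a single observer; the trial structure and the example data are hypothetical illustrations, not the authors' actual code or data format.

```python
# Minimal sketch of how false positive and false negative tallies can be computed
# for one observer when the species actually broadcast in each trial are known,
# as in the experiment described above. The data structures and example values
# are hypothetical illustrations, not the authors' code or data format.

def error_rates(trials):
    """trials: list of dicts with 'broadcast' (species actually played)
    and 'recorded' (species the observer reported), each a set of names."""
    false_pos = false_neg = detections = opportunities = 0
    for t in trials:
        broadcast, recorded = t["broadcast"], t["recorded"]
        false_pos += len(recorded - broadcast)   # reported but never played
        false_neg += len(broadcast - recorded)   # played but not reported
        detections += len(recorded)              # all reported detections
        opportunities += len(broadcast)          # all true detection chances
    return {
        # share of recorded detections that were wrong (the 8.1% figure in the
        # abstract is this quantity, averaged over observers)
        "false_positives_per_detection": false_pos / detections if detections else 0.0,
        # share of truly broadcast species that were missed
        "false_negative_rate": false_neg / opportunities if opportunities else 0.0,
    }

# Two hypothetical trials: the first has one miss ("B") and one false positive ("C").
example = [
    {"broadcast": {"A", "B"}, "recorded": {"A", "C"}},
    {"broadcast": {"A"},      "recorded": {"A"}},
]
print(error_rates(example))  # roughly 0.33 for both rates in this toy example
```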

Mendeley readers

The data shown below were compiled from readership statistics for 143 Mendeley readers of this research output.

Geographical breakdown

Country            Count   As %
United States          5     3%
Brazil                 3     2%
Spain                  2     1%
India                  1    <1%
Australia              1    <1%
Puerto Rico            1    <1%
Unknown              130    91%

Demographic breakdown

Readers by professional status   Count   As %
Researcher                          45    31%
Student > Master                    26    18%
Student > Ph.D. Student             24    17%
Student > Bachelor                  10     7%
Professor                            5     3%
Other                               20    14%
Unknown                             13     9%

Readers by discipline                           Count   As %
Agricultural and Biological Sciences               82    57%
Environmental Science                              32    22%
Biochemistry, Genetics and Molecular Biology        2     1%
Economics, Econometrics and Finance                 2     1%
Social Sciences                                     2     1%
Other                                               5     3%
Unknown                                            18    13%
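
As an aside, the "As %" columns above appear to be each count expressed as a share of all 143 Mendeley readers, rounded to the nearest whole percent (with sub-1% shares displayed as "<1%"). The sketch below reproduces the professional-status column under that assumed rounding convention:

```python
# Sketch of the "As %" calculation for the readership tables above, assuming each
# count is divided by the 143-reader total and rounded to the nearest percent,
# with shares under 1% shown as "<1%". Counts are copied from the table.

TOTAL_READERS = 143

def as_percent(count, total=TOTAL_READERS):
    share = 100 * count / total
    return "<1%" if 0 < share < 1 else f"{round(share)}%"

by_status = {
    "Researcher": 45,
    "Student > Master": 26,
    "Student > Ph.D. Student": 24,
    "Student > Bachelor": 10,
    "Professor": 5,
    "Other": 20,
    "Unknown": 13,
}

for status, count in by_status.items():
    print(f"{status:<28} {count:>4}  {as_percent(count):>4}")
```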
Attention Score in Context

This research output has an Altmetric Attention Score of 10. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 17 August 2017.
All research outputs:                                   #3,289,054 of 24,717,821 outputs
Outputs from Ecological Applications:                   #843 of 3,335 outputs
Outputs of similar age:                                 #20,910 of 167,543 outputs
Outputs of similar age from Ecological Applications:    #2 of 37 outputs
Altmetric has tracked 24,717,821 research outputs across all sources so far. Compared to these, this one has done well and is in the 86th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 3,335 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 16.0. This one has received more attention than average, scoring higher than 74% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 167,543 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 86% of its contemporaries.
We're also able to compare this research output to 37 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 94% of its contemporaries.
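
For readers who want to check the rankings against the percentiles quoted above, the arithmetic is straightforward: the percentile is roughly the share of tracked outputs that this one outranks. The sketch below reproduces the figures approximately from the rank-of-total values in the context table; exact values can differ by a point because Altmetric computed the score and rankings at the snapshot date mentioned above and may round differently.

```python
# Rough reproduction of the percentiles quoted above from the rank-of-total figures
# in the context table: percentile = share of outputs ranked below this one,
# truncated to a whole number. This illustrates the arithmetic only; Altmetric's
# own rounding and snapshot timing can shift a value by a percentage point.

def percentile(rank, total):
    return int(100 * (total - rank) / total)

contexts = {
    "All research outputs":                                (3_289_054, 24_717_821),
    "Outputs from Ecological Applications":                (843, 3_335),
    "Outputs of similar age":                              (20_910, 167_543),
    "Outputs of similar age from Ecological Applications": (2, 37),
}

for label, (rank, total) in contexts.items():
    print(f"{label}: ~{percentile(rank, total)}th percentile")
```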