
Objective Fidelity Evaluation in Multisensory Virtual Environments: Auditory Cue Fidelity in Flight Simulation

Overview of attention for article published in PLOS ONE, September 2012

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by
2 X users

Citations
26 (Dimensions)

Readers on Mendeley
75
Title
Objective Fidelity Evaluation in Multisensory Virtual Environments: Auditory Cue Fidelity in Flight Simulation
Published in
PLOS ONE, September 2012
DOI 10.1371/journal.pone.0044381
Pubmed ID
Authors

Georg F. Meyer, Li Ting Wong, Emma Timson, Philip Perfect, Mark D. White

Abstract

We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models for quantifying human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for the objective evaluation of the contribution of individual cues to human-performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1, detection and categorisation of auditory and kinematic motion cues; Experiment 2, performance evaluation in a target-tracking task; Experiment 3, transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that this contribution is highly task-dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (experiment 2). In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues.
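The phrase "optimally integrated" in the abstract refers to the standard maximum-likelihood cue-combination model from the multisensory perception literature, in which each cue is weighted by its reliability (inverse variance). The sketch below is an illustrative implementation of that general model, not code from the paper; the example estimates and variances are hypothetical.

```python
import numpy as np

def mle_cue_combination(estimates, variances):
    """Combine independent cue estimates by inverse-variance (reliability) weighting.

    Under the standard maximum-likelihood model of multisensory integration,
    the combined estimate weights each cue by its reliability (1 / variance),
    and the combined variance is lower than that of either cue alone.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    combined_estimate = np.sum(weights * estimates)
    combined_variance = 1.0 / np.sum(1.0 / variances)
    return combined_estimate, combined_variance

# Hypothetical example: a less reliable auditory motion cue and a more
# reliable kinematic cue estimating the same quantity.
est, var = mle_cue_combination(estimates=[10.0, 12.0], variances=[4.0, 1.0])
print(est, var)  # 11.6, 0.8 -- dominated by the more reliable cue
```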

X Demographics

The data shown below were collected from the profiles of 2 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 75 Mendeley readers of this research output.

Geographical breakdown

Country          Count   As %
Japan                1     1%
United Kingdom       1     1%
Mexico               1     1%
Brazil               1     1%
Unknown             71    95%

Demographic breakdown

Readers by professional status   Count   As %
Student > Ph. D. Student            13    17%
Researcher                          13    17%
Student > Master                    13    17%
Student > Bachelor                   7     9%
Other                                3     4%
Other                               12    16%
Unknown                             14    19%

Readers by discipline            Count   As %
Psychology                          12    16%
Engineering                         12    16%
Computer Science                     8    11%
Design                               5     7%
Neuroscience                         4     5%
Other                               16    21%
Unknown                             18    24%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 23 December 2013.
All research outputs                   #14,151,132 of 22,678,224 outputs
Outputs from PLOS ONE                     #115,596 of 193,568 outputs
Outputs of similar age                     #98,142 of 169,211 outputs
Outputs of similar age from PLOS ONE        #2,503 of 4,380 outputs
Altmetric has tracked 22,678,224 research outputs across all sources so far. This one is in the 35th percentile – i.e., 35% of other outputs scored the same or lower than it.
So far Altmetric has tracked 193,568 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 15.0. This one is in the 36th percentile – i.e., 36% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 169,211 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 39th percentile – i.e., 39% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 4,380 others from the same source and published within six weeks on either side of this one. This one is in the 38th percentile – i.e., 38% of its contemporaries scored the same or lower than it.
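For readers unfamiliar with how these percentile statements are derived, the sketch below implements the comparison described above, i.e. the share of peer outputs that scored the same as or lower than this one; the peer scores in the example are hypothetical, not Altmetric data.

```python
def percentile_rank(score, peer_scores):
    """Percentile rank as described above: the percentage of peer outputs
    that scored the same as or lower than the given score."""
    same_or_lower = sum(1 for s in peer_scores if s <= score)
    return 100.0 * same_or_lower / len(peer_scores)

# Hypothetical illustration: an Attention Score of 2 against a small set of peer scores.
print(percentile_rank(2, [0, 1, 1, 2, 3, 5, 8, 13, 2, 0]))  # 60.0
```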