
Validation of educational assessments: a primer for simulation and beyond

Overview of attention for article published in Advances in Simulation, December 2016

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • Among the highest-scoring outputs from this source (#18 of 271)
  • High Attention Score compared to outputs of the same age (96th percentile)
  • High Attention Score compared to outputs of the same age and source (83rd percentile)

Mentioned by

  • 84 X users
  • 1 Facebook page

Citations

  • 221 Dimensions

Readers on

  • 355 Mendeley
Title
Validation of educational assessments: a primer for simulation and beyond
Published in
Advances in Simulation, December 2016
DOI 10.1186/s41077-016-0033-y
Authors

David A. Cook, Rose Hatala

Abstract

Simulation plays a vital role in health professions assessment. This review provides a primer on assessment validation for educators and education researchers. We focus on simulation-based assessment of health professionals, but the principles apply broadly to other assessment approaches and topics. Validation refers to the process of collecting validity evidence to evaluate the appropriateness of the interpretations, uses, and decisions based on assessment results. Contemporary frameworks view validity as a hypothesis, and validity evidence is collected to support or refute the validity hypothesis (i.e., that the proposed interpretations and decisions are defensible). In validation, the educator or researcher defines the proposed interpretations and decisions, identifies and prioritizes the most questionable assumptions in making these interpretations and decisions (the "interpretation-use argument"), empirically tests those assumptions using existing or newly collected evidence, and then summarizes the evidence as a coherent "validity argument." A framework proposed by Messick identifies potential evidence sources: content, response process, internal structure, relationships with other variables, and consequences. Another framework proposed by Kane identifies key inferences in generating useful interpretations: scoring, generalization, extrapolation, and implications/decision. We propose an eight-step approach to validation that applies to either framework: (1) define the construct and proposed interpretation, (2) make explicit the intended decision(s), (3) define the interpretation-use argument and prioritize needed validity evidence, (4) identify candidate instruments and/or create/adapt a new instrument, (5) appraise existing evidence and collect new evidence as needed, (6) keep track of practical issues, (7) formulate the validity argument, and (8) make a judgment: does the evidence support the intended use? Rigorous validation first prioritizes and then empirically evaluates key assumptions in the interpretation and use of assessment scores. Validation science would be improved by more explicit articulation and prioritization of the interpretation-use argument, greater use of formal validation frameworks, and more evidence informing the consequences and implications of assessment.

X Demographics

Demographic data were collected from the profiles of the 84 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 355 Mendeley readers of this research output.

Geographical breakdown

  Country   Count   As %
  Unknown     355   100%

Demographic breakdown

  Readers by professional status   Count   As %
  Other                               43    12%
  Student > Master                    33     9%
  Student > Ph.D. Student             31     9%
  Researcher                          29     8%
  Student > Postgraduate              22     6%
  Other                               92    26%
  Unknown                            105    30%
  Readers by discipline            Count   As %
  Medicine and Dentistry             133    37%
  Nursing and Health Professions      21     6%
  Social Sciences                     12     3%
  Engineering                         11     3%
  Computer Science                     9     3%
  Other                               47    13%
  Unknown                            122    34%
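
Each "As %" figure in the breakdowns above is simply the row count divided by the 355 total Mendeley readers, rounded to the nearest whole percent. A minimal check in Python (the rounding rule is inferred from the figures shown, not documented by Altmetric):

    # Reproduce the "As %" columns: count / total readers, rounded to the
    # nearest whole percent. Total of 355 Mendeley readers (from the tables above).
    TOTAL_READERS = 355

    def as_percent(count: int) -> int:
        return round(count / TOTAL_READERS * 100)

    for label, count in [("Medicine and Dentistry", 133),
                         ("Other", 92),
                         ("Unknown", 105)]:
        print(f"{label}: {as_percent(count)}%")   # 37%, 26%, 30%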
Attention Score in Context

This research output has an Altmetric Attention Score of 57. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 21 May 2019.
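The score itself is a weighted count of mentions rather than a simple tally. As a rough sketch of the idea (the weights below are illustrative assumptions, not Altmetric's published algorithm, which also adjusts for audience and source; that is why 84 X users and 1 Facebook page yield a score of 57 rather than a plain weighted sum):

    # Illustrative sketch only: a weighted sum of mentions by source type.
    # The weights below are assumptions for illustration; the real algorithm
    # also adjusts for audience, source, and repeated mentions.
    ASSUMED_WEIGHTS = {
        "news": 8.0,       # assumed
        "blog": 5.0,       # assumed
        "x_user": 1.0,     # assumed, per X (Twitter) account
        "facebook": 0.25,  # assumed, per Facebook page
    }

    def naive_attention_score(mentions: dict[str, int]) -> float:
        """Weighted sum of mention counts; no audience or bias adjustments."""
        return sum(ASSUMED_WEIGHTS.get(source, 0.0) * count
                   for source, count in mentions.items())

    score = naive_attention_score({"x_user": 84, "facebook": 1})
    print(score)  # 84.25 -- higher than the reported 57, because the real
                  # algorithm down-weights low-reach or repetitive accounts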
  Comparison set                                       Rank       Total
  All research outputs                                 #737,080   25,199,243 outputs
  Outputs from Advances in Simulation                  #18        271 outputs
  Outputs of similar age                               #15,326    432,037 outputs
  Outputs of similar age from Advances in Simulation   #2         6 outputs
Altmetric has tracked 25,199,243 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 97th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 271 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 17.8. This one has done particularly well, scoring higher than 93% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 432,037 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 96% of its contemporaries.
We're also able to compare this research output to 6 others from the same source and published within six weeks on either side of this one. This one has scored higher than 4 of them.
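
The percentile claims above can be reproduced from the rank-and-total pairs in the table. A minimal sketch, assuming the percentile is the share of outputs scoring at or below this one (the formula is inferred from the numbers shown on this page, not taken from Altmetric's documentation):

    # Percentile rank as the share of outputs scoring at or below this one.
    # Formula inferred from the figures above; not Altmetric documentation.
    def percentile(rank: int, total: int) -> float:
        return (total - rank + 1) / total * 100

    print(int(percentile(737_080, 25_199_243)))  # 97 -> top 5%, "97th percentile"
    print(int(percentile(18, 271)))              # 93 -> "higher than 93% of its peers"
    print(int(percentile(15_326, 432_037)))      # 96 -> "96th percentile" vs similar age
    print(int(percentile(2, 6)))                 # 83 -> "83rd percentile" vs similar age and source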