
Exploring the impact of mental workload on rater-based assessments

Overview of attention for article published in Advances in Health Sciences Education, April 2012

About this Attention Score

  • Average Attention Score compared to outputs of the same age

Mentioned by

X (Twitter)
2 X users

Readers on

Mendeley
131 Mendeley readers
Title
Exploring the impact of mental workload on rater-based assessments
Published in
Advances in Health Sciences Education, April 2012
DOI 10.1007/s10459-012-9370-3
Authors

Walter Tavares, Kevin W. Eva

Abstract

When appraising the performance of others, assessors must acquire relevant information and process it in a meaningful way in order to translate it effectively into ratings, comments, or judgments about how well the performance meets appropriate standards. Rater-based assessment strategies in health professional education, including scale and faculty development strategies aimed at improving them, have generally been implemented with limited consideration of human cognitive and perceptual limitations. However, the extent to which the task assigned to raters aligns with their cognitive and perceptual capacities will determine the extent to which reliance on human judgment threatens assessment quality. It is well recognized in medical decision making that, as the amount of information to be processed increases, judges may engage in mental shortcuts through the application of schemas or heuristics, or through the adoption of solutions that satisfy rather than optimize the judge's needs. Further, these shortcuts may fundamentally limit or bias the information perceived or processed. Thinking of the challenges inherent in rater-based assessments in an analogous way may yield novel insights regarding the limits of rater-based assessment and may point to greater understanding of ways in which raters can be supported to facilitate sound judgment. This paper presents an initial exploration of various cognitive and perceptual limitations associated with rater-based assessment tasks. We hope to highlight how the inherent cognitive architecture of raters might beneficially be taken into account when designing rater-based assessment protocols.

X Demographics

The data shown below were collected from the profiles of the 2 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 131 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Canada 2 2%
Malaysia 1 <1%
Netherlands 1 <1%
Australia 1 <1%
Indonesia 1 <1%
United Kingdom 1 <1%
United States 1 <1%
Unknown 123 94%

Demographic breakdown

Readers by professional status Count As %
Student > Ph.D. Student 21 16%
Researcher 19 15%
Student > Master 18 14%
Professor > Associate Professor 9 7%
Student > Bachelor 9 7%
Other 37 28%
Unknown 18 14%
Readers by discipline Count As %
Medicine and Dentistry 47 36%
Social Sciences 20 15%
Psychology 14 11%
Engineering 8 6%
Business, Management and Accounting 5 4%
Other 15 11%
Unknown 22 17%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 30 November 2013.
All research outputs
#14,628,184
of 22,713,403 outputs
Outputs from Advances in Health Sciences Education
#633
of 851 outputs
Outputs of similar age
#98,622
of 161,393 outputs
Outputs of similar age from Advances in Health Sciences Education
#11
of 14 outputs
Altmetric has tracked 22,713,403 research outputs across all sources so far. This one is in the 35th percentile – i.e., 35% of other outputs scored the same or lower than it.
So far Altmetric has tracked 851 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 5.7. This one is in the 24th percentile – i.e., 24% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 161,393 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 38th percentile – i.e., 38% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 14 others from the same source and published within six weeks on either side of this one. This one is in the 21st percentile – i.e., 21% of its contemporaries scored the same or lower than it.
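Each percentile figure above follows the same rule: the share of tracked outputs in the comparison group that scored the same as or lower than this one. A minimal sketch of that rule (using made-up peer scores for illustration, not Altmetric's actual data):

```python
def percentile_rank(score, peer_scores):
    """Percent of peers scoring the same as or lower than `score`.

    This mirrors the "X% of other outputs scored the same or lower"
    phrasing used above; it is an illustrative reconstruction, not
    Altmetric's internal implementation.
    """
    same_or_lower = sum(1 for s in peer_scores if s <= score)
    return 100 * same_or_lower / len(peer_scores)

# Hypothetical Attention Scores for ten peer outputs.
peers = [0, 1, 1, 2, 2, 3, 5, 8, 10, 20]
print(percentile_rank(2, peers))  # 5 of 10 peers scored <= 2, so 50.0
```

Applied to the figures above, a score of 2 landing in the 35th percentile of all 22,713,403 tracked outputs simply means roughly 35% of those outputs have an Attention Score of 2 or less.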