
Modifying Hofstee standard setting for assessments that vary in difficulty, and to determine boundaries for different levels of achievement

Overview of attention for an article published in BMC Medical Education, January 2016

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

  • X (Twitter): 3 users

Citations

  • Dimensions: 3 citations

Readers on

  • Mendeley: 74 readers
Title
Modifying Hofstee standard setting for assessments that vary in difficulty, and to determine boundaries for different levels of achievement
Published in
BMC Medical Education, January 2016
DOI 10.1186/s12909-016-0555-y
Pubmed ID
Authors

Steven A. Burr, John Whittle, Lucy C. Fairclough, Lee Coombes, Ian Todd

Abstract

Fixed mark grade boundaries for non-linear assessment scales fail to account for variations in assessment difficulty. Where assessment difficulty varies more than the ability of successive cohorts or the quality of the teaching, anchoring grade boundaries to median cohort performance should provide an effective method for setting standards. This study investigated the use of a modified Hofstee (MH) method for setting unsatisfactory/satisfactory and satisfactory/excellent grade boundaries for multiple choice question-style assessments, adjusted using the cohort median to obviate the effect of subjective judgements and the provision of grade quotas. Outcomes for the MH method were compared with formula scoring/correction for guessing (FS/CFG) for 11 assessments, indicating that there were no significant differences between MH and FS/CFG in either the effective unsatisfactory/satisfactory grade boundary or the proportion of unsatisfactory graded candidates (p > 0.05). However, the boundary for excellent performance was significantly higher for MH (p < 0.01), and the proportion of candidates returned as excellent was significantly lower (p < 0.01). MH also generated performance profiles and pass marks that were not significantly different from those given by the Ebel method of criterion-referenced standard setting. This supports MH as an objective model for calculating variable grade boundaries, adjusted for test difficulty. Furthermore, it easily creates boundaries for unsatisfactory/satisfactory and satisfactory/excellent performance that are protected against grade inflation. It could be implemented as a stand-alone method of standard setting, or as part of the post-examination analysis of results for assessments for which pre-examination criterion-referenced standard setting is employed.
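The central idea described in the abstract, anchoring otherwise fixed grade boundaries to the cohort median so that they track assessment difficulty, can be sketched in code. The sketch below is only an illustrative simplification under stated assumptions, not the published MH procedure: the function name, the reference_median parameter, and the linear shift rule are all introduced here for clarity.

    # Illustrative sketch (an assumption, not the published modified-Hofstee
    # algorithm): shift fixed grade boundaries by the difference between the
    # observed cohort median and a reference median, so the boundaries track
    # how hard the paper turned out to be.
    from statistics import median

    def median_anchored_boundaries(scores, fixed_boundaries, reference_median=60.0):
        """Shift each fixed boundary by (cohort median - reference median).

        scores           -- candidate percentage scores for one assessment
        fixed_boundaries -- nominal cut scores, e.g.
                            {"unsatisfactory/satisfactory": 50.0,
                             "satisfactory/excellent": 70.0}
        reference_median -- cohort median the nominal cuts were designed for
                            (hypothetical parameter, chosen for illustration)
        """
        shift = median(scores) - reference_median
        # A harder paper lowers the cohort median, pulling both boundaries down;
        # clamp so the adjusted cuts stay on the 0-100 mark scale.
        return {grade: min(100.0, max(0.0, cut + shift))
                for grade, cut in fixed_boundaries.items()}

    if __name__ == "__main__":
        cohort = [42, 48, 55, 57, 58, 61, 63, 66, 70, 74]  # toy scores
        cuts = {"unsatisfactory/satisfactory": 50.0,
                "satisfactory/excellent": 70.0}
        print(median_anchored_boundaries(cohort, cuts))
        # -> both boundaries shift down by 0.5 because the cohort median (59.5)
        #    sits just below the assumed reference median of 60.

Because the shift is driven only by the observed median, no subjective per-exam judgement or grade quota is needed once the nominal cuts are fixed, which is the property the abstract attributes to the MH approach.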

X Demographics

The data shown below were collected from the profiles of the 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 74 Mendeley readers of this research output.

Geographical breakdown

Country        Count   As %
Netherlands        1     1%
Unknown           73    99%

Demographic breakdown

Readers by professional status     Count   As %
Researcher                            10    14%
Other                                  9    12%
Student > Master                       9    12%
Student > Postgraduate                 7     9%
Professor > Associate Professor        7     9%
Other                                 14    19%
Unknown                               18    24%

Readers by discipline                           Count   As %
Medicine and Dentistry                             38    51%
Psychology                                          3     4%
Social Sciences                                     3     4%
Veterinary Science and Veterinary Medicine          2     3%
Biochemistry, Genetics and Molecular Biology        1     1%
Other                                               7     9%
Unknown                                            20    27%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 22 April 2020.
  • All research outputs: #15,758,801 of 25,402,889 outputs
  • Outputs from BMC Medical Education: #2,194 of 3,988 outputs
  • Outputs of similar age: #215,482 of 405,642 outputs
  • Outputs of similar age from BMC Medical Education: #50 of 89 outputs
Altmetric has tracked 25,402,889 research outputs across all sources so far. This one is in the 37th percentile – i.e., 37% of other outputs scored the same or lower than it.
So far Altmetric has tracked 3,988 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.4. This one is in the 43rd percentile – i.e., 43% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 405,642 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 45th percentile – i.e., 45% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 89 others from the same source and published within six weeks on either side of this one. This one is in the 42nd percentile – i.e., 42% of its contemporaries scored the same or lower than it.
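As a rough sanity check on how these percentiles relate to the ranks listed above, each rank can be converted into the share of its pool scoring the same or lower. The formula below is an assumption about how the percentile is defined; Altmetric's tie-handling and rounding are not documented here, so the results land a few points above the quoted figures.

    # Rough check of how the percentiles above relate to the quoted ranks,
    # assuming "percentile" means the share of the pool ranked the same as or
    # below this output. Exact tie-handling and rounding are not described
    # here, so values a few points above the quoted percentiles are expected.
    def approx_percentile(rank, pool_size):
        """Percentage of the pool ranked at or below the given rank."""
        return 100.0 * (pool_size - rank + 1) / pool_size

    pools = {
        "All research outputs": (15_758_801, 25_402_889),
        "Outputs from BMC Medical Education": (2_194, 3_988),
        "Outputs of similar age": (215_482, 405_642),
        "Outputs of similar age from BMC Medical Education": (50, 89),
    }

    for name, (rank, size) in pools.items():
        print(f"{name}: ~{approx_percentile(rank, size):.0f}th percentile")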