
The ABCs of DKA: Development and Validation of a Computer-Based Simulator and Scoring System

Overview of attention for an article published in Journal of General Internal Medicine, July 2015

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by: 2 X users
Citations: 12 (Dimensions)
Readers: 112 (Mendeley)

Title: The ABCs of DKA: Development and Validation of a Computer-Based Simulator and Scoring System
Published in: Journal of General Internal Medicine, July 2015
DOI: 10.1007/s11606-015-3273-y
Pubmed ID:
Authors: Catherine H. Y. Yu, Sharon Straus, Ryan Brydges, PhD

Abstract

Clinical management of diabetic ketoacidosis (DKA) continues to be suboptimal; simulation-based training may bridge this gap and is particularly applicable to teaching DKA management because it supports learning of basic knowledge as well as clinical reasoning and patient management skills. Our objectives were: 1) to develop, test, and refine a computer-based simulator of DKA management; 2) to collect validity evidence according to the National Standards' validity framework; and 3) to judge whether the simulator scoring system is an appropriate measure of the DKA management skills of undergraduate and postgraduate medical trainees.

After developing the DKA simulator, we completed usability testing to optimize its functionality. We then conducted a preliminary validation of the scoring system for measuring trainees' DKA management skills. We recruited year 1 and year 3 medical students, year 2 postgraduate trainees, and endocrinologists (n = 75); each completed a simulator run, and we collected their simulator-computed scores. We collected validity evidence related to content, internal structure, relations with other variables, and consequences.

Our simulator consists of six cases highlighting DKA management priorities. Real-time progression of each case includes interactive order entry, laboratory and clinical data, and individualised feedback. Usability assessment identified issues with clarity of system status, user control, efficiency of use, and error prevention. Regarding validity evidence, Cronbach's α was 0.795 across the seven subscales, indicating favorable internal-structure evidence. Participants' scores showed a significant effect of training level (p < 0.001). Scores also correlated with the number of DKA patients participants reported treating, weeks spent on a Medicine rotation, and comfort with managing DKA. A simulator score of 75% had a sensitivity of 94.7% and a specificity of 51.8% for distinguishing expert staff physicians from trainees.

We demonstrate how a simulator and scoring system can be developed, tested, and refined to determine their quality for use as an assessment modality. Our evidence suggests that the simulator can be used for formative assessment of trainees' DKA management skills.
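
The abstract reports two key statistics: a Cronbach's α of 0.795 across the seven subscales and a sensitivity/specificity of 94.7%/51.8% at a 75% score cutoff. The sketch below is not the authors' implementation; it uses NumPy and entirely hypothetical scores to illustrate how those two quantities are conventionally computed, with the choice of "expert" as the positive class assumed for the cutoff analysis.

```python
# Minimal sketch of the two psychometric quantities reported in the abstract.
# All data below are hypothetical; this is not the study's code or dataset.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants, n_subscales) score matrix."""
    k = scores.shape[1]                          # number of subscales (7 in the study)
    item_vars = scores.var(axis=0, ddof=1)       # variance of each subscale
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def sensitivity_specificity(scores, is_expert, cutoff=75.0):
    """Treat 'score >= cutoff' as a positive (expert-level) classification."""
    scores = np.asarray(scores, dtype=float)
    is_expert = np.asarray(is_expert, dtype=bool)
    predicted_expert = scores >= cutoff
    tp = np.sum(predicted_expert & is_expert)    # experts correctly flagged
    fn = np.sum(~predicted_expert & is_expert)   # experts missed by the cutoff
    tn = np.sum(~predicted_expert & ~is_expert)  # trainees correctly below cutoff
    fp = np.sum(predicted_expert & ~is_expert)   # trainees scoring above cutoff
    return tp / (tp + fn), tn / (tn + fp)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 75 participants x 7 subscales, each scored 0-100.
    subscale_scores = rng.normal(70, 10, size=(75, 7)).clip(0, 100)
    print(f"alpha = {cronbach_alpha(subscale_scores):.3f}")

    total = subscale_scores.mean(axis=1)   # overall simulator score
    experts = np.arange(75) < 19           # hypothetical expert flags
    sens, spec = sensitivity_specificity(total, experts, cutoff=75.0)
    print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```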

X Demographics

The data shown below were collected from the profiles of 2 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 112 Mendeley readers of this research output.

Geographical breakdown

Country     Count   As %
Spain           1    <1%
Brazil          1    <1%
Unknown       110    98%

Demographic breakdown

Readers by professional status   Count   As %
Student > Master                    22    20%
Student > Ph. D. Student            12    11%
Student > Bachelor                  11    10%
Lecturer                             7     6%
Researcher                           6     5%
Other                               25    22%
Unknown                             29    26%

Readers by discipline            Count   As %
Medicine and Dentistry              26    23%
Nursing and Health Professions      16    14%
Social Sciences                      8     7%
Engineering                          6     5%
Computer Science                     4     4%
Other                               17    15%
Unknown                             35    31%
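
The "As %" columns above express each reader count as a share of the 112 Mendeley readers. The short sketch below reproduces the professional-status column; the rounding convention (nearest whole percent, with values below 1% shown as "<1%") is inferred from the tables rather than documented.

```python
# Reproduce the "As %" column from the raw reader counts in the tables above.
TOTAL_READERS = 112

def as_percent(count: int, total: int = TOTAL_READERS) -> str:
    pct = 100 * count / total
    return "<1%" if pct < 1 else f"{pct:.0f}%"   # assumed rounding rule

status_counts = {
    "Student > Master": 22, "Student > Ph. D. Student": 12, "Student > Bachelor": 11,
    "Lecturer": 7, "Researcher": 6, "Other": 25, "Unknown": 29,
}
for status, count in status_counts.items():
    print(f"{status:<26} {count:>3}  {as_percent(count)}")
```
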
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 20 July 2015.
All research outputs: #16,223,992 of 23,911,072 outputs
Outputs from Journal of General Internal Medicine: #6,057 of 7,806 outputs
Outputs of similar age: #157,308 of 265,850 outputs
Outputs of similar age from Journal of General Internal Medicine: #76 of 126 outputs
Altmetric has tracked 23,911,072 research outputs across all sources so far. This one is in the 21st percentile – i.e., 21% of other outputs scored the same or lower than it.
So far Altmetric has tracked 7,806 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 21.8. This one is in the 17th percentile – i.e., 17% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 265,850 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 32nd percentile – i.e., 32% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 126 others from the same source and published within six weeks on either side of this one. This one is in the 30th percentile – i.e., 30% of its contemporaries scored the same or lower than it.
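
The percentiles quoted above count the share of tracked outputs whose Altmetric Attention Score is the same as or lower than this output's score of 1. The toy sketch below illustrates that definition with a made-up pool of scores; it does not use Altmetric's actual score distribution.

```python
# Illustration of the "same or lower" percentile definition used above.
def attention_percentile(score: float, all_scores: list) -> float:
    """Percent of outputs in the pool scoring the same as or lower than `score`."""
    same_or_lower = sum(1 for s in all_scores if s <= score)
    return 100 * same_or_lower / len(all_scores)

# Hypothetical pool of 100 scores: most tracked outputs receive no mentions at all,
# so even an Attention Score of 1 can sit well above the bottom of the distribution.
pool = [0] * 50 + [1] * 20 + [3] * 15 + [10] * 10 + [50] * 5
print(f"{attention_percentile(1, pool):.0f}th percentile")  # -> 70th in this toy pool
```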