
Assuring the safety of AI-based clinical decision support systems: a case study of the AI Clinician for sepsis treatment

Overview of attention for article published in BMJ Health & Care Informatics, July 2022
About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Among the highest-scoring outputs from this source (#46 of 512)
  • High Attention Score compared to outputs of the same age (90th percentile)
  • Good Attention Score compared to outputs of the same age and source (75th percentile)

Mentioned by

  • News: 1 news outlet
  • X: 15 users

Citations

  • Dimensions: 17

Readers on

  • Mendeley: 54
DOI 10.1136/bmjhci-2022-100549
Pubmed ID
Authors

Paul Festor, Yan Jia, Anthony C Gordon, A Aldo Faisal, Ibrahim Habli, Matthieu Komorowski

Abstract

Establishing confidence in the safety of Artificial Intelligence (AI)-based clinical decision support systems is important prior to clinical deployment and regulatory approval, particularly for systems with increasing autonomy. Here, we undertook safety assurance of the AI Clinician, a previously published reinforcement learning-based treatment recommendation system for sepsis. As part of the safety assurance, we defined four clinical hazards in sepsis resuscitation based on clinical expert opinion and the existing literature. We then identified a set of unsafe scenarios, intended to limit the action space of the AI agent with the goal of reducing the likelihood of hazardous decisions. Using a subset of the Medical Information Mart for Intensive Care (MIMIC-III) database, we demonstrated that our previously published AI Clinician recommended fewer hazardous decisions than human clinicians in three out of our four predefined clinical scenarios, while the difference was not statistically significant in the fourth scenario. We then modified the reward function to satisfy our safety constraints and trained a new AI Clinician agent. The retrained model shows enhanced safety, without negatively impacting model performance. While some contextual patient information absent from the data may have pushed human clinicians to take hazardous actions, the data were curated to limit the impact of this confounder. These advances provide a use case for the systematic safety assurance of AI-based clinical systems towards the generation of explicit safety evidence, which could be replicated for other AI applications or other clinical contexts, and inform medical device regulatory bodies.
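The abstract describes modifying the reward function so that actions falling within predefined unsafe scenarios are discouraged. The sketch below is a hypothetical illustration of that general idea (safety-aware reward shaping), not the authors' actual implementation: the hazard check, field names (`fluid_balance_ml`, `iv_fluid_ml`), and penalty magnitude are all invented for illustration.

```python
# Hypothetical sketch of safety-aware reward shaping: penalise actions that
# match a predefined unsafe scenario. This is NOT the paper's implementation;
# all thresholds and field names below are illustrative assumptions.

HAZARD_PENALTY = -10.0  # assumed penalty magnitude


def is_unsafe(state: dict, action: dict) -> bool:
    """Toy stand-in for a clinical hazard check, e.g. recommending a large
    fluid bolus to a patient who is already fluid-overloaded."""
    fluid_overloaded = state.get("fluid_balance_ml", 0) > 5000
    large_bolus = action.get("iv_fluid_ml", 0) > 1000
    return fluid_overloaded and large_bolus


def shaped_reward(base_reward: float, state: dict, action: dict) -> float:
    """Return the original reward plus a fixed penalty whenever the chosen
    action falls inside an unsafe scenario."""
    if is_unsafe(state, action):
        return base_reward + HAZARD_PENALTY
    return base_reward
```

Training an agent on `shaped_reward` instead of `base_reward` steers its policy away from the flagged regions of the action space without changing anything for safe state-action pairs, which is consistent with the paper's report of enhanced safety without degraded performance.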


X Demographics

The data shown below were collected from the profiles of the 15 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 54 Mendeley readers of this research output.

Geographical breakdown

  Country    Count    As %
  Unknown    54       100%

Demographic breakdown

  Readers by professional status    Count    As %
  Student > Master                  7        13%
  Student > Postgraduate            4        7%
  Student > Ph. D. Student          4        7%
  Student > Doctoral Student        3        6%
  Other                             2        4%
  Other                             8        15%
  Unknown                           26       48%

  Readers by discipline                Count    As %
  Medicine and Dentistry               7        13%
  Unspecified                          4        7%
  Engineering                          4        7%
  Nursing and Health Professions       3        6%
  Arts and Humanities                  2        4%
  Other                                6        11%
  Unknown                              28       52%
Attention Score in Context

This research output has an Altmetric Attention Score of 19. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 27 April 2023.
  All research outputs: #2,022,521 of 26,391,552
  Outputs from BMJ Health & Care Informatics: #46 of 512
  Outputs of similar age: #41,830 of 424,112
  Outputs of similar age from BMJ Health & Care Informatics: #2 of 8
Altmetric has tracked 26,391,552 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 92nd percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 512 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 9.3. This one has done particularly well, scoring higher than 90% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 424,112 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 90% of its contemporaries.
We're also able to compare this research output to 8 others from the same source and published within six weeks on either side of this one. This one has scored higher than 6 of them.