
Comparative study of ChatGPT and human evaluators on the assessment of medical literature according to recognised reporting standards

Overview of attention for article published in BMJ Health & Care Informatics, October 2023

About this Attention Score

  • Good Attention Score compared to outputs of the same age (70th percentile)
  • Good Attention Score compared to outputs of the same age and source (69th percentile)

Mentioned by

  • 6 X users

Citations

  • 8 citations (Dimensions)

Readers on

  • 22 readers (Mendeley)
Published in
BMJ Health & Care Informatics, October 2023
DOI 10.1136/bmjhci-2023-100830
Authors

Richard HR Roberts, Stephen R Ali, Hayley A Hutchings, Thomas D Dobbs, Iain S Whitaker

Abstract

Amid clinicians' challenges in staying updated with medical research, artificial intelligence (AI) tools like the large language model (LLM) ChatGPT could automate appraisal of research quality, saving time and reducing bias. This study compares the proficiency of ChatGPT3 against human evaluation in scoring abstracts to determine its potential as a tool for evidence synthesis. We compared ChatGPT's scoring of implant dentistry abstracts with human evaluators using the Consolidated Standards of Reporting Trials for Abstracts reporting standards checklist, yielding an overall compliance score (OCS). Bland-Altman analysis assessed agreement between human and AI-generated OCS percentages. Additional error analysis included mean difference of OCS subscores, Welch's t-test and Pearson's correlation coefficient. Bland-Altman analysis showed a mean difference of 4.92% (95% CI 0.62%, 0.37%) in OCS between human evaluation and ChatGPT. Error analysis displayed small mean differences in most domains, with the highest in 'conclusion' (0.764 (95% CI 0.186, 0.280)) and the lowest in 'blinding' (0.034 (95% CI 0.818, 0.895)). The strongest correlations were in 'harms' (r=0.32, p<0.001) and 'trial registration' (r=0.34, p=0.002), whereas the weakest were in 'intervention' (r=0.02, p<0.001) and 'objective' (r=0.06, p<0.001). LLMs like ChatGPT can help automate appraisal of medical literature, aiding in the identification of accurately reported research. Possible applications of ChatGPT include integration within medical databases for abstract evaluation. Current limitations include the token limit, which restricts its usage to abstracts. As AI technology advances, future versions like GPT4 could offer more reliable, comprehensive evaluations, enhancing the identification of high-quality research and potentially improving patient outcomes.
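For readers who want to see how the analyses named in the abstract fit together, here is a minimal sketch of a Bland-Altman agreement calculation, Welch's t-test and Pearson's correlation on paired OCS percentages. The values and variable names are invented placeholders for illustration, not data from the study.

```python
# Minimal sketch of the agreement analyses described in the abstract.
# The paired OCS percentages below are hypothetical placeholders,
# not data from the study.
import numpy as np
from scipy import stats

# Hypothetical overall compliance scores (%) for the same abstracts,
# scored by human evaluators and by ChatGPT.
human_ocs = np.array([62.0, 55.0, 71.0, 48.0, 66.0, 59.0, 73.0, 51.0])
chatgpt_ocs = np.array([58.0, 52.0, 64.0, 45.0, 60.0, 57.0, 68.0, 47.0])

# Bland-Altman analysis: mean difference (bias) and 95% limits of agreement.
diff = human_ocs - chatgpt_ocs
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"Mean difference: {bias:.2f}% "
      f"(95% limits of agreement: {bias - loa:.2f}% to {bias + loa:.2f}%)")

# Welch's t-test (does not assume equal variances).
t_stat, t_p = stats.ttest_ind(human_ocs, chatgpt_ocs, equal_var=False)
print(f"Welch's t-test: t = {t_stat:.2f}, p = {t_p:.3f}")

# Pearson's correlation coefficient between the two sets of scores.
r, r_p = stats.pearsonr(human_ocs, chatgpt_ocs)
print(f"Pearson's r = {r:.2f}, p = {r_p:.3f}")
```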

X Demographics

Demographic information was compiled from the profiles of the 6 X users who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for the 22 Mendeley readers of this research output.

Geographical breakdown

  • Unknown: 22 readers (100%)

Demographic breakdown

Readers by professional status:

  • Unspecified: 2 (9%)
  • Student > Master: 1 (5%)
  • Lecturer: 1 (5%)
  • Student > Bachelor: 1 (5%)
  • Student > Ph.D. Student: 1 (5%)
  • Other: 3 (14%)
  • Unknown: 13 (59%)
Readers by discipline:

  • Medicine and Dentistry: 3 (14%)
  • Unspecified: 2 (9%)
  • Computer Science: 2 (9%)
  • Social Sciences: 1 (5%)
  • Unknown: 14 (64%)
Attention Score in Context

This research output has an Altmetric Attention Score of 5. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 13 November 2023.
  • All research outputs: #7,453,701 of 26,617,918 outputs
  • Outputs from BMJ Health & Care Informatics: #173 of 517 outputs
  • Outputs of similar age: #109,050 of 370,284 outputs
  • Outputs of similar age from BMJ Health & Care Informatics: #4 of 13 outputs
Altmetric has tracked 26,617,918 research outputs across all sources so far. This one has received more attention than most of these and is in the 71st percentile.
So far Altmetric has tracked 517 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 9.9. This one has gotten more attention than average, scoring higher than 66% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 370,284 tracked outputs that were published within six weeks on either side of this one in any source. This one has gotten more attention than average, scoring higher than 70% of its contemporaries.
We're also able to compare this research output to 13 others from the same source and published within six weeks on either side of this one. This one has gotten more attention than average, scoring higher than 69% of its contemporaries.
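The percentile figures quoted above follow from the rank-versus-total pairs listed in the context table. The sketch below reproduces them under the assumption of a simple floor-rounding convention; Altmetric does not publish its exact formula, so this is illustrative only.

```python
# Illustrative mapping from a rank such as "#7,453,701 of 26,617,918 outputs"
# to a percentile statement such as "in the 71st percentile".
# The floor-rounding convention is an assumption, not a documented formula.
import math

def rank_to_percentile(rank: int, total: int) -> int:
    """Percentage of outputs that rank below the given output (rank 1 = top)."""
    return math.floor((1 - rank / total) * 100)

print(rank_to_percentile(7_453_701, 26_617_918))  # 71 -> all research outputs
print(rank_to_percentile(173, 517))               # 66 -> outputs from this journal
print(rank_to_percentile(109_050, 370_284))       # 70 -> outputs of similar age
print(rank_to_percentile(4, 13))                  # 69 -> similar age, same journal
```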