Exploring differences in adverse symptom event grading thresholds between clinicians and patients in the clinical trial setting

Overview of attention for article published in Journal of Cancer Research and Clinical Oncology, January 2017

Mentioned by
1 X user

Citations
37 (Dimensions)

Readers
47 (Mendeley)
DOI 10.1007/s00432-016-2335-9
Pubmed ID
Authors

Thomas M. Atkinson, Lauren J. Rogak, Narre Heon, Sean J. Ryan, Mary Shaw, Liora P. Stark, Antonia V. Bennett, Ethan Basch, Yuelin Li

Abstract

Symptomatic adverse event (AE) monitoring is essential in cancer clinical trials to assess patient safety, as well as inform decisions related to treatment and continued trial participation. As prior research has demonstrated that conventional concordance metrics (e.g., intraclass correlation) may not capture nuanced aspects of the association between clinician and patient-graded AEs, we aimed to characterize differences in AE grading thresholds between doctors (MDs), registered nurses (RNs), and patients using the Bayesian Graded Item Response Model (GRM). From the medical charts of 393 patients aged 26-91 (M = 62.39; 43% male) receiving chemotherapy, we retrospectively extracted MD, RN and patient AE ratings. Patients reported using previously developed Common Terminology Criteria for Adverse Events (CTCAE) patient-language adaptations called STAR (Symptom Tracking and Reporting). A GRM was fitted to calculate the latent grading thresholds between MDs, RNs and patients. Clinicians have overall higher average grading thresholds than patients when assessing diarrhea, dyspnea, nausea and vomiting. However, RNs have lower grading thresholds than patients and MDs when assessing constipation. The GRM shows higher variability in patients' AE grading thresholds than those obtained from clinicians. The present study provides evidence to support the notion that patients report some AEs that clinicians might not consider noteworthy until they are more severe. The availability of GRM methodology could serve to enhance clinical understanding of the patient symptomatic experience and facilitate discussion where AE grading discrepancies exist. Future work should focus on capturing explicit AE grading decision criteria from MDs, RNs, and patients.
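As an illustration of the graded response model idea described in the abstract, the sketch below shows how a uniform upward shift in a rater's latent grading thresholds lowers the grade assigned to the same underlying symptom severity. This is not the authors' code or fitted model: the grade scale (0-3), discrimination parameter, and all threshold values are hypothetical, chosen only to mirror the qualitative finding that clinicians require more severity before assigning each grade.

```python
import numpy as np

def grade_probabilities(theta, a, thresholds):
    """Samejima's graded response model.

    P(grade >= k) = logistic(a * (theta - b_k)) for k = 1..K, where the b_k
    are the rater's increasing grading thresholds on the latent severity
    scale. Returns the probability of each grade 0..K for severity theta.
    """
    b = np.asarray(thresholds, dtype=float)
    p_ge = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(grade >= k), k = 1..K
    p_ge = np.concatenate(([1.0], p_ge, [0.0]))     # pad: P(>= 0) = 1, P(> K) = 0
    return p_ge[:-1] - p_ge[1:]                     # P(grade == k) by differencing

# Hypothetical thresholds for AE grades 1, 2, 3; the clinician's are
# uniformly shifted upward, i.e., more severity is needed for each grade.
patient_b   = [-1.0, 0.0, 1.0]
clinician_b = [-0.3, 0.7, 1.7]

theta = 0.2   # one patient's latent symptom severity
a = 1.5       # discrimination (slope) parameter

p_patient   = grade_probabilities(theta, a, patient_b)
p_clinician = grade_probabilities(theta, a, clinician_b)

print("patient grade probabilities:  ", np.round(p_patient, 3))
print("clinician grade probabilities:", np.round(p_clinician, 3))
print("expected patient grade:  ", np.dot(range(4), p_patient))
print("expected clinician grade:", np.dot(range(4), p_clinician))
```

Under these assumed parameters the clinician is more likely than the patient to assign grade 0 to the same severity, and the clinician's expected grade is lower, which is the threshold-difference pattern the GRM is used to quantify.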

X Demographics

The data shown below were collected from the profile of the 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 47 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    47       100%

Demographic breakdown

Readers by professional status    Count    As %
Student > Ph.D. Student           8        17%
Student > Bachelor                7        15%
Researcher                        7        15%
Other                             6        13%
Student > Master                  5        11%
Other                             9        19%
Unknown                           5        11%
Readers by discipline                                  Count    As %
Medicine and Dentistry                                 13       28%
Nursing and Health Professions                         7        15%
Psychology                                             4        9%
Computer Science                                       3        6%
Pharmacology, Toxicology and Pharmaceutical Science    2        4%
Other                                                  10       21%
Unknown                                                8        17%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 23 January 2017.
All research outputs: #19,221,261 of 23,815,455 outputs
Outputs from Journal of Cancer Research and Clinical Oncology: #1,814 of 2,632 outputs
Outputs of similar age: #314,018 of 422,785 outputs
Outputs of similar age from Journal of Cancer Research and Clinical Oncology: #13 of 21 outputs
Altmetric has tracked 23,815,455 research outputs across all sources so far. This one is in the 11th percentile – i.e., 11% of other outputs scored the same or lower than it.
So far Altmetric has tracked 2,632 research outputs from this source. They receive a mean Attention Score of 3.6. This one is in the 22nd percentile – i.e., 22% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 422,785 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 15th percentile – i.e., 15% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 21 others from the same source and published within six weeks on either side of this one. This one is in the 23rd percentile – i.e., 23% of its contemporaries scored the same or lower than it.