
Mode equivalence and acceptability of tablet computer-, interactive voice response system-, and paper-based administration of the U.S. National Cancer Institute’s Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE)

Overview of attention for article published in Health and Quality of Life Outcomes, February 2016

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (54th percentile)
  • Good Attention Score compared to outputs of the same age and source (78th percentile)

Mentioned by

  • X (Twitter): 4 users

Readers on

  • Mendeley: 114 readers
Title
Mode equivalence and acceptability of tablet computer-, interactive voice response system-, and paper-based administration of the U.S. National Cancer Institute’s Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE)
Published in
Health and Quality of Life Outcomes, February 2016
DOI 10.1186/s12955-016-0426-6
PubMed ID
Authors

Antonia V. Bennett, Amylou C. Dueck, Sandra A. Mitchell, Tito R. Mendoza, Bryce B. Reeve, Thomas M. Atkinson, Kathleen M. Castro, Andrea Denicoff, Lauren J. Rogak, Jay K. Harness, James D. Bearden, Donna Bryant, Robert D. Siegel, Deborah Schrag, Ethan Basch, on behalf of the National Cancer Institute PRO-CTCAE Study Group

Abstract

PRO-CTCAE is a library of items that measure cancer treatment-related symptomatic adverse events (NCI Contracts: HHSN261201000043C and HHSN261201000063C). The objective of this study was to examine the equivalence and acceptability of the three data collection modes (Web-enabled touchscreen tablet computer, interactive voice response system [IVRS], and paper) available within the US National Cancer Institute (NCI) Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE) measurement system. Participants (n = 112; median age 56.5; 24 % high school or less) receiving treatment for cancer at seven US sites completed 28 PRO-CTCAE items (scoring range 0-4) by three modes (order randomized) at a single study visit. Subjects completed one page (approx. 15 items) of the EORTC QLQ-C30 between each mode as a distractor. Item scores by mode were compared using intraclass correlation coefficients (ICC); differences in scores within the 3-mode crossover design were evaluated with mixed-effects models. Difficulties with each mode experienced by participants were also assessed. 103 participants (92 %) completed questionnaires by all three modes. The median ICC comparing tablet vs IVRS was 0.78 (range 0.55-0.90); tablet vs paper: 0.81 (0.62-0.96); IVRS vs paper: 0.78 (0.60-0.91); 89 % of ICCs were ≥0.70. Item-level mean differences by mode were small (medians [ranges] for tablet vs IVRS = -0.04 [-0.16 to 0.22]; tablet vs paper = -0.02 [-0.11 to 0.14]; IVRS vs paper = 0.02 [-0.07 to 0.19]), and 57/81 (70 %) items had bootstrapped 95 % CIs around the effect sizes within ±0.20. The median time to complete the questionnaire by tablet was 3.4 min; IVRS: 5.8; paper: 4.0. The proportion of participants by mode who reported "no problems" responding to the questionnaire was 86 % for tablet, 72 % for IVRS, and 98 % for paper. Mode equivalence of items was moderate to high, and comparable to test-retest reliability (median ICC = 0.80). Each mode was acceptable to a majority of respondents. Although the study was powered to detect moderate or larger discrepancies between modes, the observed ICCs and very small mean differences between modes provide evidence to support study designs that are responsive to patient or investigator preference for mode of administration, and justify comparison of results and pooled analyses across studies that employ different PRO-CTCAE modes of administration. ClinicalTrials.gov identifier: NCT02158637.
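
The abstract states that item scores collected by different modes were compared with intraclass correlation coefficients. The sketch below is a rough illustration of that kind of comparison, not the study's analysis code: it computes ICC(2,1) (two-way random effects, absolute agreement, single measurement) for made-up paired scores on one 0-4 item administered by two modes. The abstract does not say which ICC variant the authors used, so that choice is an assumption.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    scores: (n_subjects, k_modes) array of item scores, one column per
    administration mode (e.g. tablet vs. paper for a single PRO-CTCAE item).
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-mode means

    # Two-way ANOVA sums of squares
    ss_rows = k * np.sum((row_means - grand_mean) ** 2)
    ss_cols = n * np.sum((col_means - grand_mean) ** 2)
    ss_total = np.sum((scores - grand_mean) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical paired scores (0-4) for one item by two modes
tablet = [0, 1, 2, 3, 1, 0, 4, 2]
paper  = [0, 1, 2, 2, 1, 0, 4, 3]
print(round(icc_2_1(np.column_stack([tablet, paper])), 2))  # ~0.94 for these made-up data
```

ICCs of 0.70 or higher, as reported for 89 % of the item-level comparisons, are commonly treated as acceptable agreement.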

X Demographics

The data shown below were collected from the profiles of 4 X users who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 114 Mendeley readers of this research output. The "As %" figures in the tables below are each count's share of that 114-reader total (see the short sketch after the tables).

Geographical breakdown

Country    Count    As %
Unknown    114      100%

Demographic breakdown

Readers by professional status    Count    As %
Researcher                        26       23%
Student > Ph.D. Student           13       11%
Other                             9        8%
Student > Doctoral Student        9        8%
Student > Master                  9        8%
Other                             20       18%
Unknown                           28       25%

Readers by discipline             Count    As %
Medicine and Dentistry            32       28%
Nursing and Health Professions    11       10%
Psychology                        7        6%
Social Sciences                   6        5%
Computer Science                  4        4%
Other                             19       17%
Unknown                           35       31%
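
The "As %" column is just each reader count divided by the 114-reader total, rounded to the nearest whole percent. A minimal sketch using the professional-status counts from the table above (the repeated "Other" row is kept as it appears on the page):

```python
# Reader counts from the professional-status table above (total = 114 Mendeley readers)
counts = [
    ("Researcher", 26),
    ("Student > Ph.D. Student", 13),
    ("Other", 9),
    ("Student > Doctoral Student", 9),
    ("Student > Master", 9),
    ("Other", 20),
    ("Unknown", 28),
]
total = 114
for status, n in counts:
    # "As %" = count over the reader total, rounded to the nearest percent
    print(f"{status:<28} {n:>3}  {round(100 * n / total)}%")
```
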
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 21 September 2019.
  • All research outputs: #12,947,444 of 22,851,489 outputs
  • Outputs from Health and Quality of Life Outcomes: #966 of 2,159 outputs
  • Outputs of similar age: #134,143 of 297,903 outputs
  • Outputs of similar age from Health and Quality of Life Outcomes: #8 of 42 outputs

Altmetric has tracked 22,851,489 research outputs across all sources so far. This one is in the 42nd percentile – i.e., 42% of other outputs scored the same or lower than it.
So far Altmetric has tracked 2,159 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 5.4. This one has gotten more attention than average, scoring higher than 54% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 297,903 tracked outputs that were published within six weeks on either side of this one in any source. This one has gotten more attention than average, scoring higher than 54% of its contemporaries.
We're also able to compare this research output to 42 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 78% of its contemporaries.
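
All of these comparisons are rank-based: the article's position within a comparison group determines its percentile. A minimal sketch of that arithmetic, using the ranks and group sizes quoted above; Altmetric's exact handling of tied scores is not described on this page, so this is an approximation rather than the official formula:

```python
def percentile_from_rank(rank, total):
    """Approximate percentile: share of outputs ranked below this one.

    rank  -- 1-based position when outputs are sorted from most to least attention
    total -- number of outputs in the comparison group
    Tie handling is assumed, not taken from Altmetric's documentation.
    """
    return 100.0 * (total - rank) / total

# Ranks and group sizes quoted above for this article
print(round(percentile_from_rank(12_947_444, 22_851_489)))  # all outputs -> ~43
print(round(percentile_from_rank(134_143, 297_903)))        # similar age -> ~55
print(round(percentile_from_rank(8, 42)))                   # same age and source -> ~81
```

These approximations land a few points above the percentiles quoted on the page (42nd, 54th, and 78th), which is consistent with tied scores being counted differently in the official calculation.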