
Quantifying Benefit–Risk Preferences for Medical Interventions: An Overview of a Growing Empirical Literature

Overview of attention for an article published in Applied Health Economics and Health Policy, May 2013

About this Attention Score

  • Average Attention Score compared to outputs of the same age

Mentioned by

4 X users

Citations

106 Dimensions

Readers on

115 Mendeley
1 CiteULike
Title
Quantifying Benefit–Risk Preferences for Medical Interventions: An Overview of a Growing Empirical Literature
Published in
Applied Health Economics and Health Policy, May 2013
DOI
10.1007/s40258-013-0028-y
Authors
A. Brett Hauber, Angelyn O. Fairchild, F. Reed Johnson

Abstract

Decisions regarding the development, regulation, sale, and utilization of pharmaceutical and medical interventions require an evaluation of the balance between benefits and risks. Such evaluations are subject to two fundamental challenges: measuring the clinical effectiveness and harms associated with the treatment, and determining the relative importance of these different types of outcomes. In some ways, determining the willingness to accept treatment-related risks in exchange for treatment benefits is the greater challenge because it involves the individual subjective judgments of many decision makers, and these decision makers may draw different conclusions about the optimal balance between benefits and risks. In response to increasing demand for benefit-risk evaluations, researchers have applied a variety of existing welfare-theoretic preference methods for quantifying the tradeoffs decision makers are willing to accept among expected clinical benefits and risks. The methods used to elicit benefit-risk preferences have evolved from different theoretical backgrounds. To provide some structure to the literature that accommodates the range of approaches, we begin by describing a welfare-theoretic conceptual framework underlying the measurement of benefit-risk preferences in pharmaceutical and medical treatment decisions. We then review the major benefit-risk preference-elicitation methods in the empirical literature and provide a brief overview of the studies using each of these methods. The benefit-risk preference methods described in this overview fall into two broad categories: direct-elicitation methods and conjoint analysis. Rating scales (6 studies), threshold techniques (9 studies), and standard gamble (2 studies) are examples of direct-elicitation methods. Conjoint analysis studies are categorized by the question format used in the study, including ranking (1 study), graded pairs (1 study), and discrete choice (21 studies). The number of studies reviewed here demonstrates that this body of research already is substantial, and it appears that the number of benefit-risk preference studies in the literature will continue to increase. In addition, benefit-risk preference-elicitation methods have been applied to a variety of healthcare decisions and medical interventions, including pharmaceuticals, medical devices, surgical and medical procedures, and diagnostics, as well as resource-allocation decisions such as facility placement. While preference-elicitation approaches may differ across studies, all of the studies described in this review can be used to provide quantitative measures of the tradeoffs patients and other decision makers are willing to make between benefits and risks of medical interventions. Eliciting and quantifying the preferences of decision makers allows for a formal, evidence-based consideration of decision-makers' values that currently is lacking in regulatory decision making. Future research in this area should focus on two primary issues: developing best-practice standards for preference-elicitation studies and developing methods for combining stated preferences and clinical data in a manner that is both understandable and useful to regulatory agencies.
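
In the discrete-choice studies the abstract refers to, tradeoffs are commonly summarized as ratios of estimated preference weights, for example a maximum acceptable risk: the risk increase whose disutility just offsets the utility of a given benefit gain. The sketch below illustrates only that ratio calculation; the coefficient values, attribute units, and function name are hypothetical and are not taken from the article or any study it reviews.

```python
# Illustrative sketch of a benefit-risk tradeoff derived from discrete-choice
# preference weights. All numbers below are hypothetical placeholders, not
# estimates from the article or any study it reviews.

def maximum_acceptable_risk(benefit_weight, risk_weight, benefit_change):
    """Risk increase whose disutility exactly offsets the utility gained from
    `benefit_change` units of benefit.

    benefit_weight: marginal utility per unit of benefit (e.g., per percentage
                    point of treatment response)
    risk_weight:    marginal utility per unit of risk (negative, since risk is
                    a harm)
    benefit_change: size of the benefit improvement being valued
    """
    if risk_weight >= 0:
        raise ValueError("risk_weight should be negative (risk is a disutility)")
    return benefit_weight * benefit_change / abs(risk_weight)

# Hypothetical conditional-logit coefficients:
#   +0.04 utility per percentage point of treatment response
#   -0.25 utility per percentage point of serious adverse-event risk
mar = maximum_acceptable_risk(benefit_weight=0.04, risk_weight=-0.25,
                              benefit_change=10.0)
print(f"Maximum acceptable risk for a 10-point efficacy gain: {mar:.1f} percentage points")
```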

X Demographics

The data shown below were collected from the profiles of 4 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 115 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
United States 3 3%
Portugal 1 <1%
Netherlands 1 <1%
Colombia 1 <1%
Canada 1 <1%
United Kingdom 1 <1%
Unknown 107 93%

Demographic breakdown

Readers by professional status Count As %
Researcher 23 20%
Student > Ph. D. Student 17 15%
Student > Master 14 12%
Student > Doctoral Student 7 6%
Student > Bachelor 6 5%
Other 18 16%
Unknown 30 26%
Readers by discipline Count As %
Medicine and Dentistry 26 23%
Economics, Econometrics and Finance 9 8%
Agricultural and Biological Sciences 7 6%
Social Sciences 6 5%
Pharmacology, Toxicology and Pharmaceutical Science 6 5%
Other 30 26%
Unknown 31 27%
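
The "As %" columns in the tables above divide each count by the 115 Mendeley readers. Altmetric's exact rounding rule is not stated on this page; the sketch below simply reproduces the figures shown, assuming conventional rounding and a "<1%" label for shares below one percent.

```python
# Reproduces the "As %" columns above from the raw counts, assuming the
# 115 Mendeley readers as the denominator, conventional rounding, and a
# "<1%" label for shares below one percent (Altmetric's exact rule is not
# documented on this page).

TOTAL_READERS = 115

def as_percent(count: int, total: int = TOTAL_READERS) -> str:
    share = 100 * count / total
    return "<1%" if share < 1 else f"{round(share)}%"

print(as_percent(3))    # United States  -> 3%
print(as_percent(1))    # Portugal       -> <1%
print(as_percent(107))  # Unknown        -> 93%
print(as_percent(17))   # Ph.D. students -> 15%
```
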
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 14 August 2013.
All research outputs
#13,383,750 of 22,708,120 outputs
Outputs from Applied Health Economics and Health Policy
#441 of 767 outputs
Outputs of similar age
#101,752 of 192,823 outputs
Outputs of similar age from Applied Health Economics and Health Policy
#11 of 14 outputs
Altmetric has tracked 22,708,120 research outputs across all sources so far. This one is in the 39th percentile – i.e., 39% of other outputs scored the same or lower than it.
So far Altmetric has tracked 767 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 8.9. This one is in the 41st percentile – i.e., 41% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 192,823 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 45th percentile – i.e., 45% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 14 others from the same source and published within six weeks on either side of this one. This one is in the 21st percentile – i.e., 21% of its contemporaries scored the same or lower than it.
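
The percentile figures above all use the same "scored the same or lower" convention. A minimal sketch of that calculation, with a made-up score distribution standing in for Altmetric's comparison pools (the full set of tracked outputs, or the source- and age-matched subsets described above):

```python
# Minimal sketch of the "same or lower" percentile convention used above.
# The peer_scores list is hypothetical; Altmetric's real comparison pools are
# the 22,708,120 tracked outputs and the source/age-matched subsets.

def percentile_rank(score: float, all_scores: list[float]) -> float:
    """Percentage of outputs scoring the same as or lower than `score`."""
    same_or_lower = sum(1 for s in all_scores if s <= score)
    return 100.0 * same_or_lower / len(all_scores)

peer_scores = [0, 0, 1, 1, 1, 2, 2, 3, 5, 12]  # hypothetical peer group
print(f"{percentile_rank(2, peer_scores):.0f}th percentile")  # -> 70th percentile
```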