
The use of mechanistic evidence in drug approval

Overview of attention for article published in Journal of Evaluation in Clinical Practice, June 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (88th percentile)
  • Above-average Attention Score compared to outputs of the same age and source (60th percentile)

Mentioned by

  • 2 blogs
  • 5 tweeters

Citations

  • 5 citations (Dimensions)

Readers

  • 5 readers (Mendeley)
Title
The use of mechanistic evidence in drug approval
Published in
Journal of Evaluation in Clinical Practice, June 2018
DOI 10.1111/jep.12960
Authors

Jeffrey K. Aronson, Adam La Caze, Michael P. Kelly, Veli-Pekka Parkkinen, Jon Williamson

Abstract

The role of mechanistic evidence tends to be under-appreciated in current evidence-based medicine (EBM), which focusses on clinical studies, tending to restrict attention to randomized controlled trials (RCTs) when they are available. The EBM+ programme seeks to redress this imbalance by suggesting methods for evaluating mechanistic studies alongside clinical studies. Drug approval is a problematic case for the view that mechanistic evidence should be taken into account, because RCTs are almost always available. Nevertheless, we argue that mechanistic evidence is central to all the key tasks in the drug approval process: in drug discovery and development; assessing pharmaceutical quality; devising dosage regimens; assessing efficacy, harms, external validity, and cost-effectiveness; evaluating adherence; and extending product licences. We recommend that, when preparing for meetings in which any aspect of drug approval is to be discussed, mechanistic evidence should be systematically analysed and presented to the committee members alongside analyses of clinical studies.

Twitter Demographics

The data shown below were collected from the profiles of 5 tweeters who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 5 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    5        100%

Demographic breakdown

Readers by professional status    Count    As %
Lecturer                          1        20%
Student > Bachelor                1        20%
Student > Ph.D. Student           1        20%
Researcher                        1        20%
Unspecified                       1        20%
Other                             0        0%

Readers by discipline             Count    As %
Philosophy                        2        40%
Unspecified                       1        20%
Nursing and Health Professions    1        20%
Medicine and Dentistry            1        20%

Attention Score in Context

This research output has an Altmetric Attention Score of 19. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 21 December 2018.
All research outputs: #864,633 of 13,818,870 outputs
Outputs from Journal of Evaluation in Clinical Practice: #78 of 1,016 outputs
Outputs of similar age: #30,769 of 274,839 outputs
Outputs of similar age from Journal of Evaluation in Clinical Practice: #10 of 25 outputs
Altmetric has tracked 13,818,870 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 93rd percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,016 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 7.6. This one has done particularly well, scoring higher than 92% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 274,839 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 88% of its contemporaries.
We're also able to compare this research output to 25 others from the same source and published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 60% of its contemporaries.
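The percentile figures quoted above follow directly from the ranks and pool sizes in the comparison table: each percentile is the share of tracked outputs in the relevant pool that this output outscored. A minimal sketch of that arithmetic in Python (the exact rounding Altmetric applies is an assumption here; simple floor rounding reproduces the figures shown on this page):

```python
import math

def percentile(rank: int, total: int) -> int:
    """Whole-number percentile: share of outputs ranked below this one (rank 1 = highest score)."""
    return math.floor((1 - rank / total) * 100)

# Ranks and pool sizes taken from the comparison table above.
contexts = {
    "All research outputs": (864_633, 13_818_870),
    "Journal of Evaluation in Clinical Practice": (78, 1_016),
    "Outputs of similar age": (30_769, 274_839),
    "Similar age, same source": (10, 25),
}

for label, (rank, total) in contexts.items():
    print(f"{label}: percentile {percentile(rank, total)}")
# Prints 93, 92, 88, and 60 — matching the percentiles quoted in the text above.
```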