
Identifying scientific artefacts in biomedical literature: The Evidence Based Medicine use case

Overview of attention for article published in Journal of Biomedical Informatics, February 2014
Mentioned by: 1 X user
Citations: 37 (Dimensions)
Readers: 79 (Mendeley)
Published in: Journal of Biomedical Informatics, February 2014
DOI: 10.1016/j.jbi.2014.02.006
Authors

Hamed Hassanzadeh, Tudor Groza, Jane Hunter

Abstract

Evidence Based Medicine (EBM) provides a framework that makes use of the current best evidence in the domain to support clinicians in the decision making process. In most cases, the underlying foundational knowledge is captured in scientific publications that detail specific clinical studies or randomised controlled trials. Over the course of the last two decades, research has been performed on modelling key aspects described within publications (e.g., aims, methods, results), to enable the successful realisation of the goals of EBM. A significant outcome of this research has been the PICO (Population/Problem-Intervention-Comparison-Outcome) structure, and its refined version PIBOSO (Population-Intervention-Background-Outcome-Study Design-Other), both of which provide a formalisation of these scientific artefacts. Subsequently, using these schemes, diverse automatic extraction techniques have been proposed to streamline the knowledge discovery and exploration process in EBM. In this paper, we present a Machine Learning approach that aims to classify sentences according to the PIBOSO scheme. We use a discriminative set of features that do not rely on any external resources to achieve results comparable to the state of the art. A corpus of 1000 structured and unstructured abstracts - i.e., the NICTA-PIBOSO corpus - is used for training and testing. Our best CRF classifier achieves a micro-average F-score of 90.74% and 87.21%, respectively, over structured and unstructured abstracts, which represents an increase of 25.48 percentage points and 26.6 percentage points in F-score when compared to the best existing approaches.
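The micro-averaged F-score reported in the abstract pools true positives, false positives, and false negatives across all PIBOSO classes before computing a single precision and recall. A minimal sketch of that calculation — the per-class counts below are made-up for illustration, not the paper's actual figures:

```python
def micro_f_score(counts):
    """Micro-averaged F-score.

    counts: list of (tp, fp, fn) tuples, one per class.
    Counts are pooled across classes first, then precision,
    recall, and F are computed once on the pooled totals.
    """
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for six PIBOSO classes
counts = [(90, 10, 5), (80, 5, 10), (70, 10, 10),
          (60, 5, 5), (50, 10, 10), (40, 5, 5)]
print(round(micro_f_score(counts), 4))  # → 0.8966
```

Because frequent classes contribute more pooled counts, micro-averaging weights each sentence equally rather than each class, which is why it is a common choice for the imbalanced label distributions typical of abstract sentence classification.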

X Demographics

The data shown below were collected from the profile of 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 79 Mendeley readers of this research output.

Geographical breakdown

Country           Count    %
United States         2    3%
United Kingdom        1    1%
Unknown              76   96%

Demographic breakdown

Readers by professional status     Count    %
Student > Ph.D. Student               19   24%
Student > Master                      10   13%
Professor > Associate Professor        6    8%
Professor                              6    8%
Student > Bachelor                     6    8%
Other                                 18   23%
Unknown                               14   18%
Readers by discipline                  Count    %
Computer Science                          24   30%
Medicine and Dentistry                    15   19%
Social Sciences                            6    8%
Nursing and Health Professions             6    8%
Agricultural and Biological Sciences       4    5%
Other                                      9   11%
Unknown                                   15   19%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 15 February 2014.
All research outputs: #20,655,488 of 25,371,288
Outputs from Journal of Biomedical Informatics: #1,819 of 2,247
Outputs of similar age: #250,120 of 330,507
Outputs of similar age from Journal of Biomedical Informatics: #32 of 42
Altmetric has tracked 25,371,288 research outputs across all sources so far. This one is in the 10th percentile – i.e., 10% of other outputs scored the same or lower than it.
So far Altmetric has tracked 2,247 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 5.3. This one is in the 5th percentile – i.e., 5% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 330,507 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 12th percentile – i.e., 12% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 42 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
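The "same or lower" percentile rule used in the comparisons above can be sketched as a simple count over a score distribution. The scores below are hypothetical, not Altmetric's actual data:

```python
def percentile_same_or_lower(score, all_scores):
    """Percent of outputs whose score is <= the given score,
    matching the 'scored the same or lower' phrasing above."""
    same_or_lower = sum(1 for s in all_scores if s <= score)
    return 100 * same_or_lower // len(all_scores)

scores = [0, 0, 1, 1, 2, 3, 5, 8, 13, 21]  # hypothetical distribution
print(percentile_same_or_lower(1, scores))  # 4 of 10 outputs score 0 or 1
```

An Attention Score of 1 in this toy distribution lands in the 40th percentile; the same counting rule applied to the full set of tracked outputs, or to the age- and source-restricted subsets, yields the percentile figures quoted above.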