
Supporting systematic reviews using LDA-based document representations

Overview of attention for article published in Systematic Reviews, November 2015

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (86th percentile)
  • Good Attention Score compared to outputs of the same age and source (70th percentile)

Mentioned by

Twitter: 15 tweeters

Citations

Dimensions: 20 citations

Readers on

Mendeley: 85 readers
Title
Supporting systematic reviews using LDA-based document representations
Published in
Systematic Reviews, November 2015
DOI 10.1186/s13643-015-0117-0
Pubmed ID
Authors

Yuanhan Mo, Georgios Kontonatsios, Sophia Ananiadou

Abstract

Identifying relevant studies for inclusion in a systematic review (i.e. screening) is a complex, laborious and expensive task. Recently, a number of studies have shown that using machine learning and text mining methods to automatically identify relevant studies has the potential to drastically decrease the workload involved in the screening phase. The vast majority of these machine learning methods exploit the same underlying principle, i.e. a study is modelled as a bag-of-words (BOW). We explore the use of topic modelling methods to derive a more informative representation of studies. We apply latent Dirichlet allocation (LDA), an unsupervised topic modelling approach, to automatically identify topics in a collection of studies, and we then represent each study as a distribution over LDA topics. Additionally, we enrich the topics derived using LDA with multi-word terms identified by an automatic term recognition (ATR) tool. For evaluation purposes, we carry out automatic identification of relevant studies using support vector machine (SVM)-based classifiers that employ either our novel topic-based representation or the BOW representation. Our results show that the SVM classifier identifies a greater number of relevant studies when using the LDA representation than when using the BOW representation. These observations hold for two systematic reviews from the clinical domain and three reviews from the social science domain. A topic-based feature representation of documents thus outperforms the BOW representation when applied to the task of automatic citation screening, and the proposed term-enriched topics are more informative and less ambiguous to systematic reviewers.
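The pipeline described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: it uses scikit-learn's `LatentDirichletAllocation` and `LinearSVC` with a toy corpus and hypothetical relevance labels, and it omits the ATR-based term enrichment step described in the paper.

```python
# Sketch (assumed toolkit, not the authors' code): represent each document
# as a distribution over LDA topics, then train an SVM on that representation
# instead of on raw bag-of-words counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC

# Toy corpus standing in for candidate studies in a screening task.
docs = [
    "randomised controlled trial of statin therapy",
    "cohort study of cardiovascular risk factors",
    "qualitative interview study of patient experience",
    "survey of social attitudes toward healthcare",
]
labels = [1, 1, 0, 0]  # hypothetical labels: 1 = relevant to the review

# Bag-of-words counts feed the topic model.
bow = CountVectorizer()
X_counts = bow.fit_transform(docs)

# fit_transform returns one topic distribution per document
# (rows sum to 1), i.e. the LDA-based document representation.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
X_topics = lda.fit_transform(X_counts)  # shape: (n_docs, n_topics)

# The SVM classifier is trained on topic distributions rather than BOW.
clf = LinearSVC().fit(X_topics, labels)
```

In the paper's setting, the classifier trained on the topic-based representation is then used to rank or flag unscreened citations; the evaluation compares this against the same SVM trained on the BOW features directly.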

Twitter Demographics

The data shown below were collected from the profiles of 15 tweeters who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 85 Mendeley readers of this research output.

Geographical breakdown

Country          Count  As %
United Kingdom       1    1%
Canada               1    1%
Unknown             83   98%

Demographic breakdown

Readers by professional status   Count  As %
Student > Master                    20   24%
Student > Ph.D. Student             17   20%
Researcher                          10   12%
Librarian                            7    8%
Student > Bachelor                   5    6%
Other                               20   24%
Unknown                              6    7%

Readers by discipline                  Count  As %
Computer Science                          35   41%
Medicine and Dentistry                    15   18%
Social Sciences                            6    7%
Agricultural and Biological Sciences       6    7%
Engineering                                3    4%
Other                                      8    9%
Unknown                                   12   14%

Attention Score in Context

This research output has an Altmetric Attention Score of 10. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 20 March 2016.
All research outputs
#1,665,652
of 14,004,996 outputs
Outputs from Systematic Reviews
#350
of 1,219 outputs
Outputs of similar age
#49,676
of 359,491 outputs
Outputs of similar age from Systematic Reviews
#40
of 135 outputs
Altmetric has tracked 14,004,996 research outputs across all sources so far. Compared to these, this one has done well and is in the 88th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,219 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.2. This one has received more attention than average, scoring higher than 71% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 359,491 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 86% of its contemporaries.
We're also able to compare this research output to 135 others from the same source that were published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 70% of its contemporaries.