
An attention-based effective neural model for drug-drug interactions extraction

Overview of attention for article published in BMC Bioinformatics, October 2017

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

Twitter: 2 tweeters

Citations

Dimensions: 21 citations

Readers on

Mendeley: 70 readers
CiteULike: 1 reader
Title
An attention-based effective neural model for drug-drug interactions extraction
Published in
BMC Bioinformatics, October 2017
DOI 10.1186/s12859-017-1855-x
Authors

Wei Zheng, Hongfei Lin, Ling Luo, Zhehuan Zhao, Zhengguang Li, Yijia Zhang, Zhihao Yang, Jian Wang

Abstract

Drug-drug interactions (DDIs) often cause unexpected side effects. The clinical recognition of DDIs is crucial for both patient safety and healthcare cost control. Although text-mining systems have explored various methods for classifying DDIs, classification performance on DDIs in long and complex sentences remains unsatisfactory. In this study, we propose an effective model that classifies DDIs from the literature by combining an attention mechanism with a recurrent neural network built on long short-term memory (LSTM) units. In our approach, a candidate-drug-oriented input attention acting on the word-embedding vectors first learns automatically which words are more influential for a given drug pair. Next, the word embeddings, merged with the position- and POS-embedding vectors, are passed to a bidirectional LSTM layer whose outputs at the last time step represent the high-level semantic information of the whole sentence. Finally, a softmax layer performs DDI classification. Experimental results on the DDIExtraction 2013 corpus show that our system achieves the best detection and classification F-scores (84.0% and 77.3%, respectively) compared with other state-of-the-art methods. In particular, on the Medline-2013 dataset, which contains long and complex sentences, our F-score exceeds those of the top-ranking systems by 12.6%. Our approach effectively improves the performance of DDI classification tasks. Experimental analysis demonstrates that our model better recognizes not only close-range but also long-range patterns among words, especially in long, complex and compound sentences.
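
The pipeline described in the abstract maps onto a few standard neural-network layers. The following is a minimal PyTorch sketch of that pipeline, not the authors' implementation: all layer sizes, the embedding index ranges, the dot-product form of the input attention, and the averaging of the two drugs' attention weights are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionBiLSTM(nn.Module):
    """Input attention + BiLSTM + softmax classifier, per the abstract.

    Sizes and the attention scoring function are assumptions for
    illustration, not values taken from the paper.
    """
    def __init__(self, vocab_size=10000, word_dim=100, dist_dim=10,
                 tag_dim=10, hidden_dim=128, num_classes=5):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # embeddings of each token's relative distance to the two drugs
        self.dist_emb = nn.Embedding(200, dist_dim)
        # part-of-speech (POS) tag embeddings
        self.tag_emb = nn.Embedding(50, tag_dim)
        self.lstm = nn.LSTM(word_dim + 2 * dist_dim + tag_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, words, dist1, dist2, tags, drug1_pos, drug2_pos):
        w = self.word_emb(words)                           # (B, T, D)
        # candidate-drug-oriented input attention: score every word
        # embedding against the two drug-mention embeddings (dot product)
        idx1 = drug1_pos.view(-1, 1, 1).expand(-1, 1, w.size(2))
        idx2 = drug2_pos.view(-1, 1, 1).expand(-1, 1, w.size(2))
        d1, d2 = w.gather(1, idx1), w.gather(1, idx2)      # (B, 1, D)
        alpha = (F.softmax((w * d1).sum(-1), dim=1) +
                 F.softmax((w * d2).sum(-1), dim=1)) / 2   # (B, T)
        w = w * alpha.unsqueeze(-1)                        # re-weight words
        # merge position- and POS-embedding vectors, then run the BiLSTM
        x = torch.cat([w, self.dist_emb(dist1), self.dist_emb(dist2),
                       self.tag_emb(tags)], dim=-1)
        out, _ = self.lstm(x)                              # (B, T, 2H)
        # last-time-step outputs as the sentence representation
        return self.classifier(out[:, -1, :])              # class logits

# toy forward pass: batch of 2 sentences, 12 tokens each, random indices
model = AttentionBiLSTM()
logits = model(torch.randint(0, 10000, (2, 12)),
               torch.randint(0, 200, (2, 12)),
               torch.randint(0, 200, (2, 12)),
               torch.randint(0, 50, (2, 12)),
               torch.tensor([3, 5]), torch.tensor([8, 9]))
print(logits.shape)  # torch.Size([2, 5])
```

The sketch returns raw logits so a standard cross-entropy loss can be applied during training; applying softmax to them yields the per-class probabilities the abstract's final layer produces.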

Twitter Demographics

The data shown below were collected from the profiles of the 2 tweeters who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for the 70 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    70       100%

Demographic breakdown

Readers by professional status    Count    As %
Student > Ph.D. Student              22     31%
Student > Master                     12     17%
Researcher                           10     14%
Student > Bachelor                    5      7%
Student > Doctoral Student            4      6%
Other                                 5      7%
Unknown                              12     17%
Readers by discipline                   Count    As %
Computer Science                           32     46%
Medicine and Dentistry                      4      6%
Engineering                                 4      6%
Agricultural and Biological Sciences        2      3%
Economics, Econometrics and Finance         2      3%
Other                                      11     16%
Unknown                                    15     21%

Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 10 October 2018.
All research outputs: #8,220,693 of 13,606,339 outputs
Outputs from BMC Bioinformatics: #3,269 of 5,068 outputs
Outputs of similar age: #146,777 of 273,982 outputs
Outputs of similar age from BMC Bioinformatics: #16 of 33 outputs
Altmetric has tracked 13,606,339 research outputs across all sources so far. This one is in the 37th percentile – i.e., 37% of other outputs scored the same or lower than it.
So far Altmetric has tracked 5,068 research outputs from this source. They receive a mean Attention Score of 4.9. This one is in the 31st percentile – i.e., 31% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 273,982 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 42nd percentile – i.e., 42% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 33 others from the same source and published within six weeks on either side of this one. This one is in the 45th percentile – i.e., 45% of its contemporaries scored the same or lower than it.
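
As a concrete illustration of the percentile statements above, the comparison is simple rank arithmetic: count how many peer outputs scored the same or lower. The helper below is a hypothetical sketch; Altmetric's exact binning and tie handling are not documented here.

```python
def attention_percentile(score, peer_scores):
    """Percentage of peer outputs whose score is the same or lower.

    A simplified illustration of the 'Nth percentile' statements above;
    Altmetric's actual methodology may handle ties and bins differently.
    """
    same_or_lower = sum(1 for s in peer_scores if s <= score)
    return 100 * same_or_lower / len(peer_scores)

# e.g. a score of 2 among peers scoring [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(attention_percentile(2, [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]))  # 40.0
```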