
Long short-term memory RNN for biomedical named entity recognition

Overview of attention for an article published in BMC Bioinformatics, October 2017

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

2 X users

Citations

100 Dimensions

Readers on

136 Mendeley
Title
Long short-term memory RNN for biomedical named entity recognition
Published in
BMC Bioinformatics, October 2017
DOI 10.1186/s12859-017-1868-5
Authors

Chen Lyu, Bo Chen, Yafeng Ren, Donghong Ji

Abstract

Biomedical named entity recognition (BNER) is a crucial initial step of information extraction in the biomedical domain. The task is typically modeled as a sequence labeling problem. Various machine learning algorithms, such as Conditional Random Fields (CRFs), have been used successfully for this task. However, these state-of-the-art BNER systems largely depend on hand-crafted features. We present a recurrent neural network (RNN) framework based on word embeddings and character representations. On top of the neural network architecture, we use a CRF layer to jointly decode labels for the whole sentence. In our approach, contextual information from both directions and long-range dependencies in the sequence, both of which are useful for this task, are modeled by the bidirectional variant and the long short-term memory (LSTM) unit, respectively. Although our models use word embeddings and character embeddings as the only features, the bidirectional LSTM-RNN (BLSTM-RNN) model achieves state-of-the-art performance: 86.55% F1 on the BioCreative II gene mention (GM) corpus and 73.79% F1 on the JNLPBA 2004 corpus. Our neural network architecture can be used for BNER without any manual feature engineering. Experimental results show that domain-specific pre-trained word embeddings and character-level representations improve the performance of the LSTM-RNN models. On the GM corpus, we achieve performance comparable to other systems that use complex hand-crafted features. On the JNLPBA corpus, our model achieves the best results, outperforming the previously top-performing systems. The source code of our method is freely available under the GPL at https://github.com/lvchen1989/BNER.
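The pipeline the abstract describes (word embeddings concatenated with a character-level BiLSTM representation, fed to a sentence-level BiLSTM that scores each token's tag) can be sketched briefly in PyTorch. This is a minimal illustration rather than the authors' implementation (their code is at the GitHub URL above): all dimensions, layer names, and hyperparameters here are assumptions, and the CRF layer the paper places on top of the per-token tag scores is omitted for brevity.

import torch
import torch.nn as nn

class BLSTMTagger(nn.Module):
    # Hypothetical dimensions; the paper's settings may differ.
    def __init__(self, word_vocab, char_vocab, n_tags,
                 word_dim=100, char_dim=30, char_hidden=25, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim)
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        # Character-level BiLSTM: one fixed-size representation per word.
        self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                 bidirectional=True, batch_first=True)
        # Sentence-level BiLSTM over [word embedding ; char representation].
        self.lstm = nn.LSTM(word_dim + 2 * char_hidden, hidden,
                            bidirectional=True, batch_first=True)
        # Per-token tag scores; the paper decodes these with a CRF layer.
        self.emit = nn.Linear(2 * hidden, n_tags)

    def forward(self, word_ids, char_ids):
        # word_ids: (seq_len,); char_ids: (seq_len, max_word_len)
        _, (h, _) = self.char_lstm(self.char_emb(char_ids))
        char_repr = torch.cat([h[0], h[1]], dim=-1)   # final fwd/bwd states
        feats = torch.cat([self.word_emb(word_ids), char_repr], dim=-1)
        out, _ = self.lstm(feats.unsqueeze(0))        # add batch dimension
        return self.emit(out.squeeze(0))              # (seq_len, n_tags)

# Toy usage: one 4-token sentence, each word padded to 6 characters.
model = BLSTMTagger(word_vocab=5000, char_vocab=80, n_tags=5)
words = torch.randint(0, 5000, (4,))
chars = torch.randint(0, 80, (4, 6))
print(model(words, chars).shape)   # torch.Size([4, 5])

In the full model, the CRF layer on top replaces independent per-token decoding with joint decoding over the whole label sequence, which is what lets it respect label-transition constraints (for example, an I tag only following a matching B tag).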

X Demographics

The data shown below were collected from the profiles of 2 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 136 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    136      100%

Demographic breakdown

Readers by professional status    Count    As %
Student > Master                     24    18%
Student > Ph.D. Student              20    15%
Researcher                           17    13%
Student > Bachelor                   14    10%
Student > Doctoral Student            6     4%
Other                                22    16%
Unknown                              33    24%

Readers by discipline                   Count    As %
Computer Science                           52    38%
Engineering                                13    10%
Agricultural and Biological Sciences        5     4%
Medicine and Dentistry                      4     3%
Chemistry                                   4     3%
Other                                      18    13%
Unknown                                    40    29%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 31 October 2017.
All research outputs: #15,329,366 of 23,577,761 outputs
Outputs from BMC Bioinformatics: #5,159 of 7,418 outputs
Outputs of similar age: #196,281 of 329,975 outputs
Outputs of similar age from BMC Bioinformatics: #76 of 129 outputs
Altmetric has tracked 23,577,761 research outputs across all sources so far. This one is in the 32nd percentile – i.e., 32% of other outputs scored the same or lower than it.
So far Altmetric has tracked 7,418 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 5.4. This one is in the 26th percentile – i.e., 26% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 329,975 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 37th percentile – i.e., 37% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 129 others from the same source and published within six weeks on either side of this one. This one is in the 37th percentile – i.e., 37% of its contemporaries scored the same or lower than it.
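All of these percentile figures follow the same rule: the percentage of comparison outputs whose Attention Score is the same as or lower than this output's. A minimal sketch of that rule in Python (the peer scores below are made-up toy data, not Altmetric's):

def percentile(score, peer_scores):
    # Share of peers scoring the same or lower, as a percentage.
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100 * at_or_below / len(peer_scores)

# Toy example: an output with score 1 among five hypothetical peers.
print(percentile(1, [0, 0, 1, 3, 8]))   # 60.0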