
An evaluation of GO annotation retrieval for BioCreAtIvE and GOA

Overview of attention for article published in BMC Bioinformatics, January 2005

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Good Attention Score compared to outputs of the same age (77th percentile)
  • Good Attention Score compared to outputs of the same age and source (75th percentile)

Mentioned by

  • 1 blog

Citations

  • 116 (Dimensions)

Readers on

  • 56 Mendeley
  • 10 CiteULike
  • 2 Connotea
Title: An evaluation of GO annotation retrieval for BioCreAtIvE and GOA
Published in: BMC Bioinformatics, January 2005
DOI: 10.1186/1471-2105-6-s1-s17
Authors: Evelyn B Camon, Daniel G Barrell, Emily C Dimmer, Vivian Lee, Michele Magrane, John Maslen, David Binns, Rolf Apweiler

Abstract

The Gene Ontology Annotation (GOA) database http://www.ebi.ac.uk/GOA aims to provide high-quality supplementary GO annotation to proteins in the UniProt Knowledgebase. Like many other biological databases, GOA gathers much of its content from the careful manual curation of literature. However, as both the volume of literature and the number of proteins requiring characterization increase, the manual processing capability can become overloaded. Consequently, semi-automated aids are often employed to expedite the curation process. Traditionally, electronic techniques in GOA have depended largely on exploiting the knowledge in existing resources such as InterPro. In recent years, however, text mining has been hailed as a potentially useful tool to aid the curation process. To encourage the development of such tools, the GOA team at EBI agreed to take part in the functional annotation task of the BioCreAtIvE (Critical Assessment of Information Extraction systems in Biology) challenge. BioCreAtIvE task 2 was an experiment to test whether automatically derived classifications, produced using information retrieval and extraction, could assist expert biologists in assigning GO terms to proteins in the UniProt Knowledgebase. GOA provided the training corpus of over 9000 manual GO annotations extracted from the literature. For the test set, we provided a corpus of 200 new Journal of Biological Chemistry articles used to annotate 286 human proteins with GO terms. A team of experts manually evaluated the results of 9 participating groups, each of which provided highlighted sentences to support their GO and protein annotation predictions. Here, we give a biological perspective on the evaluation, explain how we annotate GO using literature, and offer some suggestions to improve the precision of future text-retrieval and extraction techniques. Finally, we provide the results of the first inter-annotator agreement study for manual GO curation, as well as an assessment of our current electronic GO annotation strategies.
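The inter-annotator agreement mentioned in the abstract can be illustrated with a simple set-overlap calculation over the GO terms two curators assign to the same protein. This is a minimal sketch for illustration only, using a Jaccard overlap and hypothetical GO identifiers; the paper's actual agreement methodology may differ.

```python
# Illustrative sketch: pairwise agreement between two curators as the
# Jaccard overlap of their GO term sets for the same protein.
# (Hypothetical data; not the paper's actual methodology.)
def jaccard_agreement(annotator_a: set[str], annotator_b: set[str]) -> float:
    if not annotator_a and not annotator_b:
        return 1.0  # neither curator assigned a term: trivially in agreement
    return len(annotator_a & annotator_b) / len(annotator_a | annotator_b)

# Hypothetical GO term assignments for one protein.
a = {"GO:0005515", "GO:0006457", "GO:0005737"}
b = {"GO:0005515", "GO:0005737"}
print(jaccard_agreement(a, b))  # 2 shared terms of 3 distinct -> ~0.667
```

Averaging such per-protein scores across a corpus gives one crude overall agreement figure; real GO agreement measures usually also credit near-misses between parent and child terms in the ontology hierarchy.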

Mendeley readers

The data shown below were compiled from readership statistics for 56 Mendeley readers of this research output.

Geographical breakdown

Country        Count  As %
Brazil             1    2%
Canada             1    2%
Iceland            1    2%
Spain              1    2%
United States      1    2%
Unknown           51   91%

Demographic breakdown

Readers by professional status           Count  As %
Student > Ph.D. Student                      6   11%
Researcher                                   6   11%
Professor > Associate Professor              3    5%
Other                                        2    4%
Student > Bachelor                           2    4%
Other                                        3    5%
Unknown                                     34   61%

Readers by discipline                    Count  As %
Agricultural and Biological Sciences         9   16%
Medicine and Dentistry                       5    9%
Arts and Humanities                          2    4%
Environmental Science                        2    4%
Economics, Econometrics and Finance          1    2%
Other                                        2    4%
Unknown                                     35   63%

Attention Score in Context

This research output has an Altmetric Attention Score of 6. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 27 June 2007.
All research outputs: #2,713,403 of 12,373,386 outputs
Outputs from BMC Bioinformatics: #1,112 of 4,576 outputs
Outputs of similar age: #2,632,881 of 11,793,653 outputs
Outputs of similar age from BMC Bioinformatics: #1,112 of 4,579 outputs
Altmetric has tracked 12,373,386 research outputs across all sources so far. Compared to these, this one has done well and is in the 78th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 4,576 research outputs from this source. They receive a mean Attention Score of 4.9. This one has done well, scoring higher than 75% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 11,793,653 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 77% of its contemporaries.
We're also able to compare this research output to 4,579 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 75% of its contemporaries.
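The percentile figures above can be recovered from the rank/total pairs listed earlier: the percentile is the share of tracked outputs this one outscores, floored to a whole number. This is an assumed formula that reproduces the numbers on this page, not Altmetric's published method.

```python
# Sketch of percentile-from-rank arithmetic (assumed formula, matching the
# figures on this page): a lower rank number means more attention, so the
# percentile is the fraction of outputs ranked below this one.
def percentile(rank: int, total: int) -> int:
    return int(100 * (total - rank) / total)

print(percentile(2_713_403, 12_373_386))  # 78 (all research outputs)
print(percentile(2_632_881, 11_793_653))  # 77 (outputs of similar age)
print(percentile(1_112, 4_576))           # 75 (BMC Bioinformatics outputs)
```

Note the flooring: the similar-age ratio is 77.7%, reported as the 77th percentile rather than rounded up to 78.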