
DockQ: A Quality Measure for Protein-Protein Docking Models

Overview of attention for article published in PLOS ONE, August 2016

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (90th percentile)
  • High Attention Score compared to outputs of the same age and source (90th percentile)

Mentioned by

  • 1 news outlet
  • 1 blog
  • 2 X users

Citations

  • 233 Dimensions

Readers on

  • 194 Mendeley
Title
DockQ: A Quality Measure for Protein-Protein Docking Models
Published in
PLOS ONE, August 2016
DOI
10.1371/journal.pone.0161879
Pubmed ID

Authors
Sankar Basu, Björn Wallner

Abstract

The state of the art for assessing the structural quality of docking models is currently based on three related yet independent quality measures: Fnat, LRMS, and iRMS, as proposed and standardized by CAPRI. These quality measures quantify different aspects of the quality of a particular docking model and need to be viewed together to reveal the true quality; e.g. a model with a relatively poor LRMS (>10Å) might still qualify as 'acceptable' with a decent Fnat (>0.50) and iRMS (<3.0Å). This is also the reason why the so-called CAPRI criteria for assessing the quality of docking models are defined by applying various ad hoc cutoffs on these measures to classify a docking model into one of four classes: Incorrect, Acceptable, Medium, or High quality. This classification has been useful in CAPRI, but since models are grouped into only four bins it is also rather limiting, making it difficult to rank models, correlate with scoring functions, or use the classification as a target function in machine learning algorithms. Here, we present DockQ, a continuous protein-protein docking model quality measure derived by combining Fnat, LRMS, and iRMS into a single score in the range [0, 1] that can be used to assess the quality of protein docking models. By using DockQ on CAPRI models it is possible to almost completely reproduce the original CAPRI classification into Incorrect, Acceptable, Medium, and High quality: an average PPV of 94% at 90% recall demonstrates that there is no need to apply predefined ad hoc cutoffs to classify docking models. Since DockQ recapitulates the CAPRI classification almost perfectly, it can be viewed as a higher-resolution version of the CAPRI classification, making it possible to estimate model quality in a more quantitative way using Z-scores or sums of top-ranked models, which has been so valuable for the CASP community. The ability to directly correlate a quality measure with a scoring function has been crucial for the development of scoring functions for protein structure prediction, and DockQ should be useful for a similar development in the protein docking field. DockQ is available at http://github.com/bjornwallner/DockQ/.
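The abstract does not reproduce the combination itself. In the published paper, DockQ is the average of Fnat and scaled versions of LRMS and iRMS, where each RMS value is mapped onto [0, 1] by 1/(1 + (RMS/d)^2), with scaling constants d1 = 8.5 Å for LRMS and d2 = 1.5 Å for iRMS. Below is a minimal Python sketch of that combination; the function name and example values are illustrative and not part of the official DockQ package:

    def dockq(fnat: float, lrms: float, irms: float) -> float:
        """Combine the three CAPRI measures into one score in [0, 1].

        Average of Fnat and scaled LRMS/iRMS, where an RMS value is
        mapped to [0, 1] via 1 / (1 + (RMS/d)**2). The constants
        d1 = 8.5 A (LRMS) and d2 = 1.5 A (iRMS) are the scaling
        factors reported in the paper.
        """
        def rms_scaled(rms: float, d: float) -> float:
            return 1.0 / (1.0 + (rms / d) ** 2)

        return (fnat + rms_scaled(lrms, 8.5) + rms_scaled(irms, 1.5)) / 3.0

    # The abstract's borderline model: poor LRMS but decent Fnat and iRMS.
    print(round(dockq(fnat=0.50, lrms=10.0, irms=3.0), 3))  # -> 0.373

For the abstract's borderline example (Fnat = 0.50, LRMS = 10 Å, iRMS = 3.0 Å) this yields roughly 0.37, which falls in the paper's Acceptable band (approximately 0.23 ≤ DockQ < 0.49, with Medium starting at 0.49 and High at 0.80), consistent with such a model being 'acceptable' despite its poor LRMS.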

X Demographics

The data shown below were collected from the profiles of the 2 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 194 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown      194    100%

Demographic breakdown

Readers by professional status    Count    As %
Researcher                           29     15%
Student > Ph.D. Student              27     14%
Student > Bachelor                   26     13%
Student > Master                     19     10%
Student > Doctoral Student           11      6%
Other                                23     12%
Unknown                              59     30%
Readers by discipline                           Count    As %
Biochemistry, Genetics and Molecular Biology       56     29%
Agricultural and Biological Sciences               15      8%
Computer Science                                   13      7%
Chemistry                                          12      6%
Engineering                                         5      3%
Other                                              26     13%
Unknown                                            67     35%
Attention Score in Context

This research output has an Altmetric Attention Score of 19. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 07 November 2022.
All research outputs: #1,717,854 of 23,577,761 outputs
Outputs from PLOS ONE: #21,857 of 202,084 outputs
Outputs of similar age: #31,777 of 342,498 outputs
Outputs of similar age from PLOS ONE: #427 of 4,281 outputs
Altmetric has tracked 23,577,761 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 92nd percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 202,084 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 15.3. This one has done well, scoring higher than 89% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 342,498 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 90% of its contemporaries.
We're also able to compare this research output to 4,281 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 90% of its contemporaries.