
Efficiency of different measures for defining the applicability domain of classification models

Overview of attention for article published in Journal of Cheminformatics, August 2017

Mentioned by

1 X user

Citations

48 Dimensions

Readers on

86 Mendeley
Title
Efficiency of different measures for defining the applicability domain of classification models
Published in
Journal of Cheminformatics, August 2017
DOI 10.1186/s13321-017-0230-2
Authors

Waldemar Klingspohn, Miriam Mathea, Antonius ter Laak, Nikolaus Heinrich, Knut Baumann

Abstract

The goal of defining an applicability domain for a predictive classification model is to identify the region in chemical space where the model's predictions are reliable. The boundary of the applicability domain is defined with the help of a measure that should reflect the reliability of an individual prediction. Here, the available measures are divided into those that flag unusual objects and are independent of the original classifier, and those that use information from the trained classifier. The former set of techniques is referred to as novelty detection, while the latter is designated confidence estimation. A review of the available confidence estimators shows that most of these measures estimate the probability of class membership of the predicted objects, which is inversely related to the error probability. Thus, class probability estimates are natural candidates for defining the applicability domain, but they were not comprehensively included in previous benchmark studies. The focus of the present study is to find the best measure for defining the applicability domain for a given binary classification technique and to compare the performance of novelty detection with that of confidence estimation. Six binary classification techniques in combination with ten data sets were studied to benchmark the various measures. The area under the receiver operating characteristic curve (AUC ROC) was employed as the main benchmark criterion. It is shown that class probability estimates consistently perform best in differentiating between reliable and unreliable predictions. Previously proposed alternatives to class probability estimates do not perform better and are inferior in most cases. Interestingly, the impact of defining an applicability domain depends on the observed AUC ROC: that is, it depends on the difficulty of the classification problem and is largest for problems of intermediate difficulty (AUC ROC 0.7-0.9). In the ranking of classifiers, classification random forests performed best on average. Hence, classification random forests in combination with the respective class probability estimate are a good starting point for predictive binary chemoinformatic classifiers with an applicability domain.
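The central idea of the benchmark can be sketched in a few lines: rank each test prediction by its class-probability estimate (the confidence measure) and check, via AUC ROC, how well that ranking separates correct from incorrect predictions. The paper benchmarks random forests and other classifiers on cheminformatic data sets; the sketch below instead uses a simple k-nearest-neighbour probability estimate on synthetic two-dimensional data so it stays self-contained. All names, parameters, and data here are illustrative assumptions, not taken from the study.

```python
import numpy as np

def knn_class_proba(X_train, y_train, X_test, k=15):
    """Class-1 probability estimate: fraction of the k nearest
    training neighbours (Euclidean distance) that belong to class 1."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return y_train[nearest].mean(axis=1)

def roc_auc(scores, labels):
    """Rank-based AUC (Mann-Whitney U statistic), averaging ranks over ties."""
    order = np.argsort(scores, kind="mergesort")
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    _, inv, counts = np.unique(scores, return_inverse=True, return_counts=True)
    sums = np.zeros(counts.shape[0])
    np.add.at(sums, inv, ranks)          # total rank per distinct score
    ranks = sums[inv] / counts[inv]      # average rank for tied scores
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
# Toy stand-in for chemical space: two overlapping Gaussian classes.
X_train = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y_train = np.repeat([0, 1], 100)
X_test = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y_test = np.repeat([0, 1], 100)

p1 = knn_class_proba(X_train, y_train, X_test)
pred = (p1 >= 0.5).astype(int)
confidence = np.where(pred == 1, p1, 1.0 - p1)  # probability of the predicted class
correct = (pred == y_test).astype(int)

# AUC > 0.5 means the class probability estimate separates reliable from
# unreliable predictions, i.e. it is a useful applicability-domain measure.
auc = roc_auc(confidence, correct)
print(f"AUC (correct vs. incorrect, ranked by confidence): {auc:.2f}")
```

In the study's terms this is a confidence estimator: the measure comes from the trained classifier itself. A novelty detector would instead score how unusual each test object is relative to the training data, independently of the classifier.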

X Demographics

The data shown below were collected from the profile of 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 86 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 86 100%

Demographic breakdown

Readers by professional status Count As %
Researcher 20 23%
Student > Ph.D. Student 19 22%
Student > Master 7 8%
Student > Bachelor 7 8%
Student > Doctoral Student 4 5%
Other 12 14%
Unknown 17 20%
Readers by discipline Count As %
Chemistry 24 28%
Computer Science 7 8%
Pharmacology, Toxicology and Pharmaceutical Science 7 8%
Biochemistry, Genetics and Molecular Biology 5 6%
Engineering 4 5%
Other 14 16%
Unknown 25 29%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 14 August 2017.
All research outputs
#19,631,015
of 24,143,470 outputs
Outputs from Journal of Cheminformatics
#860
of 891 outputs
Outputs of similar age
#248,131
of 320,947 outputs
Outputs of similar age from Journal of Cheminformatics
#10
of 10 outputs
Altmetric has tracked 24,143,470 research outputs across all sources so far. This one is in the 10th percentile – i.e., 10% of other outputs scored the same or lower than it.
So far Altmetric has tracked 891 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 10.7. This one is in the 1st percentile – i.e., 1% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 320,947 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 12th percentile – i.e., 12% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 10 others from the same source and published within six weeks on either side of this one.