
Avoiding Catch-22: validating the PainDETECT in a population of patients with chronic pain

Overview of attention for an article published in BMC Neurology, June 2018

About this Attention Score

  • Good Attention Score compared to outputs of the same age (67th percentile)
  • Good Attention Score compared to outputs of the same age and source (77th percentile)

Mentioned by

  • X (Twitter): 6 users

Citations

  • Dimensions: 14 citations

Readers on

  • Mendeley: 81 readers
Title
Avoiding Catch-22: validating the PainDETECT in a population of patients with chronic pain
Published in
BMC Neurology, June 2018
DOI 10.1186/s12883-018-1094-4
Pubmed ID
Authors

Hans Timmerman, André P. Wolff, Ewald M. Bronkhorst, Oliver H. G. Wilder-Smith, Marcel J. Schenkels, Nick T. van Dasselaar, Frank J. P. M. Huygen, Monique A. H. Steegers, Kris C. P. Vissers

Abstract

Neuropathic pain is defined as pain caused by a lesion or disease of the somatosensory nervous system and is a major therapeutic challenge. Several screening tools have been developed to help physicians detect patients with neuropathic pain. These have typically been validated in populations pre-stratified for neuropathic pain, leading to a so-called "Catch-22" situation: "a problematic situation for which the only solution is denied by a circumstance inherent in the problem or by a rule". The validity of screening tools needs to be proven in patients with pain who were not pre-stratified on the basis of the target outcome: neuropathic pain or non-neuropathic pain. This study aims to assess the validity of the Dutch PainDETECT (PainDETECT-Dlv) in a large population of patients with chronic pain. A cross-sectional multicentre design was used to assess PainDETECT-Dlv validity. Included were patients with low back pain radiating into the leg(s), patients with neck-shoulder-arm pain, and patients with pain due to suspected peripheral nerve damage. Patients' pain was classified as having a neuropathic pain component (yes/no) by two experienced physicians (the "gold standard"). Physician opinion based on the Grading System served as a secondary comparison. In total, 291 patients were included. The primary analysis was performed on the patients for whom both physicians agreed on the pain classification (n = 228). Compared with the physicians' classification, PainDETECT-Dlv had a sensitivity of 80% and a specificity of 55%; against the Grading System it achieved 74% and 46%, respectively. Despite its internal consistency and test-retest reliability, the PainDETECT-Dlv is not an effective screening tool for a neuropathic pain component in a population of patients with chronic pain because of its moderate sensitivity and low specificity. Moreover, indiscriminate use of the PainDETECT-Dlv as a surrogate for clinical assessment should be avoided in daily clinical practice as well as in (clinical) research. Catch-22 situations in the validation of screening tools can be prevented by not pre-stratifying patients on the basis of the target outcome before inclusion in a validation study for screening instruments. The protocol was registered prospectively in the Dutch National Trial Register: NTR 3030.
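For readers unfamiliar with the metrics reported in the abstract, the sketch below shows how sensitivity and specificity are computed from a 2x2 comparison of a screening tool against a gold-standard classification. It is a minimal illustration, not code from the study; the counts are hypothetical and merely chosen to reproduce the reported 80% and 55%.

```python
# Minimal sketch: sensitivity and specificity of a binary screening tool
# versus a gold-standard classification. Counts below are hypothetical,
# not data from the PainDETECT-Dlv study.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of gold-standard positives the tool flags as positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of gold-standard negatives the tool flags as negative."""
    return true_neg / (true_neg + false_pos)

# Hypothetical 2x2 table (tool result vs. physician "gold standard"):
tp, fn = 80, 20   # gold-standard positives: tool positive / tool negative
tn, fp = 55, 45   # gold-standard negatives: tool negative / tool positive

print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # 80%
print(f"specificity = {specificity(tn, fp):.0%}")  # 55%
```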

X Demographics

The data shown below were collected from the profiles of the 6 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 81 Mendeley readers of this research output.

Geographical breakdown

Country | Count | As %
Unknown | 81    | 100%

Demographic breakdown

Readers by professional status | Count | As %
Student > Master               | 16    | 20%
Student > Bachelor             | 10    | 12%
Researcher                     | 6     | 7%
Student > Ph. D. Student       | 6     | 7%
Student > Doctoral Student     | 5     | 6%
Other                          | 15    | 19%
Unknown                        | 23    | 28%

Readers by discipline          | Count | As %
Medicine and Dentistry         | 20    | 25%
Nursing and Health Professions | 20    | 25%
Psychology                     | 3     | 4%
Social Sciences                | 3     | 4%
Arts and Humanities            | 2     | 2%
Other                          | 7     | 9%
Unknown                        | 26    | 32%
Attention Score in Context

This research output has an Altmetric Attention Score of 5. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 03 July 2018.
  • All research outputs: #6,157,542 of 23,323,574 outputs
  • Outputs from BMC Neurology: #694 of 2,490 outputs
  • Outputs of similar age: #105,503 of 329,923 outputs
  • Outputs of similar age from BMC Neurology: #7 of 27 outputs
Altmetric has tracked 23,323,574 research outputs across all sources so far. This one has received more attention than most of these and is in the 73rd percentile.
So far Altmetric has tracked 2,490 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.9. This one has gotten more attention than average, scoring higher than 71% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 329,923 tracked outputs that were published within six weeks on either side of this one in any source. This one has gotten more attention than average, scoring higher than 67% of its contemporaries.
We're also able to compare this research output to 27 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 77% of its contemporaries.
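As a rough illustration of how these age-matched comparisons work, the sketch below computes a percentile rank for a score within a set of contemporary outputs. This is not Altmetric's actual implementation, and the peer scores are hypothetical.

```python
# Minimal sketch of a percentile-rank comparison against contemporaries.
# Not Altmetric's implementation; the peer scores are hypothetical.

def percentile_rank(score: float, peer_scores: list[float]) -> float:
    """Percentage of peer scores that this score exceeds."""
    if not peer_scores:
        return 0.0
    beaten = sum(1 for s in peer_scores if s < score)
    return 100.0 * beaten / len(peer_scores)

# Hypothetical Attention Scores for outputs published in the same six-week window:
peers = [1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 12, 20]
print(f"{percentile_rank(5, peers):.0f}th percentile")  # beats the peers scoring below 5
```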