
Improving risk prediction accuracy for new soldiers in the U.S. Army by adding self-report survey data to administrative data

Overview of attention for article published in BMC Psychiatry, April 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (80th percentile)
  • Good Attention Score compared to outputs of the same age and source (67th percentile)

Mentioned by

  • 1 blog
  • 1 X user
  • 1 Facebook page

Citations

  • 9 Dimensions

Readers on

  • 118 Mendeley
Title
Improving risk prediction accuracy for new soldiers in the U.S. Army by adding self-report survey data to administrative data
Published in
BMC Psychiatry, April 2018
DOI 10.1186/s12888-018-1656-4
Pubmed ID
Authors

Samantha L. Bernecker, Anthony J. Rosellini, Matthew K. Nock, Wai Tat Chiu, Peter M. Gutierrez, Irving Hwang, Thomas E. Joiner, James A. Naifeh, Nancy A. Sampson, Alan M. Zaslavsky, Murray B. Stein, Robert J. Ursano, Ronald C. Kessler

Abstract

High rates of mental disorders, suicidality, and interpersonal violence early in the military career have raised interest in implementing preventive interventions with high-risk new enlistees. The Army Study to Assess Risk and Resilience in Servicemembers (STARRS) developed risk-targeting systems for these outcomes based on machine learning methods using administrative data predictors. However, administrative data omit many risk factors, raising the question of whether risk targeting could be improved by adding self-report survey data to prediction models. If so, the Army may gain from routinely administering surveys that assess additional risk factors.

The STARRS New Soldier Survey was administered to 21,790 Regular Army soldiers who agreed to have survey data linked to administrative records. As reported previously, machine learning models using administrative data as predictors found that small proportions of high-risk soldiers accounted for high proportions of negative outcomes. Other machine learning models using self-report survey data as predictors were developed previously for three of these outcomes: major physical violence and sexual violence perpetration among men, and sexual violence victimization among women. Here we examined the extent to which this survey information increases prediction accuracy, over models based solely on administrative data, for those three outcomes. We used discrete-time survival analysis to estimate a series of models predicting first occurrence, assessing how model fit improved and concentration of risk increased when the predicted risk score based on survey data was added to the predicted risk score based on administrative data.

The addition of survey data improved prediction significantly for all outcomes. In the most extreme case, the percentage of reported sexual violence victimization among the 5% of female soldiers with highest predicted risk increased from 17.5% using only administrative predictors to 29.4% when survey predictors were added, a 67.9% proportional increase in prediction accuracy. Other proportional increases in concentration of risk ranged from 4.8% to 49.5% (median = 26.0%). Data from an ongoing New Soldier Survey could substantially improve the accuracy of risk models compared to models based exclusively on administrative predictors. Depending upon the characteristics of the interventions used, the increase in targeting accuracy from survey data might offset survey administration costs.
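The headline metric above — the outcome rate among the 5% of soldiers with the highest predicted risk, and the proportional increase when survey predictors are added — reduces to simple arithmetic. A minimal sketch, using hypothetical risk scores and outcome flags (not STARRS data) and the rounded rates quoted in the abstract:

```python
def top_slice_outcome_rate(risk_scores, outcomes, top_fraction=0.05):
    """Outcome rate (%) within the top_fraction of cases ranked by predicted risk.

    risk_scores and outcomes are parallel sequences; outcomes are 0/1 flags.
    """
    ranked = sorted(zip(risk_scores, outcomes), key=lambda pair: -pair[0])
    n_top = max(1, int(len(ranked) * top_fraction))
    return 100 * sum(y for _, y in ranked[:n_top]) / n_top

# Toy illustration: 20 cases, and the single highest-risk case had the outcome.
rate = top_slice_outcome_rate(list(range(20)), [0] * 19 + [1])
print(rate)  # prints: 100.0

# Proportional increase using the rounded figures from the abstract
# (17.5% with administrative predictors only, 29.4% after adding survey data):
admin_only = 17.5
admin_plus_survey = 29.4
proportional_increase = 100 * (admin_plus_survey - admin_only) / admin_only
print(f"{proportional_increase:.1f}")  # prints: 68.0 (the article's 67.9% comes from unrounded rates)
```

The function names and toy data here are illustrative assumptions; the published models ranked soldiers with machine-learned risk scores rather than the placeholder scores shown.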

X Demographics

The data shown below were collected from the profile of 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 118 Mendeley readers of this research output.

Geographical breakdown

Country  Count  As %
Unknown    118  100%

Demographic breakdown

Readers by professional status  Count  As %
Researcher                         18   15%
Student > Master                   14   12%
Student > Bachelor                 13   11%
Student > Doctoral Student          9    8%
Student > Ph.D. Student             8    7%
Other                              19   16%
Unknown                            37   31%
Readers by discipline           Count  As %
Psychology                         26   22%
Medicine and Dentistry             16   14%
Nursing and Health Professions     13   11%
Social Sciences                     8    7%
Mathematics                         3    3%
Other                              12   10%
Unknown                            40   34%
Attention Score in Context

This research output has an Altmetric Attention Score of 10. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 28 April 2018.
All research outputs
#3,078,587
of 23,035,022 outputs
Outputs from BMC Psychiatry
#1,125
of 4,752 outputs
Outputs of similar age
#65,266
of 329,113 outputs
Outputs of similar age from BMC Psychiatry
#30
of 91 outputs
Altmetric has tracked 23,035,022 research outputs across all sources so far. Compared to these, this one has done well and is in the 86th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 4,752 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 11.9. This one has done well, scoring higher than 76% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 329,113 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 80% of its contemporaries.
We're also able to compare this research output to 91 others from the same source and published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 67% of its contemporaries.
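Each comparison above is just a rank converted into a percentile. A minimal sketch of that arithmetic, using the ranks and totals quoted on this page — the floor rounding is an assumption on our part, chosen because it reproduces all four quoted percentiles:

```python
def floor_percentile(rank, total):
    """Percent of tracked outputs ranked below this one (rank 1 = most attention),
    floored to a whole number."""
    return (total - rank) * 100 // total

print(floor_percentile(3_078_587, 23_035_022))  # all outputs            -> 86
print(floor_percentile(1_125, 4_752))           # BMC Psychiatry         -> 76
print(floor_percentile(65_266, 329_113))        # similar age            -> 80
print(floor_percentile(30, 91))                 # similar age and source -> 67
```

Altmetric's exact rounding rule is not documented on this page, so treat the floor convention as an inference from the numbers shown rather than a specification.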