
Are physician assistant and patient airway assessments reliable compared to anesthesiologist assessments in detecting difficult airways in general surgical patients?

Overview of attention for article published in Perioperative Medicine, November 2017

About this Attention Score

  • Average Attention Score compared to outputs of the same age and source

Mentioned by

Twitter: 2 tweeters

Citations

Dimensions: 1

Readers on

Mendeley: 6 readers
Title
Are physician assistant and patient airway assessments reliable compared to anesthesiologist assessments in detecting difficult airways in general surgical patients?
Published in
Perioperative Medicine, November 2017
DOI 10.1186/s13741-017-0077-0
Pubmed ID
Authors

Erin Payne, Jacqueline Ragheb, Elizabeth S. Jewell, Betsy P. Huang, Angela M. Bailey, Laura M. Fritsch, Milo Engoren

Abstract

Airway management remains one of the most important responsibilities of anesthesiologists. Prediction of a difficult airway allows time for proper selection of equipment, technique, and personnel experienced in managing patients with difficult airways. Face-to-face preoperative anesthesia interviews are difficult to conduct because they require patients to travel to the clinic and, in practice, are usually conducted by the anesthesiologist on the morning of the procedure, when identification of predictors of difficult intubation may lead to schedule delays or case cancellations. We hypothesized that an airway assessment tool could be used by patients or physician assistants to assess patients' airways accurately. We administered an airway assessment tool, constructed in consultation with a psychometrician and revised after feedback from non-medical laypersons, to 215 patients presenting to the preoperative clinic for evaluation. Separately, each patient's airway exam was performed by a physician assistant and by an anesthesiologist. Agreement was compared using kappa. We found good agreement between observers only on "Can you put three fingers in your mouth?" (three-way kappa = 0.733, p < 0.001) and poor agreement on Mallampati classification (three-way kappa = 0.195, p < 0.001) and "Can you fit three fingers between your chin and your Adam's apple?" (three-way kappa = 0.216, p < 0.001). Agreement on the other questions was mostly fair. Agreement between patients and anesthesiologists was similar to that between physician assistants and anesthesiologists. Neither the patients' self-assessments nor the physician assistants' assessments were adequate substitutes for the anesthesiologists' airway assessments.
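The abstract reports agreement as a "three-way kappa" across the patient, physician assistant, and anesthesiologist ratings. The exact statistic is not specified on this page, but a common choice for agreement among three raters on categorical items is Fleiss' kappa. Below is a minimal sketch of that calculation; the function and the yes/no counts are illustrative assumptions, not data from the study.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for an N x k table of rating counts.

    counts[i, j] = number of raters who placed subject i in category j.
    Every row must sum to the same number of raters n (here, 3).
    """
    n = counts.sum(axis=1)[0]                    # raters per subject
    N = counts.shape[0]                          # number of subjects
    p_j = counts.sum(axis=0) / (N * n)           # overall category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))  # per-subject agreement
    P_bar = P_i.mean()                           # mean observed agreement
    P_e = np.square(p_j).sum()                   # agreement expected by chance
    return float((P_bar - P_e) / (1 - P_e))

# Hypothetical yes/no answers to "Can you put three fingers in your mouth?"
# for five patients, each rated by three observers
# (patient self-assessment, physician assistant, anesthesiologist).
counts = np.array([
    [3, 0],   # all three observers answered "yes"
    [3, 0],
    [2, 1],   # one observer disagreed
    [0, 3],
    [3, 0],
])
print(f"three-way (Fleiss') kappa = {fleiss_kappa(counts):.3f}")
```

On the commonly used Landis and Koch benchmarks, a kappa near 0.73 indicates substantial agreement, while values around 0.20 indicate only slight-to-fair agreement, which is consistent with the pattern the abstract describes.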

Twitter Demographics

The data shown below were collected from the profiles of 2 tweeters who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 6 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 6 100%

Demographic breakdown

Readers by professional status Count As %
Student > Master 2 33%
Researcher 1 17%
Lecturer > Senior Lecturer 1 17%
Other 1 17%
Unspecified 1 17%
Readers by discipline Count As %
Nursing and Health Professions 3 50%
Medicine and Dentistry 2 33%
Unspecified 1 17%

Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 23 November 2017.
All research outputs: #9,817,790 of 12,819,713 outputs
Outputs from Perioperative Medicine: #84 of 115 outputs
Outputs of similar age: #254,913 of 387,297 outputs
Outputs of similar age from Perioperative Medicine: #14 of 23 outputs
Altmetric has tracked 12,819,713 research outputs across all sources so far. This one is in the 20th percentile – i.e., 20% of other outputs scored the same or lower than it.
So far Altmetric has tracked 115 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 5.9. This one is in the 14th percentile – i.e., 14% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 387,297 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 28th percentile – i.e., 28% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 23 others from the same source and published within six weeks on either side of this one. This one is in the 30th percentile – i.e., 30% of its contemporaries scored the same or lower than it.
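The percentile figures quoted above follow from the rank of this output's Attention Score within each comparison set: the percentage of outputs in the set whose score is the same or lower. A minimal sketch of that calculation, using hypothetical scores for the 23 contemporaneous outputs from the same source (the peer scores are illustrative only, not Altmetric's actual data):

```python
def percentile_rank(score: float, peer_scores: list[float]) -> float:
    """Percent of peers whose Attention Score is the same or lower."""
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100.0 * at_or_below / len(peer_scores)

# Hypothetical Attention Scores for 23 outputs of similar age from the same source.
peers = [0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 3, 4, 5, 5, 6, 7, 8, 9, 10, 12, 15, 20, 25]
print(f"Percentile among contemporaries: {percentile_rank(1, peers):.0f}")
```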