
Comparing the coverage, recall, and precision of searches for 120 systematic reviews in Embase, MEDLINE, and Google Scholar: a prospective study

Overview of attention for article published in Systematic Reviews, March 2016

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • Among the highest-scoring outputs from this source (#33 of 2,242)
  • High Attention Score compared to outputs of the same age (98th percentile)
  • High Attention Score compared to outputs of the same age and source (97th percentile)

Mentioned by

  • News: 1 news outlet
  • Blogs: 3 blogs
  • X: 147 X users
  • Peer review sites: 1 peer review site
  • Facebook: 1 Facebook page

Citations

  • Dimensions: 118

Readers on

  • Mendeley: 293
  • CiteULike: 2

Published in: Systematic Reviews, March 2016
DOI: 10.1186/s13643-016-0215-7
Authors: Wichor M. Bramer, Dean Giustini, Bianca M. R. Kramer

Abstract

Previously, we reported on the low recall of Google Scholar (GS) for systematic review (SR) searching. Here, we test our conclusions further in a prospective study by comparing the coverage, recall, and precision of SR search strategies previously performed in Embase, MEDLINE, and GS.

The original search results from Embase and MEDLINE and the first 1000 results of GS for librarian-mediated SR searches were recorded. Once the inclusion-exclusion process for the resulting SR was complete, search results from all three databases were screened for the SR's included references. All three databases were then searched post hoc for included references not found in the original search results.

We checked 4795 included references from 120 SRs against the original search results. Coverage of GS was high (97.2 %) but marginally lower than that of Embase and MEDLINE combined (97.5 %). MEDLINE on its own achieved 92.3 % coverage. Total recall of Embase/MEDLINE combined was 81.6 % for all included references, compared to 72.8 % for GS and 72.6 % for MEDLINE alone. However, only 46.4 % of the included references were among the downloadable first 1000 references in GS. When examining data for each SR, the traditional databases' recall was better than that of GS, even when taking into account included references listed beyond the first 1000 search results. Finally, precision of the first 1000 references of GS was comparable to that of searches in Embase and MEDLINE combined.

Although overall coverage and recall of GS are high for many searches, the database does not achieve the full coverage that some previous research has attributed to it. Further, being able to view only the first 1000 records in GS severely reduces its recall percentages. If GS enabled browsing of records beyond the first 1000, its recall would increase, but not sufficiently for it to be used alone in SR searching. The time needed to screen results would also increase considerably. These results support our assertion that neither GS nor any of the other databases investigated is, on its own, an acceptable database to support systematic review searching.
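
For readers unfamiliar with these metrics, the sketch below shows how coverage, recall, and precision are computed under the definitions the abstract implies (coverage: included references present anywhere in the database; recall: included references returned by the original search; precision: included references as a share of all records retrieved). The example counts are hypothetical and purely illustrative, not the study's data.

```python
def coverage(refs_in_database: int, total_included_refs: int) -> float:
    """Share of a review's included references that exist anywhere in the database."""
    return refs_in_database / total_included_refs


def recall(refs_retrieved: int, total_included_refs: int) -> float:
    """Share of a review's included references returned by the actual search."""
    return refs_retrieved / total_included_refs


def precision(included_retrieved: int, total_records_retrieved: int) -> float:
    """Share of all retrieved records that turn out to be included references."""
    return included_retrieved / total_records_retrieved


# Hypothetical single-review example (illustrative numbers, not the study's data):
# 40 included references, 37 present somewhere in the database,
# 31 retrieved by the search, out of 2500 records screened.
print(f"coverage  = {coverage(37, 40):.1%}")     # 92.5%
print(f"recall    = {recall(31, 40):.1%}")       # 77.5%
print(f"precision = {precision(31, 2500):.2%}")  # 1.24%
```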

X Demographics

The data shown below were collected from the profiles of 147 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 293 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Portugal 1 <1%
Switzerland 1 <1%
Norway 1 <1%
Australia 1 <1%
Canada 1 <1%
Denmark 1 <1%
Spain 1 <1%
United States 1 <1%
Unknown 285 97%

Demographic breakdown

Readers by professional status Count As %
Librarian 60 20%
Researcher 36 12%
Student > Ph. D. Student 29 10%
Student > Master 29 10%
Other 23 8%
Other 57 19%
Unknown 59 20%
Readers by discipline Count As %
Medicine and Dentistry 80 27%
Social Sciences 28 10%
Nursing and Health Professions 19 6%
Psychology 19 6%
Computer Science 18 6%
Other 59 20%
Unknown 70 24%
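
The "As %" figures above are each count divided by the 293 Mendeley readers, rounded to the nearest whole percent; a minimal sketch over a few rows from the table reproduces them.

```python
TOTAL_READERS = 293  # total Mendeley readers reported above

statuses = {
    "Librarian": 60,
    "Researcher": 36,
    "Student > Ph. D. Student": 29,
    "Student > Master": 29,
}

for status, count in statuses.items():
    print(f"{status}: {count / TOTAL_READERS:.0%}")
# Prints 20%, 12%, 10%, and 10%, matching the "As %" column above.
```
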
Attention Score in Context

This research output has an Altmetric Attention Score of 123. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 17 November 2022.
  • All research outputs: #344,423 of 25,663,438 outputs
  • Outputs from Systematic Reviews: #33 of 2,242 outputs
  • Outputs of similar age: #6,010 of 313,216 outputs
  • Outputs of similar age from Systematic Reviews: #1 of 47 outputs
Altmetric has tracked 25,663,438 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 98th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 2,242 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 13.2. This one has done particularly well, scoring higher than 98% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 313,216 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 98% of its contemporaries.
We're also able to compare this research output to 47 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 97% of its contemporaries.
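
As a sanity check on the percentiles quoted above, here is a minimal sketch that treats each percentile as the share of tracked outputs ranked below this one, truncated to a whole number; the exact rounding Altmetric applies is an assumption.

```python
import math


def percentile(rank: int, total: int) -> int:
    """Share of tracked outputs ranked below this one, truncated to a whole percentile.

    Assumes rank 1 is the highest-scoring output in the comparison set.
    """
    return math.floor((1 - rank / total) * 100)


comparisons = {
    "All research outputs": (344_423, 25_663_438),
    "Outputs from Systematic Reviews": (33, 2_242),
    "Outputs of similar age": (6_010, 313_216),
    "Outputs of similar age from Systematic Reviews": (1, 47),
}

for label, (rank, total) in comparisons.items():
    print(f"{label}: {percentile(rank, total)}th percentile")
# Prints 98, 98, 98, and 97, matching the percentiles quoted above.
```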