Altmetric Blog

Fieldwork: The Alt-Metrics Literature Year in Review (II)

Ernesto Priego, 19th December 2012

In the first of this 2-part series we looked at 4 research articles and 1 overview on alt-metrics published during 2012 within the “Altmetrics Collection” of the Public Library of Science (PLOS).

In this second part we look at 7 more scholarly records (not only articles) that caught our eye during 2012, and provide a glimpse of some of the discussed articles’ current Altmetric scores.

Tracking Impact

“Users, Narcissism and Control — Tracking the Impact of Scholarly Publications in the 21st Century” is a report published by the SURF foundation in February 2012.

In the report, Paul Wouters and Rodrigo Costas describe how scholarship is undergoing a transformation into digital, networked forms, and how “these developments have created new possibilities and challenges in the evaluation of the quality of research.” As they note, new mechanisms for assessing the impact and quality of research are of interest both to funders and to individual researchers. The report assesses a sample of 16 different tools, concluding that Web-based academic publishing is producing a variety of new filters that allow a researcher “to make some sort of limited self-assessment with respect to the response to his/her work.”

The report addresses the existing resistance to using alt-metrics to legitimately assess the quality and impact of academic research, arguing that the tools analysed have not yet been sufficiently validated to be ready for adoption. It recommends “to start a concerted research programme in the dynamics, properties, and potential use of new web based metrics which relates these new measures to the already established indicators of publication impact.”

Validation and Correlation

In “Validating online reference managers for scholarly impact measurement” (published May 2012), Xuemei Li, Mike Thelwall and Dean Giustini looked at a sample of 1,613 papers published in Nature and Science during 2007 to assess whether CiteULike and Mendeley “are useful for measuring scholarly influence”.

The authors compared traditional bibliometrics (journal citations) from the Web of Science with the number of users who had bookmarked the articles in either of the 2 services studied. The study confirmed correlations between user counts and citation counts for the same articles.
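For readers curious about the mechanics, the sketch below shows the kind of rank-correlation calculation such studies rely on. It is purely illustrative: the counts are invented, and the choice of Python with SciPy’s spearmanr is our assumption, not the authors’ actual method or code.

```python
# Minimal sketch of a bookmark-vs-citation correlation analysis.
# All figures below are invented for illustration; the real study
# used 1,613 Nature and Science papers from 2007.
from scipy.stats import spearmanr

# Hypothetical per-article counts: Web of Science citations and
# Mendeley bookmarks for the same five articles.
citations = [120, 45, 300, 12, 87]
bookmarks = [95, 30, 210, 8, 60]

# Spearman's rho is rank-based, so it tolerates the heavily skewed
# distributions typical of citation and usage data.
rho, p_value = spearmanr(citations, bookmarks)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```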

The authors warn us that there are not yet enough scholars using CiteULike and Mendeley for the metrics of their engagement with these tools to pose a real challenge to bibliometrics. Nevertheless, their study provides yet another indication that article bookmarks can help characterise a type of scholarly influence which is not unrelated to more traditional citation-based scholarly impact.

In “Decoupling the scholarly journal” (published April 2012), Jason Priem and Bradley M. Hemminger offer a critical yet constructive and innovative interrogation of the current models of scholarly publishing and peer review. The paper anticipates the interconnections between alt-metrics research and the transformation of peer review as the traditional method for research validation.

In another paper, a pre-print on arXiv, “Altmetrics in the Wild: Using Social Media to Explore Scholarly Impact” (submitted in March 2012), Jason Priem, Heather A. Piwowar and Bradley M. Hemminger provide more evidence of the existing correlations between online mentions of scholarly publications and traditional citations.

The authors studied a sample of 24,331 articles published by the Public Library of Science and found that approximately 5% of the articles they looked at were cited in Wikipedia and almost 80% had been included in at least 1 Mendeley library. They also found that 25% of the articles were not mentioned in any of the sources they studied. The paper concludes that “correlation and factor analysis suggest citation and altmetrics indicators track related but distinct impacts, with neither able to describe the complete picture of scholarly use alone.”

Like Li et al., this paper found moderate correlations between Mendeley bookmarks and Web of Science citations, and found that articles could be grouped under “five different impact flavors”, qualitatively indicating that different articles have different types of impact on different audiences.
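To give a rough sense of how such “flavors” might be derived, here is a minimal, purely illustrative Python sketch of a factor analysis over invented per-article metric counts. The data, the library choice (scikit-learn) and the number of factors are all our assumptions; this does not reproduce the paper’s actual analysis.

```python
# Illustrative sketch only: grouping articles by their metric
# profiles, loosely echoing the paper's factor analysis. The data
# are invented; the paper derived its "five impact flavors" from
# 24,331 real PLOS articles.
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Rows = articles, columns = per-article indicator counts
# (e.g. citations, Mendeley readers, tweets, blog mentions).
rng = np.random.default_rng(0)
metrics = rng.poisson(lam=[40, 25, 10, 3], size=(100, 4))

# Fit a small factor model; each latent factor can be read as one
# "flavor" of impact shared by a group of indicators.
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(metrics)
print("Factor loadings (indicators x factors):")
print(fa.components_.T.round(2))
```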

Online Visibility and Footprint

Priem also co-authored another paper currently available on arXiv: “Beyond Citations: Scholars’ Visibility on the Social Web” (with Judit Bar-Ilan, Stefanie Haustein, Isabella Peters, Hadas Shema and Jens Terliesner; submitted in May 2012).

This paper, accepted for presentation at the 17th International Conference on Science and Technology Indicators (STI) in September 2012, focused on the “Web footprint”, publication and citation counts of 57 researchers who presented at the 2010 STI conference.

As such, this paper offers an example of how alt-metrics research can also provide insight into academic events and their participants. The authors found that the presenters’ Web presence included home pages (84%), LinkedIn profiles (70%), Google Scholar profiles (23%) and Twitter accounts (16%). Comparing social reference manager bookmarks to Scopus and Web of Science citations, the study shows that Mendeley covers more than 80% of the sampled articles, and that Mendeley bookmarks correlate significantly with Scopus citation counts, offering another benchmark against which to compare the results of “Altmetrics in the Wild” and Li et al. 2012.

Alt-Metrics and Librarians

In the poster “Altmetrics and Librarians: How Changes in Scholarly Communication will affect our Profession” (deposited October 2012) presented at the Indiana University Bloomington Libraries In-House Institute, Stacey R. Konkiel introduces the concept of alt-metrics and offers preliminary findings “based upon research into the three most popular web services that provide altmetrics data to third parties: ImpactStory, Altmetric, and Plum Analytics.”

The poster identifies the strengths and weaknesses of each of these services and discusses their usefulness for institutional repositories running on 3 platforms: Digital Commons, EPrints, and DSpace. It also compares the types of metrics already reported by the three repository platforms with those provided by each alt-metrics service.

Konkiel concludes that the metrics provided by repositories should track scholarly impact, taking impact in this case to mean page/abstract views, downloads, citations (Scopus, PubMed Central), online bookmarking (Mendeley, CiteULike), Faculty of 1000 reviews and mentions of articles on research blogs, as well as what the author calls popular impact: “mentions on Wikipedia, Bit.ly clicks and shares, Facebook, Delicious bookmarks, Reddit mentions, Twitter and blog mentions (general interest blogs), and news outlet mentions.”
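As a rough sketch of how these recommendations could be organised in practice, the snippet below arranges Konkiel’s categories as a simple Python mapping. The structure and names are our own illustration, not code or a schema from the poster.

```python
# Illustrative grouping of the metric types Konkiel recommends
# repositories should track; the structure is ours, not the poster's.
repository_metrics = {
    "scholarly_impact": {
        "usage": ["page/abstract views", "downloads"],
        "citations": ["Scopus", "PubMed Central"],
        "bookmarks": ["Mendeley", "CiteULike"],
        "expert_review": ["Faculty of 1000 reviews"],
        "research_blogs": ["mentions on research blogs"],
    },
    "popular_impact": [
        "Wikipedia mentions",
        "Bit.ly clicks and shares",
        "Facebook",
        "Delicious bookmarks",
        "Reddit mentions",
        "Twitter and general-interest blog mentions",
        "news outlet mentions",
    ],
}

# Example lookup: which citation sources might a repository report?
print(repository_metrics["scholarly_impact"]["citations"])
```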

The poster warns that the barriers to adoption include cost, the need for technical support, the inability to incorporate tools into proprietary platforms, limited DOI implementation in most repositories and “the political implications of displaying nonexistent metrics for relatively unpopular materials.” It also suggests surveying the attitudes of academic staff, librarians and university administrators towards alt-metrics.

Alt-metrics: Towards Scholarly Acceptance

A Nature Materials editorial published online in October 2012 offers a critical yet constructive stance on alt-metrics.

Though the piece welcomes alt-metrics services as “complementary evaluation tools”, it is emphatic that they will not replace peer review. It also distinguishes between “shorter-term and longer-term metrics”:

[Shorter-term metrics], embodied by blog posts, news pieces, tweets and likes, display a short life span and decay fast, reflecting the transient nature of popularity. By contrast, longer-term online metrics such as download, readers, comment numbers, albeit collected at a much slower pace, may be more meaningful. They could act as a proxy for quality while going beyond the traditional h-index or impact factor metrics.

Due to space limitations, it is impossible to properly review in 2 short posts the entirety of the literature on alt-metrics, webometrics and bibliometrics made available during 2012. Nevertheless, it is possible to say that the scholarly record discussing and/or employing alt-metrics published this year offered self-critical, quantitative as well as qualitative arguments exploring innovative ways of addressing an ever-growing phenomenon. Though the 2012 literature still resorts to introductory paragraphs defining and redefining alt-metrics, we can already perceive a gradual but steady development towards more convincing case studies.

Some of the key themes arising have been validation and standardisation, the correlation and interplay between alt-metrics and citations, and researcher-led practical recommendations for immediate implementation in academic journals and repositories. The literature discussing the quantitative and qualitative assessment of engagement with scholarly outputs on social media and web-based services could, in the near future, inform broader, more effective and validated assessments of impact that would officially complement (not substitute for) traditional bibliometrics.

4 Responses to “Fieldwork: The Alt-Metrics Literature Year in Review (II)”

[...] Altmetric, the second and final part of my 2012 alt-metrics literature review, here. [...]

[...] review continues in part II). [...]

Susan Ariew (@EdLib on twitter)
December 20, 2012 at 12:00 am

Thanks for the overview of recent trends, ideas and discussions about altmetrics. --SAA

Martin J Sallberg
February 28, 2014 at 12:00 am

While altmetrics in its current form is insufficient to replace peer review, it can be modified into something that is up to the task. One very efficient way to review articles post-publication would be to create a database of reviews that have already refuted claims, so that critics can respond by posting links to those reviews wherever the refuted nonsense arguments crop up (no need to write the refutation as a new comment each time). New refutations can constantly be added to the database, including refutations that expose flaws in previous refutations. This may be the way to replace expensive, slow and partly arbitrary prepublication review that refuses to touch theories/hypotheses just because of where they happened to be initially published. Some of the theories/hypotheses that have already been published in the supposedly "wrong" places may be true.
