Webinar summary: Traditional journal bibliometrics meets newcomer altmetrics
On Tuesday afternoon, I joined in to listen to the Elsevier Connect webinar, “Traditional journal bibliometrics meets newcomer altmetrics”. Brief presentations from Paul Groth (VU University of Amsterdam), Mike Taylor (Elsevier Labs), and Sweitze Roffel (Elsevier) were followed by a Q&A session moderated by Hannah Foreman (Elsevier).
Aimed specifically at editors of scholarly journals, the webinar provided a nice overview of altmetrics (and article-level metrics), available tools (including Altmetric) and the altmetrics movement in general. The presenters also described the current state of affairs in research evaluation, emphasising that altmetrics are not yet suitable for research impact assessment but that they will be in the future. The prevailing message was that altmetrics are becoming increasingly relevant indicators of “research usage”.
A recording of the webinar can be found here.
My thoughts on the webinar
Overall, I found that this short webinar was a nice introduction to the growing field of altmetrics. I felt that the Q&A session, driven by questions from the online audience, was particularly rich and thought-provoking. There were a few points and audience questions that came up during the webinar that I’d like to address.
1. Clarification: Altmetric collects data from LinkedIn
For some information about our LinkedIn data, check out this blog post, which initially announced LinkedIn support by Altmetric.
2. How can news reports about research be tracked if links to the original papers weren’t provided by the journalists?
Accurately tracking mentions of scholarly papers in news reports can be a challenge when links to the papers themselves are not provided in the text. Altmetric has attempted to get around this issue by devising the “News Tracker” mechanism, which uses text-mining to search through a news report and determine which scholarly article is being discussed. As such, for English-language newspapers and magazines, Altmetric does not need to see a link in order to associate a paper with its news report. For more information about the News Tracker, read this blog post and see it in action for this recent article published in Cell (click on the News tab).
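To make the general idea concrete, here is a deliberately naive sketch of link-free matching. This is not Altmetric’s actual News Tracker (which is far more sophisticated); the titles, DOIs, and matching rule below are invented purely for illustration:

```python
import re

# A toy corpus of known article titles mapped to their DOIs.
# These titles and DOIs are made up for illustration only.
KNOWN_ARTICLES = {
    "Sleep patterns in domestic cats": "10.1000/example.1",
    "Equations in theoretical biology": "10.1000/example.2",
}

def normalize(text):
    """Lowercase and strip punctuation so quoting style and
    capitalization in the news text don't block a match."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower())

def match_article(news_text):
    """Return the DOI of the first known article whose title appears
    (after normalization) anywhere in the news report's text."""
    clean = normalize(news_text)
    for title, doi in KNOWN_ARTICLES.items():
        if normalize(title) in clean:
            return doi
    return None
```

In practice a real system would need fuzzy matching, author and journal names, and disambiguation between similar titles, but the sketch shows why a hyperlink isn’t strictly necessary: the article’s identifying text is often already in the report.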
3. Whose responsibility is it to promote research via social media?
Are authors or publishers responsible for communicating about new research? Given the accessibility of online tools nowadays, I’d say that both parties should participate in the promotion of research via social media. This does, however, link to another question of whether or not it’s right for an author to be actively “advertising” his or her own work on social media. Presenter Sweitze Roffel did a good job of addressing this issue in the webinar, saying that promoting research on social media is simply a digital version of something researchers have always done. His advice was that social media promotion should be done with a certain level of taste.
But wait, how do you know if you’re promoting tastefully? One strategy, previously described by Ernesto Priego, is to promote both your work and the works of others within a social network of academics. In a great blog post called “Strategies to Get Your Research Mentioned Online”, he wrote: “Promote other people’s research as you would like your own work to be promoted”. Indeed, the intent of online promotion should be to share new knowledge with a relevant audience, not to game a metric.
4. How can we guarantee that academic merit is being assessed by altmetrics?
Since social media is used by both academics and members of the general public to communicate about research, some academics have been concerned that the impact conveyed by altmetrics (which take into account social media attention) is only reflective of popularity and not of academic merit. Presenter Paul Groth provided a great response to this question by stating that currently, altmetrics data must be interpreted on a case-by-case basis and should not yet be used for academic assessment. Groth also discussed the importance of “defining the social network space around researchers”, which can then provide an idea of whether the altmetrics surrounding a particular research output reflect its “popularity impact” or “research impact”.
The two kinds of impact that Groth mentioned – popularity and research – are very distinct, and I’ve discussed these ideas previously in this blog. Examples of papers with high “popularity impact” according to their altmetrics are those which concern popular subjects, such as domestic pets. In contrast, an example of a paper that had demonstrable “research impact” was one that concerned the use of equations in theoretical biology. At this point in the development of altmetrics, it’s up to users to decide what kinds of impact are being demonstrated. I would argue that there isn’t really any need to “guarantee” that academic merit is being assessed, since altmetrics are generally meant to convey a broader sense of impact, in part by demonstrating the visibility of research beyond academia.
5. What’s the first step for integrating article-level metrics into a journal platform?
In the webinar, the presenters answered this question by suggesting that journal editors take a look at the Altmetric sidebar app for Scopus and create reports in ImpactStory. These tools, along with the Altmetric Bookmarklet, are great starting points for jumping into altmetrics. Editors who are thinking of integrating article-level metrics into their journal platforms could also take a look at other platforms that have already implemented altmetrics into their article pages. For inspiration, editors could browse through the Altmetric API Gallery, which showcases existing altmetrics implementations we’ve helped with, including the altmetrics for nature.com (see an example).
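For editors curious what an integration involves under the hood, a first experiment is often just fetching a paper’s metrics by DOI. Below is a minimal sketch against Altmetric’s public v1 API; the endpoint and field names (`score`, `cited_by_tweeters_count`) reflect my understanding of that API, so check the official Altmetric API documentation before building on them:

```python
import json
import urllib.error
import urllib.request

# Public v1 endpoint for basic DOI lookups (no API key needed at this tier,
# as far as I'm aware -- verify against the current Altmetric docs).
API_BASE = "https://api.altmetric.com/v1/doi/"

def fetch_altmetrics(doi):
    """Fetch the Altmetric record for a DOI. Returns the parsed JSON dict,
    or None when the article has no record (the API answers with a 404)."""
    try:
        with urllib.request.urlopen(API_BASE + doi) as resp:
            return json.load(resp)
    except urllib.error.HTTPError:
        return None

def summarize(record):
    """Pull a few headline numbers out of an API response, defaulting
    missing fields to empty/zero so sparse records don't raise."""
    return {
        "title": record.get("title", ""),
        "score": record.get("score", 0),
        "tweeters": record.get("cited_by_tweeters_count", 0),
    }
```

On a journal platform, a nightly job like this could cache each article’s summary and feed a small badge or sidebar on the article page, which is essentially what the embeds in the API Gallery examples do.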