The use of “alternative metrics” (or alt-metrics) for assessing scholarly research impact was a hot topic in live and remote conversations surrounding this year’s SpotOn London conference. For the first time, an entire session was devoted to a discussion of the burgeoning field. The session, called “Altmetrics beyond the numbers”, was run by Sarah Venis (Médecins Sans Frontières), Marie Boran (Digital Enterprise Research Institute), Euan Adie (Altmetric), and Martin Fenner (PLOS). (An archived live-stream of the session can be viewed here.)
If you were following the live discussion and the concurrent Twitter conversations tagged with #solo12alt, it would have been obvious that people had diverse opinions, questions, and concerns about alt-metrics. Topics that came up included their uses and potential, as well as the future of research impact assessment.
Here we’ve compiled a few of the thoughts and questions from the session and presented our take on some of the points that were raised.
What we mean when we say…
1. Alt-metrics vs article level metrics
Alt-metrics is the umbrella term for new ways (both qualitative and quantitative) of measuring different forms of impact. Impact can mean different things to different people so not everybody takes the same approach to alt-metrics.
The data used in alt-metrics is typically high volume and quick to accrue. We use alt-metrics to mean both new sources of this data and any metrics derived from it. The term article level metrics has come to mean the alt-metrics surrounding a scholarly paper.
A commonly held assumption about alt-metrics is that they are meant to replace traditional measures of research impact like citation counts. In fact, most in the field (us included) think that alt-metrics should complement traditional metrics, not replace them.
Having “metric” in the term alt-metrics implies that the most important component of alt-metrics is the quantitative aspect – the numbers. Certainly the numbers are important – as is being able to put those numbers in context – but they are not the be-all and end-all. In fact, arguably the main strength of article level metrics as they stand now is that the collected data can make qualitative assessments easier.
Alt-metrics means different things to different people
Alt-metrics are a means to an end. Different people have different views of what kind of impact matters most to them – researchers may be interested in influencing their peers, funders may care about re-use or public engagement, institutions may care about relative rankings – and so it’s inevitable that alt-metrics data and methodologies are used in a variety of different ways to suit a variety of different uses. We think that’s how it should be.
At Altmetric.com we’re most interested in providing others with the data to power their own metrics, the Altmetric score aside. We focus on collecting all of the conversations around scholarly works and then enriching that data, adding context and helping to characterise the attention paid to them.
The types of science-related conversations that people engage in online paint interesting pictures about research impact. It’s this theme that we’ve been exploring in the Interactions series of this blog. In these posts we’ve examined how online discussions about papers spread the word about the research itself, and can help broaden some kinds of impact by helping to reach new audiences.
For example, we’ve seen that insightful online conversations about papers affect scientists in their work lives (see “It’s OK, Scientific Stupidity is Normal”), while the rapid-fire re-tweeting and sharing of certain papers provide important information to particular groups of people after a natural disaster (see “Conversations About Disaster”).
A different perspective can be gained by using alt-metrics to create filters for the firehose of scientific articles produced each year, revealing articles that have been shared by unusually large numbers of scientists or by particular demographic groups.
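As a rough illustration of how such a filter might work – the data, thresholds, and field names here are entirely hypothetical – one could flag articles whose share counts sit well above the typical level for a comparable set of papers:

```python
# Hypothetical sketch of an alt-metrics "firehose filter":
# flag articles shared unusually often, relative to the median
# share count across a comparable set of articles.
from statistics import median

def unusually_shared(articles, factor=10):
    """Return articles whose share count exceeds `factor` times
    the median share count across the whole set."""
    counts = [a["shares"] for a in articles]
    threshold = factor * max(median(counts), 1)  # avoid a zero threshold
    return [a for a in articles if a["shares"] > threshold]

sample = [
    {"doi": "10.1000/a", "shares": 3},
    {"doi": "10.1000/b", "shares": 5},
    {"doi": "10.1000/c", "shares": 240},  # an outlier worth a closer look
]
print(unusually_shared(sample))  # only the heavily shared article
```

A real filter would of course segment by field, article age, and audience demographics, but the principle is the same: use the distribution of attention to surface outliers worth reading about.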
Questions from the session
Q. With respect to a good point brought up by structural biologist Stephen Curry during the SpotOn Altmetrics session: what if a scholarly article that is scientifically rubbish (e.g., the controversial arsenic life paper) has a high Altmetric score?
A. When considering the high Altmetric score in this situation it’s important to recognise that the score reflects the quality and quantity of online attention paid to the article, not the quality of the research.
Whether the overall response to the paper is positive or negative can only be determined by delving into the data behind the metric.
Part of the session’s title (“beyond the numbers”) hints at the way Altmetric exposes alt-metric data from news sources and social media platforms through the Altmetric Explorer and apps like Altmetric for Scopus. Browsing through the collected content (the posts on Facebook, tweets, Reddit threads, blog posts, etc.) and reading the actual conversations that are taking place is currently the best way to use alt-metrics to assess the impact and/or quality of a specific paper.
In the case of the arsenic life paper, you can view its Altmetric data here. It certainly does have a high Altmetric score (420 as of 14 November), but it’s worth noting that most of the blog coverage focuses on the issues raised about the paper’s results.
This isn’t a new problem: in extreme cases it’s actually more of a problem for citation counts, which have traditionally been used as a proxy for quality. The arsenic life paper, for example, has 154 citations according to Google Scholar (as of 14 November). Without knowing the back story, or having a simple way to assess whether those citations are positive or negative, you could easily form the wrong impression of the paper’s impact.
Q. An excellent point that was raised by an audience member during the SpotOn Altmetrics discussion was: what happens if the social media platforms of today (e.g., Twitter) are not used in the conversations of tomorrow?
A. With respect to actual metrics we don’t have a great answer for this yet! It’s something that anybody creating new metrics from alt-metrics data needs to bear in mind. We’ve already come across this kind of thing in the past year, adding LinkedIn and Pinterest mentions and removing Connotea bookmark counts from Altmetric’s database.
One solution may be to focus on benchmarks: if a hundred people on Twitter tweeted favourably about your paper in 2012 and you keep the context (i.e., “this is far more than the average article could expect to receive”) then looking back 20 years later you can still compare it to contemporary benchmarks, even if Twitter has long since disappeared.
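A minimal sketch of that benchmarking idea – the benchmark numbers below are invented purely for illustration – is to store not just the raw count but its percentile against that year’s distribution for comparable articles, so the context survives even if the platform doesn’t:

```python
# Hypothetical sketch: record a count's percentile against a
# contemporary benchmark distribution, alongside the raw number.
from bisect import bisect_left

def percentile_rank(count, benchmark_counts):
    """Percentile of `count` within a benchmark distribution of
    counts for comparable articles from the same period."""
    sorted_counts = sorted(benchmark_counts)
    return 100.0 * bisect_left(sorted_counts, count) / len(sorted_counts)

# Invented 2012 benchmark: tweet counts for comparable articles.
benchmark_2012 = [0, 0, 1, 1, 2, 3, 5, 8, 12, 30]

record = {
    "doi": "10.1000/example",   # hypothetical article
    "tweets_2012": 100,
    # Stored alongside the raw count: "more than X% of 2012 articles".
    "tweet_percentile_2012": percentile_rank(100, benchmark_2012),
}
print(record["tweet_percentile_2012"])  # 100.0
```

Twenty years on, “100 tweets” may be meaningless, but “more tweets than 100% of the 2012 benchmark set” still carries the comparison that mattered at the time.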
Q. Who benefits from tracking alt-metrics?
A. Short answer: everyone who is involved in or interested in research. Some of the benefits brought up during the SpotOn session, such as measuring public engagement and research output, pertained mostly to publishers, funders, and researchers, but librarians, marketers, social media experts, journalists, bloggers, and people with a general interest in research can also benefit from seeing the conversations around scholarly articles.
If funders care about public engagement, tweets can be an indicator of this; metrics don’t all need to correlate with citations #solo12alt
— Matt Hodgkinson (@mattjhodgkinson) November 11, 2012
#solo12alt another reason to put links/doi in press releases/news stories/blogs – alt metrics. Big wins for funders if we get this right.
— Henry Scowcroft (@oh_henry) November 11, 2012