Last week, I was delighted to take part in a Reddit AMA (Ask Me Anything) on the r/AskScience subreddit. It was awesome to speak with so many people from all around the world about research impact metrics, and it got even better when I learned that, for a brief moment, we even made it onto the Reddit homepage! Here’s what we covered.
Are metrics at all useful to understand true impact?
It was clear from the questions asked that many redditors are as concerned as I am about the abuse and misinterpretation of research metrics. I tried my best to communicate one of my biggest beliefs: that no metric can be used in isolation.
Altmetrics aren’t magic. They’re currently prone to many of the same weaknesses as citations (lack of context, misinterpretation, etc.). But if we can get ahead of the curve, so to speak, and openly acknowledge what they are good for (measures of attention and, in some cases, indicators of downstream impact), perhaps we can start to use them more consciously and responsibly than we use citation-based metrics.
Part of “better use” lies in following some simple principles. It’s much better to use “baskets of metrics” and to contextualize numbers using percentiles. Mostly, we should be using metrics as a signal, to help us understand where true impact (life-changing research) can be found. Several times during the AMA, I found myself pointing folks towards Indiana University’s new policy on the use of research metrics for evaluation, which lays out a fantastic framework for anyone interested in promoting the same at their institution.
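As a toy illustration of what “contextualizing numbers using percentiles” can look like in practice, here’s a minimal sketch. The peer scores and the function name are invented for the example; they aren’t drawn from any real dataset or policy:

```python
def percentile_rank(score, peer_scores):
    """Return the percentage of peer scores at or below `score`."""
    if not peer_scores:
        raise ValueError("need at least one peer score for context")
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100.0 * at_or_below / len(peer_scores)

# Invented example: an article scoring 42, compared against
# same-year, same-field peers rather than judged in isolation.
peers = [3, 7, 12, 15, 22, 30, 41, 55, 80, 120]
print(f"{percentile_rank(42, peers):.0f}th percentile")  # prints "70th percentile"
```

The point of the exercise: “42” means nothing on its own, but “70th percentile among comparable articles” is a number a reader can actually interpret.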
What are the sociotechnical barriers that exist to better tracking my research’s impact upon the world?
It was pointed out that a major flaw in using research metrics lies in assuming that “events” are always equal:
The primary difficulty in making altmetrics functional is that they count up mentions of the article so an article in the New York Post counts the same as the New York Times, or a tweet from a leader in the field counts the same as a tweet by the lead researcher’s mother.
I heartily agree! Raw numbers can be problematic for exactly that reason. We’ve tried to tackle that issue at Altmetric by scoring attention using a weighted algorithm: for example, a mention in the New York Times is weighted more heavily than one in most other newspapers, due to its reach and influence.
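To make the idea of source weighting concrete, here’s a minimal sketch of a weighted attention sum. The source names and weight values are entirely made up for illustration; this is not Altmetric’s actual algorithm or its real weights:

```python
# Hypothetical source weights -- invented for illustration,
# NOT Altmetric's real values.
SOURCE_WEIGHTS = {
    "national_newspaper": 8.0,   # high reach and influence
    "regional_newspaper": 3.0,
    "blog": 1.0,
    "tweet": 0.25,
}

def attention_score(mentions):
    """Sum the weights of each mention; unknown sources default to 1.0."""
    return sum(SOURCE_WEIGHTS.get(source, 1.0) for source in mentions)

mentions = ["national_newspaper", "tweet", "tweet", "blog"]
print(attention_score(mentions))  # 8.0 + 0.25 + 0.25 + 1.0 = 9.5
```

Under a scheme like this, one national-newspaper story outweighs dozens of tweets, which is exactly the kind of differentiation a raw mention count can’t express.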
However, single-number scores pose a danger, which was also brought up in the chat: these numbers are all too easy to misinterpret as representing quality rather than simple attention (which is what our score measures).
By focusing so much of the research metrics debate on the faults of single-number indicators, though, we are missing the larger opportunity: to have discussions on how to accurately track and (dare I say it?) directly measure impact or quality. The fields of computational linguistics and scientometrics are advancing every day; I truly believe that at some point we’ll be able to understand true impact using algorithms and metrics. But as I explained in the AMA, though altmetrics in their current form bring us closer to being able to do so, we’re still a long way off.
John Oliver’s recent rant about the misrepresentation of research by the mainstream media also came up in a few questions. In a sensationalist science news environment, how can we suss out the truly impactful research from the “sexy”? Ultimately, we decided upon several options to encourage better journalistic practices: giving scientists the ability to better engage with the public (by recognizing engagement’s value in the incentives system in science), making sure university press officers are properly trained in interpreting and communicating results, and (if all else fails) banning repeat offenders from reporting on science news. 😉
What’s been your experience as a librarian?
It tickled me that several questions pertained to “librarianish” things, like what my favorite books are, how I got interested in library science in the first place, and even our profession’s gender gap. I rarely get to talk about these sorts of topics in a professional context, so it was cool to get the chance to do so with such a nice and thoughtful bunch!
My first time diving into Reddit was a great one, and scientists and laypeople alike seemed keen on the idea that we should be taking charge of the responsible use of research metrics, so that all researchers can get the credit they deserve. Many thanks to r/AskScience subreddit moderator nate for organizing the chat!