TL;DR: The whole point of altmetrics is to move away from a single way of assessing a single type of impact (citations & influence on other articles) and to provide a variety of different options that give you a broader view of impact. Also: yes, people are going to try gaming new metrics.
Jeffrey Beall has written a pretty negative blog post about altmetrics – it’s worth reading and balances out some of the more pro-altmetrics articles floating around.
It does lump all possible (and some imaginary) forms of article-level metrics together, which, ironically, annoys me slightly in the same way it annoys me when people lump all OA publishers together.
That said, I wanted to write up a response, mainly because the post includes points worth answering, but also, frankly, because I have a lot of respect for Jeffrey and the work he does with Beall’s List. I think the post raises some interesting questions about the relative usefulness – or lack thereof – of altmetrics, even if his post title is over the top (ill-conceived and meretricious?! Dude, harsh!). I was hoping to chat to Jeffrey in person at ISMTE earlier this week about it, but unfortunately our paths didn’t cross.
So, anyway, I’m obviously biased, and I don’t have all the answers, but in my opinion the main problem with Jeffrey’s post is that it takes too narrow a view of impact. The four key points I took away from the post were:
1) “Numerical values like page views will be shamelessly gamed”
2) “popularizing article-level metrics means articles about Bigfoot and about astrology will likely register a greater impact than articles about curing cancer and discovering the nature of dark matter, for there are many more people interested in popular topics than there are interested in scientific ones”
3) “As a way to measure the impact of scientific work, the journal impact factor still has great value. Indeed, the true impact of science is measured by its influence on subsequent scholarship, not on how many times it gets mentioned on Entertainment Tonight or how many Facebook likes it gets in the Maldives”
4) “Article-level metrics reflect a naïve view of the scholarly publishing world. The gold open-access model has introduced much corruption into the process of scholarly communication, so we should learn from this and avoid any system that is prone to gaming, corruption, and lack of transparency, such as article-level metrics.”
The field of altmetrics actually covers things like non-traditional outputs and contributions, and the end goal is by no means always assessment, but the post concentrates on article-level measures of impact, so that’s what we’ll focus on here too.
I can’t speak for everybody, but I can speak for Altmetric. Here are my comments:
They’ll be shamelessly gamed
Yes. I don’t have a pat answer to this: I think people will try to abuse altmetrics systems too (any system!). It’s a topic that comes up frequently at altmetrics conferences, and some more developed platforms like SSRN have been dealing with it for years.
We try to identify gaming with a combination of manual and automatic approaches, but we’re still developing our processes and stuff is going to slip through: in these cases we rely on users flagging up the article.
How do we expect people to spot gaming? At Altmetric we’ve got a golden rule, broken for only one exception: only show numbers for which we can show the underlying raw data (the exception is Mendeley, in case you’re wondering). This is why we don’t track Facebook Likes or private shares – we wouldn’t be able to show you where the count came from, as the data isn’t publicly accessible. The numbers are just a way into, and easy-to-digest support for, the underlying data.
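To make the golden rule concrete, here’s a minimal sketch in Python. None of these function or field names are Altmetric’s real API – they’re assumptions for illustration only – but they capture the idea: a count is only displayed if every mention behind it can be backed by publicly visible raw data, with Mendeley as the lone carve-out.

```python
def displayable_count(source_name, mentions, exceptions=("mendeley",)):
    """Return a count for a source only if we can show its raw data.

    `mentions` is a list of mention records; each is expected to carry a
    public URL pointing at the original tweet, blog post, news story, etc.
    Sources in `exceptions` (Mendeley reader counts, in Altmetric's case)
    are allowed through without per-mention URLs.
    """
    if source_name.lower() in exceptions:
        return len(mentions)
    backed = [m for m in mentions if m.get("public_url")]
    # If any mention lacks a public URL we can't show where the count came
    # from, so we show no number at all -- which is why things like
    # Facebook Likes and private shares wouldn't be tracked.
    if len(backed) != len(mentions):
        return None
    return len(backed)
```

The design choice the rule encodes: a number you can’t drill into is a number you can’t audit for gaming.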
Popularizing article-level metrics means articles about Bigfoot and about astrology will likely register a greater impact than articles about curing cancer
Probably not, but I get it.
If you include public attention in your definition of impact, then popular science articles probably did have a greater impact – that’s why it’s ‘popular’ science. A real-life example is PLoS One’s fellatio-in-bats paper, which was widely shared but has accumulated few scholarly citations.
But I can give you many, many other examples: articles about breastfeeding get more attention than they do cites. So do many articles about archaeology. So do articles about the effects of ionizing radiation on human health. Cystic fibrosis patient advocate groups share papers on Facebook far more often than they write research articles. NGOs tweet, they don’t submit to The Lancet.
Maybe you don’t care about social media, but you do care about whether or not your article has influenced local government policy, or appears in World Health Organization recommendations. Maybe you’re a university that cares about how many textbooks reference papers from people in its chemistry department. Maybe you don’t care about Entertainment Tonight but do care about F1000 reviews or the number of times people have saved your article to Mendeley. This is the point of altmetrics. Modern article-level metrics are about much more than download or Like counts.
Sometimes you want to measure if a paper has reached an audience beyond other people who write papers. Other times you don’t care. Some funders, fields and researchers are more concerned with this than others.
How much emphasis you place on public attention – or any other form of attention – is up to you.
The whole point of altmetrics is to move away from a single way of assessing one type of impact (citations & scholarly impact) and provide a variety of different options that give you a broader view of impact, not to force some new single way of doing things onto people.
Altmetrics isn’t an alternative to citations, it’s an alternative to only using citations.
For the record at altmetric.com we calculate a metric – the Altmetric score – that measures attention. Not quality, not good attention, not non-spammer attention (though bots & spammers are very heavily penalized)… attention. We want you to use it in cases where you think it’s useful to know how much attention a paper got in relation to its peers. If you want to assess quality then read the paper itself and, again, the data behind the numbers. If you want to get an idea of how much attention came from other scientists vs members of the public you can jump into the data and work it out.
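As a toy illustration of what “a metric that measures attention, with bots and spammers heavily penalized” might look like, here’s a short Python sketch. The weights, field names, and penalty value are invented for this example – they are not Altmetric’s actual algorithm – but they show the shape of the idea: every mention contributes according to its source, spammy mentions contribute almost nothing, and nothing here attempts to measure quality.

```python
# Hypothetical source weights: mainstream news counts for more than a tweet,
# which counts for more than a Facebook share. Purely illustrative numbers.
SOURCE_WEIGHTS = {"news": 8.0, "blog": 5.0, "twitter": 1.0, "facebook": 0.25}
SPAM_PENALTY = 0.05  # mentions flagged as bot/spam still count, but barely

def attention_score(mentions):
    """Sum weighted mentions. Measures attention -- not quality,
    not 'good' attention, just attention."""
    score = 0.0
    for m in mentions:
        weight = SOURCE_WEIGHTS.get(m["source"], 0.5)
        if m.get("is_spam"):
            weight *= SPAM_PENALTY
        score += weight
    return round(score, 2)
```

Note that the underlying `mentions` list is the real payload here; the score is just the easy-to-digest way in, which is why you’d still jump into the data to separate attention from scientists vs. the public.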
The general public lacks the credentials needed to judge or influence the impact of scientific work, and any metric that relies even a little bit on public input will prove invalid
I think this betrays a lack of awareness of where altmetrics data actually comes from. It’s not the single, amorphous “general public” that shares or discusses the vast majority of scientific articles (Winnie the Pooh and cannabis papers aside). It is other scholars or specific non-academic audiences.
If you write a paper about the best ways to help people stick to an antiviral drug regimen in South Africa then an HIV charity in Durban probably does have the credentials to influence its impact. If it’s a paper on the benefits of breastfeeding then a parents club on Facebook has a decent stake. If it’s something outlining how best to distract kids in a pediatric ward then pediatric nurses are worth listening to.
I’m not saying that every article has nice, clear examples of non-traditional impact – most don’t. Some do, however, and it seems like a no-brainer to want to be made aware of them as a reader or author.
As a way to measure the impact of scientific work, the journal impact factor still has great value.
Sure. Altmetrics isn’t about replacing the impact factor. It’s about recognizing that the impact factor isn’t a one size fits all solution.
Altmetrics of some form – incorporating citation data or citation-related indicators – should certainly replace the impact factor at the article level, for example. Plainly not all articles in Nature or Science are of excellent quality and great impact (at least in retrospect), and very high impact papers occasionally appear in smaller journals too.
Indeed, the true impact of science is measured by its influence on subsequent scholarship
Again, it all depends on your definition of ‘impact’ and whether or not you feel citations capture all of the ways that scholars are influenced.
Article-level metrics reflect a naïve view of the scholarly publishing world [..] avoid any system that is prone to gaming, corruption, and lack of transparency, such as article-level metrics
No, this is absolutely wrong – it’s exactly the other way around. Relying only on the impact factor is what’s naïve. All other issues aside, measures based on citation counts alone plainly fail to take the true impact of scientific work into account.
You also certainly can’t have it both ways and declare that the impact factor is OK despite being prone to gaming and corruption and not being properly transparent, but that unspecified new metrics aren’t!
The sentiment is noble, though, and the principles are shared by DORA – several altmetrics groups, us included, were amongst the original signatories (though it’s taking us some time to live up to it).
I think it’s great to see a bit of debate on this kind of stuff, and welcome comments or corrections.