You’re probably already familiar with the Altmetric score (that handy number seen inside an Altmetric “donut”). The score is a weighted count of all of the mentions Altmetric has tracked for an individual research output, and is designed as an indicator of the amount and reach of the attention an item has received.
We have now decided to formally name this indicator the Altmetric Attention Score.
The new name will make it easier for newcomers to altmetrics to understand at a glance what our score indicates: the volume and likely reach of the attention a piece of research has received, not its quality or impact. We’re also aiming to make it clearer that this is a number calculated and assigned by Altmetric, the company, as opposed to a generic ‘altmetrics’ score.
If you refer to the “Altmetric score” in presentations, LibGuides, or anywhere else, we encourage you to update your resources as soon as possible.
Why do we have the score?
Occasionally people ask why we use a score at all, and internally we’ve discussed it a few times. The strength and the weakness of the score is that it simplifies something quite fuzzy: what kind of attention did this research output receive? The strength comes from allowing you to rank items and understand attention quickly. The potential weakness is that it might encourage people to oversimplify, or to rely on a single number where more careful quantitative analysis is called for.
There are many things that make the score useful, and it is for these reasons that we continue to apply it:
It allows for ranking
The score helps users see at a glance which pieces of research have received a lot of attention – providing a useful indicator of where there might be attention and discussions that are worth exploring further. That activity might be positive or negative, but it can help readers at least begin to decide what altmetrics data to dive into first.
It allows for context
Of course not all research outputs should be compared to one another – there are huge differences in the way research is shared and discussed between disciplines, formats and geographies. What the score does enable users to do is to benchmark articles published in the same journal or in the same timeframe, and see how the amount of attention for one item compares with that of another. Such information can be useful for authors, for example, in determining which journal to publish in (although of course you should always also check what type of attention the research is receiving, and why!).
Identifying trends and hot topics is easier
Because the score is so easy to track, it’s possible to monitor the uptake of many items at once – similar to the way download counts might be used to monitor popular content. A rapidly rising score might bring an item to the attention of the press office or editors, who may otherwise not have been aware of its resonance amongst a broader audience.
How is it calculated?
The Altmetric Attention Score is a weighted count of all of the online attention Altmetric has found for an individual research output. This includes mentions in public policy documents and references in Wikipedia, the mainstream news, social networks, blogs and more. Click here to view a full list of the sources we track for mentions of research.
Mendeley readers, CiteULike bookmarks and Scopus citations do not count towards the score because we are unable to show you all of the details of who saved or cited the item.
More detail on the weightings of each source and how they contribute to the attention score is available here.
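To make the idea of a “weighted count” concrete, here is a minimal sketch in Python. The weights and source names below are invented for illustration only – they are not Altmetric’s actual weightings, and the real scoring also involves further adjustments not shown here.

```python
# Illustrative sketch of a weighted mention count.
# NOTE: these weights are made up for this example; Altmetric's real
# weightings differ and its scoring includes additional adjustments.
EXAMPLE_WEIGHTS = {
    "news": 8.0,
    "blog": 5.0,
    "policy": 3.0,
    "tweet": 1.0,
    "facebook": 0.25,
}

def weighted_attention_score(mentions):
    """Sum the weight of each tracked mention for one research output.

    `mentions` is a list of source names, one entry per mention found.
    Unknown sources contribute nothing to the score.
    """
    return sum(EXAMPLE_WEIGHTS.get(source, 0.0) for source in mentions)

# Two news stories, one blog post and three tweets:
mentions = ["news", "news", "blog", "tweet", "tweet", "tweet"]
print(weighted_attention_score(mentions))  # 8 + 8 + 5 + 1 + 1 + 1 = 24.0
```

The point of the sketch is simply that each mention contributes according to its source type, so one news story moves the score far more than one tweet.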
How should researchers, publishers and institutions use the score?
We’d always advise that you use the score only as a very preliminary indicator of the amount of attention an item has received. It can help you identify where there are ‘mentions’ that would be worth digging into, and signifies where an item has achieved a high level of engagement. In CVs, performance reviews or grant applications, it might be useful to provide information such as:
“This item has received an Altmetric Attention Score of 150, putting it in the top 5% in terms of attention for papers published in this journal. Coverage included stories in the Washington Post and Der Spiegel, as well as commentary from several leading bloggers. A full summary of the attention score and record of all of the online mentions can be found here” (and include a link to the associated details page)
At the institutional level, the attention score makes it easy to identify where a lot of activity is taking place – helping to find influencers or departments that are doing a particularly good job of communicating their research more broadly, or those that may need support from a scholarly communications office or similar. It can highlight where there is opportunity for improvement – for example, a great piece of research that has not received the attention it deserves, or a subject area in which the institution is aiming to become more established. The immediate, real-time feedback that the score represents also makes for an engaging starting point in discussions between the library or research support office and faculty.
Publishers can use the attention score to identify their most popular content and keep an eye on how their publications are being received – something that suddenly attracts a lot of attention (causing the score to rise) might warrant further investigation. Press, marketing and editorial teams can then use this information to monitor the activity, respond if necessary, and get an idea of which content resonates most amongst their audience.