Altmetric Blog

Assessing UK research with new metrics: Altmetric's perspective

Euan Adie, 17th October 2014

Here in the UK, HEFCE (the Higher Education Funding Council for England, which distributes central funding to English universities) is currently running an independent review on the role of metrics in research assessment.

As part of that, a couple of weeks ago the review panel convened a workshop at the University of Sussex: In Metrics We Trust? Prospects & pitfalls of new research metrics. I was lucky enough to attend and thought it was a really useful day, not least because it was a chance to hear some pretty compelling points of view.

I’m excited that altmetrics are in the mix of things being considered, and that time is being taken to carefully assess where metrics in general may be able to help with assessment as well as, probably more importantly, where they can’t.

How can altmetrics be used in a REF-like exercise?

Before anything else, here’s my perspective on the use of altmetrics data in the context of a REF-style formal assessment exercise (there are lots of other uses within an institution, which we shouldn’t forget: research isn’t all about post-publication evaluation, even if it sometimes feels that way).

When I say “altmetrics data” I mean the individual blog posts, newspaper stories, policy documents and so on, as well as counts like the number of readers on Mendeley. Not just numbers, in other words.

  • If we’re going to look at impact as well as quality, we must give people the right tools for the job.
  • Numbers don’t need to be the end goal. They can be a way of highlighting interesting data about an output that is useful for review, with the end result being a qualitative assessment. Don’t think ‘metrics’; think ‘indicators’ that a human can use to do their job better and faster.
  • On that note, narratives / stories seem like a good way of addressing a broad concept of impact.
  • Altmetrics data can help inform and support these stories in two main ways:
      • Figuring out which articles have had impact and in what way, then manually finding supporting evidence, takes a lot of effort. How do you know what direction to take the story in? Automatically collected altmetrics indicators can save time and effort by showing areas that are worth investigating further, and once you have discovered something interesting, altmetrics can help you back up the story with quantitative data.
      • They may also highlight areas you wouldn’t have discovered without access to the data. For example, altmetrics data may surface attention from countries, sources or subject areas that you wouldn’t have thought to search for.
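To make the “inform and support” idea concrete, here is a minimal sketch of the kind of triage a tool might do for an impact officer: turn raw per-source mention counts into a short human-readable summary of which sources are worth a closer look. The `cited_by_*_count` field names mirror my understanding of the public Altmetric API v1 response, and the sample record is entirely made up; treat both as illustrative assumptions, not a documented contract.

```python
# Sketch: turn Altmetric-style per-source counts into a triage summary
# that a human can scan before deciding where to dig deeper.
# Field names are assumptions modelled on the public Altmetric API v1.

SOURCE_LABELS = {
    "cited_by_msm_count": "news outlets",
    "cited_by_feeds_count": "blogs",
    "cited_by_tweeters_count": "tweeters",
    "cited_by_fbwalls_count": "Facebook pages",
    "cited_by_policies_count": "policy documents",
}

def triage_summary(record: dict) -> list[str]:
    """Return human-readable lines for every source with non-zero activity."""
    lines = []
    for field, label in SOURCE_LABELS.items():
        count = record.get(field, 0)
        if count:
            lines.append(f"{count} {label}")
    return lines

# Hypothetical record for a paper like the flood-risk example below.
example = {
    "cited_by_msm_count": 14,
    "cited_by_tweeters_count": 230,
    "cited_by_policies_count": 2,
}
for line in triage_summary(example):
    print(line)
```

The point is that the output is a prompt for a human ("two policy documents: go and read them"), not a score.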
Using altmetrics data to inform & support: an example

Alice is an impact officer at a UK university. She identifies a research project on, say, the contribution of climate change to flood risk in the UK that is a good candidate for an impact case study.

She enters any outputs – datasets, articles, software, posters – into an altmetrics tool, and gets back a report on the activity around them.

On a primary research paper:

http://www.altmetric.com/details.php?citation_id=269428

… she can quickly see some uptake in the mainstream media (the Guardian, the New York Times) and magazines (Time, New Scientist). She can see some social media activity from academics involved in the HELIX climate impacts project at Exeter, a Nature News correspondent, the science correspondent for Le Monde and the editor of CarbonBrief.org.

Switching to the policy side she can see that there are two citations tracked from government / NGO sources: a report from the Environment Agency and one from Oxfam.

These are documents from UK organizations that Alice’s institution may have already been tracking manually. But research, even research specifically about the UK, can be picked up worldwide:

http://www.altmetric.com/details.php?citation_id=1903399

The example above shows a citation from the AWMF in Germany, which plays a role similar to NICE in the UK.

Alice can support her assessment of what it all means with other indicators: by checking to see if it’s normal for papers on anthropogenic climate change and flood risks to get picked up by the international press. She can see how the levels of attention compare to other articles in the same journal.

She can do all this in five minutes, because she’s supported by the right tools and data. It doesn’t help with the next, more important part: Alice now needs to go and investigate whether anything came of that attention, how the report from the Environment Agency used the article (in this case, only to note that the research is still at an early stage), whether the report itself was used, and whether anything came out of the interest from journalists. She still needs to speak to the researcher and do the follow-up. The altmetrics data, though, gave her some leads and a running start.

As the relevant tools and data sources improve, and as our understanding grows of what kinds of impact signals can be picked up and how, so will the usefulness of altmetrics.

Why would it ever be useful to know how many Facebook shares an article got?

In the example above we talk about news mentions and policy documents. Facebook came up in the panel discussion.

If you have ten papers and the associated Facebook data, it would be a terrible, terrible idea for almost any impact evaluation exercise to use metrics as an end point and, say, rank the papers by the number of times each one was shared, or by their total Altmetric score, or something similar. On this we should all be agreed.

However, if nine papers have hardly any Facebook data associated with them and one has lots, you should check that out and see what the story is by looking at who is sharing it and why, rather than ignoring the indicator on the principle that you can’t tell the impact of a work from a number. The promise of altmetrics here is that they may help you discover something about broader impact that you wouldn’t otherwise have picked up on, or provide some ‘hard’ evidence to back up something you picked up on some other way.
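As a hedged sketch of what “check that out” could look like in practice: flag any paper whose share count stands out from the rest of a set, so a human can go and investigate the story. Nothing is ranked and no score is produced. The counts, the paper identifiers and the outlier threshold are all made up for illustration.

```python
# Sketch: flag papers whose share counts stand out from the rest of a
# set, as a prompt for human investigation -- not as a ranking.
from statistics import median

def flag_outliers(counts: dict[str, int], factor: int = 10) -> list[str]:
    """Return ids of papers whose count exceeds `factor` x the set's median."""
    baseline = median(counts.values())
    return [pid for pid, n in counts.items() if n > factor * max(baseline, 1)]

# Nine papers with hardly any shares, one with lots (hypothetical data).
shares = {"paper_%d" % i: n
          for i, n in enumerate([3, 0, 5, 2, 1, 4, 0, 2, 3, 480])}
print(flag_outliers(shares))  # flags the one heavily shared paper
```

A median-based threshold is a deliberately crude choice here; the output is a to-do list for a person, so false positives cost only a few minutes of reading.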

There are lots of ways in which indicators, and the underlying data they point to, can be used to support and inform assessment. Equally there are many ways you can use metrics inappropriately. In my opinion it would be a terrible waste of potential (and of time and money) to lump the inappropriate uses together with the valid ones and suggest that there is no room in assessment for anything except peer review unsupported by tools and supplementary data.

What’s in a name? That which we call a metric…

One opening statement at the workshop that particularly struck a chord with me was from Stephen Curry – you can find a written version on his blog. Stephen pointed out that ‘indicators’ would be a more honest word than ‘metrics’, given the semantic baggage the latter carries:

I think it would be more honest if we were to abandon the word ‘metric’ and confine ourselves to the term ‘indicator’. To my mind it captures the nature of ‘metrics’ more accurately and limits the value that we tend to attribute to them (with apologies to all the bibliometricians and scientometricians in the room).

I’ve changed my mind about this. Previously I would have said that it didn’t really matter, but I now agree absolutely. I still think that debating labels can quickly become the worst kind of navel-gazing… but there is no question that names shape people’s perceptions and eventual use (believe me, since starting a company called “Altmetric” I have become acutely aware of naming problems).

Another example of names shaping perception came up at the 1:AM conference: different audiences use the word “impact” in different ways, as shorthand for a particular kind of influence, or as the actual, final impact that work has in real life, or for citations, or for usage.

During the workshop Cameron Neylon suggested that rather than separate out “quality” and “impact” in the context of REF style assessment we should consider just the “qualities” of the work, something he had previously expanded on in the PLoS Opens blog:

Fundamentally there is a gulf between the idea of some sort of linear ranking of “quality” – whatever that might mean – and the qualities of a piece of work. “Better” makes no sense at all in isolation. Its only useful if we say “better at…” or “better for…”. Counting anything in isolation makes no sense, whether it’s citations, tweets or distance from Harvard Yard. Using data to help us understand how work is being, and could be, used does make sense.

I really like this idea but am not completely sold: I quite like separating out “quality” as distinct from other things, because frankly some qualities are more equal than others. If you can’t reproduce or trust the underlying research then it doesn’t matter what audience it reached or how it is being put into practice (or rather it matters in a different way: it’s impact you don’t want the paper to have).

Finally, I belatedly realized that when most people involved with altmetrics say “altmetrics” they mean “the qualitative AND quantitative data about outputs”, not just “the numbers and metrics about outputs”. That usage isn’t shared outside the field, and it isn’t particularly intuitive.

We’ve already started talking internally about how to best tackle the issue. Any suggestions are gratefully received!

19 Responses to “Assessing UK research with new metrics: Altmetric's perspective”

Euan Adie (@Stew)
October 17, 2014 at 12:00 am

I wrote a blog post about @altmetric and potential uses for the impact element of the REF: http://t.co/rUqGijSDg1

Lou Woodley (@LouWoodley)
October 17, 2014 at 12:00 am

Accessing UK research with new metrics - @altmetric's perspective: http://t.co/Y1aOLS3UEz <-- change term to "alternative indicators"?

Impactstory (@Impactstory)
October 17, 2014 at 12:00 am

#Altmetrics can and should be used for evaluations like #REF, argues @Stew http://t.co/TMota9NOtd

@david_colquhoun
October 17, 2014 at 12:00 am

@Stew @Impactstory I was reading http://t.co/6rxjndHqYV The title is "Assessing UK research" with #altmetrics

Fabrice Leclerc (@leclercfl)
October 18, 2014 at 12:00 am

Top #openedu story: Assessing UK research with new metrics: Altmetric’s perspec… http://t.co/UktkFq20M7, see more http://t.co/BLfaziYYRV

Digital Science (@digitalsci)
October 18, 2014 at 12:00 am

Assessing UK research with new metrics: @altmetric’s perspective http://t.co/B1mKFMkuYT

Katy alexander (@KLA2010)
October 18, 2014 at 12:00 am

My name is alternative metrics @altmetric @Stew considers role #altmetrics can/should play in assessing #ukresearch http://t.co/Xsl9sdmAMT

James Wilsdon (@jameswilsdon)
October 19, 2014 at 12:00 am

‘Assessing UK research with new metrics’: @altmetric’s @stew reflects on the recent #HEFCEmetrics workshop
http://t.co/AT4bk5oq7T

Richard Tol (@RichardTol)
October 19, 2014 at 12:00 am

AltMetrics are great for tracking media attention, but that is not how HEFCE defines impact -- and rightly so.

Digital Science (@digitalsci)
October 19, 2014 at 12:00 am

Assessing UK research with new metrics: @altmetric’s perspective http://t.co/iru7qT2CDB

Xavier Lasauca (@xavierlasauca)
October 19, 2014 at 12:00 am

Assessing UK #research with new metrics: @Altmetric’s perspective http://t.co/JYSzfIurAO #Twitter #SocialMedia

@AWTaylor83
October 20, 2014 at 12:00 am

#Altmetrics- time for a name change? Think indicators, not metrics http://t.co/w7nm8DODfe by @Stew #hefce

@janetinkler
October 20, 2014 at 12:00 am

A nice blog from @altmetric's Euan Adie (@Stew) on how he sees altmetrics being used in the REF http://t.co/5d4QIJIaAC

@laegran
October 20, 2014 at 12:00 am

On how altmetrics could be used as alternative metrics in the REF - what do you think @CRFRtweets @HelenChambers http://t.co/RkARJI9ip0 …

Sri Amudha S (@Sriamudha1)
October 30, 2014 at 12:00 am

Yes, unless one reads a little further, its difficult to get what Altmetrics is, from the term itself unlike Sciento+metrics or Biblio+metrics! Many consider ALTmetrics as ALTernative metrics which will supersede traditional metrics. Adopting the term 'Indicator' would be apt and sensible considering the current scenario as altmetrics today is the amalgamation of the online scholarly mentions or engagements. We are yet to find what altmetrics data(indicators )is capable of as far as the naming issue is concerned. With these indicators, we might come up with new information service or a methodology to measure impact or influence or performance or whatever in the near future. We never know what future holds! So as of now I would up-vote the term 'Indicators'.

Sri Amudha S (@Sriamudha1)
October 30, 2014 at 12:00 am

It's not for certain but, probably, the misconception mentioned in the blog--they mean “the qualitative AND quantitative data about outputs” not “the numbers and metrics about outputs”--was due to one of the older webpages of Altmetric.com that explained 'How is the Altmetric score calculated?', which sort of gave a view that it deals with both qualitative and quantitative data of research outputs.

[…] may also seen have seen Altmetric Founder Euan Adie’s blog post from last year, where he discussed how the term ‘metrics’ itself can seem to promise a false solution. […]

[…] and I’m convinced there are better, more holistic, ways to judge research performance. Others agree. Yes, there are those who argue that metrics are not very useful4. Like any number they can be […]
