Altmetric Blog

What could altmetrics add to the REF exercise?

Stacy Konkiel, 21st May 2015

In England, the recent national Research Excellence Framework (REF) exercise used “real world” impact for the first time to help determine how much money institutions will be allocated from the Higher Education Funding Council for England. In this post, we examine the REF results and discuss the possibilities for documenting such impacts using altmetrics.

At Altmetric, we have a keen interest in understanding the public impacts of research. We’ve been following the UK Research Excellence Framework assessment exercise closely since it was piloted in the late 2000s (just a few years before Altmetric was formally founded). There’s overlap in the types of indicators of impact we collect (called “altmetrics” in the aggregate and including media coverage, mentions of research in policy documents, and more) and types of impact that were reported by institutions for the REF (impact on culture, health, technology, and so on).

So, when the REF results were announced in March, we naturally asked ourselves, “What (if anything) could altmetrics add to the REF exercise?”

In this post, I’ll give a brief background on the REF and its implementation, and then dive into two juicy questions for us at Altmetric (and all others interested in using metrics for research evaluation): What can research metrics and indicators (both altmetrics and citation metrics) tell us about the “real world” impact of scholarship? And can they be used to help institutions prepare for evaluation exercises like the REF?

First, let’s talk about how the REF works and what that means for research evaluation.

The REF wants to know, “What have you done for taxpayers lately?”

Many countries have national evaluation exercises that attempt to get at the quality of research their scholars publish, but the REF is slightly different. In the REF, selected research is submitted by institutions to peer review panels in various subject areas, which evaluate it for both its quality and whether that research had public impacts.

There is an enormous cost to preparing for the REF, which requires institutions to compile comprehensive “impact case studies” for each of their most impactful studies, alongside reports on the number of staff they employ, the most impactful publications their authors have published, and a lot more.

Some have proposed that research metrics–both citation-based and altmetrics–may be able to lessen the burden on universities, making it easier for researchers to find the studies that are best suited for inclusion in the REF. And a previous HEFCE-sponsored review on the use of citations in the REF found that they could inform but not replace the peer review process, as indicators of impact (not necessarily evidence of impact themselves).

But using metrics for evaluation is still a controversial idea, and some have spoken out against the idea of using any metrics to inform the REF. HEFCE convened an expert panel to get to the bottom of it, the results of which are expected to be announced formally in July 2015. (The results have already been informally announced by review chair James Wilsdon, who says that altmetrics–like bibliometrics–can be useful for informing but not replacing the peer review process.)

[Image: cover of the King’s College London and Digital Science REF report]

Until then, there is rich data to be examined in the REF2014 Impact Case Studies web app (much of which is available for analysis using its API) and this excellent, thorough summary of the REF impact case study results (pictured at right). We decided to do some informal exploration of our own, to see what Altmetric data–which aim to showcase “real world” impacts beyond the academic sphere–and citation count data could tell us about the publications selected for inclusion in REF impact case studies.
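
For those who want to pull this data down for their own analysis, the web app’s API makes it possible to retrieve case study records programmatically. Below is a minimal Python sketch of that kind of workflow; note that the base URL, endpoint and parameter names here are placeholders I’ve assumed for illustration, not the documented interface, so check the web app’s own API reference before using it.

```python
# Rough sketch of fetching REF impact case studies for local analysis.
# The URL and parameter name below are placeholders, not the documented
# endpoint; consult the REF2014 Impact Case Studies API documentation
# for the real interface.
import requests

API_URL = "http://impact.ref.ac.uk/casestudiesapi/"  # placeholder base URL

def fetch_case_studies(institution):
    """Return case study records for one institution (response shape assumed)."""
    resp = requests.get(API_URL, params={"Institution": institution})
    resp.raise_for_status()
    return resp.json()

# Example (hypothetical usage):
# studies = fetch_case_studies("University of Exeter")
# print(len(studies), "case studies retrieved")
```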

What can altmetrics tell us about “real world” research impacts?

Going into this thought experiment, I had two major assumptions about what Altmetric and citation data could tell me (and what it couldn’t tell me) about the impact of publications chosen for REF impact case studies:

  • Assumption 1: Altmetric data could find indicators of “real world” impacts in key areas like policy and public discussion of research. That’s because there’s likely overlap between the “real world” attention data we report and the kinds of evidence universities often use in their REF impact case studies (i.e. mentions in policy documents or mainstream news outlets).

  • Assumption 2: Citation counts aren’t really useful for highlighting research to be used in the impact case study portion of the REF. Citations are measures of impact among scholarly audiences, but are not useful for understanding the effects of scholarship on the public, policy makers, etc. Hence, they’re not very useful here. That said, citation counts are the coin of the realm in academia, so it’s possible that faculty preparing impact case studies may use citations to help them select what research is worthy of inclusion.

These assumptions led to some questions that guided my poking and prodding of the data:

    1. Are there differences between what universities think have the highest impact on the “real world” (i.e. are submitted as REF impact case studies) and what’s got the highest “real world” attention as measured by Altmetric? If so, what relevant things can Altmetric learn from these differences?
    2. If “impact on policy” is one of the most popular forms of impact submitted to the REF, do articles with policy impacts (as reported by Altmetric) match what’s been submitted in REF impact case studies?
    3. Can citation counts serve as a good predictor of what will be submitted with REF impact case studies?

I decided to dive into impact data for a very small sample of publications from two randomly chosen universities: the University of Exeter and the London School of Hygiene and Tropical Medicine.

I created three groups of publications to compare for each university, six groups total for comparing across both universities:

    1. Top ten articles by overall attention for each institution, as measured by Altmetric’s attention score;
    2. Top ten articles by attention for each institution that were submitted with a non-redacted REF impact case study; and
    3. Top ten articles by Scopus citation count for each institution*.

Though the REF impact case studies included publications primarily released between 2008 and 2013 (as well as older research that underpins the more recent publications), the publications I used were limited to those published online between 2012 and 2013, when the most comprehensive Altmetric attention data would be available.
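
The grouping itself was done in the Altmetric Explorer, but the same cut of the data could be reproduced with a short script. Here is a minimal pandas sketch, assuming a locally prepared export with hypothetical columns doi, year, altmetric_score, scopus_citations and in_ref_ics; those column names are mine, not an official Altmetric or Scopus schema.

```python
# Minimal sketch: build the three comparison groups for one institution.
# The CSV file and its column names are assumptions about a locally
# prepared export, not an official Altmetric or Scopus format.
import pandas as pd

pubs = pd.read_csv("institution_publications.csv")  # hypothetical export

# Limit to the 2012-2013 window used in this exercise.
recent = pubs[pubs["year"].between(2012, 2013)]

# Group 1: top ten by overall Altmetric attention score.
top_attention = recent.nlargest(10, "altmetric_score")

# Group 2: top ten by attention among publications submitted with a
# non-redacted REF impact case study (in_ref_ics is a boolean flag).
top_ics = recent[recent["in_ref_ics"]].nlargest(10, "altmetric_score")

# Group 3: top ten by Scopus citation count.
top_cited = recent.nlargest(10, "scopus_citations")

print(top_attention[["doi", "altmetric_score"]])
```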

I also used Altmetric to dig into the qualitative data underlying the pure numbers, to see if I could discover anything interesting about what the press, members of the public or policymakers were saying about each institution’s research, how it was being used, and so on.

Before going any further, I should acknowledge some limitations to this exercise that you should bear in mind when reading through the conclusions I’ve drawn. First and foremost is that my sample size was too small to draw any conclusions about the larger body of publications produced across the entirety of England. In fact, while this data may show trends, it’s unclear whether these trends would hold up across each institution’s entire body of research. Similarly, using publication data from the 2012-2013 time period alone means that I’ve examined only a small slice of what was submitted with REF impact case studies overall. And finally, I used the Altmetric attention score as a means of selecting the highest attention articles for inclusion in this thought exercise. It’s a measure that no doubt biased my findings in a variety of ways.

With that in mind, here’s what I found out.

Are there differences between what universities think have the highest impact on the “real world” (i.e. are submitted as REF impact case studies) and what’s actually got the highest “real world” attention (as measured by the Altmetric score)?

In the Altmetric Explorer, you can see which articles are the most popular in any given group of articles you define. By default, articles are listed by their Altmetric score: the amount of attention–both scholarly and public–that they have received overall.

You can also use the Explorer to dig into the different types of attention a group of articles has received and filter out all but specific attention types (mentions in policy documents, peer reviews, online discussions, etc).

So, I fed the list of articles from 2012-2013 that were submitted with each institution’s REF impact case studies into the Explorer, and compared their Altmetric attention data with that of the overall lists of publications from each institution during the same time period (sourced from Scopus). I then used the Altmetric score to determine the top ten highest attention articles from the REF, and did the same for the overall list of articles from each institution. Those “top ten” lists were then compared, with unexpected results.
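
For anyone who wants to approximate this comparison outside the Explorer, here is a small sketch using Altmetric’s free per-DOI API (https://api.altmetric.com/v1/doi/<doi>), which at the time of writing returns a JSON record with a score field, or a 404 when no attention has been tracked. The DOI lists below are placeholders, and the endpoint is rate-limited, so large batches should go through the Explorer or the Altmetric team directly.

```python
# Sketch: rank two DOI lists by Altmetric attention score and compare
# their top tens. The DOIs below are placeholders for the REF-submitted
# and institution-wide article lists described above.
import requests

def altmetric_score(doi):
    """Fetch the Altmetric attention score for a DOI (0.0 if untracked)."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code == 404:  # no attention tracked for this DOI
        return 0.0
    resp.raise_for_status()
    return resp.json().get("score", 0.0)

def top_ten(dois):
    """Return the ten DOIs with the highest attention scores."""
    return sorted(dois, key=altmetric_score, reverse=True)[:10]

ics_dois = ["10.1000/example-ics-1", "10.1000/example-ics-2"]  # placeholder list
all_dois = ["10.1000/example-all-1", "10.1000/example-all-2"]  # placeholder list

overlap = set(top_ten(ics_dois)) & set(top_ten(all_dois))
print("Articles in both top-ten lists:", overlap or "none")
```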

There is no real overlap between articles that are simply “high attention” (i.e. articles that have the highest Altmetric scores) and what was submitted with REF impact case studies. That’s likely because the Altmetric score measures both scholarly and public attention, and gives different weights to types of attention that may not match up with the types of impact documented in impact case studies.

However, when you drill down into certain types of attention–in this case, mentions in policy documents–you do see some overlap between each institution’s “high attention” articles for that type of attention and what was submitted with REF impact case studies.

Even though the Altmetric score alone can’t always help choose the specific articles to submit with REF impact case studies, altmetrics in general may be able to help universities choose themes for the case studies. Here’s why: for both universities, the disciplines of publications submitted for the REF impact case studies (primarily public health, epidemiology, and climate change) closely matched the disciplines of overall “high attention” publications, as measured by the Altmetric Explorer.
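
As a quick illustration of that kind of theme-level check, one could simply compare discipline counts for the two groups; the discipline labels and lists below are invented for illustration only.

```python
# Sketch: compare the discipline mix of REF-submitted vs. high-attention
# publications. The labels below are invented placeholders, not real data.
from collections import Counter

ics_disciplines = ["public health", "epidemiology", "climate change", "public health"]
high_attention_disciplines = ["public health", "climate change", "epidemiology", "genetics"]

print("REF ICS disciplines:       ", Counter(ics_disciplines).most_common(3))
print("High-attention disciplines:", Counter(high_attention_disciplines).most_common(3))
```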

So, we don’t yet have precise, predictive analytics powered by altmetrics, but altmetrics data can potentially help us begin to narrow down the disciplines whose research has the most “real world” implications.

If “impact on policy” is one of the most popular forms of impact submitted to the REF, do articles with policy impacts (as reported by Altmetric) match what’s been submitted in REF impact case studies?

Yes. We found many more articles with mentions in policy documents than were chosen for inclusion in REF impact case studies for each institution. Yet, it’s still likely that a human expert would be required to select which Altmetric-discovered policy document citations are best to submit as impact evidence.

To find articles with policy impacts, I plugged all publications published between 2012 and 2013 for both universities into the Explorer app. Then, I used Explorer’s filters to create a list of articles for each institution that were cited in policy documents.

Of all articles published between 2012 and 2013, thirty publications from the University of Exeter and fifty-five from the London School of Hygiene and Tropical Medicine had been mentioned in policy documents.

But how many of those articles were included in REF impact case studies? Turns out, the University of Exeter’s impact case studies included only five of the thirty articles that our Explorer app found to have policy impacts, and LSHTM’s impact case studies included only one of the fifty-five articles that we found to have policy impacts.
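
The sketch below shows how that kind of overlap check can be scripted against the same per-DOI API. Note that the policy-mention field name used here (cited_by_policies_count) is my assumption about the response; confirm the exact key in a live record, and treat the DOI lists as placeholders.

```python
# Sketch: which policy-cited articles were also submitted with impact
# case studies? "cited_by_policies_count" is an assumed field name in the
# Altmetric API response; verify it against a live record before relying on it.
import requests

def has_policy_mentions(doi):
    """True if Altmetric has recorded at least one policy document mention."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code == 404:  # no Altmetric record at all
        return False
    resp.raise_for_status()
    return resp.json().get("cited_by_policies_count", 0) > 0

all_dois = ["10.1000/placeholder-1", "10.1000/placeholder-2"]  # all 2012-2013 articles
ics_dois = {"10.1000/placeholder-2"}                           # submitted with ICSs

policy_cited = {doi for doi in all_dois if has_policy_mentions(doi)}
overlap = policy_cited & ics_dois
print(f"{len(overlap)} of {len(policy_cited)} policy-cited articles "
      f"also appear in impact case studies")
```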

I think there are two possible reasons for this discrepancy. The first is that each university deliberately selected only a small subset of its policy-relevant scholarship, either to showcase the work likely to leave the most lasting mark on public policy or because it wanted to submit research with only certain types of impacts (e.g. technology commercialisation). The other is that the researchers compiling impact case studies for their universities simply weren’t aware that these other citations in policy documents existed.

However, this is currently only speculation. We’ll need to talk with university administrators to know more about how impact case studies are selected. (More on that below.)

Can citation counts serve as a good predictor of what will be submitted to the REF?

No. Neither university submitted any articles with their impact case studies that were in the “top ten by citation” list, presumably because citations measure scholarly–not public–attention. However, that’s not to say that other universities would not use citation counts to select what to submit with REF impact case studies.

So what does this mean in practice?

Generally speaking, these findings suggest that Altmetric data can be useful in helping universities identify themes in the “real world” impact their research has had, and the diversity of attention that research has received beyond the academy. This data could be useful when building persuasive cases about the diverse impacts of research. For example, it can help scholars discover the impact that they’ve had upon public policy.

However, it’s unclear whether Altmetric data could help researchers choose specific publications to submit with impact case studies for their university. We’ll be doing interviews soon with university administrators to better understand how their selection process worked for REF2014, and whether Altmetric would be useful in future exercises.

There’s more digging to be done

In getting up close and personal with the Altmetric data during the course of this exercise, I came to realize that I had another assumption underlying my understanding of the data:

  • Assumption 3: There are probably differences in the types of research outputs that are included in REF impact case studies and what outputs get a lot of attention overall, as measured by Altmetric. There are also probably differences in the types of attention they receive online. I’ve guessed that Open Access publications were more likely to be included in impact case studies (as all REF-submitted documents must be Open Access by the time REF2020 rolls around), that the most popular articles overall saw more love from Twitter than chosen-for-REF articles, and that the most popular articles overall on Altmetric had orders of magnitude more attention than chosen-for-REF articles.

And that assumption led to three more questions:

    1. Are there differences between what’s got the highest scholarly attention (citations), the highest “real world” attention (as measured by Altmetric), and what’s been submitted with REF impact case studies? If so, what are they?
    2. What are the common characteristics of the most popular (as measured by Altmetric) outputs submitted with REF impact case studies vs. the overall most popular research published at each university?
    3. What are the characteristics of the attention received by REF-submitted impact case study outputs and by high-attention Altmetric articles?

So, I’m rolling up my sleeves and getting to work.

Next month, I’ll share the answers I’ve found to the questions above, and hopefully also the perspectives of research administrators who prepared for the REF (who can tell me if my assumptions are on the mark or not).

In the meantime, I encourage you to check out the REF impact case study website and the King’s College London and Digital Science “deep dive” report, which offers a 30,000-foot view of REF impact case studies’ themes.

* The Altmetric data and Scopus citation information for these three groups of articles have been archived on Figshare.
