Altmetric Blog

Altmetrics & REF2014: characteristics and contrasts

Stacy Konkiel, 8th July 2015

In May, I shared some interesting results from a small-scale exercise I ran, comparing the outputs submitted with REF impact case studies against indicators of “real world” impact sourced from Altmetric data. My findings suggested that Altmetric can help find specific mentions of research in public policy documents that would otherwise go unreported in REF impact case studies. Altmetric may also help find overall themes in the research that has the most “real world” impact (e.g. epidemiology, climate change) in a way that citations can’t.

The data also raised some questions about differences between articles with high citation counts and those with a lot of Altmetric attention:

  1. Are there differences between what’s got the highest scholarly attention (citations), the highest “real world” attention (as measured by Altmetric), and what’s been submitted with REF impact case studies?
  2. What are the common characteristics of the most popular (as measured by Altmetric) outputs submitted with REF impact case studies vs. the overall most popular research published at each university?
  3. What are the characteristics of the attention received by REF-submitted impact case study outputs and by high-attention Altmetric articles?

In today’s post, I’ll dig into these questions, with an eye towards shedding more light on themes that might make the REF preparation work of researchers, impact officers, and other UK university administrators easier. The articles data used in this analysis can be found on figshare.

Are there differences between what’s got the highest scholarly attention (citations), the highest “real world” attention via Altmetric, and what was submitted with REF impact case studies?

Short answer: yes.

There seems to be little correlation between citation counts and the overall Altmetric scores of the publications I looked at. That’s in line with other studies that have examined correlations between citations and altmetrics.
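A rank correlation such as Spearman’s ρ is one way to check a claim like this. Here’s a minimal, self-contained sketch; note that the citation counts and Altmetric scores below are made-up illustrative values, not the figshare dataset from this analysis:

```python
# Spearman rank correlation between citation counts and Altmetric scores.
# The data below are invented for illustration, not drawn from the post's dataset.

def rank(values):
    """Return 1-based ranks, averaging ranks for tied values."""
    sorted_vals = sorted(values)
    ranks = []
    for v in values:
        first = sorted_vals.index(v) + 1          # first position of v in sorted order
        last = first + sorted_vals.count(v) - 1   # last position (handles ties)
        ranks.append((first + last) / 2)
    return ranks

def spearman(xs, ys):
    """Spearman's rho: the Pearson correlation of the two rank sequences."""
    rx, ry = rank(xs), rank(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sdx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sdy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sdx * sdy)

citations = [120, 45, 300, 12, 80, 5, 210, 60]
altmetric_scores = [30, 400, 90, 15, 600, 20, 8, 250]

rho = spearman(citations, altmetric_scores)
print(f"Spearman's rho: {rho:.2f}")  # a value near zero indicates little correlation
```

In practice you would feed in the real per-article citation counts and Altmetric scores (e.g. via `scipy.stats.spearmanr`); the point of the sketch is just that a ρ near zero is what “little correlation” looks like.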

And articles with many citations and high Altmetric scores aren’t any likelier to be submitted with REF impact case studies, as found in the previous post.

In other words, when looking at simple attention data (citations and Altmetric scores), there doesn’t seem to be overlap with what’s been chosen for submission with REF impact case studies. But when you drill down into the characteristics of the Altmetric attention data–and the characteristics of the articles themselves–themes do emerge.

What are the common characteristics of the most popular (as measured by Altmetric) outputs submitted with REF impact case studies vs. the overall most popular research published at each university?

There are some common themes found in the types of articles that appear in each university’s “top ten” groups of publications.

Journal of publication: Highly cited publications by London School of Hygiene and Tropical Medicine (LSHTM) authors only appeared in two journals: The Lancet and New England Journal of Medicine. However, this homogeneity of journal title didn’t hold up for highly cited publications by University of Exeter (UE) authors, which were published in a greater variety of journals.

Authorship numbers: Highly-cited LSHTM publications were also more likely to have many authors than the university’s REF-submitted or high-attention publications, or than any “top ten” publications published by UE authors. For highly-cited LSHTM publications, the minimum number of authors was eleven.

Publication date: Overall, highly-cited articles were more likely to be published in 2012 than 2013, which makes sense, given citation delays caused by the publication process. Altmetrics, sourced primarily from text mining and APIs, tend to accumulate much more quickly than citations.

Open Access: LSHTM articles were more likely to be Open Access if highly cited or submitted with a REF impact case study than those that were simply high attention. UE articles were only slightly more likely to be Open Access if highly cited, and were equally as likely to be OA if submitted with a REF impact case study or of high attention overall.

Type of publication: Editorials published by LSHTM authors were more likely to be of high attention, and research articles were more likely to be highly cited or submitted with REF impact case studies. UE publications overall were more likely to be research articles, whether highly cited, of high attention, or submitted with REF impact case studies. No other types of research outputs were among the highest attention, submitted-to-REF, or highly cited lists.

Subject: For both universities, publications that appeared in any “top ten” group were more likely to focus on public health, epidemiology, or climate change, no matter the type of attention they received.

Of all the themes uncovered, publications’ subjects seem to be the most useful benchmark for researchers to use to select and compile impact case studies. But to know for sure, we’ll have to ask researchers and impact officers themselves how they settled upon particular case studies, and whether they even considered using citation data to inform those themes.

What are the characteristics of the attention that has been received both by outputs submitted with REF impact case studies and by high-attention Altmetric articles from each institution?

Overall, compared with articles that simply attracted a lot of attention (as measured by the Altmetric score), REF-submitted outputs were less likely to receive news coverage or appear on Reddit, and, surprisingly, slightly less likely to appear in policy documents (though this may be due to the selection of policy documents that we currently track). In terms of demographics (as sourced from Twitter and Mendeley data), both universities had the most impact among audiences in Great Britain and the United States.

Wikipedia: Articles from both groups were equally likely to have a Wikipedia mention (one in ten articles in each group was cited on the platform).

Twitter: Articles from both groups were equally likely to receive attention on Twitter, although REF-submitted outputs had on average ten times fewer tweets than the highest-attention articles did. Interestingly, University of Exeter’s REF outputs were slightly more likely than its highest-attention outputs to be shared among scientists rather than members of the public, whereas LSHTM’s REF outputs had more diverse “top tweeters” overall, including members of the public, scientists, practitioners, and science communicators.

Qualitative data: At Altmetric, we consider the auditable qualitative data we provide to be just as important (if not more important) than the numbers we report. So, I took a look at who was saying what (and where) for each publication. Two points and one example stood out to me:

  • Often, Facebook posts are made by groups promoting research that’s relevant to their audiences. For example, an autism awareness organization might share an article on recent developments in neurological research. Alternatively–in a worst case scenario for many scientists–scholarship is sometimes also shared by groups like climate change denialists that misunderstand the science described. However, having access to what’s being said in both types of groups provides a valuable opportunity for the authors to engage with the members of the public who are reading their work and, in some cases, misinterpreting it.
  • For the LSHTM “highest-attention” article, “The Future of the NHS–Irreversible privatization?”, it was fascinating to read through the commentaries that accompanied the public Facebook and Google+ shares it received. Like the article itself, many were critical of the idea of privatizing the United Kingdom’s National Health Service. It was shared by many biomedical journals, professional groups, and patients’ rights organizations.
  • One final example of “public discussion” set me to thinking: researchers have been known to scoff at the idea that it’s valuable to have laypersons discussing their work. For example, it may seem trivial to some that this group on training for cyclists has posted some recommendations on cycling performance and nitrite consumption that cite research on the topic. Yet, is that not also a small form of public impact? If research is advancing knowledge among the public, by and large that’s a good thing.

So what does this mean in practice?

The themes in publications uncovered above may have little bearing on how researchers choose articles for submission with REF impact case studies based on variables like journal title, number of authors, and so on. (Which is a very good thing, as such variables have little bearing on the quality or impact of an article itself.)

However, as I found in my previous post on this topic, it’s possible that Altmetric attention data may be useful in choosing the subjects on which to base impact case study selections. But Dr. Rosa Scobles–Acting Director of Planning at Brunel University, who helped coordinate her university’s REF2014 submission–is not so sure.

“Altmetrics might be useful in helping to prepare REF impact case studies if online attention is a step on the road to impact,” Scobles recently explained to me via telephone. “That might be true especially if the steps you’ve taken towards engagement have impact. For example, if you’re doing a public health campaign and part of that campaign is to raise awareness using social media, you could say that measurable engagement (retweets, followers, Twitter impressions) points to true impact (behavior change). Or if your research informs public policy, and you can read that policy online, that might be evidence of impact. But most types of ‘impact’ are very difficult to measure in general.”

Another academic who helped prepare for REF2014–Dr. Amy Gibbons, a Faculty Impact Officer at Lancaster University–recently emailed to offer her thoughts on the process, specifically the question of why universities whose research is cited often in policy documents might not include that information in their REF impact case studies:

“A third possibility for why only a small percentage of the policy impact articles were mentioned in case studies, is that the researcher felt that the study/research was not longitudinal or in-depth enough yet for a case study but instead individual instances may have been submitted to their department/faculty for inclusion in the impact template (detailing overall types of impact activity to accompany the case studies for submission to a panel – worth 20% of the impact submission in REF 2014).”

She makes an excellent point: the value of research changes over time, with what we consider “REF-worthy” impact possibly only being measurable in the longer term.

Overall, what’s most relevant for the REF is the qualitative Altmetric attention data that’s now available. In terms of public engagement, qualitative Altmetric data can be used to help scientists connect with their audiences, and that’s an important “real world” impact by my estimation. And, once articles are chosen for submission with REF impact case studies, researchers and impact officers can now document who is saying what about their university’s research, and whether that discussion is happening in the news or in a policy document.
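Documenting “who is saying what, and where” lends itself to a simple summary over an article’s attention record. The sketch below works from an Altmetric-style JSON record; the `cited_by_*_count` field names follow my understanding of the public Altmetric v1 details API, so treat the exact schema as an assumption, and the numbers as invented:

```python
import json

# Illustrative summary of attention sources for one article.
# ASSUMPTION: the "cited_by_*_count" field names mimic the public
# Altmetric v1 API schema; the values below are made up.
sample_response = json.dumps({
    "title": "The Future of the NHS - Irreversible privatization?",
    "score": 183.5,
    "cited_by_tweeters_count": 212,
    "cited_by_fbwalls_count": 31,
    "cited_by_feeds_count": 4,       # blogs
    "cited_by_policies_count": 2,
    "cited_by_wikipedia_count": 1,
})

SOURCE_LABELS = {
    "cited_by_tweeters_count": "Twitter",
    "cited_by_fbwalls_count": "Facebook",
    "cited_by_feeds_count": "Blogs",
    "cited_by_policies_count": "Policy documents",
    "cited_by_wikipedia_count": "Wikipedia",
}

def summarise_attention(raw_json):
    """Return (source label, mention count) pairs, largest first."""
    record = json.loads(raw_json)
    counts = [(label, record.get(field, 0)) for field, label in SOURCE_LABELS.items()]
    return sorted(counts, key=lambda pair: pair[1], reverse=True)

for label, count in summarise_attention(sample_response):
    print(f"{label:>16}: {count}")
```

A report like this, paired with the qualitative detail of who posted each mention, is the kind of evidence an impact officer could attach to a case study.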

Looking towards the future

Reading through REF impact case studies has made one thing clear: universities have varied motivations for selecting what to submit to the REF, and those motivations aren’t necessarily reflected in the “one size fits all” approach that Altmetric and other altmetrics services take when building reports. I wonder if market demands will drive altmetrics services like ours and others towards being more flexible and modular. Theoretically, universities could pick and choose the modules that would help them uncover impacts like “technology commercialization” or “policy impact” (if those were important to them), or any other number of impact flavors that help them better communicate their value to the world.

Tomorrow, the much-anticipated results of the HEFCE metrics review (which looked at whether altmetrics might be useful for evaluation purposes in the next REF) will be announced. I’m eager to see the recommendations of the scholars who led the review, and whether they agree that altmetrics can be indicators (rather than evidence) of potential impact.

Did you help prepare for REF2014 and want to share your thoughts on the role of altmetrics in assessment? If so, leave them in the comments below!
