Altmetric Blog

Riding the waves of the Metric Tide

Stacy Konkiel, 10th July 2015

We’ve just put down our virtual copies of The Metric Tide, a report compiled by an expert panel of academics, scientometricians, and university administrators on the role of bibliometrics and altmetrics in research assessment (including the UK’s next REF). What an excellent read.

In the last 24 hours, many smart people have published articles dissecting various aspects of the report. To this thoughtful and productive discussion, we’d like to add our voice.

Below, we tease out what we believe to be the most important themes from the report, and supplement the current discussion with our thoughts on how altmetrics in particular can play a role in research assessment in the next REF.


Altmetrics as a complement, not replacement

We’ve long said that altmetrics should be a complement to, not a replacement for, citation-based metrics and expert peer review. So it was heartening to see the same sentiment echoed in The Metric Tide report (“Metrics should support, not supplant, expert judgement”).

By and large, altmetrics, like citation-based metrics, cannot tell us much about research quality (though many people assume that’s what they’re for). They do, however, have some distinct advantages over citation-based metrics.

Altmetrics are faster to accumulate than citations, so it’s possible to get a sense of an article’s potential reach and influence even if it was only recently published. (This is useful in an exercise like the REF, where you might be compiling impact evidence that includes articles published only in the previous year – or even the previous month.) Altmetrics are also much more diverse than citations, in that they can measure research attention, help flag up routes to impact, and (occasionally, for some sources) quality, among not just scholars but also members of the public, practitioners, policy makers, and other stakeholder audiences.

These differences make altmetrics a powerful complement to citations, which measure attention from a scholarly audience. And they’re a useful complement to expert peer review, in that they can quickly bring together data about, and reactions to, an article that the reviewer may not have noticed themselves. (You can find the full review criteria used by the REF panels here.)

Indicators rather than metrics

The Metric Tide report didn’t spend much time on what we consider an important consideration when using altmetrics data for assessment: such data is typically an indicator of impact rather than a measure of it. It points to, and potentially provides evidence for, a route to impact.

That’s why we believe it’s very important for altmetrics providers to report on “who’s saying what about research” in a way that’s easy for others to discover and read. Computers can’t consistently and correctly parse sentiment from documents or online mentions, and they can’t (yet?) figure out which policies actually get put into practice, or who actually acts on research they’ve cited or mentioned. It’s necessary for humans to parse the data and potentially investigate further. The data alone doesn’t give us enough to, say, write a full impact case study automatically.

That’s the downside (for anybody expecting a quick metric win). The upside is that by taking on the burden of systematically collecting data from a wide variety of sources, each a different kind of indicator, we can make it much easier for a human to find and create promising impact stories. Hopefully, by doing the heavy lifting of looking at policy outputs, educational materials, and books and datasets as well as articles, we can help remove some potential biases too.

We’d encourage the community to read The Metric Tide report with this in mind. We should see indicators and the related, qualitative “mention” data as two inseparable sides of the same coin.

Our metrics should be as diverse as reality

The Metric Tide pointed to two main areas in which attention to diversity is very important: disciplinary differences in what’s considered “research impact”, and the ways in which metrics, when not contextualized, can perpetuate systemic inequalities based on gender, career stage, language, and more.

We have little to add to the point that disciplinary differences in what constitutes “impact” are many. Indeed, we vigorously agree that creating discipline-specific “baskets of metrics” is a much better approach to using metrics for assessment than attempting to use citation-based metrics alone to understand research impact across all fields.

In all disciplines, these “baskets of metrics” should be carefully constructed to include the “alternative indicators that support equality and diversity” that the report hints at. We’d suggest that such “diversity” indicators for the impact of research should include metrics sourced from platforms that highlight regional (especially non-Western) impacts (like Sina Weibo, VK, and others), as well as data features like geolocation and language that can do the same.

All metrics should also be contextualized using percentiles, which not only allow one to more accurately compare articles published in the same year and discipline, but also to more fairly assess the research of female academics (which studies show tends to be cited less often than comparable work by male academics), early career researchers, and so on.
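To make this concrete, here’s a minimal sketch (in Python, with made-up numbers) of the kind of percentile calculation we have in mind: an article’s score is ranked against a cohort of articles from the same year and discipline, rather than against the whole literature.

    from bisect import bisect_left

    def percentile_rank(value, cohort):
        """Percentile rank of `value` within `cohort`, e.g. citation or
        mention counts for articles from the same year and discipline."""
        ordered = sorted(cohort)
        # Fraction of the cohort scoring strictly below `value`.
        return 100.0 * bisect_left(ordered, value) / len(ordered)

    # Illustrative numbers only: counts for a hypothetical cohort.
    cohort = [0, 0, 1, 2, 2, 3, 5, 8, 13, 40]
    print(percentile_rank(5, cohort))  # 60.0: outperforms 60% of its cohort

The same calculation works for any indicator, which is what makes percentiles a fair basis for comparison across fields, career stages, and publication years.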

Transparency is necessary

We vigorously agree with the report’s statement that “there is a need for greater transparency in the construction and use of indicators”. Transparency is necessary in terms of both how data is collected (what’s considered a “mention”? Where is this mention data collected from?) and how it is displayed.

To that point, we think The Metric Tide could go one step further in its recommendations for what constitutes “responsible metrics”. In addition to metrics being robust, humble, transparent, diverse, and reflective of changes in technology and society, we believe metrics should also be auditable: if somebody says an article has 50 citations, you should be able to see those 50 citations.
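As a sketch of what “auditable” looks like in practice, the snippet below (Python) fetches the individual citing records behind a citation count. We use the OpenCitations COCI endpoint purely for illustration, and the example DOI is arbitrary; the point is that any provider claiming “this article has 50 citations” should expose an equivalent list.

    import requests

    # Illustrative endpoint: OpenCitations' COCI index of citations to a DOI.
    API = "https://opencitations.net/index/coci/api/v1/citations/{doi}"

    def list_citations(doi):
        """Return the DOIs of works citing `doi`, one per citation claimed."""
        resp = requests.get(API.format(doi=doi), timeout=30)
        resp.raise_for_status()
        return [record["citing"] for record in resp.json()]

    citing = list_citations("10.1186/1756-8722-6-59")  # an example DOI
    print(len(citing), "citations, each individually inspectable:")
    for doi in citing[:5]:
        print("  ", doi)

If the headline count and the underlying list don’t match, the metric fails the audit.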

Such transparency is paramount if we are to conscionably use metrics for assessment, a decision that will have huge effects on individuals and universities alike.

More tangible change is required before we can fully rely on metrics

The final (and somewhat sobering) theme of The Metric Tide came in the form of many reminders of just how far scholarly communication still has to go before we can implement metrics for assessment in a useful and fair way.

In particular, the limited use of DOIs and other persistent identifiers when discussing research online is a major stumbling block to accurately tracking metrics for all research. Moreover, a scholarly communication infrastructure that, for the most part, does not interoperate makes compiling information for assessment exercises like the REF a very difficult and costly endeavor.

(Interestingly, this itself speaks to diversity and disciplinary differences: we see many smaller journals and publishing platforms in the BRIC countries and the developing world eschew DOIs because of the cost. DOIs are more prevalent in STM than in the arts and humanities. Some outputs suit DOIs because they are born and live digitally; others, like concerts or sculptures, don’t. But the perfect shouldn’t be the enemy of the good.)
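To show why persistent identifiers matter so much for tracking, here’s a brief sketch (Python, against the public Crossref REST API) of how a single DOI lookup yields canonical metadata that lets any online mention be matched unambiguously to a work. Without the identifier, aggregators must fall back on error-prone title matching.

    import requests

    def doi_metadata(doi):
        """Fetch canonical metadata for a DOI from the Crossref REST API."""
        resp = requests.get("https://api.crossref.org/works/" + doi, timeout=30)
        resp.raise_for_status()
        work = resp.json()["message"]
        # Crossref returns titles and container titles as lists.
        return {
            "title": (work.get("title") or [""])[0],
            "journal": (work.get("container-title") or [""])[0],
            "issued": work.get("issued", {}).get("date-parts"),
        }

    # Usage (any Crossref-registered DOI will do):
    #   print(doi_metadata("10.xxxx/example"))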

Luckily, there’s a fairly clear path forward: The Metric Tide calls for governments and funders to invest more in research information infrastructure, especially to increase interoperability with systems like ORCID. We’d add to that a call for the private sector (including other metrics aggregators) to break down information silos in favor of building, or allowing, technologies that “play well” with others, even when they’re commercial. An API for Google Scholar would be great, but it requires publishers to allow it.

The report also proposes the establishment of a Forum for Responsible Metrics, which we applaud and intend to support in any way we can going forward (as we’ve done in the past with other common-sense, metrics-related initiatives like DORA and the Leiden Manifesto).

We look forward to seeing how the recommendations made in The Metrics Tide play out in the coming years, both here in the UK and in academia worldwide, and to working together towards a future where metrics are used intelligently as part of a much wider scholarly agenda.
