Altmetric Blog

Subject area benchmarking, altmetrics, and responsible metrics

Stacy Konkiel, 22nd October 2018

In this post, Stacy Konkiel, Director of Research Relations at Altmetric, examines the evaluation uses and limitations of the new article-level subject data within Altmetric Explorer.    

Last week, we introduced an exciting new feature in the Altmetric Explorer: article-level subject classifications.

In this post, I want to explain what this means in practice for those who use the Explorer for evaluation: how it improves upon the subject classification practices used by other bibliometrics data providers, and also what its current limitations are.

There are a number of bibliometrics products and data providers that claim to offer subject area classification for articles, but they have one major weakness (in our view, at least): they mostly assign subjects to articles based on journal-level subject classifications, neglecting the rich information often available in the form of titles, abstracts, and full-text.

Using a journal-only approach would mean that an interdisciplinary article on bioethics published in a medical journal would be labeled “Medical sciences”, rather than “Philosophy”. It would also mean that in most bibliometrics products, the article’s metrics would be compared to other articles in “Medical sciences”, rather than those in “Philosophy”.

Those journal- and article-level disciplinary discrepancies present a big problem for evaluators who want to understand the influence of an article, given that citation patterns and altmetrics can differ wildly from discipline to discipline. In the above example, such discrepancies are bad for the bioethicist when her article is compared to other medical articles (which tend to have higher altmetrics and citation rates than philosophy articles), but great when it is compared to other philosophy articles.

These discrepancies are why we introduced article-level subject classifications to publication records in the Altmetric Explorer. The classifications are powered by data from Dimensions, which uses algorithms to automatically sort journal articles into particular subject areas. The assigned subject areas are based on the Fields of Research classification codes that were developed by the Australian government to aid in the country’s national evaluation program.
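The actual Dimensions pipeline is far more sophisticated than anything I could show here, but to give a flavour of what text-based classification means in general, here is a minimal illustrative sketch in Python using scikit-learn. The training texts and the setup are invented for the example; only the two labels are real top-level Fields of Research divisions:

    # Illustrative sketch only: NOT the Dimensions pipeline. It shows the
    # general idea of assigning a subject based on the words in the title
    # and abstract rather than on the journal. Training data is invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "Informed consent in end-of-life care: an ethical analysis",
        "Randomised trial of statin therapy after myocardial infarction",
        "Kant's categorical imperative and contemporary moral philosophy",
        "Biomarkers for early detection of pancreatic cancer",
    ]
    train_labels = [
        "22 Philosophy and Religious Studies",
        "11 Medical and Health Sciences",
        "22 Philosophy and Religious Studies",
        "11 Medical and Health Sciences",
    ]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(train_texts, train_labels)

    # A bioethics article published in a medical journal: the text, not the
    # venue, drives the predicted subject.
    print(model.predict(["The moral status of embryos in clinical research"])[0])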

While there will still be disciplinary differences in altmetrics, the hope is that by offering article-level subject classification (determined by what’s actually in the article), we can make it easier for evaluators to at least make accurate comparisons.
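To make that concrete, here is a toy sketch (with invented score distributions and a hypothetical field_percentile helper) of how the same attention score can sit at very different percentiles depending on the field it is benchmarked against:

    # Illustrative only: the same raw score ranks very differently depending
    # on the field distribution it is compared against. All numbers invented.
    from bisect import bisect_left

    def field_percentile(score, field_scores):
        """Percentage of articles in the field scoring below `score`."""
        ranked = sorted(field_scores)
        return 100.0 * bisect_left(ranked, score) / len(ranked)

    medical_scores = [1, 2, 4, 8, 15, 30, 55, 90, 160, 300]  # hypothetical
    philosophy_scores = [0, 0, 1, 1, 2, 3, 5, 8, 12, 25]     # hypothetical

    bioethics_score = 20
    print(field_percentile(bioethics_score, medical_scores))     # 50.0
    print(field_percentile(bioethics_score, philosophy_scores))  # 90.0
    # Benchmarked against the wrong (journal-level) field, the article looks
    # average; benchmarked against its own field, it is a standout.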

In theory, these improvements should mean that research administrators can now feel free to use the Altmetric Explorer to analyze and benchmark their researchers’ work or prepare department-level reports for REF2021 and the like. After all, we now have the available data to do subject-level benchmarking, so let’s use it in evaluation!

Not so fast. There are some important limitations to our data that anyone looking to use these metrics responsibly will need to take into account.

For background, I want to draw a distinction here between “formative” and “summative” evaluation practices:

  • “Formative” evaluation is when you use data to help you improve your work or measure your progress. For example, a researcher could use subject-level altmetrics data as a new way to find collaborators in her field, or to understand how she could improve her online engagement with her research.
  • “Summative” evaluation is when you use data to measure someone’s performance or rank them relative to their peers. An example would be a researcher who’s up for a job being compared to other applicants, based on his h-index and his recent research’s Altmetric Attention Scores.

We’ve always been big proponents of the use of altmetrics data in formative evaluation practices at the researcher, department, institution, and publisher levels. Altmetrics have been and will continue to be a good way to understand the influence of your research so you can improve your outreach and engagement practices, and the data is now even more useful given its disciplinary enrichment.

However, there are some caveats that apply to using the data for summative evaluation. First and foremost is coverage. 59% of outputs in Altmetric have subject area classification data, which means that 41% of research records in the Explorer lack this information. (The numbers improve slightly when you look only at journal articles: 67% of articles have disciplinary data.) These numbers will certainly change over time as we find new ways to classify the research we track, and the Dimensions team are actively working on improvements here. Nonetheless, coverage has an obvious bearing on disciplinary benchmarking, especially for disciplines where journal articles are not the preferred means of communication.
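In practice, this means any subject-level benchmarking exercise has to begin by setting aside the records that lack subject data, and being honest about how much is lost. Here is a quick sketch of that filtering step, using an invented record structure:

    # Hypothetical records: before benchmarking, drop anything unclassified
    # and report the coverage of what remains. Field names are invented.
    records = [
        {"type": "article", "subjects": ["11 Medical and Health Sciences"]},
        {"type": "article", "subjects": []},  # unclassified
        {"type": "dataset", "subjects": []},  # unclassified
        {"type": "article", "subjects": ["22 Philosophy and Religious Studies"]},
    ]

    classified = [r for r in records if r["subjects"]]
    print(f"Overall coverage: {100 * len(classified) / len(records):.0f}%")  # 50%

    articles = [r for r in records if r["type"] == "article"]
    with_subjects = [r for r in articles if r["subjects"]]
    print(f"Article coverage: {100 * len(with_subjects) / len(articles):.0f}%")  # 67%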

Related to coverage is disciplinary skew. Across the entire body of 11 million outputs we track, the majority of those with disciplinary data are classified as being in the sciences. For smaller sub-disciplines in the humanities, arts, and social sciences, that skew may make disciplinary analysis difficult. We’re currently investigating where gaps may exist.

Then there is the Fields of Research classification itself. Like any classification system, it reflects the values of the organization that created it (cf. the Library of Congress classification system, which has several top-level classes for military science and American history). The classifications applied to research in the Explorer may therefore not match the ones the authors themselves would choose.

Finally, there’s accuracy. The machine-learning approach to classifying documents against a classification scheme can be extremely complex (you can learn more about this process in this recent paper from the Dimensions team). Although we’re careful to quality-check the outputs, we do sometimes find examples of incorrectly labelled research. We’re working with Dimensions to correct these mistakes as we find them, but we know that such anomalies can affect benchmarking.

I’ve talked a lot so far about all the ways you shouldn’t use this data. So, how can you use it?

  • To stay up to date with interesting research in specific subject areas
  • To identify potential new collaborators and rising stars active in a discipline
  • To create effective outreach strategies for your (or your institution’s) research, based on disciplinary trends
  • …and in any other scenario where you want to use Altmetric data to understand how you can improve the way you do research and communicate it to others!

To summarize, we’re proud of the new subject area data available in Altmetric Explorer and will continue to fine-tune our data in collaboration with the Dimensions team. We believe that having this data available can help improve scholarly communication practices at several levels, and can be applied in many different kinds of formative evaluation scenarios. However, we want to make it clear that this new feature should be used carefully, if at all, in summative evaluation scenarios.

We hope this post will help those who want to further the responsible use of metrics in their own organizations.

Questions or comments? Email support@altmetric.com.
