Earlier this month, I had a great experience at the Triangle Scholarly Communication Institute, where I had the chance–along with a team of brilliant humanities researchers and librarians–to think through what “humane” metrics (HuMetrics) for the humanities and social sciences might look like.
What we discussed at this meeting has been a revelation. Though it’s an idea that’s still in its infancy, the concept of HuMetrics is starting to change the way I think about how metrics should be selected and applied in academia.
Simply put, I’m starting to see that academia’s been approaching evaluation metrics from the wrong angle: most institutions simply measure what can be easily counted, rather than using carefully chosen data to measure their progress towards embodying important scholarly values.
That choice has huge consequences. It’s obvious to many that academia needs a change, and that offering up any simple metrics–even emerging metrics–divorced from values and context can harm academia by promoting corrosive values.
What are scholarly values?
Over the course of TriangleSCI, the HuMetrics team came up with an initial, exploratory list of five core values that we believe are common among humanists (and most other scholars and disciplines, as well): equity, openness, collegiality, quality, and community. (Note that these proposed values are preliminary and will require further research and verification.)
These core values may be underpinned by more specific values:
Image CC-BY Nicky Agate / Medium
Team HuMetrics believes that these values should drive the work of all scholars–and not just their research either, but also teaching, mentoring, service, and other acts.
When metrics aren’t aligned with values
Here’s an example: let’s say the Lilliput University College of Arts & Humanities releases a grand strategic plan, one that emphasizes its commitment to “openness”. In it, the College focuses mostly on encouraging its researchers to publish Open Access monographs and to publish in Open Access journals.
Here’s their plan: they’ll put OA journal articles on equal footing with toll-access journals in evaluation scenarios for promotion and tenure! They’ll establish an OA publishing fund, to defray the costs of article processing charges! They’ll encourage the library to support the Open Library of the Humanities!
And that’s it.
Don’t get me wrong, those actions are very worthwhile and should absolutely be encouraged! But the College doesn’t go as far as it could in promoting openness among its scholars. Specifically, it continues to tie evaluation and related metrics to a system that disincentivizes open research practices.
When it comes to promotion and tenure, they’ll continue to assign more value to articles published in “prestigious” and primarily toll-access journals from the European Reference Index for the Humanities. Open research practices like blogging and sharing code won’t count even minimally towards tenure, no matter how influential they’ve been. Collaborative research projects will continue to be frowned upon–after all, it’s the solo-authored monograph that truly shows one’s brilliance, isn’t it? And metrics showcasing public engagement (tweets, media mentions, and so on) will be looked down upon, in favor of citation-based metrics.
For those reasons, researchers will continue to publish in toll-access journals, to avoid “wasting” their time blogging or making their code openly available, and to turn down collaboration opportunities in favor of striking out on their own in the field. They’ll also continue to use only citation-based metrics to showcase the impact of their work.
From this simple example, it’s clear: when evaluation practices aren’t aligned with values, there are consequences.
When metrics are aligned with values
What if the creators of Lilliput’s strategic plan had taken a different approach to incentivizing openness? One that considers openness “not only in terms of outputs…but also openness in practices and attitudes towards other members of the community”, as my HuMetrics teammate Simone Sacchi has suggested?
In addition to encouraging Open Access publishing, such an approach could encourage:
- Open annotation via Hypothes.is or the MLA Commons, which would enrich web-based humanities research for all;
- Open coding via GitHub, saving others countless hours that would be otherwise spent coding their own text analysis scripts;
- Open data via Figshare, allowing other scholars to reuse painstakingly digitized texts;
- More collaboration with researchers worldwide and across disciplines, leading to new insights;
- …as well as lots of other “open” scholarly practices.
As Rebecca Kennison has already put it so well (and as Impactstory has shown with the recent launch of their #OAscore), we know we can change behavior by changing the incentives–so let’s use value-aligned evaluation practices to incentivize the “enriching” practices we want to encourage, and do away with metrics (and related values) that corrode our academic lives.
The HuMetrics team reconvenes later today to tackle our next steps: validating our current list of values against existing disciplinary norms. Beyond that, we hope to share our vision more widely and perhaps even promote adoption of values-based research evaluation within US higher ed. Watch this space for updates!
Many thanks to my HuMetrics team members, Paolo Mangiafico and the rest of the TriangleSCI organizing committee, and the Mellon Foundation for bringing HuMetrics to life. More to come!