Back in 2013 I wrote a blog post mentioning that NISO was bringing together interested parties to try to define some standards around altmetrics – what they mean, how they can and should be used, and so on.
At the time I was a little worried that it would be too much of a distraction, but we definitely wanted to be involved. Over the past year or so – after some preliminary scoping work led ably by Martin Fenner – we split into three working groups covering data quality, use cases, and output types & identifiers. Our Product Manager Jean joined the use cases group while I joined the data quality one.
I’m pleased to say that the first output from the project is out: a draft code of conduct for altmetrics providers & aggregators. It’s available for comment on the NISO site and we’ve already signalled our full support. It lays out three sensible principles for altmetrics data.
On a tangent, I’m not sure why everybody in altmetrics comes up with three-bullet-point recipes for success. Ours has been “auditable”, “meaningful” and “accurate”. Adam Dinsmore from the Wellcome Trust spoke about “consistent”, “transparent” and “available” back in 2014 at the 1:AM conference. Luckily there’s plenty of overlap between all three sets.
Obviously these principles are a bit vague on their own, so the document goes into more detail about what providers are expected to do to meet them. This seems like a simple task but gets difficult quickly. To take an extreme example, how transparent can a download statistic be? Does transparency mean explaining how the number is collected, or being able to inspect the log files from which it is extracted? And for replicability, how do you handle things like tweet deletions, where contractually the record of the tweet ever having appeared should be expunged?
We’re glad we’ve taken part (and will continue to take part – this code of conduct is still a draft, and the other working groups have yet to present their own outputs). It has been a positive experience overall. To be completely honest it has also been pretty painful at times, not because of the people (who were all smart, well informed and pragmatic) or the subject area (which is important) or NISO (their organizational skills are great) but because group phone calls suck. If that’s the worst problem in the whole process, though, it’s probably a good sign.
If you’re providing metrics or qualitative data that’s going to be used in assessment then you have a duty to present and interpret that data responsibly, and the code of conduct helps to set some robust but achievable goals. Users of the data also need to be responsible, of course, which is why we should all be doing everything we can to push the message of documents like the Leiden Manifesto to researchers, librarians and other staff at institutions.