On Friday 9th October I attended the altmetrics15 workshop in Amsterdam with several of my colleagues. This was the last of three days of conferences: the 2:AM altmetrics conference had taken place on the Wednesday and Thursday of the same week.
At the 2:AM conference, the delegates had wrestled with some of the larger overarching issues to do with altmetrics – how do we define impact? How can we measure the impact of Public Engagement with Science initiatives? How can we arrive at standards for altmetrics? At the end of the two days, a panel consisting of Jason Priem, Cameron Neylon, Dario Taraborelli and Paul Groth called for “more sources, more data, more research and more theory”. As I watched the morning’s events unfold at altmetrics15, I wondered if the panel felt they had got their wish. altmetrics15 provided a platform for academics and altmetrics specialists to present their work in concise ten-minute talks.
Mojisola Erdt from Nanyang Technological University in Singapore kicked off proceedings with a very “meta” presentation on “the altmetrics of altmetrics literature”. Mojisola and her team had run a Scopus search on the keyword “altmetrics” and used the Altmetric API to retrieve the Altmetric data for the 391 results. The team reported that “research literature on altmetrics has been growing at a fast pace” since the term was first coined in 2010. Of the literature they examined, 90% of the articles had been mentioned on Twitter and 77% had Mendeley readers, suggesting well-established altmetrics practices in those communities. Next, Valeria Scotti presented a paper on altmetrics for biomedical research; her team suggested that more formal validation of altmetrics is needed for greater uptake in the biomedical community. Following this, Judit Bar-Ilan presented research on citation counts for the altmetrics literature, applying bibliometric measures to altmetrics outputs. The team found that, on average, altmetrics papers classed as “discussion” had attracted more citations than pure “research” papers. They also found that citation counts were significantly lower for more recently published papers.
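The coverage figures in studies like this boil down to simple proportions over records returned by the Altmetric API. A minimal sketch of that calculation in Python — the field names mirror the public Altmetric API’s JSON (`cited_by_tweeters_count`, `readers.mendeley`), but the sample records below are invented for illustration, not real data from the study:

```python
# Sketch: compute the share of papers with Twitter mentions and with
# Mendeley readers, from Altmetric-API-style records. The field names
# follow the Altmetric API's JSON; the records themselves are made up.

def coverage(records, has_metric):
    """Fraction of records for which has_metric(record) is truthy."""
    if not records:
        return 0.0
    return sum(1 for r in records if has_metric(r)) / len(records)

# Hypothetical sample records (a real study would fetch one per DOI).
sample = [
    {"cited_by_tweeters_count": 12, "readers": {"mendeley": 30}},
    {"cited_by_tweeters_count": 3,  "readers": {"mendeley": 0}},
    {"cited_by_tweeters_count": 0,  "readers": {"mendeley": 8}},
]

twitter_share = coverage(
    sample, lambda r: r.get("cited_by_tweeters_count", 0) > 0)
mendeley_share = coverage(
    sample, lambda r: r.get("readers", {}).get("mendeley", 0) > 0)

print(f"Twitter coverage:  {twitter_share:.0%}")   # 2 of 3 records
print(f"Mendeley coverage: {mendeley_share:.0%}")  # 2 of 3 records
```

Applied to all 391 Scopus results, the same two proportions would yield the 90% Twitter and 77% Mendeley figures the team reported.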
The first group of sessions focused heavily on quantitative data: the raw counts of Altmetric mentions and citations from different sources. By contrast, the later sessions focused more on the underlying qualitative data, and on practical applications for it. Ad Prins and Jack Spaapen called for better altmetrics coverage across different types of research output, to make altmetrics useful to social sciences and humanities departments for research evaluation purposes. Following on from this, Altmetric’s very own Stacy Konkiel presented the results of a survey designed to shed some light on the librarian use case for altmetrics. The survey found that librarians are currently unlikely to use altmetrics in collection development and tenure and promotion decisions.
Session three consisted of talks concerning the “quality” of altmetrics data. Zohreh Zahedi and her team highlighted discrepancies in data from Mendeley, Lagotto and Altmetric, while Rodrigo Costas and Grischa Fraumann discussed trends in the way research outputs are discussed in blog and news sources. William Gunn raised the point that altmetrics providers need to support identifiers for multiple document versions, and decide whether to disambiguate the metrics that accumulate for the different versions.
The first three sessions suggested that in the last five years, we’ve learnt a lot about the limitations of the data for research and research evaluation purposes. Because different altmetrics providers collect the data in different ways, it’s very difficult to uncover the “true” numbers and use them to come to any concrete conclusions about research dissemination practices. One of the later talks I enjoyed the most came from Cameron Neylon, who stressed that the data still has the potential to tell us interesting stories. Cameron described the online mentions of a piece of research as digital footprints, which we can use to trace a pathway to a certain type of impact. For example, can we plot a pathway to academic impact if someone tweets a paper, then reads it on Mendeley, then cites it in a paper of their own? I also enjoyed Stephanie Haustein’s talk on whether it is possible to perform sentiment analysis on tweets. Haustein and her team found that in a sample of 270 randomly selected tweets, hardly any of the tweeters expressed a positive or negative sentiment about the research, suggesting that Twitter is used more for pure research dissemination than for opinion-based posts.
Overall, I thought altmetrics15 was a great success. The organizers had managed to get altmetrics specialists from across the globe in one room, and the format allowed lots of researchers to share and discuss their findings with like-minded academics. It’s been five years since the Altmetrics Manifesto was first published, and it’s clear that the data is still throwing up a lot of questions and generating a lot of debate in the academic community. However, the research presented at altmetrics15 suggested that we do have some answers. We now know that although there are some impacts we can’t possibly hope to measure, we can use altmetrics to gain an impression of usage and attention that is missing from the picture offered by bibliometrics. As Kim Holmberg and colleagues said in their session, the question now is how to develop a gold standard for these metrics, make them more sophisticated, and think about possible ways of aggregating the numbers and categorising types of impact.
The full schedule from the altmetrics15 workshop is available here.