Mingling with the altmetrics community
Last week, a few Altmetric team members (our founder Euan Adie, our CTO Paul Mucur, and myself) were in San Francisco for an important altmetrics-related meeting: the 2013 PLOS ALM Workshop. (Euan also attended and spoke at the NISO Alternative Assessment Metrics (Altmetrics) Project Meeting, which took place the day before the PLOS Workshop.)
A number of excellent summaries of the two altmetrics meetings have been popping up this week. We recommend that you check out Peter Brantley’s summary post on PWxyz, Ian Mulvany’s summaries of the PLOS ALM Workshop (day 1 and day 2), Marcus Banks’ summary post of the NISO and PLOS meetings, Carly Strasser’s thoughtful write-up on the LSE Impact of Social Sciences Blog, Martijn Roelandse’s Storify of #alm13 tweets, and Eva Amsen’s write-up of her project at the PLOS ALM Data Challenge. This page lists all of the slides for each presentation at the PLOS ALM Workshop, including the brief introductory slides for our session.
How should we talk about article-level metrics?
This year, Euan, Paul, and I decided to moderate a discussion session, which we hoped would be a lively and useful one. In recent months, we’ve all watched altmetrics gain acceptance from numerous user groups, including publishers, academics, librarians, and funders. However, it’s as important as ever for advocates of altmetrics (and of article-level metrics, or ALMs) to be able to explain clearly why such data are valuable pieces of the impact puzzle.
Altmetrics still provoke outspoken opposition from many people, who assert that such metrics are trivial (often because of their social media components), prone to being gamed, and potentially harmful if used for the assessment and ranking of researchers. Some of this resistance may stem from a lack of understanding of how altmetrics and ALMs can realistically be used right now and, to a lesser extent, of what merits online communication platforms (like Twitter or blogs) hold. ALM toolmakers and altmetrics advocates need to be able to respond clearly to criticism and facilitate thoughtful, productive discussion. With the help of end users, such as researchers, our tools can get better.
Therefore, for our workshop session, we decided to tackle the question “How should we, as a community, talk about article-level metrics?” Below is a summary of what our initial panelists said, as well as the two major themes explored in the ensuing discussions.
Entering the fishbowl
We chose an interesting format for our discussion session called the “fishbowl”. This involved placing five chairs in a circle, roughly in the centre of the room. We seated four “initial panelists”: Euan Adie (Altmetric), Amy Brand (Harvard University), Martijn Roelandse (Springer), and Adam Dinsmore (Wellcome Trust). According to the rules of the fishbowl format, the fifth chair was to be left empty at all times. If someone from the audience wanted to speak, they could go and sit in the empty chair; at the next available opportunity, one of the current panelists had to relinquish their chair and return to the audience.
Euan kicked off the discussion by pointing out that at Altmetric, we have de-emphasised the metrics themselves to an extent and focused instead on the conversations that surround scholarly content. Amy Brand, who works on tenure and promotion at Harvard University, agreed, saying that “decoupling the data” at this early stage was important because the idea that altmetrics can be used for scoring and ranking researchers is still premature. Brand went on to point out that academic review can only benefit from more and different evidence of impact, which altmetrics can convey.
Martijn Roelandse drew upon his experience as an editor at Springer, stating that he regularly promotes altmetrics at editorial board meetings but has found that most editors are still fixated on Impact Factors. He noted that it was useful to look at the value of an article in any dimension, a point that Adam Dinsmore of the Wellcome Trust echoed. Dinsmore said that social media aren’t the focus, and that ALMs are a big data approach to looking at how people work, thereby providing meaningful information about scholarly interactions.
Theme 1: Definitions
After the initial panelists finished speaking, an audience member piped up with an important question: “What is the difference between article-level metrics and altmetrics?”
We’ve blogged about the definitions of these terms before, writing that “altmetrics” is the umbrella term for new ways (both qualitative and quantitative) of measuring different forms of impact. The term “article-level metrics” has colloquially come to mean the altmetrics surrounding a scholarly paper, although technically it could also include traditional bibliometrics, such as citations. Generally, the participants in the PLOS ALM Workshop seemed to agree on these definitions. Jason Priem (ImpactStory), who originally coined the term “altmetrics”, explained his initial thought process, stating that “article-level metrics” didn’t completely cover what he felt should be conveyed. Priem offered his own definition, stating that he felt altmetrics were “metrics of impact drawn from activity on online tools and environments”.
Others expanded on the existing definitions of altmetrics. Kaitlin Thaney (Mozilla Science Lab) suggested that altmetrics tracked “a researcher’s footprint in the community”. This line of thought was continued by Michael Habib (Elsevier) who brought up the interesting notion of tracking impact for a blog post that was cited in a high-impact journal such as Nature.
Theme 2: The directions of the “altmetrics movement”
Next, the rotating panel shifted to the topic of how altmetrics are perceived by the wider scholarly community, as well as the utility (or harm) of promoting altmetrics as a “movement”. First, the group discussed the notion that altmetrics measure “academically trivial” things, such as social media mentions and blog posts. Interestingly, Geoffrey Bilder (CrossRef) offered a statistic suggesting that these non-traditional outputs might not be so trivial: as of 2013, 1 in 13 scholarly citations in journal articles pointed to a plain URL. Even so, Bilder argued that the academic reward structure might lead researchers to think they shouldn’t care about what the public thinks. He continued, saying that there is a misconception that the public doesn’t talk about the published literature, and that the interaction between the scholarly sphere and the broader non-academic community therefore needs to be addressed.
Cameron Neylon of PLOS brought up an interesting point about how the difficulties in explaining altmetrics mirror the difficulties in explaining open access – both issues have been quite contentious. “Initially”, he said, “altmetrics were a bit of a protest.” But things are different now, and Neylon suggested that we are “struggling with the terms because they were coined with a political motivation that was a fringe activity.” His assertion was that we need to talk to different people in a different way, because altmetrics have entered the mainstream. Finally, he spoke of marketing, and how altmetrics advocates are now speaking with new audiences who are not as interested in disruption and change. “They need to know [altmetrics] are complementary [to bibliometrics].”
We were pleased to have 16 excellent speakers rotate in and out of our fishbowl panel. Although we didn’t end up with a formal consensus on best practices for speaking about article-level metrics and altmetrics to new user groups, we did have a valuable discussion about the current ideas that define our field. We were also able to focus on how altmetrics and article-level metrics have changed in recent months by becoming more “mainstream”. While politicised language seemed to serve the altmetrics community well at the outset, a different, softer approach is perhaps necessary as altmetrics mature into an important aspect of scholarly communication.