In our previous Fieldwork post we posed some initial questions about how humanities and social sciences scholars and journals might benefit from alt-metrics data. In this Fieldwork post we take a quick look at the Library Science journals currently available through the Altmetric Explorer. You can view and download the dataset we discuss in this post on Figshare, here.
In A Reading Diary: A Passionate Reader’s Reflections on a Year of Books, Alberto Manguel (2011) wrote that he thought the books on his shelves had no knowledge of his existence. “They come to life because I open them and turn their pages, and yet they don’t know that I am their reader.” Social media is changing the way readers and the publications they read interact. Alt-metrics can offer some insight into which articles scholars in Library Science and related fields are sharing the most, when, with whom and in what way.
We looked at 196 research papers published in a sample of 15 Library Science journals from 7 different publishers (Emerald, Elsevier, Taylor & Francis, University of Chicago Press, University of Toronto Press, Ingenta Connect and the Association for Library and Information Science Education) which were mentioned online during the last year.
The journals in the sample are:
- Library Management
- Library Review
- Library & Information Science Research
- Electronic Library
- Program: Electronic Library & Information Systems
- Journal of Library Metadata
- International Information & Library Review
- Library Quarterly
- Journal of Library and Information Services in Distance Learning
- Public Library Quarterly
- Library Collections, Acquisitions, & Technical Services
- Canadian Journal of Information and Library Science
- Library History
- Journal of Education for Library & Information Science
These are all peer-reviewed journals presently available online through paid subscription models. Through the Explorer we were able to look at which articles had been mentioned in news stories, tweets, Facebook walls, blog posts, Google+ and Reddit.
Journals whose articles received no online mentions were not taken into account. This means that the sampled journals featured different numbers of mentioned articles, varying from a maximum of 41 to a minimum of 1.
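The exclusion step described above can be sketched as follows (the journal names and mention counts here are illustrative placeholders, not the actual Figshare dataset):

```python
# Hypothetical mention counts per article, keyed by journal title
mention_counts = {
    "Journal A": [3, 1, 0, 7],   # four articles, one with zero mentions
    "Journal B": [0, 0],         # no mentioned articles: dropped entirely
    "Journal C": [1],            # the minimum: a single mentioned article
}

# Keep only articles with at least one mention...
mentioned = {
    journal: [n for n in counts if n >= 1]
    for journal, counts in mention_counts.items()
}
# ...then drop journals left with no mentioned articles at all
mentioned = {journal: counts for journal, counts in mentioned.items() if counts}

print(mentioned)  # {'Journal A': [3, 1, 7], 'Journal C': [1]}
```

This is why the per-journal article counts in the sample are uneven: the filter keeps journals with as few as one mentioned article.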
At the moment there is not enough data around online mentions of Library Science journals to reliably interpret specific journals’ or articles’ Altmetric scores, especially because we lack sufficient parameters (from a larger sample of journals and publishers, for example) for the score to meaningfully inform a full comparison of online activity around journals in this discipline. The Altmetric score is a quantitative measure of the quantity and quality of attention that a scholarly article has received. It takes into account three main factors:
- Volume. The score for an article rises as more people mention it. The Explorer only counts one mention from each person per source, so if someone tweets about the same paper more than once Altmetric will ignore everything but the first.
- Sources. Each category of mention contributes a different base amount to the final score. For instance, a newspaper article contributes more than a blog post, which contributes more than a tweet. Altmetric looks at how often the author of each mention talks about scholarly articles, whether or not there’s any bias towards a particular journal or publisher, and who their audience is.
- Authors. For example, a scholar sharing a link with other scholars counts for far more than a journal account pushing the same link out automatically.
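A minimal sketch of how these three factors might combine. The base weights, author modifiers and de-duplication rule below are hypothetical illustrations of the principles described above, not Altmetric's actual formula:

```python
# Hypothetical base contributions per source type (not Altmetric's real weights)
SOURCE_WEIGHTS = {"news": 8.0, "blog": 5.0, "tweet": 1.0, "facebook": 0.25}

# Hypothetical author-type modifiers: a scholar sharing with other scholars
# counts for more than a journal account pushing links out automatically
AUTHOR_MODIFIERS = {"scholar": 1.0, "journal_bot": 0.25}

def toy_score(mentions):
    """mentions: list of (person_id, source, author_type) tuples."""
    seen = set()
    score = 0.0
    for person, source, author_type in mentions:
        # Volume rule: only the first mention per person per source counts
        if (person, source) in seen:
            continue
        seen.add((person, source))
        score += SOURCE_WEIGHTS[source] * AUTHOR_MODIFIERS[author_type]
    return score

mentions = [
    ("alice", "tweet", "scholar"),
    ("alice", "tweet", "scholar"),      # duplicate tweet, ignored
    ("bob", "blog", "scholar"),
    ("journal", "tweet", "journal_bot"),
]
print(toy_score(mentions))  # 1.0 + 5.0 + 0.25 = 6.25
```

The point of the sketch is only that volume, source type and author type interact: the same number of mentions can yield very different scores depending on where they appear and who made them.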
However, these same main factors can be relevant when looking at the articles from this dataset that were most mentioned online. A mention on a research blog does suggest a more complex scholarly engagement than, for example, a tweet, and this quality of engagement can contribute to a higher score.
Bearing in mind that this is a relatively small sample of the data we have access to, we aim to characterise it, in the hope that it might create interest in new avenues for research into scholarly behaviour online. In order to even contemplate a qualitative assessment, the scores, volume, sources and authors need to be considered in a holistic fashion, particularly when the volume of online mentions is low.
We should not need to emphasise that having some data about how many mentions a research paper is getting in no way implies a direct, unequivocal qualitative assessment of the mentioned articles; in other words, the argument is not that the most-mentioned articles necessarily represent research of higher quality. Instead, the point to take home is that, according to the existing available data, some research articles (and therefore some journals) are getting more online attention than others, and indeed more online attention on some platforms than others.
Twitter remains the online platform of choice, in some cases the only service in which some articles were mentioned. It keeps a wide distance from Facebook, which is in turn followed by Google+ (though note that Altmetric can only track public posts on all three services). One of the reasons blog coverage is low is that Altmetric only tracks blogs on a manually curated whitelist, and LIS blogs have been added only recently. Therefore, as the whitelist grows over time, we expect to see more mentions on research blogs.
Having said that, mentions in blog posts and on Reddit remain very low if not completely absent, and no mentions were found in any of the nearly 40 news outlets parsed by the Explorer:
Four of the top five most-mentioned journals had customised share widgets. (Might this indicate that less friction to share encourages online mentions?) The five articles with the most mentions had all been tweeted for the first time by their own authors, their publishers, or fellow researchers not directly involved as authors or publishers of the mentioned paper:
[Embedded tweets from Liz McCarthy (@mccarthy_liz, 26 November 2012), Picturepark (@Picturepark_DAM, 7 October 2012), Anne Welsh (@AnneWelsh, 18 September 2012) and Emerald Group Pub (@EmeraldLibrary, 23 October 2012); one example:]

What do you know about Mendeley? Library Management paper aims to highlight its productivity and collaborative features bit.ly/RP3VK4

— Emerald Group Pub (@EmeraldLibrary) October 23, 2012
Most of the mentions in tweets are not actual discussions of the articles in question but descriptive retweets or direct shares of links to the articles. For each of the top five most-mentioned articles, it was one tweet that started a series of other tweets and retweets. It is logical, but it still needs to be said: someone needs to get the ball rolling if increasing mentions is the goal; it will not happen by itself.
Perhaps unsurprisingly, four of the five most-mentioned articles cover different aspects of purely digital scholarship: markup, online reference management, search engine indexing and ranking, and ebooks.
The graph below shows the frequency distribution for the top 30 words appearing in our sample of 196 Library Science article titles. The 10 most-used words in titles were library (43 occurrences), libraries (39), information (26), academic (18), digital (17), university (16), public (12), case (11), web (11) and literacy (11).
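A tally like the one above can be produced from the dataset with a simple word count. A minimal sketch, where the titles and the stop-word list are placeholders standing in for the 196 titles in the actual Figshare data:

```python
from collections import Counter
import re

# Placeholder titles, not the actual 196-article dataset
titles = [
    "Academic library web services: a case study",
    "Information literacy in the digital university library",
    "Public libraries and information literacy",
]

# A small illustrative stop-word list; a real analysis would use a fuller one
STOPWORDS = {"a", "an", "and", "the", "in", "of", "for", "on", "to"}

words = Counter()
for title in titles:
    for word in re.findall(r"[a-z]+", title.lower()):
        if word not in STOPWORDS:
            words[word] += 1

for word, count in words.most_common(10):
    print(word, count)
```

Note that this naive tally counts "library" and "libraries" separately, as the figures quoted above do; stemming the titles first would merge them.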
Without access to each journal’s download data it is impossible to even attempt any discussion of correlation between social media mentions of articles and article downloads. Nevertheless, this cursory glance at the data we have access to could perhaps begin to provide indicators of, for example, traits of scholarly online behaviour, effects of certain sharing or design features on the journals’ user interfaces, and a profile of key influencers in the field.
Even though demographics can be derived from online mentions datasets to get a view of who is mentioning what articles (a member of the public? a researcher?), when and for how long (usually the day the article is first published?) and where (in what country?), such data will not be reliable until there is a more widespread culture of reliable geolocation and metadata amongst academic social media users, with account profiles that contain updated and trustworthy information about academic positions and location. At the moment, this kind of research requires specialised, close attention to detail in order to assess these questions “manually”.
Rather than easy answers, what arises is many more questions. Could the availability of indicators of the absence or presence of online mentions of academic articles on news outlets, blogs and social media platforms encourage further online sharing within a heterogeneous discipline like Library Science? Or, on the other end of the spectrum, could it discourage enthusiasts and provide fuel to those who see online sharing of research outputs as a waste of time? Only continuing the ongoing close observation of article-level metrics over a significant period of time could begin to provide some answers.