Numbers behind Numbers: The Altmetric Score and Sources Explained

In the last blog post in our researcher series, we included some perspectives on Altmetric from some metrics-savvy researchers. One of the responses was from Jean Peccoud, who commented on the Altmetric score, saying it “can [sometimes] feel a little like black magic”.

This isn’t the first time we’ve heard this, or similar, and we appreciate that people are keen to understand more about what goes on in the background to calculate the score for each research output. Our aim for this blog post, therefore, is to provide more detail around the Altmetric scoring system, and to offer insight into the weighting we give to each source we’re tracking.

We hope this post will help to answer some of the questions researchers new to altmetrics may have about how Altmetric collects and displays attention data. For those who are already familiar with Altmetric and use it to monitor the attention for their research, we hope this post will refresh their memories and provide a bit more context around the data.

Where can I find the Altmetric score?
The Altmetric score appears in the middle of each Altmetric donut, which is our graphical representation of the attention surrounding a research output. It can often be found on publisher article pages, and also appears in any of our apps and in the Altmetric Bookmarklet.

The colours of the donut represent the different sources of attention for each output:       

[Image: donut colour key by source]

Why do Altmetric assign a score to articles at all?

The Altmetric score is intended to provide an indicator of the attention surrounding a research output. Although it may be straightforward enough to monitor the attention surrounding one research output, it becomes harder to identify where to focus your efforts when looking at a larger set. The number alone cannot, of course, tell you anything about what prompted the attention, where it came from, or what people were saying, but it does at least give you a place to start – “is there online activity around this research output that would be worth investigating further?”

We work with a lot of publishers and institutions who want to be able to see which articles are getting the most (or indeed the least) attention. They’re interested in monitoring the attention of not only single articles, but also in placing that measure within the context of the journal the article comes from, or in comparison with other publications from peers. Again, we’d always encourage anyone looking at our data to also click through to the Altmetric details page for each output to view the content of the mentions and see what people are saying about the item, rather than relying on the numbers alone to draw conclusions about the research.

How is the score calculated?
The Altmetric score is calculated automatically, using a weighted algorithm based on 3 main factors:

1. The volume of the mentions (how many were there?)
2. The source of the mentions (were they high-profile news stories, re-tweets, or perhaps a Wikipedia reference?)
3. The author of the mentions (was it the journal publisher, or an influential academic?)

[Screenshot: example Altmetric details summary]

Combined, the score represents a weighted approximation of all the attention we’ve picked up for a research output, rather than a raw total of the number of mentions. You can see this in the example screenshot above – the article has been mentioned in 2 news outlets, 2 blogs, 6 Facebook posts, 84 tweets, 1 Google+ post and 1 Reddit post. However, the score is 85, not 116.

That said, each source is assigned a default score contribution – as detailed in the list below:

[Image: default score contributions for each source]

These default scores are designed to reflect the reach and level of engagement of each source: a news story, for example, is for the most part likely to be seen by a far wider audience than a single tweet or Facebook post. It’s also worth mentioning that social media posts are scored per user. This means that if someone tweets about the same research output twice, only the first tweet will count. Blog posts are scored per feed; if two posts from the same RSS feed link to the same article, only the first post will be counted.

You’ll have noticed that the Altmetric score for any individual research output is always a whole number – each time a new mention is picked up, the score is recalculated and rounded to a whole number. For example, a single Facebook post contributes 0.25 to the score, but if that post were the only mention, the score for the article would be shown as 1. However, if there were four Facebook posts mentioning a research output, this would still only contribute 1 to the overall score.
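To make the mechanics above a little more concrete, here is a minimal sketch in Python of how a score like this could be assembled. Only the Facebook (0.25), tweet (1) and Wikipedia (3) contributions are taken from this post; the other weights, and all of the function and field names, are illustrative assumptions rather than Altmetric’s actual implementation.

```python
import math
from collections import namedtuple

# A hypothetical, simplified mention record (not Altmetric's real data model).
Mention = namedtuple("Mention", ["source", "author"])

# Default contributions per source. The Facebook, Twitter and Wikipedia values
# are quoted in this post; the news and blog values are placeholders only
# (in practice a news mention's contribution depends on the outlet's tier).
DEFAULT_WEIGHTS = {
    "news": 8.0,       # assumption
    "blog": 5.0,       # assumption
    "twitter": 1.0,    # stated above (retweets are weighted lower)
    "facebook": 0.25,  # stated above
    "wikipedia": 3.0,  # stated above (and capped at a single contribution)
}

def simple_score(mentions):
    """Weighted, de-duplicated approximation of attention (illustrative only)."""
    seen = set()
    total = 0.0
    for m in mentions:
        # Social media is scored per user and blogs per feed: a second post
        # from the same account or feed adds nothing.
        key = (m.source, m.author)
        if key in seen:
            continue
        seen.add(key)
        total += DEFAULT_WEIGHTS.get(m.source, 0.0)
    # The displayed score is always a whole number; a lone 0.25 Facebook post
    # still shows as 1, so this sketch rounds fractional totals up.
    return math.ceil(total)

mentions = [
    Mention("facebook", "alice"),
    Mention("facebook", "alice"),  # duplicate author: ignored
    Mention("twitter", "bob"),
]
print(simple_score(mentions))  # -> 2 (0.25 + 1.0, rounded up)
```

This is only a toy: the real score also applies the tiering and modifiers described in the next section.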

Weighting the score
Beyond tracking and calculating based on these default score contributions, another level of filtering is applied to try to reflect more accurately the type and reach of attention a research output has had. This is where the ‘bias’ and ‘audience’ of specific sources play a further part in determining the final score.

News outlets
News sites are each assigned a tier, which determines the amount that any mention from them will contribute to the score, according to the reach we determine that specific news outlet to have. This means that a news mention from the New York Times will contribute more towards the score than a mention from a niche news publication with a smaller readership, such as 2Minute Medicine. Each mention is counted on the basis of the ‘author’ of the post – therefore if a news source publishes two news stories about the same article, these would only be counted as one news mention.

Wikipedia 
In addition to the news weighting, scoring for Wikipedia is static. This means that if an article is mentioned in one Wikipedia page, the score will automatically increase by 3. However, if an article is mentioned in several Wikipedia pages, the score will still only increase by 3. The rationale behind this is that Wikipedia articles can reference hundreds of research outputs. As such, a mention of a paper as a reference alongside lots of other research is not really comparable (in terms of reach and attention) to a mainstream news story that is only about one research paper. We consulted a Wikipedia expert when trying to decide on the appropriate scoring, and eventually decided to keep the score static to reduce the potential for gaming. If the score increased with each Wikipedia mention, people could potentially game the scoring by manually adding their publications as references to old articles, biasing their scores through illegitimate attention.

Policy Documents

The scoring for policy documents depends on the number of policy sources that have mentioned a paper. Mentions in multiple policy documents from the same policy source only count once. If, for example, a research output is mentioned in two policy documents from the same source, this will contribute 3 to the score. However, if two policy documents from two different policy sources mention the same research output, these would both count towards the score, so the score would increase by 6.
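To illustrate the Wikipedia and policy rules just described side by side, here is a small hedged sketch. The contribution of 3 per Wikipedia mention and 3 per distinct policy source is taken from this post; the function names and example source names are invented for illustration.

```python
def wikipedia_contribution(num_wikipedia_mentions):
    # Static scoring: one Wikipedia reference or fifty both add exactly 3.
    return 3 if num_wikipedia_mentions > 0 else 0

def policy_contribution(policy_mentions):
    # policy_mentions: list of (policy_source, document_id) pairs.
    # Several documents from the same source count once; each distinct
    # source adds 3 to the score.
    distinct_sources = {source for source, _ in policy_mentions}
    return 3 * len(distinct_sources)

print(wikipedia_contribution(5))                    # -> 3
print(policy_contribution([("source A", "doc-1"),
                           ("source A", "doc-2")])) # -> 3 (same source twice)
print(policy_contribution([("source A", "doc-1"),
                           ("source B", "doc-2")])) # -> 6 (two sources)
```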

Social media posts
For Twitter and Sina Weibo, the original tweet or post counts for 1, but retweets or reposts count for 0.85, as this type of attention is more secondhand (and therefore does not reflect as much engagement as the initial post). Again, the author rule applies: if the same Twitter account tweets the same link to a paper more than once, only the first tweet will actually count towards the score (although you’d still be able to see all of the tweets on the details page). For tweets, we also apply modifiers that can sometimes mean the original tweet contributes less than 1 to an article’s score. These modifiers are based on three principles:

  • Reach – how many people is this mention going to reach? (This is based on the number of people following the relevant account.)
  • Promiscuity – how often does this person tweet about research outputs? (This is derived from the number of articles mentioned by this Twitter account in a given time period.)
  • Bias – is this person tweeting about lots of articles from the same journal, thereby suggesting promotional intent?

These principles mean that if (for example) a journal Twitter account regularly tweets about papers they have just published, these tweets would contribute less to the scores for these articles than tweets from individual researchers who have read the article and just want to share it – again, here we are trying to reflect the true engagement and reach of the research shared. This can also work the other way; if (for example) a hugely influential figure such as Barack Obama were to tweet a paper, this tweet would have a default score contribution of 1.1, which could be rounded up to a contribution of 2.
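The sketch below shows one way modifiers like these could scale a tweet’s contribution. The 1 and 0.85 base values come from this post; the modifier formulas, thresholds and function name are invented purely for illustration and are not Altmetric’s actual algorithm.

```python
def tweet_contribution(is_retweet, follower_count,
                       articles_tweeted_recently, same_journal_share):
    """Illustrative only: scale a tweet's base value by reach, promiscuity
    and bias modifiers (all thresholds below are made up)."""
    base = 0.85 if is_retweet else 1.0

    # Reach: accounts with a very large following count slightly more,
    # accounts with almost no followers slightly less.
    if follower_count > 1_000_000:
        reach = 1.1
    elif follower_count < 10:
        reach = 0.9
    else:
        reach = 1.0

    # Promiscuity: accounts that tweet very large numbers of papers count less.
    promiscuity = 0.75 if articles_tweeted_recently > 100 else 1.0

    # Bias: accounts tweeting mostly one journal look promotional.
    bias = 0.5 if same_journal_share > 0.8 else 1.0

    return base * reach * promiscuity * bias

# A journal account that mostly tweets its own new papers contributes less...
print(tweet_contribution(False, 5_000, 300, 0.95))    # -> 0.375
# ...while a hugely followed individual account can contribute slightly more.
print(tweet_contribution(False, 60_000_000, 2, 0.0))  # -> 1.1
```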

Combating gaming
Gaming is often mentioned as a risk of altmetrics (although, in principle, it applies to any kind of metric that can be influenced by outside behaviour). Researchers are keen to compare themselves to others, and many in the academic world have taken to using numbers as a proxy for ‘impact’. Altmetric have taken steps to combat practices that could amount to gaming or that otherwise illegitimately influence the score, including:

  • Capping measures for articles that have more than 200 Twitter or Facebook posts with the exact same content. For articles such as these, only the first 200 identical Twitter or Facebook posts will count towards the score, in order to prevent articles with lots of identical social media posts from having much higher scores than articles with more legitimate, unique attention (a rough sketch of this cap follows below).
  • Flagging up and monitoring suspect activity: where an output sees an unusual or unexpected amount of activity, an alert is sent to the Altmetric team, who investigate to determine whether or not the activity is genuine.
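As a rough sketch of the capping measure above, the logic could look something like this; the 200 threshold is the figure given in this post, everything else is an illustrative assumption.

```python
from collections import Counter

IDENTICAL_POST_CAP = 200  # cap on identical Twitter/Facebook posts (from this post)

def capped_post_count(post_texts):
    """Count posts towards the score, ignoring any copies of an identical
    post beyond the first 200 (illustrative sketch only)."""
    counted = 0
    copies_seen = Counter()
    for text in post_texts:
        copies_seen[text] += 1
        if copies_seen[text] <= IDENTICAL_POST_CAP:
            counted += 1
    return counted

# 500 copies of the same tweet only count 200 times; unique posts all count.
posts = ["Read our new paper!"] * 500 + ["Fascinating result", "Nice study"]
print(capped_post_count(posts))  # -> 202
```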

The most powerful tool we have against gaming, however, is that we display all of the mentions of each output on the details page. By looking beyond the numbers and reading the mentions, it is easy to determine how and why any item has attracted the attention that it has – and therefore to identify whether or not it is the type of attention that you consider of interest.

What’s not included in the score?
Lastly, it’s useful to remember that some sources are never included in the Altmetric score. This applies to Mendeley and CiteULike reader counts (because we can’t show you who the readers are – and we like all of our mentions to be fully auditable), and any posts that appear on the “misc” tab on the details page (misc stands for miscellaneous).

We get asked about the misc tab quite a lot, so I thought it would be good to explain the rationale behind it. We add mentions of an article to the misc tab when they could not have been picked up automatically at the point we were notified of them. This might be because we’re not tracking the source, or because the mention did not include the right content for us to match it to a research output. By adding posts like this to the misc tab, we can still display all the attention we’re aware of for an article without biasing the score through excessive manual curation.

We hope that by posting this blog, we’ve managed to shed some light on the Altmetric score and the methods that go into calculating it. As always, any comments, questions or feedback are most welcome. Thanks for reading!

In England, the recent national Research Excellence Framework (REF) exercise is using “real world” impact for the first time to determine how much money institutions will be allocated from the Higher Education Funding Council for England. In this post, we examine the REF results and discuss the possibilities for documenting such impacts using altmetrics.

At Altmetric, we have a keen interest in understanding the public impacts of research. We’ve been following the UK Research Excellence Framework assessment exercise closely since it was piloted in the late 2000s (just a few years before Altmetric was formally founded). There’s overlap between the types of indicators of impact we collect (called “altmetrics” in the aggregate, and including media coverage, mentions of research in policy documents, and more) and the types of impact that were reported by institutions for the REF (impact on culture, health, technology, and so on).

So, when the REF results were announced in March, we naturally asked ourselves, “What (if anything) could altmetrics add to the REF exercise?”

In this post, I’ll give a brief background on the REF and its implementation, and then dive into two juicy questions for us at Altmetric (and all others interested in using metrics for research evaluation): What can research metrics and indicators (both altmetrics and citation metrics) tell us about the “real world” impact of scholarship? And can they be used to help institutions prepare for evaluation exercises like the REF?

First, let’s talk about how the REF works and what that means for research evaluation.

The REF wants to know, “What have you done for taxpayers lately?”

Many countries have national evaluation exercises that attempt to get at the quality of research their scholars publish, but the REF is slightly different. In the REF, selected research is submitted by institutions to peer review panels in various subject areas, which evaluate it for both its quality and whether that research had public impacts.

There is an enormous cost to preparing for the REF, which requires institutions to compile comprehensive “impact case studies” for each of their most impactful studies, alongside reports on the number of staff they employ, the most impactful publications their authors have published, and a lot more.

Some have proposed that research metrics–both citation-based and altmetrics–may be able to lessen the burden on universities, making it easier for researchers to find the studies that are best suited for inclusion in the REF. And a previous HEFCE-sponsored review on the use of citations in the REF found that they could inform but not replace the peer review process, as indicators of impact (not necessarily evidence of impact themselves).

But using metrics for evaluation is still a controversial idea, and some have spoken out against the idea of using any metrics to inform the REF. HEFCE convened an expert panel to get to the bottom of it, the results of which are expected to be announced formally in July 2015. (The results have already been informally announced by review chair James Wilsdon, who says that altmetrics–like bibliometrics–can be useful for informing but not replacing the peer review process.)

[Image: cover of the Kings College and Digital Science REF report]

Until then, there is rich data to be examined in the REF2014 Impact Case Studies web app (much of which is available for analysis using its API) and this excellent, thorough summary of the REF impact case study results (pictured at right). We decided to do some informal exploration of our own, to see what Altmetric data–which aim to showcase “real world” impacts beyond the academic sphere–and citation count data could tell us about the publications selected for inclusion in REF impact case studies.

What can altmetrics tell us about “real world” research impacts?

Going into this thought experiment, I had two major assumptions about what Altmetric and citation data could tell me (and what it couldn’t tell me) about the impact of publications chosen for REF impact case studies:

  • Assumption 1: Altmetric data could find indicators of “real world” impacts in key areas like policy and public discussion of research. That’s because there’s likely overlap between the “real world” attention data we report and the kinds of evidence universities often use in their REF impact case studies (i.e. mentions in policy documents or mainstream news outlets).

  • Assumption 2: Citation counts aren’t really useful for highlighting research to be used in the impact case study portion of the REF. Citations are measures of impact among scholarly audiences, but are not useful for understanding the effects of scholarship on the public, policy makers, etc. Hence, they’re not very useful here. That said, citation counts are the coin of the realm in academia, so it’s possible that faculty preparing impact case studies may use citations to help them select what research is worthy of inclusion.

These assumptions led to some questions that guided my poking and prodding of the data:

    1. Are there differences between what universities think have the highest impact on the “real world” (i.e. what is submitted as REF impact case studies, or ICSs) and what’s got the highest “real world” attention as measured by Altmetric? If so, what relevant things can Altmetric learn from these differences?
    2. If “impact on policy” is one of the most popular forms of impact submitted to the REF, do articles with policy impacts (as reported by Altmetric) match what’s been submitted in REF impact case studies?
    3. Can citation counts serve as a good predictor of what will be submitted with REF impact case studies?

I decided to dive into impact data for a very small sample of publications from two randomly chosen universities: The University of Exeter and The London School of Hygiene and Tropical Medicine.

I created three groups of publications to compare for each university, six groups total for comparing across both universities:

    1. Top ten articles by overall attention for each institution, as measured by Altmetric’s attention score;
    2. Top ten articles by attention for each institution that were submitted with a non-redacted REF impact case study; and
    3. Top ten articles by Scopus citation count for each institution*.

Though the REF impact case studies included publications primarily released between 2008-2013 (as well as older research that underpins the more recent publications), the publications I used were limited to those published online between 2012 and 2013, when the most comprehensive Altmetric attention data would be available.

I also used Altmetric to dig into the qualitative data underlying the pure numbers, to see if I could discover anything interesting about what the press, members of the public or policymakers were saying about each institution’s research, how it was being used, and so on.

Before going any further, I should acknowledge some limitations to this exercise that you should bear in mind when reading through the conclusions I’ve drawn. First and foremost is that my sample size was too small to draw any conclusions about the larger body of publications produced across the entirety of England. In fact, while this data may show trends, it’s unclear whether these trends would hold up across each institution’s entire body of research. Similarly, using publication data from the 2012-2013 time period alone means that I’ve examined only a small slice of what was submitted with REF impact case studies overall. And finally, I used the Altmetric attention score as a means of selecting the highest attention articles for inclusion in this thought exercise. It’s a measure that no doubt biased my findings in a variety of ways.

With that in mind, here’s what I found out.

Are there differences between what universities think have the highest impact on the “real world” (i.e. are submitted as REF impact case studies) and what’s actually got the highest “real world” attention (as measured by the Altmetric score)?

In the Altmetric Explorer, you can learn what the most popular articles are in any given group of articles you define. By default, articles are listed by their Altmetric score: the amount of attention–both scholarly and public–that they receive overall.

You can also use the Explorer to dig into the different types of attention a group of articles has received and filter out all but specific attention types (mentions in policy documents, peer reviews, online discussions, etc).

So, I fed the list of articles from 2012-2013 that were submitted with each institution’s REF impact case studies into the Explorer, and compared their Altmetric attention data with that of the overall lists of publications from each institution during the same time period (sourced from Scopus). I then used the Altmetric score to determine the top ten highest attention articles from the REF submissions, and did the same for the overall list of articles from each institution. Those “top ten” lists were then compared, with unexpected results.
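For anyone who wants to repeat this kind of comparison, the list comparison itself is simple set arithmetic once the identifiers have been exported. The sketch below assumes you have saved the two top-ten lists as plain-text files of DOIs (the file names and helper function are made up for this example); the same approach works for the policy-document comparison discussed further down.

```python
def read_dois(path):
    """Read one DOI per line, normalising case and whitespace."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

# Hypothetical exports: top ten DOIs by Altmetric score overall, and the top
# ten DOIs among publications submitted with REF impact case studies.
top_overall = read_dois("top10_overall_by_attention.txt")
top_ref = read_dois("top10_ref_case_studies_by_attention.txt")

overlap = top_overall & top_ref
print(f"{len(overlap)} of {len(top_ref)} REF case-study articles "
      f"also appear in the overall top ten")
for doi in sorted(overlap):
    print("  shared:", doi)
```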

There is no real overlap in articles that are simply “high attention” (i.e. articles that have the highest Altmetric scores) and what was submitted with REF impact case studies. That’s likely because the Altmetric score measures both scholarly and public attention, and gives different weights to types of attention that may not match up with the types of impact documented in impact case studies.

However, when you drill down into certain types of attention–in this case, what’s been mentioned in policy documents–you do see some overlaps in the “high attention” articles with that type of attention from each institution, and what was submitted with REF impact case studies.

Even though the Altmetric score alone can’t always help choose the specific articles to submit with REF impact case studies, altmetrics in general may be able to help universities choose themes for the case studies. Here’s why: for both universities, the disciplines of publications submitted for the REF impact case studies (primarily public health, epidemiology, and climate change) closely matched the disciplines of overall “high attention” publications, as measured by the Altmetric Explorer.

So–we don’t yet have precise, predictive analytics powered by altmetrics, but altmetrics data can potentially help us begin to narrow down the disciplines whose research has the most “real world” implications.

If “impact on policy” is one of the most popular forms of impact submitted to the REF, do articles with policy impacts (as reported by Altmetric) match what’s been submitted in REF ICSs?

Yes. We found many more articles with mentions in policy documents than were chosen for inclusion in REF impact case studies for each institution. Yet, it’s still likely that a human expert would be required to select which Altmetric-discovered policy document citations are best to submit as impact evidence.

To find articles with policy impacts, I plugged in all publications published between 2012 and 2013 for both universities into the Explorer app. Then, I used Explorer’s filters to create a list of articles for each institution that were cited in policy documents.

Of all articles published between 2012 and 2013, thirty publications from University of Exeter and fifty-five articles from London School of Hygiene and Tropical Medicine had been mentioned in policy documents.

But how many of those articles were included in REF impact case studies? Turns out, the impact case studies from University of Exeter only included five articles of the thirty total that were found by our Explorer app to have policy impacts. And LSHTM impact case studies only included one of the fifty-five total articles that we found to have policy impacts.

I think there are two possible reasons for this discrepancy. The first is that each university only selected a small subset of their scholarship that has policy impacts in order to showcase only the work with potentially the most lasting mark on public policy, or that they wanted to submit research with only certain types of impacts (e.g. technology commercialisation). The other is that those researchers compiling impact case studies for their universities simply weren’t aware that these other citations in policy documents existed.

However, this is currently only speculation. We’ll need to talk with university administrators to know more about how impact case studies are selected. (More on that below.)

Can citation counts serve as a good predictor of what will be submitted to the REF?

No. Neither university submitted any articles with their impact case studies that were in the “top ten by citation” list, presumably because citations measure scholarly–not public–attention. However, that’s not to say that other universities would not use citation counts to select what to submit with REF impact case studies.

So what does this mean in practice?

Generally speaking, these findings suggest that Altmetric data can be useful in helping universities identify themes in the “real world” impact their research has had, and the diversity of attention that research has received beyond the academy. This data could be useful when building persuasive cases about the diverse impacts of research. For example, it can help scholars discover the impact that they’ve had upon public policy.

However, it’s unclear whether Altmetric data could help researchers choose specific publications to submit with impact case studies for their university. We’ll be doing interviews soon with university administrators to better understand how their selection process worked for REF2014, and whether Altmetric would be useful in future exercises.

There’s more digging to be done

In getting up close and personal with the Altmetric data during the course of this exercise, I came to realize that I had another assumption underlying my understanding of the data:

  • Assumption 3: There are probably differences in the types of research outputs that are included in REF impact case studies and the outputs that get a lot of attention overall, as measured by Altmetric. There are also probably differences in the types of attention they receive online. I’ve guessed that Open Access publications were more likely to be included in impact case studies (as all REF-submitted documents must be Open Access by the time REF2020 rolls around), that the most popular articles overall saw more love from Twitter than chosen-for-REF articles, and that the most popular articles overall on Altmetric had orders of magnitude more attention than chosen-for-REF articles.

And that assumption led to three more questions:

    1. Are there differences between what’s got the highest scholarly attention (citations),  the highest “real world” attention (as measured by Altmetric), and what’s been submitted with REF impact case studies? If so, what are they?
    2. What are the common characteristics of the most popular (as measured by Altmetric) outputs submitted with REF impact case studies vs. the overall most popular research published at each university?
    3. What are the characteristics of the attention received by outputs submitted with REF impact case studies versus high-attention Altmetric articles?

So, I’m rolling up my sleeves and getting to work.

Next month, I’ll share the answers I’ve found to the questions above, and hopefully also the perspectives of research administrators who prepared for the REF (who can tell me if my assumptions are on the mark or not).

In the meantime, I encourage you to check out the REF impact case study website and the Kings College London and Digital Science “deep dive” report, which offers a 30,000-foot view of REF impact case studies’ themes.

* The Altmetric data and Scopus citation information for these three groups of articles has been archived on Figshare.

We spend a lot of time in the Altmetric office talking about the varied sources and different types of research outputs we track – but as the team has grown we’ve been having to work harder to keep track of them! As a handy guide (for you and for us!) here’s a summary of the events we’ll be speaking at or attending in the next few months:

MLA 2015
15th – 20th May, Austin, Texas
Product Specialist Sara Rouhi and Research Metrics Consultant Stacy Konkiel are in town today and tomorrow hosting altmetrics workshops. There’s still time to tweet them if you’d like to meet up!

ORCID-CASRAI Joint Conference
18th – 20th May, Barcelona, Spain
Altmetric Training and Implementation Manager Natalia Madjarevic is there to give an overview of the various automated workflows and systems we’ve set up to help institutions easily implement our platform. Keep an eye out for her and say hello if you get a chance!

CARA 2015 annual conference
24th – 27th May, Toronto, Canada
Digital Science rep Stuart Silcox will be attending and on hand to answer all of your altmetrics questions! Drop Stuart a line if you’d like to arrange to meet.

SSP
27th – 29th May, Arlington, VA
Phill Jones will be chairing an Altmetric panel, “The Evaluation Gap: using altmetrics to meet changing researcher needs”, on Thursday the 28th of May. Join Phill and panelists Cassidy Sugimoto, Jill Rodgers, Terri Teleen, and Colleen Willis for an exciting discussion on the challenges and opportunities of these new metrics.

HASTAC
27-30th May, East Lansing, Michigan
Altmetric’s Research Metrics Consultant Stacy Konkiel will be attending the HASTAC conference this year. Stacy’s really interested in how we might further apply altmetrics to humanities disciplines, and is always up for discussing new ideas, so be sure to say hi!

Open Research Data: Implications for Science and Society
28th – 29th May, Warsaw, Poland
Product Specialist Ben McLeish will be giving a short presentation on “Digging for data: opportunities and challenges in an open research landscape” – and will be happy to meet to discuss any questions you might have.

NASIG
28th – 30th May, Washington DC
Sara Rouhi (based in Washington herself) will be speaking here as part of the Great Ideas Showcase. Sara will discuss our work with tracking attention to published research in public policy documents. She’ll look at some of the data we’ve gathered so far, and share feedback we’ve had from institutions who have been exploring it for themselves. Drop Sara a line if you’d like to chat, and be sure to stop by her session to find out more.

ARMA 2015
1st – 3rd June, Brighton, UK
A whole bunch of us are excited to be going to ARMA this year. There’ll be representatives from Digital Science, figshare and Symplectic (as well as us!). Register for the session we’re running in partnership with the University of Cambridge on the afternoon of the 2nd.

Digital Science Showcase
2nd June, Philadelphia
Altmetric founder Euan Adie will be speaking at this Digital Science event. Euan will discuss the opportunities for collaboration and showcasing which are gained from social, news and public policy attention. The overall theme for the day is “Technology Trends in Research Management, Showcasing Outputs & Collaboration”, and the line up is looking great!

Digital Science Showcase
4th June, Los Angeles
In the same week Euan will also speak at the Digital Science event being hosted in LA. The event will follow the same format as the Philadelphia day – with some excellent guest speakers presenting.

Impact of Science
4th – 5th June, Amsterdam, Netherlands
Altmetric COO Kathy Christian will be attending this event, which Altmetric are sponsoring for the first time. It promises to be an interesting couple of days, and do say hello to Kathy or get in touch if you’d like to arrange a chat.

ELAG annual conference
8th – 11th June, Stockholm, Sweden
Representatives from ETH Zurich will be presenting their motivations for and experience of implementing Altmetric at their institution, with support from Altmetric Product Specialist Ben McLeish. It’s shaping up to be a great session so do drop by if you can, or get in touch with Ben if you’d like to arrange a time to meet.

Open Repositories
8th – 15th June, Indianapolis
Stacy Konkiel will be attending on behalf of Altmetric, and she and the figshare team will be hosting a meetup on the Monday night (please email Stacy for details – free beer!). She’ll also be presenting a poster between 6-8pm on Tuesday the 9th, and is looking forward to participating in some thought-provoking sessions.

Symplectic UK User Conference
11th – 12th June, London, UK
We’ve very kindly been asked by our colleagues at Symplectic to present as part of their UK user day. Altmetric developer Shane Preece and Customer Support Exec Fran Davies will give an overview of our institutional platform, and discuss the API connector we’ve built for Symplectic Elements clients.

SLA 2015
14th – 16th June, Boston
SLA is going to be a busy one for us this year! Stacy will be there, and is presenting in the following sessions:

CERN Workshop on Innovations in Scholarly Communication (OAI9)
17th – 19th June, Geneva, Switzerland
Cat Chimes will be attending this workshop (alongside Digital Science’s Director of Research Metrics, Daniel Hook). Cat will be attending sessions and presenting a poster, “Understanding the impact of research on policy using Altmetric data”. Feel free to say hi or share any feedback or questions you might have.

ReCon
19th June, Edinburgh, UK
Euan will be speaking at the Edinburgh event, ‘Research in the 21st Century: Data, Analytics and Impact’. There’s also a hack day taking place on the 20th – get involved!

LIBER conference
24th – 26th June, London, UK
Our Training and Implementations Manager Natalia Madjarevic will be presenting alongside Manchester’s Scott Taylor. Natalia and Scott will be discussing our institutional platform – and Scott will share his experience so far of rolling it out amongst Manchester faculty.

ALA 2015
25th – 30th June, San Francisco
Altmetric’s Product Specialist Sara Rouhi has a packed schedule for ALA – but if you don’t have a chance to attend one of the sessions below feel free to get in touch if you’d like to chat with her.

  • Saturday June 27th:
    8:30 am – 10:00 am – Exhibits Round Table Program: The application of altmetrics in library research support
  • Sunday June 28th:
    10:30 am – 11:30 am – Altmetrics and Digital Analytics Interest Group; “Navigating the Research Metrics Landscape: Challenges and Opportunities” (Marriott Marquis San Francisco, Pacific Suite B)
    1:00 pm – 2:30 pm – ALCTS CMS Collection Evaluation and Assessment Interest Group meeting – Bookmetrix presentation with Springer

EARMA
28th June – 1st July, Leiden, Netherlands
Product Specialist Ben McLeish will be presenting alongside Juergen Wastl from the University of Cambridge. Ben will give an overview of altmetrics and Altmetric, and Juergen will discuss Cambridge’s motivations and experience of adopting the Altmetric institutional tool.

ICSTI workshop on “Innovation in Scientific and Technical Information”
4th July, Hanover, Germany
Ben McLeish will be speaking at this event hosted by the International Council for Scientific and Technical Information. He’ll be giving an introduction to Altmetric and an insight into how altmetrics can be applied to help institutions and researchers better position themselves and their research outputs.

Mini-symposium on measuring research impact for the SAPC Annual Conference
8th July, Oxford, UK
Altmetric Product Development Manager Jean Liu is presenting at this event. Jean will share some of the latest developments from Altmetric, and is keen to learn more about how the scholarly community view and evaluate the broader dissemination of their work.

… and somewhere in all of this, we’re hoping to find time for a team BBQ – fingers crossed for sunshine!

One of the things that our users seem to like most about our data is the fact that it’s easy to explore where a piece of academic research has been mentioned in a news outlet. Crucially, it helps authors to understand where in the world there is interest in their work, and offers an opportunity to gather insight into the public attention it is receiving beyond the academic sphere.

We’re therefore really excited to share that we have started working on a project that sees us partnering with news aggregator Moreover Technologies to provide further expanded tracking of mainstream media mentions across the Altmetric database and tools.

At present, Altmetric monitors a manually curated list of over 1,300 news outlets from around the world, searching for mentions of research outputs. The new partnership with Moreover will help us to expand our coverage to over 80,000 news outlets – alongside the existing manually curated list.

Advanced and comprehensive reporting on the coverage academic research is receiving in the media.

We gather news mentions of research outputs from our manually curated outlets using a mixture of tracking HTML links and text-mining.
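As a very rough illustration of what “tracking HTML links and text-mining” can involve, the sketch below scans a news story’s HTML for DOI-style identifiers and for links to publisher domains. It is a simplified, hypothetical example of the general approach – the pattern, the domain list and the function name are all assumptions, not a description of Altmetric’s or Moreover’s actual pipeline.

```python
import re

# Matches DOI-style identifiers such as 10.1234/example.5678 (simplified).
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[^\s"<>]+')

# Illustrative list of publisher/journal domains to look for in hyperlinks.
PUBLISHER_DOMAINS = ("nature.com", "sciencemag.org", "journals.plos.org")

def find_research_mentions(html):
    """Return DOI strings and publisher links found in a news article's HTML."""
    # Strip trailing punctuation that the simple pattern can pick up.
    dois = {m.rstrip(".,;)") for m in DOI_PATTERN.findall(html)}
    links = {url for url in re.findall(r'href="([^"]+)"', html)
             if any(domain in url for domain in PUBLISHER_DOMAINS)}
    return dois, links

html = ('<p>The paper (DOI: 10.1234/example.5678) is covered here: '
        '<a href="https://www.nature.com/articles/example">Nature</a>.</p>')
print(find_research_mentions(html))
```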

Building on this technology, Moreover’s daily feed of over 2 million news articles from over 100 countries will enable us to offer the most advanced and comprehensive reporting on the coverage that academic research is receiving in the media.

 


All news mentions will continue to be displayed on the ‘News’ tab of Altmetric details pages, along with a snippet of text from the article.

A unique mixture of manual and automated media tracking.

By displaying news mentions about scholarly content we hope to give authors, institutions, publishers, and funders a comprehensive picture of how a piece of academic work is being communicated to a broader audience – and to help them easily monitor, demonstrate and report on this attention.

Welcome to the April 2015 High Five here at Altmetric! In this blog post, I’ll be leading you on a tour of the top 5 peer-reviewed scientific articles this month according to Altmetric’s scoring system. On a monthly basis from here on out, my High Five posts will examine a selection of the most popular research outputs Altmetric have seen attention for that month.

 

General Physics Laboratory, Flickr.com

Paper #1. Genetically Modifying Human Embryos – A Debate

Our first “High Five” paper sparked controversy both in the scientific community and the public sphere last month. As George Dvorsky wrote in iO9, geneticists in China made history by genetically modifying human embryos. The science itself wasn’t the only thing that made headlines however – the ethical debate surrounding this type of genetic engineering is intense.

After weeks of speculation, it can finally be confirmed that geneticists in China have modified the DNA of human embryos. It’s a watershed moment in biotech history, but the experiment may ultimately serve as a major setback in the effort to responsibly develop beneficial interventions involving the human germline. Rumors about the experiment had been circulating for weeks, prompting calls for oversight and even a moratorium on such work. – George Dvorsky

The paper, published in Protein & Cell, was originally rejected by Nature and Science, “apparently on ethical grounds,” Dvorsky writes. The journals declined to comment on this claim, according to this Nature News article.

In the paper, researchers led by Junjiu Huang, a gene-function researcher at Sun Yat-sen University in Guangzhou, tried to head off such concerns by using ‘non-viable’ embryos, which cannot result in a live birth, that were obtained from local fertility clinics. The team attempted to modify the gene responsible for β-thalassaemia, a potentially fatal blood disorder, using a gene-editing technique known as CRISPR/Cas9. The researchers say that their results reveal serious obstacles to using the method in medical applications.

 

The team injected 86 embryos and then waited 48 hours, enough time for the CRISPR/Cas9 system and the molecules that replace the missing DNA to act — and for the embryos to grow to about eight cells each. Of the 71 embryos that survived, 54 were genetically tested. This revealed that just 28 were successfully spliced, and that only a fraction of those contained the replacement genetic material. “If you want to do it in normal embryos, you need to be close to 100%,” Huang says. “That’s why we stopped. We still think it’s too immature.” – David Cyranoski & Sara Reardon, Nature News

The paper (which IS open access) received widespread coverage in both the mainstream media and the science blogosphere. Carl Zimmer wrote about the paper on his National Geographic blog The Loom: “While these embryos will not be growing up into genetically modified people, I suspect this week will go down as a pivotal moment in the history of medicine.” In the blog post, Zimmer summarizes the history behind this research, what the researchers did and what the implications are.

There were other concerns about the paper as well, including an apparently short peer-review time period pointed out on Twitter. The journal, Protein & Cell, rejected these claims and wrote that “the editorial decision to publish this study should not be viewed as an endorsement of this practice nor an encouragement of similar attempts, but rather the sounding of an alarm to draw immediate attention to the urgent need to rein in applications of gene-editing technologies, especially in the human germ cells or embryos.”

It is probably worth discussing how peer review for the #CRISPR/human embryo study apparently only took 24 hours. pic.twitter.com/O71pqk9edY

— John Borghi (@JohnBorghi) April 22, 2015

The Australian Science Media Centre collected expert reactions to the new research and published them here.

The research is highly controversial for a number of reasons.  Firstly, research on human embryos is heavily restricted in Australia, and in other countries some level of regulation occurs.  Secondly, the ethical justification that the Chinese group used for performing this research in human embryos was that they used embryos that would not be able to yield a viable pregnancy.  In this case, they used donor embryos from a fertility clinic which has been fertilised by multiple sperm (the egg is very effective at stopping the penetration of more than one sperm at fertilisation, but occasionally this mechanism fails, and a “2-sperm fertilised” embryo with too much DNA is formed – these are not viable).

 

Finally, and perhaps the most concerning part of the research is the report of a large number of “off target” effects, meaning that their cut and paste editing occurred in the wrong place in the DNA, which was completely out of their control.  It is my current opinion that this type of research in human embryos must advance with extreme caution and that the statement in the abstract that this technology “holds tremendous promise for…clinical research”, at least in reference to human embryos, is misleading and irresponsible at this point in time. – Dr. Hannah Brown, Researcher at the Robinson Research Institute, the School of Paediatrics and Reproductive Health at the University of Adelaide and the ARC Centre of Excellence for Nanoscale BioPhotonics, for Australian SMC

Read more:

 

 

 

Suw Charman-Anderson, Flickr.com.

Paper #2. More Evidence of NO Link between the MMR Vaccine and Autism

The next High Five paper also relates to a public sphere controversy involving science (or lack thereof). A study published in the Journal of the American Medical Association (JAMA) this month confirms the lack of any association between the measles-mumps-rubella (MMR) vaccine and autism spectrum disorders.

In this large sample of privately insured children with older siblings, receipt of the MMR vaccine was not associated with increased risk of ASD, regardless of whether older siblings had ASD. These findings indicate no harmful association between MMR vaccine receipt and ASD even among children already at higher risk for ASD. – Jain et al. 2015

The study was covered by several news outlets and science blogs, most of which featured headlines such as “No link between MMR and autism, major study concludes.” The Guardian wrote that this research involving a cohort of 95,000 children “is [the] latest research to contradict findings of discredited gastroenterologist Andrew Wakefield.” [Given that, however, I’m not sure why this Guardian article decided to run a header image of a sharp (scary) needle. But visual communication best practices are another topic.] I like Phil Plait’s Slate article about this study – he writes about why the public fears the MMR vaccine.

People simply don’t make decisions based on facts. That’s not how we’re wired. Fear is an incredibly strong motivator, and many of the anti-vax groups use it to their advantage. Look at the truly atrocious Australian Vaccination Skeptic Network, who actually and truly compare vaccination to sexual assault (and seriously, survivors of such assaults may want to have a care clicking that link; the AVSN graphic is abhorrent and brutal). – Phil Plait

For background information, Maki Naro at The Nib did a great cartoon last year on how vaccines work.

Despite over a decades’ worth of research that have found no association between the measles vaccine and autism, some parents still refuse to immunize their children. Well, here’s a new study from the Journal of the American Medical Association (JAMA) that says, again, there’s no link. And this time, they looked at insurance claims for more than 95,000 children, some of whom have older siblings with autism spectrum disorders (ASD). – Study With 95,000 Children Finds No Link Between Autism and Measles Vaccine, Even In High Risk Children, IFLScience

Read more:

 

 

 

USPS.

Paper #3. The Brontosaurus is Back! 

Our third High Five paper is a bit lighter – but still at the center of a scientific “controversy.” This is the controversy over Brontosaurus – first it was a distinct genus of dinosaur, then it wasn’t even a type of dinosaur at all – then it was again! But why did scientists change everything we thought we knew about Little Foot in The Land Before Time in the first place? Charles Choi at Scientific American provides a good overview of the history of Brontosaurus:

The first of the Brontosaurus genus was named in 1879 by famed paleontologist Othniel Charles Marsh. The specimen still stands on display in the Great Hall of Yale’s Peabody Museum of Natural History. In 1903, however, paleontologist Elmer Riggs found that Brontosaurus was apparently the same as the genus Apatosaurus, which Marsh had first described in 1877. In such cases the rules of scientific nomenclature state that the oldest name has priority, dooming Brontosaurus to another extinction.

 

Now a new study suggests resurrecting Brontosaurus. It turns out the original Apatosaurus and Brontosaurus fossils appear different enough to belong to separate groups after all. “Generally, Brontosaurus can be distinguished from Apatosaurus most easily by its neck, which is higher and less wide,” says lead study author Emanuel Tschopp, a vertebrate paleontologist at the New University of Lisbon in Portugal. “So although both are very massive and robust animals, Apatosaurus is even more extreme than Brontosaurus.” – Scientific American

Brontosaurus infographic (by PeerJ) CC-BY.

The paper, published in PeerJ (which I think is cool in and of itself, as PeerJ is a leader in terms of fast peer-review and open access digital publishing), is titled A specimen-level phylogenetic analysis and taxonomic revision of Diplodocidae (Dinosauria, Sauropoda) and authored by Emanuel Tschopp, Octavio Mateus and Roger Benson. It is nearly 300 pages long and chock full of measurement tables and photos of fossils.

Tschopp didn’t set out to resurrect the Brontosaurus when he started analysing different specimens of diplodocid — the group to which Apatosaurus, Diplodocus and other giants belong. But he was interested in reviewing how the fossils had been classified and whether anatomical differences between specimens represented variation within species, or between species or genera. Tschopp and his colleagues analysed nearly 500 anatomical traits in dozens of specimens belonging to all of the 20 or so species of diplodocids to create a family tree. They spent five years amassing data, visiting 20 museums across Europe and the United States. – Ewen Callaway, Nature News

Study author Dr. Emanuel Tschopp appeared in an informative video interview with “This Week in Science.” Tschopp says that while scientists have uncovered an Apatosaurus skull, there are only rumors of a Brontosaurus skull.

Having only the fossils, we cannot make tests of interbreeding to see if they can have fertile offspring. So really very detailed comparison of the anatomy of the bones is the only way we can address this question. But it also takes, like the initial idea of Charles Darwin, that new species are formed by the accumulation of new traits. And these statistical approaches that we used have this as a basic idea. So the more different two skeletons, or also two groups of skeletons which can be species or genera, the more different they are, the more distantly related are they. – Dr. Emanuel Tschopp, in This Week in Science Interview

But while every news outlet under the sun was focusing on the return of the name Brontosaurus – indicative of the fact that scientific nomenclature CAN come to embody cultural meaning – some researchers pointed to other aspects of the PeerJ paper as having perhaps more scientific significance.

nataliedee.com

“Most of the press is really concentrating on whether or not Brontosaurus is back,” says [Mark] Norell, who was not involved in the study, “but that’s not really a scientific question — it’s more semantics. And as a scientific question, it’s not even really that interesting. What is interesting, however, is the detailed and complex phylogenetic treatment of a group of dinosaurs that’s hard to study.” – Mark Norell, Chair and Curator-in-Charge of the American Museum of Natural History’s Paleontology Division; iO9 article by George Dvorsky

My favorite post on the Brontosaurus news might have been Michael Balter’s guest blog post on Last Word on Nothing, Guest Post: Brontosaurus and Me. He recounts his own relationship with the story, which actually started nearly a year before the PeerJ paper was published, and how he came to see Brontosaurus in a cultural and personal context.

(An interesting side-note related to scientific publishing: PeerJ apparently broke its own embargo to make sure the 300 page document was correct online before journalists started linking to it. 300 pages! That doesn’t happen much at other traditional scientific journals.)

Read more:

 

 

 

His stare gets me every time. My dog Mojo! Photo by Paige Jarreau.

Paper #4. Puppy Love

Our fourth High Five paper will get you every time.

The Science paper, “Oxytocin-gaze positive loop and the coevolution of human-dog bonds,” describes something much closer to home for many of us than its scientific language might suggest: how gazing behavior from dogs promotes human-dog bonding.

 If you think of your dog as your “fur baby,” science has your back. New research shows that when our canine pals stare into our eyes, they activate the same hormonal response that bonds us to human infants. The study—the first to show this hormonal bonding effect between humans and another species—may help explain how dogs became our companions thousands of years ago. – David Grimm, Science News

It makes sense to me. I’ve always said I’ve never had a dog that will just sit there and stare me in the eyes like my dog Mojo does. And Mojo is definitely the closest pet I’ve ever had – did I say pet? He is basically family.

Study co-author Takefumi Kikusui told Live Science: “We humans use eye gaze for affiliative communications, and are very much sensitive to eye contact.” It appears that dog gazing behavior increases oxytocin (hormone) levels in dog owners.

The Japanese team measured oxytocin levels in urine before and after 30-minute interactions between volunteer humans and the dogs and wolves they had raised. The human-animal pairings were split into long and short gaze groups, to see how duration of gaze impacted oxytocin production. They found that only owners involved in the lengthier gazing sessions with, as the authors refer to it, the most “dog-to-owner gaze, and dog-touching”, had a significant increase in oxytocin. Analysis proved however that it was the gaze that was having the greatest impact on oxytocin levels – not the petting or “talking” interactions. The same oxytocin increase was also identified in urine samples from dogs, but never the wolves. – Liat Clark, Wired

Read more:

 

 

 

Maia Weinstock, Flickr.com

Paper #5. National hiring experiments reveal 2:1 faculty preference for women on STEM tenure track

Our final High Five paper – yet again – raised debate in the scientific community and the public sphere. This time it was about sexism in STEM careers, or an apparent lack thereof.

A study published in PNAS found that in a series of hiring experiments, contrary to prevailing assumptions, men and women faculty members from biology, engineering and psychology preferred female applicants 2:1 over identically qualified males with matching lifestyles. Study authors Wendy Williams and Stephen Ceci conclude: “Our findings, supported by real-world academic hiring data, suggest advantages for women launching academic science careers.”

According to the hiring data, fewer women applied for the STEM jobs, but those who did apply were hired at a higher rate than the men who applied.

[T]heir research suggests that women have the advantage, just because they are women and that competence wasn’t what was setting them apart. They sent identical applications to more than 800 tenure-track faculty in the US to consider, the only difference in these applications was the gender, and women were still the preferred candidate. – Julie Gould

Others were not convinced. Matthew R. Francis wrote about the study for Slate: “A vaunted new study says women have it easy in STEM fields. Don’t believe it.” He puts forward some very good arguments against the significance of the study’s findings.

Unfortunately, the study contradicts every other study about the problems women face in academia—and what’s more, their own research doesn’t back up their conclusions. Sexism is an ongoing problem in universities and science, technology, engineering, and mathematics fields, with persistent bullying of female faculty, prejudice against mothers, barriers to promotion, and lower pay than male colleagues for equal work. (In fact, among all of these well-documented issues, focusing narrowly on hiring practices feels wrongheaded.) – Matthew R. Francis, Slate

Francis points out a variety of factors that might have impacted the study’s findings. “And of course there’s the question of whether people’s good intentions might cause them to respond to a survey differently than they would behave in real life,” he writes.

The unpleasant truth is that women face a lot more challenges in STEM than university hiring practices. Williams and Ceci cloud the issue, both by their methodology and by their conclusions, which are contradicted by other research. We need to confront biases head on if we’re to fix the problem of sexism in STEM, a problem we can’t simply explain away with surveys and op-eds. - Matthew R. Francis, Slate

Read More:

This post discusses the launch and roll-out of Altmetric data and badges across the Michigan Publishing portfolio. It was written with great input from Jon McGlone and Rebecca Welzenbach at Michigan Publishing – thank you both for your efforts and thoughtful contributions! 

You might have seen Michigan Publishing’s recent announcement on the introduction of Altmetric badges across their journal portfolio. The roll-out of the badges marks the beginning of what Michigan Publishing are positioning as a two-year pilot phase to roll out Altmetric data across much of their content – including grey literature hosted in their institutional repository platform, Deep Blue.

A publisher with a strong tradition in the humanities and in technical innovation, Michigan Publishing place a focus not only on ensuring that their business is sustainable, but also on encouraging and supporting their authors in exploring more diverse forms of publishing. In doing so they are keen to help their authors get credit for the research outputs that extend beyond journal articles – and to provide them with the feedback and data to enable them to demonstrate the impact of all of their work.

It’s equally important to Michigan Publishing that the value they offer to their authors can be reflected back to their stakeholders; to the institution that supports them. In implementing the Altmetric data across their portfolio, and using the Altmetric Explorer for internal reporting, they are aiming to gather a much more extensive and transparent understanding of who is using their content and how their publishing programmes are adding value to the disciplines that they serve.

“Altmetric is really important to us in terms of being able to tell stories about impact to be able to report back to our parent institution, and to the authors that publish with us.”

Open access is at the heart of Michigan Publishing’s portfolio. They are one of the first organizations to offer a fully open access journal program that does not charge author fees and is for the most part supported by library and volunteer staff. Although they have been publishing OA books since 2005, they are aiming to improve how they quantify the value of their open monograph program and to ensure its usage and sustainability. Altmetrics, they believe, will play a crucial role in reporting on the attention surrounding their outputs, and in helping them to position that content effectively. Their researchers, they note, are asked to report on the impact of their work – and are looking for stories to tell that provide evidence of engagement beyond the academic sphere.

The new initiative ‘The Common Good: Humanities in the Public Square’ has not gone unnoticed by Michigan Publishing. Keen to maintain their reputation as a recognized thought-leader in their space, they have taken it as further indication of the drive to look beyond traditional outputs and metrics alone as a measure of success.

As library publishers seek to develop their strategies, sustainability and the ability to demonstrate need and usage are for many equally, if not more, important than financial ROI. Data such as that provided by Altmetric can help them gather the feedback that is recognised as difficult to track and quantify: how is our research being used, what impact is it having in the ‘real world’, and how can we demonstrate this?

“Authors are being asked to deliver a whole range of more quantitative metrics of impact, to talk about why their thoughts are worthwhile, and they’re searching for good stories to tell.”

Already, Altmetric data has helped Michigan Publishing uncover stories that demonstrate the importance of their open access programs to audiences beyond academics or in countries with limited access to subscription-based academic journals. For example, one article published in the Trans Asia Photography Review has seen significant Twitter attention in India, a story they were able to identify using Altmetric’s geographical breakdown of article mentions.


Another article, published in the Journal of the Abraham Lincoln Association, was referenced in several news outlets in 2014, something Michigan Publishing was not aware of until reviewing Altmetric data for the journal.

The project has begun with the initial implementation of Altmetric badges across the OA journals and books. From there it is intended to be extended across their Deep Blue repository content, and further into an increasing spread of non-traditional research outputs.

Altmetric’s Euan Adie adds, “We’re really pleased that Michigan Press decided to come on board with Altmetric. A lot of the work they are doing in supporting the research community closely aligns with the objectives that we as a company have set out to demonstrate, and it’s great that they are planning to apply the data extensively across non-article research outputs.”

As an organisation that keeps the aims of their institution and the academics they serve integral to all of their development, Michigan Publishing look forward to further exploring the opportunities that altmetrics hold for themselves, their institution, and their authors.

We’re excited to announce that we’ve launched a brand new design for the Altmetric details pages. The new details pages are now appearing on the Altmetric Explorer, Altmetric for Institutions, and the free Altmetric Bookmarklet. (They’ll also be made available on publicly-accessible publisher details pages soon.) Read on to find out about the new features we’ve added, and learn more about our launch plans.

New Details Page
 

Listening to user feedback

The new design constitutes the most significant change we’ve made to the details pages since Altmetric was founded in 2011. Efforts to revamp the details pages have been driven primarily by intensive user research we’ve conducted in recent months. To find out what our users wanted to see on the details pages, we conducted hours of interviews with various organisations, including universities, funders, and publishers.

During our interviews, we gathered a lot of interesting feedback about the usability of the pages, how often certain tabs were accessed, and so on. We also learned that users wanted increased clarity in the Score tab, as well as more information and transparency about the data sources that we track (beyond what already exists on our website and Knowledgebase). We also got some pretty cool feature requests, which we’ve started implementing in the new details pages.

Our primary goals were to improve the overall user experience for the details pages and to make all messaging as clear as possible.

 

Exploring the new details pages

As part of this major re-design, we’ve made the details page layout clearer and also mobile-friendly, meaning that the pages now load beautifully on mobile phones and tablets.

The new details pages are now mobile-friendly.

Try clicking through some examples of the new details pages here and here. One significant change that you’ll notice right away is the addition of a “Summary Tab”, which includes bibliographic information, demographics for Twitter and Mendeley, and simplified score in context information. We’ve also added a prominent button that enables you to receive e-mail alerts whenever a particular article is mentioned. (This feature was designed for authors, and provides a daily e-mail digest summarising new attention for subscribed articles.)

The metrics legend (underneath the donut) is now clickable – if you click on a particular source, you’ll be taken to the corresponding source tab. For instance, if you click on the number of Wikipedia page references in the legend, you’ll be taken directly to the Wikipedia tab of the details page.

If you click through the various source tabs, you’ll notice that we’ve added some help buttons (“?” icons that lead to popouts) about our various sources and the Altmetric score in context. These popouts explain what you are seeing on each tab, and can help you to understand the kinds of things that Altmetric collects for each source. There are also clearer directions for getting in touch with our support team, if you happen to need any extra assistance.

Finally, we’ve added some neat sharing features, including the “Embed badge” button, which gives you HTML code that you can use to embed the Altmetric donut (for the article you’re viewing) onto your personal website, CV, or blog. We’ve also added a “Share” button, which lets you easily share a link to a details page on social media or by e-mail. It’s now easier than ever for authors to tell the world about their work’s successes in public engagement and scholarly influence.

 

Launch plans

Today, the new details pages are being launched simultaneously on the Altmetric Explorer, Altmetric for Institutions, and the Altmetric Bookmarklet. This means that within these three products, any link you click to view a details page will redirect to the newly-designed version.

For the time being, badges on publisher sites and the free embeddable badges will continue to link to the older version of the details pages. Over the next few months, we will be gradually phasing out the old details pages and replacing them with the new design. We’ll be getting in touch shortly with all publishers who use Altmetric to discuss their individual upgrades to the new details pages.

 

Concluding thoughts

When reporting on altmetrics data, we think that it’s crucial to look at the conversations surrounding scholarly work, as well as the raw metrics. Auditability of the data is important to us, which is why the Altmetric details pages are meant to bring all the actual conversations and metrics into one place. Whether you use the Bookmarklet, the Explorer, Altmetric for Institutions, or the Altmetric API, the details pages are a big part of your experience with our data. We hope that these new eye-catching details pages will make it even easier to see and showcase the conversations surrounding your research.
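
For anyone who prefers to work with the data programmatically rather than through the details pages, here is a minimal sketch of fetching the attention summary for a single DOI from the public Altmetric API. The endpoint path, field names, and the placeholder DOI below are illustrative assumptions only, so please check the current API documentation before building on them.

# Hypothetical sketch: fetch attention counts for one DOI from the public
# Altmetric API. Endpoint and field names are illustrative assumptions.
import requests

def fetch_attention(doi):
    """Return a small attention summary for a DOI, or None if it is untracked."""
    response = requests.get("https://api.altmetric.com/v1/doi/" + doi)
    if response.status_code == 404:
        return None  # no attention picked up for this output
    response.raise_for_status()
    data = response.json()
    return {
        "score": data.get("score"),
        "news": data.get("cited_by_msm_count", 0),
        "blogs": data.get("cited_by_feeds_count", 0),
        "tweets": data.get("cited_by_tweeters_count", 0),
        "details_page": data.get("details_url"),
    }

print(fetch_attention("10.1234/placeholder-doi"))  # placeholder DOI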

What are your thoughts about the new details pages? Let us know by commenting on this post, e-mailing us, or sending us a tweet.

Introducing Bookmetrix

Celebrating the launch of Bookmetrix after the London Book Fair. From left to right: Milan Wielinga and Martijn Roelandse of Springer; Euan Adie, Jean Liu, Matt MacLeod, and Jakub Pawlowicz of Altmetric.

This week, the London Book Fair saw the launch of Bookmetrix, an exciting new book metrics platform that we have built in partnership with Springer. The project was born after Martijn Roelandse (Manager Publishing Innovation at Springer), Euan Adie (Founder of Altmetric), and Milan Wielinga (EVP Strategy and M&A at Springer) began brainstorming ways to showcase the wider impact of books, similar to how altmetrics are being used to illustrate the wider impact of articles. After formalising the project, a dedicated team within Altmetric worked closely with Springer counterparts for 6 months to transform the initial ideas into working software.

At Altmetric, we are constantly exploring ways in which our technology can be used to uncover mentions of articles and other types of scholarly content. The project arrived at a great time for us, as we were already keen to add more support for books.

And so, the mission of Bookmetrix is this: to give authors, editors, and readers a unique way to explore the broader impact and engagement generated by a Springer book. By bringing together many different types of metrics, namely citations, online mentions, reference manager readership stats, book reviews, and downloads, we hope that Springer’s editorial teams will be able to gain a better understanding of how their books have been received. Additionally, with all the new data that may potentially be used to support researcher CVs and funding applications, Springer authors should be able to get more credit for the books and chapters they have written.

 

A closer look at the platform and data

The central part of the platform is the free-to-access “Bookmetrix details page”, which allows users to browse through all the metrics and data for individual Springer books and chapters. Each book in the Springer database has its own Bookmetrix details page, and can be accessed from the book page on SpringerLink (see an example here), as well as via the Papers app.


An example of a Bookmetrix details page.

In order to broaden the picture of impact, Altmetric contributed online mentions data to Bookmetrix, making it possible to see how Springer books and chapters have been referenced across mainstream media, policy sources, Wikipedia, blogs, social media, and more. Most of the other data available for each book and chapter in Bookmetrix have come directly from Springer, including citations (which are gathered from CrossRef), usage data (downloads), and featured book reviews.
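
By way of illustration (and not a description of Springer’s actual pipeline), a citation count for a given book or chapter DOI can be looked up from CrossRef’s public REST API roughly as follows; the DOI used here is a placeholder.

# Illustrative sketch only: look up CrossRef's citation tally for a DOI.
import requests

def crossref_citation_count(doi):
    """Return the number of citations CrossRef records for a DOI."""
    response = requests.get("https://api.crossref.org/works/" + doi)
    response.raise_for_status()
    record = response.json()["message"]
    return record.get("is-referenced-by-count", 0)

print(crossref_citation_count("10.1007/placeholder-chapter-doi"))  # placeholder DOI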

In addition to the Bookmetrix details pages, we also built an internal book search interface for Springer staff, enabling their editorial, marketing, and sales teams to search, filter, analyse, and report on metrics for their entire books collection.

 

Behind the scenes at Altmetric

The team at Altmetric. From left to right: Louise Hills, Matt MacLeod, Jean Liu, and Jakub Pawlowicz.

Development formally started on a pleasant autumn day last year, when we gathered in our London office with Martijn Roelandse to draw up the first plans for the project. Martijn spent the whole day with us, sharing his vision for the product and discussing the features that we wanted to deliver.

Our small Altmetric development team, consisting of Matt MacLeod (Software Developer), Jakub Pawlowicz (Software Developer), Louise Hills (Agile Coach), and myself (Product Development Manager), spent 6 months building Bookmetrix from the ground up. We checked in regularly with Martijn and other staff at Springer, and also presented new features in internal demos every 2 weeks.

We worked with Springer to plan out the features we would deliver as part of Bookmetrix.

Since we were building a completely new product (which was quite different from Altmetric!), we made sure to pepper our development process with several rounds of user research, mainly with focus groups. Every step of the way, we wanted to make sure that we were building the right features for end-users, and that our software was easy to understand and use. We are very grateful to everyone who participated in our user research sessions (often on very short notice)!

I know that everyone involved with the project on the Altmetric side will agree when I say that the development of Bookmetrix has gone very smoothly. Springer were great partners – we are grateful to Martijn and his colleagues, who recruited users for testing, worked hard to get us the Springer data that we needed, and helped us to sync up with the development processes at SpringerLink and Papers.

We’ve been delighted to see such a warm response to the product following its launch at the London Book Fair, and we hope that Springer authors, editors, and readers will enjoy using Bookmetrix as much as we have enjoyed building it.

 

What do you think?

We’d love to hear what you think about Bookmetrix, so please share your comments and feedback with us below, or by sending us a tweet at @altmetric.

If you’d like to stay up to date with all the latest Bookmetrix news, you can follow its brand new Twitter feed, @Bookmetrix.

The Research Excellence Framework (REF) is a process of assessing research quality at UK universities, funded and organised by the Higher Education Funding Council for England (HEFCE). At Altmetric, we work with a lot of UK universities and are always really interested in learning more about the processes of research assessment. To this end, I attended the one-day REFlections event at the Royal Society on 25th March, to gain an insight into what people thought of the REF 2014, and in particular to hear what people had to say about the controversial decision to introduce “impact” as one of the assessment areas.

The day started with some very positive statistics. According to the “key facts” sheet in the conference pack, research from 154 UK universities was assessed, and 1,911 submissions were made. The results showed that 30% of submissions were given a four star rating and judged to be “world leading” (up from 14% in 2008), while 46% of submissions were classified as “internationally excellent” and awarded a three star rating (up from 37% in 2008). To give a bit of context, David Sweeney (Director of Research and Knowledge Exchange at HEFCE) informed the audience that roughly the same number of staff made submissions in both years, suggesting a genuine increase in top quality research.

Listening to each speaker present their thoughts, I felt the theme of the morning could definitely be summed up in one word: “multidisciplinarity”. David Sweeney posed the idea that the results of the REF 2014 defy previous criticisms that, by categorising submissions under their respective academic disciplines, the exercise approaches institutional research in a narrow or insular way.

M’hamed El Aisati gave a very impressive presentation about a project undertaken by Elsevier, which was executed with the guiding principle that “some of the most interesting research questions are found at the interface between disciplines”. The project involved looking at how often journals across a large range of subjects were citing each other, and translating this into an infographic, or “map”. However, various members of the audience thought El Aisati was promoting the idea that multidisciplinary research is inherently good. They raised concerns that people would attempt to bias the REF responses to their submissions by including more accounts of multidisciplinary research, if they assumed the REF would favour accounts of academic endeavours combining more than one subject. The question of how to balance an appreciation of multidisciplinary research whilst continuing to honour and recognise the findings of more niche academic subjects was therefore an interesting one.

The late morning and afternoon sessions moved on from the question of quality and started to focus on the “impact” section of the assessment, which accounted for 20% of the overall assessment for each submission. Jonathan Adams closed the morning’s proceedings by introducing the REF impact case study database, which was put together in partnership with Digital Science. Each case study includes an introduction to the research, as well as citation data and a “details of the impact” section. For example, part of the impact described in a case study from Durham University, entitled “an X-ray tool for predicting catastrophic failure in semiconductor manufacture”, was that “Jordan Valley semiconductors UK made the strategic decision to invest in the design and manufacture” of safer X-ray imaging tools.

After lunch, two analysts from RAND Corporation reported on their evaluation of the impact assessment process, in which they conducted face-to-face and telephone interviews with those who had been involved in making submissions, and with the assessment panelists. They summed up their findings as follows:

“The introduction of an impact element in REF 2014 might have been expected to generate concerns because of the relative novelty of the approach and because of the obvious difficulties in measurement, but in general it has succeeded.”

So, what can metrics providers make of all this? Taking all the speakers into account, it seems as though “impact” is increasing in importance as a way of assessing research quality, but the “obvious difficulties in measurement” described by RAND suggest a lack of tools with which to measure and quantify such a broad and slippery term, and translate it into relevant numbers. This, then, is the gap in the market that metrics providers, whether of bibliometrics or altmetrics, are seeking to fill.

However, this idea was somewhat contradicted by James Wilsdon, when he spoke about the Independent Review of the Role of Metrics in Research Assessment, the full results of which are to be published in July 2015. James concluded that “it is not currently feasible to assess the quality of research outputs based on quantitative indicators alone”. He elaborated that “no set of numbers can create a nuanced judgement of research” and that the “collection of metrics for research is not cost-free”. In response to this, perhaps it’s worth pointing out that Altmetric is not simply “a set of numbers”: we try to provide qualitative as well as quantitative data. We aim to give our users a level of granularity by allowing them to click through to the full text of all mentions, and to the profiles of those who have shared research on social media.

In summary, REFlections provided much food for thought as to the role of “impact” in assessing research quality in UK HEIs, and the role of metrics in determining impact. At Altmetric we’re continuously preoccupied with questions of data coverage. How can we go beyond the article, and provide data for other research outputs? How can we increase our coverage beyond the sciences, and provide data for other academic disciplines?

It occurred to me as I left the Royal Society that even if multidisciplinary research isn’t inherently good, and even if high-impact research doesn’t automatically mean good research, a multi-faceted approach to assessing impact itself might be the best way forward.

This guest post is contributed by Fernando T. Maestre. Fernando is a Professor in the Biology and Geology Department of the Universidad Rey Juan Carlos, in Móstoles (Madrid, Spain). In this post he talks about how he has been using altmetrics data to supplement his funding proposals and impact reporting:

Some months after I wrote a tweet about how I was using alternative metrics of the impact of my research outputs (altmetrics hereafter) in my proposals, I was contacted by Cat Chimes from Altmetric, who asked me if they could use it as an example of how researchers are using altmetrics. Soon after that I wrote a brief post on my lab’s blog about this topic; this post was also noticed by Chris Woolston, who wrote a piece for Nature on funders’ interest in using altmetrics to measure the impact of the research they pay for. If you are interested in this topic and have not done so yet, I would encourage you to read Dinsmore et al. (2014).

As an extension of my previous post, here I show how I have used altmetrics in my research proposals, as this may help other researchers interested in doing so. While I am not going to provide an in-depth discussion of what altmetrics can do or why you should use them (there are already plenty of excellent posts, articles and discussions on this topic), I will offer some personal thoughts on why I found these metrics useful and why it is a good idea to include them in our proposals and research reports. So far I have used altmetrics in two proposals and a prize nomination, all submitted in 2014, and all successful.

In the first proposal, submitted to the Humboldt Research Award of the Humboldt Foundation in Germany, I had to describe five relevant publications. I included several measures of the impact of these publications, including altmetrics. Here is how I did it (note that the numbers correspond to the moment I prepared this application, in February 2014):

1. Maestre, F. T. et al. 2012. Plant species richness and ecosystem multifunctionality in global drylands. Science 335: 214-218.

“This study presents the first set of analyses of a global network of dryland sites (224 from all continents except Antarctica), which has been led by Dr. Maestre as part of his European Research Council-funded Starting Grant BIOCOM (http://goo.gl/u9H8tH). While many experiments have suggested that biodiversity enhances the ability of ecosystems to maintain multiple functions, such as carbon storage, productivity, and build-up of nutrient pools (multifunctionality), this study was the first in evaluating the relationship between biodiversity and multifunctionality in natural ecosystems at a global scale. Its main finding was that multifunctionality was positively and significantly related to species richness; the best-fitting models used accounted for over 55% of the variation in multifunctionality, and always included species richness as a predictor variable. The results of this work suggest that the preservation of plant biodiversity is crucial to buffer negative effects of climate change and desertification in drylands, which collectively cover 41% of Earth’s land surface and support over 38% of the human population. Some indicators of the relevance of this article and its impact among the scientific community are the number of citations it has received so far (55 and 90 according to ISI´s Web of Science and Google Scholar, respectively), which have made it be named as a “Highly cited” article by ISI, and the three evaluations received from Faculty of 1000 (F1000) members, which have rated it as a “Must read”/ “Recommended”  article (http://goo.gl/cLa4gl). This study has also been widely discussed in the social media, as indicated by an Altmetric score of 50, which makes it scoring higher than 98% of its contemporaries and includes it into the top 5% of all the articles tracked by Altmetric (more than 1,660,000; see http://goo.gl/aNVUUk for details). In addition, this work has been featured by newspapers, magazines, web pages and blogs from around the world (see http://goo.gl/JrJ4EY for a selection of news).”

 

2. Delgado-Baquerizo, M., F. T. Maestre, et al. 2013. Decoupling of soil nutrient cycles as a function of aridity in global drylands. Nature 502: 672-676.

“Using the network of sites deployed in the framework of the BIOCOM project, this study reports a negative effect of aridity on the concentration of organic C and total N, but a positive effect on that of inorganic P, in dryland soils worldwide. Aridity was negatively related to plant cover, which may favor the dominance of physical (i.e. wind-blown sands that abrade exposed rock surfaces) over biological (i.e. litter decomposition) processes. The results of this study indicate that the predicted increase in aridity with climate change by the end of this century will uncouple the C, N and P cycles in dryland soils, thus negatively affecting the provision of key ecosystem services by drylands, such as the buildup of soil fertility and carbon fixation. This article has attracted lots of attention from scientists since its publication, as it was the object of a “News & Views” in Nature (Wardle, 2013, Nature 502: 628-629), and has been viewed more than 6300 times since its publication two months ago (see http://goo.gl/EuHYOv for details). This article has also been widely discussed in the social media, as indicated by an Altmetric score of 151, which makes it scoring higher than 99% of its contemporaries and includes it into the top 5% of all the articles tracked by Altmetric (more than 1,730,000; see http://goo.gl/f3fu3A for details). This study has also received substantial attention by newspapers, magazines, web pages and blogs from around the world (see http://goo.gl/CU2hSR for a selection of news).”

Similarly, as part of my application to the Consolidator Grants program of the European Research Council (which just funded my BIODESERT project), I had to present a section on “Early achievements track-record”. Within this section I included key publications with the number of ISI Web of Science® [Google Scholar] citations (excluding self-citations) they had accrued, as well as their altmetrics. For the two publications presented above, here is how I did it (note that the numbers correspond to the moment I prepared this application, in May 2014):

1) Maestre, F.T. et al. 2012. Plant species richness and ecosystem multifunctionality in global drylands. Science 335: 214-218. IF = 31.027; 62 [101] citations. This article has received three evaluations from Faculty1000 members, has an Altmetric score of 49 and has been featured in more than 100 newspapers, blogs and online news outlets.

2) Delgado-Baquerizo, M.*, F.T. Maestre et al. 2013. Decoupling of soil nutrient cycles as a function of aridity in global drylands. Nature 502: 672-676. IF (2012) = 38.597; 2 [8] citations. This article has an Altmetric score of 149, and has been featured in more than 100 newspapers, blogs and online news outlets. * graduate student I have supervised

Finally, as part of the nomination package for the “Miguel Catalán” prize for scientists under 40, awarded annually by the Regional Government of Madrid (“Comunidad de Madrid”), I had to comment on three relevant scientific articles I have published. I included altmetrics when describing the “impact” of these publications, as shown above in the example from my Humboldt Research Award application (the full application for the Miguel Catalán prize was written in Spanish, so I will not reproduce it here).

I found altmetrics particularly useful for those papers and research products (such as databases) that have been published recently, as they provide a nice way to showcase the “impact” of research outputs before they start to accrue citations. Whether there is a correlation between altmetrics and citations is a matter of ongoing research and discussion, with poor correlations observed so far (Thelwall et al. 2013, Costas et al. 2014 and Peters et al. 2015), but a high score in the Altmetric “donut” indicates that your research is being noticed (and is thus likely to be used in the future) by the research community.
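
(For readers who want to explore this question themselves, the kind of comparison made in those studies can be sketched as a rank correlation between Altmetric scores and citation counts for a set of papers. The figures below are invented purely for demonstration.)

# Purely illustrative: a Spearman rank correlation between invented
# Altmetric scores and citation counts for a small set of papers.
from scipy.stats import spearmanr

altmetric_scores = [50, 151, 12, 3, 85, 0, 27]
citation_counts = [62, 8, 40, 5, 116, 2, 19]

rho, p_value = spearmanr(altmetric_scores, citation_counts)
print("Spearman rho = %.2f, p = %.3f" % (rho, p_value))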

Perhaps more importantly, it is becoming increasingly crucial that our research gets the widest dissemination possible, regardless of whether your lab budget comes from a public or private funder. Indeed, dissemination beyond the traditional scientific “circuit” (articles, scientific meetings, workshops…), and particularly among the general public, is now a requisite for funding agencies and foundations worldwide. Social media provide excellent opportunities to disseminate our work beyond our peers, and thus altmetrics provide a very nice way of measuring the “impact” of scientific activities among a wider audience. I do not see altmetrics as a replacement for more traditional measures of “impact”, such as the number of citations or the h-index, among other reasons because many scientists have not fully embraced social media (including myself; for example, I am not a Mendeley reader yet…). However, the capabilities of altmetrics make them a good complement to these more traditional “impact” metrics.

If you have suggestions about how to use altmetrics in your research proposals or reports, please send me a tweet (@ftmaestre) or e-mail, I would love to hear them.

 

References

Costas R, Zahedi Z, Wouters P (2014) Do altmetrics correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective. arXiv:1401.4321. doi: 10.1002/asi.23309

Dinsmore A, Allen L, Dolby K (2014) Alternative Perspectives on Impact: The Potential of ALMs and Altmetrics to Inform Funders about Research Impact. PLoS Biol 12(11): e1002003. doi:10.1371/journal.pbio.1002003

Peters I, Kraker P, Lex E, Gumpenberger C, Gorraiz J (2015) Research data explored: Citations versus altmetrics. arXiv:1501.03342.

Thelwall M, Haustein S, Larivière V, Sugimoto CR (2013) Do Altmetrics Work? Twitter and Ten Other Social Web Services. PLoS ONE 8(5): e64841. doi:10.1371/journal.pone.0064841