July High Five – Four-legged snakes and other shocking discoveries

Welcome to Altmetric’s “High Five” for July. Each month, my High Five posts discuss the five scientific papers with the highest Altmetric scores – a selection of the most popular research outputs Altmetric has seen attention for that month.

The theme this month is shocking (and not-so-shocking) discoveries.

 

Tetrapodophis specimen. Credit: Dave Martill.

 

Paper #1. Four-legged fossil snake is a world first

Our top paper this month is “A four-legged snake from the Early Cretaceous of Gondwana,” published in Science on July 24, 2015. The study, authored by researchers based in the US, the UK and Germany, details the discovery of a fossil snake… with four legs.

The discovery of snakes with two legs has shed light on the transition from lizards to snakes, but no snake has been described with four limbs, and the ecology of early snakes is poorly known. We describe a four-limbed snake from the Early Cretaceous (Aptian) Crato Formation of Brazil. The snake has a serpentiform body plan with an elongate trunk, short tail, and large ventral scales suggesting characteristic serpentine locomotion, yet retains small prehensile limbs. – Martill, Tischlinger, Longrich, 2015

According to the Science editor’s summary of the study, this ancestor of today’s snakes, named Tetrapodophis amplectus, “appears to have been a burrower and shows clearly the early transitional stages from a lizardlike body plan to the smooth legless snakes we know today.”

The paper, shared by members of the public and scientists alike, received coverage from major news sources including CBS News, “Fossil shows prehistoric snake had four feet,” the Telegraph, “Four-legged snake discovered,” Wired, “Four-legged ‘hugging snake’ could be a missing link,” and the Washington Post, “Four-limbed, long-fingered snake hints at a creepy crawly evolutionary journey.” The fossil itself is about 110 million years old.

I thought, ‘Bloody hell, it’s got back legs!’ It had front legs. Nobody had ever seen a snake before with four legs, and yet evolutionary theory predicts that there should be an animal that is transitional between four-legged lizards and snakes, and here it was. – David Martill, study co-author, as quoted by Laura Geggel, LiveScience

The crazy part is that David Martill had this “Bloody hell” moment not in the field, but when he was looking closely at the unidentified fossil as he led a student group through the collections at the Bürgermeister Müller Museum in Solnhofen, Germany.

I looked closer and the little label said: Unknown fossil. Understatement! I looked even closer—and my jaw was already on the floor by now—and I saw that it had tiny little front legs! […] But no snake has ever been found with four legs. This is a once-in-a-lifetime discovery. – Paleobiologist David Martill, as quoted by Ed Yong, Not Exactly Rocket Science

Ed Yong did a fantastic job describing the implications of this fossil find in his blog post on the subject at National Geographic:

There are two competing and fiercely contested ideas about this transition. The first says that snakes evolved in the ocean, and only later recolonised the land. This hypothesis hinges on the close relationship between snakes and extinct marine reptiles called mosasaurs (yes, the big swimming one from Jurassic World). The second hypothesis says that snakes evolved from burrowing lizards, which stretched their bodies and lost their limbs to better wheedle their way through the ground. In this version, snakes and mosasaurs both independently evolved from a land-lubbing ancestor—probably something like a monitor lizard. Tetrapodophis supports the latter idea. It has no adaptations for swimming, like a flattened tail, and plenty of adaptations for burrowing, like a short snout. It swam through earth, not water.

The discovery of this four-legged snake is not without controversy, however. There is some debate over how the fossil came to reside in a private collection when it was identified as originating in Brazil, where the export of such fossils is generally illegal. Shaena Montanari, a writer who covers paleontology, dinosaurs and comparative biology, wrote in a July 24th Forbes article: “I am slightly surprised that Science allowed this to be published despite the fact the legality and provenance of this specimen can in no way be proven beyond a guess, albeit a very educated one.”

It appears that the technicalities surrounding the collection and discovery of this fossil are as intriguing as the fossil’s implications for the history of snakes.

I think this creature is far more exciting for what it might be than for what [the team] says it is. – Michael Caldwell, vertebrate paleontologist, University of Alberta, as quoted by Sid Perkins for ScienceNOW

 

Credit: Brandon Motz, Flickr.com


 

Paper #2. Insights into Sexism… via Video Games

Our next High Five paper was published in PLOS ONE this month, titled “Insights into Sexism: Male Status and Performance Moderates Female-Directed Hostile and Amicable Behaviour.” The researchers, Michael Kasumovic and Jeffrey Kuznekoff, used online video games to study gender-directed behavior, cued by the sound of another player’s voice (male or female).

We used an online first-person shooter video game that removes signals of dominance but provides information on gender, individual performance, and skill. We show that lower-skilled players were more hostile towards a female-voiced teammate, especially when performing poorly. In contrast, lower-skilled players behaved submissively towards a male-voiced player in the identical scenario. This difference in gender-directed behaviour became more extreme with poorer focal-player performance. We suggest that low-status males increase female-directed hostility to minimize the loss of status as a consequence of hierarchical reconfiguration resulting from the entrance of a woman into the competitive arena. Higher-skilled players, in contrast, were more positive towards a female relative to a male teammate. – PLOS ONE study

The study sparked headlines in several online news outlets, such as “Study: Online gaming ‘losers’ are more likely to harass women” in Ars Technica. Most outlets ran with the analogy that males who exhibit the most sexist behavior in online gaming environments are the “losers.”

Talk to women who play games online, especially first-person shooters, and you’ll quickly hear tales of them being bombarded with gender-focused harassment if and when they decide to speak up on a groupchat channel. Now, a new study suggests that the players most likely to engage in this kind of harassment are the ones who are actually worst at the game itself. – Kyle Orland, Ars Technica

However, a few of those writing about the study pointed out that Kasumovic and Kuznekoff have relatively little evidence to back up their claims that the sexist behavior seen from low-performing males is at its core an evolutionary response to threat. But the results themselves resonated with many of those writing about the study.

I prefer my gameplay without sexual harassment, but on any given evening that may not be in the cards when I hop online to play Call of Duty. I’ve learned to take these rare vocal assaults with a heavy sigh and the understanding that trolls are much like bullies: They harass others as a way to cope with their own insecurities. But I’ve always wondered why women always seemed to be singled out. Were their sexist remarks really of the same mentality as a bully? Researchers say, in a way, yes. – Natalie Shoemaker, Big Think


 

Possible layout of the quarks in a pentaquark particle. Image: Daniel Dominguez. Source: CERN press release.


 

Paper #3. Pentaquarks

Our next High Five paper, a report from CERN (the European Organization for Nuclear Research) published in Physical Review Letters, is a difficult read for the non-specialist. It details the sighting of an elusive subatomic particle called the pentaquark. Nature News reports, “an exotic particle made up of five quarks has been discovered a decade after experiments seemed to rule out its existence.”

The pentaquark is not just any new particle — it represents a way to aggregate quarks, namely the fundamental constituents of ordinary protons and neutrons, in a pattern that has never been observed before. Studying its properties may allow us to understand better how ordinary matter, the protons and neutrons from which we’re all made, is constituted. – LHCb spokesperson Guy Wilkinson, via Nature News

The report got the attention of several online news and specialty outlets, including Smithsonian magazine, “What Is a Pentaquark and Why Are Physicists so Excited About It?” Quartz, “Scientists at the Large Hadron Collider have discovered an exotic new state of matter,” and Ars Technica, “CERN experiment spots two different five-quark particles.”

Pentaquarks are an exotic form of matter first predicted back in 1979. Everything around us is made of atoms, which are made of a cloud of electrons orbiting a heavy nucleus made of protons and neutrons. But since the 1960s, we’ve also known that protons and neutrons are made up of even smaller particles named “quarks”, held together by something called the “strong force”, the strongest known force in nature in fact. – Gavin Hesketh, The Conversation

The discovery of a five-quark particle opens up many new questions, as detailed in many of the news reports of this particle sighting. What holds these quarks together, and how are the five quarks oriented and put together to form the pentaquark?

Scientists had a devilishly hard time finding these particles, even though they likely exist in high-energy settings, like dying stars collapsing to form black holes – or the moments just after the Big Bang. In 2013, tetraquarks were convincingly unmasked by two research teams. The pentaquark, however, has been a brutal tease. [Until now.] – Physics Buzz


 


Normal (left) versus cancerous (right) mammography image. Wiki.

 

Paper #4. Breast Cancer Screening: “The more we look, the more you find.”

Our next High Five paper sparked some debate online about the limited effectiveness and risks of breast cancer screening. The study, published in JAMA Internal Medicine, examined data on 16 million women in 547 counties reporting to the Surveillance, Epidemiology, and End Results cancer registries during the year 2000. The study authors, including researchers from Seattle, Harvard University in Cambridge, and The Dartmouth Institute for Health Policy and Clinical Practice in Hanover, conclude:

Across US counties, there was a positive correlation between the extent of screening and breast cancer incidence (weighted r = 0.54; P < .001) but not with breast cancer mortality (weighted r = 0.00; P = .98). […] The clearest result of mammography screening is the diagnosis of additional small cancers. Furthermore, there is no concomitant decline in the detection of larger cancers, which might explain the absence of any significant difference in the overall rate of death from the disease. Together, these findings suggest widespread overdiagnosis. – JAMA Internal Medicine study

The findings were picked up by several major news outlets and discussed in a variety of science and medicine blogs. Nancy Shute wrote for NPR’s Shots blog: “Here’s more evidence that mammograms don’t always deliver the results that women want. They find more small cancers, but don’t lower a woman’s risk of dying of breast cancer, a study finds.”

This study shows that the more we look, the more you find. – Joann Elmore, M.D., as quoted by Nancy Shute, NPR

 

The trouble with overdiagnosis is that while the cancers doctors find wouldn’t have harmed their patients, the treatment and stress that result from the diagnosis probably will. – Julia Belluz, Vox

But some writers and cancer screening advocates took issue with the paper’s conclusions.

What’s clear from other published data (curiously omitted in this paper), is that rates of death from breast cancer in women fell during this same decade, 2000 to 2010, across the United States. I find it hard to reconcile the authors’ findings of a lack of change in incidence-based breast cancer mortality, whether or not women were screened in the two years before 2000, with the clear pattern of reduced mortality from the disease. – Elaine Schattner, Forbes, Why Women Shouldn’t Cower To Concerns About Overdiagnosis Of Breast Cancer

David Gorski at Science-Based Medicine provides an in-depth breakdown of the study and of previous research on the effectiveness of breast cancer screening.

 

A native bumblebee, Bombus auricomus. Photo: S. Droege. USGS Native Bee Inventory and Monitoring Lab.


 

Paper #5. A world of change for bumblebees

Our last but not least top paper this month is a Science report titled “Climate change impacts on bumblebees converge across continents.” The internationally authored study tested for climate change-related shifts in bumblebee ranges. The authors found losses at these species’ southern range limits: southern populations have shifted to higher elevations but have failed to expand into more northern regions.

Responses to climate change have been observed across many species. There is a general trend for species to shift their ranges poleward or up in elevation. Not all species, however, can make such shifts, and these species might experience more rapid declines. Kerr et al. looked at data on bumblebees across North America and Europe over the past 110 years. Bumblebees have not shifted northward and are experiencing shrinking distributions in the southern ends of their range. Such failures to shift may be because of their origins in a cooler climate, and suggest an elevated susceptibility to rapid climate change. – Editor’s Summary

In other words, “It’s too hot for bumblebees in the south—and they’re not moving north,” according to a headline in Quartz. Numerous news outlets, science magazines and science blogs picked up the study to report on the narrowing range of bumblebees, with headlines such as “A Century of Data Reveal that Climate Change is Shrinking Bumblebee Ranges,” (IFLscience) and “A Smaller World for Bumblebees,” (The Scientist). Most media coverage of the study was straightforward in highlighting the findings as significant and robust.

Kerr compared the situation of bumblebees to a vise. He explained that as temperatures have warmed since 1975, many species of bumblebees are being forced into smaller and smaller habitats. On the southern ends of their habitat in Europe and North America, bumblebees are losing about 9 km per year (5.6 miles), and have already lost 300 km (about 186 miles). – Katherine Ellen Foley, qz.com


 

Earlier this month the Biodiversity Heritage Library announced that they had implemented Altmetric badges across their online articles and other reference content. We spoke to Outreach and Communication Manager Grace Costantino to hear about the work the BHL does, and how they hope to make use of Altmetric data.

About the Biodiversity Heritage Library

The Biodiversity Heritage Library (BHL) is a consortium of worldwide libraries, headquartered at the Smithsonian Libraries in Washington D.C. Launched in 2007, the BHL online resource receives over 80,000 unique visitors a month, and currently includes over 46.6 million pages of natural history literature from over 96,000 titles.

The library’s mission is to inspire discovery through free access to biodiversity knowledge; anyone, wherever they are in the world, should be able to make use of its content. Access to such information is a particular necessity for the biodiversity-related sciences, as historical data and species classifications underpin the work that scientists are doing today.

Why the interest in altmetrics?

In 2014 the BHL received an Institute of Museum and Library Services grant to transform BHL into a more social, semantic digital library. As part of this “Mining Biodiversity” project, they are further integrating with social media, enhancing search functionality, and improving their semantic metadata.

“We wanted to see where people were talking online about our content – and help our readers see those conversations too.”

Crucially, Grace and her colleagues are keen to find ways to make it easier for their audience to discover more info about their collections, and share thoughts and knowledge about those collections using social media. The first step they took to encourage this was to add better sharing buttons to BHL, but they also wanted to more easily capture the online conversations surrounding their content and let others see what people had been saying.

Implementation

Although some BHL content is assigned a DOI, this is not consistent across the whole collection. Grace and her team worked with Altmetric to instead track mentions of their content based on the URI (uniform resource identifier) of each piece of content.


BHL have now launched the Altmetric badges across their online platform, and are using the Altmetric Explorer internally to monitor and report on the online attention across their collection.

They also added to their wiki an overview of the Altmetric data and what it offers their readers, helping users of the platform understand what the data shows and how it can be interpreted.
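For the technically curious, here’s a minimal sketch of what this kind of tracking involves behind the scenes: a Python lookup against Altmetric’s public v1 API. The endpoint and JSON field names follow the public API documentation but should be treated as assumptions, and the example DOI is hypothetical; identifier-based lookups for content without DOIs, like BHL’s URI tracking, work along the same lines but require an API key.

```python
# Minimal sketch: fetch a small attention summary for one research output
# from Altmetric's public v1 API. The endpoint and field names are
# assumptions based on the public documentation; BHL-style URI lookups
# are analogous but need an API key, so a DOI lookup is shown here.
import json
from urllib.error import HTTPError
from urllib.request import urlopen

API = "https://api.altmetric.com/v1/doi/{doi}"

def attention_summary(doi):
    """Return headline attention numbers for a DOI, or None if untracked."""
    try:
        with urlopen(API.format(doi=doi)) as resp:
            data = json.load(resp)
    except HTTPError as err:
        if err.code == 404:  # Altmetric has seen no attention for this output
            return None
        raise
    return {
        "title": data.get("title"),
        "score": data.get("score"),                     # the Altmetric score
        "tweets": data.get("cited_by_tweeters_count"),  # unique tweeters
        "news": data.get("cited_by_msm_count"),         # mainstream media
    }

print(attention_summary("10.1234/example.doi"))  # hypothetical DOI
```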

Up and running

Grace reports that the BHL are finding the Altmetric data really valuable for discovering conversations that they didn’t know were happening – particularly as lots of people will share a link to the content they are talking about but don’t necessarily mention the BHL, making this activity difficult to track by keyword alone.

To announce the roll-out of the Altmetric badges Grace and her team put together a program of blog and social media content, and will also include announcements in their newsletters and quarterly reports. Already they are using the Altmetric Explorer to identify what types of books are really popular with audiences, and making additional efforts to offer similar content.

“Through the Altmetric data we identified that our marine books in particular are really popular. Information like this helps us better tailor our posts and the content we share to ensure maximum engagement.”

With altmetrics providing an up-to-date measure of the success of their promotional and engagement efforts, the BHL hopes in future to use the data to shape their outreach and engagement strategy and ensure the continued awareness and success of their valuable content.

Read more on the BHL blog.

Become an Altmetric Ambassador today!


We’re proud to announce the launch of the Altmetric Ambassadors program! We’ve debuted the Ambassadors program in response to requests to help researchers and librarians worldwide spread the word about altmetrics (in general) and Altmetric.com (in particular) at their institutions.

A team of volunteer Ambassadors will:

  • Stay up-to-date with the latest altmetrics news and research

  • Host brown bags to introduce their colleagues to altmetrics and Altmetric’s tools

  • Show their colleagues how to explore the online attention surrounding their work using the Altmetric bookmarklet

  • Add Altmetric badges to their personal or lab web pages

  • Share their stories to demonstrate how altmetrics can help all researchers

In return, we’ll send you exclusive Altmetric swag, share top-secret development plans for Altmetric products, buy coffee and donuts any time you host an Altmetric-related workshop, and help connect you with a fantastic, international group of like-minded researchers and librarians.

Interested? Learn more and sign up here today!

Today we’re excited to release our new collection of teaching resources for use in your own Altmetric for Institutions training sessions. We’re often asked to share our training slides, so we’ve created a CC-BY licensed collection that you can adapt for your own teaching.

We’ve designed the new teaching materials collection to include everything you need to run a successful Altmetric for Institutions session – from text for advertising your session online, to posters to promote it around campus, to re-usable slides with explanations and hands-on activities.


Image by Andy Bright on Flickr

So what’s included?

  • Introduction to Altmetric for Institutions training slides with notes – [PowerPoint]
  • Workshop activities – slides with hands-on activities – [PowerPoint]
  • Workshop activities handout – [A4 Word] [US Letter Word]
  • Text for advertising your session online – [Word]
  • Tips & Tricks: promoting your research online – flyer for researchers – [A4 PDF] [US Letter PDF]
  • Poster to advertise your session around campus – [A4 Word] [US Letter Word]

See our Teaching Resources page for full details. The collection is CC-BY licensed so feel free to remix, reuse and share your training session materials afterwards – we’d love to see what you create.

And take a look at our new tutorial videos…


Today we’re also launching our new tutorial videos, which walk through exploring data in Altmetric for Institutions and on the Altmetric Details Page. Take a look:

  • Altmetric for Institutions: Navigating – find out more about Altmetric for Institutions with a walkthrough of our demo edition. We show you how to browse your institutional outputs, view the summary report, access Altmetric Details Pages, view authors and departments, and expand your search across the entire Altmetric database.
  • Altmetric Details Page: We take you on a tour of the Altmetric Details Page to find attention data for a research output with a walkthrough of the Altmetric donut, sources tabs, help information and setting up email alerts for new mentions.

Feel free to embed them in your help guides and share them with researchers and support teams at your institution!

Here are our ideas for running a successful altmetrics training session at your institution:

  1. Plan ahead. Check your room setup: will attendees have access to PCs for hands-on activities, or should they bring laptops? Or will it be more of a lecture setup? Do you have time for group discussion at the end to talk through the key learning points and altmetrics data you’ve covered during the session? Our activities slides are useful for hands-on workshops, and the Introduction to Altmetric for Institutions slides offer a good introduction for both a workshop and a lecture-style presentation.
  2. Run a live demo of Altmetric for Institutions and get everyone to create an account – this will really help attendees get a feel for the database navigation, key functionality (exploring the data, filtering results, summary reports, author/departments, custom groups, exporting and saving etc.). See slides 21-30 of the Introduction to Altmetric for Institutions slide deck for key functionality to demonstrate.
  3. Emphasise the importance of looking beyond the score. It’s useful to highlight the value of considering the quality of the mentions rather than focussing only on the score – the Altmetric score is an indicator of the amount of attention a paper has received, not of its quality. Take a look at some conversations surrounding your institutional research papers and discuss them in your session. See slide 34 of the Introduction to Altmetric for Institutions slides for an example of a paper with a relatively low score but with a policy document mention that demonstrates how the journal article has contributed to NHS treatment guidelines.
  4. Cascade training and advocacy across library and training support teams and embed altmetrics in existing scholarly communications sessions (open access, bibliometric analysis, research data management, ORCID, etc.). This helps demonstrate how the Altmetric for Institutions service can be used alongside existing services to support research innovation.
  5. Discuss how altmetrics are part of a broader conversation. Share ideas for how researchers and support teams can use altmetrics data to broaden the view of research attention alongside existing analyses, e.g. traditional bibliometrics and funding awards etc.

Let us know if you create new materials you’d like to share or if there’s something you don’t see here but would find useful. Happy training!

We spoke to Tracey DePellegrin, Executive Editor of the Genetics Society of America journals, about their motivations and experience so far of integrating Altmetric badges across their journals.

As Executive Editor, Tracey is responsible for identifying new ways of author outreach, and was instrumental in driving this new program for their publications.

 

About the society

The Genetics Society of America has over 5,000 members from all areas of genetics and genomics. Alongside a busy conference and outreach program, they also publish two journals – Genetics, and the open access G3: Genes|Genomes|Genetics. The editorial teams of the journals are committed to the goals of the society: to further the field of genetics research and to encourage communication amongst geneticists worldwide. The content they publish is high quality, and the society aims to position itself as the voice of its members to policy makers.

 

Challenges and motivations

With a society founded in 1931, and Genetics first published in 1916, Tracey says that one of the challenges the journals sometimes face is being recognized as the innovative and forward-thinking publications they are. In fact, GSA are often ahead of the curve in embracing new technologies and policies: for example, they’ve had an advanced open data policy, strictly enforced, for more than 5 years, and, along with Caltech, pioneered the use of article links to model organism databases in 2009.

The society are also keen to encourage their authors and readers to see value in content beyond the Impact Factor – although they are conscious that some academics, particularly those in the Far East, are required to publish in journals considered to be ‘high impact’. Tracey and her colleagues are keen to help their authors demonstrate other types of attention and engagement.

“Part of our responsibility to the community includes discussing all kinds of ‘impact’ – and helping authors to extend the short- and long-term reach of their research.”

 

A step forward

GSA first implemented altmetrics on their titles in late 2013. They were keen to see what online attention their content was attracting – and, in line with the society’s aims, to encourage their authors to actively engage with the conversations going on around research published in their fields. The response from their contributors has been enormously positive.

Tracey enthuses: “We LOVE Altmetric – because our authors do! Because citations are such a lagging indicator, they like being able to see a glimpse of who’s talking about their work – even as soon as it’s published early online. And because the social media data is retroactive, we even have some nice surprises.”


Internally, altmetrics data are being used by the GSA team to monitor their outreach efforts and help shape ongoing activities.

They regularly check in to gauge the attention that their early online articles are getting, and track to see what effect pushing out specific articles via social media has had.

“We LOVE Altmetric – because our authors do!”

Board reports are also benefitting from the additional context that altmetrics provide – they are now incorporating highlights of the online mainstream and social media coverage, including that which is not so positive, to help their teams get a better understanding of how their research is being received.

 

In one case, Tracey adds, altmetrics enabled them to quickly identify where an editorial they had published was being met with a negative response, and follow up with their own blog post to further clarify their position and address some of the feedback they’d received.

Working with Altmetric, Tracey reports, was an ideal solution for GSA. They are fans of the colorful donut graphic and see the insight Altmetric data offer as adding value to their journal content (currently hosted by Highwire).

 

What next?

In future, Tracey comments, they’re keen to extend their use of the data and the Altmetric donut visualisations – perhaps including them in email campaigns or using them to identify popular articles to showcase on journal homepages.

“In science, it’s so important to have discussions centered around new research. Altmetric helps us to figure out where the discussions are taking place, and to encourage a wider group to participate.”

Alongside that they’ll be continuing to educate their authors about altmetrics and how they can make use of them, and are determined to move the concept of impact and attention beyond just the numbers for genetics research as a whole. By providing an increasing amount of feedback and insight, Tracey and her colleagues hope to help their authors better understand who is discussing their work, and how it is being received.

We’ve got the leaflets, we’ve got the slides, we’ve got the website; but we wanted something extra special to help us spread the word of Altmetric for Institutions to the wider world. Today we’re excited to announce the release (technically, the world premiere) of Altmetric for Institutions: the movie. Fewer donuts than the Simpsons but hopefully equally punchy graphics – take a look and let us know what you think!

 

Making the video

We worked with online marketing company Distilled to take our idea from concept to reality – and their creativity and design skills certainly helped us along the way!

From the initial script they worked with us to flesh out the concept – drawing out storyboards, mocking up the design and feel of the final video, and running through a number of voiceover options before we settled on the final approach.

Crucially, we wanted to communicate the benefits that Altmetric for Institutions offers. As researchers are increasingly asked (by management, funders, project leaders, alumni donors) to demonstrate the impacts of their work, this new platform can help them track, monitor and report on early signs of engagement and influence.

Users can create custom groups to monitor the online attention surrounding specific projects, explore the entire Altmetric database to see how much and what type of mentions research outputs from their peer institutions and fellow scholars are receiving, and set up email alerts and reports to be regularly updated on the activity relevant to their work.

All of this data can be particularly useful for enabling more effective online reputation management (both of an individual scholar and an institution as a whole), reporting on evidence of attention, influence and engagement to attract research funding and alumni donations, and for helping to determine future research and outreach strategy.

We’ve already heard some great examples of how institutions are adopting and rolling out the platform, and look forward to hearing how these activities progress as more and more researchers become familiar with the altmetrics data and start to apply it as part of their teaching and career development.

Don’t forget to stay tuned to our YouTube channel over the coming months – we’ve already started adding some past presentations, and are planning to keep on updating it with lots of useful training and user videos to help you get the most from our data and tools!

We’ve just put down our virtual copies of The Metric Tide, a report compiled by an expert panel of academics, scientometricians, and university administrators on the role of bibliometrics and altmetrics in research assessment (including the UK’s next REF). What an excellent read.

In the last 24 hours, many smart people have published articles dissecting various aspects of the report. To this thoughtful and productive discussion, we’d like to add our voice.

Below, we tease out what we believe to be the most important themes from the report, and supplement the current discussion with our thoughts on how altmetrics in particular can play a role in research assessment in the next REF.

 

Altmetrics as a complement, not replacement

We’ve long said that altmetrics should be a complement to, not a replacement for, citation-based metrics and expert peer review. So it was heartening to see that same sentiment echoed in The Metric Tide (“Metrics should support, not supplant, expert judgement”).

Altmetrics, like citation-based metrics, by and large cannot tell us much about research quality (though many people assume they’re intended to). They do have some distinct advantages over citation-based metrics, however.

Altmetrics are faster to accumulate than citations, so it’s possible to get a sense of an article’s potential reach and influence even if it was only recently published. (This is useful in an exercise like the REF, where you might be compiling impact evidence that includes articles published only in the previous year – or even month.) Altmetrics are also much more diverse than citations, in that they can measure research attention, help flag up routes to impact, and (occasionally, for some sources) indicate quality – among scholars, but also among members of the public, practitioners, policy makers, and other stakeholder audiences.

These differences make altmetrics a powerful complement to citations, which measure attention from a scholarly audience. And they’re a useful complement to expert peer review, in that they can quickly bring together data about, and reactions to, an article that the reviewer may not have noticed themselves. (You can find the full review criteria used by the REF panels here.)

Indicators rather than metrics

The Metric Tide didn’t spend much time on what we consider an important consideration when using altmetrics data for assessment: altmetrics data is typically an indicator of impact rather than a measure of it. It points to, and potentially provides evidence for, a route to impact.

That’s why we believe it’s very important for altmetrics providers to report on “who’s saying what about research” in a way that’s easy for others to discover and read. Computers can’t consistently and correctly parse sentiment from documents or online mentions, and they can’t (yet?) figure out which policies actually get put into practice, or who actually acts on research they’ve cited or mentioned. It’s necessary for humans to parse the data and potentially investigate further. The data alone doesn’t give us enough to, say, write a full impact case study automatically.

That’s the downside (for anybody expecting a quick metric win). The upside is that by taking on the burden of systematic data collection across a wide variety of data sources – all different kinds of indicators – we can make it much easier for a human to find and create promising impact stories. Hopefully, by doing the heavy lifting of looking at policy outputs, educational materials, books, and datasets as well as articles, we can help remove some potential biases too.

We’d encourage the community to read The Metric Tide with this in mind. We should see indicators and the related qualitative “mention” data as two inseparable sides of the same coin.

Our metrics should be as diverse as reality

The Metric Tide pointed to two main areas in which attention to diversity is very important: disciplinary differences in what’s considered “research impact”, and the ways that metrics, when not contextualized, can perpetuate systemic inequalities based on gender, career stage, language, and more.

We have little to add to the point that disciplinary differences in what constitutes “impact” are many. Indeed, we vigorously agree that creating discipline-specific “baskets of metrics” is a much better approach to using metrics for assessment than attempting to use citation-based metrics alone to understand research impact across all fields.

In all disciplines, these “baskets of metrics” should be carefully constructed to include the “alternative indicators that support equality and diversity” that the report hints at. We’d suggest that such “diversity” indicators for the impact of research should include metrics sourced from platforms that highlight regional (especially non-Western) impacts (like Sina Weibo, VK, and others), as well as data features like geolocation and language that can do the same.

All metrics should also be contextualized using percentiles, allowing one to more accurately compare not only articles published in the same year and discipline, but also the research of female academics (which studies show tends to be cited less often than comparable work by male academics), early career researchers, and so on.
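To make the percentile idea concrete, here’s a minimal sketch (with made-up numbers) of ranking one article’s score against a cohort of articles from the same publication year and discipline:

```python
# Minimal sketch of percentile contextualization: rank one article's
# metric against a same-year, same-discipline cohort. The cohort values
# below are made up for illustration.
from bisect import bisect_right

def percentile_rank(value, cohort):
    """Percentage of cohort values less than or equal to `value`."""
    ranked = sorted(cohort)
    return 100.0 * bisect_right(ranked, value) / len(ranked)

cohort_2014_genetics = [0, 0, 1, 2, 2, 3, 5, 8, 13, 55, 210]  # hypothetical scores
print(f"Score 13 is at percentile {percentile_rank(13, cohort_2014_genetics):.0f}")  # -> 82
```

The same ranking can then be repeated within whatever cohort is appropriate – career stage, language, field – rather than comparing raw numbers across them.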

Transparency is necessary

We vigorously agree with the report’s statement that “there is a need for greater transparency in the construction and use of indicators”. Transparency is necessary in terms of both how data is collected (what counts as a “mention”? where is mention data collected from?) and how it is displayed.

To that point, we think The Metric Tide could go one step further in its recommendations for what constitutes “responsible metrics”. In addition to metrics being robust, humble, transparent, diverse, and reflective of changes in technology and society, we believe metrics should also be auditable. If somebody says an article has 50 citations, you should be able to see those 50 citations.

Such transparency is paramount if we are to conscionably use metrics for assessment, a decision that will have huge effects on individuals and universities alike.

More tangible change is required before we can fully rely on metrics

The final (and somewhat sobering) theme of The Metric Tide came in the form of many reminders of just how far scholarly communication still has to go before we can actually implement metrics for assessment in a useful and fair way.

In particular, the lack of usage of DOIs and other persistent identifiers when discussing research online is a major stumbling block to the accurate tracking of metrics for all research. Moreover, a scholarly communication infrastructure that, for the most part, does not interoperate makes compiling information for assessment exercises like the REF a very difficult and costly endeavor.

(Interestingly, this itself speaks to diversity and disciplinary differences: we see many smaller journals and publishing platforms in the BRIC countries and the developing world eschew DOIs because of the cost. DOIs are more prevalent in STM than in the arts & humanities. Some outputs suit DOIs because they are born and live digitally – others, like concerts or sculptures, don’t. But the perfect shouldn’t be the enemy of the good.)

Luckily, there’s a fairly clear path forward: The Metric Tide calls for governments and funders to invest more in the research information infrastructure, especially to increase interoperability with systems like ORCID. We’d add to that a call for the private sector (including other metrics aggregators) to kill information silos in favor of building or allowing technologies that “play well” with others, even if they’re commercial. An API for Google Scholar would be great – but it requires publishers to allow it.

The report also proposes the establishment of the Forum for Responsible Metrics, which we applaud and intend to support in any way we can moving forward (as we’ve done in the past with other common sense, metrics-related initiatives like DORA and the Leiden Manifesto).

We look forward to seeing how the recommendations made in The Metric Tide play out in the coming years, both here in the UK and in academia worldwide, and to working together towards a future where metrics are used intelligently as part of a much wider scholarly agenda.

In May, I shared some interesting results from a small-scale exercise I ran, comparing what was submitted with REF impact case studies with indicators of “real world” impact sourced from Altmetric data. My findings suggested that Altmetric can help find specific mentions of research in public policy documents that would otherwise go unreported in REF impact case studies. Altmetric may also help find overall themes to the research that has the most “real world” impact (i.e. epidemiology, climate change, etc) in a way that citations can’t.

The data also raised some questions about differences between articles with high citation counts and those with a lot of Altmetric attention:

  1. Are there differences between what’s got the highest scholarly attention (citations), the highest “real world” attention (as measured by Altmetric), and what’s been submitted with REF impact case studies?
  2. What are the common characteristics of the most popular (as measured by Altmetric) outputs submitted with REF impact case studies vs. the overall most popular research published at each university?
  3. What are the characteristics of the attention that’s been received by REF-submitted impact case study outputs and high-attention Altmetric articles?

In today’s post, I’ll dig into these questions, with an eye towards shedding more light on themes that might make the REF preparation work of researchers, impact officers, and other UK university administrators easier. The article data used in this analysis can be found on figshare.

Are there differences between what’s got the highest scholarly attention (citations), the highest “real world” attention via Altmetric, and what was submitted with REF impact case studies?

Short answer: yes.

There seems to be little correlation between the number of citations and the overall Altmetric score received by the publications I looked at. That’s in line with other studies that have examined correlations between citations and altmetrics.
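For anyone who wants to repeat this kind of check against the figshare dataset, here’s a minimal sketch. The CSV filename and column names are hypothetical placeholders; Spearman rank correlation is used because both citation counts and Altmetric scores are heavily skewed:

```python
# Minimal sketch of a citations-vs-Altmetric-score correlation check.
# "ref_articles.csv" and its column names are hypothetical placeholders
# for the figshare dataset; adjust them to match the real export.
import csv
from scipy.stats import spearmanr

with open("ref_articles.csv", newline="") as f:
    rows = list(csv.DictReader(f))

citations = [int(row["citations"]) for row in rows]
altmetric = [float(row["altmetric_score"]) for row in rows]

rho, p = spearmanr(citations, altmetric)  # rank-based, robust to skewed counts
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```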

And articles with many citations and high Altmetric scores aren’t any likelier to be submitted with REF impact case studies, as found in the previous post.

In other words, when looking at simple attention data (citations and Altmetric scores), there doesn’t seem to be overlap with what’s been chosen for submission with REF impact case studies. But when you drill down into the characteristics of the Altmetric attention data – and the characteristics of the articles themselves – themes do emerge.

What are the common characteristics of the most popular (as measured by Altmetric) outputs submitted with REF impact case studies vs. the overall most popular research published at each university?

There are some common themes found in the types of articles that appear in each university’s “top ten” groups of publications.

Journal of publication: Highly cited publications by London School of Hygiene and Tropical Medicine (LSHTM) authors appeared in only two journals: The Lancet and the New England Journal of Medicine. However, this homogeneity of journal title didn’t hold for highly cited publications by University of Exeter (UE) authors, which were published in a greater variety of journals.

Authorship numbers: Highly-cited LSHTM publications were also more likely to have many authors than the university’s REF-submitted or high-attention publications, or than any “top ten” publications published by UE authors. For highly-cited LSHTM publications, the minimum number of authors was eleven.

Publication date: Overall, highly-cited articles were more likely to be published in 2012 than 2013, which makes sense, given citation delays caused by the publication process. Altmetrics, sourced primarily from text mining and APIs, tend to accumulate much more quickly than citations.

Open Access: LSHTM articles were more likely to be Open Access if highly cited or submitted with a REF impact case study than if they were simply high attention. UE articles were only slightly more likely to be Open Access if highly cited, and were equally likely to be OA whether submitted with a REF impact case study or of high attention overall.

Type of publication: Editorials published by LSHTM authors were more likely to be of high attention, and research articles were more likely to be highly cited or submitted with REF impact case studies. UE publications overall were more likely to be research articles, whether highly cited, of high attention, or submitted with REF impact case studies. No other types of research outputs were among the highest attention, submitted-to-REF, or highly cited lists.

Subject: No matter the type of attention received, publications appearing in any “top ten” group for either university were more likely to focus on public health, epidemiology, or climate change.

Of all the themes uncovered, publications’ subjects seem to be the most useful benchmark for researchers to use to select and compile impact case studies. But to know for sure, we’ll have to ask researchers and impact officers themselves how they settled upon particular case studies, and whether they even considered using citation data to inform those themes.

What are the characteristics of the attention that has been received both by outputs submitted with REF impact case studies and by high-attention Altmetric articles from each institution?

Overall, REF-submitted outputs were less likely to have news coverage or appear on Reddit, and – surprisingly – slightly less likely to appear in policy documents (though this may be due to the selection of policy documents that we currently track) than articles that simply had a lot of attention (as measured by the Altmetric score). In terms of demographics (as sourced from Twitter and Mendeley data), both universities had the most impact among audiences in Great Britain and the United States.

Wikipedia: Articles from both groups were equally likely to have a Wikipedia mention (1 in 10 articles in each group was cited on the platform).

Twitter: Articles from both groups were equally likely to have attention on Twitter, although outputs from the REF had on average ten times fewer tweets than the highest-attention articles did. Interestingly, University of Exeter’s REF outputs were also slightly more likely than its highest-attention outputs to be shared among scientists rather than members of the public, whereas LSHTM’s REF outputs had a more diverse set of “top tweeters” overall, including members of the public, scientists, practitioners, and science communicators.

Qualitative data: At Altmetric, we consider the auditable qualitative data we provide to be just as important as (if not more important than) the numbers we report. So, I took a look at who was saying what (and where) for each publication. Two points and one example stood out to me:

  • Often, Facebook posts are made by groups promoting research that’s relevant to their audiences. For example, an autism awareness organization might share an article on recent developments in neurological research. Alternatively – in a worst-case scenario for many scientists – scholarship is sometimes also shared by groups, like climate change denialists, that misunderstand the science described. However, having access to what’s being said in both types of groups provides a valuable opportunity for the authors to engage with the members of the public who are reading their work and, in some cases, misinterpreting it.
  • For the LSHTM “highest-attention” article, “The Future of the NHS–Irreversible privatization?”, it was fascinating to read through the commentaries that accompanied the public Facebook and Google+ shares it received. Like the article itself, many were critical of the idea of privatizing the United Kingdom’s National Health Service. It was shared by many biomedical journals, professional groups, and patients’ rights organizations.
  • One final example of “public discussion” set me to thinking: researchers have been known to scoff at the idea that it’s valuable to have laypersons discussing their work. For example, it may seem trivial to some that this group on training for cyclists has posted some recommendations on cycling performance and nitrite consumption that cite research on the topic. Yet, is that not also a small form of public impact? If research is advancing knowledge among the public, by and large that’s a good thing.

So what does this mean in practice?

The themes in publications uncovered above may have little bearing on how researchers choose articles for submission with REF impact case studies based on variables like journal title, number of authors, and so on. (Which is a very good thing, as such variables have little bearing on the quality or impact of an article itself.)

However, as I found in my previous post on this topic, it’s possible that Altmetric attention data may be useful in choosing which subjects to base selections for impact case studies upon. But Dr. Rosa Scobles – Acting Director of Planning at Brunel University, who helped coordinate her university’s REF2014 submission – is not so sure.

“Altmetrics might be useful in helping to prepare REF impact case studies if online attention is a step on the road to impact,” Scobles recently explained to me via telephone. “That might be true especially if the steps you’ve taken towards engagement have impact. For example, if you’re doing a public health campaign and part of that campaign is to raise awareness using social media, you could say that measurable engagement (retweets, followers, Twitter impressions) points to true impact (behavior change). Or if your research informs public policy, and you can read that policy online, that might be evidence of impact. But most types of ‘impact’ are very difficult to measure in general.”

Another academic who helped prepare for REF2014 – Dr. Amy Gibbons, a Faculty Impact Officer at Lancaster University – recently emailed to offer her thoughts on the process, specifically the question of why universities that have a lot of citations in policy documents might not include that information in their REF impact case studies:

“A third possibility for why only a small percentage of the policy impact articles were mentioned in case studies, is that the researcher felt that the study/research was not longitudinal or in-depth enough yet for a case study but instead individual instances may have been submitted to their department/faculty for inclusion in the impact template (detailing overall types of impact activity to accompany the case studies for submission to a panel – worth 20% of the impact submission in REF 2014).”

She makes an excellent point: the value of research changes over time, with what we consider “REF-worthy” impact possibly only being measurable in the longer term.

Overall, what’s most relevant for the REF is the qualitative Altmetric attention data that’s now available. In terms of public engagement, qualitative Altmetric data can be used to help scientists connect with their audiences, and that’s an important “real world” impact by my estimation. And, once articles are chosen for submission with REF impact case studies, researchers and impact officers can now document who is saying what about their university’s research, and whether that discussion is happening in the news or in a policy document.

Looking towards the future

Reading through REF impact case studies has made one thing clear: universities have varied motivations for selecting what to submit to the REF, and those motivations aren’t necessarily reflected in the “one size fits all” approach that Altmetric and other altmetrics services take when building reports. I wonder if market demands will drive altmetrics services like ours and others towards being more flexible and modular. Theoretically, universities could pick and choose the modules that would help them uncover impacts like “technology commercialization” or “policy impact” (if those were important to them), or any other number of impact flavors that help them better communicate their value to the world.

Tomorrow, the much-anticipated results of the HEFCE metrics review (which looked at whether altmetrics might be useful for evaluation purposes in the next REF) will be announced. I’m eager to see the recommendations of the scholars who led the review, and whether they agree that altmetrics can be indicators (rather than evidence) of potential impact.

Did you help prepare for REF2014 and want to share your thoughts on the role of altmetrics in assessment? If so, leave them in the comments below!

This is a guest post contributed by Heather Coates, Digital Scholarship & Data Management Librarian at IUPUI. It is intended as a follow-up to Heather’s previous post, Advice from a librarian: how to do successful altmetrics outreach. In her previous post, Heather discussed some key points to bear in mind when conducting altmetrics workshops and other educational activities within an institution. In this second part, she focuses on the core themes that can help a researcher think strategically about how they disseminate and share their work and expertise.

The research cycle doesn’t end with publication

Dissemination is part of the research process. There are far too many articles published each year (http://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf) to take a passive approach to dissemination. A 2012 report estimates that 28,100 journals publish ~1.8 million articles per year. Not every article needs to reach broad public awareness, but you can make sure that yours reaches the community with whom you are trying to engage. For some, this may be as simple as depositing copies of articles or policy reports in a repository, then sharing them back with the community being studied. In the case of disciplinary work, there may be multiple communities of researchers or practitioners who are stakeholders. In this case, the strategy may differ for each community. Engaging in a discussion with public health practitioners may require involvement with state and local agencies, while engaging with citizen scientists requires a less formal, more grassroots approach. The possibilities are endless, so it’s important to prioritize a few that are effective and important to you personally.

Finally, be sure to include all your scholarly products – data, syllabi, code – not just articles and books, in your dissemination plan. If you need some help thinking about how to share your data, check out the Open Data Handbook. For example, I tend to have different strategies for talking about my work depending on the format. When I present at conferences, engagement tends to happen mostly on Twitter and in person. I use Storify to gather the tweets into a narrative that I can potentially use in my dossier. If there are associated blog posts, I include those as well. With more formal publications, I share with colleagues on Twitter, but may also write a short piece on my blog, then deposit copies in our institutional repository. When there are associated data, those go into our data repository, of course! The other major products I create are instructional materials on topics like data management and bibliometrics. These go onto Slideshare rather than our repository due to the constantly changing face of tech platforms. I often write up significant instructional events on my blog and tweet about them.

Are you sensing a pattern? I try not to get overwhelmed with the possibilities; instead, I stick to a few platforms with which I am comfortable and use them as tools to engage with people who might be interested in my work.

Your scholarship is more than just publications

The scholarly ecosystem only recently began offering rewards for creating and sharing scholarly products like data, code, and instructional materials. For example:

  • In 2014, the NIH changed their biosketch guidelines to recognize products, rather than just publications, when considering grant applications.
  • Although the practice of citing data is still relatively new, funding agencies like the NIH and NSF and publishers are working to make it widespread, thereby promoting the formal dissemination and sharing of data.
  • Platforms like GitHub are enabling researchers to get credit for reuse of their code, and new journals, like Syllabus, provide peer review of instructional materials.

You are the product

Your publications are not the product that institutions want – you are. Demonstrating the impact of your work is mostly useful in communicating that you are a high-quality, productive scholar contributing to the missions of your institution, school, and department. Tell the reviewers how you and your work align with these priorities. Use evidence to demonstrate the impact of your work – citation metrics, altmetrics, testimonials, letters of support, etc.

With this in mind, remember that metrics cannot fully describe the value and impact of your work. That is up to you to articulate. What these metrics can do is support your argument about why your institution should keep you around for the next 15-20 years. Metrics, like all statistics, can be misrepresented, so be sure you understand where a metric comes from and what it means. If a metric doesn't support your argument, simply don't include it.

Have a plan – strategically plan how you will disseminate your work

Take some time out each year to think about the following:

  • Who do you want to engage with? The public? Policymakers? Potential collaborators?
  • What platforms or media channels do these groups use to communicate?
  • What evaluation guidelines or criteria do your department, school, and institution provide?

Having a plan is particularly important for making sure that your work is accessible to your key audiences – and that the sharing actually gets done. It's far too easy to forget about papers once they have been published, but a concrete plan for sharing your scholarly products makes follow-through far more likely. Basic strategies such as creating a Twitter hashtag for your session or poster are much more effective if they are incorporated into the materials themselves, so that attendees know how to engage with you and share your work. A hashtag that ties into a broader theme or conversation at the conference also connects your work to that wider discussion.

Finally, depositing presentation materials or slides in your repository before a conference, so that attendees have instant electronic access, can be an effective way to share content and spark deeper discussion on site. In my experience, materials deposited before a conference attract more views in the first few months than those deposited afterwards.

Execute the plan – treat the work of disseminating, sharing, and tracking evaluation of your scholarly products as a project

Treat the dissemination of your work and the building of your scholarly reputation like the project it is. Manage it proactively – make the tasks a priority, set clear goals, track your progress, and re-evaluate when something isn't working. This is less about self-promotion and more about sharing your work in a way that engages your communities and facilitates discussion or deeper understanding of the topic. Some tips to share with faculty include:

  • Choose how, with whom, and when to engage – Be able to describe these communities clearly enough to explain to those reviewing your dossier.
  • Choose a few tools, be selective, then use them consistently – You don’t have to use all the tools. Pick just a few that fit into your workflows and work for the communities you have targeted.
  • Evaluate progress & adjust if necessary – Periodically create summary tables or visualizations of your evidence to confirm whether the data are supporting your story (see the sketch below). If not, ask why.
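
By way of illustration, here is a minimal Python sketch of that kind of periodic check-in. Everything in it is hypothetical – the products, the counts and the column names are invented for the example – and it simply assumes you record your evidence by hand as it accumulates:

    # Illustrative only - hypothetical products and counts, recorded by hand.
    import pandas as pd

    records = [
        {"product": "2014 article", "type": "article", "citations": 12, "tweets": 40, "downloads": 310},
        {"product": "RDM workshop slides", "type": "teaching", "citations": 0, "tweets": 22, "downloads": 750},
        {"product": "survey dataset", "type": "data", "citations": 3, "tweets": 9, "downloads": 120},
    ]
    df = pd.DataFrame(records)

    # Summarize by product type to see which parts of your portfolio
    # are reaching an audience and which may need a different strategy.
    print(df.groupby("type")[["citations", "tweets", "downloads"]].sum())

The point is not the tooling but the habit: a small, repeatable summary makes it obvious when a platform or strategy isn't earning its keep.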

Review and thoughtfully select the evidence supporting your argument – demonstrate that you are a productive scholar/teacher/practitioner worth keeping around

  • Know the criteria by which you will be evaluated. Choose your metrics accordingly and explicitly connect them to those criteria.
  • Align your work with institutional priorities.
  • Align your work with the priorities of your research community or field.
  • Identify key themes for your narrative and explicitly connect them to the evaluation criteria.

tl;dr: Be proactive in engaging with your targeted communities. Treat the dissemination of your scholarly products as part of the research process and build it into your plan for getting tenure. Use a range of metrics to support the story of your scholarship.

For more information on altmetrics or ideas for gathering data on scholarly products, check out the following:

[Screenshot: suggested resources for altmetrics and for gathering data on scholarly products]

LIBER

On 24th June, I attended the 44th annual LIBER conference with our head of marketing Cat Chimes and our Training and Implementations Manager Natalia Madjarevic. The theme of this year's conference was Open Access. In the impressive halls of Senate House Library at the University of London, we rubbed shoulders with librarians and other institutional representatives and discussed that most complex of questions: how can we combine technology, legislation and policy in a way that successfully and ethically facilitates the global sharing of knowledge?

The morning session I attended was organised and presented by SPARC Europe, an Open Access advocacy group that liaises with European universities and government bodies to further OA initiatives. First, Alma Swan gave an update on SPARC's efforts to convince European governments to amend copyright laws so that researchers can perform text and data mining queries across large sets of articles if their institution subscribes to the journal (UK copyright law incorporated this change as of last year).

Alma also introduced ROARMAP (the Registry of Open Access Repository Mandates and Policies), an online database that houses over 700 repository policy documents from a range of institutions and funders. SPARC looked at six policy conditions across the universities (such as whether an institution has made it mandatory for researchers to deposit their publications in the repository) and performed a regression analysis to ascertain the extent to which these policies affect researcher behaviour. They found a positive association between mandatory (rather than merely recommended) deposit policies and deposit rates. In conclusion, Alma suggested that researchers should make a habit of depositing their publications in Open Access spaces, not simply to comply with university regulations, but to make their research more visible to tenure and funding committees, thereby potentially enhancing their career prospects. This approach has interesting implications from an altmetrics perspective: the more people store and share research in easily accessible online spaces, the more activity and attention data can be collated around those outputs.
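
For readers curious what such an analysis looks like in practice, here is a minimal sketch – emphatically not SPARC's actual data or code – of regressing deposit rate on binary policy conditions, with the institutions, conditions and numbers all invented for illustration:

    # Illustrative only - invented institutions and rates, not SPARC's analysis.
    import numpy as np
    import statsmodels.api as sm

    # One row per institution; 0/1 flags for two hypothetical policy conditions:
    # [deposit is mandatory, deposit is linked to internal evaluation]
    policies = np.array([
        [1, 1],
        [1, 0],
        [0, 1],
        [0, 0],
        [1, 1],
        [0, 0],
    ])
    deposit_rate = np.array([0.72, 0.55, 0.30, 0.12, 0.68, 0.18])

    # Ordinary least squares with an intercept; the coefficient on the first
    # column estimates how strongly a mandatory policy is associated with
    # higher deposit rates.
    model = sm.OLS(deposit_rate, sm.add_constant(policies)).fit()
    print(model.summary())

With only a handful of invented institutions the coefficients here are meaningless; the real analysis drew on the hundreds of policies registered in ROARMAP.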

The session also included an update from David Ball on FOSTER, an Open Science training initiative, and from Joseph McArthur, co-founder of the Open Access Button. Overall, the workshop provided a comprehensive overview of the Open Access issues currently facing institutions.

After lunch, keynote speaker Sir Mark Walport (UK Government Chief Scientific Advisor and Head of the Government Office for Science) gave a more general introduction to current issues in research production and dissemination, and to how these issues affect the librarian as a "visionary in the communication of knowledge". He argued that technological developments have allowed us to make huge steps in knowledge dissemination, but that there is pressure to maintain best practices and really think about how to communicate research to different audiences as we continue to move from the printed page to the screen. He also advocated a transition away from the single research paper as an isolated and closed declaration of discovery, arguing that research outputs should be continually updated to include more recent data that corroborates (or undermines) the original findings. One of the slides that really summed up his entire presentation showed the new library at Florida Polytechnic University, which contains no paper books, only digitised records.

Florida Polytechnic University – Image credit: Rick Schwartz at Flickr

In one of the last sessions of the day, our very own Natalia Madjarevic gave a presentation on how Altmetric data can help libraries improve their research services. Scott Taylor (research services librarian at the University of Manchester) talked about how his institution had used the data to identify "impact stories" around their research, helping impact officers uncover previously unknown information about how the scholarly outputs of their faculty had been shared, discussed and put to use beyond academia. Following this, Bertil F Dorch presented the findings of a project on whether sharing astrophysics datasets online can increase citation impact. It was really interesting to get an altmetrics expert, a librarian and a researcher in the same room to talk about putting research online, and how that practice relates to different models of research evaluation.

Overall, day two of the annual LIBER conference provided many interesting insights. Although Altmetric only attended day two of a five-day conference, we still got a real sense of what librarians, policy makers and OA advocates are thinking and talking about in 2015. One thing that struck me is that although Open Access is now an established way of offering data and research, the OA movement still presents challenges and opportunities in equal measure. Over the next few years, it will be interesting to see the outcomes of efforts from OA advocates such as SPARC, and to monitor changes in academic publishing and researcher practices in light of Mark Walport's comments.

Thanks for reading, and feel free to leave feedback as always!