Thoughts on LIBER 2015

On 24th June, I attended the 44th annual LIBER conference with our Head of Marketing Cat Chimes and our Training and Implementations Manager Natalia Madjarevic. The theme of this year’s conference was Open Access. In the impressive halls of Senate House Library at the University of London, we rubbed shoulders with librarians and other institutional representatives and discussed that most complex of questions: how can we combine technology, legislation and policy in a way that successfully and ethically facilitates the global sharing of knowledge?

The morning session I attended was organised and presented by SPARC Europe, an Open Access advocacy group that liaises with European universities and government bodies to further OA initiatives. First, Alma Swan gave an update on SPARC’s efforts to persuade European legislators to amend copyright law so that researchers can perform text and data mining across large sets of articles to which their institution already subscribes (UK copyright law incorporated this change last year).

Alma also introduced ROARMAP (the Registry of Open Access Repository Mandates and Policies), an online database that houses over 700 repository policy documents from a range of institutions and funders. SPARC looked at six policy conditions across the universities (such as whether an institution has made it mandatory for researchers to deposit their publications in the repository) and performed a regression analysis to ascertain the extent to which these policies affected researcher behaviour. They found a positive correlation between deposit rates and policies that insist on (rather than simply recommend) depositing. In conclusion, Alma suggested that researchers should make a habit of depositing their publications in Open Access spaces, not simply to comply with university regulations, but to make their research more visible to tenure and funding committees, thereby potentially enhancing their career prospects. This approach has interesting implications from an altmetrics perspective: the more people store and share research in easily accessible online spaces, the more activity and attention data can be collated around those outputs.
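To make the shape of that analysis a little more concrete, here is a minimal sketch of the kind of regression described above, using entirely invented numbers. The variable names and data are illustrative assumptions only – this is not SPARC’s dataset, code or model.

```python
# Illustrative only: a toy regression of deposit rate on whether deposit is
# mandatory, mimicking the kind of analysis described above. The numbers are
# invented for demonstration and are NOT drawn from ROARMAP.
import numpy as np
from scipy import stats

# 1 = institution mandates deposit, 0 = deposit merely recommended (hypothetical)
mandatory = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 0])
# Percentage of eligible outputs actually deposited (hypothetical)
deposit_rate = np.array([72, 65, 31, 28, 80, 35, 69, 22, 74, 40])

result = stats.linregress(mandatory, deposit_rate)
print(f"estimated effect of a mandate: +{result.slope:.1f} percentage points "
      f"(r={result.rvalue:.2f}, p={result.pvalue:.3f})")
```

A positive, significant slope here would correspond to the pattern SPARC reported: institutions that insist on deposit see higher deposit rates. The real analysis covered six policy conditions across hundreds of policies, so treat this purely as a schematic.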

The session also included an update from David Ball on FOSTER, an Open Science training initiative, and from Joseph McArthur, co-founder of the Open Access Button. Overall, the workshop provided a comprehensive overview of the Open Access issues currently facing institutions.

After lunch, keynote speaker Sir Mark Walport (UK Government Chief Scientific Adviser and Head of the Government Office for Science) gave a more general introduction to current issues in research production and dissemination, and to how these issues affect the librarian as a “visionary in the communication of knowledge”. He argued that technological developments have allowed us to make huge strides in knowledge dissemination, but that there is pressure to maintain best practice and really think about how to communicate research to different audiences as we continue to move from the printed page to the screen. He also advocated a move away from the single research paper as an isolated and closed declaration of discovery, arguing that research outputs should be continually updated to include more recent data that corroborates (or undermines) the original findings. One slide that really summed up his entire presentation showed the new library at Florida Polytechnic University, which doesn’t contain any paper books; only digitised records.

Florida Polytechnic University – Image credit: Rick Schwartz at Flickr

In one of the last sessions of the day, our very own Natalia Madjarevic gave a presentation on how Altmetric data can help libraries improve their research services. Scott Taylor (Research Services Librarian at the University of Manchester) talked about how his institution had used the Altmetric data to identify “impact stories” around their research – helping the impact officers uncover previously unknown information about how the scholarly outputs of their faculty had been shared, discussed and put to use beyond academia. Following this, Bertil F. Dorch presented the findings of a project on whether sharing astrophysics datasets online can increase citation impact. It was really interesting to get an altmetrics expert, a librarian and a researcher in the same room to talk about putting research online, and how that practice relates to different models of research evaluation.

Overall, day two of the annual LIBER conference provided many interesting insights. Although Altmetric only attended day two of a five-day conference, we still really got a sense of what librarians, policy makers and OA advocates are thinking and talking about in 2015. One of the things that struck me was that although Open Access is now an established way of offering data and research, the OA movement still presents challenges and opportunities in equal measure. Over the next few years, it will be interesting to see the outcomes of efforts from OA advocates such as SPARC, and to monitor changes in academic publishing and researcher practices in light of Mark Walport’s comments.

Thanks for reading, and feel free to leave feedback as always!

Welcome to Altmetric’s “High Five” for June, a discussion of the five scientific papers that received the highest Altmetric scores this month. Each month, my High Five posts examine a selection of the most popular research outputs that Altmetric has tracked attention for.

The theme this month is BIG news.

Study #1. Entering the sixth mass extinction

 

Close-up of the Endangered California Desert Tortoise, Gopherus agassizii. Photo (C) Paige Brown Jarreau.

 

Our top paper this month is “Accelerated modern human–induced species losses: Entering the sixth mass extinction,” published in Science Advances. In this study, Gerardo Ceballos and colleagues from six different universities assess whether human activities are causing a modern-day mass extinction.

According to the study authors, even under conservative assumptions about past vertebrate species extinction rates, “the average rate of vertebrate species loss over the last century is up to 100 times higher than the background rate.”

These estimates reveal an exceptionally rapid loss of biodiversity over the last few centuries, indicating that a sixth mass extinction is already under way. Averting a dramatic decay of biodiversity and the subsequent loss of ecosystem services is still possible through intensified conservation efforts, but that window of opportunity is rapidly closing. – G. Ceballos et al. 2015

Shared largely by scientists and members of the public on social media, this study was covered by news outlets including The Daily Beast (We’re not the Dinos: We’re the Asteroid), Spektrum.de in Germany, Popular Science (We’re entering a sixth mass extinction, and it’s our fault) and National Geographic (Will Humans Survive the Sixth Great Extinction?).

Ecologists have long warned that we are entering a mass extinction. Science journalist Elizabeth Kolbert just won the Pulitzer Prize in nonfiction for her book titled “The Sixth Extinction”—yet this particular study, led by Gerardo Ceballos of the National Autonomous University of Mexico, is so profound because its findings are based off the most conservative extinction rates available. Many other studies in the past were criticized for overestimating the severity of the crisis. Even when using these conservative estimates, however, Ceballos and his team found that the average rate of vertebrate species loss over the last century is over 100 times greater than the normal rate of extinction, also known as the background rate. – Grennan Milliken, Popular Science

Cumulative vertebrate species recorded as extinct or extinct in the wild by the IUCN (2012). Dashed black line represents background rate. Credit: Ceballos et al.


Most news outlets focused on the rather gloomy message that extinction rates have skyrocketed and that humans are the prime suspect for the losses. This is interesting, considering that sad or gloomy messages don’t tend to spread in social media environments as much or as quickly as feel-good or exciting ones. However, anger or indignation at the news might have prompted readers to share. “Yes, humans are probably to blame for the Earth’s sixth mass extinction event, which is wiping out species at a rate 53 times greater than normal,” Matthew Francis wrote for The Daily Beast. The graph above shows the cumulative vertebrate species recorded as extinct or extinct in the wild by the IUCN (2012), compared with the conservative background rate used by Ceballos and colleagues.

To be fair, scientists have suspected humans are the reason for the Sixth Extinction for some time. It’s even the subject of several books. However, it’s difficult to assign numbers and rates of extinction over human history: It’s easiest to see extinctions long after they happened, rather than in process. The key is quantifying how many extinctions have happened on our watch versus the normal rate of species death. – The Sixth Mass Extinction: We Aren’t The Dinosaurs, We’re The Asteroid

But Nadia Drake over at National Geographic had a slightly different message than most writers covering this study. Drake interviewed journalist Elizabeth Kolbert, author of the Pulitzer Prize-winning book The Sixth Extinction, about “what these new results might reveal for the future of life on this planet,” including human life. “Are humans destined to become casualties of our own environmental recklessness?”

There are two questions that arise: One is, OK, just because we’ve survived the loss of X number of species, can we keep going down the same trajectory, or do we eventually imperil the systems that keep people alive? That’s a very big and incredibly serious question. And then there’s another question. Even if we can survive, is that the world you want to live in? Is that the world you want all future generations of humans to live in? That’s a different question. But they’re both extremely serious. I would say they really couldn’t be more serious. – Elizabeth Kolbert, as interviewed by Nadia Drake for NatGeo

How do the results of this study make YOU feel?

 

Study #2. Changing Textbooks – Newly Discovered Link Between Brain and Immune System

 

Image: Maps of the lymphatic system: old (left) and updated to reflect UVA’s discovery. Image credit: University of Virginia Health System

 

Our next top paper is “Structural and functional features of central nervous system lymphatic vessels,” a research letter published in Nature this month. This study describes the discovery of a central nervous system lymphatic system in mice – in other words, a link between the brain and the immune system. As Time magazine headlined, “Game-Changing Discovery Links the Brain and the Immune System.”

The discovery of the central nervous system lymphatic system may call for a reassessment of basic assumptions in neuroimmunology and sheds new light on the aetiology of neuroinflammatory and neurodegenerative diseases associated with immune system dysfunction. – A. Louveau et al. 2015

 

“The first time these guys showed me the basic result, I just said one sentence: ‘They’ll have to change the textbooks.’” – Kevin Lee, chairman of UVA Department of Neuroscience, quoted in Science Daily

@Vectorofscience, a PhD student studying infectious diseases whom I follow on Twitter, tweeted this about the study: “A lymphatic system in the brain? Now that’s cool, and it’s going to change the way we view immunity in the CNS. http://t.co/eGZtbKFDPD.” If there was a missing link between the brain and the immune system, this study appears to be a big step toward finding it.

Here’s a surprise: there are lymphatic vessels going into the brain. That’s reported in this paper in Nature. (Here’s a pretty breathless press release from the University of Virginia, where the work was done). – Derek Lowe, Ph.D., In The Pipeline

 

Scientists have discovered a previously unknown link between the brain and the immune system that could help explain links between poor physical health and brain disorders including Alzheimer’s and depression. […] The new anatomy is an extension of the lymphatic system, a network of vessels that runs in parallel to the body’s vasculature, carrying immune cells rather than blood. Rather than stopping at the base of the skull, the vessels were discovered to extend throughout the meninges, a membrane that envelops the brain and the spinal cord. – Hannah Devlin, The Guardian

The question now, to be answered by future research, is exactly how this link between the brain and the immune system affects neurological diseases and mental illnesses.

 

Study #3. A new horned dinosaur, Regaliceratops

 

Our third top paper, “A New Horned Dinosaur Reveals Convergent Evolution in Cranial Ornamentation in Ceratopsidae,” was published in Current Biology this month. The paper describes an intriguing new horned dinosaur. As reported by Ian Sample in The Guardian, “Nicknamed Hellboy, the dinosaur had short horns over the eyes and a long nose horn, the opposite of the features sported by its close relative triceratops.”

Regaliceratops exhibits a suite of cranial ornamentations that are superficially similar to Campanian centrosaurines […] This marks the first time that evolutionary convergence in horn-like display structures has been demonstrated between dinosaur clades, similar to those seen in fossil and extant mammals. – C. Brown and D. Henderson

Regaliceratops (“regal”) was named for its frill, “a set of large, pentagonal plates like a crown atop its head,” by researchers at the Royal Tyrrell Museum of Palaeontology who found the skull of this dinosaur in Canada. The discovery of this dinosaur is even more significant because it provides evidence of evolutionary convergence in horned dinosaur display between this dino and its cousins from distant eras. In other words, without being direct ancestors, Regaliceratops and the centrosaurines developed similar horn displays on their skulls.

There are these really stubby horns over the eyes that match up with the comic book character Hellboy. – Caleb Brown, paleontologist at the Royal Tyrrell Museum of Palaeontology in Alberta, Canada, as quoted by National Geographic.

The discovery was covered by many news outlets and online media sites, including Popular Science, National Geographic (Triceratops cousin unearthed in Canada is so elaborately adorned ‘it blows your mind’) and IFLScience (New Horned Dino Rocked a Crown-Shaped Frill), among others.

But Regaliceratops’s amazing looks weren’t the only aspect of this study that attracted media coverage. It may touch your heart to know that the lead author of the paper, Caleb M. Brown, proposed to his girlfriend in the acknowledgements section of the published paper!

“C.M.B. would specifically like to highlight the ongoing and unwavering support of Lorna O’Brien. Lorna, will you marry me?” – A New Horned Dinosaur Reveals Convergent Evolution in Cranial Ornamentation in Ceratopsidae, acknowledgements

Buzzfeed picked up on the marriage proposal too. Not only amazing science, but cute. More BIG news!

 

Study #4. No Global Warming Hiatus

 

Credit: NOAA

 

Our next most-mentioned paper this month is a report published in Science, “Possible artifacts of data biases in the recent global surface warming hiatus.”

Much study has been devoted to the possible causes of an apparent decrease in the upward trend of global surface temperatures since 1998, a phenomenon that has been dubbed the global warming “hiatus.” Here, we present an updated global surface temperature analysis that reveals that global trends are higher than those reported by the Intergovernmental Panel on Climate Change, especially in recent decades, and that the central estimate for the rate of warming during the first 15 years of the 21st century is at least as great as the last half of the 20th century. These results do not support the notion of a “slowdown” in the increase of global surface temperature. – Abstract, Possible artifacts of data biases in the recent global surface warming hiatus

In the report, Thomas R. Karl and other scientists from the National Oceanic and Atmospheric Administration (NOAA) present evidence that disputes the suggestion, based on previous analyses, that global warming “stalled” during the first decade of the 21st century.

Karl et al. now show that temperatures did not plateau as thought and that the supposed warming “hiatus” is just an artifact of earlier analyses. Warming has continued at a pace similar to that of the last half of the 20th century, and the slowdown was just an illusion. – Editor’s Summary, Possible artifacts of data biases in the recent global surface warming hiatus

The report sparked quite a bit of news coverage and strong discussions on social media.

Last week, a paper out of NOAA concluded that contrary to the popular myth, there’s been no pause in global warming. The study made headlines across the world, including widely-read Guardian stories by John Abraham and Karl Mathiesen. In fact, there may have been information overload associated with the paper, but the key points are relatively straightforward and important. – Dana Nuccitelli, The Guardian

 

[T]here never was any “pause” or “hiatus” in global warming. There is evidence, however, for a modest, temporary slowdown in surface warming through the early part of this decade. – Michael Mann

More good reads about this new report can be found below:

I expect the deniers — as usual — will be blowing a lot of hot air about this, but the science is becoming ever more clear. Global warming is real, and it hasn’t stopped. People who claim otherwise are trying to sell you something… and you really, really shouldn’t be buying it. – Phil Plait

 

Study #5. Your viral history in a single drop of blood

 

The capsid of SV40, an icosahedral virus. Image credit: Phoebus87 at English Wikipedia

 

Our last paper, “Comprehensive serological profiling of human populations using a synthetic human virome,” was published in Science this month. The study describes a new method called VirScan that “enables human virome-wide exploration, at the epitope level, of immune responses in large numbers of individuals.”

VirScan combines DNA microarray synthesis and bacteriophage display to create a uniform, synthetic representation of peptide epitopes comprising the human virome. – G. Xu et al. 2015

What does all that mean? It means the authors of this study have developed a blood test that identifies antibodies against all known human viruses. With a drop of your blood, this new test could technically give scientists a history of all viral infections you’ve ever had.

Every time a virus gets you sick, your immune system keeps a record. This essentially becomes a kill list that lets your body recognize and readily dispatch of any virus that tries to invade again. Scientists have now created a $25 blood test that prints out this list—an easy and cheap way to find out every virus that’s ever made you sick. – Sarah Zhang, Gizmodo

 

Thanks to a method described today (June 4) in Science, it may soon be possible to test patients for previous exposures to all human-tropic viruses at once. Virologist Stephen Elledge of Harvard Medical School and the Brigham and Women’s Hospital in Boston and his colleagues have built such a test, called “VirScan,” from a bacteriophage-based display system they developed in 2011. The scientists programmed each phage to express a unique viral peptide, collectively producing about 100 peptides from each of the 206 known human-tropic viral species. – A Lifetime of Viruses, by Amanda Keener, The Scientist

You can imagine the benefits of such a test – for the diagnosis of odd disease symptoms, for example.

 

That’s it for this month! Have thoughts about these findings? Share them with me on Twitter, @FromTheLabBench, or comment below. Thanks!

In our first post in this blog series, we introduced the advantages of using altmetrics to curate your digital identity as a researcher. The aim of this post is to look in more detail at how you can do just that, and provide some tips for how to adapt your online activity to successfully promote your research. We also talked to Ethan White, Biology researcher at the University of Florida, and Jacquelyn Gill, Professor of Ecology at the University of Maine, to see what tips they had for our readers.

Blogging 


Jacquelyn Gill’s blog

Ethan and Jacquelyn both said they use blogs and Twitter most often to promote their research. Blogs are a really great way to introduce new research and participate in the conversations that are happening in your field. However, the blogosphere is not simply an online space from which to alert the world to your own activities.

Following other blogs, commenting on other people’s posts and including links to other blogs in your posts means you can participate in wider academic discussions, and potentially invite more engagement with your own research. If you create a blog using WordPress, Blogger or Tumblr, you can view and save preferred blogs from the same platform using the built-in “suggested blogs” sections on their sites.

You can also install the free Altmetric bookmarklet to see if anyone has mentioned your own research (or even other research published in your field) in a blog post – simply drag the bookmarklet to your browser bar and click it while viewing your article on the publisher site to bring up the Altmetric data.
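If you prefer a programmatic route to the bookmarklet – for example, to check a whole list of your papers at once – Altmetric also exposes a public details API that can be queried by DOI. The endpoint and response field names below are assumptions based on our understanding of the public v1 API, so treat this as a rough sketch rather than definitive documentation.

```python
# Rough sketch: look up Altmetric attention data for an article by its DOI.
# The endpoint and field names are assumptions based on the public v1 API.
import requests

def altmetric_summary(doi):
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code == 404:
        return None  # no attention recorded for this DOI yet
    resp.raise_for_status()
    data = resp.json()
    return {
        "title": data.get("title"),
        "score": data.get("score"),
        "blog_mentions": data.get("cited_by_feeds_count", 0),
        "twitter_mentions": data.get("cited_by_tweeters_count", 0),
    }

# Hypothetical usage – replace with the DOI of your own article:
print(altmetric_summary("10.xxxx/your-article-doi"))
```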

For more blogging tips, this post from Helen Eassom at Wiley has some great suggestions for effective practice.

Maintaining a consistent digital identity


Ethan White’s Twitter profile

It’s important to be consistent with how you present your identity across different online platforms. For example, you might want to use the same photo across your university faculty page, blog homepage and social media accounts, so that people who might be interested in your research can instantly identify you and verify (for example) your Twitter account against your LinkedIn profile.

Another way of maintaining these connections is to link between platforms when posting. You can do this by sharing your newest blog posts on social media, or including a link to your blog or website in your Twitter bio and faculty page. According to Jacquelyn Gill, “Maintaining visibility on multiple platforms is key! I’ve found Twitter to be an especially great resource in signal-boosting blog posts and new articles. Most other platforms don’t take much work, but it’s always worth putting in the time to keep them up-to-date”.

Networking

Blogs and social media networks can offer the opportunity to engage with people you might not otherwise have had the chance to meet. If (for example) a fellow researcher leaves an interesting comment on one of your blog posts, it should be easy to respond to their comments, and perhaps later locate them on social media to continue the conversation. The people they follow might also be useful contacts to engage with, thereby increasing your own network. If you’re on the conference circuit, it’s always worth following up any talks you give with a tweet linking directly to your published research, using the conference hashtag to alert other delegates.

As with blogging, the Altmetric bookmarklet can show you who has been sharing both your own work and other outputs published in your discipline via their blogs and on Twitter, Facebook, Sina Weibo and Google Plus – providing insight into who it might be worth following or reaching out to for additional visibility in future.

Ethan White had lots of interesting things to say about using online platforms to manage and update your professional network. He argued that it’s more useful to think of blogs and social media as tools for creating mutually beneficial relationships that support knowledge dissemination, rather than simply as channels for broadcasting your own papers.

“Developing a good network of online colleagues will ultimately help you promote your research online more successfully. Think about it this way: if you had a colleague who only ever stopped by your office to tell you that they’d just had a new paper published, you might not be super excited to see them, but if you have a colleague who you talk to about lots of different things, and respect based on their opinions on science in general, then you’d be excited to hear that they had a new idea or had just published a new paper”.

Ethan’s analogy works really well, and suggests that a researcher’s attitude towards online engagement with research is just as important as their practices.

Sharing your own research online 

Ensuring your research is as freely accessible as possible can really help raise your profile online. Make a habit of uploading articles to your institutional repository or sharing them amongst academic networks like Mendeley, Zotero or ResearchGate (once they are free of any embargo restrictions, of course), so they can be read by people who may not otherwise have access.

You can also use services such as Figshare to upload and attach unique identifiers to non-article research outputs, such as datasets, posters or images – giving other researchers the opportunity to reuse and build on your work (dependent on your chosen security and copyright preference settings). Once you’ve made your research available, you might like to include links to your outputs from your email signature, institutional faculty page or LinkedIn profile, or even post them to a subject-specific forum.

If you’re keen to take it a step further you might like to consider building your own website to showcase your work. There are lots of free platforms available, so this need not be technically daunting – try Moonfruit or Wix to help you get started.

Finally… how can I make sure my online activity is picked up by Altmetric?

  • If you have a blog, email support@altmetric.com with the homepage and a link to the RSS feed, so we can add it to our list, and start picking up mentions of published research outputs in your posts.

  • When blogging about research, make sure you embed a link to the article in the main body of the text. Our software ignores headers and footers when scraping a page, so mentions of articles in footnotes don’t get picked up (see the toy sketch after this list for an illustration).

  • When posting on social media, attach a link to the main article page of the research output on the publisher website, rather than to a PDF.
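To illustrate why the placement of the link matters, here is a toy sketch of a scraper that only counts article links found inside the main body of a post, ignoring anything in the header or footer. This is not Altmetric’s actual software; the HTML layout and tag names are invented purely for the example.

```python
# Toy illustration only: count links to scholarly articles that appear in the
# main body of a blog post, ignoring header and footer content. This is NOT
# Altmetric's real scraper; the HTML below is an invented example.
from html.parser import HTMLParser

EXAMPLE_POST = """
<header><a href="https://doi.org/10.1234/ignored">link in the header</a></header>
<article>
  <p>Our new study is out! Read it at
     <a href="https://doi.org/10.1234/example">the publisher's site</a>.</p>
</article>
<footer><a href="https://doi.org/10.1234/footnote">reference in a footnote</a></footer>
"""

class BodyLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_article = False
        self.body_links = []

    def handle_starttag(self, tag, attrs):
        if tag == "article":
            self.in_article = True
        elif tag == "a" and self.in_article:
            href = dict(attrs).get("href", "")
            if "doi.org" in href:
                self.body_links.append(href)

    def handle_endtag(self, tag):
        if tag == "article":
            self.in_article = False

finder = BodyLinkFinder()
finder.feed(EXAMPLE_POST)
print(finder.body_links)  # only the link inside <article> is counted
```

The practical takeaway is simply the bullet point above: put the article link in the body text of the post, not in a footnote, sidebar or footer.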

As always, feel free to give us feedback on this blog post – thanks for reading!

This is a guest post contributed by Elaine Devine, Communications Manager (Author Relations), Taylor & Francis Group.

Psychology, animal welfare, agricultural sustainability, defence analysis, palaeontology, higher education research, neuroscience, toxicology, ecology, nutrition. It’s a diverse list, but what brings all these different areas of research together? Every one of them (plus more) is covered in a list of the 20 original research articles with the highest Altmetric scores published across Taylor & Francis Group journals.


The sheer diversity of this list highlights the enormous variety of research published, but also shows that any article has the potential to gain attention online, whether via a blog, social media, or the news. We’re an engaged lot, and if something fires the imagination (and encourages debate) it’s exciting to see just how quickly the snowball effect can begin. But what makes one journal article get picked up in this way? The magic formula isn’t always clear – is it subject matter, an effective title, a ‘hot’ topic, renowned authors, or some combination of these (or none of them)?

With Altmetric data recently added to Taylor & Francis Online and CogentOA, we had the opportunity to look at which articles, dating back to January 2012, had the highest Altmetric scores (articles published from this date now feature the Altmetric donut within the journal’s table of contents, on individual article pages and, on Taylor & Francis Online, for all authors within their My Authored Works account). We gathered these together into a ‘top 20’ list. Looking at the final list raised the question ‘why this article over another?’

To try and answer this, I asked the authors featured in the list why they thought their article had gained so much attention. Their responses were varied, and I’ve included just a few snippets here:

“…people are anxious to find out how technology is impacting relationships because its use is so ubiquitous; we are just beginning to uncover the real-life impact of our increased use of technology for communication in our intimate relationships…”

Lori Schade, licensed marriage and family therapist and adjunct faculty at Brigham Young University, Utah and co-author of ‘Using Technology to Connect in Romantic Relationships: Effects on Attachment, Relationship Satisfaction, and Stability in Emerging Adults’ (no. 18 in our list)

 “…it shows how we can apply our scientific knowledge…to policy forums.  This is a type of translational science, applying our scientific knowledge to improving animal welfare and management practices.” 

Diana Reiss, Professor, Department of Psychology, City University of New York and co-author of ‘A Veterinary and Behavioral Analysis of Dolphin Killing Methods Currently Used in the “Drive Hunt” in Taiji, Japan’ (no. 1 on our list)

“…most people are unaware that this organochlorine compound causes numerous adverse biological effects.  The large number of downloads has raised awareness among scientists and the general public about safety and health concerns…”

Susan S. Schiffman, Department of Electrical and Computer Engineering, College of Engineering, North Carolina State University and co-author of ‘Sucralose, A Synthetic Organochlorine Sweetener: Overview Of Biological Issues’ (no. 16 in our list)

Reading through their responses, what came across strongly was the importance of ‘real world’ implications for their research. Whether it was on GM crops or cyber attacks, childhood amnesia or the impact of technology on relationships, each of the articles explored a topic relevant to our everyday lives. This list highlights, as I’m going to steal from the excellent Ontario project, that ‘research matters’, not just in the lab or the lecture theatre, at a conference, or when drafting and re-drafting a paper, but in the real world and to real people.

Congratulations, and thank you, to all the authors who featured in the ‘top 20’ and then very kindly sent me their thoughts. I’m looking forward to seeing what’s on the list when we run it again next year, but the one thing I do know is that it will be just as diverse again, with just as many ‘real world’ implications and applications. Now that’s what I call impact.

Read the full top 20, including links to every article featured and the authors’ comments.

Over the last few months, team Altmetric have been busy visiting our institutional customers, running altmetrics training sessions and facilitating workshops. We’ve travelled from Washington D.C. to Melbourne to Cambridge, offering training support for teams across a number of universities. We’ve also run a fair few online webinars, training our customers on Altmetric’s tools, running online tutorials and offering tips for implementing and rolling out Altmetric for Institutions.

Primarily, we focus on running Train the Trainer sessions, ensuring Altmetric super users are tooled up to cascade altmetrics training across their institution. We also work hard to engage with the broader research community – encouraging responsible use of metrics, emphasising the value of looking beyond the score by analysing the conversations behind the donut, and sharing ideas for building altmetrics into current workflows and using them alongside existing forms of measuring and understanding research impact.

Training support in numbers

Since November last year we’ve run almost fifty training sessions across academic institutions and publishers. We’ve reached almost 500 attendees including librarians, research office teams, communications officers, researchers, publishers and university policy makers. This post shares a few highlights from our recent training visits and features a guest appearance from our very own Altmetric mascot, President Icinghower.

President Icinghower meets President Lincoln


A momentous meeting of minds

Earlier in the year we took a trip to Washington D.C. and visited the publications team at the World Bank. It was great to discuss how the World Bank are using Altmetric for Institutions and Altmetric embeddable badges across their repositories, showcasing attention to a wide range of published content including reports, journal articles and working papers. Take a look at this example of the World Bank’s Altmetric badge embeds in the Open Knowledge Repository.

We were also accompanied on our trip by intrepid traveller President Icinghower, who was pretty thrilled to meet President Lincoln at the Lincoln Memorial!

Training librarians and research office teams

Next we flew to Australia, where we ran a number of training sessions, university visits and interactive workshops. A highlight was running a training session for staff at Macquarie University in Sydney. We were really excited to hear the Library team’s plans for rolling out both Symplectic Elements and Altmetric for Institutions across the institution, and discussing the integrations between the two systems.

We also saw Macquarie University Library’s automated book storage and retrieval system, which fills the entire basement of the Library, holds 80% of the print collection, and delivers books on request for Library users – saving space in the library for additional study areas.

The following week, we visited the University of Melbourne – running workshops with figshare to mark the launch of both Altmetric for Institutions and Figshare for Institutions. We ran training sessions and hands-on activities with researchers and university staff, and had some great discussions with attendees about using altmetrics on your CV, how we calculate the Altmetric score of attention and some of the most popular research outputs produced by the University of Melbourne.

Digital Science Showcase Workshop

We rounded off the week with the Digital Science Showcase Event, Innovations in Research Management. Kathy Christian, Altmetric COO, discussed the Altmetric story so far and shared some of the product development plans on our roadmap this year. We also had the opportunity to run an afternoon Altmetric workshop, with plenty of post-it notes, stickers and group discussions. Thanks to everyone who came along to the workshops and participated so actively – we loved hearing about your altmetrics use cases and ideas for future Altmetric developments!

We’ll be at lots more events this year, hosting workshops and running training sessions for all of our customers: get in touch if you’d like to find out more!

This is a guest post contributed by Heather Coates, Digital Scholarship & Data Management Librarian at IUPUI.

My perspective as a tenure-track librarian tends to be that of a practitioner-researcher. Practically speaking, this means that part of my job is to know how the scholarly ecosystem works – to understand how scholarly products are created, disseminated, used, curated, and evaluated.

Over the past three years, I have taught several workshops sponsored by IUPUI’s Office of Academic Affairs on using citation metrics and altmetrics to demonstrate excellence and impact in promotion and tenure (P&T) dossiers. Several things have led me to insights that I think are helpful for librarians interested in supporting the use of altmetrics: developing altmetrics workshops and doing one-on-one consultations; conversations with my campus’s Associate Vice Chancellor of Academic Affairs and Director of Faculty Enhancement; and the experience of assembling my own dossier. In this post, I’ll share useful strategies for offering successful altmetrics workshops on your own campus, and advice for crafting messaging that resonates with researchers at all stages of their careers.

 

Get out of the library

Faculty do not typically think of the library for support in putting together their dossiers, so it is crucial to partner with units that faculty seek out for this expertise. Luckily for us, a valuable opportunity fell into our laps. In 2012, librarians were invited to work with the Office of Academic Affairs to support faculty in gathering evidence for P&T dossiers. This support began with a 2-hour workshop, which is now part of a regular series in support of faculty development. Here’s how it went.

We started off in the fall of 2012 with a broad introduction to publication-based metrics. It was a fairly traditional library workshop that focused heavily on citation metrics from subject and citation databases, plus Google Scholar. However, we did describe the various levels of evidence (journal-level, article-level, and author-level) and introduce the idea that journal-level metrics are the least relevant to promotion and tenure. We also introduced ImpactStory, a researcher profile that includes altmetrics data, and sources for informal metrics like acceptance rates, library ownership counts, and indexing status.

Here’s how the workshop was structured: in the first 30 minutes, we provided the explanatory content (what citations and altmetrics are, how they are sourced, and so on). The rest of the two-hour workshop was a mix of demonstrations and hands-on activities with tools like Web of Science and Google Scholar. We wrapped with a Q&A panel that included two librarians and the Vice Chancellor of Faculty Affairs.

We learned two major things in the first workshop: many faculty already had Google Scholar profiles, and faculty were more interested than we assumed in altmetrics.

There was enough interest expressed in the evaluations that we expanded the altmetrics section in the fall 2013 workshop. Around that time, we also began offering this workshop each fall and spring semester, rather than once a year.

This allowed us to differentiate the focus of the workshops a little each time: for example, one year we held two workshops, one for health professionals, science, and technology and another for the humanities; another year, we focused on demonstrating impact in public scholarship and civic engagement, as well as for team science. Our most recent workshops (2014-2015) have focused primarily on altmetrics. We have also begun to differentiate workshops and guidance for the types of products and scholarship that faculty across campus are creating.

In general, the content covered in these workshops includes the following:

  • Why metrics – proxy for quality

  • Types of metrics

    • Journal-level, author-level, article-level citation metrics

    • Web and social media metrics

  • Sources of metrics

    • Aggregators

    • Publishers

    • Repositories

  • Evaluating metrics (comparison table)

  • Tips for gathering data

  • Strategize

    • Gather

    • Record

    • Select

    • Visualize

The workshops have been so successful because we brought librarian expertise to a support system (Academic Affairs promotion & tenure workshop series) that was already established and in demand. Had we tried to host these workshops on our own, they would not have been as well-attended.

The most valuable aspect of this process for me has been the collaborative relationship developed with the Office of Academic Affairs. Thanks to them, I have learned much about the promotion and tenure process at IUPUI, as well as the dynamics between campus tenure/promotion guidelines and department/school-level guidelines. In fact, my relationship with the OAA has helped pave the way for a longitudinal study examining trends in IUPUI faculty publication practices.

 

Helping faculty make strategic decisions about how to disseminate their scholarship

These workshops have been a great opportunity to hear about faculty concerns and engage them in a conversation about broader issues. In the context of helping faculty to achieve their career goals, it is easier to bring up issues like open access, data sharing, choosing publication venues, and other strategic decisions they can make to increase the reach and impact of their scholarship.

It became clear during the development of the first workshop that faculty do not typically make strategic decisions about disseminating their scholarship, beyond aiming for journals with the highest impact factors. (And we all know how well that works for faculty publishing in less-cited fields and unranked journals.) This strategy should be informed by personal, departmental, and school priorities. Many early career faculty look to their department chairs for guidance on which journals are most highly valued. When researchers follow departmental priorities without considering the specific audiences (researchers, practitioners, business or public communities) with whom they want to engage, the impact and reach of their work often suffers. This is particularly true for interdisciplinary and community-based research. Librarians should make use of our skills in navigating the ecosystem to guide faculty in making informed choices about publication venue, author rights retention, sharing, and targeted outreach to the key stakeholders for their scholarly products.

Over time, the altmetrics component of the workshop grew based on feedback from faculty and the dialog with Gail and Mary. We also began to offer differentiated workshops based on the types of P&T case (e.g., research, service, teaching, balanced case) as well as the discipline. Though librarians have expertise in the scholarly ecosystem and bibliometrics, we are not necessarily well-informed about the P&T standards and procedures at our own institutions. Those of us who have been involved in developing and providing these workshops have learned an immense amount about how this evidence can support a candidate’s case, as well as the need to present it as part of a cohesive narrative that is easy for the reviewers to read and evaluate. In particular, I was surprised by the weight placed on the candidate’s presentation of their evidence. Even with very strong metrics, a candidate must present them in the appropriate context and connect them explicitly to their case in order for reviewers to give them weight. The Becker Model from Washington University provides a useful framework for putting metrics into context so that reviewers can evaluate them appropriately in the context of the candidate’s argument. This model also helps faculty to connect particular products and available metrics to particular types of impact.

My advice for building these collaborative relationships with units like Academic Affairs is to go slowly and focus on developing a constructive dialog. Like the library, they exist to support faculty. Building a network of support in which faculty can succeed is a major driver for this collaboration. It also helps to counter the perception that libraries exist only to circulate books.

 

Being change agents

Advocating for the use of broader metrics for impact and reputation in P&T requires engaging with faculty, departmental administration, and campus administration to make change on campus. While “change” was not necessarily a goal when we started offering these workshops, guerilla advocacy has become a part of my conversations with faculty.

One advocacy tactic is to help researchers step back to see the value of all their scholarly products in new ways. One easy way to get them to think outside the box is to have them list all of the products resulting from one specific research project. This usually includes presentations, posters, white papers, and policy reports. Sometimes faculty list code, models, data, and teaching materials, depending on their focus. With these specific products in mind, describe a couple of scenarios for how altmetrics provide data for individual items. This really helps them to understand the power of altmetrics. Rather than relying on a general metric describing the impact of the container for their work (i.e., the journal), they can point to specific evidence for how their presentation or blog post or syllabus has been discussed and reused. This type of evidence is also more powerful for supporting a case because it relates directly to the items produced during the period of review.

The promotion and tenure process is about demonstrating potential for contributing to scholarly knowledge in your discipline, but that’s difficult to do in the relatively brief window that faculty have to publish before they go up for tenure. Given the lag in accumulating citations, a great way to get buy-in for altmetrics is to help faculty understand the portfolio of metrics that they can use to demonstrate the impact of their work in multiple areas, more quickly than citations allow.

My goal as a librarian in this area is to advocate for more sustainable practices through informed decision making. If you care about these issues, become a local champion for open access or data sharing or open science at your institution. You can also raise awareness through service on campus committees, getting involved with new faculty orientation, and engaging department chairs in discussion of their priorities and criteria. As a champion, the most important step is to become an adopter – use altmetrics in your own promotion and tenure materials and discuss their value with the library promotion and tenure committee.

 

Become an altmetrics expert

There’s no better way to be an effective instructor than to know the topic firsthand. So, you should become a user of altmetrics before you offer a workshop on them. Try to use altmetrics for your own professional advancement (annual reviews, promotion and tenure, grant applications, etc.) – in doing so, you’ll very quickly learn the best places to find altmetrics data, which data types are the most useful to demonstrate particular flavors of impact, and so on.

As a practitioner first, I am usually my own guinea pig for the strategies and tools I recommend in our workshops. I do my best to walk the talk, so to speak. In my mid-tenure review, I included a table of webometrics (pageviews and downloads) for my materials in IUPUI’s institutional repository and on Slideshare, plus a few tweets related to my conference presentations. In my last annual review, I included a screenshot of my ImpactStory profile, Storify conversations about my conference presentations, and an extensive table of metrics for materials on Slideshare.

My dossier is due in May 2016. My case will be built primarily on my engagement with and contribution to data librarianship, demonstrated in large part by altmetrics. I am fairly confident that the reviewers will see the value of this evidence, at least at my own institution.

Since preparing for my mid-tenure review (where I collected most of my impact data manually), aggregation tools like ImpactStory, the Altmetric bookmarklet, Google Scholar, and PlumX (if your institution has a subscription) have made gathering impact data much, much simpler. These services collect data from across the web and incorporate it into a single, article-level or researcher-level report – give them a go for yourself!

One final way to be an altmetrics expert is to keep on top of the altmetrics research literature. New studies are published all the time and are often shared in this Mendeley altmetrics group. To get started, check out these resources:

These readings contain great content (including strategies for using altmetrics and examples of researchers who have used altmetrics for grants and tenure) that you can borrow from when creating workshops.

Welcome to Altmetric’s “High Five” for May, a discussion of the five scientific papers that received the highest Altmetric scores this month. Each month, my High Five posts examine a selection of the most popular research outputs that Altmetric has tracked attention for.

The theme this month is odd discoveries.

Paper #1. Warm-Blooded… Fish?

Opah. Credit: NOAA Fisheries

 

All fish are cold-blooded ectotherms, right? You probably learned that in grade school. But now, we know that “fact” is wrong.

Our top paper this month, picked up by at least 87 news outlets and 20 blogs, documents the discovery of the first warm-bodied fish. The paper, “Whole-body endothermy in a mesopelagic fish, the opah, Lampris guttatus,” was authored by Nicholas Wegner, Owyn Snodgrass, Heidi Dewar and John Hyde, and published in Science on May 15th (incidentally the day I and many others graduated with a PhD!) On Twitter, the discovery received attention predominantly from members of the public, according to Altmetric data, as well as scientists and science communicators.

Mammals and birds warm their entire bodies above the ambient temperature. Generally, this ability is lacking in other vertebrates, although some highly active fish can temporarily warm their swim muscles. Wegner et al. show that the opah, a large deepwater fish, can generate heat with its swim muscles and use this heat to warm both its heart and brain. This ability increases its metabolic function in cold deep waters, which will help the fish compete with other, colder-blooded species. – Editor’s Summary, Science

Here, we describe a whole-body form of endothermy in a fish, the opah (Lampris guttatus), that produces heat through the constant “flapping” of wing-like pectoral fins and minimizes heat loss through a series of counter-current heat exchangers within its gills. Unlike other fish, opah distribute warmed blood throughout the body, including to the heart, enhancing physiological performance and buffering internal organ function while foraging in the cold, nutrient-rich waters below the ocean thermocline. – Paper abstract, Science

The study was covered extensively in the news, with stories in the New York Times, “In a First, a Fish Is Shown to Be Fully Warm-Blooded,” NPR, “First In Fish: ‘Fully Warmblooded’ Moonfish Prowls The Deep Seas,” and Science News, “Deepwater dweller is first known warm-hearted fish.”

“It’s hard to stay warm when you’re surrounded by cold water, but the opah has figured it out.” – Study author Nicholas Wegner, in NOAA Fisheries news release

Wegner realized the opah was unusual when a coauthor of the study, biologist Owyn Snodgrass, collected a sample of its gill tissue. Wegner recognized an unusual design: Blood vessels that carry warm blood into the fish’s gills wind around those carrying cold blood back to the body core after absorbing oxygen from water. – NOAA Fisheries news release

The opah is a large and colorful fish, weighing more than 100 pounds and resembling a fat car tire. Its odd appearance may have added to the newsworthiness of the discovery of its warm-blooded nature. Not to mention that the opah, or moonfish, is camera-shy, unexpectedly fast in the water, and delectable raw or cooked!

“If you get in the way of their fins, they’ll smack you. They’re pretty feisty.” – Owyn Snodgrass, quoted in National Geographic article.

IFLScience also featured the discovery, “Revealed: First Warm-Blooded Fish (And We’ve Been Eating It For Years).”

This discovery is surprising since the opah is large and conspicuous; indeed, it’s already a favourite in fish markets and restaurants. Wegner and his colleagues deserve great credit for recognising and describing in detail the specialised gill heat exchangers that have been hidden right under the noses of fishermen and chefs for centuries. – Imants Priede, IFLScience!

The discovery was also featured in several well-known science blogs, including Nothing in Biology Makes Sense, Southern Fried Science, io9 and Not Exactly Rocket Science.

Realizing that opahs are warm-blooded completely changes the way scientists view the life history strategies of this unique fish. Once thought of as slow, ungainly predators, in reality opahs are swift, actively chasing down and feasting on agile deep ocean prey like squid. – Kersey Sturdivant, Southern Fried Science

Elevated temperature in the eye and brain of the opah allows for enhanced vision. Image Credit: NOAA Fisheries, Southwest Fisheries Science Center

Most fish have body temperatures that match the surrounding water. A small number of them can warm specific parts of their bodies. Swordfish, marlins, and sailfish can temporarily heat their eyes and brains, sharpening their vision when pursuing prey. Tuna and some sharks, including the mako and great white, can do the same with their swimming muscles, going into turbo mode when they need to. But none of these animals can heat their entire bodies. Their hearts and other vital organs stay at ambient temperature, so while they can hunt in deep, cold waters, they must regularly return to the surface to warm their innards. The opah has no such problem. It can consistently keep its entire body around 5 degrees Celsius warmer than its environment. It doesn’t burn as hot as a bird or mammal, but it certainly outperforms its other relatives. – Ed Yong, Not Exactly Rocket Science

For more information about the opah, see this Science video.

Paper #2. A Jurassic bird-like theropod with bat-like wings: Meet Yi qi

Restoration of the membrane-winged scansoriopterygid dinosaur Yi qi. Image Credit: Emily Willoughby.

 

Our next High Five paper was published in Nature in April 2015. The study, “A bizarre Jurassic maniraptoran theropod with preserved evidence of membranous wings,” documents the discovery of fossils of a small feathered dinosaur with bat-like wings. The Guardian covered the study with the headline “Is it a bird? Is it a bat? Meet Yi qi, the dinosaur that is sort of both.”

Researchers today announced the discovery of a stunning new dinosaur fossil: a glider with wings similar to both birds and bats. It has been named Yi qi (meaning ‘strange wing’) and is a small feathered dinosaur from the Middle Jurassic age fossil beds of China that have yielded a host of important fossils in recent years. Yi qi, like so many other small dinosaurs, is preserved with a full coating of feathers and was a close relative of the lineage that ultimately gave rise to birds. However, what sets this animal apart from numerous other dinosaurian gliders and proto-birds is the composition of its wings. In addition to some unusual feathers that are positioned on the long arms and fingers, there is a truly gigantic bone on each wrist that extends backwards, and between this bone and the fingers is preserved a membrane-like soft tissue that would have given the animal something of a wing, like that of bats. – David Hone, The Guardian

The study certainly grabbed imaginations. Sedeer el-Showk wrote for Nature’s Scitable blog Accumulating Glitches: “It’s wonderful to see this long-lost world grow ever more diverse. I would love to be ten years old again, my head burstingly full of names and pictures of dinosaurs, overflowing with knowledge about their lifestyle. This time, though, it wouldn’t be a drab world of lumbering grey and brown giant reptiles, but one also peopled by their colourful feathered cousins, including a tiny dinosaur with feathers on its head swooping among the gingko trees with bat-like wings.”

The study was also covered by io9, “Scientists Find New Dinosaur With Bat-Like Wings,” Wired.co.uk, “Weird bat-winged dinosaur may reveal evolution of birds,” and Not Exactly Rocket Science at National Geographic, “Chinese Dinosaur Had Bat-Like Wings and Feathers.”

These wings were mutually exclusive: dinosaur or pterosaur, feathery or leathery. But Yi went for both options! It had membrane wings with a feathery covering on the leading edge. It shows that at least some dinosaurs had independently evolved the same kind of wings as pterosaurs—an extraordinary example of convergent evolution. “This is refreshingly weird,” says Daniel Ksepka from the Bruce Museum, who was not involved in the study. “Paleontologists will be thinking about Yi qi for a long time, and we can surely expect some interesting research into the structure and function of the wing.” – Ed Yong, Not Exactly Rocket Science

Yi means “wing” and qi means “strange” in Mandarin. So Yi qi is the “strange winged” dinosaur. – Wonderful Scientific Names, Part 4: Yi qi, by Stephen Heard

 

Paper #3. 3.3-million-year-old Stone Tools Found in Kenya – Re-writing Textbooks

Infographic: the 3.3-million-year-old stone tools. Image Credit: Stony Brook University press release

 

Our third High Five paper, “3.3-million-year-old stone tools from Lomekwi 3, West Turkana, Kenya,” was published in Nature this month. The study, covered by at least 53 news outlets and 9 blogs, describes the discovery of stone tools in Kenya that predate the previously earliest known stone tool archaeological site by 700,000 years. That’s enough to attract quite a bit of attention.

Human evolutionary scholars have long supposed that the earliest stone tools were made by the genus Homo and that this technological development was directly linked to climate change and the spread of savannah grasslands. New fieldwork in West Turkana, Kenya, has identified evidence of much earlier hominin technological behaviour. We report the discovery of Lomekwi 3, a 3.3-million-year-old archaeological site where in situ stone artefacts occur in spatiotemporal association with Pliocene hominin fossils in a wooded palaeoenvironment. – S. Harmand et al. 2015

Scientists have long considered members of genus Homo to be the originators of complex stone tool manufacture and use. But a newly reported find is forcing a reconsideration of human history. Researchers working in Kenya have found 3.3-million-year-old stone cores and flakes, which indicate the manufacture of tools and are about 700,000 years older than the artifacts previously considered the oldest stone implements. – Bob Grant, The Scientist

I can’t help but think of the classic bone tool-wielding scene in “2001: A Space Odyssey.” Now the species wielding the first tool in that scene may no longer be thought of as an early Homo, but rather an older species of ancient ape, like the famous Lucy fossil. According to Hannah Devlin writing for The Guardian, the finding “overturns idea that tool-making ability was unique to our own ancestors and is hailed as a ‘new beginning to the known archaeological record.’”

The Homo genus, from which modern humans descend, only emerged around 2.5 million years ago, when forests gave way to open grassland environments in Africa. Until now, it was widely assumed that environmental changes around this time triggered the shift towards a bipedal hunter-gatherer life style. Jason Lewis, of Stony Brook University in New York and a co-author, said: “The idea was that our lineage alone took the cognitive leap of hitting stones together to strike off sharp flakes and that this was the foundation of our evolutionary success. This discovery challenges the idea that the main characters that make us human, such as making stone tools, eating more meat, maybe using language, all evolved at once in a punctuated way, near the origins of the genus Homo.”

The question of what, or whom, might have made the tools remains a mystery, but fossils from around the same period found at the site provide some clues. The skull of a 3.3-million-year-old hominin, Kenyanthropus platyops, was found in 1999 about a kilometre from the tool site and a skull fragment and tooth from the same species were found just a few hundred metres away. – Hannah Devlin, The Guardian

More reading on this study:

World’s Oldest Stone Tools Predate Humans, By Carl Engelking

As they dug deeper, they found a series of sharp stone flakes that bore the telltale marks of intentional engineering. In all, they uncovered 20 well-preserved flakes, cores, anvils — used as a base to shape stones — and an additional 130 other tools. To make these tools, hominins would have needed a strong grip and good motor control, scientists said, providing potential insights into the physical capabilities of human ancestors. – Carl Engelking

Chipping Away At The Mystery Of The Oldest Tools Ever Found, NPR

Stone Tools From Kenya Are Oldest Yet Discovered, John Noble Wilford, New York Times

Our Ancestors Made These Tools 3.3 Million Years Ago, by Gregory Filiano at Stony Brook

Watch: Stony Brook Team Finds Earliest Stone Tools

 

Paper #4. A Vegetarian (really!) relative of T. rex.

Image: Skeleton reconstruction of Chilesaurus. Image Credit: Jaime A. Headden


 

The next High Five paper continues the “old and odd discoveries” theme for this month. The study, “An enigmatic plant-eating theropod from the Late Jurassic period of Chile,” was published online in Nature in late April. It’s not hard to see why this study was covered by over 55 news outlets and 9 blogs. NewScientist magazine headlined “Freakiest dinosaur ever found is a vegetarian relative of T. rex.”

Meet T. rex‘s bizarre vegetarian relative. Chilesaurus diegosuarezi was discovered in southern Chile where it lived around 150 million years ago. It looks like a mosaic of several other dinosaurs. It had a tiny head, a 3-metre-long body and small arms like T. rex – but with blunt fingers instead of raptorial claws. And unlike T. rex, it had broad hind feet more like those of Diplodocus, while its pelvic girdle tipped back like that of Triceratops. “This dinosaur was a plant-eater, based on its teeth and jaws, but the rest of the skeleton looks like a strange chimera of various meat- and plant-eating dinosaurs,” says Darla Zelenitsky, a dinosaur palaeontologist at the University of Calgary in Alberta, Canada. – Andy Coghlan, NewScientist

“It really is a very strange and exciting combination of features. If I hadn’t seen an articulated [assembled] specimen, I’d have found it hard to believe.” – National Geographic Explorer Paul Sereno of the University of Chicago, T. rex’s Oddball Vegetarian Cousin Discovered

The evolution of herbivores like Chilesaurus diegosuarezi from meat-eating ancestors is uncommon but not unprecedented. Daniel Culpan writes in Wired.co.uk, “today’s placid, plant-munching pandas actually evolved from carnivorous ancestors related to the grizzly bear.”

More reading:

‘Frankenstein’ dinosaur was a mash-up of meat eater and plant eater, by Ashley Yeager, Science News

 

Paper #5. Your Facebook Filter Bubble MAY Be Your Own Fault, Facebook Says


Facebook is often criticized for creating echo chambers. Hamilton Mausoleum has a spectacularly long-lasting unplanned echo. Image Credit: I, Supergolden. Licensed under CC BY-SA 3.0 via Wikimedia Commons.

 

Our final High Five paper veers from the “odd discovery” theme of our first four. The study, “Exposure to ideologically diverse news and opinion on Facebook,” was published in Science magazine this month and received some controversial coverage online.

The study, conducted by three researchers affiliated with Facebook, examined how 10.1 million U.S. Facebook users interact with socially shared news. The researchers measured the extent to which Facebook friends expose each other and others to ideologically cross-cutting content, for example news slanted in the opposite direction of one’s own political affiliation. The researchers “then quantified the extent to which individuals encounter comparatively more or less diverse content while interacting via Facebook’s algorithmically ranked News Feed, and further studied users’ choices to click through to ideologically discordant content,” (Bakshy, Messing and Adamic, 2015).

What did the researchers find as a result of this study? “Compared to algorithmic ranking, individuals’ choices about what to consume had a stronger effect limiting exposure to cross-cutting content,” (Bakshy, Messing and Adamic, 2015). In other words, the “filter bubble” or “echo chamber” effect of Facebook is not (all) Facebook’s fault, according to this study.

If the results of the 2015 General Election did not reflect the conversation on your News Feed, Facebook wants you to know it’s mostly your fault — not theirs. The social network reports in a study published in Science that the choices a user makes about who they follow has a greater impact on the political tone of their news feed than its own content algorithms. – Michael Rundle, Wired.co.uk

Stand back, Facebook conspiracy theorists: according to the study, we can’t blame algorithms for our newsfeeds’ tendency to turn up yet more baby pictures or Rand Paul political ads. Instead, our likelihood of encountering content shared by people with opposing points of view depends more on the political views of our friends and what links we’re most likely to click. – Andrew Freedman, Mashable

For those who aren’t aware, Facebook’s news feed does not show you every post from your friends (which is what I have always liked about Twitter – it generally shows you most posts from those you follow). Rather, Facebook generates a selection of posts to insert into your news feed based on an algorithm that “guesses” what it thinks you will like most, based on your previous activity on the site. The algorithmic decision of what goes onto your news feed has been a source of concern for some, who suggest that Facebook generates a “filter bubble” or “echo chamber” effect by limiting the news and updates you could potentially be exposed to.

“If algorithms are going to curate the world for us, then… we need to make sure that they also show us things that are uncomfortable or challenging or important.” – Eli Pariser

But according to Eytan Bakshy, Solomon Messing and Lada Adamic, it is Facebook users themselves, to the same or a greater extent than Facebook’s news feed algorithm, who are limiting their own exposure to politically or otherwise cross-cutting content.

The authors found that Facebook’s algorithm had a modest effect on the kind of content people saw, filtering out 5 per cent of news that conflicts with conservative views and 8 per cent for liberals. However, they found that self-screening had a much bigger effect. People showed a clear preference for stories that fit their own world view: liberals clicked on only 7 per cent of conflicting content, while conservatives clicked on 17 per cent. The authors conclude that individual choices, more than algorithms, limit exposure to diverse content. Or, as Christian Sandvig, an internet policy researcher at Harvard University, put it in a blog post yesterday: “It’s not our fault.” – Aviva Rutkin, NewScientist

The Facebook researchers’ findings have not been immune to scrutiny, however. Many have criticized the peer-reviewed Science study over the last few weeks, citing in particular a lack of representativeness of the Facebook users examined in the study. Aviva Rutkin also writes about several criticisms of the study. Zeynep Tufekci, a professor at the University of North Carolina, Chapel Hill, writing on Medium, responded to the study thus:

[C]onfusingly, the researchers compare whether algorithm suppression effect size is stronger than people choosing what to click, and have a lot of language that leads Christian Sandvig to call this the “it’s not our fault” study. I cannot remember a worse apples to oranges comparison I’ve seen recently, especially since these two dynamics, algorithmic suppression and individual choice, have cumulative effects. Comparing the individual choice to algorithmic suppression is like asking about the amount of trans fatty acids in french fries, a newly-added ingredient to the menu, and being told that hamburgers, which have long been on the menu, also have trans-fatty acids — an undisputed, scientifically uncontested and non-controversial fact. – How Facebook’s Algorithm Suppresses Content Diversity (Modestly) and How the Newsfeed Rules Your Clicks

Eli Pariser, author of The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think, also responded to the study on Medium.com.

Yes, using Facebook means you’ll tend to see significantly more news that’s popular among people who share your political beliefs. And there is a real and scientifically significant “filter bubble effect” — the Facebook news feed algorithm in particular will tend to amplify news that your political compadres favor. This effect is smaller than you might think (and smaller than I’d have guessed.) On average, you’re about 6% less likely to see content that the other political side favors. Who you’re friends with matters a good deal more than the algorithm. But it’s also not insignificant.

In its press outreach, Facebook has emphasized that “individual choice” matters more than algorithms do — that people’s friend groups and actions to shield themselves from content they don’t agree with are the main culprits in any bubbling that’s going on. I think that’s an overstatement. Certainly, who your friends are matters a lot in social media. But the fact that the algorithm’s narrowing effect is nearly as strong as our own avoidance of views we disagree with suggests that it’s actually a pretty big deal. – Eli Pariser

More reading:

On Facebook, you control the slant of the news you choose, by Bruce Bower, Science News

Don’t (just) blame Facebook: We build our own bubbles, by Scott Johnson, ArsTechnica

Surprise: Facebook Says that Facebook A-Okay for News! By John M. Grohol, PsychCentral

The Problems With Facebook’s Polarization Study, by Annie Lowrey, ScienceOfUs

[D]espite the buzz this study is getting, we still don’t have a very good sense of how Facebook and other social-media services might or might not contribute to polarization. – Annie Lowrey

 

That’s it for this month! What did you think of these top five scientific studies? Come back next month for more!

In the last blog post in our researcher series, we included some perspectives on Altmetric from some metrics-savvy researchers. One of the responses was from Jean Peccoud, who commented on the Altmetric score, saying it “can [sometimes] feel a little like black magic”.

This isn’t the first time we’ve heard this, or similar, and we appreciate that people are keen to understand more about what goes on in the background to calculate the score for each research output. Our aim for this blog post, therefore, is to provide more detail around the Altmetric scoring system, and to offer insight into the weighting we give to each source we’re tracking.

We hope this post will help to answer some of the questions researchers new to altmetrics may have about how Altmetric collects and displays attention data. For those who are already familiar with Altmetric and use it to monitor the attention for their research, we hope this post will refresh their memories and provide a bit more context around the data.

Where can I find the Altmetric score?
The Altmetric score appears in the middle of each Altmetric donut, which is our graphical representation of the attention surrounding a research output. It can often be found on publisher article pages, and also appears in any of our apps and in the Altmetric Bookmarklet.

The colours of the donut represent the different sources of attention for each output:

Image: the Altmetric donut colour key, showing which colour corresponds to each source of attention.

Why do Altmetric assign a score to articles at all?

The Altmetric score is intended to provide an indicator of the attention surrounding a research output. Although it may be straightforward enough to monitor the attention surrounding a single research output, it becomes much harder to identify where to focus your efforts when looking at a larger set. The number alone cannot, of course, tell you anything about what prompted the attention, where it came from, or what people were saying, but it does at least give you a place to start – “is there online activity around this research output that would be worth investigating further?”

We work with a lot of publishers and institutions who want to be able to see which articles are getting the most (or indeed the least) attention. They’re interested not only in monitoring the attention of single articles, but also in placing that measure within the context of the journal the article comes from, or in comparison with other publications from peers. Again, we’d always encourage anyone looking at our data to click through to the Altmetric details page for each output to read the content of the mentions and see what people are saying about the item, rather than using the numbers alone to draw conclusions about the research.

How is the score calculated?
The Altmetric score is calculated automatically by a weighted algorithm. It is based on 3 main factors:

1. The volume of the mentions (how many were there?)
2. The source of the mentions (were they high-profile news stories, re-tweets, or perhaps a Wikipedia reference?)
3. The author of the mentions (was it the journal publisher, or an influential academic?)

Image: an example Altmetric details panel for an article, showing its mention counts and a score of 85.

Combined, the score represents a weighted approximation of all the attention we’ve picked up for a research output, rather than a raw total of the number of mentions. You can see this in the example above – the article has been mentioned in 2 news outlets, 2 blogs, 6 Facebook posts, 84 tweets, 1 Google+ post and 1 Reddit post. However, the score is 85, not 116.

That said, each source is assigned a default score contribution – as detailed in the list below:

Image: the list of default score contributions for each source we track.

These default scores are designed to reflect the reach and level of engagement of each source: a news story, for example, is for the most part likely to be seen by a far wider audience than a single tweet or Facebook post. It’s also worth mentioning that social media posts are scored per user. This means that if someone tweets about the same research output twice, only the first tweet will count. Blog posts are scored per feed; if two posts from the same RSS feed link to the same article, only the first will be counted.

You’ll have noticed that the Altmetric score for any individual research output is always a whole number – each time a new mention is picked up, the score is rounded to the nearest whole number. For example, a single Facebook post about an article contributes 0.25 to the score, but if that post were the only mention, the score for the article would show as 1. Four Facebook posts mentioning the same research output (4 x 0.25) would likewise contribute only 1 to the overall score.
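For the technically minded, here’s a minimal sketch of how that arithmetic could fit together – a weighted sum over de-duplicated mentions, displayed as a whole number. Only the Facebook value (0.25) comes from this post; the news, blog and tweet weights below are illustrative placeholders rather than our actual values, and the “never below 1” behaviour is an assumption based on the Facebook example above.

```python
# Illustrative default contributions per mention type. Only the Facebook
# value (0.25) is stated in this post; the news, blog and tweet values are
# placeholder guesses, not Altmetric's real weights.
DEFAULT_CONTRIBUTION = {
    "news": 8.0,
    "blog": 5.0,
    "tweet": 1.0,
    "facebook": 0.25,
}

def score(mentions):
    """mentions: a list of (source_type, author_id) pairs for one output.

    Each author (or feed) counts once per source type, mirroring the
    per-user / per-feed rule described above.
    """
    seen = set()
    total = 0.0
    for source_type, author_id in mentions:
        if (source_type, author_id) in seen:
            continue  # a second tweet or post from the same account is ignored
        seen.add((source_type, author_id))
        total += DEFAULT_CONTRIBUTION.get(source_type, 0.0)
    if total == 0:
        return 0
    # Shown as a whole number; assumed to be at least 1 whenever there is
    # any attention at all (so a lone 0.25 Facebook post displays as 1).
    return max(1, round(total))

# Four Facebook posts from different users contribute 4 x 0.25 = 1 overall.
print(score([("facebook", f"user{i}") for i in range(4)]))  # -> 1
```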

Weighting the score
Beyond these default score contributions, another level of filtering is applied to try to reflect more accurately the type and reach of attention a research output has had. This is where the ‘bias’ and ‘audience’ of specific sources play a further part in determining the final score.

News outlets
News sites are each assigned a tier, which determines the amount that any mention from them will contribute to the score, according to the reach we determine that specific news outlet to have. This means that a news mention from the New York Times will contribute more towards the score than a mention from a niche news publication with a smaller readership, such as 2Minute Medicine. Each mention is counted on the basis of the ‘author’ of the post – therefore if a news source publishes two news stories about the same article, these would only be counted as one news mention.
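As a rough illustration of the idea, a news mention’s contribution could be looked up by outlet tier before being added to the score. The tier names and values below are hypothetical; only the principle – bigger reach means a bigger contribution, and repeat stories from the same outlet count once – comes from the description above.

```python
# Hypothetical tier contributions -- this post only gives the principle
# (bigger reach, bigger contribution), not the actual per-tier values.
NEWS_TIER_CONTRIBUTION = {
    "major_international": 8.0,   # e.g. a New York Times story
    "regional": 5.0,
    "niche": 3.0,                 # e.g. a specialist outlet like 2Minute Medicine
}

def news_contribution(stories):
    """stories: a list of (outlet_name, tier) pairs for one research output.

    Each outlet counts once, however many stories it runs about the same
    article, matching the per-author rule described above.
    """
    counted_outlets = set()
    total = 0.0
    for outlet, tier in stories:
        if outlet in counted_outlets:
            continue
        counted_outlets.add(outlet)
        total += NEWS_TIER_CONTRIBUTION[tier]
    return total

# Two NYT stories and one niche story -> 8 + 3, not 8 + 8 + 3.
print(news_contribution([("New York Times", "major_international"),
                         ("New York Times", "major_international"),
                         ("2Minute Medicine", "niche")]))  # -> 11.0
```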

Wikipedia 
In addition to the news weighting, the scoring for Wikipedia is static. This means that if an article is mentioned in one Wikipedia page, the score will increase by 3; if it is mentioned in several Wikipedia pages, the score will still only increase by 3. The rationale behind this is that Wikipedia articles can reference hundreds of research outputs. As such, a mention of a paper as a reference alongside lots of other research is not really comparable (in terms of reach and attention) to a mainstream news story that is only about one research paper. We consulted a Wikipedia expert when trying to decide on the appropriate scoring, and eventually decided to keep the score static to reduce the potential for gaming. If the score were to increase with each Wikipedia mention, people could potentially game the scoring by manually adding their publications as references to old articles, biasing their scores through illegitimate attention.

Policy Documents

The scoring for policy documents depends on the number of policy sources that have mentioned a paper. Mentions in multiple policy documents from the same policy source only count once. If, for example, a research output is mentioned in two policy documents from the same source, this will contribute 3 to the score. However, if two policy documents from two different policy sources mention the same research output, these would both count towards the score, so the score would increase by 6.
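Both of these rules boil down to counting distinct sources rather than individual mentions. Here’s a minimal sketch of that counting; the policy source names are just examples.

```python
def wikipedia_contribution(wikipedia_mentions):
    """A flat contribution of 3 if the output is cited on Wikipedia at all,
    however many Wikipedia pages reference it."""
    return 3 if wikipedia_mentions else 0

def policy_contribution(policy_mentions):
    """policy_mentions: a list of (policy_source, document_id) pairs.

    Multiple documents from the same policy source count once; each
    distinct source adds 3 to the score.
    """
    unique_sources = {source for source, _ in policy_mentions}
    return 3 * len(unique_sources)

# Two documents from one (hypothetical) source -> 3;
# one document each from two different sources -> 6.
print(policy_contribution([("Policy Body A", "doc1"), ("Policy Body A", "doc2")]))  # -> 3
print(policy_contribution([("Policy Body A", "doc1"), ("Policy Body B", "doc2")]))  # -> 6
```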

Social media posts
For Twitter and Sina Weibo, the original tweet or post counts for 1, but retweets or reposts count for 0.85, as this type of attention is more secondhand (and therefore does not reflect as much engagement as the initial post). Again, the author rule applies; if the same Twitter account tweets the same link to a paper more than once, only the first tweet will actually count towards the score (although you’d still be able to see all of the tweets on the details page). For tweets, we also apply modifiers that can sometimes mean the original tweet contributes less than 1 to an article’s score. These modifiers are based on three principles:

  • Reach – how many people is this mention going to reach? (This is based on the number of people following the relevant account.)
  • Promiscuity – how often does this person tweet about research outputs? (This is derived from the number of articles mentioned by this Twitter account in a given time period.)
  • Bias – is this person tweeting about lots of articles from the same journal, thereby suggesting promotional intent?

These principles mean that if (for example) a journal Twitter account regularly tweets about papers they have just published, these tweets would contribute less to the scores for these articles than tweets from individual researchers who have read the article and just want to share it – again, here we are trying to reflect the true engagement and reach of the research shared. This can also work the other way; if (for example) a hugely influential figure such as Barack Obama were to tweet a paper, this tweet would have a default score contribution of 1.1, which could be rounded up to a contribution of 2.
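Here’s a sketch of how the retweet discount and the three modifiers might combine for a single tweet. The 0.85 retweet value is stated above; the way reach, promiscuity and bias are turned into numbers here is purely illustrative, not our actual formula.

```python
def tweet_contribution(is_retweet, follower_count,
                       papers_tweeted_recently, same_journal_fraction):
    """The contribution of a single tweet to an article's score.

    The 0.85 retweet value is stated in this post; the reach, promiscuity
    and bias formulas below are made-up stand-ins for the undisclosed
    modifiers.
    """
    base = 0.85 if is_retweet else 1.0

    # Reach: accounts with enormous followings can push a tweet above 1
    # (which is how a contribution like 1.1 could arise).
    reach = 1.1 if follower_count > 1_000_000 else 1.0

    # Promiscuity: accounts that tweet about very many papers contribute
    # less per tweet.
    promiscuity = 0.5 if papers_tweeted_recently > 100 else 1.0

    # Bias: accounts that mostly tweet papers from one journal (suggesting
    # promotional intent) also contribute less.
    bias = 0.5 if same_journal_fraction > 0.8 else 1.0

    return base * reach * promiscuity * bias

# A journal account tweeting its own latest paper contributes less...
print(tweet_contribution(False, 20_000, 300, 0.95))  # -> 0.25
# ...than an individual researcher sharing it once.
print(tweet_contribution(False, 2_000, 5, 0.1))      # -> 1.0
```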

Combating gaming
Gaming is often mentioned as a risk of altmetrics (although in principle it applies to any kind of metric that can be influenced by outside behaviour). Researchers are keen to compare themselves to others, and many in the academic world have taken to using numbers as a proxy for ‘impact’. Altmetric have taken steps to combat practices that could amount to gaming or otherwise illegitimately influence the score, including:

  • Capping measures for articles that have more than 200 Twitter or Facebook posts with exactly the same content. For articles like these, only the first 200 identical posts will count towards the score, to prevent articles with lots of identical social media posts from having much higher scores than articles with more legitimate, unique attention (a short sketch of this cap follows this list).
  • Flagging up and monitoring suspect activity: where an output sees an unusual or unexpected amount of activity, an alert is sent to the Altmetric team, who investigate to determine whether or not the activity is genuine.
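Here’s the promised sketch of the identical-post cap from the first bullet. The 200 threshold comes from this post; everything else is illustrative.

```python
from collections import Counter

IDENTICAL_POST_CAP = 200  # stated above: only the first 200 identical posts count

def capped_posts(posts):
    """posts: a list of post texts (tweets or Facebook posts) for one article.

    Returns the posts allowed to contribute to the score: any text that
    appears more than 200 times only counts 200 times.
    """
    counts = Counter()
    kept = []
    for text in posts:
        counts[text] += 1
        if counts[text] <= IDENTICAL_POST_CAP:
            kept.append(text)
    return kept

# 500 copy-pasted tweets plus 3 genuine ones -> only 203 posts count.
spam = ["Read our amazing new paper!"] * 500
genuine = ["Neat method in this paper", "Worth a read", "Interesting result"]
print(len(capped_posts(spam + genuine)))  # -> 203
```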

The most powerful tool we have against gaming, however, is that we display all of the mentions of each output on the details page. By looking beyond the numbers and reading the mentions, it is easy to determine how and why any item has attracted the attention that it has – and therefore to identify whether or not it is the type of attention that you consider of interest.

What’s not included in the score?
Lastly, it’s useful to remember that some sources are never included in the Altmetric score. This applies to Mendeley and CiteULike reader counts (because we can’t show you who the readers are – and we like all of our mentions to be fully auditable), and any posts that appear on the “misc” tab on the details page (misc stands for miscellaneous).

We get asked about the misc tab quite a lot, so I thought it would be good to explain the rationale behind it. We add mentions of an article to the misc tab when they would never have been picked up automatically at the point when we are notified of them. This could have been because we’re not tracking the source, or because the mention did not include the right content for us to match it to a research output. By adding posts like this to the misc tab, we can still display all the attention we’re aware of for an article without biasing the score through excessive manual curation.

We hope that by posting this blog, we’ve managed to shed some light on the Altmetric score and the methods that go into calculating it. As always, any comments, questions or feedback are most welcome. Thanks for reading!

In England, the recent national Research Excellence Framework (REF) exercise is using “real world” impact for the first time to determine how much money institutions will be allocated from the Higher Education Funding Council for England. In this post, we examine the REF results and discuss the possibilities for documenting such impacts using altmetrics.

At Altmetric, we have a keen interest in understanding the public impacts of research. We’ve been following the UK Research Excellence Framework assessment exercise closely since it was piloted in the late 2000s (just a few years before Altmetric was formally founded). There’s overlap in the types of indicators of impact we collect (called “altmetrics” in the aggregate and including media coverage, mentions of research in policy documents, and more) and types of impact that were reported by institutions for the REF (impact on culture, health, technology, and so on).

So, when the REF results were announced in March, we naturally asked ourselves, “What (if anything) could altmetrics add to the REF exercise?”

In this post, I’ll give a brief background on the REF and its implementation, and then dive into two juicy questions for us at Altmetric (and all others interested in using metrics for research evaluation): What can research metrics and indicators (both altmetrics and citation metrics) tell us about the “real world” impact of scholarship? And can they be used to help institutions prepare for evaluation exercises like the REF?

First, let’s talk about how the REF works and what that means for research evaluation.

The REF wants to know, “What have you done for taxpayers lately?”

Many countries have national evaluation exercises that attempt to get at the quality of research their scholars publish, but the REF is slightly different. In the REF, selected research is submitted by institutions to peer review panels in various subject areas, which evaluate it for both its quality and whether that research had public impacts.

There is an enormous cost to preparing for the REF, which requires institutions to compile comprehensive “impact case studies” for each of their most impactful studies, alongside reports on the number of staff they employ, the most impactful publications their authors have published, and a lot more.

Some have proposed that research metrics–both citation-based and altmetrics–may be able to lessen the burden on universities, making it easier for researchers to find the studies that are best suited for inclusion in the REF. And a previous HEFCE-sponsored review on the use of citations in the REF found that they could inform but not replace the peer review process, as indicators of impact (not necessarily evidence of impact themselves).

But using metrics for evaluation is still a controversial idea, and some have spoken out against the idea of using any metrics to inform the REF. HEFCE convened an expert panel to get to the bottom of it, the results of which are expected to be announced formally in July 2015. (The results have already been informally announced by review chair James Wilsdon, who says that altmetrics–like bibliometrics–can be useful for informing but not replacing the peer review process.)

Image: the cover of the Kings College London and Digital Science REF report.

Until then, there is rich data to be examined in the REF2014 Impact Case Studies web app (much of which is available for analysis using its API) and this excellent, thorough summary of the REF impact case study results (pictured above). We decided to do some informal exploration of our own, to see what Altmetric data–which aim to showcase “real world” impacts beyond the academic sphere–and citation count data could tell us about the publications selected for inclusion in REF impact case studies.

What can altmetrics tell us about “real world” research impacts?

Going into this thought experiment, I had two major assumptions about what Altmetric and citation data could tell me (and what it couldn’t tell me) about the impact of publications chosen for REF impact case studies:

  • Assumption 1: Altmetric data could find indicators of “real world” impacts in key areas like policy and public discussion of research. That’s because there’s likely overlap between the “real world” attention data we report and the kinds of evidence universities often use in their REF impact case studies (i.e. mentions in policy documents or mainstream news outlets).

  • Assumption 2: Citation counts aren’t really useful for highlighting research to be used in the impact case study portion of the REF. Citations are measures of impact among scholarly audiences, but are not useful for understanding the effects of scholarship on the public, policy makers, etc. Hence, they’re not very useful here. That said, citation counts are the coin of the realm in academia, so it’s possible that faculty preparing impact case studies may use citations to help them select what research is worthy of inclusion.

These assumptions led to some questions that guided my poking and prodding of the data:

    1. Are there differences between what universities think have the highest impact on the “real world” (i.e. what is submitted as REF impact case studies) and what’s got the highest “real world” attention as measured by Altmetric? If so, what relevant things can Altmetric learn from these differences?
    2. If “impact on policy” is one of the most popular forms of impact submitted to the REF, do articles with policy impacts (as reported by Altmetric) match what’s been submitted in REF impact case studies?
    3. Can citation counts serve as a good predictor of what will be submitted with REF impact case studies?

I decided to dive into impact data for a very small sample of publications from two randomly chosen universities: The University of Exeter and The London School of Hygiene and Tropical Medicine.

I created three groups of publications to compare for each university, six groups total for comparing across both universities:

    1. Top ten articles by overall attention for each institution, as measured by Altmetric’s attention score;
    2. Top ten articles by attention for each institution that were submitted with a non-redacted REF impact case study; and
    3. Top ten articles by Scopus citation count for each institution*.

Though the REF impact case studies included publications primarily released between 2008-2013 (as well as older research that underpins the more recent publications), the publications I used were limited to those published online between 2012 and 2013, when the most comprehensive Altmetric attention data would be available.

I also used Altmetric to dig into the qualitative data underlying the pure numbers, to see if I could discover anything interesting about what the press, members of the public or policymakers were saying about each institution’s research, how it was being used, and so on.

Before going any further, I should acknowledge some limitations to this exercise that you should bear in mind when reading through the conclusions I’ve drawn. First and foremost, my sample size was too small to draw any conclusions about the larger body of publications produced across the entirety of England. In fact, while this data may show trends, it’s unclear whether these trends would hold up across each institution’s entire body of research. Similarly, using publication data from the 2012-2013 time period alone means that I’ve examined only a small slice of what was submitted with REF impact case studies overall. And finally, I used the Altmetric attention score as the means of selecting the highest attention articles for inclusion in this thought exercise. It’s a measure that no doubt biased my findings in a variety of ways.

With that in mind, here’s what I found out.

Are there differences between what universities think have the highest impact on the “real world” (i.e. are submitted as REF impact case studies) and what’s actually got the highest “real world” attention (as measured by the Altmetric score)?

In the Altmetric Explorer, you can learn what the most popular articles are in any given group of articles you define. By default, articles are listed by their Altmetric score: the amount of attention–both scholarly and public–that they receive overall.

You can also use the Explorer to dig into the different types of attention a group of articles has received and filter out all but specific attention types (mentions in policy documents, peer reviews, online discussions, etc).

So, I fed the list of articles from 2012-2013 that were submitted with each institution’s REF impact case studies into the Explorer, and compared their Altmetric attention data with that of the overall lists of publications from each institution during the same time period (sourced from Scopus). I then used the Altmetric score to determine the top ten highest attention articles from the REF submissions, and did the same for the overall list of articles from each institution. Those “top ten” lists were then compared, with unexpected results.
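For anyone who wants to try a similar comparison, the core of it is just a set intersection between two ranked lists. The sketch below assumes you have exported each list (for example as DOIs with scores) to CSV; the file names and column headings are hypothetical, not an actual Explorer export format.

```python
import csv

def top_ten_dois(path):
    """Read a hypothetical CSV export with 'doi' and 'altmetric_score'
    columns and return the ten highest-scoring DOIs."""
    with open(path, newline="") as f:
        rows = sorted(csv.DictReader(f),
                      key=lambda row: float(row["altmetric_score"]),
                      reverse=True)
    return {row["doi"] for row in rows[:10]}

# Hypothetical exports: all 2012-13 publications for one institution vs.
# those submitted with its REF impact case studies.
overall_top_ten = top_ten_dois("institution_all_2012_2013.csv")
ref_top_ten = top_ten_dois("institution_ref_case_studies.csv")

print("Overlap between the two top-ten lists:", overall_top_ten & ref_top_ten)
```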

There is no real overlap in articles that are simply “high attention” (i.e. articles that have the highest Altmetric scores) and what was submitted with REF impact case studies. That’s likely because the Altmetric score measures both scholarly and public attention, and gives different weights to types of attention that may not match up with the types of impact documented in impact case studies.

However, when you drill down into certain types of attention–in this case, what’s been mentioned in policy documents–you do see some overlap between the “high attention” articles of that type from each institution and what was submitted with REF impact case studies.

Even though the Altmetric score alone can’t always help choose the specific articles to submit with REF impact case studies, altmetrics in general may be able to help universities choose themes for the case studies. Here’s why: for both universities, the disciplines of publications submitted for the REF impact case studies (primarily public health, epidemiology, and climate change) closely matched the disciplines of overall “high attention” publications, as measured by the Altmetric Explorer.

So–we don’t yet have precise, predictive analytics powered by altmetrics, but altmetrics data can potentially help us begin to narrow down the disciplines whose research has the most “real world” implications.

If “impact on policy” is one of the most popular forms of impact submitted to the REF, do articles with policy impacts (as reported by Altmetric) match what’s been submitted in REF impact case studies?

Yes. We found many more articles with mentions in policy documents than were chosen for inclusion in REF impact case studies for each institution. Yet, it’s still likely that a human expert would be required to select which Altmetric-discovered policy document citations are best to submit as impact evidence.

To find articles with policy impacts, I plugged all publications published between 2012 and 2013 for both universities into the Explorer app. Then, I used Explorer’s filters to create a list of articles for each institution that were cited in policy documents.

Of all articles published between 2012 and 2013, thirty publications from University of Exeter and fifty-five articles from London School of Hygiene and Tropical Medicine had been mentioned in policy documents.

But how many of those articles were included in REF impact case studies? Turns out, the impact case studies from University of Exeter only included five articles of the thirty total that were found by our Explorer app to have policy impacts. And LSHTM impact case studies only included one of the fifty-five total articles that we found to have policy impacts.
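The policy comparison can be expressed the same way: filter to the outputs with at least one policy mention, then intersect with the DOIs that appear in the impact case studies. Again, the file names and columns below are placeholders rather than a real export format.

```python
import csv

def dois_with_policy_mentions(path):
    """Hypothetical export with 'doi' and 'policy_mentions' columns;
    returns the DOIs cited in at least one policy document."""
    with open(path, newline="") as f:
        return {row["doi"] for row in csv.DictReader(f)
                if int(row["policy_mentions"]) > 0}

def case_study_dois(path):
    """Hypothetical list of the DOIs submitted with REF impact case studies."""
    with open(path, newline="") as f:
        return {row["doi"] for row in csv.DictReader(f)}

policy = dois_with_policy_mentions("institution_all_2012_2013.csv")
case_studies = case_study_dois("institution_ref_case_studies.csv")

print(f"{len(policy)} outputs with policy mentions, "
      f"{len(policy & case_studies)} of them submitted with impact case studies")
```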

I think there are two possible reasons for this discrepancy. The first is that each university deliberately selected only a small subset of its scholarship with policy impacts, either to showcase the work likely to leave the most lasting mark on public policy, or because it wanted to submit research with only certain types of impacts (e.g. technology commercialisation). The other is that the researchers compiling impact case studies for their universities simply weren’t aware that these other citations in policy documents existed.

However, this is currently only speculation. We’ll need to talk with university administrators to know more about how impact case studies are selected. (More on that below.)

Can citation counts serve as a good predictor of what will be submitted to the REF?

No. Neither university submitted any articles with their impact case studies that were in the “top ten by citation” list, presumably because citations measure scholarly–not public–attention. However, that’s not to say that other universities would not use citation counts to select what to submit with REF impact case studies.

So what does this mean in practice?

Generally speaking, these findings suggest that Altmetric data can be useful in helping universities identify themes in the “real world” impact their research has had, and the diversity of attention that research has received beyond the academy. This data could be useful when building persuasive cases about the diverse impacts of research. For example, it can help scholars discover the impact that they’ve had upon public policy.

However, it’s unclear whether Altmetric data could help researchers choose specific publications to submit with impact case studies for their university. We’ll be doing interviews soon with university administrators to better understand how their selection process worked for REF2014, and whether Altmetric would be useful in future exercises.

There’s more digging to be done

In getting up close and personal with the Altmetric data during the course of this exercise, I came to realize that I had another assumption underlying my understanding of the data:

  • Assumption 3: There are probably differences in the types of research outputs that are included in REF impact case studies and what outputs get a lot of attention overall, as measured by Altmetric. There are also probably differences in the types of attention they receive online. I’ve guessed that Open Access publications were more likely to be included in impact case studies (as all REF-submitted documents must be Open Access by the time REF2020 rolls around), that the most popular articles overall saw more love from Twitter than chosen-for-REF articles, and that the most popular articles overall on Altmetric had orders of magnitude more attention than chosen-for-REF articles.

And that assumption led to three more questions:

    1. Are there differences between what’s got the highest scholarly attention (citations),  the highest “real world” attention (as measured by Altmetric), and what’s been submitted with REF impact case studies? If so, what are they?
    2. What are the common characteristics of the most popular (as measured by Altmetric) outputs submitted with REF impact case studies vs. the overall most popular research published at each university?
    3. What are the characteristics of the attention received by outputs submitted with REF impact case studies versus high-attention Altmetric articles?

So, I’m rolling up my sleeves and getting to work.

Next month, I’ll share the answers I’ve found to the questions above, and hopefully also the perspectives of research administrators who prepared for the REF (who can tell me if my assumptions are on the mark or not).

In the meantime, I encourage you to check out the REF impact case study website and the Kings College London and Digital Science “deep dive” report, which offers a 30,000-foot view of REF impact case studies’ themes.

* The Altmetric data and Scopus citation information for these three groups of articles has been archived on Figshare.

We spend a lot of time in the Altmetric office talking about the varied sources and different types of research outputs we track – but as the team has grown we’ve been having to work harder to keep track of them! As a handy guide (for you and for us!) here’s a summary of the events we’ll be speaking at or attending in the next few months:

MLA 2015
15th – 20th May, Austin, Texas
Product Specialist Sara Rouhi and Research Metrics Consultant Stacy Konkiel are in town today and tomorrow hosting altmetrics workshops. There’s still time to tweet them if you’d like to meet up!

ORCID-CASRAI Joint Conference
18th – 20th May, Barcelona, Spain
Altmetric Training and Implementations Manager Natalia Madjarevic is there to give an overview of the various automated workflows and systems we’ve set up to help institutions easily implement our platform. Keep an eye out for her and say hello if you get a chance!

CARA 2015 annual conference
24th – 27th May, Toronto, Canada
Digital Science rep Stuart Silcox will be attending and on hand to answer all of your altmetrics questions! Drop Stuart a line if you’d like to arrange to meet.

SSP
27th – 29th May, Arlington, VA
Phill Jones will be chairing an Altmetric panel, The Evaluation Gap: using altmetrics to meet changing researcher needs on Thursday the 28th of May. Join Phill and panelists Cassidy Sugimoto, Jill Rodgers, Terri Teleen, and Colleen Willis for an exciting discussion on the challenges and opportunities of these new metrics.

HASTAC
27-30th May, East Lansing, Michigan
Altmetric’s Research Metrics Consultant Stacy Konkiel will be attending the HASTAC conference this year. Stacy’s really interested in how we might further apply altmetrics to humanities disciplines, and is always up for discussing new ideas, so be sure to say hi!

Open Research Data: Implications for Science and Society
28th – 29th May, Warsaw, Poland
Product Specialist Ben McLeish will be giving a short presentation on “Digging for data: opportunities and challenges in an open research landscape” – and will be happy to meet to discuss any questions you might have.

NASIG
28th – 30th May, Washington DC
Sara Rouhi (based in Washington herself) will be speaking here as part of the Great Ideas Showcase. Sara will discuss our work with tracking attention to published research in public policy documents. She’ll look at some of the data we’ve gathered so far, and share feedback we’ve had from institutions who have been exploring it for themselves. Drop Sara a line if you’d like to chat, and be sure to stop by her session to find out more.

ARMA 2015
1st – 3rd June, Brighton, UK
A whole bunch of us are excited to be going to ARMA this year. There’ll be representatives from Digital Science, figshare and Symplectic (as well as us!). Register for the session we’re running in partnership with the University of Cambridge on the afternoon of the 2nd.

Digital Science Showcase
2nd June, Philadelphia
Altmetric founder Euan Adie will be speaking at this Digital Science event. Euan will discuss the opportunities for collaboration and showcasing which are gained from social, news and public policy attention. The overall theme for the day is “Technology Trends in Research Management, Showcasing Outputs & Collaboration”, and the line up is looking great!

Digital Science Showcase
4th June, Los Angeles
In the same week Euan will also speak at the Digital Science event being hosted in LA. The event will follow the same format as the Philadelphia day – with some excellent guest speakers presenting.

Impact of Science
4th – 5th June, Amsterdam, Netherlands
Altmetric COO Kathy Christian will be attending this event, which Altmetric are sponsoring for the first time. It promises to be an interesting couple of days, and do say hello to Kathy or get in touch if you’d like to arrange a chat.

ELAG annual conference
8th – 11th June, Stockholm, Sweden
Representatives from ETH Zurich will be presenting their motivations for and experience of implementing Altmetric at their institution, with support from Altmetric Product Specialist Ben McLeish. It’s shaping up to be a great session so do drop by if you can, or get in touch with Ben if you’d like to arrange a time to meet.

Open Repositories
8th – 15th June, Indianapolis
Stacy Konkiel will be attending on behalf of Altmetric, and she and the figshare team will be hosting a meetup on the Monday night (please email Stacy for details – free beer!). She’ll also be presenting a poster between 6-8pm on Tuesday the 9th, and is looking forward to participating in some thought-provoking sessions.

Symplectic UK User Conference
11th – 12th June, London, UK
We’ve very kindly been asked by our colleagues at Symplectic to present as part of their UK user day. Altmetric developer Shane Preece and Customer Support Exec Fran Davies will give an overview of our institutional platform, and discuss the API connector we’ve built for Symplectic Elements clients.

SLA 2015
14th – 16th June, Boston
SLA is going to be a busy one for us this year! Stacy will be there, and is presenting in the following sessions:

CERN Workshop on Innovations in Scholarly Communication (OAI9)
17th – 19th June, Geneva, Switzerland
Cat Chimes will be attending this workshop (alongside Digital Science’s Director of Research Metrics, Daniel Hook). Cat will be attending sessions and presenting a poster; “Understanding the impact of research on policy using Altmetric data”. Feel free to say hi or share any feedback or questions you might have.

ReCon
19th June, Edinburgh, UK
Euan will be speaking at the Edinburgh event, ‘Research in the 21st Century: Data, Analytics and Impact’. There’s also a hack day taking place on the 20th – get involved!

LIBER conference
24th – 26th June, London, UK
Our Training and Implementations Manager Natalia Madjarevic will be presenting alongside Manchester’s Scott Taylor. Natalia and Scott will be discussing our institutional platform – and Scott will share his experience so far of rolling it out amongst Manchester faculty.

ALA 2015
25th – 30 June, San Francisco
Altmetric’s Product Specialist Sara Rouhi has a packed schedule for ALA – but if you don’t have a chance to attend one of the sessions below feel free to get in touch if you’d like to chat with her.

  • Saturday June 27th:
    8:30 am – 10:00 am – Exhibits Round Table Program: The application of altmetrics in library research support
  • Sunday June 28th:
    10:30 – 11:30 am – Altmetrics and Digital Analytics Interest Group: “Navigating the Research Metrics Landscape: Challenges and Opportunities” (Marriott Marquis San Francisco, Pacific Suite B)
    1:00 – 2:30 pm – ALCTS CMS Collection Evaluation and Assessment Interest Group meeting – Bookmetrix presentation with Springer

EARMA
28th June – 1st July, Leiden, Netherlands
Product Specialist Ben McLeish will be presenting alongside Juergen Wastl from the University of Cambridge. Ben will give an overview of altmetrics and Altmetric, and Juergen will discuss Cambridge’s motivations and experience of adopting the Altmetric institutional tool.

ICSTI workshop on “Innovation in Scientific and Technical Information”
4th July, Hanover, Germany
Ben McLeish will be speaking at this event hosted by the International Council for Scientific and Technical Information. He’ll be giving an introduction to Altmetric and an insight into how altmetrics can be applied to help institutions and researchers better position themselves and their research outputs.

Mini-symposium on measuring research impact for the SAPC Annual Conference
8th July, Oxford, UK
Altmetric Product Development Manager Jean Liu is presenting at this event. Jean will share some of the latest developments from Altmetric, and is keen to learn more about how the scholarly community view and evaluate the broader dissemination of their work.

… and somewhere in all of this, we’re hoping to find time for a team BBQ – fingers crossed for sunshine!