23 diverse metrics to use in your next grant application

In yesterday’s post, I shared a new approach to documenting your “broader impacts” in grant applications using metrics and related data, with the aim of giving you solid impact evidence that will send your application to the top of the stack.

Today, let’s talk about some specific types of metrics you can use in your next grant application, including what impacts they communicate and tools that can gather them automatically.

Remember, we’re only interested in metrics that relate to the grant program you’re applying for and inform the type(s) of impact you’re arguing you’ve had. A “kitchen sink” approach this ain’t. Instead, use this guide as a starting point for carefully deciding which metrics are appropriate to include in your next grant application.

Overall attention & reach

“When I track metrics for my research, it’s often to track increases in popularity (using metrics like site views over time, unique vs. total search terms, and other Google Analytics metrics), understand how visitors are getting to my website (using incoming links), and learning more about who is using my scholarship (what countries they live in).” – Holly Bik, microbiologist

What’s the chance that the average person is familiar with your work, having seen it in the press, visited your website, or otherwise come across it online? While most attention metrics can’t guarantee that “clicks on a link == someone has read and fully engaged with your work”, they can gauge attention to your work more accurately than the attention metrics traditionally used: circulation statistics for the journals that print your articles or the number of libraries that own a copy of your book.

  • Website visitors: Google Analytics is widely regarded as the easiest-to-manage free web analytics tool, and it can tell you a lot about how visitors are using your website: how long they’re staying, how many pages they visit, how often they return, etc.
  • Mentions in the press: Coverage of your research in a high-profile newspaper like The New York Times can be a great way to get a lot of visibility very quickly. You can find this information using the Altmetric bookmarklet or in Scopus, which recently added mainstream media coverage sourced from Altmetric to the suite of metrics it reports.
  • Publisher, repository, and personal website downloads and views: Many publishers and repositories make article view and download statistics available publicly, and some of those that don’t will share that information privately with authors. Visits to scholarship that’s shared on personal websites can be tracked using Google Analytics (see above).
  • Twitter exposure: There are a number of services that will tell you how often your tweets, or tweets mentioning your research, have appeared in others’ timelines. Altmetric reports the “upper bound” (maximum number) of users who have seen a link to your research in their timelines. Twitter’s analytics tool is a great way to track the overall exposure of your tweets. And there are many, many platforms oriented towards marketers that can be used to further slice and dice your Twitter stats, including Followerwonk and Socialbro.
  • Blog readership: Most blogging platforms like WordPress have a baked-in analytics tool that can tell you how many readers your blog receives, which posts are the most popular, and where in the world your blog is being read. Poke around the backend of your blog to see what’s available to you. For WordPress blogs, I recommend installing the Jetpack plugin, which comes with a solid statistics dashboard.


Engagement

Are you attempting to raise awareness of issues you study among members of the public, policymakers, or scholars in other disciplines? Have you created successful programs in partnership with community organizations that are generating a lot of discussion in your region? Then you’re likely interested in the engagement evidence that the following services provide.

  • Social media followers, mentions, retweets, and potential exposure: SumAll is a great tool for tracking your overall engagement on Twitter over time: how many conversations you’re having, how widely your posts and tweets are being seen, and so on. This can be a useful way to know if many people are becoming familiar with you as a researcher. To showcase others’ engagement with your research articles and other outputs, Altmetric and Impactstory both generate reports describing who is saying what about your work, where in the world they’re saying it, and the number of Twitter users who have potentially seen a mention of your work in their timelines.

    (Figure: SumAll’s dashboard view)

  • Blog comments: If you share research-related updates on your blog, the number and context of substantive comments you receive can be one indicator of engagement. For example, when applying for grants, human sexuality researcher Dr. Debby Herbenick could describe how she uses her blog to answer readers’ sex- and health-related questions (slightly NSFW link), while Dr. Rosie Redfield could explain how her blog’s comments section is a popular place for discussion among scientists in her field.

International reach

It’s possible your work is being read and reused all over the world. Thanks to the social web’s rich data, it’s now easier than ever to document the international reach of your research.

  • Twitter, Mendeley, and Google Analytics maps: Altmetric details pages and Impactstory profiles both generate maps based upon the interest that research has received on Twitter and Mendeley (more on Mendeley below). Google Analytics’ dashboard includes a nice mapping interface, as well.

    (Figure: An example of a Twitter map from Altmetric)

  • International media coverage: Using the mainstream media tracking tools that Altmetric and Scopus offer (mentioned above), it’s relatively easy to find international news outlets that reference your research. Mention is a standalone, paid service that you can also use to track broader mentions of you or your research, based upon keyword searches. Here’s a thorough guide to creating targeted searches on Mention.

Diverse scholarly impacts

Researchers aren’t robots who only ingest scholarship in order to cite it in the peer-reviewed literature. There are a lot of stops along the way during the research lifecycle, and throughout it researchers use web-native tools to manage their reading lists, discuss each other’s work, and recommend the highest-quality scholarship. Following are some examples of metrics from those tools that you can use to showcase the many ways your research is influencing other scholars.

  • Mendeley readers and citations: Reference manager Mendeley is used by scholars around the world to save, share, and cite publications. Because Mendeley is a web-native tool, you’re able to gather data from it to learn how many other scholars have saved your articles to their libraries (a metric that’s been shown to be a solid “leading indicator” for later citations). Mendeley has also recently added a Stats dashboard that tells you how many times your articles have been cited by research in Scopus. You can sign up for Mendeley Stats here. By including Mendeley readership data in a grant application, you’re able to demonstrate that your work is being read by other scholars, and that it may also be used in professional, teaching, and educational activities (for more on Mendeley users’ motivations for bookmarking, see this article by Mohammadi, Thelwall, and Kousha (2015)).
  • Discussion on research blogs and Twitter: The Altmetric bookmarklet includes data from a carefully curated list of research blogs and also classifies Twitter users (so you can easily find out roughly how many scholars are discussing your work). Discussions on blogs have been found to have a slight correlation with later citations.
  • Recognition on Faculty of 1000 Prime: Articles recommended on Faculty of 1000 Prime are hand-picked by experts in the sciences, reviewed for their quality, and recognized for their contributions to advancing the field, helpfulness in teaching, and other areas. To be reviewed positively on Faculty of 1000 Prime is to have an expert’s stamp of approval on your research. Faculty of 1000 Prime reviews, too, have been found to correlate slightly with citations in the peer-reviewed literature.
  • Web of Science usage counts: Web of Science recently started reporting two types of “usage counts”: full-text requests and exports to reference managers. Full-text requests are counted when a researcher clicks through from the item record to call up a PDF or HTML version of the article (thereby demonstrating more than a passing interest in an article, and a possible intent to read). Exports to reference managers are when a researcher saves a citation from an item record in a format compatible with EndNote or other tools (demonstrating an intent to read and a possible intent to cite the article later on). You can find both types of metrics on the item record for your article in Web of Science.

Attention from practitioners

“Fifty percent of physicians look up conditions on the site, and some are editing articles themselves to improve the quality of available information.” – Julie Beck in The Atlantic

  • Wikipedia mentions: Doctors and patients alike use Wikipedia to understand and diagnose illnesses, and many doctors have taken to editing articles to improve the quality of information available, reports The Atlantic. For public health researchers, this fact can make links to their scholarship all the more valuable. You can find Wikipedia mentions for research articles in both the Altmetric bookmarklet and Impactstory.
  • PubMed Central views: View statistics for articles deposited in PubMed Central are available via the NIH Manuscript Submission System or, for PLOS-published articles, on the PLOS article metrics page under the “Viewed” section.
  • Citations in public policy: The path from research to policy is rarely direct (Haynes et al., 2011). So while being cited in public policy documents isn’t itself a guarantee that your research has made a lasting impact upon policy, the right kinds of citations might. To find citations in public policy documents, you can use the Altmetric bookmarklet (which currently indexes policy from a curated list of governments and NGOs) or a carefully constructed Google Alerts search.

Use of non-article outputs

Many of the above examples relate to the impacts of journal articles, but scholars share many other types of valuable research outputs with the world every day: datasets, software, presentations, and white papers, among others. Here are just a few of the types of metrics you can find for non-article outputs.

  • Downloads of software, data, presentations, and white papers: What’s the level of attention your research has received? If you’ve shared your research online, chances are there are download statistics available for it. Outputs shared on repositories like Figshare and Dryad have pageview and download stats available on items’ pages; you can find download stats for Python- and R-based software shared on GitHub on Depsy; and presentations shared on Slideshare have download and view statistics readily available either through the presentation’s web page or, if you have a profile, on Impactstory.

    (Figure: Depsy’s suite of software-related metrics)

  • Inclusion of software in influential software libraries: Maybe a script you created is used as part of another piece of software that is widely used in your discipline. If that’s the case, you’ve had a lot of indirect impact upon computing in your field, and you should be recognized for it. Impactstory’s new webapp, Depsy, uses a Google-like “dependency PageRank” to highlight when Python and R-based software is used as the building blocks for more influential projects. Search for your own project on Depsy.

Technology commercialization

“Scientists who work on applied research tend to patent more than academics who pursue basic research.” – Calderini et al. 2007 in Markman, Siegel and Wright, 2008

Educational impact

“To enable learning programming at scale, I created Online Python Tutor (pythontutor.com), a code visualization and social learning platform that has been used by over 1.5 million people in 180 countries to visualize over 13 million pieces of code.” – Philip Guo, computer scientist

  • Use of learning objects: The example above from Philip Guo showcases how a non-traditional “learning object” has had a broad impact. If you’ve made your research or educational outputs available on your personal website, Google Analytics or Mixpanel are your best bets to track the attention that your work has received. If your outputs are in a repository like Figshare, you can use those systems’ built-in reports to find the number of downloads and views you’ve received, or the Altmetric bookmarklet to discover where that work has been discussed and shared on the social web.
  • Inclusion of research in syllabi: If your work is considered canonical in your field, chances are that it’s being used to teach. Unfortunately, it’s difficult to find syllabi where your work has been mentioned, as many instructors now wall off access to their class materials using systems like Blackboard. One workaround is Dan Cohen’s Syllabus Finder tool, which scraped the Web between 2002 and 2009 to collect syllabi. While it’s limited, it is currently the best tool available for searching syllabi.
  • Recognition by experts in F1000 Prime: F1000 Prime reviews sometimes recognize scholarship as being “good for teaching”. This information, coupled with the data suggested above, can show that your work is not only used by many, but also recommended by experts in your discipline.

Impact for recently published work

Citations are still the gold standard in many circles for having lasting impact upon a discipline, but they can take years to accumulate. So, how can you showcase the potential for long-term impact upon a field for papers you’ve only recently published? Three “leading indicators” you might consider using to do so are: Mendeley readers (moderate correlation with later citations), Faculty of 1000 Prime reviews (slight correlation), and mentions on research blogs (slight correlation).

It’s important to keep in mind that the potential for later citations is not the most compelling type of impact that your work may have (even if it is the metric that academia’s most obsessed with). Altmetrics are useful precisely because they help fill in the gaps in knowledge we have about items’ impact, and because many of them do not correlate with citations at all. Instead, they tell us something else about the many flavors of impact our work might have.

Make the data meaningful

The best ways to make any metrics you provide useful to grant reviewers are to include relevant qualitative data and context for the numbers you list.

Most of the services listed above that are used to find metrics can also be used to find the qualitative data that underlies those metrics. It’s often more compelling to know that a Nobel laureate has positively reviewed your work on Faculty of 1000 Prime than it is to know that you’ve gotten 15 Faculty of 1000 Prime reviews. So, include relevant examples in addition to the numbers wherever possible.

You should also provide context in the form of percentiles where you can. Are those 15 Faculty of 1000 Prime reviews a lot or a little, compared to the number that other articles in your discipline receive? It’s more useful to say that you’ve got 15 reviews, which puts your article in the 99th percentile of articles published in your field in the same year. Impactstory provides percentiles for all the metrics it offers, and Altmetric offers percentiles for the Altmetric score of each article (a summary of the overall quantity of online attention that research has received).

Where to include this data

There are many places you can include impact evidence during the grant application process:

  • In your NSF or NIH Biosketch, when describing important or relevant work (aka “synergistic activities” or “contributions to science”) you’ve done
  • In your grant narrative or cover letter, when describing why certain past projects make you well-suited for the current line of inquiry you seek funding for
  • In your “Results from Prior NSF Support” section, when describing the intellectual merit or broader impacts of that previous research
  • Wherever else you are asked to provide evidence for engagement

A great guide to documenting impacts in NIH Biosketches comes from Karen Gutzman and Pamela Shaw, both librarians at Northwestern University who regularly help researchers craft winning NIH grants. In it, they recommend that researchers:

  • Consider all their research outputs
  • Highlight the full range of those outputs
  • Discuss the specific impacts of one or more outputs
  • Showcase successful dissemination to stakeholders like the public or other researchers

You can also use the above metrics as possible evaluation criteria for the grant you’re currently applying for. Consider how you might use such metrics to evaluate the success of your work, if you get funded, and include them in your application along with your specific plans to track these metrics.

Have you used metrics or other research impact data in a grant application? We’d love to hear about your experience in the comments below. What data did you include? How did you include it? What was the result of your application?

Screencap of tweets, policy documents, and news articles showcasing various types of research impact.

Has your research introduced popular new methods to your discipline, had an influence on public policy, or changed the way the public understands complex topics like knowledge transfer? If so, such evidence likely exists online and can be used to make a case for funding.

Did you know that the average science PI spends 116 hours preparing a single grant proposal?

How much of that time is spent finding and documenting evidence of “broader impacts” and engagement?

Luckily, it’s now possible to streamline at least some of the grant preparation process, so researchers can spend less time on paperwork and more time doing research. Tools like the Altmetric bookmarklet and Impactstory can help you discover your “broader impacts” evidence without a lot of work. And that evidence will help you stand out when applying for grants.

Think about it: so much of a grant proposal is an explanation of why you are qualified to advance research in a particular area. It follows that if you can provide hard evidence of your past success in doing outreach to the public and delivering other types of “broader impacts”, in addition to excelling in other areas of the application process, you could give yourself an advantage over those whose claims to greatness remain unsubstantiated.

This is the first of two posts that will offer practical advice (from experts including an NSF program officer, a microbiologist, and librarians who work regularly with NIH-funded faculty) for the best use of metrics in grant applications.

However, as this area of application for research metrics is so new, it is as much a thought exercise as anything else. I’d welcome your feedback in the comments below!

In this post, I’m going to describe some of the types of potential impact funding agencies are looking for in fundable projects, how researchers are currently using metrics in grant applications, and the surprising reason why most funding agencies do not tell you how to use metrics in your proposals.

The types of impact granting agencies seek

Increasingly, funders want to know how your work is having an effect upon society. Is it making a difference in the lives of everyday people? This type of impact is often described as “broader impacts”. Broader impacts can be:

  • Increasing diversity in STEM
  • Improving scientific literacy among the public
  • Improving public health or safety
  • and many more things

Funders are also interested in supporting research of “intellectual merit”: research that advances knowledge in a discipline. Examples of intellectual merit include:

  • Making new connections between disciplines
  • Applying new approaches to existing questions
  • Developing new tools or methods for data analysis

Not all granting agencies will use the terms “broader impacts” and “intellectual merit” in their own programs. For example, the Wellcome Trust simply wants to “improve health for everyone by helping great ideas to thrive.” But the idea is usually the same, no matter the funder: they want to support research that will change the world.

OK, but how on earth does one prove that they’re changing the world?

How researchers are currently using metrics in their grant applications

Many diverse metrics tend to be included in grant applications, including:

  • Amount of grant dollars previously awarded
  • Number of graduate and undergraduate mentees
  • Reach of previous research, in terms of people educated, lives saved, or other benefits

Why do people use these metrics? A simple answer can be found in this NIH guideline for writing a proposal:

[Applicants should] capture the reviewers’ attention by making the case for why NIH should fund your research. Tell reviewers why testing your hypothesis is worth NIH’s money, why you are the person to do it [emphasis mine], and how your institution can give you the support you’ll need to get it done. Be persuasive.

Is there anything more persuasive to a scientist than well-applied data?

The key phrase here is well-applied: the metrics you use have to match the point you’re trying to make about the importance of your work. For your metrics to be useful, the metric(s) you use must:

  • Relate to the grant program you’re applying for, and
  • Inform the type(s) of impact you’re arguing you’ve had.

Here’s an example: are you an excellent mentor who’s applying for an NSF “Professional Formation of Engineers: REvolutionizing engineering and computer science Departments” grant? Then the number of graduates you’ve helped and their recent grants, publications, and job offers could be good data to include in your application, as it provides specific evidence of your impact in changing student-oriented practices in your department.

“But,” you might be asking, “what about citations?” Though their use is pervasive in other areas of academia, citation-based metrics are not often used in grant applications. For example, citation counts for articles tend not to be included in applications for some NSF directorates, and many people agree that the journal impact factor should never be used to judge grant applications. A research group within the NIH did recently propose the Relative Citation Ratio, but the metric has its drawbacks and doesn’t appear to actually be in use for evaluations.

Citation-based metrics don’t tell grant reviewers how your work has made a contribution to your field. Research is cited for many, many reasons, after all. And those metrics can’t be used to describe “broader impacts” upon larger society–how research is translated into practice for the benefit of humanity–because they only measure the discussion of research among scholars.

That’s where altmetrics come in. Altmetrics are data that can better illustrate the various types of impact that research might have: educational, policy, public health, technology commercialization, and more. They’re especially useful to help describe the impact of research that comes in forms other than a journal article (datasets, software, interactive websites, etc).

Microbiologist Holly Bik told me via email,

“If you’re creating public outreach tools like blog posts or things that are often used by other scientists like software, it can be a problem because these newer outputs aren’t valued like journal articles are, and aren’t cited in the same ways. But if you have metrics for how those outputs are being reused, you can prove how valuable your work is: you can show people charts of regular website visitors, number of software citations, interesting collaborations that result, and more. Altmetrics are the only way to communicate those important but non-traditional impacts.”

And as computer scientist Noah Smith explains over on Quora,

“The only thing I can think of that’s vaguely related [to using metrics in grant applications] is providing data on how often a software package released to other researchers was downloaded or (maybe) cited as having been used. This can be taken as an indicator that the researcher can produce tools that others value.”

Noah and Holly confirm my previous point: the metrics you use must be appropriate to your goals.

What the grant guidelines won’t tell you

Grant preparation guidelines rarely give instruction on how to use metrics in your grant application; that’s a fact. The omission is intentional, and it’s for a very good reason.

As NSF Program Manager Daniel S. Katz explains, “There is a group working on gathering metrics that have proven useful for [NSF Software Infrastructure for Sustained Innovation] projects, but I’m hesitant to provide examples myself, with the fear that new proposers will read my examples and decide they are the ‘right’ ones for them to use too.”

Dan knows that it’s human nature to want to know (and use) the proven, “right” metrics (even if they’re sometimes not very applicable to one’s own work).

Be aware of this inclination and try not to succumb to it when preparing your own grant applications! Of course, knowing what others have done can sometimes be useful, but it can just as often be irrelevant.

So what metrics should you use?

There’s a lot of data out there that can be used to illustrate the many types of broader impacts and intellectual merit your work has had: your influence upon public policy, widespread use of your software in your discipline, or the fact that your articles and books are used to teach students worldwide.

In my next post, I’ll describe the time-saving tools you can use to collect this data in one place, as-it-happens. Stay tuned!

Our Altmetric Ambassador of the month for November is Elisabeth Vogler, a PhD student and lecturer at the German Institute for International Educational Research. Elisabeth has a Master’s degree in library and information science, and is planning to write her PhD thesis on altmetrics. She is currently working on a project entitled “Altmetrics for Education Science in Germany”, the objective of which is to study altmetrics data for German educational research outputs and identify interesting stories that can be told using this data. She was recently invited to present on altmetrics to a large audience of librarians at the Institutes of the Fraunhofer Society. These credentials suggest that Elisabeth is something of an authority on altmetrics in the German research community! Stacy Konkiel and I had the pleasure of meeting and chatting to Elisabeth at the 2:AM conference in Amsterdam last month.

Elisabeth is currently teaching a 13-week optional module for Masters students at the University of Applied Sciences Darmstadt. She is teaching this module alongside Prof. Dr. Marc Rittberger, who is Director at the Information Center for Education at DIPF. The module is called “Alternative Metrics in Web 2.0”, and covers a range of topics under an overarching theme of bibliometrics and altmetrics. They have taught five lectures on the course so far, and have covered some introductory issues about the nature of science communication in the digital age. They have also talked about more specific topics such as the sources tracked by altmetrics providers, and the potential use cases for the data. The later lectures on the course will be discussion-based classes where the students will present their own opinions on the usefulness of altmetrics tools, and consider how the applications made possible by “the web 2.0” have changed research dissemination.

I asked Elisabeth what she finds useful about altmetrics data, and about why it is her preferred research topic. She said that she likes the immediacy of the data, and the fact that you can click through to the mentions and see who has been sharing your research.

“Now that we can track the discussion immediately via online mentions, a person can reconstruct the thoughts of others and find new questions, ideas, points of criticism, new fellows to talk to or maybe someone who might be interested in supporting research in that area. I love that Altmetrics make a dialogue between a scientist and his reader possible, and not only to other scientists but also readers that might not be experts but just have an interest in something.”

Like fellow ambassadors of the month Nader Ale Ibrahim and Colleen Willis, Elisabeth is clearly someone with a genuine interest in altmetrics, who wants to become an expert on the subject and sees educating fellow researchers as a crucial part of that process. She really embodies the aims of the Ambassador program, and has taken thoughtful steps to make the data accessible and understandable for others at her institution. We’re very pleased to have attracted people like Elisabeth with the Ambassadors scheme, and we can’t wait to hear the outcome of her discussions with her students at the end of the course!


*The Altmetric Ambassadors program is currently not accepting new applicants, but stay tuned for further updates in the new year! You can also follow #altambs on Twitter. Thanks for reading!

Explore attention data for more than 10,000 ClinicalTrials.gov study records

We’re proud to announce today that Altmetric has begun displaying a wealth of online attention paid to clinical trial study records from ClinicalTrials.gov, the world’s largest registry of clinical trials. Operated by the United States National Library of Medicine (NLM) at the National Institutes of Health, ClinicalTrials.gov holds registrations for about 200,000 trials from more than 170 countries. (You can find out more about the registry here.) Currently, online attention collected from our various sources since 2014 has been matched to over 10,000 study records from ClinicalTrials.gov; each of these has its own Altmetric details page.

We are excited to introduce attention tracking for this new research output type, as many of our users are involved in clinical research and publication. We hope that adding support for ClinicalTrials.gov will be valuable for the medical community in general, too. Patients, researchers, healthcare professionals, regulators, and many others can now read the conversations and media coverage surrounding specific clinical trials, even before results have been published.

(Figure: Screenshot of the new Clinical Trials details page)

For example, this clinical trial run by InSightec recently made headlines when the blood-brain barrier was non-invasively opened for the first time in a brain tumour treatment; several news outlets and blogs directly linked to the trial’s study record (see the Altmetric details page) but the results have not been published yet.

And this clinical trial run by Sarepta Therapeutics has already been mentioned in Wikipedia, well before the trial has even finished recruiting participants.


How to find attention data for ClinicalTrials.gov study records

Users of the Altmetric Explorer and Altmetric for Institutions can quickly find all tracked ClinicalTrials.gov study records by entering “ClinicalTrials.gov” into the journal filter on the left sidebar. Within the Explorer and AFI, all users can browse through the entire collection of clinical trial study records and read through all the mentions.

In order to retrieve attention data for an individual clinical trial, users can input the study record’s National Clinical Trials (NCT) identification number (e.g. NCT00346216) within the identifiers filter on the left sidebar of the Explorer and AFI.

Altmetric for Institutions users have the added benefit of being able to enter NCT identification numbers into Custom Groups, and can thus include clinical trial study records in their collections of publications. Soon, we’ll also be enabling ClinicalTrials.gov records to be imported into Altmetric for Institutions, so that these too can be associated with authors and departments.

Since Altmetric treats the NCT identification number in the same way as any other scholarly identifier, users can also query the basic and commercial APIs with the NCT identification number for the clinical trial study record.

For example, attention data for NCT identification number NCT00346216 can be retrieved through the basic (free) API.
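A sketch of what that call might look like from R follows; note that the nct_id route is an assumption modeled on the API’s other identifier endpoints (doi, pmid, etc.), so check the API documentation for the exact path.

```r
# Hypothetical sketch: fetch attention data for one clinical trial record,
# assuming a /v1/nct_id/ route analogous to /v1/doi/.
library(httr)
library(jsonlite)

resp <- GET("https://api.altmetric.com/v1/nct_id/NCT00346216")

if (status_code(resp) == 200) {
  details <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
  details$score  # the record's Altmetric score, among other returned fields
} else {
  message("No attention data found for this identifier.")
}
```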


At the moment, it isn’t possible to use the Altmetric Bookmarklet on ClinicalTrials.gov pages, but we do plan to add that in the future. For now, it’s easiest and quickest to access the data using the Explorer/Altmetric for Institutions and our API.


Like this feature? Let us know by sending us a message on Twitter or e-mailing us.

Welcome to Altmetric’s “High Five” for October, a discussion of the top five scientific papers with the highest Altmetric scores this month. On a monthly basis, my High Five posts examine a selection of the most popular research outputs Altmetric has seen attention for that month.

Appropriately for the month of Halloween, this month’s top scientific papers often border on the “outlandishly” curious!

Region that the Kepler Space Telescope can see.


Paper #1. Planet hunters find… Impact debris or an alien mega-structure?

Our top paper this month is titled “Planet Hunters X. KIC 8462852 – Where’s the Flux?” The paper, published in the Monthly Notices of the Royal Astronomical Society, is freely available via arxiv.org.

The paper uses insights gleaned by the Zooniverse citizen science network to investigate strange dips in flux, or light, from the star KIC 8462852. The star is invisible to the naked eye, but researchers have been observing it through the Kepler Space Telescope.

This paper presents the discovery of a mysterious dipping source, KIC 8462852, from the Planet Hunters project. In just the first quarter of Kepler data, Planet Hunter volunteers identified KIC 8462852’s light curve as a “bizarre”, “interesting”, “giant transit.” – Boyajian et al. 2015

The study attracted a great deal of attention from both traditional news and social media sources. And that is likely because one of the more outlandish (excuse the pun) scientific explanations for the sharp dips in flux observed coming from the star is that aliens have built a mega-structure around the star to harness its energy!

In a study first released online in September, a team of scientists has shown that KIC 8462852 has a mysterious flicker—for its age and type, the star should be much brighter than telescopes show it to be. While the research has not yet been reviewed for publication, it is already stirring up excitement in the scientific community and beyond; after eliminating other theories, some suggest that the only explanation for the flicker is the presence of light-blocking megastructures, built by aliens. – Akshat Rathi, qz.com

There are, of course, other less outlandish explanations for the flickering of this star. Science Friday has an informative and entertaining episode out on the study, which features study author Debra Fischer, a professor of astronomy at Yale University. Fischer describes two other explanations: a swarm of comets circling the star, or the debris of a recent planetary collision. Both of these explanations have their issues, however, leading scientists to still consider the distant possibility of an alien mega-structure in follow-up research. Some of this follow-up research will most likely “point a massive radio dish at the unusual star, to see if it emits radio waves at frequencies associated with technological activity.”

Jason Wright, an astronomer from Penn State University, is set to publish an alternative interpretation of the light pattern. SETI researchers have long suggested that we might be able to detect distant extraterrestrial civilizations, by looking for enormous technological artifacts orbiting other stars. Wright and his co-authors say the unusual star’s light pattern is consistent with a “swarm of megastructures,” perhaps stellar-light collectors, technology designed to catch energy from the star. – Ross Andersen, The Atlantic

Only additional research and in-depth observation of this star will reveal the true cause of the strange flickering of KIC 8462852.

My money is on comet collisions. But part of me hopes I’m wrong. – Stuart Clark, The Guardian

But whatever is causing the flickering of KIC 8462852, Phil Plait writes on his blog Bad Astronomy, it’s big.

KIC 8462852 is a star somewhat more massive, hotter, and brighter than the Sun. It’s about 1,500 light-years away, a decent distance, so it’s too faint to see with the naked eye. The Kepler data for the star are pretty bizarre: There are dips in the light, but they aren’t periodic. They can be very deep; one dropped the amount of starlight by 15 percent, and another by a whopping 22 percent! Straight away, we know we’re not dealing with a planet here. Even a Jupiter-sized planet only blocks roughly 1 percent of this kind of star’s light, and that’s about as big as a planet gets. It can’t be due to a star, either; we’d see it if it were. And the lack of a regular, repeating signal belies both of these as well. Whatever is blocking the star is big, though, up to half the width of the star itself! – Phil Plait, Slate



Elephants on the move in Amboseli National Park, Kenya, East Africa. Image by Diana Robinson, Flickr.com


Paper #2. Long live the elephant, or how elephants crush cancer

Our second High Five paper was published in JAMA this month, titled “Potential Mechanisms for Cancer Resistance in Elephants and Comparative Cellular Response to DNA Damage in Humans.” The authors of this paper set out to identify why elephants have lower-than-expected rates of cancer given their size and life span.

The authors performed a survey of autopsy data collected by the San Diego Zoo across 36 mammalian species including African and Asian elephants. When they looked at cancer mortality and cancer-related genes in elephants, they found something remarkable. Elephants, despite their huge bodies and long life spans, have a cancer mortality rate of less than 5%. We humans, on the other hand, have between an 11% and 25% cancer mortality rate.

What is the elephant’s secret to cancer resistance? Elephants have 20 copies of the gene p53, a tumor suppressor famous for being involved in apoptosis, or “assisted suicide,” for cells with damaged DNA.

The study authors were able to show that compared to humans, elephants have an increased apoptotic or “cell suicide” response following DNA damage that might otherwise lead to mutations and cancers.

The surprisingly low cancer rates in elephants and other hefty, long-lived animals such as whales—known as Peto’s paradox after one of the scientists who first described it—have nettled scientists since the mid-1970s. – Mitch Leslie, Science News

This paper was covered by over 35 news outlets and mentioned over 800 times on Twitter.

Nature has already figured out how to prevent cancer. It’s up to us to learn how different animals tackle the problem so we can adapt those strategies to prevent cancer in people. – Joshua Schiffman, quoted in NIH article

Several scientists and science writers, however, have pointed out that more research is needed to clarify exactly how elephants’ extra copies of p53 reduce these animals’ cancer risks.



The Road to Chernobyl. Image by Timm Suess, Flickr.com


Paper #3. Wildlife thriving in areas abandoned after the Chernobyl accident

Our next High Five paper is an open access paper published in Current Biology, “Long-term census data reveal abundant wildlife populations at Chernobyl.” The paper attracted the attention of news media and blogs in multiple languages.

According to NPR, “[w]hen you think of a nuclear meltdown, a lifeless wasteland likely comes to mind — a barren environment of strewn ashes and desolation.” But the area around Chernobyl appears to be teeming with wildlife today. Even if animals in the area are being affected by radiation, the effects of this contamination are overshadowed by the fact that this area is now essentially a wildlife reserve relatively free of human disturbances.

It’s well-established that when you create large reserves and protect wildlife from everyday human activities, wildlife generally tend to thrive. – Jim Beasley, researcher at the Warnell School of Forestry at the University of Georgia, as quoted by NPR

The findings of this new study don’t necessarily imply that the area around Chernobyl is now safe for humans, or that the animals in the area are now safe for humans to hunt or eat. Radiation contamination in the area, and in the wildlife, persists. But it does appear that Chernobyl has become an unlikely wildlife haven, especially for larger animals normally under the pressures of hunting and habitat loss.

The researchers walked 35 routes totaling 315 kilometers, covering 14 routes all three years of the study and the remaining 21 routes in two out of three years. They spied tracks made by species including wild boar, elk, roe deer, red deer, wolf, fox, weasel, lynx, pine marten, raccoon dog, mink, ermine, stone marten, polecat, European hare, white hare, and red squirrel. […] Startlingly, the density of wolves is seven times higher near Chernobyl than elsewhere in the region. “We believe the high density of wolves within PSRER is due to a combination of abundant prey populations, greatly limited human activity, and lack of hunting pressure,” says Jim Beasley, a study co-author at the University of Georgia. – Conservation Magazine

Science News headlined, “Humans are worse than radiation for Chernobyl animals, study finds.” The authors of the Current Biology study reported no correlation between radiation contamination levels and animal track counts found in the region.

We’re not saying radiation is good for animals, but we’re saying human habitation is worse. – Jim Smith, environmental scientist at the University of Portsmouth in the United Kingdom, in press briefing



Ancestor Teeth. Image credit: S. Xing and X-J. Wu


Paper #4. Unexpected fossil teeth

Our fourth High Five paper, published in Nature in September 2015, “The earliest unequivocally modern humans in southern China,” reveals something unexpected about how early humans trekked around the globe. Based on the age of well-preserved fossil teeth found in the newly excavated Fuyan Cave in Daoxian (southern China), modern humans were in southern China 30,000–70,000 years earlier than in the Levant and Europe. This finding significantly changes our understanding of how early Homo sapiens migrated out of Africa.

Listen to the Nature Podcast in which study author María Martinón-Torres explains how the ancient teeth challenge ideas of early human migration here.

The study authors dated the teeth to be around 80,000 to 120,000 years old.

Those ages buck the conventional wisdom that H. sapiens from Africa began colonizing the world only around 50,000–60,000 years ago, says Martinón-Torres. Older traces of modern humans have been seen outside Africa, such as the roughly 100,000-year-old remains from the Skhul and Qafzeh Caves in Israel. But many researchers had argued that those remains were only evidence of unsuccessful efforts at wider migration. – Ewen Callaway, Nature News

This is a rock-solid case for having early humans — definitely Homo sapiens — at an early date in eastern Asia. – Michael Petraglia, an archaeologist at the University of Oxford, as quoted in Nature News

The finding also raises the question: why did H. sapiens only enter Europe around ~45,000 years ago according to known records, some 35,000–75,000 years after they were established in southern China according to the fossil teeth find? According to the Nature study authors:

Our results are relevant to exploring the reasons for the relatively late entry of H. sapiens into Europe. Some studies have investigated how the competition with H. sapiens may have caused Neanderthals’ extinction. Notably, although fully modern humans were already present in southern China at least as early as ~80,000 years ago, there is no evidence that they entered Europe before ~45,000 years ago. This could indicate that H. neanderthalensis was indeed an additional ecological barrier for modern humans, who could only enter Europe when the demise of Neanderthals had already started. – Liu et al. 2015

In other words, H. sapiens left Africa much earlier than we thought, but they may have had competition from Neanderthals in Europe.

Two of the study authors wrote about their findings for The Conversation: “Our fossil find suggests humans spread to Asia way before they got to Europe.”



Image: Jeremy Brooks, Flickr.com


Paper #5. Step away from that bacon

Our fifth and final High Five paper this month has been shocking, angering, and entertaining (with photos of large piles of hotdogs and bacon) the internet this week. Not that the findings haven’t been a long time coming. This week, the World Health Organization (WHO) delivered a summary report, published in The Lancet Oncology, classifying high consumption of various processed red meats (such as hotdogs, smoked sausages, etc.) as a Group 1 carcinogen, along with smoking. The meat-cancer link is dose-dependent, however: it depends on how much processed red meat you eat.

After sifting through decades’ worth of scientific literature, an IARC working group of 22 experts from 10 countries classified the consumption of processed meat as a Group 1 carcinogen to humans (processed meats are defined as meats that have been transformed through salting, curing, fermentation, smoking, or other processes to enhance flavour or improve preservation). This conclusion was reached on “sufficient evidence” that the consumption of processed meat causes bowel, or colorectal, cancer. – George Dvorsky, Gizmodo

Understanding the findings requires understanding the context of the risk. The increase in cancer risk for any single individual eating processed red meats is relatively small. However, the effect is significant if we consider the number of people who eat these products around the globe.

We know that, out of every 1000 people in the UK, about 61 will develop bowel cancer at some point in their lives. Those who eat the lowest amount of processed meat are likely to have a lower lifetime risk than the rest of the population (about 56 cases per 1000 low meat-eaters). If this is correct, the WCRF’s analysis suggests that, among 1000 people who eat the most processed meat, you’d expect 66 to develop bowel cancer at some point in their lives – 10 more than the group who eat the least processed meat. – Casey Dunlop, Cancer Research UK Blog

To be clear, this doesn’t mean that processed meat is as bad for you as smoking. (As Vox’s Brad Plumer explains, that’s not the case at all.) What it means is that according to the agency’s assessment, the links between processed meat and certain types of cancer are clear and well-established based on high-quality research. – Julia Belluz, Vox

You can read the summary report for yourself by registering with the publisher for free here. In the meantime, don’t panic, but it might be worth reconsidering the balance of meats in your diet.


This is a guest post from Tuija Sonkkila, Leadership Support Services at Aalto University.

CRIS crash course

CRIS (Current Research Information System) is a new incarnation of semi-automated academic recordkeeping: a hotspot of data synchronized from University master data registries such as staff and project directories; enriched by metadata of activities, events, and research output; and augmented with relations to e.g. research infrastructure. Many a CRIS also acts as a full-text repository.

Aalto University in Finland is about to launch its CRIS during the first quarter of 2016. The software is Pure by Elsevier, familiar also from many UK HE institutions.

The public interface for a CRIS is a web portal, often nicknamed a Research Portal, typically with three different paths to dig deeper: persons, organization, and projects.

Although part of the rationale behind CRIS is to increase the automation level of administrative tasks, a strong hope is also that researchers will find their personal CRIS profile attractive and useful: a live portfolio of their university life. One example of how a CRIS can nicely incorporate altmetrics comes from Aalborg University in Denmark, also a Pure installation.


Trying to stay lean

My motivation for building interactive web applications on altmetrics started as a general exercise in mashups. When I found the R programming language a few years ago, and especially the RStudio Shiny web application framework, the interactive web was suddenly much easier to enter for someone like me who isn’t a native JavaScript speaker.

Of the present-day commercial altmetrics products, the PlumX Dashboards and Altmetric for Institutions are conceptually closest to a CRIS. They provide both a bird’s-eye view of the organization and close-ups. Could I do something at least remotely similar with R and data originating from our test CRIS installation?


First I need a set of DOIs and their affiliation. At the moment, our Pure test database contains some 16 000 research output items published between 2010 and 2013.

Pure is shipped with a REST Web Services (WS) interface, but you cannot query the DOI field as such. For this exercise, I didn’t venture into filtering DOIs from all data returned by the WS. Instead, I ran a standard Pure report that collects metadata from those publications that have a DOI. Export to Excel, import to R.

For the record, about 20% of the publications have a DOI.

Unfortunately, the Pure data model has recently changed, and now you cannot get the same DOI report any longer. In addition, DOI is becoming part of Pure’s linking strategy to full-text files, which means e.g. that the now still separate DOI field will be deprecated. As of writing this, it is still unclear to me how, and how much, this will affect exercises like this one where the DOI value is used as a core identifier. One possibility is to start saving the DOI to a general URL field, but then you end up cluttering your data. Anyway, I hope there will be novel ways in Pure to save and ferret out the DOI.

A bunch of DOIs and their affiliation IDs was now ready. Then: university organization. Because the organization tree and the names of Schools, departments, etc. are relatively stable data, a standard Pure report on organization will do just fine. Again, via Excel to R.
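In R, those two imports and the join might look roughly like the sketch below (file and column names are placeholders, not the real Pure export schema):

```r
# Sketch: read the two Pure report exports and join them on the unit ID.
# File and column names are assumptions standing in for the real exports.
library(readxl)
library(dplyr)

dois <- read_excel("pure_doi_report.xlsx")  # assumed columns: doi, unit_id
orgs <- read_excel("pure_org_report.xlsx")  # assumed columns: unit_id, unit, school

# One row per publication with a DOI, now carrying its unit and School.
pubs <- inner_join(dois, orgs, by = "unit_id")
```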

After joining the DOIs and organization units with the unit ID, I had the foundation ready. Then to the fun part: getting altmetrics data.

To use the free Altmetric API, you first need a key. Like an analog key, it opens the door to the API. Then you just ask the API, a DOI at a time: “Do you have any metrics about this one? If you do, please return it all to me.” Luckily, this kind of computational dialog is easy in R thanks to the rAltmetric library by rOpenSci. All you have to do is basically wait for the end result and then select those metrics you are interested in.
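A compact sketch of that dialog, reusing the `pubs` data frame from the previous step (the exact columns returned vary per item, so the selection at the end is defensive):

```r
# Sketch: query the free Altmetric API for each DOI via rAltmetric (rOpenSci).
library(rAltmetric)
library(plyr)

dois <- unique(na.omit(pubs$doi))

# DOIs unknown to Altmetric.com come back empty or error out,
# so trap those cases and simply move on.
hits <- lapply(dois, function(d) {
  tryCatch(altmetrics(doi = d), error = function(e) NULL)
})
hits <- Filter(Negate(is.null), hits)

# Flatten each response to one row; ldply's rbind.fill-style binding
# copes with different items reporting different metrics.
metrics <- ldply(hits, altmetric_data)

# Keep only the columns of interest that actually came back.
wanted  <- c("doi", "title", "score", "cited_by_tweeters_count",
             "cited_by_fbwalls_count", "cited_by_feeds_count")
metrics <- metrics[, intersect(wanted, names(metrics))]
```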

Result: 10% of my set of DOIs had data collected by Altmetric.com.

Agrarian reservoir

Aalto University’s profile at Impactstory has been alive for roughly a year. The status is experimental, and items there are manually added by me. The focus is on software, but there are also a few videos, slide decks, and arXiv articles. Despite its limitations, the collection has already proved useful. For example, researchers are productive coders – after all, a substantial part of modern research is algorithms and other computational artifacts – and repositories delivered via GitHub are in active use. Or are they? Thanks to the Impactstory profile, we now have some evidence that this indeed is the case.

Since December 2014, I have logged weekly statistics, those values that are visible as small plus flags on the Impactstory site.


I even made a WeeklyMetrics Twitter bot of them, broadcasting cryptic messages once a week. No wonder the bot has not been a huge success! Still, it has worked nicely as quality control: if the bot remains silent when it should deliver, I know that there are problems upstream, so I’d better check what’s the matter.

Industrial age

The layout of the 2amconf web application is based on the standard building blocks of the shinydashboard R library. Navigation is done via the sidebar on the left, and a number of different visualizations are rendered onto the main body on the right.

    • scatterplot shows all items by default, but you can filter them by School, and choose different metrics for the axes. The names of the metrics follow roughly the convention of the Altmetric API documentation
    • barchart lets you compare two items from the selected School, either stacked or grouped
    • sunburst shows the distribution of items between Schools, departments, and smaller units. Here, I use the in-house acronyms of the units; otherwise, text would pour out of the window
    • network graph of that part of the University organization that has items in the data
    • pivot table adds some business intelligence to the application. Note that this one is a separate application at the moment due to compatibility issues
    • timeline is for Impactstory items

For verification purposes, there is also a data table. It also acts as a linking layer to the Altmetric.com source.

When you filter data by School, you may notice that the values in the five smaller boxes on the right-hand side change too. In Shiny parlance, these are reactive values: they are dependent on the School filter, so whenever the filter changes, the box values change too. Here, they show the total number of items plus a few of the top altmetrics scores within that School.
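A stripped-down sketch of that reactive pattern follows; the `items` data frame is a synthetic stand-in for the app’s real data.

```r
# Minimal shinydashboard sketch: a School filter driving reactive outputs.
library(shiny)
library(shinydashboard)

# Synthetic stand-in for the real joined CRIS + altmetrics data.
items <- data.frame(
  School   = c("SCI", "SCI", "ENG", "ARTS"),
  mendeley = c(12, 30, 3, 8),
  score    = c(4.5, 20.1, 1.2, 6.7)
)

ui <- dashboardPage(
  dashboardHeader(title = "Altmetrics demo"),
  dashboardSidebar(
    selectInput("school", "School", choices = sort(unique(items$School)))
  ),
  dashboardBody(
    valueBoxOutput("n_items"),
    plotOutput("scatter")
  )
)

server <- function(input, output) {
  # Reactive subset: every output that calls filtered() is re-rendered
  # whenever the School selection changes.
  filtered <- reactive(items[items$School == input$school, ])

  output$n_items <- renderValueBox(
    valueBox(nrow(filtered()), subtitle = "Items in selected School")
  )
  output$scatter <- renderPlot(
    plot(filtered()$mendeley, filtered()$score,
         xlab = "Mendeley readers", ylab = "Altmetric score")
  )
}

shinyApp(ui, server)
```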

By clicking the score value, you can check from the Altmetric landing page, what the item is about.

For example, in School of Science (SCI), the top number of Facebook citations is 4.


Following the link, you will find out that the item is DOI http://dx.doi.org/10.1093/scan/nss096, a joint Finnish publication from a project in which the brain activity of supernatural believers and skeptics was examined. The Aalto University authors come from the Brain Research Unit.


And then

A dashboard-type application like the prototype presented here might be of interest to those who are concerned about the extent and quality of scientific outreach of the University. Are there differences in web presence between Schools? Is there a balanced coverage for all relevant groups of audience? Are some channels over- or underrepresented?

However, there are only so many dimensions you can visualize with scatterplots and barcharts. What is fairly easy, though, is to add more metrics.

In this older application, with Altmetric data but a wider scope and a different source, there are also values showing:

  • number of authors from Web of Science by Thomson Reuters
  • Journal Metrics from Scopus by Elsevier
  • Finnish Publication Forum ranking (JuFo)

From CRIS, the number of authors can be calculated. Journal Metrics and JuFo are among those datasets that will be imported to CRIS on a yearly basis anyway.

The color palette of the scatterplot circles represents the School. With an outer layer in a different color, aka a stroke, you can tell something else.


Here, the golden stroke is reserved for those items that are published in an Open Access journal. The data is kindly made available by Lib4RI in Switzerland.

None of these prototypes contains any dynamic data. A move to a more up-to-date application is not trivial. Let’s say that I’d like to add a query field: “This is our unit ID. Please plot a chart of all our present altmetrics – while I wait.”

First, the application would need to be hosted by the University – not by RStudio – because access to Pure Web Services is restricted to the University network, and for a good reason: CRIS contains a wealth of data, especially about persons. The more information, the more responsibility. As Wouter Gerritsma has noted, a CRIS in fact has more in common with backend systems than with front-facing ones.

Second, response time. I seriously doubt that I could boost the performance of the application enough to do all the necessary steps within 10 seconds. In practice I would need to be proactive and have all the data ready in the background, up to date as of yesterday. From the overall CRIS performance point of view, this would also be the only sensible solution: if you query the web services all the time, the whole Pure system slows down.
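
A sketch of that proactive approach: a small script, run nightly from cron for example, fetches a snapshot from the web service and caches it where the Shiny app can read it instantly at startup. The URL and structure below are placeholders, not the actual Pure Web Services interface.

    library(httr)
    library(jsonlite)

    # Placeholder endpoint; the real Pure Web Services URL is restricted
    # to the University network
    resp  <- GET("https://cris.example.org/ws/api/publications")
    items <- fromJSON(content(resp, as = "text"), flatten = TRUE)

    # Cache a snapshot so the app never queries Pure while a user waits;
    # the app then loads it with readRDS("items_snapshot.rds")
    saveRDS(items, "items_snapshot.rds")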

For those of you interested in seeing the R code of the 2amconf dashboard application, it is accessible via DOI http://dx.doi.org/10.5281/zenodo.32108. Note that the various data files the application imports are pre-processed. If you’d like to know what they look like and how they are made, drop me a line. Or just tweet me anyway!


This is a guest post from Simon Porter, VP Academic Relationships and Knowledge Architecture at Digital Science.

It’s been just over a month since attention surrounding articles published in The Conversation was first tracked by Altmetric, so it seems a good time to see how well this new form of publishing does at engaging the public compared to other titles in the database. As The Conversation’s mission is to “break down the barriers to expert knowledge and opinion – to make it easier for academics to communicate their expertise to a more diverse audience”, it is likely that their ability to engage the public will be very good. But how good? How does it compare to standard research publications that are also tracked in Altmetric, and what does this mean for individual institutions?

Using the Altmetric Explorer and filtering for articles mentioned in the last month, The Conversation compares favourably with Nature and Science, both in terms of engagement with individual articles and the number of articles that have been mentioned.

[Screenshot: Altmetric Explorer comparison of The Conversation with Nature and Science]

Using the Altmetric API, we can also compare the total articles mentioned and the proportion of Altmetric score gained in the last week from The Conversation to all of the articles indexed in PubMed. The results are equally impressive: The Conversation represents 6% of articles, but 30% of the accumulated Altmetric score:

[Screenshots: The Conversation versus all PubMed-indexed articles, by share of articles mentioned and share of accumulated Altmetric score]
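For readers who want to poke at numbers like these themselves, here is a small sketch of fetching the summary for one article from the public Altmetric API; aggregating across many articles follows the same pattern. The DOI reuses an example from earlier on this blog, and the field names are from the v1 API response (check the API documentation for the full set).

    library(httr)
    library(jsonlite)

    # Public endpoint returning a JSON summary for a DOI
    resp <- GET("https://api.altmetric.com/v1/doi/10.1093/scan/nss096")
    if (status_code(resp) == 200) {
      d <- fromJSON(content(resp, as = "text"))
      d$score                 # the overall Altmetric score
      d$cited_by_posts_count  # total posts mentioning the article
    }
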
Which institutions are benefiting from attention in The Conversation?

As articles in The Conversation are typically authored by a single researcher, a combination of the Altmetric API and the affiliation metadata stored on The Conversation articles themselves makes it possible to show which institutions are benefiting most (this week). Listing the top 30 with supplementary data from Digital Science’s GRID dataset, it is easy to see a predominance of universities from Australia and the UK:

[Screenshot: the top 30 institutions receiving attention via The Conversation]

Looking back over the last month for the top ten of these institutions, the data suggests that The Conversation is also providing a reasonably consistent level of attention to researchers at these institutions over time:

[Screenshot: attention over the last month for the top ten institutions]

So if you are considering strategies to increase engagement with research at your institution, should you encourage your researchers to consider contributing to The Conversation?

Based on the data in Altmetric, the answer would be Yes!

We’re delighted to see Altmetric’s Founder Euan Adie named one of the UK’s Top 50 Data Leaders and Influencers in a list released today by Information Age.

“The prestigious Data 50 list was whittled down from more than 200 nominations and shines a light on those transforming organisations, enhancing decision-making and driving business value through the use of data, as well as managing its proliferating growth.”

Euan is described as being “at the forefront of the alternative metrics movement in academic research, which is changing the way scientists get credit for their work and how funders and governments assess the value of the research they pay for.”

They add, “Altmetric, the company he founded, delivers data on hundreds of thousands of research articles and other outputs each day to help academics worldwide show the impact of their research. It does this by automatically pulling together and processing large amounts of data from policy makers, scholarly reference managers, newspapers and magazines and social media, then giving them smart ways to visualise and understand it.”

Mark Hahnel, Founder of figshare, and Martin Szomszor, Head of Data Science at Digital Science, were also included in the final list – congrats to them!

The full list of winners can be found here. The 50 winners will be celebrated by the industry’s top players at The Data 50 Awards on 25 February in London, where the winners of ten special ‘Best in Class’ categories will also be revealed.

This month’s Altmetric Ambassador of the Month is Colleen Willis, a librarian at the National Academies of Sciences, Engineering and Medicine in the United States.

Colleen is what one might call a “jack of all trades” at the Academies Libraries, providing essential services for many leading international scholars. We sat down (virtually, of course) with Colleen to ask her about how she uses altmetrics in her current role, and what tips she has for others working with government or library services that want to advocate for altmetrics at their institutions.

Tell me about your current work at the National Academies of Sciences, Engineering and Medicine Library & Research Center. What does a typical day look like for you?

Definitely coffee first.  After the proper amount of caffeine I spend my day managing our electronic resources, websites, interlibrary loan, cataloging and impact services.  In between I answer reference questions, teach workshops, evaluate new digital tools and platforms and spend a large chunk of time reaching out to staff and inserting the library services into their workflow.

How do you use altmetrics in your current job?

We’ve created an Impact LibGuide Service for our staff. By request, the library curates information on the impact an NAS product has made in the press, legislation, peer reviewed research and altmetrics.  Collecting the traditional metrics takes time, so by adding the altmetrics piece we can now provide immediate qualitative feedback. We compile all of this data into an Impact Summary, which staff can use to communicate the impact of their work to executives and sponsors and hopefully increase business opportunities for the institution.

We work closely with the National Academies Press and they have embedded the Altmetric badge into their website alongside additional usage metrics like downloads and views for each report. This is a terrific marketing tool that allows a web page visitor to view online attention and gain valuable insight into what the public and the scientific community think about a report. Click on the “stats” tab here for an example.

We also teach a workshop, “Motivational Metrics: Using Data to Communicate Impact”.  We demonstrate all of the metric tools and impact services available to the staff which includes a large portion on the Altmetric Explorer for Institutions platform. We also take this time to query staff about how they might want to use impact data and what their challenges are in collecting and using the information.

Where did you first learn of altmetrics?

About two years ago, I was in the middle of evaluating the current altmetrics tools available and two of my colleagues came back from different professional conferences and shared the information.  Altmetrics was a hot topic at that time.

What advantages do you think altmetrics can offer researchers and librarians working in government, specifically?

The information you can collect about a report, workshop summary, article or other product using altmetrics helps you understand its national or global influence. You can package that information for your executives, sponsors, and the consumer to demonstrate the value in the products you deliver.  Ideally, altmetrics will help decision makers and national leaders understand that what you do has tangible value.

What advice do you have for other librarians interested in advocating for altmetrics at their institutions?

Start the conversation. Reach out to staff and find out what kind of information is important to their work. Do they use social media to communicate impact? Are they required to report to executives or sponsors about impact? Once you have a feel for what staff need, offer regular workshops that highlight multiple methods, sources, and tools to collect and analyze quantitative and qualitative impacts. Marketing altmetrics as a complement to traditional metrics and demonstrating their ability to track immediate responses from both researchers and the general public worked well for my institution.

Thanks, Colleen!

Colleen is one of over 200 librarians, researchers, administrators, publishers, and students who are so passionate about Altmetric that they’ve volunteered to be an Altmetric Ambassador. We’re grateful for her service, and are proud to call her an Ambassador!

On Friday 9th October I attended the altmetrics15 workshop in Amsterdam with several of my colleagues. This was the last of three days of conferences, as the 2:AM altmetrics conference had taken place on the Wednesday and Thursday of the same week.

At the 2:AM conference, the delegates had wrestled with some of the larger overarching issues to do with altmetrics – how do we define impact? How can we measure the impact of Public Engagement with Science initiatives? How can we arrive at standards for altmetrics? At the end of the two days, a panel consisting of Jason Priem, Cameron Neylon, Dario Taraborelli and Paul Groth called for “more sources, more data, more research and more theory”. As I watched the morning’s events unfold at altmetrics15, I wondered if the panel felt they had got their wish. Altmetrics15 provided a platform for academics and altmetrics specialists to present their work in concise ten-minute talks.

Mojisola Erdt from Nanyang Technological University in Singapore kicked off proceedings with a very “meta” presentation on “the altmetrics of altmetrics literature”. Mojisola and her team had conducted a Scopus search with the keyword “altmetrics” and used the Altmetric API to view the Altmetric data for the 391 results. The team stated that “research literature on altmetrics has been growing at a fast pace” since the term was first coined in 2010. Of the literature they examined, 90% of the articles had been mentioned on Twitter, and 77% had Mendeley readers, suggesting highly developed altmetrics practices in those communities. Next, Valeria Scotti presented a paper about altmetrics for biomedical research. Her team suggested that more formal validation of altmetrics was needed for greater uptake in the biomedical community. Following this, Judit Bar-Ilan presented research on citation counts for altmetrics literature, applying bibliometric measures to altmetrics outputs. The team found that on average, altmetrics research defined as “discussion” had attracted more citations than pure “research” papers. They also found that citation counts decreased significantly for more recently published papers.

The first group of sessions focused heavily on quantitative data: the raw counts of Altmetric mentions and citations from different sources. By contrast, the later sessions focused more on the underlying qualitative data and on practical applications for it. Ad Prins and Jack Spaapen called for better altmetrics coverage across different types of research output, to make altmetrics useful to social sciences and humanities departments for research evaluation purposes. Following on from this, Altmetric’s very own Stacy Konkiel presented the results of a survey designed to shed some light on the librarian use case for altmetrics. The survey found that librarians are currently unlikely to use altmetrics in collection development or tenure and promotion decisions.

Session three consisted of talks concerning the “quality” of altmetrics data. Zohreh Zahedi and her team highlighted discrepancies in data from Mendeley, Lagotto and Altmetric, while Rodrigo Costas and Grischa Fraumann discussed trends in the way research outputs are discussed in blog and news sources. William Gunn raised the point that altmetrics providers need to support identifiers for multiple document versions, and decide whether to disambiguate the metrics that accumulate for the different versions.

The first three sessions suggested that in the last five years, we’ve learnt a lot about the limitations of the data for research and research evaluation purposes. Because there are different altmetrics providers with different ways of collecting the data, it’s very difficult to uncover the “true” numbers and use them to come to any concrete conclusions about research dissemination practices. One of the later talks I enjoyed the most came from Cameron Neylon, as he stressed that the data still has the potential to tell us interesting stories. Cameron stated that the online mentions of a piece of research are digital footprints, and that we can use these footprints to trace a pathway for a certain type of impact. For example, can we plot a pathway to academic impact if someone tweets a paper, then reads it on Mendeley, then cites it in a paper of their own? I also enjoyed Stephanie Haustein’s talk on whether it is possible to perform sentiment analysis on tweets. Haustein and her team found that across a survey of 270 randomly selected tweets, hardly any of the tweeters expressed a positive or negative sentiment about the research. This suggests Twitter is used more for pure research dissemination than for opinion-based posts.

Overall, I thought altmetrics15 was a great success. The organizers managed to get altmetrics specialists from across the globe into one room, and the format allowed lots of researchers to share and discuss their findings with like-minded academics. It’s been five years since the Altmetrics Manifesto was first published, and it’s clear that the data is still throwing up a lot of questions and generating a lot of debate in the academic community. However, the research presented at altmetrics15 suggests that we do have some answers. We now know that although there are some impacts we can’t possibly hope to measure, we can use altmetrics to gain an impression of usage and attention that is missing from the picture offered by bibliometrics. As Kim Holmberg and colleagues said in their session, the question now is how to develop a gold standard for these metrics, make them more sophisticated, and think about possible ways of aggregating the numbers and categorising types of impact.

The full schedule from the altmetrics15 workshop is available here.