Identifying the right literature to spend time reading has long been a challenge for researchers – often the choice is driven by table of contents alerts sent straight to an inbox, or a recommendation from a superior or colleague. Libraries have invested in systems to make the most relevant content easily accessible and, above all, easily discoverable. But a search in a discovery platform can return hundreds of results, and it is sometimes difficult to make an informed decision from those alone about what might be worth digging further into.

This is where altmetrics might be able to help. Including the Altmetric badges and data for an article within a discovery platform makes it easy for a researcher to determine which of those articles have been generating a buzz or picking up a lot of attention online. With just a few clicks they can then view the full Altmetric details page to identify whether the attention is coming from news articles, blogs or policy makers, or whether the article is being shared widely on a social network such as Twitter or Facebook.

But it’s not just about what’s popular – it’s about context: this level of detail makes it easy to understand who is talking about the research and what they thought of it. Insight such as this may be particularly useful for younger researchers who are still building their discipline knowledge and looking for new collaborators and wider reading material.

At Altmetric we’re already supporting the implementation of our data and badges in platforms such as Primo (from ExLibris) and Summon (from ProQuest).

 

Primo
There’s a free plugin which can be added to any Primo instance. Anybody can download and install it, enabling their users to see scores and mentions for any articles matched in the system via a new “metrics” tab on the item details page.

Clicking through on the donut brings you to the Altmetric ‘details page’, which displays the original mentions for the article. If you get in touch we can open up the data so that your users can see all of the mentions from each source – otherwise they’ll see just 3 of each type.

You can find the documentation that details the long- and short-form badge embeds on the Primo Developer Network. Here’s an example of an implementation at Wageningen UR:

[Screenshot: Altmetric badges in Wageningen UR’s Primo instance]

 

Summon
Summon clients using a custom interface (like Heidelberg and the University of Toronto) can easily integrate the Altmetric badges themselves.

You’ll need to use the JSON API, and as long as the results have identifiers (such as a DOI or PubMed ID) you’ll be able to display the altmetrics data for your articles.
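
To make that concrete, here’s a minimal sketch of the kind of lookup a custom interface might do against our public REST API once it has a DOI or PubMed ID for a result. The helper name, the placeholder DOI and the error handling are illustrative assumptions rather than part of any official Summon integration – treat the API documentation as authoritative:

```python
# Minimal sketch: look up Altmetric data for a single output by DOI or
# PubMed ID via the public REST API (api.altmetric.com/v1/{id_type}/{id}).
# See the API docs for rate limits and the full response schema.
from typing import Optional

import requests

ALTMETRIC_API = "https://api.altmetric.com/v1"


def fetch_altmetric(identifier: str, id_type: str = "doi") -> Optional[dict]:
    """Return the Altmetric JSON record for an output, or None if untracked."""
    resp = requests.get(f"{ALTMETRIC_API}/{id_type}/{identifier}", timeout=10)
    if resp.status_code == 404:  # Altmetric has no record of this output
        return None
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    record = fetch_altmetric("10.1234/example-doi")  # placeholder DOI
    if record:
        print(record.get("title"), "-", record.get("score"))
```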

And again, please do let us know once you’ve got them up and running so that we can ensure your users can click through and see all of the mentions for each article (not just the first 3 from each source).

 

If you’re running another discovery service and would like to find out if you can integrate the Altmetric badges, please drop us a line and we’ll see what we can do to help.

Articles and other research outputs don’t always get attention for the reasons we might first assume. There’s a reason you shouldn’t ever rely on numbers alone…

This was demonstrated in spectacular form once again this week when the Twittersphere jumped on a recent article that contained a rather unfortunate error – an offhand author comment asking “should we cite the crappy Gabor paper here?”

The article got a lot of attention – it is now one of the most popular items we’ve picked up mentions for this week (here’s another), rocketing to near the top of the rankings for the journal as the error was shared.

Indicators like the attention score we use reflect the fact that lots of people were talking about the article, but not whether that attention was welcome – and here, we’re just guessing, it probably wasn’t.

This isn’t the first time we’ve seen cases like this. As you would expect, articles get attention for all sorts of reasons which aren’t just to do with the quality of the research.

A few favourite examples we’ve come across over the years include this paper authored by a Mr Taco B. Monster – currently claiming an Altmetric score of 485, with almost 600 mentions to date. Also brought to our attention this week was the tale of the disappearing teaspoons, which is still causing quite a stir ten years after it was first published:

[Screenshot: Altmetric details for the disappearing teaspoons paper]

Flawed Science
A more serious example of attention for all the wrong reasons is this article, published in Science in 2011. The researchers suggested that a type of bacteria could use arsenic, as opposed to the phosphorus used by all other life on the planet, to generate DNA. The article initially received a huge amount of press attention, but other scientists quickly pointed out errors – you can dive into some of the relevant mentions by looking at the Altmetric details page.

[Image: the faster-than-light neutrino story]

Similarly, a suggestion that neutrinos may have been measured as travelling faster than the speed of light did not stand up to further scrutiny, although the truth was only uncovered months later, following numerous re-tests that appeared successful but turned out to be flawed.

 

[Screenshot: the paper’s humorous abstract]

Amongst the blogs, news outlets, members of the general public and other scientists questioning the results coming out of CERN, this article, published just weeks after the original data was made available, generated some impressive altmetrics of its own, most likely due to its humorous abstract.

 

Playing politics
Typically we’ll also see a high volume of attention around research that is particularly topical or controversial at the time. An article published in The Lancet this year, which examined the privatisation of the NHS in Scotland in relation to a ‘yes’ or ‘no’ vote in the recent referendum, received a very high volume of tweets as those in the ‘yes’ campaign shared it to encourage their followers to vote in favour of independence:

We’ll be releasing our Top 100 most mentioned articles for 2014 in a couple of weeks (you can see the results for 2013 here) – it’ll be interesting to explore why and how those that make the list caught the public and academic imagination this year.

One of the things that appealed to me when I joined Altmetric recently was the distinctive visual ‘donut’ that illustrates the various different sources of attention that an article has attracted.

Introducing Altmetric’s new bar visualisation.

I really like how the donut’s fixed number of slices forces the eye to appreciate the approximate proportions of an article’s sources. Any visualisation that is more precise, such as a more conventional pie chart, tempts us to look too closely at proportions of one source against another, as well as potentially allowing one particular source which has generated loads of mentions to completely overshadow the others.

I happen to think that including lots of donuts on a single page can look pretty awesome, especially in views such as Altmetric Explorer’s Tiled mode. But when including badges on your own site, it may be the case that what works for our site isn’t quite right for your own.

That’s partly the reason why we’ve always offered a variety of badge styles – donuts in three different sizes, along with smaller badges that contain a simpler button and score. And for most uses of our embeddable badges, I think the range of sizes and popover options should enable you to get badged up really easily and effectively.

Much as I love the donut, though, it may not be appropriate in every situation – but the smaller buttons with just an Altmetric score lack that proportional, at-a-glance view that makes the donuts so appealing.

So now, we offer another visualisation type for your site: the bar. It’s got the same colour scheme as the donut, in the shape of a horizontal strip. Think of it as the cruller to our usual ring donut.

Available in three fixed sizes, bar badges work best when space is at a premium, such as within tabular lists of articles where even the smallest donut would make each row of the table far too deep. We’re using such an arrangement in the summary report pages within our Altmetric for Institutions pages.

The bar visualisation as it appears in Altmetric for Institutions.

 

As you can see from the screenshot above, the bars’ scoreless display emphasises the proportionate attention each paper has been receiving. You can provide more information even at this level by using our optional popovers with statistical breakdowns – and if your table data clicks through to more details about an article, you can of course continue to use the traditional donut on those pages.

 

Installing the bar visualisation on your pages

The interactive badge builder on our embeddable badges documentation.

The bars are available right now – all you need to do is specify bar, medium-bar or large-bar as the badge style in your embed code. If you head to our embeddable badges documentation page, you can see for yourself in the interactive badge builder.
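
If you’re templating pages rather than using the badge builder directly, here’s a hedged sketch of how markup for the three bar styles could be generated. The `data-badge-type` and `data-doi` attribute names are assumptions based on the standard Altmetric embed markup, and the DOI is a placeholder – check the documentation page for the definitive form:

```python
# Hedged sketch: build embed markup for each of the new bar styles so it can
# be dropped into a page template. Attribute names assume the standard
# Altmetric embed (div.altmetric-embed with data-badge-type / data-doi);
# the badge builder on the documentation page is the authoritative source.
BAR_STYLES = ("bar", "medium-bar", "large-bar")


def bar_badge_html(doi: str, style: str = "bar") -> str:
    if style not in BAR_STYLES:
        raise ValueError(f"unknown bar style: {style!r}")
    return (
        f'<div class="altmetric-embed" data-badge-type="{style}" '
        f'data-doi="{doi}"></div>'
    )


for style in BAR_STYLES:
    print(bar_badge_html("10.1234/example-doi", style))  # placeholder DOI
# Remember to include Altmetric's embed.js script on the page as well,
# as described in the embeddable badges documentation.
```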

If you’re using our badges on your site already, you can put the new designs to use straight away. If you’re not using our badges yet, now’s the perfect time to try, and it’s really easy – see our documentation for a step-by-step guide.

Let us know how you get on with the new bars, as well as our other badge styles, and send us examples of how you’re including them on your website – we love to see how people are putting Altmetric’s information to use!

You might have seen the article published recently in Nature which looked at the top 100 most highly cited papers from 1900 onwards, based on data from the Thomson Reuters Web of Science database.

The article highlighted that it is (perhaps unsurprisingly) much older articles that have accrued the majority of citations to date and therefore dominate the list – with more recent breakthroughs and Nobel Prize-winning advances struggling to compete with the 12,119 citations it would take to rank in the top 100.

So we were curious to see what other kinds of attention these articles might have been receiving in recent years. At Altmetric our data goes back reliably to November 2012 – meaning that we have been tracking our sources for any mentions of those articles since then. A search in the Altmetric Explorer tells us that since November 2012 we have seen 287 mentions in total for 52 of the 84 articles in the top 100 list that were listed with a DOI or other unique identifier.

The oldest paper from the list that our database contains a mention of is The attractions of proteins for small molecules and ions, published in 1949 in the Annals of the New York Academy of Sciences. It is the (joint) third oldest article in the top 100 list, and in April 2013 we picked up a mention of it from a Japanese researcher on Twitter:

The tweet went on to be favourited by 2 other Twitter users, both researchers themselves; an interesting example of how core literature is being shared amongst peers online, even decades after publication.

Of the 52 articles we had picked up mentions for, 5 had been mentioned in mainstream news outlets in the last 2 years:

  • Electric Field Effect in Atomically Thin Carbon Films (Science)
  • Improved patch-clamp techniques for high-resolution current recording from cells and cell-free membrane patches (Pflügers Archiv)
  • van der Waals Volumes and Radii (The Journal of Physical Chemistry)
  • Clinical diagnosis of Alzheimer’s disease (Neurology)
  • Continuous cultures of fused cells secreting antibody of predefined specificity (Nature)

Just one article in the set, A rating scale for depression, had been referenced in a policy document source we track: Lithium or an atypical antipsychotic drug in the management of treatment-resistant depression: a systematic review and economic evaluation – part of the NICE Evidence Search Collection. Our policy document sources are expanding every week, so this statistic may change over time as our coverage grows.

It’s interesting to see that many of these older articles, quite apart from just citations, are still generating attention online. The original data is available below if you’d like to take a look at all of the mentions we’ve seen for each article.

Our full dataset for mentions of these articles is available here.

And the original Nature article can be found here.

Ahead of the recent 1:AM altmetrics conference we ran a hack day at the Macmillan offices in King’s Cross. Lots of exciting ideas and developments came out of the day, and there was one in particular we wanted to share…

Altmetric has been a supporting member of ORCID since early 2013. ORCID is an organization which enables researchers to create a unique identifier for themselves that they can then associate with all of their research outputs.

It’s always been possible for our database to capture and store the journal and publisher information for the 2.5 million+ published articles and datasets we’ve seen mentions of online in the last few years, but matching those outputs back to an author represents more of a challenge.

Whether it is a researcher with the same name as another, a different spelling or use of just an initial and surname, or a change of family name, it can be very hard to generate a consistent and reliable record of scholarship. ORCID offers a solution for many of these roadblocks, and its adoption is being encouraged globally by institutions, funders and publishers.

We were therefore very excited when one of the teams at the hack day decided to focus their efforts on building a tool that brings the Altmetric data and ORCID IDs together – meaning you could easily find and browse the altmetrics data for any output that was associated with a specific ORCID ID (i.e. a specific researcher).

A test version of what was built can be found here; feel free to try it out! Just enter the ORCID ID for any researcher and (complete with magical spinning donut) you’ll get back all of the Altmetric data and a breakdown of the mentions for all of that author’s outputs. It should run without too many issues, but please do bear in mind that this was built in a day and hasn’t been rigorously tested – be gentle.

We’ve used our Product Development Manager Jean’s ORCID ID in this example:

[Screenshot: Altmetric report for Jean’s ORCID profile]
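
For the curious, here’s a rough sketch of the idea behind the tool: collect the DOIs attached to an ORCID iD from ORCID’s public API, then look each one up against the Altmetric API. The endpoint paths and response structure below are assumptions based on the current public APIs, not the code that was actually written on the day:

```python
# Rough sketch of the hack-day idea: ORCID iD -> list of DOIs -> Altmetric data.
# Endpoints and response structure are assumptions based on ORCID's public
# v3.0 API and Altmetric's public v1 API, not the hack-day code itself.
from typing import Optional

import requests


def dois_for_orcid(orcid_id: str) -> list:
    """Return the DOIs attached to the works on a public ORCID record."""
    resp = requests.get(
        f"https://pub.orcid.org/v3.0/{orcid_id}/works",
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    dois = []
    for group in resp.json().get("group", []):
        for ext_id in group.get("external-ids", {}).get("external-id", []):
            if ext_id.get("external-id-type") == "doi":
                dois.append(ext_id.get("external-id-value"))
    return dois


def altmetric_for_doi(doi: str) -> Optional[dict]:
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    return resp.json() if resp.ok else None


if __name__ == "__main__":
    # ORCID's documented example iD, used here purely for illustration.
    for doi in dois_for_orcid("0000-0002-1825-0097"):
        data = altmetric_for_doi(doi)
        if data:
            print(doi, data.get("score"))
```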

We’d love to know what you think and will be looking to build ORCID integration further into Altmetric tools in future.

 

 

You can get the dataset and a PDF of this post here:
Adie, Euan (2014): Attention! A study of open access vs non-open access articles. figshare.
http://dx.doi.org/10.6084/m9.figshare.1213690

There are lots of good reasons to publish in open access journals. Two of the most commonly given ones are the beliefs that OA articles are read more widely and that they generate higher citations (for more on this check out slide 5 of Macmillan’s Author Insights Survey, which is up on figshare).

Do open access articles get higher altmetric counts?

In celebration of Open Access week we decided we’d take a look at some hybrid journals to see if there was any discernible difference in the quantitative altmetrics between their open access and reader pays articles. We picked Nature Communications to look at first as it’s a relatively high volume, multi-disciplinary-within-STM hybrid journal (at least it was during our study period – it has gone fully OA now), selects articles for publication blind to OA / non-OA status and clearly marks up authors, license and subject areas in its metadata. Plus we sit in the same building.

Coincidentally Nature Publishing Group recently commissioned a study from RIN that indicates that the OA articles in Communications get downloaded more often than their reader pays counterparts. So does that hold true when looking at other altmetrics sources?

Prepping the data & first impressions

Using a combination of the Altmetric API and web scraping we pulled together data on all the Communications papers published between 1st October 2013 and 21st October 2014. You can find all of it on figshare.

The short answer is that yes, there does seem to be a significant difference in the attention received. We’re going to cover some of the highlights below, but feel free to take the dataset and delve deeper – there’s only so much we can cover in a blog post.

First let’s characterize the dataset. It contains 2,012 articles of which 1,395 (70%) are reader pays. The bulk of articles – 1,181 (59%) – are tagged ‘Biological sciences’ by the journal. 519 (26%) are ‘Physical sciences’, 193 (10%) ‘Chemical sciences’ and 104 (5%) ‘Earth sciences’. Only 4 of the 2,012 are reviews.

We grouped articles by month of publication so that we can control for the fact that some kinds of altmetric data accrue over time. You can see this clearly in the graph below – the median number of Mendeley readers for articles published in each month is the line in red.

[Graph: median Mendeley readers (red) and unique tweeters (blue) per article, by month of publication]
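
For anyone following along with the figshare data, the grouping itself is straightforward. Here’s a minimal sketch assuming the dataset has been exported to CSV with columns along the lines of publication_date, is_oa, mendeley_readers and unique_tweeters – the real file’s headers may differ:

```python
# Minimal sketch of the month-of-publication grouping. Column names are
# illustrative assumptions; the actual figshare file may use different headers.
import pandas as pd

df = pd.read_csv("ncomms_altmetrics.csv", parse_dates=["publication_date"])
df["pub_month"] = df["publication_date"].dt.to_period("M")

# Median Mendeley readers and unique tweeters per month, split by access type.
monthly_medians = (
    df.groupby(["pub_month", "is_oa"])[["mendeley_readers", "unique_tweeters"]]
    .median()
    .unstack("is_oa")
)
print(monthly_medians)
```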

A tangent: every source is different

“Older articles have more” doesn’t hold true for all sources. I’ve plotted the median number of unique Twitter accounts talking about each paper by month of publication above too, in blue. Notice that the median actually trends down very slightly as we look at older papers.

This is because: (1) most tweeting happens very quickly after publication and (2) the Twitter userbase is growing incredibly rapidly so there are more people tweeting papers each month.

Think about it this way: if you compared a paper published in 2009 to a paper published in 2014, the 2009 paper would have lots of citations (accrued over time) and hardly any tweets (as not many researchers were tweeting when it was first published – Twitter was still very new). The 2014 paper would have hardly any citations but lots of tweets (as there is now a large number of tweeting researchers).

This is sometimes addressed in novel ways in altmetrics research: Mike Thelwall’s paper in PLoS One presents one elegant solution to a similar issue.

An initial hypothesis

Let’s get back to OA vs reader pays. Here in the office our initial hypothesis was that there would be an OA advantage for tweets in general as a larger audience would be more inclined to read and tweet the paper, but that the effect would be much less pronounced in Mendeley readership and amongst people who regularly tweet scientific papers.

Here’s the median number of tweeters over time, comparing the two cohorts in each month of publication:

[Graph: median number of tweeters per article by month of publication, OA vs reader pays]

And the median number of Mendeley readers (remember that newer articles won’t have many Mendeley readers yet):

[Graph: median number of Mendeley readers per article by month of publication, OA vs reader pays]

To get a feel for the data we graphed means and 3rd quartiles too. Here’s the mean number of tweeters who regularly tweet scientific papers:

[Graph: mean number of tweeters who regularly tweet scientific papers, by month of publication, OA vs reader pays]

There’s a lot of light blue in these graphs, and just eyeballing the data does seem to indicate an advantage for OA papers. But is it significant? Once we establish that, we can start considering confounding factors.

If we look at all of the articles published in Q4 ’13 (to give ourselves a decent sized sample) we can compare the two cohorts in detail and do some sanity checking with an independent t-test. We’ll look at average author and reference counts too, in case they’re wildly different, which might indicate an avenue for future investigation.
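
For reference, that sanity check looks something like the sketch below, using the same assumed column names as before (is_oa is taken to be a boolean flag):

```python
# Sketch of the sanity check: an independent t-test comparing the OA and
# reader-pays cohorts for articles published in Q4 2013. Column names are
# illustrative assumptions, as in the earlier grouping sketch.
import pandas as pd
from scipy import stats

df = pd.read_csv("ncomms_altmetrics.csv", parse_dates=["publication_date"])
q4 = df[(df["publication_date"] >= "2013-10-01")
        & (df["publication_date"] < "2014-01-01")]

oa = q4[q4["is_oa"]]
reader_pays = q4[~q4["is_oa"]]

for metric in ["unique_tweeters", "mendeley_readers"]:
    t_stat, p_value = stats.ttest_ind(oa[metric], reader_pays[metric])
    print(f"{metric}: t = {t_stat:.2f}, p = {p_value:.3f}")
```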

Here are the results:

[Table: comparison of the OA and reader pays cohorts for Q4 2013, with t-test results]

It seems like there is a difference in the number of tweets, the number of tweets by ‘frequent article tweeters’ and the number of Mendeley readers.

The idea that the effect may be less pronounced for Mendeley doesn’t really hold water – a median of 23 readers for OA articles vs 13 for the reader pays is a pretty big difference.

Interestingly we didn’t see much difference in the number of news outlets or blogs covering papers in the two cohorts. A lot of news coverage is driven by press releases, and on the Nature side there is no preference for OA over reader pays when picking papers to press release (we checked).

Confounders

If we accept that the articles published as open access did get more Twitter and Mendeley attention, the next obvious question is: why?

Two things to check spring immediately to mind:

  1. Do authors select open access for their ‘best’ papers, or papers they think will be of broader appeal?
  2. People tweet about life sciences papers more than they do physical sciences ones. Perhaps the OA cohort has a higher number of biomedical papers in it? Notice that the OA cohort also has more authors, on average, than the reader pays cohort. Might that be an indicator of something?

Do authors select only their ‘best’ papers for open access?

It doesn’t seem like we can discount this possibility. Macmillan’s author insight survey (warning: PDF link) has 48% of scientists saying “I believe that research should be OA” as a reason to publish open access, which leaves 52% who presumably have some other reason for wanting to do so. 32% give “I am not willing to pay an APC” as a reason not to go OA. The APC for Nature Communications is $5,200.

Are the higher altmetrics counts a reflection of subject area biases?

[Chart: subject-area breakdown of the OA and reader pays cohorts]

There doesn’t seem to be that much difference when we look at top level subjects, though it might be worth pulling out the Earth Sciences articles for a closer look.

That said, some disciplines definitely see more activity than others: if we look only at articles with the keyword ‘Genetics’ across our entire dataset, taking the median of unique tweeters per article each month, the ‘median of medians’ is 21 for OA and 6 for reader pays.

Compare that to ‘Chemical Sciences’ where the OA median of medians is only 3, and for reader pays it’s 2.
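
The ‘median of medians’ is simply the median, across months, of each month’s per-article median. A quick sketch, again under assumed column names (including an assumed free-text keywords column):

```python
# Quick sketch of the 'median of medians' comparison for a single keyword,
# under the same assumed column names as before, plus an assumed free-text
# "keywords" column.
import pandas as pd

df = pd.read_csv("ncomms_altmetrics.csv", parse_dates=["publication_date"])
df["pub_month"] = df["publication_date"].dt.to_period("M")

subset = df[df["keywords"].str.contains("Genetics", case=False, na=False)]

median_of_medians = (
    subset.groupby(["is_oa", "pub_month"])["unique_tweeters"]
    .median()                  # per-month median of unique tweeters per article
    .groupby(level="is_oa")
    .median()                  # median of those monthly medians, per cohort
)
print(median_of_medians)
```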

Wrapping up

Open access articles, at least those in Nature Communications, do seem to generate significantly more tweets – including tweets from people who tweet research semi-regularly – and attract more Mendeley readers than articles that are reader pays.

It seems likely that the reasons behind this aren’t as simple as just a broader audience. We’ve also only been looking at STM content.

Would we find the same thing in other journals? We deliberately looked within a single journal to account for things like differences in how sharing buttons are presented and to control for different acceptance criteria, and the downside to this is we can’t generalise, only contribute some extra datapoints to the discussion.

We’ll leave further analysis on those fronts as an exercise to the reader. Again, all the data is up on figshare. Let us know what you find out and we’ll follow up with another blog post!

Here in the UK, HEFCE (the Higher Education Funding Council for England, which distributes central funding to English universities) is currently running an independent review on the role of metrics in research assessment.

As part of that, a couple of weeks ago the review panel convened a workshop at the University of Sussex: In Metrics We Trust? Prospects & pitfalls of new research metrics. I was lucky enough to attend and thought it was a really useful day, not least because it was a chance to hear some pretty compelling points of view.

I’m excited that altmetrics are in the mix of things being considered, and that time is being taken to carefully assess where metrics in general may be able to help with assessment as well as, probably more importantly, where they can’t.

How can altmetrics be used in a REF-like exercise?

Before anything else, here’s my perspective on the use of altmetrics data in the context of a REF-style formal assessment exercise (there are lots of other uses within an institution, which we shouldn’t forget – research isn’t all about post-publication evaluation, even if it sometimes feels that way).

When I say “altmetrics data” I mean the individual blog posts, newspaper stories, policy documents etc. as well as their counts, the number of readers on Mendeley etc. Not just numbers.

  • If we’re going to look at impact as well as quality, we must give people the right tools for the job.
  • Numbers don’t need to be the end goal. They can be a way of highlighting interesting data about an output that is useful for review, with the end result being a qualitative assessment. Don’t think ‘metrics’, think ‘indicators’ that a human can use to do their job better and faster.
  • On that note, narratives / stories seem like a good way of addressing a broad concept of impact.
  • Altmetrics data can help inform and support these stories in two main ways:
  • Figuring out which articles have had impact and in what way, then finding supporting evidence for it manually, takes a lot of effort. How do you know what direction to take the story in? Automatically collected altmetrics indicators could save time and effort, showing areas that are worth investigating further. Once you have discovered something interesting, altmetrics can help you back up the story with quantitative data.
  • They may also highlight areas you wouldn’t otherwise have discovered without access to the data. For example, altmetrics data may surface attention from other countries, sources or subject areas that you wouldn’t have thought to search for.

Using altmetrics data to inform & support: an example

Alice is an impact officer at a UK university. She identifies a research project on, say, the contribution of climate change to flood risk in the UK that is a good candidate for an impact case study.

She enters any outputs – datasets, articles, software, posters – into an altmetrics tool, and gets back a report on the activity around them.

On a primary research paper:

http://www.altmetric.com/details.php?citation_id=269428

… she can quickly see some uptake in the mainstream media (the Guardian, the New York Times) and magazines (Time, New Scientist). She can see some social media activity from academics involved in the HELIX climate impacts project at Exeter, a Nature News correspondent, the science correspondent for Le Monde and the editor of CarbonBrief.org.

Switching to the policy side she can see that there are two citations tracked from government / NGO sources: a report from the Environment Agency and one from Oxfam.

These are documents from UK organizations that Alice’s institution may have already been tracking manually. But research, even research specifically about the UK, can be picked up worldwide:

http://www.altmetric.com/details.php?citation_id=1903399

For example, above it has been picked up by the AWMF, which is similar to NICE in the UK.

Alice can support her assessment of what it all means with other indicators: by checking to see if it’s normal for papers on anthropogenic climate change and flood risks to get picked up by the international press. She can see how the levels of attention compare to other articles in the same journal.

She can do all this in five minutes. It doesn’t help with the next, more important part: Alice now needs to go and investigate whether anything came of that attention, how the report from the Environment Agency used the article (in this case, only to show that research is still in the early stages), whether the report itself was used at all, and whether anything came out of the interest from journalists. She still needs to speak to the researcher and do the follow up. The altmetrics data, though, gave her some leads and a running start.

Because she’s supported by the right tools and data she can get relevant data in five minutes.

As the relevant tools, the data sources, and our understanding of what kinds of impact signals can be picked up (and how) all improve over time, so will the usefulness of altmetrics.

Why would it ever be useful to know how many Facebook shares an article got?

In the example above we talk about news mentions and policy documents. Facebook came up in the panel discussion.

If you have ten papers and the associated Facebook data, it would be a terrible, terrible idea for almost any impact evaluation exercise to use metrics as an end point and, say, rank them by the number of times each one was shared, or by their total Altmetric score, or something similar. On this we should all be agreed.

However, if nine papers have hardly any Facebook data associated with them and one has lots, you should check that out and see what the story is by looking at who is sharing it and why, not ignore the indicator on the principle that you can’t tell the impact of a work from a number. The promise of altmetrics here is that they may help you discover something about broader impact that you wouldn’t otherwise have picked up on, or provide some ‘hard’ evidence to back up something you did pick up on some other way.

There are lots of ways in which indicators and the underlying data they point to can be used to support and inform assessment. Equally there are many ways you can use metrics inappropriately. In my opinion it would be a terrible waste – of potential, but also time and money – to lump these together with the valid uses and suggest that there is no room in assessment for anything except unsupported (by tools and supplementary data) peer review.

What’s in a name? That which we call a metric…

One opening statement at the workshop that particularly struck a chord with me was from Stephen Curry – you can find a written version on his blog. Stephen pointed out that ‘indicators’ would be a more honest word than ‘metrics’ considering the semantic baggage it carries:

I think it would be more honest if we were to abandon the word ‘metric’ and confine ourselves to the term ‘indicator’. To my mind it captures the nature of ‘metrics’ more accurately and limits the value that we tend to attribute to them (with apologies to all the bibliometricians and scientometricians in the room).

I’ve changed my mind about this. Before, I would have suggested that it didn’t really matter, but I now agree absolutely. I still think that debating labels can quickly become the worst kind of navel gazing… but there is no question that they shape people’s perceptions and eventual use (believe me, since starting a company called “Altmetric” I have become acutely aware of naming problems).

Another example of names shaping perception came up at the 1:AM conference: different audiences use the word “impact” in different ways, as shorthand for a particular kind of influence, or as the actual, final impact that work has in real life, or for citations, or for usage.

During the workshop Cameron Neylon suggested that rather than separate out “quality” and “impact” in the context of REF style assessment we should consider just the “qualities” of the work, something he had previously expanded on in the PLoS Opens blog:

Fundamentally there is a gulf between the idea of some sort of linear ranking of “quality” – whatever that might mean – and the qualities of a piece of work. “Better” makes no sense at all in isolation. Its only useful if we say “better at…” or “better for…”. Counting anything in isolation makes no sense, whether it’s citations, tweets or distance from Harvard Yard. Using data to help us understand how work is being, and could be, used does make sense.

I really like this idea but am not completely sold – I quite like separating out “quality” as distinct from other things because, frankly, some qualities are more equal than others. If you can’t reproduce or trust the underlying research then it doesn’t matter what audience it reached or how it is being put into practice (or rather, it matters in a different way: it’s impact you don’t want the paper to have).

Finally, I belatedly realized recently that when most people involved with altmetrics talk about “altmetrics” they mean “the qualitative AND quantitative data about outputs” not “the numbers and metrics about outputs”, but that this isn’t true outside of the field and isn’t particularly intuitive.

We’ve already started talking internally about how to best tackle the issue. Any suggestions are gratefully received!

Diving deeper into scholarly attention with Mendeley

Lately at Altmetric, we’ve been thinking about how to better showcase readership statistics from academics. We already do basic tracking of Twitter user demographics (which does include academics), but from that set of data we haven’t been able to give much more detail on academic attention.

And so it seemed logical for us to turn to a different service, like Mendeley, which already tracks readership information in quite some detail. Mendeley is a software platform that is very popular amongst scholars as a reference manager and e-reader. A user who saves a paper to their Mendeley library is termed a “reader”.

In a recent blog post, a product manager at Mendeley described their readership statistics as follows:

“Mendeley Readership is one measure of how researchers engage with research on Mendeley. Simply put, it is the number of Mendeley users who have added a particular article into their personal library.”

Altmetric has already been displaying Mendeley readership counts for quite a long time, but the integration up until now has been fairly simple. (Within each Altmetric article details page, we already showed “Mendeley reader counts” on the left-hand side of the page, alongside the various other metrics.)

Because Mendeley also collects many interesting anonymised demographic stats about their users (such as location, professional status, and disciplines of research), it made a lot of sense for us to start displaying these data in addition to the reader counts.

And so we’re pleased to announce today that Altmetric now displays Mendeley readership stats and a map of reader locations. Specifically, for all articles that appear in the Altmetric database, you can now view a map of all readers, as well as a breakdown by discipline and by professional status. You can also get a link to the article’s page on Mendeley so that, if you’re a Mendeley user, you can save the paper to your own library. (Read the press release here.)

Here’s an example of an Altmetric article details page that includes Mendeley readership information (in the Demographics tab).

 

You can access the new Mendeley readership data in two ways:

1. Click on the number of Mendeley readers listed on the left-hand side of an article details page, and it’ll scroll to the appropriate spot on the Demographics tab:

[Screenshot: accessing Mendeley counts from the article details page]

 

2. Click on the Demographics tab and scroll down to view the Mendeley attention, which looks something like this:

[Screenshot: Mendeley readership breakdown on the Demographics tab]

For more information, please check out the press release.

Like this feature? Let us know by e-mailing us at info@altmetric.com or tweeting us at @altmetric.

Are you interested in altmetrics, but aren’t really sure what they are, how they might be useful for your institution, or what the Altmetric for Institutions platform can offer?

Sign up for one of our upcoming webinars to learn more – there’ll be a run through of the basics and we’ll take a look at some ways librarians, research managers, communications offices and faculty management are using the data.

Just select the session you’d like to join from the list below and click the link to sign up.

Upcoming sessions:

Wednesday 26th November, 9am ET/2pm GMT - register here

Wednesday 10th December, 10am ET/3pm GMT – register here

It’s been a week now since the 1AM conference, which we organized along with Springer, eLife, Elsevier, PLoS and the Wellcome Trust.

To get a flavour of the event here are some posts from Eleanor Beal (Royal Society), Lucy Lambe (Imperial), Brian Kelly (Cetis), a news piece in THE and Andy Tattersall (Sheffield). Barring a couple of sessions where there were technical difficulties the whole thing was streamed and you can watch it back on YouTube. We’ll be putting slides up on the website and you can already find some on the Lanyrd page.

Even better, all sessions were covered by invited bloggers and you can find those posts on the 1AM blog.

We wanted the event to be inclusive (there was absolutely no restriction on who could come, tickets were £15 and we had extensive travel grants available – I think almost everybody who applied ended up being covered) and to focus on people using alternative data & alternative outputs rather than just present two days’ worth of demos from tool makers.

To that end we compressed all the product update stuff into the first hour and a half of the schedule, then used the rest of the two days to hear from some great speakers covering librarian, funder, publisher and researcher viewpoints.

What became clear, I think, was just how broad the field is, and how that can cause problems when people from different communities come together to discuss it: ‘impact’ means different things to a publisher than to a funder, and the end goals for altmetrics in general vary from user to user. In some areas people are rushing ahead with new data and approaches, and in others they are keener to move slowly and balance promise with the desire to ensure that the data is meaningful.

A highlight for me was the discussion groups on each of the days – I thought that lots of people were engaged and many good suggestions and questions were raised. On that note we probably could have done with longer coffee breaks so that people had a chance to talk to each other more frequently.

Here’s the wrap-up we did at the end (hurriedly put together from notes taken over the two days):

If you came along – thanks again! There are going to be feedback forms going out soon so definitely highlight what worked (and what didn’t).

If you didn’t make it this year – 2AM is set for 2015, see you there!