
Here in the UK, HEFCE (the Higher Education Funding Council for England, which distributes central funding to English universities) is currently running an independent review on the role of metrics in research assessment.

As part of that, a couple of weeks ago the review panel convened a workshop at the University of Sussex: In Metrics We Trust? Prospects & pitfalls of new research metrics. I was lucky enough to attend and thought it was a really useful day, not least because it was a chance to hear some pretty compelling points of view.

I’m excited that altmetrics are in the mix of things being considered, and that time is being taken to carefully assess where metrics in general may be able to help with assessment as well as, probably more importantly, where they can’t.

How can altmetrics be used in a REF-like exercise?


Before anything else, here’s my perspective on the use of altmetrics data in the context of a REF-style formal assessment exercise. (There are lots of other uses within an institution, which we shouldn’t forget: research isn’t all about post-publication evaluation, even if it sometimes feels that way.)

When I say “altmetrics data” I mean the individual blog posts, newspaper stories, policy documents etc. as well as their counts, the number of readers on Mendeley etc. Not just numbers.

  • If we’re going to look at impact as well as quality, we must give people the right tools for the job.
  • Numbers don’t need to be the end goal. They can be a way of highlighting interesting data about an output that is useful for review, with the end result being a qualitative assessment. Don’t think ‘metrics’; think ‘indicators’ that a human can use to do their job better and faster.
  • On that note, narratives / stories seem like a good way of addressing a broad concept of impact.
  • Altmetrics data can help inform and support these stories in two main ways.
  • Figuring out which articles have had impact and in what way, then manually finding supporting evidence for it, takes a lot of effort. How do you know what direction to take the story in? Automatically collected altmetrics indicators could save time and effort, showing areas that are worth investigating further. Once you have discovered something interesting, altmetrics can help you back up the story with quantitative data.
  • They may also highlight areas you wouldn’t otherwise have discovered without access to the data. For example, altmetrics data may surface attention from other countries, sources or subject areas that you wouldn’t have thought to search for.
Using altmetrics data to inform & support: an example

Alice is an impact officer at a UK university. She identifies a research project on, say, the contribution of climate change to flood risk in the UK that is a good candidate for an impact case study.

She enters any outputs – datasets, articles, software, posters – into an altmetrics tool, and gets back a report on the activity around them.
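(An aside for the technically minded: reports like this don’t have to come from a GUI. The same headline numbers are exposed by the free Altmetric API, so a minimal sketch of that first step might look like the Python below. The /v1/doi/ endpoint is the real public one, but the response field names are from memory and the DOI is a placeholder – check the API documentation before relying on either.)

```python
# Minimal sketch: fetch the attention summary for one output (by DOI)
# from the free Altmetric API. Field names are illustrative; verify
# against the current API documentation.
import json
import urllib.request

def attention_report(doi):
    """Return the Altmetric attention summary for a DOI as a dict."""
    url = "https://api.altmetric.com/v1/doi/" + doi
    with urllib.request.urlopen(url) as resp:  # raises on 404 (unknown DOI)
        return json.load(resp)

report = attention_report("10.1234/example-doi")  # placeholder DOI
print("News outlets:    ", report.get("cited_by_msm_count", 0))
print("Tweeters:        ", report.get("cited_by_tweeters_count", 0))
print("Mendeley readers:", report.get("readers", {}).get("mendeley", 0))
```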

On a primary research paper:

http://www.altmetric.com/details.php?citation_id=269428

… she can quickly see some uptake in the mainstream media (the Guardian, the New York Times) and magazines (Time, New Scientist). She can see some social media activity from academics involved in the HELIX climate impacts project at Exeter, a Nature News correspondent, the science correspondent for Le Monde and the editor of CarbonBrief.org.

Switching to the policy side she can see that there are two citations tracked from government / NGO sources: a report from the Environment Agency and one from Oxfam.

These are documents from UK organizations that Alice’s institution may have already been tracking manually. But research, even research specifically about the UK, can be picked up worldwide:

http://www.altmetric.com/details.php?citation_id=1903399

The example above was picked up by the AWMF, a German organization similar to NICE in the UK.

Alice can support her assessment of what it all means with other indicators: by checking to see if it’s normal for papers on anthropogenic climate change and flood risks to get picked up by the international press. She can see how the levels of attention compare to other articles in the same journal.

She can do all this in five minutes. It doesn’t help with the next, more important part: Alice now needs to go and investigate whether anything came of that attention – how the Environment Agency report used the article (in this case, only to show that the research is still in its early stages), whether that report was itself put to use, and whether anything came of the interest from journalists. She still needs to speak to the researcher and do the follow-up. The altmetrics data, though, gave her some leads and a running start.

Because she’s supported by the right tools and data, getting that far takes her only five minutes.

As the relevant tools and data sources improve – along with our understanding of which kinds of impact signals can be picked up, and how – so will the usefulness of altmetrics.

Why would it ever be useful to know how many Facebook shares an article got?

In the example above we talked about news mentions and policy documents; Facebook came up in the panel discussion.

If you have ten papers and the associated Facebook data, it would be a terrible, terrible idea for almost any impact evaluation exercise to use the metrics as an end point and, say, rank the papers by the number of times each one was shared, or by their total Altmetric score, or similar. On this we should all be agreed.

However, if nine papers have hardly any Facebook data associated with them and one has lots, you should check that out and see what the story is by looking at who is sharing it and why – not ignore the indicator on the principle that you can’t tell the impact of a work from a number. The promise of altmetrics here is that they may help you discover something about broader impact that you wouldn’t otherwise have picked up on, or provide some ‘hard’ evidence to back up something you did pick up on some other way.
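To make that concrete, here’s a toy sketch (invented numbers, arbitrary threshold) of using the count purely as a triage signal: the only output is a pointer telling a human which paper deserves a closer look.

```python
# Toy triage: flag outliers for human review; never rank papers by this.
from statistics import median

fb_shares = {"paper_a": 3, "paper_b": 0, "paper_c": 5, "paper_d": 2,
             "paper_e": 1, "paper_f": 4, "paper_g": 0, "paper_h": 2,
             "paper_i": 3, "paper_j": 412}  # made-up counts

baseline = median(fb_shares.values())  # typical level for this set
for paper, shares in fb_shares.items():
    if shares > 10 * baseline:  # far above its peers (arbitrary cut-off)
        print(f"{paper}: {shares} shares - look at who shared it, and why")
```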

There are lots of ways in which indicators and the underlying data they point to can be used to support and inform assessment. Equally there are many ways you can use metrics inappropriately. In my opinion it would be a terrible waste – of potential, but also time and money – to lump these together with the valid uses and suggest that there is no room in assessment for anything except unsupported (by tools and supplementary data) peer review.

What’s in a name? That which we call a metric…

One opening statement at the workshop that particularly struck a chord with me was from Stephen Curry – you can find a written version on his blog. Stephen pointed out that ‘indicators’ would be a more honest word than ‘metrics’ considering the semantic baggage it carries:

I think it would be more honest if we were to abandon the word ‘metric’ and confine ourselves to the term ‘indicator’. To my mind it captures the nature of ‘metrics’ more accurately and limits the value that we tend to attribute to them (with apologies to all the bibliometricians and scientometricians in the room).

I’ve changed my mind about this. Before, I would have suggested that it didn’t really matter, but I now agree absolutely. I still think that debating labels can quickly become the worst kind of navel gazing… but there is no question that they shape people’s perceptions and eventual use (believe me, since starting a company called “Altmetric” I have become acutely aware of naming problems).

Another example of names shaping perception came up at the 1:AM conference: different audiences use the word “impact” in different ways, as shorthand for a particular kind of influence, or as the actual, final impact that work has in real life, or for citations, or for usage.

During the workshop Cameron Neylon suggested that rather than separate out “quality” and “impact” in the context of REF style assessment we should consider just the “qualities” of the work, something he had previously expanded on in the PLoS Opens blog:

Fundamentally there is a gulf between the idea of some sort of linear ranking of “quality” – whatever that might mean – and the qualities of a piece of work. “Better” makes no sense at all in isolation. It’s only useful if we say “better at…” or “better for…”. Counting anything in isolation makes no sense, whether it’s citations, tweets or distance from Harvard Yard. Using data to help us understand how work is being, and could be, used does make sense.

I really like this idea but am not completely sold – I quite like separating out “quality” as distinct from other things, because frankly some qualities are more equal than others. If you can’t reproduce or trust the underlying research then it doesn’t matter what audience it reached or how it is being put into practice (or rather, it matters in a different way: it’s impact you don’t want the paper to have).

Finally, I realized only recently that when most people involved with altmetrics talk about “altmetrics” they mean “the qualitative AND quantitative data about outputs”, not just “the numbers and metrics about outputs” – but this isn’t true outside of the field, and it isn’t particularly intuitive.

We’ve already started talking internally about how to best tackle the issue. Any suggestions are gratefully received!

Diving deeper into scholarly attention with Mendeley

Lately at Altmetric, we’ve been thinking about how to better showcase readership statistics from academics. We already do basic tracking of Twitter user demographics (which does include academics), but from that set of data we weren’t able to give much more detail on academic attention.

And so it seemed logical for us to turn to a different service, like Mendeley, which already tracks readership information in quite some detail. Mendeley is a software platform that is very popular amongst scholars as a reference manager and e-reader. A user who saves a paper to their Mendeley library is termed a “reader”.

In a recent blog post, a product manager at Mendeley described their readership statistics as follows:

“Mendeley Readership is one measure of how researchers engage with research on Mendeley. Simply put, it is the number of Mendeley users who have added a particular article into their personal library.”

Altmetric has already been displaying Mendeley readership counts for quite a long time, but the integration up until now has been fairly simple. (Within each Altmetric article details page, we already showed “Mendeley reader counts” on the left-hand side of the page, alongside the various other metrics.)

Because Mendeley also collects many interesting anonymised demographic stats about their users (such as location, professional status, and disciplines of research), it made a lot of sense for us to start displaying these data in addition to the reader counts.

And so we’re pleased to announce today that Altmetric now displays Mendeley readership stats and a map of reader locations. Specifically, for all articles that appear in the Altmetric database, you can now view a map of all readers, as well as a breakdown by discipline and by professional status. You can also get a link to the article’s page on Mendeley so that, if you’re a Mendeley user, you can save the paper to your own library. (Read the press release here.)
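(If you’d like to pull similar breakdowns yourself, Mendeley exposes readership statistics through its public catalog API. The sketch below is my assumption of how that looks from the outside – it is not how our integration is built – and the ?view=stats parameter and field names should be double-checked against Mendeley’s API documentation. You’ll also need your own OAuth token.)

```python
# Rough sketch: look up a paper's Mendeley readership demographics by DOI.
# Endpoint and field names are assumptions - verify against Mendeley's docs.
import requests  # third-party: pip install requests

def mendeley_readership(doi, token):
    resp = requests.get(
        "https://api.mendeley.com/catalog",
        params={"doi": doi, "view": "stats"},
        headers={"Authorization": "Bearer " + token},
    )
    resp.raise_for_status()
    docs = resp.json()          # list of matching catalog documents
    return docs[0] if docs else {}

stats = mendeley_readership("10.1234/example-doi", "YOUR_OAUTH_TOKEN")
print("Readers:   ", stats.get("reader_count", 0))
print("By status: ", stats.get("reader_count_by_academic_status", {}))
print("By country:", stats.get("reader_count_by_country", {}))
```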

Here’s an example of an Altmetric article details page that includes Mendeley readership information (in the Demographics tab).

 

You can access the new Mendeley readership data in two ways:

1. Click on the number of Mendeley readers listed on the left-hand side of an article details page, and it’ll scroll to the appropriate spot on the Demographics tab:

Accessing Mendeley counts

 

2. Click on the Demographics tab and scroll down to view the Mendeley attention, which looks something like this:

Mendeley attention

For more information, please check out the press release.

Like this feature? Let us know by e-mailing us at info@altmetric.com or tweeting us at @altmetric.

Are you interested in altmetrics, but aren’t really sure what they are, how they might be useful for your institution, or what the Altmetric for Institutions platform can offer?

Sign up for one of our upcoming webinars to learn more – there’ll be a run through of the basics and we’ll take a look at some ways librarians, research managers, communications offices and faculty management are using the data.

Just select the session you’d like to join from the list below and click the link to sign up.

Upcoming sessions:

Wednesday 22nd October, 10am ET/3pm BST – register here

Wednesday 29th October, 11am ET/4pm GMT – register here

Wednesday 5th November, 10am ET/3pm GMT – register here

It’s been a week now since the 1AM conference, which we organized along with Springer, eLife, Elsevier, PLoS and the Wellcome Trust.

To get a flavour of the event here are some posts from Eleanor Beal (Royal Society), Lucy Lambe (Imperial), Brian Kelly (Cetis), a news piece in THE and Andy Tattersall (Sheffield). Barring a couple of sessions where there were technical difficulties the whole thing was streamed and you can watch it back on YouTube. We’ll be putting slides up on the website and you can already find some on the Lanyrd page.

Even better, all sessions were covered by invited bloggers and you can find those posts on the 1AM blog.

We wanted the event to be inclusive (there was absolutely no restriction on who could come, tickets were £15 and we had an extensive travel grant scheme – I think almost everybody who applied ended up being covered) and to focus on people using alternative data & alternative outputs rather than just present two days’ worth of demos from tool makers.

To that end we compressed all the product update stuff into the first hour and a half of the schedule, then used the rest of the two days to hear from some great speakers covering librarian, funder, publisher and researcher viewpoints.

What became clear I think was just how broad the field is, and how that can cause problems when people from different communities come together to discuss it: ‘impact’ means different things to a publisher than to a funder, and the end goals for altmetrics in general vary from user to user. In some areas people are rushing ahead with new data and approaches, and in others they are keener to move slowly and balance promise with the desire to ensure that the data is meaningful.

A highlight for me was the discussion groups on each of the days – I thought that lots of people were engaged and many good suggestions and questions were raised. On that note we probably could have done with longer coffee breaks so that people had a chance to talk to each other more frequently.

Here’s the wrap-up we did at the end (hurriedly put together from notes taken over the two days):

If you came along – thanks again! There are going to be feedback forms going out soon so definitely highlight what worked (and what didn’t).

If you didn’t make it this year – 2AM is set for 2015, see you there!

It’s a busy time for the Altmetric team this week – tomorrow we’ll be running a hack day for anyone and everyone with an interest in science or altmetrics development, and on Thursday and Friday we are off to the Wellcome Collection for the 1:AM altmetrics conference, which we’ve been involved in organising with the support of some great folks from PLOS, Elsevier, Springer, and the Wellcome Trust.

Both have generated a lot of interest and a fantastic amount of support from across the academic community – it’s been great for us to see how keen people are to get involved and we’re hoping that both events will serve to further stimulate and drive the evolution of altmetrics as a useful tool for researchers, funders, institutions and publishers alike.

The conference will give us a chance to hear from a wide variety of groups, and we’ll be happy to stop and chat with anyone who wants to discuss anything in more detail. Feel free to grab one of the team as you come across them, email, or tweet any questions that you’d like answered.

For those who aren’t able to join us at the conference, don’t forget you can still get involved remotely. You might also like to keep an eye out for tweets and blog posts relating to the metrics workshop being hosted by HEFCE and the Science Policy Research Unit (SPRU) at the University of Sussex on the 7th of October, where Altmetric Founder Euan and a number of other representatives from across publishing and academia will debate the prospects and pitfalls of new research metrics.

I don’t think the top three science stars on Twitter are Neil deGrasse Tyson, Brian Cox and Richard Dawkins. The honour, I think, should go to a disembodied brain, a Japanese science journalist and a health blogger from Thailand. Obviously.

Here’s our list:

neuro_skeptic @neuro_skeptic Neuroscience, psychology and psychiatry through a skeptical lens. Just a brain with some eyes.
yuji_ikegaya @yuji_ikegaya Google translation from Japanese: Ikeya Yuji brain researchers. [...] Serialized in Weekly Asahi, Yomiuri Shimbun, economist, at Kooyong other. I will introduce the latest information on brain research at Twitter.
thidakarn @thidakarn Google translation from Thai: Doctor lazy feline Issued in 11 volumes I want to be healthy, Thailand . I have no patients for cats. . Gosh, ^^ doctor .
edyong209 @edyong209 Science writer, freelance journalist, husband. I CONTAIN MULTITUDES–on partnerships between animals & microbes–out in 2016.
ananyo @ananyo Science journalist. Community editor for @TheEconomist. Opinions expressed are my own. Especially those that happen to be correct.
aller_md @aller_md Allergist – Twittering on #allergy, #asthma & #immunology. Associate Professor of Immunology. Del Salvador University, Buenos Aires. Chief Editor WAO website
erictopol @erictopol Cardiologist, researcher, Editor-in-Chief, Medscape, author of The Patient Will See You Now (to be released 1/15)
noahwg @noahwg Senior Editor @nature | Engagement Editor @FrontYoungMinds. These thoughts are mine alone since nobody else will take responsibility.
andybeetroot @andybeetroot Professor of Applied Physiology at Exeter University. Endurance sports training, physiology and nutrition expert. Not as cool as Gary Numan.

 

Some context: Science this week is carrying a news piece on the top 50 science stars of Twitter. Metrics, science and Twitter! I was going to go to bed early for once tonight but if ever there was a time for an opportunistic blog posting then this is it.

The article is plainly meant to be taken lightheartedly, like the K-index paper, but interestingly both have come in for some (fair, I reckon, if sometimes harshly delivered) criticism for not covering / valuing science communicators.

Selection problems aside, I think the Science methodology is fine, but you do end up with a lot of stars who happen to be scientists and are on Twitter, rather than people who are stars because of what they do on Twitter, if that makes sense. I reckon a better system would start off by taking everybody on Twitter and then looking at:

  • how often do they tweet about research, and how often are those tweets retweeted, hat tipped or ‘via’ed (let’s treat all of these – RTs, MTs, HTs, vias – as retweets)

And we could help people interpret that data by also pulling in:

  • how many unique accounts are doing the retweeting
  • how global those accounts are – how many unique countries are they from?
  • what’s the reach of those accounts? What’s their total number of followers?
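For the curious, here’s a rough sketch of how that could be computed. Two loud assumptions: the input is a flat list of retweet records with hypothetical field names, and “tweets about research” is approximated with a crude DOI regex (a real pipeline would resolve shortened links and match other scholarly identifiers too).

```python
# Sketch: rank accounts by retweets of their research-linking tweets,
# plus the supporting indicators listed above. Record fields are invented.
import re
from collections import defaultdict

DOI_RE = re.compile(r"\b10\.\d{4,9}/\S+")  # crude "links to research" test

def rank_accounts(retweets):
    stats = defaultdict(lambda: {"retweets": 0, "retweeters": set(),
                                 "countries": set(), "reach": 0})
    for rt in retweets:
        if not DOI_RE.search(rt["text"]):
            continue  # not about research, by our narrow definition
        s = stats[rt["account"]]
        s["retweets"] += 1
        if rt["retweeter"] not in s["retweeters"]:
            s["retweeters"].add(rt["retweeter"])
            s["reach"] += rt["retweeter_followers"]  # upper bound only
        s["countries"].add(rt["retweeter_country"])
    return sorted(stats.items(),
                  key=lambda item: item[1]["retweets"], reverse=True)
```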

This sort of approach opens up the list to science communicators. The caveat is that a lot depends on how you define ‘research’. Let’s say we go for the Altmetric definition, which is that we consider a tweet to be about research if it links to a paper, book or dataset with a scholarly identifier. This means news stories and blog posts won’t get included. So this kind of stuff mentioned in the Science piece is a no go:

“Gilbert says he prefers to tweet materials that appeal to a general audience, rather than complex scientific papers”

But we will be measuring the kind of activity that @erictopol likes:

“Now, he starts his workday browsing through his Twitter feed for news and noteworthy research in his field”

Eric obviously contributes to Twitter as well as consuming data from it – he jumps from 17th place on the Science list to 7th on a list ordered by retweets.

Conveniently we have all this data going back to around Jan 2012, which is how I can tell. I’ve uploaded the numbers for the ‘top’ 1000 accounts by number of retweets to figshare (which comes in at #57, incidentally).

| Account | Papers tweeted, then retweeted by others | Retweets | Unique retweeters | Sum of followers of unique retweeters | Unique countries of retweeters |
| --- | --- | --- | --- | --- | --- |
| neuro_skeptic | 5,213 | 45,442 | 14,133 | 9,528,055 | 108 |
| yuji_ikegaya | 193 | 27,631 | 15,272 | 8,141,617 | 88 |
| thidakarn | 194 | 25,501 | 17,121 | 2,745,015 | 73 |
| edyong209 | 1,005 | 15,324 | 9,331 | 10,844,071 | 93 |
| ananyo | 611 | 15,144 | 10,144 | 5,820,330 | 110 |
| aller_md | 3,297 | 11,490 | 751 | 274,999 | 40 |
| erictopol | 646 | 11,439 | 5,799 | 4,223,219 | 83 |
| noahwg | 488 | 10,938 | 8,224 | 5,151,045 | 95 |
| andybeetroot | 954 | 10,815 | 3,461 | 1,112,243 | 43 |
| bengoldacre | 280 | 10,064 | 7,686 | 7,553,402 | 75 |
| uranus_2 | 4,400 | 9,582 | 1,936 | 910,825 | 30 |
| rami_shaath | 55 | 9,251 | 6,052 | 8,598,032 | 63 |
| trishgreenhalgh | 1,211 | 9,249 | 4,080 | 2,594,950 | 62 |
| hayano | 184 | 9,151 | 5,159 | 4,062,774 | 63 |
| lulu__19 | 2 | 7,165 | 5,483 | 3,592,075 | 33 |

 

That’s the top of the table for people, rather than people + organizations. The real star, if we don’t discriminate against non-human accounts, is @naturenews with an epic 174k retweets from 66k different people who have a combined upper-bound follower count of 39M.

(That said, the follower count should be taken with a pinch of salt. It’s simply a sum of followers and doesn’t take duplicates into account, and many of the retweeters will share followers. That’s why it can only be considered an upper bound.)

Science, NEJM and the BMJ come pretty close behind. There’s quite a lot of overlap with the Science list – Ben Goldacre, Jonathan Eisen and Vaughan Bell are all still there, but they’re joined by people like Carl Zimmer, Mo Costandi and Trish Greenhalgh. I haven’t looked at genders, but the data’s all there on figshare, so feel free to investigate.

I quite like the fact that @uberfacts also makes an appearance. Uberfacts is a funny fact-of-the-day-type service that has only tweeted about papers seven times since we started tracking Twitter – in fact it’s the same paper tweeted seven times, which in turn has been retweeted by eight and a half thousand people. The paper, in case you’re wondering, is perennial altmetrics favourite Winnie the Pooh: A Neurodevelopmental Perspective.

So finally, on that note… don’t take lists like this too seriously.

| Account | Papers | Retweets | Unique retweeters | Sum of followers | Unique countries |
| --- | --- | --- | --- | --- | --- |
| naturenews | 4633 | 174528 | 66957 | 39528901 | 181 |
| sciencemagazine | 5382 | 52397 | 21624 | 13385965 | 136 |
| neuro_skeptic | 5213 | 45442 | 14133 | 9528055 | 108 |
| nejm | 1943 | 42586 | 18284 | 7322764 | 130 |
| bmj_latest | 3489 | 34067 | 16194 | 6429678 | 114 |
| hiv_insight | 14301 | 31354 | 3723 | 1617000 | 82 |
| thelancet | 1729 | 31087 | 16317 | 10235287 | 133 |
| yuji_ikegaya | 193 | 27631 | 15272 | 8141617 | 88 |
| thidakarn | 194 | 25501 | 17121 | 2745015 | 73 |
| jama_current | 1570 | 20645 | 9692 | 3371884 | 101 |
| blackphysicists | 11245 | 15723 | 1515 | 1374120 | 63 |
| edyong209 | 1005 | 15324 | 9331 | 10844071 | 93 |
| naturemagazine | 1316 | 15268 | 9822 | 5084348 | 105 |
| ananyo | 611 | 15144 | 10144 | 5820330 | 110 |
| plosone | 3077 | 14145 | 6317 | 4132059 | 93 |
| the_bdj | 1789 | 12616 | 2754 | 501496 | 62 |
| aller_md | 3297 | 11490 | 751 | 274999 | 40 |
| erictopol | 646 | 11439 | 5799 | 4223219 | 83 |
| noahwg | 488 | 10938 | 8224 | 5151045 | 95 |
| andybeetroot | 954 | 10815 | 3461 | 1112243 | 43 |
| bengoldacre | 280 | 10064 | 7686 | 7553402 | 75 |
| uranus_2 | 4400 | 9582 | 1936 | 910825 | 30 |
| rami_shaath | 55 | 9251 | 6052 | 8598032 | 63 |
| trishgreenhalgh | 1211 | 9249 | 4080 | 2594950 | 62 |
| hayano | 184 | 9151 | 5159 | 4062774 | 63 |
| uberfacts | 7 | 8735 | 8631 | 871744 | 87 |
| astrophypapers | 5432 | 7397 | 960 | 909608 | 45 |
| biomedcentral | 2218 | 7264 | 2942 | 1330733 | 79 |
| scphrp | 2223 | 7193 | 1973 | 686923 | 38 |
| lulu__19 | 2 | 7165 | 5483 | 3592075 | 33 |
| mocost | 1093 | 7105 | 3698 | 4045278 | 79 |
| sientetegood | 572 | 7021 | 2475 | 514634 | 34 |
| jeukendrup | 357 | 7008 | 3118 | 1097030 | 50 |
| naturemedicine | 685 | 6923 | 3895 | 2738129 | 72 |
| naturebiotech | 1013 | 6808 | 3136 | 1804731 | 66 |
| whsource | 1000 | 6798 | 2549 | 1758456 | 47 |
| miakiza20100906 | 910 | 6583 | 1752 | 1950709 | 26 |
| caloriesproper | 1757 | 6520 | 1499 | 889925 | 36 |
| bjsm_bmj | 322 | 6386 | 2945 | 893033 | 50 |
| ibis_journal | 2298 | 6330 | 1333 | 481663 | 43 |
| chemstation | 557 | 6145 | 3623 | 1130853 | 38 |
| genetics_blog | 1437 | 6000 | 1733 | 752207 | 53 |
| dr_chasiba | 415 | 5944 | 2781 | 2692085 | 38 |
| sharethis | 2895 | 5939 | 2962 | 1309665 | 71 |
| nsca | 858 | 5860 | 2282 | 429472 | 41 |
| tarareba722 | 3 | 5780 | 5757 | 2252308 | 58 |
| prison_health | 3220 | 5779 | 1529 | 696987 | 38 |
| neuroconscience | 1897 | 5724 | 2045 | 1201640 | 57 |
| conradhackett | 25 | 5675 | 5185 | 6447081 | 100 |
| bmc_series | 2015 | 5569 | 1637 | 650866 | 59 |
| juancivancevich | 1909 | 5481 | 363 | 197059 | 29 |
| medskep | 808 | 5310 | 2418 | 2108379 | 57 |
| sagesociology | 1464 | 5284 | 2281 | 1059603 | 67 |
| mathpaper | 4645 | 5086 | 325 | 146787 | 20 |
| rincondesisifo | 1605 | 4903 | 1229 | 743276 | 22 |
| exerciseworks | 1040 | 4872 | 2496 | 769800 | 44 |
| figshare | 1514 | 4862 | 2516 | 1494532 | 69 |
| richardhorton1 | 627 | 4808 | 2806 | 2461315 | 70 |
| brodalumab | 60 | 4801 | 3538 | 865513 | 63 |
| greenjournal | 1290 | 4790 | 1291 | 418062 | 54 |
| annalsofsurgery | 946 | 4776 | 1039 | 259576 | 44 |
| naturerevmicro | 1621 | 4710 | 1503 | 497756 | 56 |
| mackinprof | 475 | 4599 | 2140 | 745163 | 41 |
| addthis | 3348 | 4491 | 2418 | 1150052 | 81 |
| wbpubs | 148 | 4467 | 2746 | 1476093 | 116 |
| vaughanbell | 432 | 4373 | 2812 | 3129213 | 78 |
| academicssay | 18 | 4336 | 4103 | 1838349 | 70 |
| carlzimmer | 247 | 4309 | 3278 | 3993438 | 70 |
| adammeakins | 519 | 4250 | 2095 | 733219 | 42 |
| plosmedicine | 574 | 4243 | 2447 | 2024688 | 71 |
| scireports | 1584 | 4224 | 2153 | 870578 | 53 |
| hughesdc_mcmp | 1137 | 4219 | 1218 | 388652 | 32 |
| msseonline | 557 | 4136 | 1743 | 360889 | 33 |
| nature | 533 | 4121 | 3018 | 1391217 | 69 |
| keith_laws | 1470 | 4105 | 1589 | 1635308 | 45 |
| jaapseidell | 931 | 4104 | 1295 | 400341 | 16 |
| eqpaho | 831 | 3924 | 1683 | 1087857 | 59 |
| moorejh | 1197 | 3924 | 1589 | 1271985 | 56 |
| hotsuma | 868 | 3874 | 1825 | 945085 | 35 |
| dylanwiliam | 220 | 3869 | 2518 | 986642 | 38 |
| darwin2009 | 309 | 3862 | 2692 | 960746 | 70 |
| hlth_literacy | 2826 | 3797 | 1423 | 913847 | 37 |
| rawhead | 6 | 3716 | 3705 | 2247854 | 52 |
| jamapeds | 375 | 3715 | 1293 | 592233 | 43 |
| drjcthrash | 1682 | 3682 | 989 | 340171 | 43 |
| trished | 1059 | 3648 | 1867 | 2065425 | 46 |
| angew_chem | 2378 | 3648 | 997 | 227161 | 47 |
| natrevneurol | 1778 | 3628 | 1127 | 365716 | 43 |
| critcaremed | 805 | 3615 | 879 | 175567 | 47 |
| tapasdeciencia | 647 | 3491 | 2367 | 702467 | 46 |
| blogdokter | 57 | 3491 | 3114 | 1217202 | 41 |
| health_affairs | 371 | 3473 | 2041 | 1799809 | 44 |
| m_m_campbell | 814 | 3472 | 2471 | 1835724 | 63 |
| jamainternalmed | 655 | 3375 | 1635 | 640007 | 48 |
| wiringthebrain | 1029 | 3362 | 1346 | 1085261 | 44 |
| jadvnursing | 598 | 3335 | 967 | 313516 | 27 |
| profabelmendez | 1644 | 3333 | 963 | 1054885 | 48 |
| juangrvas | 797 | 3332 | 1199 | 481684 | 27 |
| feedly | 2264 | 3292 | 1004 | 619024 | 52 |

This Tuesday marked the first ever Altmetric and figshare publisher product day. Invited delegates from across the industry came together in Camden to hear from our guest speakers, and to participate in product workshops to help shape future development.

Up first were Mark Hahnel, founder of figshare, and Euan, from Altmetric, to give an update on what each company had been working on, and what we were hoping to get from the day.

They were followed by David de Roure of the ESRC, who gave an interesting insight into how the reporting of research is evolving beyond the standard PDF and into the open distribution of results, data, and alternative outputs. David’s talk included some fascinating anecdotes and prompted us all to give thought to where our own tools and services were leading.

Natasha Martineau, Head of Research Communications at Imperial College London, took to the stage next. Natasha gave a great overview of all the work that the Communications and Public Affairs Division at Imperial does to raise the profile and impact of its research – from the online news site for news and longer-form pieces about research, to working with journalists and running public engagement events. She also talked about how they evaluate this work, how they work with researchers to report on the results of their efforts, and how communications and public engagement can increase the impact of research. Natasha’s talk raised a lot of interesting talking points about how publishers can work better with institutions to ensure effective coverage, and how Imperial academics had responded to the new initiatives.

We also invited along innovators Sparrho and WriteLaTeX, who each presented a quick overview of their platforms and further demonstrated how disruptive technologies can be effective in supporting, and helping to evolve, established processes.

Insight into how publishers are making our current tools work for them came from Julie Sutton at Taylor and Francis, who gave a great run-through of their figshare implementation (including some very clever-looking data!), and there were some suggestions from Euan on how publishers could make best use of the Altmetric data across marketing, editorial, and sales departments, as well as ensuring website users understand the context of article-level metrics.


The afternoon was dedicated to product sessions, with attendees given the opportunity to discuss two products of their choice in a group format.

Run by the Altmetric and figshare teams, these sessions gave us a chance to ask some key questions of our audience and get valuable feedback, which we now hope to align with when establishing future development priorities.

It was a really interesting day – having never run one before we weren’t quite sure what to expect, and were delighted that everyone who attended was so willing to give us their time and share their opinions. We hope to run more in the future so do drop us a line if this sounds like your kind of thing – we are always looking for feedback on how we can make our tools work better for all of our users.

The Altmetric team is going to be busy over the next few months! It’s back to school time and we’ll be hitting the road again to meet, greet, present and learn from all of you at loads of different events in the run up to Christmas (too early?! get shopping…)

If you’d like to hear more about about how we work and the tools we offer for publishers, institutions and funders, do get in touch to arrange a meeting or stop by our sessions at the following events:

ALPSP Annual Conference
10th – 12th September, London, UK
Altmetric Founder Euan Adie will be speaking in the Metrics and More session along with representatives from Elsevier and Wiley on the morning of Thursday the 11th – drop by the session or email us to arrange a chat.

1:AM Altmetrics Conference
25th – 26th September, London, UK
The Altmetric team will be running, presenting and attending the 1st altmetrics conference, to be held at the Wellcome Collection in London. Over the course of 2 days we will hear from publishers, funders and institutions to see how they are using (or hope to use) altmetrics, and to discuss how the discipline might develop. The event will be live streamed online, so tune in for further details!

Frankfurt Bookfair
8th – 12th October, Frankfurt, Germany
It’s that time of year again already! We’ll be at the Frankfurt Bookfair to meet and greet publishing’s finest – and there’ll be a half hour presentation from Euan on the Professional and Scientific Information hotspot stage in Hall 4.2 at 2pm on Thursday the 9th. Do come along to find out more about how we can help you deliver more value to your authors, readers, and internal teams, and get in touch if you’d like to arrange a time to meet.

Society of Research Administrators Annual Meeting
18th – 22nd October, San Diego, CA
Sara Rouhi, Altmetric Product Sales Manager, will be running a lunchtime session, Altmetrics: A Practical Introduction, on Tuesday the 21st – she’ll also be on hand at booth #604 to answer any questions you might have. Email Sara if you’d like to arrange a meeting.

Internet Librarian International
20th – 22nd October, London, UK
We love librarians! For some time now we’ve offered free tools to librarians, researchers and institutional repository managers to help them get started and find their way around altmetrics.

We’ll be at Internet Librarian International to spread the word; Cat will be running a workshop session on the 20th in conjunction with the nice folks at the University of Sheffield and Wolverhampton University, entitled Altmetrics in the Academy, and Euan will be speaking in the plenary on altmetrics on Wednesday the 22nd. It’s shaping up to be an interesting event – do join us for one of the sessions, or email us for more details.

Digital Library Forum
27th – 29th October, Atlanta, GA
Altmetric Product Sales Manager Sara Rouhi will be running a lunchtime session, Applied Altmetrics: Implementations and Uses within an Institution, on Monday the 27th – join her to find out more about how altmetrics can help you and your faculty in gathering evidence of the wider impact of your research.

Charleston 2014
5th – 8th November, Charleston, SC
Stop by our lunchtime discussion, The promise and perils of “alternative metrics”: What librarians need to know about the #altmetrics landscape, to get to grips with the latest developments and new ideas. Altmetric’s Sara Rouhi will be there to answer any and all questions you might have – so feel free to ask!

UKSG One Day Conference and Forum
20th – 21st November, London, UK
We’ll be present on both days so stop by the booth to say hi! Terry Bucknell will be giving a lightning talk overview of all things Digital Science on the afternoon of the 20th.

7th UNICA Scholarly Communication Seminar
27th – 28th November, Rome, Italy
Details TBC!

And likely a few more to follow over the next few months! Do feel free to drop us a line if you have any questions or would like to arrange a chat.

We listened to your feedback…

… and we’ve made it easier to export article- and journal-level attention data from the Altmetric Explorer! Any users who frequently insert Altmetric data into custom reports, spreadsheets, and other documents will now find the exporting capabilities more reliable.

The new improvements specifically affect the “Export articles” and “Export journals” buttons, found in the Altmetric Explorer’s Articles and Journals tabs, respectively:

Export - Articles and Journals

The changes also apply to the “Export to Excel” buttons for any saved workspaces (formerly known as Reports) in the “My Workspaces” dashboard:

Export - My Workspaces

 

What’s new?

First of all, exported data from the Explorer will now be provided as a spreadsheet in .csv format (instead of in .txt format, as was previously the case). This means that you’ll now be able to open the file directly in Microsoft Excel (or your spreadsheet application of choice), without having to use an import wizard first.
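A handy side effect is that the exports are now easy to process programmatically too. Here’s a minimal sketch, with a hypothetical filename and illustrative column names (check the header row of your own export):

```python
# Sketch: read an Explorer export with the standard library. The filename
# and column names below are hypothetical, not the real export schema.
import csv

with open("altmetric_export.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

print(len(rows), "articles exported")
for row in rows[:5]:
    print(row.get("Title"), row.get("Altmetric Attention Score"))
```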

Because exported spreadsheets tend to be rather large, we also decided to switch to delivering them via an e-mailed download link, rather than by a direct download from the Explorer. As a result, you’ll see a message like this when you click on an Export button:

Export articles box

By processing data in this way, we’re able to produce the exports much more quickly and efficiently. As such, when you request a data export, the resulting spreadsheet will now be delivered to the e-mail address that is registered to the Altmetric Explorer account. You don’t need to worry about large attachments showing up in your inbox though; a link to download the spreadsheet (valid for 7 days) will be provided in the e-mail message.

You should get the export in your inbox within minutes of requesting the spreadsheet. Just make sure that messages from our e-mail address, reports@altmetric.com, don’t get sent to your spam folder.

 

Like this feature? Want others? Get in touch with the team at support@altmetric.com or suggest some ideas here.

Yesterday Altmetric founder Euan took part in our first ever Ask Me Anything on the science subreddit. With the session title “Misuse of the Journal Impact Factor and focusing only on citations sucks, Ask Me Anything” the stage was set for an interesting and provocative discussion – and that is exactly what we got.

You can view the full session here.

Questions came in thick and fast from researchers and institutions, varying from how funders can or should be making use of altmetrics, to how we might encourage their take-up amongst the wider research community, to how research is typically reflected in the media.

For an hour-long session the time went very quickly, and Euan was certainly having to think on his feet as he put together his answers (all of which he was keen to give due attention before posting).

It was great to see so many different people actively involved in the discussion – you can recap what happened on Twitter via the #askeuan hashtag – and as always, if you have any questions or comments for us, feel free to ask @altmetric or email info@altmetric.com.

There are still a few questions on Reddit waiting for an answer, and Euan is hoping to get to them in the next few days. Thanks to all who contributed!