We listened to your feedback…

… and we’ve made it easier to export article- and journal-level attention data from the Altmetric Explorer! Any users who frequently insert Altmetric data into custom reports, spreadsheets, and other documents will now find the exporting capabilities to be more reliable.

The new improvements specifically affect the “Export articles” and “Export journals” buttons, found in the Altmetric Explorer’s Articles and Journals tabs, respectively:

Export - Articles and Journals

The changes also apply to the “Export to Excel” buttons for any saved workspaces (formerly known as Reports) in the “My Workspaces” dashboard:

Export - My Workspaces

 

What’s new?

First of all, exported data from the Explorer will now be provided as a spreadsheet in .csv format (instead of in .txt format, as was previously the case). This means that you’ll now be able to open the file directly in Microsoft Excel (or your spreadsheet application of choice), without having to use an import wizard first.
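Since the export is now a plain .csv file, it can also be read programmatically with any standard CSV library. A minimal sketch in Python follows; note that the column names here are purely illustrative, as the real export’s headers may differ:

```python
import csv
import io

# Illustrative sample only: the actual Explorer export's column
# names and layout may differ from this made-up example.
sample = io.StringIO(
    "Title,Altmetric Score,DOI\n"
    "An example paper,42,10.1000/example\n"
)

# DictReader maps each row to a dict keyed by the header row
rows = list(csv.DictReader(sample))
for row in rows:
    print(row["Title"], row["Altmetric Score"])
```

The same approach works on the downloaded file itself by replacing the `io.StringIO` sample with `open("export.csv", newline="")`.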

Because exported spreadsheets tend to be large, we also decided to switch to delivering them via an e-mailed download link, rather than as a direct download from the Explorer. As a result, you’ll see a message like this when you click on an Export button:

Export articles box

By processing data in this way, we’re able to produce the exports much more quickly and efficiently. When you request a data export, the resulting spreadsheet will now be delivered to the e-mail address registered to your Altmetric Explorer account. You don’t need to worry about large attachments showing up in your inbox, though; the e-mail will contain a link to download the spreadsheet, valid for 7 days.

The export should arrive in your inbox within minutes of your request. If it doesn’t, check that messages from our e-mail address reports@altmetric.com aren’t being sent to your spam folder.

 

Like this feature? Want others? Get in touch with the team at support@altmetric.com or suggest some ideas here.

Yesterday Altmetric founder Euan took part in our first ever Ask Me Anything on the science subreddit. With the session title “Misuse of the Journal Impact Factor and focusing only on citations sucks, Ask Me Anything” the stage was set for an interesting and provocative discussion – and that is exactly what we got.

You can view the full session here.

Questions came in thick and fast from researchers and institutions, and they varied from how funders can or should be making use of altmetrics, to how we might encourage their take up amongst the wider research community, to how research is typically reflected in the media.

For an hour-long session, time went very quickly, and Euan was certainly having to think on his feet as he put together his answers (all of which he was keen to give due attention to before posting).

It was great to see so many different people actively involved in the discussion – you can recap what happened on twitter via the #askeuan hashtag – and as always if you have any questions or comments for us feel free to ask @altmetric, or email info@altmetric.com.

There are still a few questions on Reddit waiting for an answer, and Euan is hoping to get to them in the next few days. Thanks to all who contributed!

Has your work been referenced in public policy?

Whenever we talk to people about altmetrics, we explain how the attention data we collect may be able to help identify non-traditional forms of impact. So rather than only relying on citation counts and other traditional bibliometrics, we’re also interested in finding out about the impact of research in society at large.

As you might have already learned from our June press release announcing the launch of Altmetric for Institutions, we recently started tracking some highly impactful new sources of attention: policy and guidance documents. Specifically, we are now looking for references made to research papers within these documents. We’re really proud of this addition (represented by a violet stripe in the donut), as policy documents are arguably some of the most important sources we’ve ever tracked.

 

How does our policy tracking work?

Policy documents are often published in PDF format. Sometimes these documents will have reference sections, which list all the articles, books, and other publications that have been cited in the text. We begin by automatically processing each PDF document, pulling out plain text and searching for possible references to scholarly articles, line by line. Afterwards, the references are checked against the PubMed and CrossRef databases to determine whether or not they unambiguously refer to actual scholarly articles. Once a match is made, the policy document is added to that article’s details page.
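To make the line-by-line scanning step concrete, here is a minimal Python sketch of how candidate references might be picked out of extracted PDF text. The DOI regex, function name, and sample data are all illustrative assumptions, not Altmetric’s actual code, and real reference matching (against PubMed and CrossRef) is far fuzzier than this:

```python
import re

# Simplified DOI pattern; real-world reference matching needs to
# handle plain-text citations without DOIs as well.
DOI_RE = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def find_candidate_references(plain_text):
    """Scan extracted PDF text line by line for DOI-like strings."""
    candidates = []
    for line in plain_text.splitlines():
        candidates.extend(DOI_RE.findall(line))
    return candidates

text = (
    "References\n"
    "1. Smith J et al. BMJ 2006. doi:10.1136/bmj.38875.675486.55\n"
    "2. A line with no citation in it.\n"
)
print(find_candidate_references(text))
```

Each candidate string would then be looked up in a bibliographic database to confirm it refers to a real scholarly article before any link is recorded.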

Here’s an example of a World Health Organization policy document on collaborative TB/HIV activities, and the details page of a BMJ article that was mentioned in the document.

WHO policy mention example

 

Which policy documents are being tracked?

Since each website that hosts policy documents is different, we actually have to write a custom crawler for each individual policy document site. (Policy documents are not stored in the same way from source to source, and can’t be retrieved from an API, hence the need for a custom solution for each site.)
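A common way to organise this kind of one-crawler-per-site setup is a registry that maps each source to its own scraping function. The sketch below is purely hypothetical (the names and logic are ours for illustration, not Altmetric’s real internals):

```python
# Hypothetical registry mapping each policy site to its custom scraper.
SCRAPERS = {}

def scraper(site):
    """Decorator that registers a site-specific scraping function."""
    def register(func):
        SCRAPERS[site] = func
        return func
    return register

@scraper("who.int")
def scrape_who(page_text):
    # Each site needs its own parsing logic; here we just collect
    # whitespace-separated tokens that look like PDF links.
    return [token for token in page_text.split() if token.endswith(".pdf")]

links = SCRAPERS["who.int"]("see https://example.org/tb-hiv-policy.pdf for details")
print(links)
```

Adding support for a new policy source then means writing one new function and registering it, without touching the rest of the pipeline.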

We’re aiming to track over 40 policy sites by the end of 2014. Currently, some sources that you might see on Altmetric article details pages include, but are not limited to:

 

Have any policy sources to suggest?

Of course, our target list of policy sources is by no means complete! We’re always looking for further suggestions to improve our coverage, so if you think there’s a particular organisation that has policy documents we should be tracking, please send us a message at support@altmetric.com to let us know. We’d be happy to send you some Altmetric goodies as a thank you!

Join us for the first ever Altmetric Ask Me Anything at 6pm BST/1pm EST on the 21st of August. Founder Euan Adie will be live on reddit to answer all and anything in the session:

Misuse of the Journal Impact Factor and focusing only on citations sucks.

“It’s Impact Factor season in the academic publishing world, with journals getting their latest scores… but while it’s an interesting indicator, journal-level stats and looking only at citations give you just part of the picture when it comes to dissemination and impact.

What about patient advocate groups, or engineers, or ER doctors? What about policy makers and standards bodies? They don’t cite work but they may read or use it. Many papers have “impact” in a very real sense that isn’t reflected in the number of citations they then receive.

The field of altmetrics is about looking at a wider variety of indicators – social media, mainstream media, patents, policy documents, download stats – that relate to a wider variety of outputs – datasets and software as well as papers – to supplement peer review and citations. You may have seen the altmetrics manifesto already at altmetrics.org.

I think altmetrics and the changes that they’re helping to drive present a great opportunity over the next few years for scientists to change how they are assessed for tenure, promotion and for grants: choosing what they should be judged on, making metrics their servant not master and democratising the data behind it all.

But should we care about wider impact? Impact definitely isn’t the same thing as quality, is quality all we should be concerned about?

Have you ever been in the situation where you felt it would have been helpful to be able to demonstrate success beyond citations or the IF of the journal you published in?”

If you wish to participate in the Ask Me Anything forum, you will need to register with Reddit. Keep an eye on the Altmetric twitter feed for real-time updates as the session takes place – and get your questions ready!

How do you usually share your article’s Altmetric data?

We’ve seen many authors (and sometimes their publishers!) get excited about the level of attention their article has received, and then tweet a link to the corresponding details page. The cool thing about tweeting the article details page is that under the tweet, you can actually see a snippet of the details page with the article’s title and Altmetric score. Of course, clicking through to the details page itself gives you the breakdown of attention by source, with the actual mentions listed.

 

What happens when I print an article details page?

Maybe you want to show people in your department what altmetrics are about, and need some concrete examples with actual data and real conversations. It’s nice to be able to tweet a link to your article’s details page, but if you wanted to share the page with your colleagues during an offline meeting, what you’d really need is something on paper.

Up until now, it hasn’t been straightforward to print out an Altmetric article details page. You could hit “Print”, but you’d only get the tab you were currently viewing. If you wanted to print everything from each tab, you’d have to click into and print each tab individually. The good news is that, thanks to some user feedback, we’ve now fixed up these printing issues to make things much, much easier!

Try out our new print-friendly format using this details page for a Nature Climate Change article. Just hit “Print” in your browser, and you should get all the data contained within those details pages, with each source tab separated by a horizontal line. You’ll even get the score and demographics tabs listed at the end of the document, along with expanded “Score in context” snippets on the left-hand part of the page.

Like this feature? Want others? Get in touch with the team at support@altmetric.com or suggest some ideas here.

We are excited to announce that registration is now open for the 1:AM Altmetrics conference, to be held in London on the 25th and 26th of September this year.

This conference is the first of its kind to be held in Europe, and we invite librarians, funders, publishers, researchers, and anyone interested in new forms of research evaluation and impact assessment to join us for what is sure to be an interesting few days.

The aim is to provide an opportunity to share ideas and discuss how and why altmetrics are or should be applied in the scholarly arena. We will hear from publishers, funders and institutions to learn about their experiences (good and bad), and encourage delegate feedback and input in the group workshop sessions.

Amongst the many varied contributions will be an update from James Wilsdon, Chair of the HEFCE steering group on Metrics, and a presentation from Todd Carpenter, Executive Director of the National Information Standards Organization in the US, which is currently undertaking a review of altmetrics with the aim of developing standards across the discipline.

We are also inviting delegates to share their ideas and experiences in communicating research or the application of altmetrics to their workflow in an informal poster session. Just fill in this form to let us know you’d like to bring one along – we’ll display them in the refreshment area and during the drinks reception. It’s a great opportunity to show off some of the activity that’s been taking place within your organisation, and we can’t wait to learn about all of the brilliant initiatives that are underway.

The conference is organised (and kindly supported) by representatives from the Wellcome Trust, Altmetric, PLOS, Elsevier, Springer and eLife, and delegate fees are just £15. Following a day of invited speakers and interactive sessions, the Thursday evening provides a chance to get to know colleagues from the wider community during a drinks reception – and there is also the opportunity to take a guided tour of the Wellcome Collection exhibitions.

Travel grants for the conference are on offer to librarians and researchers wishing to attend – to apply please see the details on this page (the deadline for applications is the 29th of August).

Places are likely to fill up fast so register today.

We look forward to having you join us in London!

 

We recently had a party to celebrate the launch of Altmetric for Institutions, and all of the hard work that has gone into it over the last few months.

It was a nice chance to relax, have a beer (and a donut!) and show off our shiny new platform to all of our colleagues.

Altmetric developer Shane was photographer for the night – here is a selection of the action he captured:

party

We’re already busy working on the next developments (and the next box of donuts…) so stay tuned for more news from us soon.

To celebrate the launch of our new video, Founder Euan Adie takes a look back over the first few years of Altmetric – from the initial idea to building the team we are today: 

Euan Adie

What first got you interested in altmetrics?
I used to work in bioinformatics, in a lab at Edinburgh University. I wrote a blog about interesting papers or methods I’d come across, and there was a great set of computational biology blogs by others that I’d read every day.

I found those blogs far more useful than, say, journal club. I always wondered why you couldn’t see links out to blogs from journal articles, or conversely have an index of which papers were being mentioned by who. There’s a lot of good discussion happening around research online and it isn’t usually linked to where it’d be most useful to see it, next to the research in question.

So that was one thing that got me interested in these kinds of ideas. The other was a broader problem about getting credit (and funding) for your work. It’s crazy that we still have to do things like write articles about datasets or software – not because people need to read the article, but because without one the dataset or software can’t be formally cited, and its use may not be recognized.

What have been the biggest challenges in the first few years of Altmetric.com?
Just staying afloat! Not in the financial sense, as we were pretty lucky to have paying customers from fairly early on. But there was always a lot to do, and we were a very small team, in some cases with no prior experience in scaling up to meet new challenges.

Getting investment from Digital Science helped with this, as they were able to offer some support beyond just money. In particular Aldo de Pape at Digital Science was really helpful in a hands on kind of way in the early days.

What’s your favourite thing about going to work every day?
Interacting with the team. Which sounds really cheesy, but is true. One advantage of having a start-up is being able to choose who you get to work with.

It makes me very happy to look around and see talented, passionate people doing a much better job than I ever could solving technical problems, leading technology or marketing, in sales and in managing product development.

That and the free snacks box.

What advice would you give to someone else considering starting up their own company?
I’d suggest they do a thought experiment. What would they say to somebody considering a career in scientific research? I reckon the two paths are pretty similar.

It’s potentially a very satisfying career, but it’ll take over your life, and while you can contribute something very positive to science, it’s unlikely that you’re going to get rich or win a Nobel prize at the end of it.

A lot comes down to what you want to get out of the experience. I like being able to do things as a startup rather than just discuss them with dozens of stakeholders in endless meetings. I like being able to work on the kinds of problems I love without having to answer to others. The downside is that your start-up’s responsibility to its staff and customers falls ultimately on you.

More broadly the advice I’d give is:

  • If you do decide to do it, think about finding a co-founder – you don’t need to go it alone
  • … then don’t put off jumping in; there is never going to be a ‘good’ time.
  • Don’t assume that everybody else in the space knows what they’re doing. It’s possible to succeed with hard work and luck, which is not the same thing.

If you could do it all again, would you?
Yes! Though I’d find co-founders. I wish the current team had been there from the start.

Don’t forget to check out Euan’s video interview on the Digital Science blog.

This is a guest post contributed by Paul Mucur – CTO at Altmetric.

Following on from Oliver’s recent post about the inner workings of Altmetric, I’d like to talk a little more about how we work on the development side of the company, and more specifically about how we’ve had to change our working practices as we’ve grown.

In the past year, we’ve had several new developers join the team and while we’ve been improving our infrastructure to cope with more sources and an ever increasing volume of mentions, we’ve had very different challenges scaling the team. Specifically, how can we successfully juggle the following:

More importantly, how can we do all that in a sustainable way?

When I first joined the company, we kept track of all the work we wanted to do (bugs to fix, features to develop) in a single Trello board. All work was arranged as cards in various columns such as “Unimportant”, “News”, “Pipeline”, etc. with “Doing” and “Done” at the end. However, there was one particular column which caught my eye: the ominously titled “Cabinet of Dreams”.

Cabinet of Dreams

This board contained all the various things we could pursue as a company and the Cabinet of Dreams in particular was our unfettered wish list.

As a new starter, the board was quite a bewildering thing but just because it wasn’t familiar didn’t necessarily mean it was bad. Cautious not to prematurely change something that was working, we kept a close eye out for potential issues and routinely asked ourselves, “what’s the problem with this approach?”

It wasn’t long before we noticed a pattern in our daily stand-ups: during their updates, people would frequently say “I’ve finished what I was working on. What should I pick up next?” This would require our founder, Euan, to weigh in with the next big priority. This would be fine except the team is not always in the same place at the same time and this critical dependency was quite a risky one.

Of course, it’s natural for a small company to be reliant on its founder but we grew increasingly conscious of our Bus factor: “the total number of key people who would need to be incapacitated to send the project into such disarray that it would not be able to proceed.”

The more we considered this, the more we identified places where information was concentrated within only a single person (perhaps the knowledge of a particular system or process) and trying to spread this information became an extremely important goal. Through code review and pair programming (largely following the GitHub Flow workflow for managing code changes), we tried to break these silos between developers but this larger product direction issue remained.

The other cause for concern was simply the sheer number of cards we had: after all, this was every idea we had, saved for posterity. I was reminded of Darren Taylor’s “The Perils of the Large Backlog”, in which he quotes Dan North:

How can you respond to change when you have 600 stories in your backlog?

If we were working through all these cards, how could we respond to feedback from our customers? The reality, of course, was that we never really intended to go through the whole board before responding to a customer so what purpose did it really serve?

Finally, the issue of sustainability: with priorities unclear, direction frequently needed and information concentrated unevenly, our pace was extremely inconsistent. Here, the principles behind the Agile Manifesto say it best:

Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

So how could we improve on these aspects? A simple trick is to just make things visible: in this case, instead of enjoying the infinite expanse of a board in a website, we decided to transfer that same board to a physical whiteboard in our office. Following advice from Dan Brown at the Kanban Coaching Exchange, we tried to mirror the virtual board exactly: after all, our board should reflect reality and covering up the ugly parts would benefit no-one.

So, armed with a lot of sticky notes and copious amounts of magnetic tape, we produced the following real-life version of our beloved board:

Our whiteboard with hundreds of sticky notes

The message was immediately clear: we had done a great job of deciding what we could do but now was the time to decide what not to do.

By maintaining all these different columns we were deferring the tough choices but ultimately hurting the team. We needed to focus and to prioritise, to be realistic about how much work we could possibly do in parallel, and ultimately to enable everyone to be more autonomous.

Luckily, the annoyance of having several hundred sticky notes frequently cascading onto our desks quickly incentivised this change and we made those difficult choices. Jean took on the mantle of our Product Development Manager to constantly maintain and prioritise our new product backlog and, in concert with the team, we have improved our pace and moved from “what shall I do next?” to different challenges.

If you’re interested in methods of working in software development, I recommend attending the monthly Kanban Coaching Exchange as well as watching the recent video from Spotify on their engineering culture which talks in more detail about the autonomy of their teams.

This is a guest blog post written by Mads Bomholt, Customer Support and Data Coordinator at Altmetric. Mads is also currently working towards a PhD in 19th Century Imperialism at King’s College London. 

Scholars working in the humanities and social sciences are fundamentally changing their research practices to be more compatible with the behaviours that technology is imposing on us privately, socially and professionally. As a PhD student in history, I started reflecting on how this is affecting my research and, possibly, a future career as an academic in the humanities.

In this blog post I am going to talk about how altmetrics feed into the wider developments of what is now termed ‘Digital Humanities’, as well as considering how they may be used in historical research now and in the future.

Digital Humanities can be defined as the application of computer-based technology in humanities and social sciences research. Albeit a relatively new field, it has nevertheless seen whole departments established at distinguished institutions including University College London, University of North Carolina-Chapel Hill, Stanford University and my own, King’s College London.

Since 2000, the Department of Digital Humanities at King’s College London has been involved in generating more than £17 million in research income [1], giving an indication of how seriously these developments are going to affect humanities in the future.

Archives all over the world have digitized their databases, and sometimes even source material. The prime example of this is the British Library’s online Newspaper archive, which incorporates almost all published newspapers in Britain since January 1, 1710.

The process of digitizing has not been without difficulties. Handwritten sources in particular have meant that some of the digitization is either incomplete or ambiguous, and in some cases even facetious. For instance, in digitizing Shakespeare’s Romeo and Juliet, the old long ‘s’ – which looks like a contemporary ‘f’ – led to some rather erotic alterations of the classic story. Suddenly, ‘Death had not “suck’d” the honey of thy breath’, but something far more inappropriate starting with ‘f’ [2].

Altmetrics feed into this process too. I asked myself the question: how could I, as a PhD student in history, use altmetrics? Being a slave to the notion that ‘those who forget historiography are doomed to republish it’ [3], I have been through numerous articles and books addressing issues relevant (and irrelevant) to my thesis. Each time I have to assess and evaluate the piece; is it worth including? Does it need to be discussed in the text or simply referred to?

Here altmetrics enter the frame, at least at the article level. By finding which articles on a given subject get the most attention I can easily create a list of articles that are necessary to at least have a look at. Of course, attention is not the same as quality – nor relevance; if I were to write on a topic even remotely related to British Imperialism in the mid-19th century without mentioning Ronald Robinson and John Gallagher’s article, Imperialism of Free Trade (published in 1953 and currently showing an Altmetric score of 1), I would probably not pass the course.

Altmetric (my altmetrics provider of choice!) does not consistently track mentions made before 2011, and it appears at the moment that most historical journal articles are not being discussed much on social platforms. Such articles rarely spark much media attention compared to, for example, those published in medicine and astronomy. The lack of past data means that older yet still significant articles, such as the one above, are somewhat left out.

Whilst altmetrics may not be frequently applicable in the humanities and social sciences yet, the changes developing within these fields will eventually demand the implementation of more innovative approaches in order to benefit and improve the efficiency of research.

Altmetrics promise to be a powerful tool for future publications and those, including myself, who need to go through a vast historiography.

Before too long there will undoubtedly be social historians and other researchers who will look at the historical and social implications of digitalization and, of course, the World Wide Web. As a field concerned with social media and other online content as a measure of societal impact, altmetrics will likely demand significant attention from historians and other humanities and social science scholars for years to come.

____________________________

1. http://www.kcl.ac.uk/artshums/depts/ddh/about/index.aspx

2. see blog.librarything.com/thingology/2010/12/

3. Paul K. MacDonald, ‘Those Who Forget Historiography Are Doomed to Republish It: Empire, Imperialism and Contemporary Debates about American Power’, Review of International Studies, vol. 35, no. 1, 2009, pp. 45-67 – a nice pun on the phrase ‘He who doesn’t know history is doomed to repeat it’