This year’s 3:AM was held in Bucharest, where the sun still beamed warmly with complete disregard for the fact that we were in the depths of September. Over the conference’s two days we heard many presenters share their progress reports, research findings, and hopes and dreams for altmetrics.
Here are some of the big topics from the conference.
What is the take-up of altmetrics?
There were a number of surveys asking people this, and whilst many of them drew only low response numbers, they’re beginning to show some useful insights.
Dan Penny showed a graph outlining which metrics a researcher is most interested in getting. Many of the responses showed that people felt citation and download counts are most interesting to them. Meanwhile, the number of Mendeley readers came in as least important. This seems at odds with an interest in download counts – the two numbers should be indicative of each other. It may well show that tools like Mendeley are still picking up steam, and not yet used by everyone.
Htet Htet Aung and colleagues at Nanyang Technological University have begun preliminary research into awareness of these non-traditional metrics. Around 30% of those asked were aware of metrics like download and viewership counts. Only 15% of researchers said they were aware of social media being used as a metric for impact.
From a number of the talks there are indications that researchers are unaware of, or don’t see the importance of, the “marketing” required to increase their measurable impact. This points to a clear job for us altmetrics fans: keep talking about them. Spreading the word will increase usage, understanding, and standardisation of altmetrics.
Josiline Chigwada’s research concerned the use of altmetrics in granting research funding. Of the 63 academic institutions in Zimbabwe that were surveyed and interviewed, only a very small number had used altmetrics to help with any decisions. One reason Josiline gave for this result is that people are still concerned about how these numbers can be misinterpreted.
So it seems that many people are still apprehensive or simply unaware of these tools. Others have fully embraced them, however.
Altmetrics as a research tool
At Altmetric, we make a habit of giving away our data for research purposes. It’s great coming to these conferences and seeing what conclusions people have managed to find using it, often mashing it up with some other data.
Rodrigo Costas presented an interesting review of the active research environment across Africa. Taking altmetrics data that has a location attached to it (like Twitter), Rodrigo and his team put together a map showing a surprising west–east divide in the research being talked about. Whilst the west is dominated by the social and biomedical sciences, the east is far more likely to be talking about the hard sciences, like mathematics and physics. While the west was talking about the HIV/AIDS pandemic, the east was talking about the Higgs boson.
Rodrigo’s “descriptive altmetrics” is exactly the type of thing we want to be enabling with our data. It tells a real story about what issues are affecting the world, in an interesting way.
Shinji Mine also promoted looking past the simple numbers and examining the content of mentions. Their poster described how academics were far more likely to be critiquing a piece of research on Twitter than non-academics, who were far more likely to be passively tweeting just the article’s title. This is interesting because it’s not something we take into account when calculating an Altmetric Attention Score – maybe an academic’s tweet should be “worth” more to the score.
Kim Holmberg presented their team’s findings on which types of metrics different research outputs received most, and then showed us how different institutions compared. Some trends emerge here: if you want to promote your medical and health science research, Twitter will be most receptive. We also see that natural science research outputs are more likely to be stored away in Mendeley than to be put up on Facebook.
Altmetrics in software
Viewing the impact of software was a big topic this year.
More and more, software is used throughout the research process, and there are a number of reasons to want to cite it: giving credit, letting the author know their software is being used, even letting others reproduce your results.
The first problem is that there’s no standardised way of citing software. That’s what Daniel Katz came to talk to us about. Through their research, they found seven different ways that people indicated applications and code, ranging from mentioning a name and version number to simply saying “the software”.
Daniel goes on to give us developers some tips on how to get your work cited: add a CITATION document, archive your work on Figshare, and even submit it to the Journal of Open Source Software.
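To illustrate the first tip: there’s no single mandated format for a CITATION file, but a plain-text one in a repository’s root might look something like this (the author, tool name, version, and URL below are all made-up placeholders):

```
If you use this software in your research, please cite it as:

  Doe, J. (2016). ExampleTool: a tool for doing example things
  (Version 1.2.0) [Software].
  Retrieved from https://example.org/exampletool
```

Even an informal file like this gives readers the name, version, and author to mention, which already covers most of the variants Daniel described.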
Heather Piwowar gave us an update on Depsy. Released last year, just after the 2:AM conference, it’s tackling the job of counting “informal” citations. She gave interesting insight into their thought process around adding new – potentially controversial – metrics to Depsy. Amongst these was a score measuring the impact of software (which can be benchmarked against other software). Despite Heather’s initial concern, the community had little negative to say about it!
Heather also went into what features they’re hoping to add in the future: better text mining, more repositories, and tighter ImpactStory integration. I’m excited to find out what’s to come!
So there we are!
Another great :AM conference! It’s certainly motivated us all at Altmetric, and it was great to see so many colleagues and people using our data.