Announcing the Altmetric Ambassadors program!

Become an Altmetric Ambassador today!

We’re proud to announce the launch of the Altmetric Ambassadors program! We’ve created it in response to requests from researchers and librarians worldwide who want help spreading the word about altmetrics (in general) and Altmetric.com (in particular) at their institutions.

A team of volunteer Ambassadors will:

  • Stay up-to-date with the latest altmetrics news and research

  • Host brown bags to introduce their colleagues to altmetrics and Altmetric’s tools

  • Show their colleagues how to explore the online attention surrounding their work using the Altmetric bookmarklet

  • Add Altmetric badges to their personal or lab web pages (see the markup sketch after this list)

  • Share their stories to demonstrate how altmetrics can help all researchers
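
For the badge item above, here’s a minimal sketch of the kind of embed markup involved, generated from Python for convenience. The script URL, CSS class, and data attributes follow Altmetric’s embed conventions as we understand them, and the DOI is purely illustrative – check the current embed documentation before putting this on a real page.

```python
# Minimal sketch: build the HTML fragment for an Altmetric donut badge.
# Script URL, class name, and data-* attributes follow Altmetric's embed
# conventions as we understand them; verify against the current docs.

EMBED_JS = "https://d1bxh8uas1mnw7.cloudfront.net/assets/embed.js"

def altmetric_badge(doi: str, badge_type: str = "donut") -> str:
    """Return an HTML fragment that renders an Altmetric badge for `doi`."""
    return (
        f'<script type="text/javascript" src="{EMBED_JS}"></script>\n'
        f'<div class="altmetric-embed" data-badge-type="{badge_type}" '
        f'data-doi="{doi}"></div>'
    )

print(altmetric_badge("10.1038/nature12345"))  # hypothetical DOI
```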

In return, we’ll send you exclusive Altmetric swag, share top-secret development plans for Altmetric products, buy coffee and donuts any time you host an Altmetric-related workshop, and help connect you with a fantastic, international group of like-minded researchers and librarians.

Interested? Learn more and sign up here today!

Today we’re excited to release our new collection of teaching resources for you to use in your own Altmetric for Institutions training sessions. We’re often asked to share our training slides, so we’ve created a CC-BY licensed collection that you can adapt for your own teaching.

We’ve designed the new teaching materials collection to include everything you need to run a successful Altmetric for Institutions session – from text for advertising online and posters to promote around campus, to re-usable slides with explanations and hands-on activities.

Image by Andy Bright on Flickr

So what’s included?

  • Introduction to Altmetric for Institutions training slides with notes – [PowerPoint]
  • Workshop activities – slides with hands-on activities – [PowerPoint]
  • Workshop activities handout – [A4 Word] [US Letter Word]
  • Text for advertising your session online – [Word]
  • Tips & Tricks: promoting your research online – flyer for researchers – [A4 PDF] [US Letter PDF]
  • Poster to advertise your session around campus – [A4 Word] [US Letter Word]

See our Teaching Resources page for full details. The collection is CC-BY licensed so feel free to remix, reuse and share your training session materials afterwards – we’d love to see what you create.

And take a look at our new tutorial videos…

Today we’re also launching our new tutorial videos, which walk you through exploring data in Altmetric for Institutions and on the Altmetric Details Page. Take a look:

  • Altmetric for Institutions: Navigating – find out more about Altmetric for Institutions with a walkthrough of our demo edition. We show you how to browse your institutional outputs, view the summary report, access Altmetric Details Pages, view authors and departments, and expand your search across the entire Altmetric database.
  • Altmetric Details Page: We take you on a tour of the Altmetric Details Page to find attention data for a research output with a walkthrough of the Altmetric donut, sources tabs, help information and setting up email alerts for new mentions.

Feel free to embed them in your help guides and share them with researchers and support teams at your institution!

Here are our ideas for running a successful altmetrics training session at your institution:

  1. Plan ahead. Check your room setup: will attendees have access to PCs for hands-on activities, or should they bring laptops? Or will it be more of a lecture setup? Do you have time for group discussion at the end to talk about the key learning points and altmetrics data you’ve covered during the session? Our activities slides are useful for hands-on workshops, and the Introduction to Altmetric for Institutions slides work well for both workshops and lecture-style presentations.
  2. Run a live demo of Altmetric for Institutions and get everyone to create an account – this will really help attendees get a feel for navigating the database and its key functionality (exploring the data, filtering results, summary reports, authors/departments, custom groups, exporting and saving, etc.). See slides 21-30 of the Introduction to Altmetric for Institutions slide deck for key functionality to demonstrate.
  3. Emphasise the importance of looking beyond the score. It’s useful to highlight the value of considering the quality of each mention rather than focussing only on the score, which simply indicates the volume of attention a paper has received. Take a look at some of the conversations surrounding your institution’s research papers and discuss them in your session. See slide 34 of the Introduction to Altmetric for Institutions slides for an example of a paper with a relatively low score but a policy document mention demonstrating how the journal article has contributed to NHS treatment guidelines.
  4. Cascade training and advocacy across library and training support teams, and embed altmetrics in existing scholarly communications sessions (open access, bibliometric analysis, research data management, ORCID, etc.). This helps demonstrate how the Altmetric for Institutions service can be used alongside existing services to support research innovation.
  5. Discuss how altmetrics are part of a broader conversation. Share ideas for how researchers and support teams can use altmetrics data to broaden the view of research attention alongside existing analyses, e.g. traditional bibliometrics and funding awards.

Let us know if you create new materials you’d like to share or if there’s something you don’t see here but would find useful. Happy training!

We spoke to Tracey DePellegrin, Executive Editor of the Genetics Society of America journals, about the society’s motivations for, and experience so far of, integrating Altmetric badges across their journals.

As Executive Editor, Tracey is responsible for identifying new ways of author outreach, and was instrumental in driving this new program for their publications.

 

About the society

The Genetics Society of America has over 5,000 members from all areas of genetics and genomics. Alongside a busy conference and outreach program, they also publish two journals – Genetics, and the open access G3: Genes|Genomes|Genetics. The editorial teams of the journals are committed to the goals of the society: to further the field of genetics research and to encourage communication amongst geneticists worldwide. The content they publish is high quality, and the society aims to position themselves as the voice of their members to policy makers.

 

Challenges and motivations

The society was founded in 1931, and Genetics was first published in 1916; Tracey says that one of the challenges the journals sometimes face is being recognized as the innovative and forward-thinking publications they are. In fact, GSA are often ahead of the curve in embracing new technologies and policies (for example, they’ve had an advanced open data policy, strictly enforced, for more than 5 years, and, along with Caltech, pioneered the use of article links to model organism databases in 2009).

The society are also keen to encourage their authors and readers to see value in content beyond the Impact Factor – although they are conscious that some academics, particularly those in the Far East, are required to publish in journals considered to be ‘high impact’. Tracey and her colleagues are keen to help their authors recognise and demonstrate other types of attention and engagement.

“Part of our responsibility to the community includes discussing all kinds of ‘impact’ – and helping authors to extend the short- and long-term reach of their research.”

 

A step forward

GSA first implemented altmetrics on their titles in late 2013. They were keen to see what online attention their content was attracting – and, in line with the society’s aims, to encourage their authors to actively engage with the conversations going on around research published in their fields. The response from their contributors has been enormously positive.

Tracey enthuses: “We LOVE Altmetric – because our authors do! Because citations are such a lagging indicator, they like being able to see a glimpse of who’s talking about their work – even as soon as it’s published early online. And because the social media data is retroactive, we even have some nice surprises.”

Internally, altmetrics data are being used by the GSA team to monitor their outreach efforts and help shape ongoing activities.

They regularly check in to gauge the attention that their early online articles are getting, and track to see what effect pushing out specific articles via social media has had.
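
For readers who want to do the same kind of check-in programmatically, here’s a minimal sketch against Altmetric’s free public API, which returns a JSON summary of the attention a DOI has received. The DOI below is hypothetical and the field names are illustrative of what the v1 API reports; consult the API documentation for the authoritative details.

```python
# Minimal sketch: fetch the attention summary for one DOI from the public
# Altmetric API (v1). No key is needed for basic lookups; rate limits apply.
import requests

def attention_summary(doi: str) -> dict:
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    resp.raise_for_status()  # a 404 means Altmetric has seen no attention yet
    return resp.json()

data = attention_summary("10.1534/genetics.115.176834")  # hypothetical DOI
# A few of the fields the v1 API reports (names illustrative):
print(data.get("score"), data.get("cited_by_tweeters_count"),
      data.get("cited_by_posts_count"))
```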

Board reports are also benefitting from the additional context that altmetrics provide – they now incorporate highlights of the online mainstream and social media coverage, including that which is not so positive, to help their teams get a better understanding of how their research is being received.

In one case, Tracey adds, altmetrics enabled them to quickly identify where an editorial they had published was being met with a negative response, and follow up with their own blog post to further clarify their position and address some of the feedback they’d received.

Working with Altmetric, Tracey reports, was an ideal solution for GSA. They are fans of the colorful donut graphic and see the insight Altmetric data offer as adding value to their journal content (currently hosted by Highwire).

 

What next?

In future, Tracey comments, they’re keen to extend their use of the data and the Altmetric donut visualisations – perhaps including them in email campaigns, or using them to identify popular articles to showcase on journal homepages.

“In science, it’s so important to have discussions centered around new research. Altmetric helps us to figure out where the discussions are taking place, and to encourage a wider group to participate.”

Alongside that they’ll be continuing to educate their authors about altmetrics and how they can make use of them, and are determined to move the concept of impact and attention beyond just the numbers for genetics research as a whole. By providing an increasing amount of feedback and insight, Tracey and her colleagues hope to help their authors better understand who is discussing their work, and how it is being received.

We’ve got the leaflets, we’ve got the slides, we’ve got the website; but we wanted something extra special to help us spread the word about Altmetric for Institutions to the wider world. Today we’re excited to announce the release (technically, the world premiere) of Altmetric for Institutions: the movie. Fewer donuts than The Simpsons, but hopefully equally punchy graphics – take a look and let us know what you think!

 

Making the video

We worked with online marketing company Distilled to take our idea from concept to reality – and their creativity and design skills certainly helped us along the way!

From the initial script they worked with us to flesh out the concept – drawing out storyboards, mocking up the design and feel of the final video, and running through a number of voiceover options before we settled on the final feel.

Crucially, we wanted to communicate the benefits that Altmetric for Institutions offers. As researchers are increasingly asked (by management, funders, project leaders, alumni donors) to demonstrate the impacts of their work, this new platform can help them track, monitor and report on early signs of engagement and influence.

Users can create custom groups to monitor the online attention surrounding specific projects, explore the entire Altmetric database to see how much and what type of mentions research outputs from their peer institutions and fellow scholars are receiving, and set up email alerts and reports to be regularly updated on the activity relevant to their work.

All of this data can be particularly useful for enabling more effective online reputation management (both for an individual scholar and for an institution as a whole), for reporting on evidence of attention, influence and engagement to attract research funding and alumni donations, and for helping to determine future research and outreach strategy.

We’ve already heard some great examples of how institutions are adopting and rolling out the platform, and look forward to hearing how these activities progress as more and more researchers become familiar with the altmetrics data and start to apply it as part of their teaching and career development.

Don’t forget to stay tuned to our YouTube channel over the coming months – we’ve already started adding some past presentations, and are planning to keep on updating it with lots of useful training and user videos to help you get the most from our data and tools!

We’ve just put down our virtual copies of The Metric Tide, a report compiled by an expert panel of academics, scientometricians, and university administrators on the role of bibliometrics and altmetrics in research assessment (including the UK’s next REF). What an excellent read.

In the last 24 hours, many smart people have published articles dissecting various aspects of the report. To this thoughtful and productive discussion, we’d like to add our voice.

Below, we’ve teased out what we believe to be the most important themes from the report, and we supplement some of the current discussions with our thoughts on how altmetrics in particular can play a role in research assessment in the next REF.

 

Altmetrics as a complement, not replacement

We’ve long said that altmetrics should be a complement to, not a replacement for, citation-based metrics and expert peer review. So it was heartening to see that same sentiment echoed in The Metric Tide report (“Metrics should support, not supplant, expert judgement”).

Altmetrics, like citation-based metrics, by and large cannot tell us much about research quality (though many people often assume they’re intended to). They do have some distinct advantages over citation-based metrics, however.

Altmetrics are faster to accumulate than citations, so it’s possible to get a sense of an article’s potential reach and influence even if it was only recently published. (This is useful in an exercise like the REF, where you might be compiling impact evidence that includes articles published only in the previous year – or even the previous month.) Altmetrics are also much more diverse than citations: they can measure research attention, help flag up routes to impact, and (occasionally, for some sources) indicate quality – among scholars as well as members of the public, practitioners, policy makers, and other stakeholder audiences.

These differences make altmetrics a powerful complement to citations, which measure attention from a scholarly audience. And they’re a useful complement to expert peer review, in that they can quickly bring together data about, and reactions to, an article that the reviewer may not have noticed themselves. (You can find the full review criteria used by the REF panels here.)

Indicators rather than metrics

The Metric Tide didn’t spend a lot of time talking about what we consider to be an important consideration when using altmetrics data for assessment: such data is typically an indicator of impact rather than a measure of it. It points to, and potentially provides evidence for, a route to impact.

That’s why we believe it’s very important for altmetrics providers to report on “who’s saying what about research” in a way that’s easy for others to discover and read. Computers can’t consistently and correctly parse sentiment from documents or online mentions, and they can’t (yet?) figure out which policies actually get put into practice, or who actually acts on research they’ve cited or mentioned. It’s necessary for humans to parse the data and potentially investigate further. The data alone doesn’t give us enough to, say, write a full impact case study automatically.

That’s the downside (for anybody expecting a quick metric win). The upside is that, by removing the burden of systematically collecting data from a wide variety of sources – each a different kind of indicator – we can make it much easier for a human to find and create promising impact stories. Hopefully, by doing the heavy lifting of looking at policy outputs, educational materials, and books and datasets as well as articles, we can help remove some potential biases too.

We’d encourage the community to read The Metric Tide with this in mind. We should see indicators and the related, qualitative “mention” data as two inseparable sides of the same coin.

Our metrics should be as diverse as reality

The Metric Tide pointed to two main areas in which attention to diversity is very important: disciplinary differences in what’s considered “research impact”, and being mindful of how metrics, when not contextualized, can perpetuate systemic inequalities based on gender, career stage, language, and more.

We have little to add to the point that disciplinary differences in what constitutes “impact” are many. Indeed, we vigorously agree that creating discipline-specific “baskets of metrics” is a much better approach to using metrics for assessment than attempting to use citation-based metrics alone to understand research impact across all fields.

In all disciplines, these “baskets of metrics” should be carefully constructed to include the “alternative indicators that support equality and diversity” that the report hints at. We’d suggest that such “diversity” indicators for the impact of research should include metrics sourced from platforms that highlight regional (especially non-Western) impacts (like Sina Weibo, VK, and others), as well as data features like geolocation and language that can do the same.

All metrics should also be contextualized using percentiles, which not only allow one to more accurately compare articles published in the same year and discipline, but also provide fairer context for the research of female academics (which studies show tends to be cited less often than comparable work by male academics), early career researchers, and so on.
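
To make the percentile idea concrete, here is a minimal sketch, assuming you already have the metric values for every article in the same year-and-discipline cohort; the cohort scores below are invented for illustration.

```python
# Minimal sketch: express one article's metric as a percentile within its
# cohort (articles from the same publication year and discipline).
from bisect import bisect_left

def percentile_in_cohort(score: float, cohort_scores: list[float]) -> float:
    """Percentage of the cohort scoring strictly below `score`."""
    ranked = sorted(cohort_scores)
    return 100.0 * bisect_left(ranked, score) / len(ranked)

cohort = [0, 1, 1, 2, 3, 5, 8, 12, 20, 75]   # invented cohort scores
print(f"{percentile_in_cohort(12, cohort):.0f}th percentile")  # -> 70th
```

Skewed metric distributions (a few very-high-attention papers, a long tail of quiet ones) are exactly why percentiles are a safer basis for comparison than raw scores or means.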

Transparency is necessary

We vigorously agree with the report’s statement that “there is a need for greater transparency in the construction and use of indicators”. Transparency is necessary in terms of both how data is collected (what’s considered a “mention”? where is this mention data collected from?) and displayed.

To that point, we think The Metric Tide could go one step further in its recommendations for what constitutes “responsible metrics”. In addition to metrics being robust, humble, transparent, diverse, and reflective of changes in technology and society, we believe metrics should also be auditable. If somebody says an article has 50 citations, you should be able to see those 50 citations.

Such transparency is paramount if we are to conscionably use metrics for assessment, a decision that will have huge effects on individuals and universities alike.

More tangible change is required before we can fully rely on metrics

The final (and somewhat sobering) theme of The Metric Tide came in the form of many reminders of just how far scholarly communication still has to go before we can actually implement metrics for assessment in a useful and fair way.

In particular, the lack of usage of DOIs and other persistent identifiers when discussing research online is a major stumbling block to the accurate tracking of metrics for all research. Moreover, a scholarly communication infrastructure that, for the most part, does not interoperate makes compiling information for assessment exercises like the REF a very difficult and costly endeavor.

(Interestingly, this itself speaks to diversity and disciplinary differences: we see many smaller journals and publishing platforms in BRIC countries and the developing world eschew DOIs because of the cost. DOIs are more prevalent in STM than in the arts & humanities. Some outputs suit DOIs because they are born and live digitally; others, like concerts or sculptures, don’t. But the perfect shouldn’t be the enemy of the good.)

Luckily, there’s a fairly clear path forward: The Metric Tide calls for governments and funders to invest more in the research information infrastructure, especially to increase interoperability with systems like ORCID. We’d add to that a call for the private sector (including other metrics aggregators) to kill information silos in favor of building or allowing technologies that “play well” with others, even if they’re commercial. An API for Google Scholar would be great – but it requires publishers to allow it.

The report also proposes the establishment of the Forum for Responsible Metrics, which we applaud and intend to support in any way we can (as we’ve done in the past with other common-sense, metrics-related initiatives like DORA and the Leiden Manifesto).

We look forward to seeing how the recommendations made in The Metric Tide play out in the coming years, both here in the UK and in academia worldwide, and to working together towards a future where metrics are used intelligently as part of a much wider scholarly agenda.

In May, I shared some interesting results from a small-scale exercise I ran, comparing what was submitted with REF impact case studies with indicators of “real world” impact sourced from Altmetric data. My findings suggested that Altmetric can help find specific mentions of research in public policy documents that would otherwise go unreported in REF impact case studies. Altmetric may also help find overall themes to the research that has the most “real world” impact (i.e. epidemiology, climate change, etc) in a way that citations can’t.

The data also raised some questions about differences between articles with high citation counts and those with a lot of Altmetric attention:

  1. Are there differences between what’s got the highest scholarly attention (citations), the highest “real world” attention (as measured by Altmetric), and what’s been submitted with REF impact case studies?
  2. What are the common characteristics of the most popular (as measured by Altmetric) outputs submitted with REF impact case studies vs. the overall most popular research published at each university?
  3. What are the characteristics of the attention that’s been received by REF-submitted impact case studies outputs and high-attention Altmetric articles?

In today’s post, I’ll dig into these questions, with an eye towards shedding more light on themes that might make the REF preparation work of researchers, impact officers, and other UK university administrators easier. The article data used in this analysis can be found on figshare.

Are there differences between what’s got the highest scholarly attention (citations), the highest “real world” attention via Altmetric, and what was submitted with REF impact case studies?

Short answer: yes.

There seems to be little correlation between the number of citations and the overall Altmetric score received by the publications I looked at. That’s in line with other studies that have examined correlations between citations and altmetrics.

And articles with many citations and high Altmetric scores aren’t any likelier to be submitted with REF impact case studies, as found in the previous post.

In other words, when looking at simple attention data (citations and Altmetric scores), there doesn’t seem to be overlap with what’s been chosen for submission with REF impact case studies. But when you drill down into the characteristics of the Altmetric attention data–and the characteristics of the articles themselves–themes do emerge.
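
For readers who want to reproduce the correlation check on the figshare data, here is a minimal sketch; the file name and column names are assumptions about the dataset’s layout, not its actual schema.

```python
# Minimal sketch: test for rank correlation between citation counts and
# Altmetric scores. Spearman's rho suits skewed count data better than
# Pearson's r. Column names are assumed; adjust to the actual file.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("ref_articles.csv")  # hypothetical local copy of the data
rho, p = spearmanr(df["citations"], df["altmetric_score"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
# A rho near zero is consistent with the "little correlation" noted above.
```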

What are the common characteristics of the most popular (as measured by Altmetric) outputs submitted with REF impact case studies vs. the overall most popular research published at each university?

There are some common themes found in the types of articles that appear in each university’s “top ten” groups of publications.

Journal of publication: Highly cited publications by London School of Hygiene and Tropical Medicine (LSHTM) authors appeared in only two journals: The Lancet and the New England Journal of Medicine. However, this homogeneity of journal title didn’t hold for highly cited publications by University of Exeter (UE) authors, which were published in a greater variety of journals.

Authorship numbers: Highly-cited LSHTM publications were also more likely to have many authors than the university’s REF-submitted or high-attention publications, or than any “top ten” publications published by UE authors. For highly-cited LSHTM publications, the minimum number of authors was eleven.

Publication date: Overall, highly-cited articles were more likely to be published in 2012 than 2013, which makes sense, given citation delays caused by the publication process. Altmetrics, sourced primarily from text mining and APIs, tend to accumulate much more quickly than citations.

Open Access: LSHTM articles were more likely to be Open Access if highly cited or submitted with a REF impact case study than those that were simply high attention. UE articles were only slightly more likely to be Open Access if highly cited, and were equally as likely to be OA if submitted with a REF impact case study or of high attention overall.

Type of publication: Editorials published by LSHTM authors were more likely to be of high attention, and research articles were more likely to be highly cited or submitted with REF impact case studies. UE publications overall were more likely to be research articles, whether highly cited, of high attention, or submitted with REF impact case studies. No other types of research outputs were among the highest attention, submitted-to-REF, or highly cited lists.

Subject: No matter the type of attention publications received, they were more likely to focus on public health, epidemiology, or climate change if they appeared in any “top ten” group, for both universities.

Of all the themes uncovered, publications’ subjects seem to be the most useful benchmark for researchers to use to select and compile impact case studies. But to know for sure, we’ll have to ask researchers and impact officers themselves how they settled upon particular case studies, and whether they even considered using citation data to inform those themes.

What are the characteristics of the attention that has been received both by outputs submitted with REF impact case studies and by high-attention Altmetric articles from each institution?

Overall, REF-submitted outputs were less likely to have news coverage or appear on Reddit, and – surprisingly – slightly less likely to appear in policy documents (though this may be due to the selection of policy documents that we currently track) than articles that simply had a lot of attention (as measured by the Altmetric score). In terms of demographics (as sourced from Twitter and Mendeley data), both universities had the most impact among audiences in Great Britain and the United States.

Wikipedia: Articles from both groups were equally likely to have a Wikipedia mention (1 in 10 articles in each group was cited on the platform).

Twitter: Articles from both groups were equally likely to have attention on Twitter, although outputs from the REF had on average ten times fewer tweets than the highest-attention articles did. Interestingly, University of Exeter’s REF outputs were also slightly more likely than its highest-attention outputs to be shared among scientists rather than members of the public, whereas LSHTM’s REF outputs had more diverse “top tweeters” overall, including members of the public, scientists, practitioners, and science communicators.

Qualitative data: At Altmetric, we consider the auditable qualitative data we provide to be just as important (if not more important) than the numbers we report. So, I took a look at who was saying what (and where) for each publication. Two points and one example stood out to me:

  • Often, Facebook posts are made by groups promoting research that’s relevant to their audiences. For example, an autism awareness organization might share an article on recent developments in neurological research. Alternatively–in a worst case scenario for many scientists–scholarship is sometimes also shared by groups like climate change denialists that misunderstand the science described. However, having access to what’s being said in both types of groups provides a valuable opportunity for the authors to engage with the members of the public who are reading their work and, in some cases, misinterpreting it.
  • For the LSHTM “highest-attention” article, “The Future of the NHS–Irreversible privatization?”, it was fascinating to read through the commentaries that accompanied the public Facebook and Google+ shares it received. Like the article itself, many were critical of the idea of privatizing the United Kingdom’s National Health Service. It was shared by many biomedical journals, professional groups, and patients’ rights organizations.
  • One final example of “public discussion” set me to thinking: researchers have been known to scoff at the idea that it’s valuable to have laypersons discussing their work. For example, it may seem trivial to some that this group on training for cyclists has posted some recommendations on cycling performance and nitrite consumption that cite research on the topic. Yet, is that not also a small form of public impact? If research is advancing knowledge among the public, by and large that’s a good thing.

So what does this mean in practice?

The themes in publications uncovered above may have little bearing on how researchers choose articles for submission with REF impact case studies based on variables like journal title, number of authors, and so on. (Which is a very good thing, as such variables have little bearing on the quality or impact of an article itself.)

However, as I found in my previous post on this topic, it’s possible that Altmetric attention data may be useful in choosing which subjects to base selections for impact case studies upon. But Dr. Rosa Scobles – Acting Director of Planning at Brunel University, who helped coordinate her university’s REF2014 submission – is not so sure.

“Altmetrics might be useful in helping to prepare REF impact case studies if online attention is a step on the road to impact,” Scobles recently explained to me via telephone. “That might be true especially if the steps you’ve taken towards engagement have impact. For example, if you’re doing a public health campaign and part of that campaign is to raise awareness using social media, you could say that measurable engagement (retweets, followers, Twitter impressions) points to true impact (behavior change). Or if your research informs public policy, and you can read that policy online, that might be evidence of impact. But most types of ‘impact’ are very difficult to measure in general.”

Another academic who helped prepare for REF2014 – Dr. Amy Gibbons, a Faculty Impact Officer at Lancaster University – recently emailed to offer her thoughts on the process, specifically the question of why universities that have a lot of citations in policy documents might not include that information in their REF impact case studies:

“A third possibility for why only a small percentage of the policy impact articles were mentioned in case studies, is that the researcher felt that the study/research was not longitudinal or in-depth enough yet for a case study but instead individual instances may have been submitted to their department/faculty for inclusion in the impact template (detailing overall types of impact activity to accompany the case studies for submission to a panel – worth 20% of the impact submission in REF 2014).”

She makes an excellent point: the value of research changes over time, with what we consider “REF-worthy” impact possibly only being measurable in the longer term.

Overall, what’s most relevant for the REF is the qualitative Altmetric attention data that’s now available. In terms of public engagement, qualitative Altmetric data can be used to help scientists connect with their audiences, and that’s an important “real world” impact by my estimation. And, once articles are chosen for submission with REF impact case studies, researchers and impact officers can now document who is saying what about their university’s research, and whether that discussion is happening in the news or in a policy document.

Looking towards the future

Reading through REF impact case studies has made one thing clear: universities have varied motivations for selecting what to submit to the REF, and those motivations aren’t necessarily reflected in the “one size fits all” approach that Altmetric and other altmetrics services take when building reports. I wonder if market demands will drive altmetrics services like ours and others towards being more flexible and modular. Theoretically, universities could pick and choose the modules that would help them uncover impacts like “technology commercialization” or “policy impact” (if those were important to them), or any other number of impact flavors that help them better communicate their value to the world.

Tomorrow, the much-anticipated results of the HEFCE metrics review (which looked at whether altmetrics might be useful for evaluation purposes in the next REF) will be announced. I’m eager to see the recommendations of the scholars who led the review, and whether they agree that altmetrics can be indicators (rather than evidence) of potential impact.

Did you help prepare for REF2014 and want to share your thoughts on the role of altmetrics in assessment? If so, leave them in the comments below!

This is a guest post contributed by Heather Coates, Digital Scholarship & Data Management Librarian at IUPUI. It is intended as a follow-up to Heather’s previous post, Advice from a librarian: how to do successful altmetrics outreach, in which she discussed some key points to bear in mind when conducting altmetrics workshops and other educational activities within an institution. In this second part, she focuses on the core themes that can help a researcher think strategically about how they disseminate and share their work and expertise.

The research cycle doesn’t end with publication

Dissemination is part of the research process. There are far too many articles published each year (http://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf) to take a passive approach to dissemination – a 2012 report estimates that 28,100 journals publish ~1.8 million articles per year. Not every article needs to reach broad public awareness, but you can make sure that it reaches the community with whom you are trying to engage. For some, this may be as simple as depositing copies of articles or policy reports in a repository, then sharing them back with the community being studied. In the case of disciplinary work, there may be multiple communities of researchers or practitioners who are stakeholders, and the strategy may differ for each community. Engaging in a discussion with public health practitioners may require involvement with state and local agencies, while engaging with citizen scientists requires a less formal, more grassroots approach. The possibilities are endless, so it’s important to prioritize a few that are effective and important to you personally.

Finally, be sure to include all your scholarly products – data, syllabi, code – in your dissemination plan, not just articles and books. If you need some help thinking about how to share your data, check out the Open Data Handbook. I tend to have different strategies for talking about my work depending on the format. When I present at conferences, engagement tends to happen mostly on Twitter and in person. I use Storify to gather the tweets into a narrative that I can potentially use in my dossier. If there are associated blog posts, I include those as well. With more formal publications, I share with colleagues on Twitter, but may also write a short piece on my blog, then deposit copies in our institutional repository. When there are associated data, those go into our data repository, of course! The other major products I create are instructional materials on topics like data management and bibliometrics. These go onto Slideshare rather than our repository, due to the constantly changing face of tech platforms. I often write up significant instructional events on my blog and tweet about them.

Are you sensing a pattern? I try not to get overwhelmed with the possibilities; instead, I stick to a few platforms with which I am comfortable and use them as tools to engage with people who might be interested in my work.

Your scholarship is more than just publications

The scholarly ecosystem has only recently begun offering rewards for creating and sharing scholarly products like data, code, and instructional materials. For example:

  • In 2014, the NIH changed their biosketch guidelines to recognize products, rather than just publications, when considering grant applications.
  • Although the practice of citing data is still relatively new, funding agencies like the NIH and NSF and publishers are working to make it widespread, thereby promoting the formal dissemination and sharing of data.
  • Platforms like GitHub are enabling researchers to get credit for reuse of their code, and new journals, like Syllabus, provide peer review of instructional materials.

You are the product

Your publications are not the product that institutions want – you are. Demonstrating the impact of your work is mostly useful in communicating that you are a high-quality, productive scholar contributing to the missions of your institution, school, and department. Tell the reviewers how you and your work align with these priorities. Use evidence to demonstrate the impact of your work – citation metrics, altmetrics, testimonials, letters of support, etc.

With this in mind, remember that metrics cannot fully describe the value and impact of your work. That is up to you to articulate. What these metrics can do is to support your argument about why your institution should keep you around for the next 15-20 years. Metrics, like all statistics, can be misrepresented. Be sure you understand where a metric comes from and what it means. If a metric doesn’t support your argument, simply don’t include it.

Have a plan – strategically plan how you will disseminate your work

  • Take some time out each year to think about the following: Who do you want to engage with? The public? Policymakers? Potential collaborators?
  • What platforms or media channels do these groups use to communicate?
  • What guidelines or criteria for evaluation do your department, school, and institution provide?

Having a plan is particularly important to ensure that your work is accessible to your key audiences – and that dissemination actually happens. It’s far too easy to forget about papers once they have been published, but having a concrete plan for sharing your scholarly products makes it far more likely to get done. Basic strategies, such as creating a Twitter hashtag for your session or poster, are much more effective if they are incorporated into the materials, so that attendees know how to engage with you and share your work. Creating a hashtag that ties into a broader theme or conversation of the conference allows you to engage with and connect your work to that conversation.

Finally, depositing presentation materials or slides in your repository before the conference so that attendees have instant electronic access can be an effective way to share content and engage them in a deeper discussion during the conference. In my own personal experience, things that I have deposited before a conference to share at the conference have more views in the first few months than those I deposited after the conference.

Execute the plan – treat the work of disseminating, sharing, and tracking evaluation of your scholarly products as a project

Treat the dissemination of your work and the creation of your scholarly reputation like the project it is. Manage the project proactively – make the tasks a priority, set clear goals, track your progress, and re-evaluate when something isn’t working. This is less about self promotion and more about sharing your work in a way that engages your communities and facilitates discussion or deeper understanding of the topic. Some tips to share with faculty include:

  • Choose how, with whom, and when to engage – Be able to describe these communities clearly enough to explain to those reviewing your dossier.
  • Choose a few tools, be selective, then use them consistently – You don’t have to use all the tools. Pick just a few that fit into your workflows and work for the communities you have targeted.
  • Evaluate progress & adjust if necessary – Create summary tables or visualizations of your evidence periodically to confirm whether the data are supporting your story. If not, why?

Review and thoughtfully select the evidence supporting your argument – demonstrate that you are a productive scholar/teacher/practitioner worth keeping around

  • Know the criteria by which you will be evaluated. Choose and explicitly connect these metrics to those criteria.
  • Align your work with institutional priorities.
  • Align your work with the priorities of your research community or field.
  • Identify key themes for your narrative and explicitly connect them to the evaluation criteria.

tl;dr: Be proactive in engaging with your targeted communities. Treat the dissemination of your scholarly products as part of the research process and build it into your plan for getting tenure. Use a range of metrics to support the story of your scholarship.

For more information on altmetrics, or for ideas on gathering data about your scholarly products, plenty of further reading and resources are available online.


On 24th June, I attended the 44th annual LIBER conference with our Head of Marketing, Cat Chimes, and our Training and Implementations Manager, Natalia Madjarevic. The theme of this year’s conference was Open Access. In the impressive halls of Senate House Library at the University of London, we rubbed shoulders with librarians and other institutional representatives and discussed that most complex of questions: how can we combine technology, legislation and policy in a way that successfully and ethically facilitates the global sharing of knowledge?

The morning session I attended was organised and presented by SPARC Europe, an Open Access advocacy group who liaise with European universities and government bodies to further OA initiatives. First, Alma Swan gave an update on SPARC’s efforts to convince European governments to amend copyright laws so that researchers can perform data and text mining queries on large sets of articles if their institution is subscribed to the journal (UK copyright law incorporated this change as of last year).

Alma also introduced ROARMAP (the Registry of Open Access Repository Mandates and Policies), an online database that houses over 700 repository policy documents for a range of institutions and funders. SPARC looked at six policy conditions across the universities (such as whether an institution has made it mandatory for researchers to deposit their publications in the repository) and performed a regression analysis to try to ascertain the extent to which these policies affected researcher behaviours. They found a positive correlation between mandatory (rather than merely recommended) deposit policies and deposit rates. In conclusion, Alma suggested that researchers should make a habit of depositing their publications in Open Access spaces, not simply to comply with university regulations, but to make their research more visible to tenure and funding committees, thereby potentially enhancing their career prospects. This approach has interesting implications from an altmetrics perspective: the more people store and share research in easily accessible online spaces, the more activity and attention data can be collated around those outputs.
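
For the curious, the sketch below shows the general shape of the kind of analysis described – does mandate strength predict deposit rate? The data are entirely invented, and a plain least-squares fit stands in for whatever model SPARC actually used.

```python
# Minimal sketch: regress deposit rate on mandate strength. Data invented;
# SPARC's actual policy conditions and model will differ.
import numpy as np

# 0 = deposit merely recommended, 1 = deposit mandatory (one value per institution)
mandatory = np.array([0, 0, 0, 1, 1, 1, 0, 1, 1, 0])
deposit_rate = np.array([0.12, 0.20, 0.15, 0.55, 0.48, 0.61, 0.18, 0.52, 0.40, 0.22])

slope, intercept = np.polyfit(mandatory, deposit_rate, 1)
print(f"deposit_rate ~ {intercept:.2f} + {slope:.2f} * mandatory")
# A clearly positive slope mirrors SPARC's finding that insisting on
# deposit is associated with higher deposit rates.
```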

The session also included an update from David Ball on FOSTER, an Open Science training initiative, and from Joseph McArthur, co-founder of the Open Access Button. Overall, the workshop provided a comprehensive overview of the Open Access issues currently facing institutions.

After lunch, keynote speaker Sir Mark Walport (UK Government Chief Scientific Advisor and Head of the Government Office for Science) gave a more general introduction to current issues in research production and dissemination, and how these issues affect the librarian as a “visionary in the communication of knowledge”. He argued that technological developments have allowed us to make huge steps in knowledge dissemination, but that there is pressure to maintain best practices and really think about how to communicate research to different audiences as we continue to move from the printed page to the screen. He also advocated a move away from the single research paper as an isolated and closed declaration of discovery, arguing that research outputs should be continually updated to include more recent data that corroborates (or undermines) the original findings. One of the slides that really summed up his entire presentation showed the new library at Florida Polytechnic University, which doesn’t contain any paper books, only digitised records.

Florida Polytechnic University – Image credit: Rick Schwartz at Flickr

In one of the last sessions of the day, our very own Natalia Madjarevic gave a presentation on how Altmetric data can help libraries improve their research services. Scott Taylor (research services librarian at the University of Manchester) talked about how his institution had used the data to identify “impact stories” around their research – helping the impact officers uncover previously unknown information about how the scholarly outputs of their faculty had been shared, discussed and put to use beyond academia. Following this, Bertil F. Dorch presented the findings of a project on whether sharing astrophysics datasets online can increase citation impact. It was really interesting to get an altmetrics expert, a librarian and a researcher in the same room to talk about putting research online, and how that practice relates to different models of research evaluation.

Overall, day two of the annual LIBER conference provided many interesting insights. Although Altmetric only attended day two of a five-day conference, we still really got a sense of what librarians, policy makers and OA advocates are thinking and talking about in 2015. One thing that struck me was that although Open Access is now an established way of sharing data and research, the OA movement still presents challenges and opportunities in equal measure. Over the next few years, it will be interesting to see the outcomes of efforts from OA advocates such as SPARC, and to monitor changes in academic publishing and researcher practices in light of Mark Walport’s comments.

Thanks for reading, and feel free to leave feedback as always!

Welcome to Altmetric’s “High Five” for June, a discussion of the top five scientific papers with the highest Altmetric scores this month. Each month, my High Five post examines a selection of the most popular research outputs Altmetric has seen attention for.

The theme this month is BIG news.

Study #1. Entering the sixth mass extinction

 

Close-up of the Endangered California Desert Tortoise, Gopherus agassizii. Photo (C) Paige Brown Jarreau.

 

Our top paper this month is “Accelerated modern human–induced species losses: Entering the sixth mass extinction,” published in Science Advances. In this study, Gerardo Ceballos and colleagues across six different universities assess whether human activities are causing a modern day mass extinction.

According to the study authors, even under conservative assumptions about past vertebrate species extinction rates, “the average rate of vertebrate species loss over the last century is up to 100 times higher than the background rate.”

These estimates reveal an exceptionally rapid loss of biodiversity over the last few centuries, indicating that a sixth mass extinction is already under way. Averting a dramatic decay of biodiversity and the subsequent loss of ecosystem services is still possible through intensified conservation efforts, but that window of opportunity is rapidly closing. – G. Ceballos et al. 2015

Shared largely by scientists and members of the public on social media, this study was covered by news outlets including The Daily Beast (We’re not the Dinos: We’re the Asteroid), Spektrum.de in Germany, Popular Science (We’re entering a sixth mass extinction, and it’s our fault) and National Geographic (Will Humans Survive the Sixth Great Extinction?).

Ecologists have long warned that we are entering a mass extinction. Science journalist Elizabeth Kolbert just won the Pulitzer Prize in nonfiction for her book titled “The Sixth Extinction”—yet this particular study, led by Gerardo Ceballos of the National Autonomous University of Mexico, is so profound because its findings are based off the most conservative extinction rates available. Many other studies in the past were criticized for overestimating the severity of the crisis. Even when using these conservative estimates, however, Ceballos and his team found that the average rate of vertebrate species loss over the last century is over 100 times greater than the normal rate of extinction, also known as the background rate. – Grennan Milliken, Popular Science

Cumulative vertebrate species recorded as extinct or extinct in the wild by the IUCN (2012). Dashed black line represents background rate. Credit: Ceballos et al.

Most news outlets focused on the rather gloomy message that extinction rates have skyrocketed and that humans are the prime suspect. This is interesting, considering that sad or gloomy messages don’t tend to spread in social media environments as much or as quickly as feel-good or exciting ones; however, anger or indignation at the news might have prompted readers to share. “Yes, humans are probably to blame for the Earth’s sixth mass extinction event, which is wiping out species at a rate 53 times greater than normal,” Matthew Francis wrote for The Daily Beast. The graph above shows the cumulative vertebrate species recorded as extinct or extinct in the wild by the IUCN (2012), as compared to the conservative background rate used by Ceballos and colleagues.

To be fair, scientists have suspected humans are the reason for the Sixth Extinction for some time. It’s even the subject of several books. However, it’s difficult to assign numbers and rates of extinction over human history: It’s easiest to see extinctions long after they happened, rather than in process. The key is quantifying how many extinctions have happened on our watch versus the normal rate of species death. – The Sixth Mass Extinction: We Aren’t The Dinosaurs, We’re The Asteroid

But Nadia Drake over at National Geographic had a slightly different message than most writers covering this study. Drake interviewed journalist Elizabeth Kolbert, author of the Pulitzer Prize-winning book The Sixth Extinction, about “what these new results might reveal for the future of life on this planet,” including human life. “Are humans destined to become casualties of our own environmental recklessness?”

There are two questions that arise: One is, OK, just because we’ve survived the loss of X number of species, can we keep going down the same trajectory, or do we eventually imperil the systems that keep people alive? That’s a very big and incredibly serious question. And then there’s another question. Even if we can survive, is that the world you want to live in? Is that the world you want all future generations of humans to live in? That’s a different question. But they’re both extremely serious. I would say they really couldn’t be more serious. – Elizabeth Kolbert, as interviewed by Nadia Drake for NatGeo

How do the results of this study make YOU feel?

 

Study #2. Changing Textbooks – Newly Discovered Link Between Brain and Immune System

 

Image: Maps of the lymphatic system: old (left) and updated to reflect UVA’s discovery. Image credit: University of Virginia Health System

 

Our next top paper is “Structural and functional features of central nervous system lymphatic vessels,” a research letter published in Nature this month. This study describes the discovery of a central nervous system lymphatic system in mice – in other words, a link between the brain and the immune system. As Time magazine headlined, “Game-Changing Discovery Links the Brain and the Immune System.”

The discovery of the central nervous system lymphatic system may call for a reassessment of basic assumptions in neuroimmunology and sheds new light on the aetiology of neuroinflammatory and neurodegenerative diseases associated with immune system dysfunction. – A. Louveau et al. 2015

 

“The first time these guys showed me the basic result, I just said one sentence: ‘They’ll have to change the textbooks.’” – Kevin Lee, chairman of UVA Department of Neuroscience, quoted in Science Daily

@Vectorofscience, a PhD student studying infectious diseases whom I follow on Twitter, tweeted this about the study: “A lymphatic system in the brain? Now that’s cool, and it’s going to change the way we view immunity in the CNS. http://t.co/eGZtbKFDPD.” If there was a missing link between the brain and the immune system, this study appears to be a big step toward filling it in.

Here’s a surprise: there are lymphatic vessels going into the brain. That’s reported in this paper in Nature. (Here’s a pretty breathless press release from the University of Virginia, where the work was done). – Derek Lowe, Ph.D., In The Pipeline

 

Scientists have discovered a previously unknown link between the brain and the immune system that could help explain links between poor physical health and brain disorders including Alzheimer’s and depression. […] The new anatomy is an extension of the lymphatic system, a network of vessels that runs in parallel to the body’s vasculature, carrying immune cells rather than blood. Rather than stopping at the base of the skull, the vessels were discovered to extend throughout the meninges, a membrane that envelops the brain and the spinal cord. – Hannah Devlin, The Guardian

The question now, to be answered by future research, is how exactly this link between the brain and the immune system affects neurological diseases and mental illnesses.

 

Study #3. A new horned dinosaur, Regaliceratops

 

Our third top paper, “A New Horned Dinosaur Reveals Convergent Evolution in Cranial Ornamentation in Ceratopsidae,” was published in Current Biology this month. The paper describes an intriguing new horned dinosaur. As reported by Ian Sample in The Guardian, “Nicknamed Hellboy, the dinosaur had short horns over the eyes and a long nose horn, the opposite of the features sported by its close relative triceratops.”

Regaliceratops exhibits a suite of cranial ornamentations that are superficially similar to Campanian centrosaurines […] This marks the first time that evolutionary convergence in horn-like display structures has been demonstrated between dinosaur clades, similar to those seen in fossil and extant mammals. – C. Brown and D. Henderson

Regaliceratops (“regal”) was named for its frill, “a set of large, pentagonal plates like a crown atop its head,” by researchers at the Royal Tyrrell Museum of Palaeontology, who found the skull of this dinosaur in Canada. The discovery of this dinosaur is even more significant because it provides evidence of evolutionary convergence in horn-like display structures between this dinosaur and its cousins from a distant era. In other words, without being direct ancestors, Regaliceratops and the centrosaurines developed similar horn displays on their skulls.

There are these really stubby horns over the eyes that match up with the comic book character Hellboy. – Caleb Brown, paleontologist at the Royal Tyrrell Museum of Palaeontology in Alberta, Canada, as quoted by National Geographic.

The discovery was covered by many news outlets and online media sites, including Popular Science, National Geographic (Triceratops cousin unearthed in Canada is so elaborately adorned ‘it blows your mind’) and IFLscience (New Horned Dino Rocked a Crown-Shaped Frill) among others.

But Regaliceratops’s amazing looks weren’t the only aspect of this study that attracted media coverage. It may touch your heart to know that the lead author of the paper, Caleb M. Brown, proposed to his girlfriend in the acknowledgements section of the published paper!

“C.M.B. would specifically like to highlight the ongoing and unwavering support of Lorna O’Brien. Lorna, will you marry me?” – A New Horned Dinosaur Reveals Convergent Evolution in Cranial Ornamentation in Ceratopsidae, acknowledgements

Buzzfeed picked up on the marriage proposal too. Not only amazing science, but cute as well. On to more BIG news!

 

Study #4. No Global Warming Hiatus

 

Credit: NOAA

 

Our next most-mentioned paper this month is a report published in Science, “Possible artifacts of data biases in the recent global surface warming hiatus.”

Much study has been devoted to the possible causes of an apparent decrease in the upward trend of global surface temperatures since 1998, a phenomenon that has been dubbed the global warming “hiatus.” Here, we present an updated global surface temperature analysis that reveals that global trends are higher than those reported by the Intergovernmental Panel on Climate Change, especially in recent decades, and that the central estimate for the rate of warming during the first 15 years of the 21st century is at least as great as the last half of the 20th century. These results do not support the notion of a “slowdown” in the increase of global surface temperature. – Abstract, Possible artifacts of data biases in the recent global surface warming hiatus

In the report, Thomas R. Karl and other scientists from the National Oceanic and Atmospheric Administration (NOAA) present evidence disputing the suggestion, drawn from previous analyses, that global warming “stalled” during the first decade of the 21st century.

Karl et al. now show that temperatures did not plateau as thought and that the supposed warming “hiatus” is just an artifact of earlier analyses. Warming has continued at a pace similar to that of the last half of the 20th century, and the slowdown was just an illusion. – Editor’s Summary, Possible artifacts of data biases in the recent global surface warming hiatus

The report sparked quite a bit of news coverage and strong discussions on social media.

Last week, a paper out of NOAA concluded that contrary to the popular myth, there’s been no pause in global warming. The study made headlines across the world, including widely-read Guardian stories by John Abraham and Karl Mathiesen. In fact, there may have been information overload associated with the paper, but the key points are relatively straightforward and important. – Dana Nuccitelli, The Guardian

 

[T]here never was any “pause” or “hiatus” in global warming. There is evidence, however, for a modest, temporary slowdown in surface warming through the early part of this decade. – Michael Mann

More good reads about this new report can be found below, including this one from Phil Plait:

I expect the deniers — as usual — will be blowing a lot of hot air about this, but the science is becoming ever more clear. Global warming is real, and it hasn’t stopped. People who claim otherwise are trying to sell you something… and you really, really shouldn’t be buying it. – Phil Plait

 

Study #5. Your viral history in a single drop of blood

 

The capsid of SV40, an icosahedral virus. Image credit: Phoebus87 at English Wikipedia

 

Our last paper, “Comprehensive serological profiling of human populations using a synthetic human virome,” was published in Science this month. The study describes a new method called VirScan that “enables human virome-wide exploration, at the epitope level, of immune responses in large numbers of individuals.”

VirScan combines DNA microarray synthesis and bacteriophage display to create a uniform, synthetic representation of peptide epitopes comprising the human virome. – G. Xu et al. 2015

What does all that mean? It means the authors of this study have developed a blood test that identifies antibodies against all known human viruses. With a single drop of your blood, this new test could in principle give scientists a history of every viral infection you’ve ever had.

Every time a virus gets you sick, your immune system keeps a record. This essentially becomes a kill list that lets your body recognize and readily dispatch any virus that tries to invade again. Scientists have now created a $25 blood test that prints out this list – an easy and cheap way to find out every virus that’s ever made you sick. – Sarah Zhang, Gizmodo

 

Thanks to a method described today (June 4) in Science, it may soon be possible to test patients for previous exposures to all human-tropic viruses at once. Virologist Stephen Elledge of Harvard Medical School and the Brigham and Women’s Hospital in Boston and his colleagues have built such a test, called “VirScan,” from a bacteriophage-based display system they developed in 2011. The scientists programmed each phage to express a unique viral peptide, collectively producing about 100 peptides from each of the 206 known human-tropic viral species. – A Lifetime of Viruses, by Amanda Keener, The Scientist

You can imagine the benefits of such a test – for diagnosing puzzling disease symptoms, for example.

 

That’s it for this month! Have thoughts about these findings? Share them with me on Twitter, @FromTheLabBench, or comment below. Thanks!

In our first post in this blog series, we introduced the advantages of using altmetrics to curate your digital identity as a researcher. The aim of this post is to look in more detail at how you can do just that, and provide some tips for how to adapt your online activity to successfully promote your research. We also talked to Ethan White, Biology researcher at the University of Florida, and Jacquelyn Gill, Professor of Ecology at the University of Maine, to see what tips they had for our readers.

Blogging 


Jacquelyn Gill’s blog

Ethan and Jacquelyn both said they use blogs and Twitter most often to promote their research. Blogs are a really great way to introduce new research and participate in the conversations that are happening in your field. However, the blogosphere is not simply an online space from which to alert the world to your own activities.

Following other blogs, commenting on other people’s posts and including links to other blogs in your posts means you can participate in wider academic discussions, and potentially invite more engagement with your own research. If you create a blog using WordPress, Blogger or Tumblr, you can view and save preferred blogs from the same platform using the built-in “suggested blogs” sections on their sites.

You can also install the free Altmetric bookmarklet to see if anyone has mentioned your own research (or even other research published in your field) in a blog post – simply drag the bookmarklet to your browser bar and click it while viewing your article on the publisher site to bring up the Altmetric data.

For more blogging tips, this post from Helen Eassom at Wiley has some great suggestions for effective practice.

Maintaining a consistent digital identity


Ethan White’s Twitter profile

It’s important to be consistent with how you present your identity across different online platforms. For example, you might want to use the same photo across your university faculty page, blog homepage and social media accounts, so that people who might be interested in your research can instantly identify you and verify (for example) your Twitter account against your LinkedIn profile.

Another way of maintaining these connections is to link between platforms when posting. You can do this by sharing your newest blog posts on social media, or including a link to your blog or website in your Twitter bio and faculty page. According to Jacquelyn Gill, “Maintaining visibility on multiple platforms is key! I’ve found Twitter to be an especially great resource in signal-boosting blog posts and new articles. Most other platforms don’t take much work, but it’s always worth putting in the time to keep them up-to-date”.

Networking

Blogs and social media networks can offer the opportunity to engage with people you might not otherwise have had the chance to meet. If (for example) a fellow researcher leaves an interesting comment on one of your blog posts, it should be easy to respond to their comments, and perhaps later locate them on social media to continue the conversation. The people they follow might also be useful contacts to engage with, thereby increasing your own network. If you’re on the conference circuit, it’s always worth following up any talks you give with a link directly to your published research, using the conference hashtag to alert other delegates to your tweet.

As with blogging, the Altmetric bookmarklet can show you who has been sharing both your own work and other outputs published in your discipline via their blogs and on Twitter, Facebook, Sina Weibo and Google Plus – providing insight into who it might be worth following or reaching out to for additional visibility in future.

Ethan White had lots of interesting things to say about using online platforms to manage and update your professional network. He argued that it’s more useful to think of blogs and social media as tools to create mutually beneficial relationships that support knowledge dissemination.

“Developing a good network of online colleagues will ultimately help you promote your research online more successfully. Think about it this way: if you had a colleague who only ever stopped by your office to tell you that they’d just had a new paper published, you might not be super excited to see them, but if you have a colleague who you talk to about lots of different things, and respect based on their opinions on science in general, then you’d be excited to hear that they had a new idea or had just published a new paper”.

Ethan’s analogy works really well, and suggests that a researcher’s attitude towards online engagement with research is just as important as their practices.

Sharing your own research online 

Ensuring your research is as freely accessible as possible can really help raise your profile online. Make a habit of uploading articles to your institutional repository or sharing them amongst academic networks like Mendeley, Zotero or ResearchGate (once they are free of any embargo restrictions, of course), so they can be read by people who may not otherwise have access.

You can also use services such as Figshare to upload and attach unique identifiers to non-article research outputs, such as datasets, posters or images – giving other researchers the opportunity to reuse and build on your work (dependent on your chosen security and copyright preference settings). Once you’ve made your research available, you might like to include links to your outputs from your email signature, institutional faculty page or LinkedIn profile, or even post them to a subject-specific forum.

If you’re keen to take it a step further you might like to consider building your own website to showcase your work. There are lots of free platforms available, so this need not be technically daunting – try Moonfruit or Wix to help you get started.

Finally… how can I make sure my online activity is picked up by Altmetric?

  • If you have a blog, email support@altmetric.com with your blog’s homepage and a link to its RSS feed, so we can add it to our list and start picking up mentions of published research outputs in your posts.

  • When blogging about research, make sure you embed a link to the article in the main body of the text. Our software ignores headers and footers when scraping a page, so mentions of articles in those sections don’t get picked up.

  • When posting on social media, attach a link to the main article page of the research output on the publisher website, rather than to a PDF. (Once you’ve done this, see the sketch below for one way to check that your mentions are being picked up.)
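If you’d like to check programmatically whether Altmetric has started tracking mentions of one of your outputs, there is also a free public Altmetric API. Below is a minimal sketch in Python – not official Altmetric sample code – that assumes the v1 DOI endpoint and the response field names shown, so do double-check them against the current API documentation; the DOI in the script is a placeholder for one of your own.

```python
# Minimal sketch (unofficial): query the public Altmetric API for a DOI
# to see whether any online mentions have been picked up.
# Assumptions: the v1 DOI endpoint and the field names below; the DOI is
# a placeholder -- substitute the DOI of one of your own outputs.
import requests

DOI = "10.1234/example.doi"  # placeholder DOI

response = requests.get(f"https://api.altmetric.com/v1/doi/{DOI}")

if response.status_code == 404:
    # A 404 means Altmetric has not (yet) seen any mentions of this output
    print("No attention data found for this DOI.")
else:
    response.raise_for_status()
    data = response.json()
    print("Altmetric score:", data.get("score"))
    print("Twitter accounts:", data.get("cited_by_tweeters_count", 0))
    print("Blogs (RSS feeds):", data.get("cited_by_feeds_count", 0))
    print("Details page:", data.get("details_url"))
```

If you get a “no attention data” result for an article you know has been blogged or tweeted about, that’s a good prompt to run through the checklist above – for instance, making sure the blog’s RSS feed has been registered with us and that posts link to the article’s main page rather than a PDF.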

As always, feel free to give us feedback on this blog post – thanks for reading!