Altmetric Blog

Capturing the flag: Are there easier ways of identifying “high value” engagement?

Josh Clark, 29th September 2016

The following post was written by Sara Rouhi, Director of Business Development (NA):

As someone who spends a lot of time talking to customers using Altmetric data, I hear the following piece of feedback frequently:

I feel like I’m just scratching the surface of this data. I know there’s more I can do with it but I don’t know how to start.

It’s true! The terabytes of data that Altmetric collects in real time — mentions across 16 different source channels, captured in 20+ languages, 24/7/365 — are grains of sand mixed with diamonds. How do you pull out the diamonds without wading through sand dunes?

Try the red flag/green flag/blue flag approach.

Not all sources are created equal. Some will frequently have mostly positive feedback; I call these green flag sources, and you should check them regularly to stay on top of your kudos. F1000, video, and policy citations fall into this group. You may define these differently based on how your field assesses research and your personal research priorities.

In the case of video and policy citations — these mentions are relatively rare compared to mentions on other platforms like social media. When they pop up, they’re worth investigating immediately to determine the context of the citation — did the policy document cite your work as part of a literature review or is the entire document about your article?

F1000 mentions are by definition positive as they’re recommendations from experts, faculty, and researchers who have been invited to review research and recommend it. In F1000 your work can be highlighted as a “New Finding,” “Controversial,” “Interesting Hypothesis,” “Negative/Null results,” “Clinical trial,” “Good for Teaching,” “Systematic Review/Meta-Analysis,” “Refutation,” “Confirmation,” “Technical Advance,” and many others.

Red flag sources will most likely have critical/negative feedback and deserve immediate attention to mitigate potential misunderstandings about your work, misrepresentation of your work, or PR crises. Addressing concerns in red flag sources is fundamentally about reputation management.

At a glance red flag sources include: Reddit, Peer Review sites, Twitter, blogs (again these might differ according to how you assess these channels).

Reddit and peer-review sites are public fora where anyone (but usually researchers) can comment on your work and ask questions. These questions should generally be answered as quickly as possible, as speculation about academic fraud often starts in these outlets.

If there is smoke, the fire will break out on Twitter and spill over to the blogosphere. Preempting such contagion by always reviewing Reddit/peer-review mentions quickly and transparently engaging with them is the best way to keep your work out of higher profile venues like Twitter and blogs.

Blue flag, or review, sources tend to have little editorial content but rather function as an information intermediary between your research and the consumer. Typically, practitioners will not read the actual peer-reviewed article but will instead visit Wikipedia, a preferred blogger, or a Q&A site to get a sense of what peer-reviewed work they need to pay attention to. A busy nurse may not have time to read 40 new peer-reviewed articles each day, but she can read 3-4 blogs daily that keep her abreast of research in her field.

These review sources should be checked regularly to ensure that your data is being accurately represented. Are you being cited correctly in Wikipedia? Are the answers people post about your work on Q&A sites correct? Are there comments in blog posts that you need to respond to?
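The flag triage described above is easy to automate as a first pass. Here is a minimal sketch in Python: the source names, flag assignments, and the shape of the mention records are all illustrative assumptions (not Altmetric's actual data model), and you should rebalance the buckets to match how your field assesses these channels.

```python
# Illustrative red/green/blue flag triage over a list of mention records.
# Which sources belong in which bucket is a judgment call, per the post;
# these assignments just mirror the examples given above.
FLAGS = {
    "green": {"f1000", "video", "policy"},            # usually positive; check regularly
    "red": {"reddit", "peer review", "twitter", "blog"},  # may need a fast response
    "blue": {"wikipedia", "q&a"},                     # intermediaries; check for accuracy
}

def flag_for(source):
    """Return the flag colour for a mention source, or 'unflagged'."""
    for colour, sources in FLAGS.items():
        if source.lower() in sources:
            return colour
    return "unflagged"

def triage(mentions):
    """Group mention records (dicts with a 'source' key) by flag colour."""
    buckets = {"green": [], "red": [], "blue": [], "unflagged": []}
    for mention in mentions:
        buckets[flag_for(mention["source"])].append(mention)
    return buckets

# Hypothetical mentions, just to show the grouping:
mentions = [
    {"source": "Twitter", "text": "Interesting result, but n=12?"},
    {"source": "Policy", "text": "Cited in a guidance document"},
    {"source": "Wikipedia", "text": "Referenced in an encyclopedia article"},
]
buckets = triage(mentions)
print({colour: len(items) for colour, items in buckets.items()})
# → {'green': 1, 'red': 1, 'blue': 1, 'unflagged': 0}
```

Note that blogs appear under red here, following the list above, even though the blue flag discussion also mentions bloggers — a reminder that the same channel can sit in different buckets depending on your priorities.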

Once you’ve sorted the source channels into red, blue, and green flags, the next set of questions to ask is:

What channels do I consider highest priority?

What commenters/researchers/stakeholders matter most to me?

What audiences or communities are most relevant to me?

When reviewing red, green, and blue flag channels, the next level of assessment is to determine WHO is engaging with you, whether they matter to you, and whether their engagement is positive, negative, or neutral.

This is critical for:

  • Tenure/promotion
  • Grant applications/reporting
  • Reputation management
  • Exploring collaboration opportunities

Any surprises that arise in a close demographic, professional, or regional analysis of engagement with your work could be valuable data in making a qualitative case around the impact of your work.

Give the flag system a try and let us know what you think. Was finding diamonds easier? Still digging through a lot of sand? What other ways of slicing and dicing the engagement are helpful to you?

You can find me on Twitter @RouhiRoo or send me an email at

4 Responses to “Capturing the flag: Are there easier ways of identifying “high value” engagement?”

Linda Margaret
October 28, 2016 at 12:00 am

I hope the three questions
What channels do I consider highest priority?
What commenters/researchers/stakeholders matter most to me?
What audiences or communities are most relevant to me?
are incorporated into editorial workflows. This will help anyone working on assessing the outreach of a publication to understand better how s/he can demonstrate the value of the altmetrics tool.

mrgunn (@mrgunn)
November 1, 2016 at 12:00 am

YMMV, but I feel like you could really get yourself in trouble expecting a whole category of sources to be always positive or always negative, so I'm not sure breaking it down that way really does help. You could sidestep the positive/negative issue by not seeking to use metrics to pass judgment, but rather to add context. That still leaves you with the job of having to decide how to harness all the different streams of data, but I don't see how you can get around it without blindly following a categorization, and you'll have no idea how badly you're misinterpreting things if you don't make your own decisions about sources.

Sara Rouhi
November 3, 2016 at 12:00 am

Dear Linda,

Thanks for your questions and for reading my post. The answers to your three questions are all up to you. Annoying response, I know!

You have to evaluate your priorities before identifying channels, stakeholders, and audiences. Is your primary interest getting more eyeballs on your work? Then you're probably going to focus on Twitter and blogs as a means to foster potential news coverage. If you're interested in finding other scholars' takes on your work, you should focus on Twitter, peer review sites, and F1000. If you want to focus on translation -- turning your work into action -- you should look at policy mentions and blogs.

Once you've determined your goal with your online presence, then you can identify channels, stakeholders, and audiences.

Let me know if this helps! I'm at

Sara Rouhi
November 3, 2016 at 12:00 am

William, thanks as always for your feedback. I absolutely agree here. The purpose is not to uniformly paint any source type with a broad brush but rather to use loose constructs to begin breaking down this sea of data into manageable bites. Of course NO source will be 100% positive or negative, but different sources definitely have different tilts, especially depending on your organization and your role at that org. Within the same organization, different groups or individuals might label these categories differently. These are meant to be loose constructs to help get your hands around this data. This is by no means the right, correct, or ONLY way to do this kind of evaluation.

William, thanks for this reminder!


Leave a Reply

Your email address will not be published.