The following post was written by Sara Rouhi, Director of Business Development (NA):
As someone who spends a lot of time talking to customers using Altmetric data, I hear the following piece of feedback frequently:
I feel like I’m just scratching the surface of this data. I know there’s more I can do with it but I don’t know how to start.
It’s true! The terabytes of data that Altmetric collects in real time — mentions across 16 different source channels, captured in 20+ languages, 24/7/365 — are grains of sand mixed with diamonds. How do you pull out the diamonds without wading through sand dunes?
Try the red flag/green flag/blue flag approach.
Not all sources are created equal. Some, such as F1000, video, and policy citations, tend to carry mostly positive feedback. I call these green flag sources, and you should check them regularly to stay on top of your kudos. You may define these differently based on how your field assesses research and on your own research priorities.
Video and policy citation mentions are relatively rare compared to mentions on platforms like social media. When they pop up, they’re worth investigating immediately to determine the context of the citation: did the policy document cite your work as part of a literature review, or is the entire document about your article?
F1000 mentions are by definition positive as they’re recommendations from experts, faculty, and researchers who have been invited to review research and recommend it. In F1000 your work can be highlighted as a “New Finding,” “Controversial,” “Interesting Hypothesis,” “Negative/Null results,” “Clinical trial,” “Good for Teaching,” “Systematic Review/Meta-Analysis,” “Refutation,” “Confirmation,” “Technical Advance,” and many others.
Red flag sources will most likely have critical/negative feedback and deserve immediate attention to mitigate potential misunderstandings about your work, misrepresentation of your work, or PR crises. Addressing concerns in red flag sources is fundamentally about reputation management.
At a glance, red flag sources include Reddit, peer review sites, Twitter, and blogs (again, these might differ according to how you assess these channels).
Reddit and peer review sites are public fora where anyone (but usually researchers) can comment on your work and ask questions. These questions should generally be answered as quickly as possible, since speculation about academic fraud often starts in these outlets.
If there is smoke, the fire will break out on Twitter and spill over into the blogosphere. Preempting that contagion by reviewing Reddit and peer review mentions quickly and engaging with them transparently is the best way to keep your work out of higher-profile venues like Twitter and blogs.
Blue flag, or review, sources tend to have little editorial content; instead they function as information intermediaries between your research and its consumers. Practitioners typically will not read the actual peer-reviewed article but will visit Wikipedia, a preferred blogger, or a Q&A site to get a sense of what peer-reviewed work they need to pay attention to. A busy nurse may not have time to read 40 new peer-reviewed articles each day, but she can read 3-4 blogs daily that keep her abreast of research in her field.
These review sources should be checked regularly to ensure that your work is being accurately represented. Are you being cited correctly in Wikipedia? Are the answers people post about your work on Q&A sites correct? Are there comments in blog posts that you need to respond to?
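For readers who want to automate this triage, the flag system described above can be sketched as a simple lookup that sorts incoming mentions into review buckets. This is an illustrative sketch only: the source names, flag assignments, and `triage` helper are assumptions for the example, not part of any Altmetric product or API, and you should adjust the mapping to match how your field assesses each channel.

```python
# Hypothetical sketch of the red/green/blue flag triage described above.
# The source names and flag assignments are illustrative assumptions;
# redefine them to suit your own research priorities.

FLAGS = {
    "f1000": "green",        # expert recommendations: usually positive
    "video": "green",
    "policy": "green",
    "reddit": "red",         # public fora: address questions quickly
    "peer_review": "red",
    "twitter": "red",
    "blog": "red",
    "wikipedia": "blue",     # intermediary sources: check for accuracy
    "qa_site": "blue",
}

def triage(mentions):
    """Group mentions by flag color so red flags can be reviewed first."""
    buckets = {"red": [], "green": [], "blue": [], "unflagged": []}
    for mention in mentions:
        flag = FLAGS.get(mention["source"], "unflagged")
        buckets[flag].append(mention)
    return buckets

# Example mentions (invented for illustration).
mentions = [
    {"source": "reddit", "text": "Is figure 2 reproducible?"},
    {"source": "f1000", "text": "Recommended: New Finding"},
    {"source": "wikipedia", "text": "Cited in a disease-overview article"},
]
result = triage(mentions)

# Red flag mentions deserve immediate attention, so surface them first.
for m in result["red"]:
    print("REVIEW NOW:", m["text"])
```

Run daily against your exported mentions, a sketch like this turns a pile of sand into three short review queues, with the red queue at the top.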
Once you’ve sorted the source channels into red, blue, and green flags, the next set of questions to ask is:
What channels do I consider highest priority?
What commenters/researchers/stakeholders matter most to me?
What audiences or communities are most relevant to me?
When reviewing red, green, and blue flag channels, the next level of assessment is to determine WHO is engaging with your work: do they matter to you, and is their engagement positive, negative, or neutral?
This is critical for:
- Grant applications/reporting
- Reputation management
- Exploring collaboration opportunities
Any surprises that arise in a close demographic, professional, or regional analysis of engagement with your work could be valuable data in making a qualitative case around the impact of your work.
Give the flag system a try and let us know what you think. Was finding diamonds easier? Are you still digging through a lot of sand? What other ways of slicing and dicing the engagement data are helpful to you?