
Before, Between, and After: Enriching Robot Communication Surrounding Collaborative Creative Activities

Overview of attention for an article published in Frontiers in Robotics and AI, April 2021

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Above-average Attention Score compared to outputs of the same age and source (54th percentile)

Mentioned by

  • 4 X users

Citations

  • 9 citations (Dimensions)

Readers on

  • 26 readers (Mendeley)
Title
Before, Between, and After: Enriching Robot Communication Surrounding Collaborative Creative Activities
Published in
Frontiers in Robotics and AI, April 2021
DOI 10.3389/frobt.2021.662355
Authors

Richard Savery, Lisa Zahray, Gil Weinberg

Abstract

Research in creative robotics continues to expand across all creative domains, including art, music, and language. Creative robots are primarily designed to be task-specific, with limited research into the implications of their design outside their core task. In the case of a musical robot, this includes when a human sees and interacts with the robot before and after the performance, as well as in between pieces. These non-musical interaction tasks, such as the presence of a robot during musical equipment setup, play a key role in the human perception of the robot, yet they have received only limited attention. In this paper, we describe a new audio system using emotional musical prosody, designed to match the creative process of a musical robot for use before, between, and after musical performances. Our generation system relies on the creation of a custom dataset for musical prosody. This system is designed foremost to operate in real time and allow rapid generation and dialogue exchange between human and robot. For this reason, the system combines symbolic deep learning, through a Conditional Convolutional Variational Autoencoder, with an emotion-tagged audio sampler. We then compare this to a state-of-the-art text-to-speech system in our robotic platform, Shimon the marimba player. We conducted a between-groups study with 100 participants watching a musician interact with Shimon for 30 s. We were able to increase user ratings for the key creativity metrics of novelty and coherence, while maintaining ratings for expressivity across each implementation. Our results also indicated that by communicating in a form that relates to the robot's core functionality, we can raise likeability and perceived intelligence, while not altering animacy or anthropomorphism. These findings indicate the variation that can occur in the perception of a robot based on interactions surrounding a performance, such as initial meetings and spaces between pieces, in addition to the core creative algorithms.
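
The abstract describes combining a conditional convolutional variational autoencoder with an emotion-tagged audio sampler for real-time prosody generation. As a rough illustration of that class of model only (this is not the authors' published architecture; the sequence representation, layer sizes, and one-hot emotion conditioning below are all assumptions), here is a minimal PyTorch sketch of a conditional convolutional VAE over a short prosody sequence:

    # Hypothetical sketch of a conditional convolutional VAE of the kind the
    # abstract mentions; all shapes and the conditioning scheme are assumptions.
    import torch
    import torch.nn as nn

    class ConditionalConvVAE(nn.Module):
        def __init__(self, seq_len=64, n_features=1, n_emotions=4, latent_dim=16):
            super().__init__()
            self.seq_len = seq_len
            # Encoder: 1-D convolutions over the prosody sequence, with the
            # emotion label broadcast and concatenated as extra input channels.
            self.encoder = nn.Sequential(
                nn.Conv1d(n_features + n_emotions, 32, kernel_size=4, stride=2, padding=1),
                nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=4, stride=2, padding=1),
                nn.ReLU(),
                nn.Flatten(),
            )
            enc_out = 64 * (seq_len // 4)
            self.fc_mu = nn.Linear(enc_out, latent_dim)
            self.fc_logvar = nn.Linear(enc_out, latent_dim)
            # Decoder: latent vector plus emotion label back to a sequence.
            self.fc_dec = nn.Linear(latent_dim + n_emotions, enc_out)
            self.decoder = nn.Sequential(
                nn.ConvTranspose1d(64, 32, kernel_size=4, stride=2, padding=1),
                nn.ReLU(),
                nn.ConvTranspose1d(32, n_features, kernel_size=4, stride=2, padding=1),
            )

        def forward(self, x, emotion):
            # x: (batch, n_features, seq_len); emotion: (batch, n_emotions) one-hot.
            cond = emotion.unsqueeze(-1).expand(-1, -1, x.size(-1))
            h = self.encoder(torch.cat([x, cond], dim=1))
            mu, logvar = self.fc_mu(h), self.fc_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
            h_dec = self.fc_dec(torch.cat([z, emotion], dim=1))
            h_dec = h_dec.view(-1, 64, self.seq_len // 4)
            return self.decoder(h_dec), mu, logvar

    # Usage: one forward pass with a batch of 8 sequences tagged with the first emotion class.
    model = ConditionalConvVAE()
    x = torch.randn(8, 1, 64)
    emotion = torch.zeros(8, 4)
    emotion[:, 0] = 1.0
    recon, mu, logvar = model(x, emotion)
    print(recon.shape)  # torch.Size([8, 1, 64])

In the real-time setting the abstract describes, the decoded symbolic output would presumably drive the emotion-tagged audio sampler rather than be rendered directly, but that pipeline is not detailed on this page.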

X Demographics

The data shown below were collected from the profiles of 4 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 26 Mendeley readers of this research output.

Geographical breakdown

Country    Count   As %
Unknown       26   100%

Demographic breakdown

Readers by professional status

Status                        Count   As %
Student > Postgraduate            3    12%
Student > Ph.D. Student           3    12%
Student > Doctoral Student        2     8%
Student > Master                  2     8%
Unspecified                       1     4%
Other                             3    12%
Unknown                          12    46%

Readers by discipline

Discipline                              Count   As %
Engineering                                 3    12%
Social Sciences                             3    12%
Psychology                                  2     8%
Business, Management and Accounting         2     8%
Computer Science                            1     4%
Other                                       3    12%
Unknown                                    12    46%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 11 January 2024.
All research outputs: #15,715,863 of 25,918,104 outputs
Outputs from Frontiers in Robotics and AI: #770 of 1,787 outputs
Outputs of similar age: #230,028 of 457,732 outputs
Outputs of similar age from Frontiers in Robotics and AI: #59 of 138 outputs
Altmetric has tracked 25,918,104 research outputs across all sources so far. This one is in the 38th percentile – i.e., 38% of other outputs scored the same or lower than it.
So far Altmetric has tracked 1,787 research outputs from this source. They typically receive much more attention than average, with a mean Attention Score of 12.2. This one has received more attention than average, scoring higher than 54% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 457,732 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 47th percentile – i.e., 47% of its contemporaries scored the same or lower than it.
We are also able to compare this research output to 138 others from the same source that were published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 54% of its contemporaries.
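
As a back-of-the-envelope check on how these percentiles relate to the ranks listed above (this is only an approximation; Altmetric's exact handling of ties, rounding, and the snapshot date is not documented on this page), a rank can be converted to a rough percentile like this:

    # Illustrative arithmetic only; not Altmetric's documented method, so small
    # discrepancies against the quoted 38th and 47th percentiles are expected.
    def approx_percentile(rank, total):
        """Share of tracked outputs ranked at or below this one, in percent."""
        return 100.0 * (total - rank) / total

    # Rank #15,715,863 of 25,918,104 tracked outputs overall.
    print(round(approx_percentile(15_715_863, 25_918_104)))  # ~39, close to the quoted 38th percentile

    # Rank #230,028 of 457,732 outputs of similar age.
    print(round(approx_percentile(230_028, 457_732)))        # ~50, vs. the quoted 47th percentile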