Search Results for author: Micol Spitale

Found 7 papers, 6 papers with code

REACT 2024: the Second Multiple Appropriate Facial Reaction Generation Challenge

1 code implementation • 10 Jan 2024 Siyang Song, Micol Spitale, Cheng Luo, Cristina Palmero, German Barquero, Hengde Zhu, Sergio Escalera, Michel Valstar, Tobias Baur, Fabien Ringeval, Elisabeth André, Hatice Gunes

In dyadic interactions, humans communicate their intentions and state of mind using verbal and non-verbal cues, where multiple different facial reactions might be appropriate in response to a specific speaker behaviour.

REACT2023: the first Multi-modal Multiple Appropriate Facial Reaction Generation Challenge

1 code implementation • 11 Jun 2023 Siyang Song, Micol Spitale, Cheng Luo, German Barquero, Cristina Palmero, Sergio Escalera, Michel Valstar, Tobias Baur, Fabien Ringeval, Elisabeth André, Hatice Gunes

The Multi-modal Multiple Appropriate Facial Reaction Generation Challenge (REACT2023) is the first competition event focused on evaluating multimedia processing and machine learning techniques for generating human-appropriate facial reactions in various dyadic interaction scenarios, with all participants competing strictly under the same conditions.

ReactFace: Multiple Appropriate Facial Reaction Generation in Dyadic Interactions

1 code implementation • 25 May 2023 Cheng Luo, Siyang Song, Weicheng Xie, Micol Spitale, Linlin Shen, Hatice Gunes

ReactFace generates multiple different but appropriate photo-realistic human facial reactions by (i) learning an appropriate facial reaction distribution representing multiple appropriate facial reactions; and (ii) synchronizing the generated facial reactions with the speaker's verbal and non-verbal behaviours at each time stamp, resulting in realistic 2D facial reaction sequences.
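
To make the two ingredients above concrete, here is a minimal, hedged sketch of the general idea only (not the ReactFace architecture or its code release): reaction features are drawn from a speaker-conditioned, per-frame Gaussian, so repeated sampling yields multiple different yet time-aligned reactions. The feature dimensions and the linear "encoder" weights are hypothetical stand-ins for a trained model.

```python
# Minimal sketch (not the ReactFace architecture): sample several different
# listener reactions from a learned, speaker-conditioned Gaussian per frame.
# Feature dimensions and the linear "encoder" weights below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
T, D_SPK, D_REACT = 50, 16, 8        # frames, speaker-feature dim, reaction-feature dim

speaker_feats = rng.normal(size=(T, D_SPK))          # per-frame speaker behaviour
W_mu = rng.normal(scale=0.1, size=(D_SPK, D_REACT))  # stand-in for a trained encoder
W_logvar = rng.normal(scale=0.1, size=(D_SPK, D_REACT))

def sample_reaction(speaker_feats, rng):
    """Draw one time-aligned reaction sequence from the per-frame distribution."""
    mu = speaker_feats @ W_mu                       # frame-wise mean, synced to the speaker
    std = np.exp(0.5 * speaker_feats @ W_logvar)    # frame-wise spread
    return mu + std * rng.normal(size=mu.shape)

# Multiple different but (by construction) plausible reactions to the same speaker input
reactions = [sample_reaction(speaker_feats, rng) for _ in range(3)]
print(reactions[0].shape)  # (50, 8): one reaction frame per speaker frame
```

Because every sample shares the same per-frame conditioning on the speaker features, the generated sequences stay synchronized with the speaker while still differing from one another.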

Reversible Graph Neural Network-based Reaction Distribution Learning for Multiple Appropriate Facial Reactions Generation

1 code implementation • 24 May 2023 Tong Xu, Micol Spitale, Hao Tang, Lu Liu, Hatice Gunes, Siyang Song

We therefore approach this problem by generating a distribution of the listener's appropriate facial reactions rather than multiple individual reactions, i.e., the 'many' appropriate facial reaction labels are summarised as 'one' distribution label during training.
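
The following is a hedged sketch of the "many labels to one distribution label" idea only, not the paper's reversible-GNN model: the set of appropriate reaction sequences for a speaker clip is summarised as a frame-wise Gaussian (mean and standard deviation), which then serves as the training target. Array shapes are hypothetical.

```python
# Minimal sketch of the "many labels -> one distribution label" idea (not the
# paper's reversible-GNN model): summarise a set of appropriate reaction
# sequences as a per-frame Gaussian used as the training target.
import numpy as np

rng = np.random.default_rng(0)
N, T, D = 5, 50, 8   # appropriate reactions per speaker clip, frames, feature dim
appropriate_reactions = rng.normal(size=(N, T, D))   # placeholder labels

# "One" distribution label: frame-wise mean and spread over the N labels
dist_label = {
    "mean": appropriate_reactions.mean(axis=0),   # (T, D)
    "std": appropriate_reactions.std(axis=0),     # (T, D)
}

# A model trained against dist_label predicts a distribution per frame rather
# than regressing toward any single "ground-truth" reaction.
print(dist_label["mean"].shape, dist_label["std"].shape)
```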

Multiple Appropriate Facial Reaction Generation in Dyadic Interaction Settings: What, Why and How?

1 code implementation • 13 Feb 2023 Siyang Song, Micol Spitale, Yiming Luo, Batuhan Bal, Hatice Gunes

However, none attempted to automatically generate multiple appropriate reactions in the context of dyadic interactions and evaluate the appropriateness of those reactions using objective measures.
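
As one illustration of what an objective measure could look like (this is an assumption for exposition, not the evaluation protocol defined in the paper), a generated reaction can be scored by its distance to the closest member of the set of appropriate ground-truth reactions; lower is better. The function name and array shapes are hypothetical.

```python
# Hedged sketch of one possible objective appropriateness measure (not the
# paper's or the challenge's official metrics): distance from a generated
# reaction to the nearest appropriate ground-truth reaction.
import numpy as np

def appropriateness_distance(generated, appropriate_set):
    """Mean-squared distance from `generated` (T, D) to its nearest
    neighbour in `appropriate_set` (N, T, D); lower is better."""
    dists = ((appropriate_set - generated[None]) ** 2).mean(axis=(1, 2))
    return dists.min()

rng = np.random.default_rng(0)
generated = rng.normal(size=(50, 8))
appropriate_set = rng.normal(size=(5, 50, 8))
print(appropriateness_distance(generated, appropriate_set))
```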

Modeling User Empathy Elicited by a Robot Storyteller

1 code implementation • 29 Jul 2021 Leena Mathur, Micol Spitale, Hao Xi, Jieyun Li, Maja J Matarić

Our research informs and motivates future development of empathy perception models that can be leveraged by virtual and robotic agents during human-machine interactions.

Toward Automated Generation of Affective Gestures from Text: A Theory-Driven Approach

no code implementations • 4 Mar 2021 Micol Spitale, Maja J Matarić

Communication in both human-human and human-robot interaction (HRI) contexts consists of verbal (speech-based) and non-verbal (facial expressions, eye gaze, gesture, body pose, etc.)

Tasks: Sentiment Analysis • Robotics
