no code implementations • LREC 2022 • Liu Yang, Catherine Achard, Catherine Pelachaud
Integrating existing interruption and turn-switch classification methods, we propose a new annotation schema for annotating different types of interruptions at the levels of timeliness, switch accomplishment, and speech content.
1 code implementation • 9 Nov 2023 • Mireille Fares, Catherine Pelachaud, Nicolas Obin
Our approach is the first method for generating speech-driven metaphoric gestures while leveraging the potential of Image Schemas.
no code implementations • 25 Sep 2023 • Lucie Galland, Catherine Pelachaud, Florian Pecune
To evaluate the quality of an MI conversation, client utterances can be classified using the MISC code as either change talk, sustain talk, or follow/neutral talk.
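The three-way MISC labelling described above can be illustrated with a minimal sketch. This is a toy keyword heuristic, not the paper's classifier; the cue phrases are invented for illustration.

```python
# Toy MISC-style utterance labelling. The cue phrases below are hypothetical
# examples, not drawn from the MISC coding manual or the paper's model.
CHANGE_CUES = ("i want to quit", "i should cut down", "i need to change")
SUSTAIN_CUES = ("i like smoking", "i can't stop", "it helps me relax")

def misc_label(utterance: str) -> str:
    """Map a client utterance to one of the three MISC-style categories."""
    text = utterance.lower()
    if any(cue in text for cue in CHANGE_CUES):
        return "change talk"
    if any(cue in text for cue in SUSTAIN_CUES):
        return "sustain talk"
    return "follow/neutral talk"
```

A real system would replace the keyword lookup with a trained classifier, but the three-way output space is the same.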
no code implementations • 8 Aug 2023 • Mireille Fares, Catherine Pelachaud, Nicolas Obin
Behavior expressivity style is viewed here as the qualitative properties of behaviors.
no code implementations • 22 May 2023 • Mireille Fares, Catherine Pelachaud, Nicolas Obin
In this study, we address the importance of modeling behavior style in virtual agents for personalized human-agent interaction.
no code implementations • 18 May 2023 • Jieyeon Woo, Mireille Fares, Catherine Pelachaud, Catherine Achard
We propose AMII, a novel approach to synthesize adaptive facial gestures for SIAs while interacting with users and acting interchangeably as a speaker or as a listener.
no code implementations • 3 Aug 2022 • Mireille Fares, Michele Grimaldi, Catherine Pelachaud, Nicolas Obin
The third goal is to allow zero-shot style transfer for speakers unseen during training, without retraining the model.
no code implementations • 17 Jul 2022 • Fajrian Yunus, Chloé Clavel, Catherine Pelachaud
Therefore, after obtaining the vector representations of the image schemas, we calculate the distances between those vectors.
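The pairwise-distance step can be sketched as follows. This is a minimal illustration assuming cosine distance over dense embeddings; the toy 3-d vectors and schema names are placeholders, not the paper's actual representations.

```python
import math

def cosine_distance(u, v):
    """Cosine distance (1 - cosine similarity) between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Hypothetical image-schema embeddings (toy dimensions for illustration only).
schemas = {
    "CONTAINER": [0.9, 0.1, 0.2],
    "PATH":      [0.1, 0.8, 0.3],
    "BALANCE":   [0.2, 0.7, 0.4],
}

# Compute the distance between every pair of schema vectors.
names = list(schemas)
pairwise = {
    (a, b): cosine_distance(schemas[a], schemas[b])
    for i, a in enumerate(names)
    for b in names[i + 1:]
}
```

With these toy vectors, PATH and BALANCE come out closer to each other than either does to CONTAINER, which is the kind of relative comparison such distances support.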
1 code implementation • 9 Oct 2021 • Mireille Fares, Catherine Pelachaud, Nicolas Obin
We propose a semantically aware, speech-driven model to generate expressive and natural upper-facial and head motion for Embodied Conversational Agents (ECAs).
no code implementations • 17 Aug 2020 • Fajrian Yunus, Chloé Clavel, Catherine Pelachaud
Our objective is to predict the timing of gestures according to the acoustics.
no code implementations • LREC 2020 • Harry Bunt, Volha Petukhova, Emer Gilmartin, Catherine Pelachaud, Alex Fang, Simon Keizer, Laurent Prévot
ISO standard 24617-2 for dialogue act annotation, established in 2012, has in the past few years been used both in corpus annotation and in the design of components for spoken and multimodal dialogue systems.
no code implementations • LREC 2020 • Reshmashree Bangalore Kantharaju, Caroline Langlet, Mukesh Barange, Chloé Clavel, Catherine Pelachaud
We also observe that dialogue acts and head nods, by themselves, did not have an impact on the level of cohesion.
no code implementations • 10 Apr 2019 • Reshmashree B. Kantharaju, Dominic De Franco, Alison Pease, Catherine Pelachaud
In this paper, we present an evaluation study focused on understanding the effects of multiple agents on user persuasion.
no code implementations • 18 Jun 2015 • Kevin Sanlaville, Gérard Assayag, Frédéric Bevilacqua, Catherine Pelachaud
In a Human-Computer Interaction context, we aim to elaborate an adaptive and generic interaction model in two different use cases: Embodied Conversational Agents and Creative Musical Agents for musical improvisation.
no code implementations • LREC 2014 • Zoraida Callejas, Brian Ravenet, Magalie Ochs, Catherine Pelachaud
This paper presents an adaptive model of multimodal social behavior for embodied conversational agents.
no code implementations • LREC 2014 • Mathieu Chollet, Magalie Ochs, Catherine Pelachaud
Interpersonal attitudes are expressed by non-verbal behaviors on a variety of different modalities.
no code implementations • LREC 2014 • Nesrine Fourati, Catherine Pelachaud
In this paper, we describe our new database of emotional body expression in daily actions, where 11 actors express 8 emotions in 7 actions.
no code implementations • 20 Feb 2014 • Nicolas Sabouret, Hazaël Jones, Magalie Ochs, Mathieu Chollet, Catherine Pelachaud
In this paper, we propose a model of social attitudes that enables a virtual agent to reason about the appropriate social attitude to express during the interaction with a user, given the course of the interaction as well as the emotions, mood, and personality of the agent.