Search Results for author: Catherine Pelachaud

Found 22 papers, 2 papers with code

Annotating Interruption in Dyadic Human Interaction

no code implementations • LREC 2022 • Liu Yang, Catherine Achard, Catherine Pelachaud

Integrating existing interruption and turn-switch classification methods, we propose a new annotation schema that labels different types of interruption by timeliness, switch accomplishment, and speech-content level.
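As a minimal sketch of what such a schema might look like in practice (the field names and label values below are assumptions for illustration, not the authors' actual annotation labels), a single interruption annotation could combine the three dimensions:

```python
from dataclasses import dataclass

# Hypothetical annotation record combining the three dimensions the
# schema describes; concrete label values here are illustrative only.
@dataclass
class InterruptionAnnotation:
    timeliness: str            # e.g. "early", "at_transition", "late"
    switch_accomplished: bool  # did the interrupter take the floor?
    content_level: str         # e.g. "cooperative", "competitive"

ann = InterruptionAnnotation("early", True, "competitive")
print(ann.switch_accomplished)  # → True
```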

META4: Semantically-Aligned Generation of Metaphoric Gestures Using Self-Supervised Text and Speech Representation

1 code implementation • 9 Nov 2023 • Mireille Fares, Catherine Pelachaud, Nicolas Obin

Our approach is the first method for generating speech-driven metaphoric gestures while leveraging the potential of Image Schemas.

Seeing and hearing what has not been said; A multimodal client behavior classifier in Motivational Interviewing with interpretable fusion

no code implementations • 25 Sep 2023 • Lucie Galland, Catherine Pelachaud, Florian Pecune

To evaluate the quality of an MI conversation, client utterances can be classified using the MISC code as either change talk, sustain talk, or follow/neutral talk.
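The abstract names the three MISC client-talk categories. A minimal sketch of how per-session label proportions could be tallied from such classifications (the function and variable names are assumptions, not part of the authors' pipeline):

```python
from collections import Counter

# The three MISC client-talk categories named in the abstract.
MISC_LABELS = ("change_talk", "sustain_talk", "follow_neutral")

def talk_proportions(labels):
    """Share of each MISC category among a session's client utterances."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {lab: counts.get(lab, 0) / total for lab in MISC_LABELS}

session = ["change_talk", "sustain_talk", "change_talk", "follow_neutral"]
print(talk_proportions(session)["change_talk"])  # → 0.5
```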

Decision Making

AMII: Adaptive Multimodal Inter-personal and Intra-personal Model for Adapted Behavior Synthesis

no code implementations • 18 May 2023 • Jieyeon Woo, Mireille Fares, Catherine Pelachaud, Catherine Achard

We propose AMII, a novel approach to synthesize adaptive facial gestures for SIAs while interacting with Users and acting interchangeably as a speaker or as a listener.

Representation Learning of Image Schema

no code implementations • 17 Jul 2022 • Fajrian Yunus, Chloé Clavel, Catherine Pelachaud

Therefore, after obtaining the vector representation of the image schemas, we calculate the distances between those vectors.
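The distance computation the abstract describes can be sketched as follows; this is an illustrative cosine-distance implementation with toy vectors, not the paper's actual representation or metric (which the snippet does not specify):

```python
import math

def cosine_distance(u, v):
    """1 minus the cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Toy embeddings standing in for learned image-schema representations.
containment = [1.0, 0.0, 1.0]
path = [0.9, 0.1, 1.1]
print(round(cosine_distance(containment, path), 3))  # → 0.007
```

Nearby vectors (small distance) would indicate image schemas the learned representation treats as semantically similar.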

Representation Learning

Transformer Network for Semantically-Aware and Speech-Driven Upper-Face Generation

1 code implementation • 9 Oct 2021 • Mireille Fares, Catherine Pelachaud, Nicolas Obin

We propose a semantically-aware, speech-driven model to generate expressive and natural upper-facial and head motion for Embodied Conversational Agents (ECAs).

Face Generation

The ISO Standard for Dialogue Act Annotation, Second Edition

no code implementations • LREC 2020 • Harry Bunt, Volha Petukhova, Emer Gilmartin, Catherine Pelachaud, Alex Fang, Simon Keizer, Laurent Prévot

ISO standard 24617-2 for dialogue act annotation, established in 2012, has in the past few years been used both in corpus annotation and in the design of components for spoken and multimodal dialogue systems.

Is Two Better than One? Effects of Multiple Agents on User Persuasion

no code implementations • 10 Apr 2019 • Reshmashree B. Kantharaju, Dominic De Franco, Alison Pease, Catherine Pelachaud

In this paper, we present an evaluation study focused on understanding the effects of multiple agents on user persuasion.

Emergence of synchrony in an Adaptive Interaction Model

no code implementations • 18 Jun 2015 • Kevin Sanlaville, Gérard Assayag, Frédéric Bevilacqua, Catherine Pelachaud

In a Human-Computer Interaction context, we aim to elaborate an adaptive and generic interaction model in two different use cases: Embodied Conversational Agents and Creative Musical Agents for musical improvisation.

Emilya: Emotional body expression in daily actions database

no code implementations • LREC 2014 • Nesrine Fourati, Catherine Pelachaud

In this paper, we describe our new database of emotional body expression in daily actions, where 11 actors express 8 emotions in 7 actions.

Expressing social attitudes in virtual agents for social training games

no code implementations • 20 Feb 2014 • Nicolas Sabouret, Hazaël Jones, Magalie Ochs, Mathieu Chollet, Catherine Pelachaud

In this paper, we propose a model of social attitudes that enables a virtual agent to reason about the appropriate social attitude to express during the interaction with a user, given the course of the interaction as well as the emotions, mood, and personality of the agent.
