1 code implementation • 30 Sep 2023 • Pei Xu, Kaixiang Xie, Sheldon Andrews, Paul G. Kry, Michael Neff, Morgan McGuire, Ioannis Karamouzas, Victor Zordan
The technique is shown to be effective for adapting existing physics-based controllers to a wide range of new locomotion styles, new task targets, changes in character morphology, and extensive changes in environment.
no code implementations • 13 Jan 2023 • Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter, Michael Neff
Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech, in interaction with other speakers, and in the environment; performing gesture evaluation; and integrating gesture synthesis into applications.
no code implementations • 12 Aug 2021 • Taras Kucherenko, Rajmund Nagy, Michael Neff, Hedvig Kjellström, Gustav Eje Henter
Embodied conversational agents benefit from being able to accompany their speech with gestures.
no code implementations • 28 Jun 2021 • Taras Kucherenko, Rajmund Nagy, Patrik Jonell, Michael Neff, Hedvig Kjellström, Gustav Eje Henter
We propose a new framework for gesture generation, aiming to allow data-driven approaches to produce more semantically rich gestures.
no code implementations • 4 Mar 2021 • Ylva Ferstl, Michael Neff, Rachel McDonnell
Automatic gesture generation from speech generally relies on implicit modelling of the nondeterministic speech-gesture relationship and can result in averaged motion lacking defined form.
Gesture Generation • Human-Computer Interaction
no code implementations • 2 Oct 2020 • Ylva Ferstl, Michael Neff, Rachel McDonnell
We identify a number of parameters characterizing gesture, such as speed and gesture size, and explore their relationship to the speech signal in two ways.
no code implementations • 4 Sep 2017 • Zhichao Hu, Marilyn A. Walker, Michael Neff, Jean E. Fox Tree
Our results show that subjects are able to perceive the intended variation in extraversion between different virtual agents, independently of the story they are telling and the gender of the agent.
no code implementations • LREC 2016 • Jackson Tolins, Kris Liu, Michael Neff, Marilyn Walker, Jean Fox Tree
We used a novel data collection method where an agent presented story components in installments, which the human would then retell to the agent.
no code implementations • LREC 2016 • Zhichao Hu, Michelle Dick, Chung-Ning Chang, Kevin Bowden, Michael Neff, Jean Fox Tree, Marilyn Walker
This paper presents a new corpus, the Story Dialogue with Gestures (SDG) corpus, consisting of 50 personal narratives regenerated as dialogues, complete with annotations of gesture placement and accompanying gesture forms.
no code implementations • LREC 2016 • Jackson Tolins, Kris Liu, Yingying Wang, Jean E. Fox Tree, Marilyn Walker, Michael Neff
This paper presents a new corpus, the Personality Dyads Corpus, consisting of multimodal data for three conversations between each of three personality-matched, two-person dyads (a total of 9 separate dialogues).