Search Results for author: Michael Neff

Found 10 papers, 1 paper with code

AdaptNet: Policy Adaptation for Physics-Based Character Control

1 code implementation • 30 Sep 2023 • Pei Xu, Kaixiang Xie, Sheldon Andrews, Paul G. Kry, Michael Neff, Morgan McGuire, Ioannis Karamouzas, Victor Zordan

The technique is shown to be effective for adapting existing physics-based controllers to a wide range of new styles for locomotion, new task targets, changes in character morphology and extensive changes in environment.
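The listing snippet states what the adaptation achieves but not how. As a minimal illustration of the general idea of adapting an existing pretrained controller to a new style or task (not AdaptNet's actual architecture, which is not described here), the PyTorch sketch below freezes a pretrained policy and trains only a small residual head; the names ResidualAdapter, obs_dim and act_dim are assumptions for the example.

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Wrap a frozen, pretrained control policy and learn a small
    residual correction to its actions for a new style or task."""

    def __init__(self, pretrained_policy: nn.Module, obs_dim: int, act_dim: int):
        super().__init__()
        self.base = pretrained_policy
        for p in self.base.parameters():          # keep the original controller fixed
            p.requires_grad_(False)
        self.adapter = nn.Sequential(             # small trainable head
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            base_action = self.base(obs)
        return base_action + self.adapter(obs)    # base behaviour plus learned offset
```

In such a setup only the adapter's parameters would be optimized (e.g. with reinforcement learning or a style-imitation objective), leaving the original skills of the pretrained controller intact.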

A Comprehensive Review of Data-Driven Co-Speech Gesture Generation

no code implementations • 13 Jan 2023 • Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter, Michael Neff

Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech, in interaction with other speakers, and in the environment; performing gesture evaluation; and integrating gesture synthesis into applications.

Gesture Generation

It's A Match! Gesture Generation Using Expressive Parameter Matching

no code implementations • 4 Mar 2021 • Ylva Ferstl, Michael Neff, Rachel McDonnell

Automatic gesture generation from speech generally relies on implicit modelling of the nondeterministic speech-gesture relationship and can result in averaged motion lacking defined form.

Gesture Generation • Human-Computer Interaction
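The paper above proposes gesture generation via expressive parameter matching. As a hedged sketch of the matching idea suggested by the title (not the authors' exact procedure), the NumPy snippet below retrieves the pre-recorded gesture whose expressive parameters are closest to values predicted from speech; all names and the nearest-neighbour criterion are illustrative assumptions.

```python
import numpy as np

def match_gesture(predicted_params: np.ndarray,
                  database_params: np.ndarray,
                  database_clips: list):
    """Nearest-neighbour lookup: return the database gesture clip whose
    expressive parameters (e.g. speed, size) best match the values
    predicted from the speech signal."""
    dists = np.linalg.norm(database_params - predicted_params, axis=1)
    return database_clips[int(np.argmin(dists))]

# predicted_params: shape (P,), one value per expressive parameter
# database_params:  shape (N, P), parameters of N pre-recorded gestures
# database_clips:   list of N motion clips aligned with database_params
```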

Understanding the Predictability of Gesture Parameters from Speech and their Perceptual Importance

no code implementations • 2 Oct 2020 • Ylva Ferstl, Michael Neff, Rachel McDonnell

We determine a number of parameters characterizing gesture, such as speed and gesture size, and explore their relationship to the speech signal in a two-fold manner.

Video Summarization
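The abstract above mentions parameters characterizing gesture, such as speed and gesture size. As a minimal sketch of what such parameters might look like in code (the paper defines its own parameter set; these concrete definitions are assumptions for illustration), the snippet below computes both from a wrist trajectory sampled at a fixed frame rate.

```python
import numpy as np

def gesture_parameters(wrist_positions: np.ndarray, fps: float = 30.0) -> dict:
    """Compute two simple descriptors of a gesture from a wrist trajectory
    of shape (T, 3): mean speed and spatial extent ("size")."""
    velocities = np.diff(wrist_positions, axis=0) * fps           # (T-1, 3), units per second
    speed = float(np.linalg.norm(velocities, axis=1).mean())      # average wrist speed
    size = float(np.linalg.norm(wrist_positions.max(axis=0)
                                - wrist_positions.min(axis=0)))   # bounding-box diagonal
    return {"speed": speed, "size": size}
```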

Storytelling Agents with Personality and Adaptivity

no code implementations • 4 Sep 2017 • Zhichao Hu, Marilyn A. Walker, Michael Neff, Jean E. Fox Tree

Our results show that subjects are able to perceive the intended variation in extraversion between different virtual agents, independently of the story they are telling and the gender of the agent.

A Verbal and Gestural Corpus of Story Retellings to an Expressive Embodied Virtual Character

no code implementations • LREC 2016 • Jackson Tolins, Kris Liu, Michael Neff, Marilyn Walker, Jean Fox Tree

We used a novel data collection method where an agent presented story components in installments, which the human would then retell to the agent.

A Corpus of Gesture-Annotated Dialogues for Monologue-to-Dialogue Generation from Personal Narratives

no code implementations • LREC 2016 • Zhichao Hu, Michelle Dick, Chung-Ning Chang, Kevin Bowden, Michael Neff, Jean Fox Tree, Marilyn Walker

This paper presents a new corpus, the Story Dialogue with Gestures (SDG) corpus, consisting of 50 personal narratives regenerated as dialogues, complete with annotations of gesture placement and accompanying gesture forms.

Dialogue Generation

A Multimodal Motion-Captured Corpus of Matched and Mismatched Extravert-Introvert Conversational Pairs

no code implementations • LREC 2016 • Jackson Tolins, Kris Liu, Yingying Wang, Jean E. Fox Tree, Marilyn Walker, Michael Neff

This paper presents a new corpus, the Personality Dyads Corpus, consisting of multimodal data for three conversations between three personality-matched, two-person dyads (a total of 9 separate dialogues).
