Search Results for author: Ylva Ferstl

Found 3 papers, 1 paper with code

ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech

1 code implementation • 15 Sep 2022 • Saeed Ghorbani, Ylva Ferstl, Daniel Holden, Nikolaus F. Troje, Marc-André Carbonneau

In a series of experiments, we first demonstrate the flexibility and generalizability of our model to new speakers and styles.

Gesture Generation
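
The abstract describes conditioning gesture generation on a style example, so a short illustration may help. The sketch below shows one plausible way to condition a speech-to-gesture decoder on an embedding of an example motion clip; the module names, feature sizes, and GRU-based architecture are illustrative assumptions, not the authors' implementation (their actual code is linked from the paper page).

```python
# Hypothetical sketch of example-based style conditioning for speech-to-gesture
# generation. All dimensions and layers here are assumptions for illustration.
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Encodes an example motion clip into a fixed-size style embedding."""
    def __init__(self, pose_dim=69, style_dim=64):
        super().__init__()
        self.gru = nn.GRU(pose_dim, style_dim, batch_first=True)

    def forward(self, example_motion):           # (B, T_style, pose_dim)
        _, h = self.gru(example_motion)          # final hidden state summarizes the clip
        return h.squeeze(0)                      # (B, style_dim)

class GestureDecoder(nn.Module):
    """Maps speech features plus a style embedding to a pose sequence."""
    def __init__(self, speech_dim=80, style_dim=64, pose_dim=69, hidden=256):
        super().__init__()
        self.gru = nn.GRU(speech_dim + style_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, speech, style):            # speech: (B, T, speech_dim)
        # Broadcast the style embedding across every speech frame.
        style_seq = style.unsqueeze(1).expand(-1, speech.size(1), -1)
        h, _ = self.gru(torch.cat([speech, style_seq], dim=-1))
        return self.out(h)                       # (B, T, pose_dim)

# Toy usage: condition generation for unseen speech on an unseen style clip.
enc, dec = StyleEncoder(), GestureDecoder()
style = enc(torch.randn(1, 120, 69))             # 4 s style example at 30 fps
poses = dec(torch.randn(1, 300, 80), style)      # mel-like speech features
print(poses.shape)                               # torch.Size([1, 300, 69])
```

Because the style clip is reduced to a single embedding, any new example clip can steer the output without retraining, which is what makes the example-based setup attractive for unseen speakers and styles.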

It's A Match! Gesture Generation Using Expressive Parameter Matching

no code implementations • 4 Mar 2021 • Ylva Ferstl, Michael Neff, Rachel McDonnell

Automatic gesture generation from speech generally relies on implicit modelling of the nondeterministic speech-gesture relationship, which can result in averaged motion lacking defined form.

Gesture Generation • Human-Computer Interaction
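
As a rough illustration of the matching idea in the abstract, predicting expressive parameters from speech and then retrieving a database gesture whose parameters agree, here is a minimal sketch. The three-parameter description and the Euclidean distance are assumptions for illustration, not the paper's actual parameter set or matching criterion.

```python
# Minimal sketch of expressive parameter matching: instead of regressing poses
# directly from speech, predict a small parameter vector and retrieve the
# closest gesture from a motion database. Parameters and distance are assumed.
import numpy as np

def match_gesture(predicted_params, database_params):
    """Return the index of the database gesture closest in parameter space."""
    dists = np.linalg.norm(database_params - predicted_params, axis=1)
    return int(np.argmin(dists))

# Toy database: 100 gesture clips, each described by (speed, size, height).
rng = np.random.default_rng(0)
database = rng.uniform(0.0, 1.0, size=(100, 3))

predicted = np.array([0.7, 0.3, 0.5])   # parameters predicted from speech
print("best match:", match_gesture(predicted, database))
```

Retrieving real recorded clips this way sidesteps the averaging problem the abstract mentions: every output gesture has the defined form of an actual performance.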

Understanding the Predictability of Gesture Parameters from Speech and their Perceptual Importance

no code implementations • 2 Oct 2020 • Ylva Ferstl, Michael Neff, Rachel McDonnell

We determine a number of parameters characterizing gesture, such as speed and gesture size, and explore their relationship to the speech signal in a two-fold manner.

Video Summarization
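
To make "parameters characterizing gesture, such as speed and gesture size" concrete, here is a hypothetical sketch of how such descriptors might be computed from a wrist trajectory; the exact definitions used in the paper are not reproduced here and the formulas below are assumptions.

```python
# Sketch of simple gesture descriptors computed from a (T, 3) wrist
# trajectory in metres. The definitions of "speed" and "size" are assumed.
import numpy as np

def gesture_parameters(wrist_positions, fps=30.0):
    """Compute mean speed and a bounding-box gesture size from a trajectory."""
    velocities = np.diff(wrist_positions, axis=0) * fps        # per-frame velocity, m/s
    speed = np.linalg.norm(velocities, axis=1).mean()          # mean wrist speed
    extent = wrist_positions.max(0) - wrist_positions.min(0)   # bounding box edges
    size = np.linalg.norm(extent)                              # box diagonal as size
    return {"speed": speed, "size": size}

rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(scale=0.01, size=(90, 3)), axis=0)  # 3 s of motion
print(gesture_parameters(traj))
```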
