1 code implementation • 15 Sep 2022 • Saeed Ghorbani, Ylva Ferstl, Daniel Holden, Nikolaus F. Troje, Marc-André Carbonneau
In a series of experiments, we first demonstrate the flexibility and generalizability of our model to new speakers and styles.
no code implementations • 4 Mar 2021 • Ylva Ferstl, Michael Neff, Rachel McDonnell
Automatic gesture generation from speech generally relies on implicit modelling of the nondeterministic speech-gesture relationship and can result in averaged motion lacking defined form.
Tasks: Gesture Generation, Human-Computer Interaction
no code implementations • 2 Oct 2020 • Ylva Ferstl, Michael Neff, Rachel McDonnell
We identify a set of parameters characterizing gesture, such as speed and gesture size, and explore their relationship to the speech signal in a twofold manner.