2 code implementations • 24 Aug 2023 • Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov, Mihail Tsakov, Gustav Eje Henter
The effect of the interlocutor is even more subtle, with submitted systems at best performing barely above chance.
no code implementations • 15 Mar 2023 • Taras Kucherenko, Pieter Wolfert, Youngwoo Yoon, Carla Viegas, Teodor Nikolov, Mihail Tsakov, Gustav Eje Henter
For each tier, we evaluated both the human-likeness of the gesture motion and its appropriateness for the specific speech signal.
no code implementations • 13 Jan 2023 • Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter, Michael Neff
Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech, in interaction with other speakers, and in the environment; performing gesture evaluation; and integrating gesture synthesis into applications.
3 code implementations • 22 Aug 2022 • Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Carla Viegas, Teodor Nikolov, Mihail Tsakov, Gustav Eje Henter
On the other hand, all synthetic motion is found to be vastly less appropriate for the speech than the original motion-capture recordings.
no code implementations • 12 Aug 2021 • Taras Kucherenko, Rajmund Nagy, Michael Neff, Hedvig Kjellström, Gustav Eje Henter
Embodied conversational agents benefit from being able to accompany their speech with gestures.
no code implementations • 28 Jun 2021 • Taras Kucherenko, Rajmund Nagy, Patrik Jonell, Michael Neff, Hedvig Kjellström, Gustav Eje Henter
We propose a new framework for gesture generation, aiming to allow data-driven approaches to produce more semantically rich gestures.
1 code implementation • 24 Feb 2021 • Rajmund Nagy, Taras Kucherenko, Birger Moell, André Pereira, Hedvig Kjellström, Ulysses Bernardet
To date, end-to-end gesture generation methods have not been evaluated in real-time interaction with users.
no code implementations • 14 Jan 2021 • Simon Alexanderson, Éva Székely, Gustav Eje Henter, Taras Kucherenko, Jonas Beskow
In contrast to previous approaches for joint speech-and-gesture generation, we generate full-body gestures from speech synthesis trained on recordings of spontaneous speech from the same person as the motion-capture data.
1 code implementation • 16 Jul 2020 • Taras Kucherenko, Dai Hasegawa, Naoshi Kaneko, Gustav Eje Henter, Hedvig Kjellström
We provide an analysis of different representations for the input (speech) and the output (motion) of the network by both objective and subjective evaluations.
1 code implementation • 11 Jun 2020 • Patrik Jonell, Taras Kucherenko, Gustav Eje Henter, Jonas Beskow
Our contributions are: a) a method for feature extraction from multi-party video and speech recordings, resulting in a representation that allows for independent control and manipulation of expression and speech articulation in a 3D avatar; b) an extension to MoGlow, a recent motion-synthesis method based on normalizing flows, to also take multi-modal signals from the interlocutor as input and subsequently output interlocutor-aware facial gestures; and c) a subjective evaluation assessing the use and relative importance of the input modalities.
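As a rough illustration of the idea in (b) — not the authors' implementation — a normalizing-flow coupling step can be made interlocutor-aware by concatenating the interlocutor's features into the conditioning network's input, so the flow stays exactly invertible while its output depends on the other party's signals. A minimal numpy sketch, with all parameter names hypothetical:

```python
import numpy as np

def coupling_net(x_a, cond, W, b):
    """Tiny conditioning network mapping (x_a, interlocutor features)
    to a log-scale and a shift. W, b are hypothetical learned parameters."""
    h = np.tanh(np.concatenate([x_a, cond]) @ W + b)
    d = h.shape[0] // 2
    return h[:d], h[d:]  # log-scale, shift

def coupling_forward(x, cond, W, b):
    """Affine coupling: half the dimensions are transformed, conditioned
    on the untouched half plus the interlocutor features."""
    d = x.shape[0] // 2
    x_a, x_b = x[:d], x[d:]
    log_s, t = coupling_net(x_a, cond, W, b)
    return np.concatenate([x_a, x_b * np.exp(log_s) + t])

def coupling_inverse(y, cond, W, b):
    """Exact inverse, recovering x from y given the same conditioning."""
    d = y.shape[0] // 2
    y_a, y_b = y[:d], y[d:]
    log_s, t = coupling_net(y_a, cond, W, b)
    return np.concatenate([y_a, (y_b - t) * np.exp(-log_s)])

rng = np.random.default_rng(0)
dim, cond_dim = 4, 3
W = rng.normal(size=(dim // 2 + cond_dim, dim))
b = np.zeros(dim)
x = rng.normal(size=dim)                 # one frame of motion features
cond = rng.normal(size=cond_dim)         # interlocutor speech/motion features
y = coupling_forward(x, cond, W, b)
x_rec = coupling_inverse(y, cond, W, b)
assert np.allclose(x, x_rec)             # invertibility is preserved
```

Because the scale and shift depend only on the untransformed half and the conditioning vector, the transform remains invertible regardless of what conditioning signals are fed in — which is what lets a flow-based model like MoGlow accept extra multimodal inputs.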
1 code implementation • Computer Graphics Forum 2020 • Simon Alexanderson, Gustav Eje Henter, Taras Kucherenko, Jonas Beskow
In interactive scenarios, systems for generating natural animations on the fly are key to achieving believable and relatable characters.
1 code implementation • 25 Jan 2020 • Taras Kucherenko, Patrik Jonell, Sanne van Waveren, Gustav Eje Henter, Simon Alexanderson, Iolanda Leite, Hedvig Kjellström
During speech, people spontaneously gesticulate, which plays a key role in conveying information.
1 code implementation • arXiv 2019 • Taras Kucherenko, Dai Hasegawa, Gustav Eje Henter, Naoshi Kaneko, Hedvig Kjellström
We evaluate different representation sizes in order to find the most effective dimensionality for the representation.
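One simple proxy for such a dimensionality sweep (a sketch, not the paper's actual representation-learning method) is to measure reconstruction error as a function of bottleneck size, e.g. with a linear projection onto the top-k principal components; the error curve typically flattens once k reaches the data's effective dimensionality:

```python
import numpy as np

def pca_recon_error(X, k):
    """Mean squared reconstruction error when X is projected
    onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    X_hat = (Xc @ Vt[:k].T) @ Vt[:k]
    return float(np.mean((Xc - X_hat) ** 2))

rng = np.random.default_rng(1)
# Synthetic stand-in for motion data: 30-dim observations that
# actually live (up to small noise) in an 8-dim subspace.
latent = rng.normal(size=(500, 8))
mix = rng.normal(size=(8, 30))
X = latent @ mix + 0.01 * rng.normal(size=(500, 30))

errors = {k: pca_recon_error(X, k) for k in (2, 4, 8, 16)}
# Error drops sharply until k reaches the true dimensionality,
# then levels off: going beyond it buys almost nothing.
assert errors[2] > errors[4] > errors[8]
```

Plotting such an error-versus-k curve and picking the knee is a common way to choose a representation size before committing to a more expensive nonlinear model.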
1 code implementation • 7 Mar 2018 • Taras Kucherenko, Jonas Beskow, Hedvig Kjellström
Optical motion capture systems have become a widely used technology in fields such as augmented reality, robotics, and movie production.
no code implementations • 5 Sep 2017 • Patrik Jonell, Joseph Mendelson, Thomas Storskog, Göran Hagman, Per Östberg, Iolanda Leite, Taras Kucherenko, Olga Mikheeva, Ulrika Akenine, Vesna Jelic, Alina Solomon, Jonas Beskow, Joakim Gustafson, Miia Kivipelto, Hedvig Kjellström
This paper presents the EACare project, an ambitious multi-disciplinary collaboration with the aim to develop an embodied system capable of carrying out neuropsychological tests to detect early signs of dementia, e.g., due to Alzheimer's disease.