Search Results for author: Taras Kucherenko

Found 15 papers, 9 papers with code

Evaluating gesture generation in a large-scale open challenge: The GENEA Challenge 2022

no code implementations · 15 Mar 2023 · Taras Kucherenko, Pieter Wolfert, Youngwoo Yoon, Carla Viegas, Teodor Nikolov, Mihail Tsakov, Gustav Eje Henter

For each tier, we evaluated both the human-likeness of the gesture motion and its appropriateness for the specific speech signal.

Gesture Generation

A Comprehensive Review of Data-Driven Co-Speech Gesture Generation

no code implementations · 13 Jan 2023 · Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter, Michael Neff

Finally, we identify key research challenges in gesture generation, including data availability and quality; producing human-like motion; grounding the gesture in the co-occurring speech in interaction with other speakers, and in the environment; performing gesture evaluation; and integration of gesture synthesis into applications.

Gesture Generation

The GENEA Challenge 2022: A large evaluation of data-driven co-speech gesture generation

3 code implementations · 22 Aug 2022 · Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Carla Viegas, Teodor Nikolov, Mihail Tsakov, Gustav Eje Henter

On the other hand, all synthetic motion is found to be vastly less appropriate for the speech than the original motion-capture recordings.

Gesture Generation

Generating coherent spontaneous speech and gesture from text

no code implementations · 14 Jan 2021 · Simon Alexanderson, Éva Székely, Gustav Eje Henter, Taras Kucherenko, Jonas Beskow

In contrast to previous approaches for joint speech-and-gesture generation, we generate full-body gestures from speech synthesis trained on recordings of spontaneous speech from the same person as the motion-capture data.

Gesture Generation · Speech Synthesis

Moving fast and slow: Analysis of representations and post-processing in speech-driven automatic gesture generation

1 code implementation · 16 Jul 2020 · Taras Kucherenko, Dai Hasegawa, Naoshi Kaneko, Gustav Eje Henter, Hedvig Kjellström

We provide an analysis of different representations for the input (speech) and the output (motion) of the network by both objective and subjective evaluations.

Gesture Generation · Representation Learning

Let's Face It: Probabilistic Multi-modal Interlocutor-aware Generation of Facial Gestures in Dyadic Settings

1 code implementation · 11 Jun 2020 · Patrik Jonell, Taras Kucherenko, Gustav Eje Henter, Jonas Beskow

Our contributions are: a) a method for feature extraction from multi-party video and speech recordings, resulting in a representation that allows for independent control and manipulation of expression and speech articulation in a 3D avatar; b) an extension to MoGlow, a recent motion-synthesis method based on normalizing flows, to also take multi-modal signals from the interlocutor as input and subsequently output interlocutor-aware facial gestures; and c) a subjective evaluation assessing the use and relative importance of the input modalities.

Motion Synthesis

Analyzing Input and Output Representations for Speech-Driven Gesture Generation

1 code implementation · arXiv 2019 · Taras Kucherenko, Dai Hasegawa, Gustav Eje Henter, Naoshi Kaneko, Hedvig Kjellström

We evaluate different representation sizes in order to find the most effective dimensionality for the representation.

Gesture Generation · Human-Computer Interaction · I.2.6; I.5.1; J.4

A Neural Network Approach to Missing Marker Reconstruction in Human Motion Capture

1 code implementation · 7 Mar 2018 · Taras Kucherenko, Jonas Beskow, Hedvig Kjellström

Optical motion capture systems have become a widely used technology in fields such as augmented reality, robotics, and movie production.

3D Reconstruction · Missing Markers Reconstruction

Machine Learning and Social Robotics for Detecting Early Signs of Dementia

no code implementations · 5 Sep 2017 · Patrik Jonell, Joseph Mendelson, Thomas Storskog, Goran Hagman, Per Ostberg, Iolanda Leite, Taras Kucherenko, Olga Mikheeva, Ulrika Akenine, Vesna Jelic, Alina Solomon, Jonas Beskow, Joakim Gustafson, Miia Kivipelto, Hedvig Kjellström

This paper presents the EACare project, an ambitious multi-disciplinary collaboration aiming to develop an embodied system capable of carrying out neuropsychological tests to detect early signs of dementia, e.g., due to Alzheimer's disease.

BIG-bench Machine Learning
