Search Results for author: Youngwoo Yoon

Found 8 papers, 6 papers with code

LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents

1 code implementation • 13 Feb 2024 • Jae-Woo Choi, Youngwoo Yoon, Hyobin Ong, Jaehong Kim, Minsu Jang

Large language models (LLMs) have recently received considerable attention as alternative solutions for task planning.

Benchmarking • Model Selection

Evaluating gesture generation in a large-scale open challenge: The GENEA Challenge 2022

no code implementations • 15 Mar 2023 • Taras Kucherenko, Pieter Wolfert, Youngwoo Yoon, Carla Viegas, Teodor Nikolov, Mihail Tsakov, Gustav Eje Henter

For each tier, we evaluated both the human-likeness of the gesture motion and its appropriateness for the specific speech signal.

Gesture Generation

Co-Speech Gesture Synthesis using Discrete Gesture Token Learning

no code implementations • 4 Mar 2023 • Shuhong Lu, Youngwoo Yoon, Andrew Feng

Synthesizing realistic co-speech gestures is an important and yet unsolved problem for creating believable motions that can drive a humanoid robot to interact and communicate with human users.

The GENEA Challenge 2022: A large evaluation of data-driven co-speech gesture generation

3 code implementations • 22 Aug 2022 • Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Carla Viegas, Teodor Nikolov, Mihail Tsakov, Gustav Eje Henter

On the other hand, all synthetic motion is found to be vastly less appropriate for the speech than the original motion-capture recordings.

Gesture Generation

Evaluating the Quality of a Synthesized Motion with the Fréchet Motion Distance

1 code implementation • 26 Apr 2022 • Antoine Maiorca, Youngwoo Yoon, Thierry Dutoit

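For reference, Fréchet-style motion metrics such as the one named in the entry above are typically computed as the Fréchet distance between two Gaussians fitted to real and generated motion features, in the same spirit as FID. Below is a minimal NumPy/SciPy sketch under that assumption; the function name and the (omitted) feature-extraction step are illustrative and not taken from the paper's released code.

```python
# Sketch: Fréchet distance between Gaussians fitted to motion features.
# Feature extraction (e.g. a pretrained motion autoencoder) is assumed
# to have been run already and is not shown.
import numpy as np
from scipy.linalg import sqrtm

def frechet_motion_distance(feats_real, feats_gen):
    """feats_*: (n_samples, feat_dim) arrays of motion features."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):  # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```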

Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity

2 code implementations • 4 Sep 2020 • Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim, Geehyuk Lee

In this paper, we present an automatic gesture generation model that uses the multimodal context of speech text, audio, and speaker identity to reliably generate gestures.

Gesture Generation
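
The trimodal model described in the entry above conditions gesture synthesis on speech text, audio, and speaker identity. A minimal PyTorch-style sketch of one plausible concatenation-based fusion is given below; the module names, feature dimensions, and the assumption of frame-aligned text and audio are illustrative and may differ from the paper's actual architecture.

```python
# Sketch: encode text, audio, and a speaker-identity embedding, concatenate
# them per time step, and decode a pose sequence with a GRU.
import torch
import torch.nn as nn

class TrimodalGestureGenerator(nn.Module):
    def __init__(self, vocab_size=2000, n_speakers=100, audio_dim=128,
                 hidden=256, pose_dim=27):
        super().__init__()
        self.text_emb = nn.Embedding(vocab_size, hidden)      # word tokens
        self.audio_enc = nn.Linear(audio_dim, hidden)         # per-frame audio features
        self.speaker_emb = nn.Embedding(n_speakers, hidden)   # identity -> style vector
        self.decoder = nn.GRU(3 * hidden, hidden, batch_first=True)
        self.pose_out = nn.Linear(hidden, pose_dim)

    def forward(self, text_ids, audio_feats, speaker_ids):
        # text_ids: (B, T), audio_feats: (B, T, audio_dim), speaker_ids: (B,)
        t = self.text_emb(text_ids)                       # (B, T, H)
        a = self.audio_enc(audio_feats)                   # (B, T, H)
        s = self.speaker_emb(speaker_ids)                 # (B, H)
        s = s.unsqueeze(1).expand(-1, t.size(1), -1)      # broadcast over time
        h, _ = self.decoder(torch.cat([t, a, s], dim=-1))
        return self.pose_out(h)                           # (B, T, pose_dim)
```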
