2 code implementations • 4 Sep 2020 • Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim, Geehyuk Lee
In this paper, we present an automatic gesture generation model that uses the multimodal context of speech text, audio, and speaker identity to reliably generate gestures.
Ranked #2 on Gesture Generation on BEAT
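The entry above describes a trimodal model that fuses speech text, audio, and speaker identity to generate gestures. As a rough, hypothetical sketch of how such multimodal fusion could be wired (all dimensions, the linear "encoders," and the concatenation-based fusion here are illustrative assumptions, not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- illustrative only, not the paper's real dimensions.
T = 10                                # number of output frames
D_TEXT, D_AUDIO, D_ID = 32, 32, 16    # per-modality embedding sizes
D_POSE = 27                           # pose vector size per frame

def encode(features, w):
    """Toy linear 'encoder': project raw features to an embedding."""
    return np.tanh(features @ w)

# Random stand-ins for real per-frame features of each modality.
text_feat  = rng.normal(size=(T, 64))
audio_feat = rng.normal(size=(T, 64))
speaker_id = np.tile(rng.normal(size=(1, 8)), (T, 1))  # constant per speaker

# Per-modality projection weights (learned in a real model).
w_text  = rng.normal(size=(64, D_TEXT))
w_audio = rng.normal(size=(64, D_AUDIO))
w_id    = rng.normal(size=(8, D_ID))

# Fuse the three modalities by concatenating their embeddings per frame.
fused = np.concatenate(
    [encode(text_feat, w_text),
     encode(audio_feat, w_audio),
     encode(speaker_id, w_id)],
    axis=1,
)  # shape (T, D_TEXT + D_AUDIO + D_ID)

# Toy 'decoder': map the fused multimodal context to one pose per frame.
w_dec = rng.normal(size=(fused.shape[1], D_POSE))
poses = fused @ w_dec

print(poses.shape)  # (10, 27)
```

The point of the sketch is only the data flow: each modality is embedded separately, the embeddings are combined into one context per frame, and the combined context drives the pose output.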
5 code implementations • ICRA 2019 • Youngwoo Yoon, Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim, Geehyuk Lee
Co-speech gestures enhance interaction experiences between humans as well as between humans and robots.