no code implementations • 3 Dec 2024 • Mingyi Shi, Dafei Qin, Leo Ho, Zhouyingcheng Liao, Yinghao Huang, Junichi Yamagishi, Taku Komura
To the best of our knowledge, this is the first system capable of generating interactive full-body motions for two characters from speech in an online manner.
no code implementations • 19 May 2024 • Yinghao Huang, Leo Ho, Dafei Qin, Mingyi Shi, Taku Komura
We address the problem of accurately capturing and expressively modelling the interactive behaviors that occur between two people in daily scenarios.
no code implementations • 26 Sep 2022 • Yinghao Huang, Omid Taheri, Michael J. Black, Dimitrios Tzionas
With this method we capture the InterCap dataset, which contains 10 subjects (5 males and 5 females) interacting with 10 objects of various sizes and affordances, including contact with the hands or feet.
1 code implementation • 10 Oct 2018 • Yinghao Huang, Manuel Kaufmann, Emre Aksan, Michael J. Black, Otmar Hilliges, Gerard Pons-Moll
To learn from sufficient data, we synthesize IMU data from motion capture datasets.
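The idea of synthesizing IMU data from motion capture can be sketched as follows: given per-frame sensor positions and orientations from mocap, the accelerometer reading is the second derivative of position minus gravity, rotated into the sensor frame, and the gyroscope reading is the angular velocity recovered from consecutive orientations. This is a minimal illustrative sketch, not the paper's actual pipeline; the function name and interface are hypothetical.

```python
import numpy as np

def synthesize_imu(positions, rotations, dt, g=np.array([0.0, 0.0, -9.81])):
    """Hypothetical sketch: synthesize IMU readings from mocap trajectories.

    positions: (T, 3) sensor positions in the world frame.
    rotations: (T, 3, 3) sensor-to-world rotation matrices.
    Returns (T-2, 3) accelerometer and (T-1, 3) gyroscope readings,
    both expressed in the sensor's local frame.
    """
    # Linear acceleration via central finite differences (world frame).
    acc_world = (positions[2:] - 2.0 * positions[1:-1] + positions[:-2]) / dt**2
    # An accelerometer measures specific force (acceleration minus gravity)
    # in the sensor frame: a_sensor = R^T (a_world - g).
    R = rotations[1:-1]  # aligned with the central-difference frames
    acc_sensor = np.einsum("tji,tj->ti", R, acc_world - g)

    # Relative rotation between consecutive frames: dR = R_t^T R_{t+1}.
    dR = np.einsum("tij,tik->tjk", rotations[:-1], rotations[1:])
    # Axis-angle log map of each dR gives angular velocity * dt.
    cos_angle = (np.trace(dR, axis1=1, axis2=2) - 1.0) / 2.0
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    axis = np.stack([dR[:, 2, 1] - dR[:, 1, 2],
                     dR[:, 0, 2] - dR[:, 2, 0],
                     dR[:, 1, 0] - dR[:, 0, 1]], axis=1)
    norm = np.linalg.norm(axis, axis=1, keepdims=True)
    axis = np.where(norm > 1e-8, axis / np.maximum(norm, 1e-8), axis)
    gyro = axis * angle[:, None] / dt
    return acc_sensor, gyro
```

As a sanity check, a stationary sensor with identity orientation should report zero angular velocity and an upward specific force equal to gravity's magnitude.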
no code implementations • 24 Jul 2017 • Yinghao Huang, Federica Bogo, Christoph Lassner, Angjoo Kanazawa, Peter V. Gehler, Ijaz Akhter, Michael J. Black
Existing marker-less motion capture methods often assume known backgrounds, static cameras, and sequence-specific motion priors, assumptions that narrow their applicability.