Search Results for author: Daniel Holden

Found 2 papers, 1 paper with code

ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech

1 code implementation · 15 Sep 2022 · Saeed Ghorbani, Ylva Ferstl, Daniel Holden, Nikolaus F. Troje, Marc-André Carbonneau

In a series of experiments, we first demonstrate the flexibility and generalizability of our model to new speakers and styles.

Gesture Generation

Scanning and animating characters dressed in multiple-layer garments

no code implementations · 9 May 2017 · Pengpeng Hu, Taku Komura, Daniel Holden, Yueqi Zhong

In this paper, we propose a novel scanning-based solution for modeling and animating characters wearing multiple layers of clothes.
