no code implementations • 23 Jun 2022 • Abiramy Kuganesan, Shih-Yang Su, James J. Little, Helge Rhodin
Neural Radiance Fields (NeRFs) increase reconstruction detail for novel view synthesis and scene reconstruction, with applications ranging from large static scenes to dynamic human motion.
no code implementations • 3 May 2022 • Shih-Yang Su, Timur Bagautdinov, Helge Rhodin
While a few such approaches exist, they have limited generalization capability and are prone to learning spurious (chance) correlations between irrelevant body parts, resulting in implausible deformations and missing body parts on unseen poses.
no code implementations • NeurIPS 2021 • Shih-Yang Su, Frank Yu, Michael Zollhoefer, Helge Rhodin
We propose a method to learn a generative neural body model from unlabelled monocular videos by extending Neural Radiance Fields (NeRFs).
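As a rough illustration of the idea (not the paper's exact architecture), the sketch below shows a pose-conditioned NeRF query in PyTorch: each 3D sample is re-expressed relative to the body joints before an MLP predicts color and density. The joint encoding, layer sizes, and module names are assumptions for exposition only.

```python
import torch
import torch.nn as nn

class BodyNeRF(nn.Module):
    """Illustrative pose-conditioned NeRF: a query point is re-expressed
    relative to body joints, then an MLP predicts density and color.
    Dimensions and the joint encoding are placeholders, not the paper's design."""
    def __init__(self, n_joints=24, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * n_joints, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),   # outputs (r, g, b, sigma)
        )

    def forward(self, points, joint_positions):
        # points: (N, 3) world-space samples; joint_positions: (J, 3) posed joints.
        rel = points[:, None, :] - joint_positions[None, :, :]   # (N, J, 3)
        out = self.mlp(rel.flatten(1))
        rgb, sigma = torch.sigmoid(out[:, :3]), torch.relu(out[:, 3])
        return rgb, sigma
```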
1 code implementation • CVPR 2020 • Meng-Li Shih, Shih-Yang Su, Johannes Kopf, Jia-Bin Huang
We propose a method for converting a single RGB-D input image into a 3D photo: a multi-layer representation for novel view synthesis that contains hallucinated color and depth structures in regions occluded in the original view.
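For context, a multi-layer 3D photo of this kind is commonly stored as a layered depth image, where each pixel can hold several color/depth samples ordered in depth so that occluded surfaces are retained. The sketch below is a minimal, hypothetical version of that data structure, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class LayeredPixel:
    """One pixel column in a layered depth image: multiple (color, depth)
    samples ordered front to back, so occluded surfaces can be stored."""
    colors: List[np.ndarray] = field(default_factory=list)   # each an RGB triple
    depths: List[float] = field(default_factory=list)

def rgbd_to_single_layer(rgb: np.ndarray, depth: np.ndarray):
    """Turn an H x W RGB-D image into a one-layer representation; a 3D-photo
    pipeline would then add layers with inpainted color/depth behind depth edges."""
    h, w, _ = rgb.shape
    return [[LayeredPixel([rgb[y, x]], [float(depth[y, x])]) for x in range(w)]
            for y in range(h)]
```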
no code implementations • 2 Oct 2019 • Shih-Yang Su, Hossein Hajimirsadeghi, Greg Mori
Generating graph structures is a challenging problem due to the diverse representations and complex dependencies among nodes.
no code implementations • NeurIPS 2018 • Zhang-Wei Hong, Tzu-Yun Shann, Shih-Yang Su, Yi-Hsiang Chang, Chun-Yi Lee
Efficient exploration remains a challenging research problem in reinforcement learning, especially when an environment contains large state spaces, deceptive local optima, or sparse rewards.
no code implementations • 1 Feb 2018 • Zhang-Wei Hong, Chen Yu-Ming, Shih-Yang Su, Tzu-Yun Shann, Yi-Hsiang Chang, Hsuan-Kung Yang, Brian Hsi-Lin Ho, Chih-Chieh Tu, Yueh-Chuan Chang, Tsu-Ching Hsiao, Hsin-Wei Hsiao, Sih-Pin Lai, Chun-Yi Lee
Collecting training data from the physical world is usually time-consuming and even dangerous for fragile robots, and thus, recent advances in robot learning advocate the use of simulators as the training platform.
no code implementations • 21 Dec 2017 • Zhang-Wei Hong, Shih-Yang Su, Tzu-Yun Shann, Yi-Hsiang Chang, Chun-Yi Lee
DPIQN (Deep Policy Inference Q-Network) incorporates the learned policy features as a hidden vector into its own deep Q-network (DQN), so that it can predict better Q values for the controllable agents than state-of-the-art deep reinforcement learning models.
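As a hedged sketch of this idea, the PyTorch module below fuses an auxiliary policy-inference branch (whose hidden features summarize another agent's policy) into a Q-value head. The layer sizes, the fusion by concatenation, and all names are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class PolicyInferenceQNet(nn.Module):
    """Minimal DPIQN-style sketch: an auxiliary branch infers another agent's
    policy, and its hidden features are fed into the Q-value stream."""
    def __init__(self, obs_dim, n_actions, n_other_actions, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # Policy-inference branch: predicts the other agent's action distribution.
        self.policy_feat = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_other_actions)
        # Q branch conditioned on the inferred policy features.
        self.q_head = nn.Sequential(
            nn.Linear(hidden + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        z = self.encoder(obs)
        h = self.policy_feat(z)                     # learned policy features
        other_policy_logits = self.policy_head(h)   # auxiliary prediction target
        q_values = self.q_head(torch.cat([z, h], dim=-1))
        return q_values, other_policy_logits
```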