no code implementations • 8 Feb 2024 • Yi-Ting Pan, Chai-Rong Lee, Shu-Ho Fan, Jheng-Wei Su, Jia-Bin Huang, Yung-Yu Chuang, Hung-Kuo Chu
The entertainment industry relies on 3D visual content to create immersive experiences, but traditional methods for creating textured 3D models can be time-consuming and subjective.
no code implementations • 13 Jan 2023 • Chao-Chen Gao, Cheng-Hsiu Chen, Jheng-Wei Su, Hung-Kuo Chu
Specifically, we take the low-level layout edges estimated from the input panorama as a prior to guide the inpainting model for recovering the global indoor structure.
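As a minimal sketch of how such an edge prior could be injected (assuming a PyTorch setup; the `EdgeGuidedInpainter` class and its layer sizes below are illustrative, not the paper's architecture), the estimated layout-edge map can simply be stacked as an extra input channel of the inpainting network:

```python
# Minimal sketch: condition an inpainting network on a layout-edge prior by
# concatenating the edge map as an extra input channel (illustrative only).
import torch
import torch.nn as nn

class EdgeGuidedInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 RGB channels + 1 hole-mask channel + 1 layout-edge channel
        self.net = nn.Sequential(
            nn.Conv2d(5, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, panorama, mask, layout_edges):
        # panorama: (B, 3, H, W); mask, layout_edges: (B, 1, H, W)
        x = torch.cat([panorama * (1 - mask), mask, layout_edges], dim=1)
        return self.net(x)
```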
1 code implementation • 27 Nov 2022 • Jen-I Pan, Jheng-Wei Su, Kai-Wen Hsiao, Ting-Yu Yen, Hung-Kuo Chu
To tackle this challenging problem, we reconstruct the refractive index of the scene from silhouettes.
no code implementations • 20 Oct 2022 • Jheng-Wei Su, Chi-Han Peng, Peter Wonka, Hung-Kuo Chu
The major improvement over PSMNet comes from a novel Geometry-aware Panorama Registration Network, or GPR-Net, which effectively tackles the wide-baseline registration problem by exploiting the layout geometry and computing fine-grained correspondences on the layout boundaries instead of in the global pixel space.
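As a toy illustration of registering on layout boundaries rather than in the global pixel space (a hypothetical sketch, not GPR-Net itself), each panorama's floor boundary can be summarized as a 1-D height signal over image columns, and the relative yaw recovered as the circular shift that best aligns the two signals:

```python
# Toy illustration (hypothetical, not GPR-Net): register two panoramas by
# matching their 1-D layout-boundary signals instead of dense pixels.
import numpy as np

def estimate_relative_yaw(boundary_a, boundary_b):
    """boundary_a, boundary_b: 1-D arrays of layout-boundary height per column."""
    a = boundary_a - boundary_a.mean()
    b = boundary_b - boundary_b.mean()
    # circular cross-correlation via FFT
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    shift = int(np.argmax(corr))
    return 360.0 * shift / len(a)  # best column shift -> yaw in degrees
```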
2 code implementations • CVPR 2020 • Jheng-Wei Su, Hung-Kuo Chu, Jia-Bin Huang
Previous methods leverage deep neural networks to directly map input grayscale images to plausible color outputs (a minimal sketch of this direct mapping appears below).
Ranked #2 on Point-interactive Image Colorization on CUB-200-2011 (using extra training data)
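A minimal sketch of the direct-mapping baseline mentioned above, assuming the common Lab-space formulation (a plain CNN regressing the ab chrominance channels from the L channel; the layer sizes are illustrative):

```python
# Minimal sketch of the direct-mapping colorization baseline: a plain CNN
# that regresses the ab chrominance channels from the L (grayscale) channel.
import torch.nn as nn

class DirectColorizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 2, 3, padding=1), nn.Tanh(),  # ab channels in [-1, 1]
        )

    def forward(self, luminance):      # (B, 1, H, W) L channel
        return self.net(luminance)     # (B, 2, H, W) predicted ab channels
```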
1 code implementation • 7 May 2020 • Peng Wang, Lingjie Liu, Nenglun Chen, Hung-Kuo Chu, Christian Theobalt, Wenping Wang
We propose the first approach that simultaneously estimates camera motion and reconstructs the geometry of complex 3D thin structures in high quality from a color video captured by a handheld camera.
3 code implementations • 9 Oct 2019 • Chuhang Zou, Jheng-Wei Su, Chi-Han Peng, Alex Colburn, Qi Shan, Peter Wonka, Hung-Kuo Chu, Derek Hoiem
Recent approaches for predicting layouts from 360° panoramas produce excellent results.
1 code implementation • CVPR 2019 • Shang-Ta Yang, Fu-En Wang, Chi-Han Peng, Peter Wonka, Min Sun, Hung-Kuo Chu
We present a deep learning framework, called DuLa-Net, to predict Manhattan-world 3D room layouts from a single RGB panorama.
no code implementations • 13 Nov 2018 • Fu-En Wang, Hou-Ning Hu, Hsien-Tzu Cheng, Juan-Ting Lin, Shang-Ta Yang, Meng-Li Shih, Hung-Kuo Chu, Min Sun
We propose a novel self-supervised learning approach for predicting the omnidirectional depth and camera motion from a 360° video.
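A simplified sketch of such a self-supervision signal (assuming a PyTorch setup): a neighboring frame is warped into the target view using the predicted depth and camera motion, and the photometric difference is penalized. The equirectangular reprojection itself is abstracted into a hypothetical `warp_to_target` helper:

```python
# Simplified sketch of a photometric self-supervision loss: synthesize the
# target view from a source frame via predicted depth and pose, then compare.
# `warp_to_target` is a hypothetical helper standing in for the 360-degree
# (equirectangular) reprojection and sampling step.
import torch.nn.functional as F

def photometric_loss(target_frame, source_frame, depth, pose, warp_to_target):
    synthesized = warp_to_target(source_frame, depth, pose)
    return F.l1_loss(synthesized, target_frame)
```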
no code implementations • 23 Mar 2014 • Yu-Shiang Wong, Hung-Kuo Chu, Niloy J. Mitra
Further, as more scenes are annotated, the system makes better suggestions based on structural and geometric priors learned from previous annotation sessions.