no code implementations • 27 Nov 2024 • Brian Chao, Hung-Yu Tseng, Lorenzo Porzi, Chen Gao, Tuotuo Li, Qinbo Li, Ayush Saraf, Jia-Bin Huang, Johannes Kopf, Gordon Wetzstein, Changil Kim
As such, each Gaussian can represent a richer set of texture patterns and geometric structures, instead of just a single color and ellipsoid as in naive Gaussian Splatting.
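As a rough illustration of the idea (not the paper's implementation), the sketch below equips a single splat with a small learnable texture map that is sampled at an assumed UV coordinate, instead of a single RGB value; the class name and the UV mapping are hypothetical.

```python
# Illustrative sketch: a splat storing a small learnable texture rather than
# one color. Assumes some uv lookup derived from the ray-splat intersection.
import torch
import torch.nn.functional as F

class TexturedGaussian(torch.nn.Module):
    def __init__(self, tex_res: int = 8):
        super().__init__()
        self.mean = torch.nn.Parameter(torch.zeros(3))
        self.log_scale = torch.nn.Parameter(torch.zeros(3))   # ellipsoid axes
        self.texture = torch.nn.Parameter(torch.rand(1, 3, tex_res, tex_res))

    def color(self, uv: torch.Tensor) -> torch.Tensor:
        # uv in [-1, 1]^2, shape (N, 2); bilinear lookup into the texture map
        grid = uv.view(1, -1, 1, 2)
        rgb = F.grid_sample(self.texture, grid, align_corners=True)
        return rgb.view(3, -1).t()                             # (N, 3)

g = TexturedGaussian()
uv = torch.rand(4, 2) * 2 - 1   # hypothetical intersection coordinates
print(g.color(uv).shape)        # torch.Size([4, 3])
```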
no code implementations • 7 Nov 2024 • Chen Gao, Yipeng Wang, Changil Kim, Jia-Bin Huang, Johannes Kopf
Neural Radiance Fields (NeRF) have demonstrated exceptional capabilities in reconstructing complex scenes with high fidelity.
no code implementations • 13 Jun 2024 • Meng-Li Shih, Jia-Bin Huang, Changil Kim, Rajvi Shah, Johannes Kopf, Chen Gao
We introduce a novel method for dynamic free-view synthesis of ambient scenes from a monocular capture, bringing an immersive quality to the viewing experience.
no code implementations • 15 Apr 2024 • Chieh Hubert Lin, Changil Kim, Jia-Bin Huang, Qinbo Li, Chih-Yao Ma, Johannes Kopf, Ming-Hsuan Yang, Hung-Yu Tseng
These two problems are further exacerbated by the use of pixel-distance losses.
no code implementations • 23 Jan 2024 • Zhi-Hao Lin, Jia-Bin Huang, Zhengqin Li, Zhao Dong, Christian Richardt, Tuotuo Li, Michael Zollhöfer, Johannes Kopf, Shenlong Wang, Changil Kim
While numerous 3D reconstruction and novel-view synthesis methods allow for photorealistic rendering of a scene from multi-view images easily captured with consumer cameras, they bake illumination in their representations and fall short of supporting advanced applications like material editing, relighting, and virtual object insertion.
no code implementations • CVPR 2024 • Yu-Ying Yeh, Jia-Bin Huang, Changil Kim, Lei Xiao, Thu Nguyen-Phuoc, Numair Khan, Cheng Zhang, Manmohan Chandraker, Carl S Marshall, Zhao Dong, Zhengqin Li
In contrast, TextureDreamer can transfer highly detailed, intricate textures from real-world environments to arbitrary objects with only a few casually captured images, which could significantly democratize texture creation.
no code implementations • CVPR 2024 • Jaehoon Choi, Rajvi Shah, Qinbo Li, Yipeng Wang, Ayush Saraf, Changil Kim, Jia-Bin Huang, Dinesh Manocha, Suhib Alsisan, Johannes Kopf
We validate the effectiveness of the proposed method on large unbounded scenes from the mip-NeRF 360, Tanks & Temples, and Deep Blending datasets, achieving on-par rendering quality with 73x fewer triangles and an 11x smaller memory footprint.
no code implementations • CVPR 2024 • Li Ma, Vasu Agrawal, Haithem Turki, Changil Kim, Chen Gao, Pedro Sander, Michael Zollhöfer, Christian Richardt
We show that our Gaussian directional encoding and geometry prior significantly improve the modeling of challenging specular reflections in neural radiance fields, which helps decompose appearance into more physically meaningful components.
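One plausible form such a directional encoding could take (an assumption for illustration, not SpecNeRF's exact formulation) is a bank of learnable spherical Gaussians evaluated at the view or reflection direction:

```python
# Illustrative sketch: encode a direction with learnable spherical Gaussians,
# G_i(d) = exp(sharpness_i * (dot(d, axis_i) - 1)). Generic construction only.
import torch

class SphericalGaussianEncoding(torch.nn.Module):
    def __init__(self, num_lobes: int = 16):
        super().__init__()
        self.axes = torch.nn.Parameter(torch.randn(num_lobes, 3))
        self.log_sharpness = torch.nn.Parameter(torch.zeros(num_lobes))

    def forward(self, dirs: torch.Tensor) -> torch.Tensor:
        # dirs: (N, 3) unit directions -> (N, num_lobes) smooth features
        axes = torch.nn.functional.normalize(self.axes, dim=-1)
        cos = dirs @ axes.t()
        return torch.exp(self.log_sharpness.exp() * (cos - 1.0))

enc = SphericalGaussianEncoding()
d = torch.nn.functional.normalize(torch.randn(5, 3), dim=-1)
print(enc(d).shape)  # torch.Size([5, 16])
```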
no code implementations • 15 Nov 2023 • Badour AlBahar, Shunsuke Saito, Hung-Yu Tseng, Changil Kim, Johannes Kopf, Jia-Bin Huang
We present an approach to generate a 360-degree view of a person with a consistent, high-resolution appearance from a single input image.
no code implementations • 5 Nov 2023 • Linning Xu, Vasu Agrawal, William Laney, Tony Garcia, Aayush Bansal, Changil Kim, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, Aljaž Božič, Dahua Lin, Michael Zollhöfer, Christian Richardt
We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields.
1 code implementation • ICCV 2023 • Geng Lin, Chen Gao, Jia-Bin Huang, Changil Kim, Yipeng Wang, Matthias Zwicker, Ayush Saraf
Video matting has broad applications, from adding interesting effects to casually captured movies to assisting video production professionals.
no code implementations • CVPR 2023 • Hung-Yu Tseng, Qinbo Li, Changil Kim, Suhib Alsisan, Jia-Bin Huang, Johannes Kopf
In this work, we propose a pose-guided diffusion model to generate a consistent long-term video of novel views from a single image.
no code implementations • CVPR 2023 • Andreas Meuleman, Yu-Lun Liu, Chen Gao, Jia-Bin Huang, Changil Kim, Min H. Kim, Johannes Kopf
For handling unknown poses, we jointly estimate the camera poses and the radiance field in a progressive manner.
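A minimal sketch of the joint-estimation idea: per-frame pose parameters are optimized by gradient descent alongside the field's weights. The tiny MLP, the first-order (small-angle) pose update, and the learning rates are placeholder assumptions, not the paper's progressive scheme.

```python
# Illustrative sketch: camera poses as learnable parameters, optimized
# jointly with a (placeholder) radiance-field MLP via a photometric loss.
import torch

num_frames = 10
poses = torch.nn.Parameter(torch.zeros(num_frames, 6))  # axis-angle + translation
field = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 4))     # (x,y,z) -> (rgb, sigma)
opt = torch.optim.Adam([{"params": [poses], "lr": 1e-3},
                        {"params": field.parameters(), "lr": 5e-4}])

def render(frame_idx, pts_cam):
    # Small-angle linearization of the rotation: p' = p + r x p + t.
    # (A real renderer would march rays and alpha-composite.)
    rot, t = poses[frame_idx, :3], poses[frame_idx, 3:]
    pts_world = pts_cam + torch.cross(rot.expand_as(pts_cam), pts_cam, dim=-1) + t
    return field(pts_world)[..., :3]

pts, target = torch.randn(128, 3), torch.rand(128, 3)
loss = torch.nn.functional.mse_loss(render(0, pts), target)
loss.backward()
opt.step()   # updates both the pose guesses and the field weights
```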
1 code implementation • CVPR 2023 • Benjamin Attal, Jia-Bin Huang, Christian Richardt, Michael Zollhoefer, Johannes Kopf, Matthew O'Toole, Changil Kim
Volumetric scene representations enable photorealistic view synthesis for static scenes and form the basis of several existing 6-DoF video techniques.
Ranked #1 on Novel View Synthesis on DONeRF: Evaluation Dataset
1 code implementation • CVPR 2023 • Yu-Lun Liu, Chen Gao, Andreas Meuleman, Hung-Yu Tseng, Ayush Saraf, Changil Kim, Yung-Yu Chuang, Johannes Kopf, Jia-Bin Huang
Dynamic radiance field reconstruction methods aim to model the time-varying structure and appearance of a dynamic scene.
no code implementations • 11 Oct 2022 • Peiye Zhuang, Jia-Bin Huang, Ayush Saraf, Xuejian Rong, Changil Kim, Denis Demandolx
Image composition aims to blend multiple objects to form a harmonized image.
no code implementations • CVPR 2022 • Xuejian Rong, Jia-Bin Huang, Ayush Saraf, Changil Kim, Johannes Kopf
We present a simple but effective technique to boost the rendering quality, which can be easily integrated with most view synthesis methods.
no code implementations • CVPR 2022 • Benjamin Attal, Jia-Bin Huang, Michael Zollhöfer, Johannes Kopf, Changil Kim
Our method supports rendering with a single network evaluation per pixel for small baseline light fields and with only a few evaluations per pixel for light fields with larger baselines.
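The sketch below shows the generic light field network pattern this describes: a single MLP maps a ray directly to a color, with no sampling along depth. The Plücker ray parameterization and layer sizes are assumed choices for illustration, not the paper's architecture.

```python
# Illustrative sketch of a light field network: one MLP evaluation per ray,
# i.e., one evaluation per pixel, mapping a ray parameterization to RGB.
import torch

def plucker(origin: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    d = torch.nn.functional.normalize(direction, dim=-1)
    m = torch.cross(origin, d, dim=-1)      # moment of the ray
    return torch.cat([d, m], dim=-1)        # (N, 6)

mlp = torch.nn.Sequential(torch.nn.Linear(6, 128), torch.nn.ReLU(),
                          torch.nn.Linear(128, 128), torch.nn.ReLU(),
                          torch.nn.Linear(128, 3), torch.nn.Sigmoid())

o = torch.randn(1024, 3)                    # ray origins for one image
d = torch.randn(1024, 3)                    # ray directions
rgb = mlp(plucker(o, d))                    # one network evaluation per pixel
print(rgb.shape)                            # torch.Size([1024, 3])
```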
1 code implementation • 2 Dec 2021 • Benjamin Attal, Jia-Bin Huang, Michael Zollhoefer, Johannes Kopf, Changil Kim
Our method supports rendering with a single network evaluation per pixel for small baseline light field datasets and can also be applied to larger baselines with only a few evaluations per pixel.
1 code implementation • NeurIPS 2021 • Benjamin Attal, Eliot Laidlaw, Aaron Gokaslan, Changil Kim, Christian Richardt, James Tompkin, Matthew O'Toole
Neural networks can represent and accurately reconstruct radiance fields for static 3D scenes (e.g., NeRF).
1 code implementation • CVPR 2022 • Tianye Li, Mira Slavcheva, Michael Zollhoefer, Simon Green, Christoph Lassner, Changil Kim, Tanner Schmidt, Steven Lovegrove, Michael Goesele, Richard Newcombe, Zhaoyang Lv
We propose a novel approach for 3D video synthesis that is able to represent multi-view video recordings of a dynamic real-world scene in a compact, yet expressive representation that enables high-quality view synthesis and motion interpolation.
no code implementations • CVPR 2021 • Wenqi Xian, Jia-Bin Huang, Johannes Kopf, Changil Kim
We present a method that learns a spatiotemporal neural irradiance field for dynamic scenes from a single video.
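At its core, a spatiotemporal field of this kind extends the usual positional query with time. A minimal sketch under that assumption (the layer sizes and activations below are arbitrary, not the paper's network):

```python
# Illustrative sketch: a radiance-field MLP queried at (x, y, z, t),
# returning color and density for each spatiotemporal sample.
import torch

field = torch.nn.Sequential(
    torch.nn.Linear(4, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 4),               # RGB + density
)

xyz = torch.randn(256, 3)                  # sample positions along rays
t = torch.full((256, 1), 0.25)             # normalized timestamp of the frame
out = field(torch.cat([xyz, t], dim=-1))
rgb, sigma = out[..., :3].sigmoid(), out[..., 3:].relu()
```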
3 code implementations • CVPR 2019 • Tae-Hyun Oh, Tali Dekel, Changil Kim, Inbar Mosseri, William T. Freeman, Michael Rubinstein, Wojciech Matusik
How much can we infer about a person's looks from the way they speak?
no code implementations • ECCV 2018 • Yagiz Aksoy, Changil Kim, Petr Kellnhofer, Sylvain Paris, Mohamed Elgharib, Marc Pollefeys, Wojciech Matusik
We present a dataset of thousands of ambient and flash illumination pairs to enable studying flash photography and other applications that can benefit from having separate illuminations.
1 code implementation • 15 May 2018 • Changil Kim, Hijung Valentina Shin, Tae-Hyun Oh, Alexandre Kaspar, Mohamed Elgharib, Wojciech Matusik
We computationally model the overlapping information between faces and voices and show that the learned cross-modal representation contains enough information to identify matching faces and voices with performance similar to that of humans.
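One plausible way to learn such a cross-modal representation (the paper's actual objective may differ) is two encoders projecting faces and voices into a shared space, trained so matching pairs land close together; the placeholder encoders and contrastive loss below are illustrative assumptions.

```python
# Illustrative sketch: shared face-voice embedding space with a symmetric
# contrastive loss over in-batch matching pairs.
import torch
import torch.nn.functional as F

face_enc = torch.nn.Linear(512, 128)   # stands in for a face CNN
voice_enc = torch.nn.Linear(256, 128)  # stands in for an audio network

faces, voices = torch.randn(32, 512), torch.randn(32, 256)
f = F.normalize(face_enc(faces), dim=-1)
v = F.normalize(voice_enc(voices), dim=-1)

logits = f @ v.t() / 0.07                  # cosine similarity / temperature
labels = torch.arange(32)                  # i-th face matches i-th voice
loss = (F.cross_entropy(logits, labels) +
        F.cross_entropy(logits.t(), labels)) / 2
```

At test time, matching reduces to nearest-neighbor search between the two sets of embeddings.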
2 code implementations • ECCV 2018 • Tae-Hyun Oh, Ronnachai Jaroensri, Changil Kim, Mohamed Elgharib, Frédo Durand, William T. Freeman, Wojciech Matusik
We show that the learned filters achieve high-quality results on real videos, with less ringing artifacts and better noise characteristics than previous methods.
no code implementations • ICCV 2017 • Ajay Nandoriya, Mohamed Elgharib, Changil Kim, Mohamed Hefeeda, Wojciech Matusik
The novelty of our work is in our optimization formulation as well as the motion initialization strategy.