no code implementations • 27 Nov 2024 • Brian Chao, Hung-Yu Tseng, Lorenzo Porzi, Chen Gao, Tuotuo Li, Qinbo Li, Ayush Saraf, Jia-Bin Huang, Johannes Kopf, Gordon Wetzstein, Changil Kim
As such, each Gaussian can represent a richer set of texture patterns and geometric structures, instead of just a single color and ellipsoid as in naive Gaussian Splatting.
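The idea of a Gaussian carrying more than a single color can be sketched as follows: a minimal, hypothetical example in which each Gaussian stores a small texture map sampled with local UV coordinates across the splat's footprint. All names and the bilinear-sampling setup here are illustrative, not the paper's implementation.

```python
import numpy as np

def sample_gaussian_color(texture, uv):
    """Bilinearly sample a per-Gaussian texture at local UV coords in [0, 1]."""
    h, w, _ = texture.shape
    x = uv[0] * (w - 1)
    y = uv[1] * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bot = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bot

# A plain Gaussian stores one RGB color; a textured Gaussian stores, e.g.,
# an 8x8 map, so its appearance varies across the splat's footprint.
texture = np.zeros((8, 8, 3))
texture[:, :4] = [1.0, 0.0, 0.0]   # left half red
texture[:, 4:] = [0.0, 0.0, 1.0]   # right half blue
```

A single splat rendered this way can show a red-to-blue pattern, something a constant-color Gaussian would need many primitives to approximate.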
no code implementations • 7 Nov 2024 • Chen Gao, Yipeng Wang, Changil Kim, Jia-Bin Huang, Johannes Kopf
Neural Radiance Fields (NeRF) have demonstrated exceptional capabilities in reconstructing complex scenes with high fidelity.
no code implementations • 13 Jun 2024 • Meng-Li Shih, Jia-Bin Huang, Changil Kim, Rajvi Shah, Johannes Kopf, Chen Gao
We introduce a novel method for dynamic free-view synthesis of ambient scenes from a monocular capture, bringing an immersive quality to the viewing experience.
no code implementations • 15 Apr 2024 • Chieh Hubert Lin, Changil Kim, Jia-Bin Huang, Qinbo Li, Chih-Yao Ma, Johannes Kopf, Ming-Hsuan Yang, Hung-Yu Tseng
These two problems are further reinforced with the use of pixel-distance losses.
no code implementations • 23 Jan 2024 • Chih-Hao Lin, Jia-Bin Huang, Zhengqin Li, Zhao Dong, Christian Richardt, Tuotuo Li, Michael Zollhöfer, Johannes Kopf, Shenlong Wang, Changil Kim
Our results show that IRIS effectively recovers HDR lighting, accurate material, and plausible camera response functions, supporting photorealistic relighting and object insertion.
no code implementations • CVPR 2024 • Jaehoon Choi, Rajvi Shah, Qinbo Li, Yipeng Wang, Ayush Saraf, Changil Kim, Jia-Bin Huang, Dinesh Manocha, Suhib Alsisan, Johannes Kopf
We validate the effectiveness of the proposed method on large unbounded scenes from the mip-NeRF 360, Tanks & Temples, and Deep Blending datasets, achieving on-par rendering quality with 73x fewer triangles and an 11x reduction in memory footprint.
no code implementations • 15 Nov 2023 • Badour AlBahar, Shunsuke Saito, Hung-Yu Tseng, Changil Kim, Johannes Kopf, Jia-Bin Huang
We present an approach to generate a 360-degree view of a person with a consistent, high-resolution appearance from a single input image.
no code implementations • CVPR 2023 • Hung-Yu Tseng, Qinbo Li, Changil Kim, Suhib Alsisan, Jia-Bin Huang, Johannes Kopf
In this work, we propose a pose-guided diffusion model to generate a consistent long-term video of novel views from a single image.
no code implementations • CVPR 2023 • Andreas Meuleman, Yu-Lun Liu, Chen Gao, Jia-Bin Huang, Changil Kim, Min H. Kim, Johannes Kopf
For handling unknown poses, we jointly estimate the camera poses with the radiance field in a progressive manner.
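The progressive joint estimation can be illustrated with a deliberately tiny 1-D analogue: a shared "scene" value and per-frame "pose" offsets are optimized together by alternating gradient steps, with frames added one at a time and the first frame held as the reference. This toy setup and all names are assumptions for illustration, not the paper's actual radiance-field formulation.

```python
import numpy as np

def joint_optimize(observations, n_steps=500, lr=0.1):
    """Toy joint estimation: frame i observes scene + offsets[i].
    Frame 0 is the fixed reference (offsets[0] = 0) to remove the
    gauge ambiguity; frames are added progressively."""
    scene = 0.0
    offsets = np.zeros(len(observations))
    for k in range(1, len(observations) + 1):  # progressively add frames
        for _ in range(n_steps):
            # "pose" step: update each non-reference frame's offset
            for i in range(1, k):
                offsets[i] -= lr * (scene + offsets[i] - observations[i])
            # "radiance field" step: update the shared scene value
            grad = np.mean(scene + offsets[:k] - observations[:k])
            scene -= lr * grad
    return scene, offsets
```

The progressive schedule mirrors the intuition that early frames constrain the reconstruction before later, more uncertain poses are introduced.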
1 code implementation • CVPR 2023 • Yu-Lun Liu, Chen Gao, Andreas Meuleman, Hung-Yu Tseng, Ayush Saraf, Changil Kim, Yung-Yu Chuang, Johannes Kopf, Jia-Bin Huang
Dynamic radiance field reconstruction methods aim to model the time-varying structure and appearance of a dynamic scene.
1 code implementation • CVPR 2023 • Benjamin Attal, Jia-Bin Huang, Christian Richardt, Michael Zollhoefer, Johannes Kopf, Matthew O'Toole, Changil Kim
Volumetric scene representations enable photorealistic view synthesis for static scenes and form the basis of several existing 6-DoF video techniques.
Ranked #1 on Novel View Synthesis on DONeRF: Evaluation Dataset
no code implementations • CVPR 2022 • Xuejian Rong, Jia-Bin Huang, Ayush Saraf, Changil Kim, Johannes Kopf
We present a simple but effective technique to boost the rendering quality, which can be easily integrated with most view synthesis methods.
no code implementations • CVPR 2022 • Benjamin Attal, Jia-Bin Huang, Michael Zollhöfer, Johannes Kopf, Changil Kim
Our method supports rendering with a single network evaluation per pixel for small baseline light fields and with only a few evaluations per pixel for light fields with larger baselines.
1 code implementation • 2 Dec 2021 • Benjamin Attal, Jia-Bin Huang, Michael Zollhoefer, Johannes Kopf, Changil Kim
Our method supports rendering with a single network evaluation per pixel for small baseline light field datasets and can also be applied to larger baselines with only a few evaluations per pixel.
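The evaluation-count difference between a light field network and a sampled radiance field can be made concrete with a toy comparison: a light field representation maps a ray directly to a color in one query, whereas volume rendering queries the field at many points along each ray. The closed-form "networks" below are stand-ins for MLPs; everything here is an illustrative assumption, not the papers' architectures.

```python
import numpy as np

def make_counting_fn(fn):
    """Wrap a function so we can count how often it is evaluated."""
    def wrapped(*args):
        wrapped.calls += 1
        return fn(*args)
    wrapped.calls = 0
    return wrapped

# Stand-ins for learned networks (hypothetical closed forms):
light_field_net = make_counting_fn(lambda ray: np.tanh(ray).sum())
radiance_field_net = make_counting_fn(lambda point: np.exp(-point @ point))

def render_pixel_light_field(ray):
    # Light field: the ray itself is the input; one evaluation per pixel.
    return light_field_net(ray)

def render_pixel_volume(origin, direction, n_samples=64):
    # Volume rendering: evaluate the field at many samples along the ray.
    ts = np.linspace(0.1, 1.0, n_samples)
    return sum(radiance_field_net(origin + t * direction) for t in ts)
```

Rendering one pixel costs a single call in the first case and `n_samples` calls in the second, which is the efficiency gap the light-field formulation exploits for small-baseline captures.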
1 code implementation • ICCV 2021 • Chen Gao, Ayush Saraf, Johannes Kopf, Jia-Bin Huang
We present an algorithm for generating novel views at arbitrary viewpoints and any input time step given a monocular video of a dynamic scene.
1 code implementation • CVPR 2021 • Johannes Kopf, Xuejian Rong, Jia-Bin Huang
We present an algorithm for estimating consistent dense depth maps and camera poses from a monocular video.
no code implementations • CVPR 2021 • Wenqi Xian, Jia-Bin Huang, Johannes Kopf, Changil Kim
We present a method that learns a spatiotemporal neural irradiance field for dynamic scenes from a single video.
1 code implementation • ECCV 2020 • Chen Gao, Ayush Saraf, Jia-Bin Huang, Johannes Kopf
We present a new flow-based video completion algorithm.
Ranked #2 on Video Inpainting on HQVI (480p)
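The core idea of flow-based completion, filling missing pixels by following motion to frames where the content is visible, can be sketched in a 1-D toy form. The real algorithm also completes the flow field itself and blends in the gradient domain; this sketch, with integer flow and simple copying, is illustrative only.

```python
import numpy as np

def propagate_colors(frames, masks, flows):
    """Toy flow-based completion: for each masked pixel in frame t,
    follow the (integer, 1-D) flow to frame t+1 and copy the color
    found there if that pixel is known."""
    frames = [f.copy() for f in frames]
    masks = [m.copy() for m in masks]
    for t in range(len(frames) - 1):
        for x in range(len(frames[t])):
            if masks[t][x]:                      # pixel missing in frame t
                x2 = x + flows[t][x]             # where it lands in frame t+1
                if 0 <= x2 < len(frames[t + 1]) and not masks[t + 1][x2]:
                    frames[t][x] = frames[t + 1][x2]
                    masks[t][x] = False
    return frames
```

Iterating such forward/backward propagation until no reachable known pixel remains is what lets temporally consistent content flow into the hole.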
1 code implementation • 27 Aug 2020 • Johannes Kopf, Kevin Matzen, Suhib Alsisan, Ocean Quigley, Francis Ge, Yangming Chong, Josh Patterson, Jan-Michael Frahm, Shu Wu, Matthew Yu, Peizhao Zhang, Zijian He, Peter Vajda, Ayush Saraf, Michael Cohen
3D photos are static in time, like traditional photos, but are displayed with interactive parallax on mobile or desktop screens, as well as on Virtual Reality devices, where viewing additionally includes stereo.
3 code implementations • 30 Apr 2020 • Xuan Luo, Jia-Bin Huang, Richard Szeliski, Kevin Matzen, Johannes Kopf
We present an algorithm for reconstructing dense, geometrically consistent depth for all pixels in a monocular video.
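A geometric-consistency objective of the kind used for video depth can be illustrated in a reduced 1-D form: warp one frame's depth into another using precomputed correspondences and penalize disparity (inverse-depth) disagreement. The actual method fine-tunes a depth network with 3-D reprojection losses; the integer-flow setup and names below are illustrative assumptions.

```python
import numpy as np

def disparity_consistency_loss(depth_i, depth_j, flow):
    """Toy 1-D consistency penalty: warp frame j's depth to frame i
    using integer flow, then compare disparities (inverse depths)."""
    warped = depth_j[np.arange(len(depth_i)) + flow]
    return np.mean(np.abs(1.0 / depth_i - 1.0 / warped))
```

Driving this loss to zero across many frame pairs is what makes per-frame depth predictions agree with a single coherent 3-D scene instead of flickering independently.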
1 code implementation • CVPR 2020 • Meng-Li Shih, Shih-Yang Su, Johannes Kopf, Jia-Bin Huang
We propose a method for converting a single RGB-D input image into a 3D photo, a multi-layer representation for novel view synthesis that contains hallucinated color and depth structures in regions occluded in the original view.
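How a multi-layer representation yields novel views can be sketched in 1-D: each layer carries color, alpha, and a depth; layers shift by parallax inversely proportional to depth and are composited back to front, so hallucinated background content becomes visible where the foreground moves aside. This is a toy sketch (the paper renders a layered mesh); the wrap-around shift stands in for inpainted border content.

```python
import numpy as np

def render_novel_view(layers, shift):
    """Toy 1-D novel-view render: layers are (color row, alpha row, depth).
    Nearer layers shift more; composite back to front with 'over'."""
    w = len(layers[0][0])
    out = np.zeros(w)
    for color, alpha, depth in sorted(layers, key=lambda l: -l[2]):
        dx = int(round(shift / depth))        # parallax: nearer moves more
        c = np.roll(color, dx)                # wrap-around stands in for
        a = np.roll(alpha, dx)                # hallucinated border content
        out = a * c + (1 - a) * out           # 'over' compositing
    return out
```

With a distant background layer and a near foreground layer, a small viewpoint shift slides the foreground while exposing the background color previously hidden behind it.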
1 code implementation • CVPR 2018 • Po-Han Huang, Kevin Matzen, Johannes Kopf, Narendra Ahuja, Jia-Bin Huang
We present DeepMVS, a deep convolutional neural network (ConvNet) for multi-view stereo reconstruction.
no code implementations • 31 Jan 2017 • Hadar Averbuch-Elor, Johannes Kopf, Tamir Hazan, Daniel Cohen-Or
Thus, to disambiguate what the common foreground object is, we introduce a weakly-supervised technique, where we assume only a small seed, given in the form of a single segmented image.
no code implementations • 26 Jan 2016 • Michael Waechter, Mate Beljan, Simon Fuhrmann, Nils Moehrle, Johannes Kopf, Michael Goesele
Furthermore, using only geometric accuracy by itself does not allow evaluating systems that either lack a geometric scene representation or utilize coarse proxy geometry.
no code implementations • CVPR 2013 • Michael Rubinstein, Armand Joulin, Johannes Kopf, Ce Liu
In contrast to previous co-segmentation methods, our algorithm performs well even in the presence of significant amounts of noise images (images not containing a common object), as is typical of datasets collected from Internet search.