no code implementations • ICCV 2023 • Jonathan Lorraine, Kevin Xie, Xiaohui Zeng, Chen-Hsuan Lin, Towaki Takikawa, Nicholas Sharp, Tsung-Yi Lin, Ming-Yu Liu, Sanja Fidler, James Lucas
Text-to-3D modelling has seen exciting progress by combining generative text-to-image models with image-to-3D methods like Neural Radiance Fields.
1 code implementation • CVPR 2023 • Zhaoshuo Li, Thomas Müller, Alex Evans, Russell H. Taylor, Mathias Unberath, Ming-Yu Liu, Chen-Hsuan Lin
Neural surface reconstruction has been shown to be powerful for recovering dense 3D surfaces via image-based neural rendering.
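A minimal sketch of the rendering step such methods rely on, assuming a NeuS-style logistic mapping from signed distance to opacity (the `beta` sharpness and the toy ray are illustrative choices, not Neuralangelo's implementation):

```python
import numpy as np

def sdf_to_alpha(sdf, beta=0.05):
    """Map SDF samples along a ray to per-interval opacities (NeuS-style)."""
    phi = 1.0 / (1.0 + np.exp(-sdf / beta))   # logistic CDF: ~1 outside, ~0 inside
    return np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-8), 0.0, 1.0)

def composite(alpha, colors):
    """Front-to-back alpha compositing of per-interval colors."""
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]
    return ((trans * alpha)[:, None] * colors).sum(axis=0)

# Toy ray crossing a surface at depth 1.0, so sdf(t) = 1.0 - t.
t = np.linspace(0.0, 2.0, 65)
alpha = sdf_to_alpha(1.0 - t)
colors = np.tile([0.8, 0.2, 0.2], (len(alpha), 1))
print(composite(alpha, colors))   # ~[0.8, 0.2, 0.2]: the surface color
```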
1 code implementation • CVPR 2023 • Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, Tsung-Yi Lin
DreamFusion has recently demonstrated the utility of a pre-trained text-to-image diffusion model to optimize Neural Radiance Fields (NeRF), achieving remarkable text-to-3D synthesis results.
Ranked #2 on the T$^3$Bench text-to-3D benchmark.
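The optimization driving this line of work is Score Distillation Sampling (SDS) from DreamFusion. Below is a hedged PyTorch sketch of one SDS step under common conventions; `render`, `diffusion_eps`, the timestep range, and the weighting are stand-ins, not the authors' code:

```python
import torch

def sds_step(render, diffusion_eps, text_emb, alphas_cumprod):
    """One SDS update: noise the render, query the frozen score network,
    and backprop w * (eps_hat - eps) through the renderer alone."""
    x = render()                                   # differentiable render
    t = torch.randint(20, 980, (1,))               # random diffusion timestep
    a = alphas_cumprod[t]
    eps = torch.randn_like(x)
    x_t = a.sqrt() * x + (1.0 - a).sqrt() * eps    # forward-diffused render
    with torch.no_grad():                          # skip the score Jacobian
        eps_hat = diffusion_eps(x_t, t, text_emb)
    x.backward(gradient=(1.0 - a) * (eps_hat - eps))

# Toy usage: a learnable image stands in for the NeRF, a noise lambda for
# the pretrained text-to-image diffusion model.
img = torch.zeros(1, 3, 64, 64, requires_grad=True)
sds_step(lambda: img + 0, lambda x_t, t, e: torch.randn_like(x_t),
         None, torch.linspace(0.999, 0.01, 1000))
print(img.grad.shape)
```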
4 code implementations • ICCV 2021 • Chen-Hsuan Lin, Wei-Chiu Ma, Antonio Torralba, Simon Lucey
In this paper, we propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect (or even unknown) camera poses -- the joint problem of learning neural 3D representations and registering camera frames.
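A toy 2D PyTorch sketch of the two ingredients named above, with a rigid warp and an annealing schedule of my own choosing (the paper operates on 3D camera poses and full NeRFs): pose parameters that receive gradients alongside the network, and a coarse-to-fine annealed positional encoding:

```python
import math
import torch

def posenc(x, L=8, alpha=8.0):
    """Positional encoding with BARF-style coarse-to-fine annealing:
    alpha in [0, L] progressively unmasks higher-frequency bands."""
    feats = [x]
    for k in range(L):
        w = 0.5 * (1.0 - math.cos(math.pi * min(max(alpha - k, 0.0), 1.0)))
        feats += [w * torch.sin(2.0**k * x), w * torch.cos(2.0**k * x)]
    return torch.cat(feats, dim=-1)

def warp(pts, pose):
    """Apply a learnable 2D rigid pose (tx, ty, theta) to points."""
    c, s = torch.cos(pose[2]), torch.sin(pose[2])
    R = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])
    return pts @ R.T + pose[:2]

net = torch.nn.Sequential(torch.nn.Linear(2 + 2 * 2 * 8, 64),
                          torch.nn.ReLU(), torch.nn.Linear(64, 3))
pose = torch.zeros(3, requires_grad=True)       # the pose is a parameter too
opt = torch.optim.Adam(list(net.parameters()) + [pose], lr=1e-3)
loss = net(posenc(warp(torch.rand(128, 2), pose), alpha=4.0)).square().mean()
loss.backward(); opt.step()
print(pose.grad)                                # registration gets gradients
```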
1 code implementation • NeurIPS 2020 • Chen-Hsuan Lin, Chaoyang Wang, Simon Lucey
Dense 3D object reconstruction from a single image has recently witnessed remarkable advances, but supervising neural networks with ground-truth 3D shapes is impractical due to the laborious process of creating paired image-shape datasets.
1 code implementation • 27 Jan 2020 • Chaoyang Wang, Chen-Hsuan Lin, Simon Lucey
The recovery of 3D shape and pose from 2D landmarks stemming from a large ensemble of images can be viewed as a non-rigid structure from motion (NRSfM) problem.
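A small NumPy sketch of the classical low-rank NRSfM factorization this formulation builds on (the factors are only recovered up to an invertible ambiguity, and the paper's deep, hierarchical treatment is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
F_, P, K = 50, 30, 2                            # frames, landmarks, shape bases
M_true = rng.standard_normal((2 * F_, 3 * K))   # per-frame cameras/coefficients
B_true = rng.standard_normal((3 * K, P))        # K stacked 3D shape bases
W = M_true @ B_true                             # centered 2D landmark matrix

U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 3 * K                                       # the non-rigid rank bound
M = U[:, :r] * np.sqrt(s[:r])                   # recovered motion factor
B = np.sqrt(s[:r])[:, None] * Vt[:r]            # recovered shape factor
print(np.allclose(M @ B, W))                    # True, up to a 3K x 3K ambiguity
```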
1 code implementation • CVPR 2019 • Chen-Hsuan Lin, Oliver Wang, Bryan C. Russell, Eli Shechtman, Vladimir G. Kim, Matthew Fisher, Simon Lucey
In this paper, we address the problem of 3D object mesh reconstruction from RGB videos.
2 code implementations • CVPR 2018 • Chen-Hsuan Lin, Ersin Yumer, Oliver Wang, Eli Shechtman, Simon Lucey
We address the problem of finding realistic geometric corrections to a foreground object such that it appears natural when composited into a background image.
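The differentiable mechanism involved can be sketched as follows, assuming an affine spatial transformer and alpha compositing in PyTorch; the adversarial training that makes the predicted warp look realistic is omitted:

```python
import torch
import torch.nn.functional as F

def composite_with_warp(fg_rgba, bg_rgb, theta):
    """Warp the RGBA foreground by affine `theta`, then alpha-composite."""
    grid = F.affine_grid(theta, fg_rgba.shape, align_corners=False)
    warped = F.grid_sample(fg_rgba, grid, align_corners=False)
    rgb, alpha = warped[:, :3], warped[:, 3:4]
    return alpha * rgb + (1.0 - alpha) * bg_rgb

fg = torch.rand(1, 4, 128, 128)             # foreground with alpha channel
bg = torch.rand(1, 3, 128, 128)
theta = torch.tensor([[[1.0, 0.0, 0.1],     # a small translation; in ST-GAN
                       [0.0, 1.0, 0.0]]])   # the generator predicts this
print(composite_with_warp(fg, bg, theta).shape)   # [1, 3, 128, 128]
```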
no code implementations • 30 Nov 2017 • Rui Zhu, Chaoyang Wang, Chen-Hsuan Lin, Ziyan Wang, Simon Lucey
More recently, excellent results have been attained through the application of photometric bundle adjustment (PBA) methods -- which directly minimize the photometric error across frames.
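A minimal NumPy sketch of the photometric error such methods minimize, using nearest-pixel intensity lookups for brevity (real PBA systems interpolate and jointly optimize poses and structure):

```python
import numpy as np

def photometric_residual(I_ref, I_tgt, X, K, R, t, uv_ref):
    """r_i = I_ref(u_i) - I_tgt(project(K, R @ X_i + t)), per 3D point."""
    proj = (K @ (R @ X.T + t[:, None])).T
    uv = proj[:, :2] / proj[:, 2:3]                      # perspective division
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)    # nearest-pixel lookup
    return I_ref[uv_ref[:, 1], uv_ref[:, 0]] - I_tgt[v, u]

I0 = np.random.rand(64, 64); I1 = I0.copy()              # identical toy frames
K = np.array([[60.0, 0, 32], [0, 60.0, 32], [0, 0, 1]])
X = np.array([[0.0, 0.0, 2.0], [0.1, -0.1, 2.5]])        # toy 3D points
uv0 = ((K @ X.T).T[:, :2] / (K @ X.T).T[:, 2:]).astype(int)
print(photometric_residual(I0, I1, X, K, np.eye(3), np.zeros(3), uv0))  # ~0
```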
no code implementations • 4 Nov 2017 • Rui Zhu, Chaoyang Wang, Chen-Hsuan Lin, Ziyan Wang, Simon Lucey
Reconstructing 3D shapes from a sequence of images has long been a problem of interest in computer vision.
no code implementations • CVPR 2017 • Chen Kong, Chen-Hsuan Lin, Simon Lucey
A common strategy in dictionary learning to encourage generalization is to allow for linear combinations of dictionary elements.
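A toy illustration of that strategy: a sample in the span of the dictionary is reconstructed exactly by a linear combination of atoms. Plain least squares stands in for whatever constraints (e.g., sparsity) a real method would add:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 8))          # dictionary: 8 atoms in R^64
x = D @ rng.standard_normal(8)            # a sample inside the atoms' span
c = np.linalg.lstsq(D, x, rcond=None)[0]  # coefficients of the combination
print(np.allclose(D @ c, x))              # True: exact linear reconstruction
```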
3 code implementations • 21 Jun 2017 • Chen-Hsuan Lin, Chen Kong, Simon Lucey
Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones.
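For concreteness, a toy PyTorch version of that conventional volumetric pipeline: a decoder of 3D transposed convolutions upsampling a latent code into an occupancy grid (this is the baseline the sentence describes, not the paper's point-cloud method):

```python
import torch
import torch.nn as nn

decoder = nn.Sequential(
    nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),  # 4^3 -> 8^3
    nn.ReLU(),
    nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1),   # 8^3 -> 16^3
    nn.ReLU(),
    nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),    # 16^3 -> 32^3
    nn.Sigmoid(),                                          # voxel occupancy
)
z = torch.randn(1, 128, 4, 4, 4)        # latent code from an image encoder
print(decoder(z).shape)                 # torch.Size([1, 1, 32, 32, 32])
```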
no code implementations • 19 May 2017 • Chaoyang Wang, Hamed Kiani Galoogahi, Chen-Hsuan Lin, Simon Lucey
In this paper we present a new approach for efficient regression-based object tracking, which we refer to as Deep-LK.
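A minimal sketch of the regression-style update in such trackers: a linear map takes the difference between template and frame features to a warp increment. The random regressor and 4-DoF warp here are placeholders; in Deep-LK the regressor is derived from learned deep features:

```python
import numpy as np

rng = np.random.default_rng(1)
phi_T = rng.standard_normal(256)          # template feature vector
R = rng.standard_normal((4, 256))         # placeholder feature->warp regressor

def track_step(phi_frame, p):
    """One update of a 4-DoF warp p: regression replaces iterative optimization."""
    return p + R @ (phi_frame - phi_T)

phi_I = phi_T + 0.01 * rng.standard_normal(256)   # features of the new frame
print(track_step(phi_I, np.zeros(4)))             # small predicted warp update
```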
1 code implementation • CVPR 2017 • Chen-Hsuan Lin, Simon Lucey
In this paper, we establish a theoretical connection between the classical Lucas & Kanade (LK) algorithm and the emerging topic of Spatial Transformer Networks (STNs).
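For reference, one inverse compositional LK step for a pure-translation warp in NumPy, the classical update the connection is drawn from (steepest-descent images and the Hessian depend only on the template, so they can be precomputed):

```python
import numpy as np

def ic_lk_step(T, I):
    """One inverse compositional LK update for a translation-only warp."""
    gy, gx = np.gradient(T)                      # template gradients (fixed)
    J = np.stack([gx.ravel(), gy.ravel()], 1)    # steepest-descent images
    H = J.T @ J                                  # precomputable Hessian
    return np.linalg.solve(H, J.T @ (I - T).ravel())

T = np.zeros((32, 32)); T[12:20, 12:20] = 1.0    # toy template
I = np.roll(T, 1, axis=1)                        # frame shifted right by 1 px
print(ic_lk_step(T, I))   # ~[-1, 0]; IC composes its inverse, aligning I to T
```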
1 code implementation • 29 Mar 2016 • Chen-Hsuan Lin, Rui Zhu, Simon Lucey
In this paper, we present a new approach, referred to as the Conditional LK algorithm, which: (i) directly learns linear models that predict geometric displacement as a function of appearance, and (ii) employs a novel strategy for ensuring that the generative pixel-independence assumption can still be exploited.
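A toy NumPy sketch of the "directly learns linear models" idea: synthesize perturbed copies of a template and regress displacement from appearance differences (the shift-only warp and plain least-squares fit are simplifications of the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((32, 32))                   # template image
shifts, diffs = [], []
for _ in range(200):                       # synthetically perturbed examples
    dx, dy = rng.integers(-3, 4, size=2)
    I = np.roll(np.roll(T, dy, axis=0), dx, axis=1)
    shifts.append([dx, dy]); diffs.append((I - T).ravel())
A, Y = np.stack(diffs), np.array(shifts, float)
R = np.linalg.lstsq(A, Y, rcond=None)[0]   # appearance difference -> shift
I_test = np.roll(T, 2, axis=1)             # a query shifted right by 2 px
print((I_test - T).ravel() @ R)            # ~[2, 0] if similar shifts trained
```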