Search Results for author: Chen-Hsuan Lin

Found 15 papers, 10 papers with code

ATT3D: Amortized Text-to-3D Object Synthesis

no code implementations · ICCV 2023 · Jonathan Lorraine, Kevin Xie, Xiaohui Zeng, Chen-Hsuan Lin, Towaki Takikawa, Nicholas Sharp, Tsung-Yi Lin, Ming-Yu Liu, Sanja Fidler, James Lucas

Text-to-3D modelling has seen exciting progress by combining generative text-to-image models with image-to-3D methods like Neural Radiance Fields.

Image to 3D, Object

Magic3D: High-Resolution Text-to-3D Content Creation

1 code implementation · CVPR 2023 · Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, Tsung-Yi Lin

DreamFusion has recently demonstrated the utility of a pre-trained text-to-image diffusion model to optimize Neural Radiance Fields (NeRF), achieving remarkable text-to-3D synthesis results.

Text to 3D

BARF: Bundle-Adjusting Neural Radiance Fields

4 code implementations · ICCV 2021 · Chen-Hsuan Lin, Wei-Chiu Ma, Antonio Torralba, Simon Lucey

In this paper, we propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect (or even unknown) camera poses -- the joint problem of learning neural 3D representations and registering camera frames.

Visual Localization
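
As a rough intuition for this joint problem, here is a toy 1D analogue (my own sketch, not the BARF implementation): gradient descent simultaneously fits the weights of a tiny "field" and an unknown shift that plays the role of a camera pose. The signal, `true_shift`, and the learning rates are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
true_shift = 0.5                       # the unknown "camera pose" (toy value)
obs = np.sin(x + true_shift) + 0.5 * np.sin(2.0 * (x + true_shift))

w = 0.1 * rng.normal(size=2)           # parameters of the 1D "field"
p = 0.0                                # pose estimate, deliberately initialized wrong
lr_w, lr_p = 0.1, 0.2

for _ in range(2000):
    # "render" the field at the current pose estimate
    feats = np.stack([np.sin(x + p), np.sin(2.0 * (x + p))], axis=-1)
    pred = feats @ w
    err = pred - obs
    # gradients of the mean squared error w.r.t. field weights and pose
    grad_w = feats.T @ err / len(x)
    dfeats = np.stack([np.cos(x + p), 2.0 * np.cos(2.0 * (x + p))], axis=-1)
    grad_p = np.mean(err * (dfeats @ w))
    w -= lr_w * grad_w
    p -= lr_p * grad_p
```

Both the representation (`w`) and the registration (`p`) are recovered from the observations alone, which is the essence of the joint problem; BARF's actual contribution concerns making this optimization well-behaved for NeRF's positional encodings.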

SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images

1 code implementation · NeurIPS 2020 · Chen-Hsuan Lin, Chaoyang Wang, Simon Lucey

Dense 3D object reconstruction from a single image has recently witnessed remarkable advances, but supervising neural networks with ground-truth 3D shapes is impractical due to the laborious process of creating paired image-shape datasets.

3D Object Reconstruction From A Single Image, 3D Reconstruction

Deep NRSfM++: Towards Unsupervised 2D-3D Lifting in the Wild

1 code implementation · 27 Jan 2020 · Chaoyang Wang, Chen-Hsuan Lin, Simon Lucey

The recovery of 3D shape and pose from 2D landmarks stemming from a large ensemble of images can be viewed as a non-rigid structure from motion (NRSfM) problem.

3D Reconstruction

ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing

2 code implementations · CVPR 2018 · Chen-Hsuan Lin, Ersin Yumer, Oliver Wang, Eli Shechtman, Simon Lucey

We address the problem of finding realistic geometric corrections to a foreground object such that it appears natural when composited into a background image.

Generative Adversarial Network

Semantic Photometric Bundle Adjustment on Natural Sequences

no code implementations · 30 Nov 2017 · Rui Zhu, Chaoyang Wang, Chen-Hsuan Lin, Ziyan Wang, Simon Lucey

More recently, excellent results have been attained through the application of photometric bundle adjustment (PBA) methods -- which directly minimize the photometric error across frames.

Object Reconstruction

Object-Centric Photometric Bundle Adjustment with Deep Shape Prior

no code implementations · 4 Nov 2017 · Rui Zhu, Chaoyang Wang, Chen-Hsuan Lin, Ziyan Wang, Simon Lucey

Reconstructing 3D shapes from a sequence of images has long been a problem of interest in computer vision.

Object

Using Locally Corresponding CAD Models for Dense 3D Reconstructions From a Single Image

no code implementations · CVPR 2017 · Chen Kong, Chen-Hsuan Lin, Simon Lucey

A common strategy in dictionary learning to encourage generalization is to allow for linear combinations of dictionary elements.

Dictionary Learning, Graph Embedding

Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction

3 code implementations · 21 Jun 2017 · Chen-Hsuan Lin, Chen Kong, Simon Lucey

Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones.

3D Object Reconstruction

Deep-LK for Efficient Adaptive Object Tracking

no code implementations · 19 May 2017 · Chaoyang Wang, Hamed Kiani Galoogahi, Chen-Hsuan Lin, Simon Lucey

In this paper, we present a new approach for efficient regression-based object tracking, which we refer to as Deep-LK.

Object Tracking

Inverse Compositional Spatial Transformer Networks

1 code implementation · CVPR 2017 · Chen-Hsuan Lin, Simon Lucey

In this paper, we establish a theoretical connection between the classical Lucas & Kanade (LK) algorithm and the emerging topic of Spatial Transformer Networks (STNs).

General Classification

The Conditional Lucas & Kanade Algorithm

1 code implementation · 29 Mar 2016 · Chen-Hsuan Lin, Rui Zhu, Simon Lucey

In this paper, we present a new approach, referred to as the Conditional LK algorithm, which: (i) directly learns linear models that predict geometric displacement as a function of appearance, and (ii) employs a novel strategy for ensuring that the generative pixel independence assumption can still be taken advantage of.
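
Ingredient (i) can be illustrated with a minimal 1D sketch (my own toy example, not the paper's code): synthesize shifted copies of a template, then solve a least-squares problem for a linear map from appearance differences to the displacements that produced them. The template signal, shift range, and sample count here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 128)
template = np.sin(x) + 0.5 * np.sin(3.0 * x)   # hypothetical template signal

def warp(shift):
    # the template observed under a 1D geometric displacement
    return np.sin(x + shift) + 0.5 * np.sin(3.0 * (x + shift))

# training set: small random displacements and the appearance
# differences they induce relative to the template
shifts = rng.uniform(-0.2, 0.2, size=500)
diffs = np.stack([warp(s) - template for s in shifts])

# learn a linear regressor mapping appearance difference -> displacement
R, *_ = np.linalg.lstsq(diffs, shifts, rcond=None)

# predict the displacement of an unseen observation
pred = (warp(0.1) - template) @ R
```

For small displacements where the linearization holds, `pred` recovers the true shift closely; the Conditional LK algorithm builds on this conditional-regression view while additionally preserving the generative pixel-independence structure of classical LK.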
