Search Results for author: Fangchang Ma

Found 12 papers, 9 papers with code

StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D

no code implementations · 2 Dec 2023 · Pengsheng Guo, Hans Hao, Adam Caccavale, Zhongzheng Ren, Edward Zhang, Qi Shan, Aditya Sankar, Alexander G. Schwing, Alex Colburn, Fangchang Ma

Our analysis identifies the core of these challenges as the interaction among noise levels in the 2D diffusion process, the architecture of the diffusion network, and the 3D model representation.

3D Generation · Text to 3D · +1

Pseudo-Generalized Dynamic View Synthesis from a Video

no code implementations · 12 Oct 2023 · Xiaoming Zhao, Alex Colburn, Fangchang Ma, Miguel Angel Bautista, Joshua M. Susskind, Alexander G. Schwing

In contrast, for dynamic scenes, scene-specific optimization techniques exist, but, to the best of our knowledge, there is currently no generalized method for dynamic novel view synthesis from a given monocular video.

Novel View Synthesis

FineRecon: Depth-aware Feed-forward Network for Detailed 3D Reconstruction

1 code implementation ICCV 2023 Noah Stier, Anurag Ranjan, Alex Colburn, Yajie Yan, Liang Yang, Fangchang Ma, Baptiste Angles

Recent works on 3D reconstruction from posed images have demonstrated that direct inference of scene-level 3D geometry without test-time optimization is feasible using deep neural networks, showing remarkable promise and high efficiency.

3D Reconstruction

HyperDiffusion: Generating Implicit Neural Fields with Weight-Space Diffusion

1 code implementation ICCV 2023 Ziya Erkoç, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai

HyperDiffusion operates directly on MLP weights and generates new neural implicit fields encoded by synthesized MLP parameters.
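A minimal sketch of the weight-space idea in miniature: an MLP's weight matrices are flattened into a single vector (the representation a weight-space diffusion model would operate on) and then restored to their original shapes. The helper names, shapes, and list-based representation are illustrative assumptions, not taken from HyperDiffusion.

```python
# Illustrative sketch only: flatten MLP weights into one vector, as a
# weight-space diffusion model would consume them, then restore shapes.
# Shapes and helper names are assumptions, not from the paper.

def flatten_params(layers):
    """Flatten a list of (rows x cols) weight matrices into one vector."""
    flat, shapes = [], []
    for w in layers:
        shapes.append((len(w), len(w[0])))
        for row in w:
            flat.extend(row)
    return flat, shapes

def unflatten_params(flat, shapes):
    """Rebuild weight matrices of the recorded shapes from the flat vector."""
    layers, i = [], 0
    for rows, cols in shapes:
        w = [flat[i + r * cols : i + (r + 1) * cols] for r in range(rows)]
        layers.append(w)
        i += rows * cols
    return layers

# A toy 2-layer MLP's weights.
mlp = [[[0.1, 0.2], [0.3, 0.4]], [[0.5], [0.6]]]
vec, shapes = flatten_params(mlp)
assert unflatten_params(vec, shapes) == mlp  # round-trip is lossless
```

Because the round-trip is lossless, a diffusion model trained on such flat vectors can synthesize a new vector that decodes back into a working set of MLP parameters.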

Texturify: Generating Textures on 3D Shape Surfaces

no code implementations · 5 Apr 2022 · Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai

Texture cues on 3D objects are key to compelling visual representations, enabling high visual fidelity with inherent spatial consistency across different views.

RetrievalFuse: Neural 3D Scene Reconstruction with a Database

1 code implementation ICCV 2021 Yawar Siddiqui, Justus Thies, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai

3D reconstruction of large scenes is a challenging problem due to the high-complexity nature of the solution space, in particular for generative neural networks.

3D Reconstruction · 3D Scene Reconstruction · +3

Invertibility of Convolutional Generative Networks from Partial Measurements

1 code implementation NeurIPS 2018 Fangchang Ma, Ulas Ayaz, Sertac Karaman

In this work, we present new theoretical results on convolutional generative neural networks, in particular their invertibility (i.e., the recovery of the input latent code given the network output).

Image Inpainting
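The recovery idea can be illustrated with a toy linear generator: given a subset of its outputs (partial measurements), the latent code is recovered by gradient descent on the squared error at the observed coordinates. The paper analyzes convolutional generators; this linear stand-in and all names below are illustrative assumptions only.

```python
# Toy illustration of inverting a generator from partial measurements:
# recover latent z from a subset of the outputs of a fixed linear map.
# The paper concerns convolutional generators; this linear toy only
# demonstrates recovery by gradient descent, not the paper's setting.

def generate(W, z):
    """Linear 'generator': output_i = sum_j W[i][j] * z[j]."""
    return [sum(w_ij * z_j for w_ij, z_j in zip(row, z)) for row in W]

def invert(W, y_partial, observed, z_dim, lr=0.1, steps=500):
    """Gradient descent on ||P(W z) - y||^2 over the observed indices."""
    z = [0.0] * z_dim
    for _ in range(steps):
        out = generate(W, z)
        grad = [0.0] * z_dim
        for i in observed:
            r = out[i] - y_partial[i]  # residual at an observed coordinate
            for j in range(z_dim):
                grad[j] += 2 * r * W[i][j]
        z = [zj - lr * gj for zj, gj in zip(z, grad)]
    return z

W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]  # 4 outputs, 2 latents
z_true = [0.7, -0.3]
y = generate(W, z_true)
observed = [0, 2, 3]                  # only 3 of the 4 outputs are measured
y_partial = {i: y[i] for i in observed}
z_hat = invert(W, y_partial, observed, z_dim=2)
assert all(abs(a - b) < 1e-4 for a, b in zip(z_hat, z_true))
```

Recovery succeeds here because the observed rows of W still determine z uniquely; the paper's contribution is characterizing when the analogous property holds for convolutional networks.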

Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera

2 code implementations · 1 Jul 2018 · Fangchang Ma, Guilherme Venturelli Cavalheiro, Sertac Karaman

Depth completion, the technique of estimating a dense depth image from sparse depth measurements, has a variety of applications in robotics and autonomous driving.

Autonomous Driving · Depth Completion
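The task's input/output shape can be shown with a classical nearest-neighbor baseline: each pixel takes the depth of its nearest sparse sample. The paper trains a deep network; this baseline, and the grid sizes and helper names below, are illustrative assumptions only.

```python
# Classical nearest-neighbor baseline for depth completion: sparse
# LiDAR-style samples in, dense depth map out. Illustrates the task
# shape only; the paper's method is a learned deep network.

def densify_nearest(samples, height, width):
    """samples: {(row, col): depth}. Returns a dense height x width map."""
    dense = [[0.0] * width for _ in range(height)]
    for r in range(height):
        for c in range(width):
            # pick the sample minimizing squared pixel distance
            sr, sc = min(samples, key=lambda p: (p[0] - r) ** 2 + (p[1] - c) ** 2)
            dense[r][c] = samples[(sr, sc)]
    return dense

sparse = {(0, 0): 1.0, (3, 3): 4.0}   # two sparse depth measurements
dense = densify_nearest(sparse, 4, 4)
assert dense[0][0] == 1.0 and dense[3][3] == 4.0
assert dense[0][1] == 1.0 and dense[3][2] == 4.0
```

Learned methods improve on this baseline by also conditioning on the RGB image, which carries the object boundaries that nearest-neighbor fill smears across.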

Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image

6 code implementations · 21 Sep 2017 · Fangchang Ma, Sertac Karaman

We consider the problem of dense depth prediction from a sparse set of depth measurements and a single RGB image.

Depth Estimation · Depth Prediction · +2

Sparse Depth Sensing for Resource-Constrained Robots

1 code implementation · 4 Mar 2017 · Fangchang Ma, Luca Carlone, Ulas Ayaz, Sertac Karaman

We address the following question: is it possible to reconstruct the geometry of an unknown environment using sparse and incomplete depth measurements?

Compressive Sensing · Depth Estimation · +1
