Search Results for author: Matheus Gadelha

Found 21 papers, 6 papers with code

PlanarRecon: Real-time 3D Plane Detection and Reconstruction from Posed Monocular Videos

1 code implementation CVPR 2022 Yiming Xie, Matheus Gadelha, Fengting Yang, Xiaowei Zhou, Huaizu Jiang

We present PlanarRecon -- a novel framework for globally coherent detection and reconstruction of 3D planes from a posed monocular video.

3D Plane Detection

Multiresolution Tree Networks for 3D Point Cloud Processing

1 code implementation ECCV 2018 Matheus Gadelha, Rui Wang, Subhransu Maji

We present multiresolution tree-structured networks to process point clouds for 3D shape understanding and generation tasks.
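The core idea of a tree-structured point-cloud network can be sketched as follows: order the points with a kd-tree-style recursive sort so spatially nearby points become adjacent, then pool neighbors to form coarser resolutions. This is a minimal illustrative sketch, not the paper's architecture; the function names and the plain average pooling are assumptions for illustration.

```python
import numpy as np

def kdtree_order(points, depth=0):
    """Recursively sort points along alternating axes (kd-tree style)
    so that spatially nearby points end up adjacent in the ordering."""
    if len(points) <= 1:
        return points
    axis = depth % points.shape[1]
    points = points[np.argsort(points[:, axis], kind="stable")]
    mid = len(points) // 2
    left = kdtree_order(points[:mid], depth + 1)
    right = kdtree_order(points[mid:], depth + 1)
    return np.concatenate([left, right], axis=0)

def multires_levels(points, num_levels=3):
    """Build a multiresolution pyramid by average-pooling pairs of
    consecutive points in the kd-tree ordering (assumes even sizes)."""
    levels = [kdtree_order(points)]
    for _ in range(num_levels - 1):
        cur = levels[-1]
        levels.append((cur[0::2] + cur[1::2]) / 2.0)
    return levels

rng = np.random.default_rng(0)
cloud = rng.normal(size=(1024, 3))
levels = multires_levels(cloud, num_levels=3)
print([lvl.shape[0] for lvl in levels])  # [1024, 512, 256]
```

In the paper, learnable convolutions operate over such orderings at each resolution; the pyramid above only shows how the multiresolution structure is obtained.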

3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks

3 code implementations 20 Jul 2017 Zhaoliang Lun, Matheus Gadelha, Evangelos Kalogerakis, Subhransu Maji, Rui Wang

The decoder converts this representation into depth and normal maps capturing the underlying surface from several output viewpoints.

3D Reconstruction · 3D Shape Reconstruction

Shape Generation using Spatially Partitioned Point Clouds

no code implementations 19 Jul 2017 Matheus Gadelha, Subhransu Maji, Rui Wang

We propose to use the expressive power of neural networks to learn a distribution over the shape coefficients in a generative-adversarial framework.
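The coefficient-based representation can be sketched as follows: express each shape in a linear basis and model the distribution of the coefficients. This toy version uses a plain PCA basis and a Gaussian fit as a stand-in for the paper's spatial partitioning and GAN-learned distribution; all data and sizes here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "training set": 200 shapes, each a flattened set of 64 3D points.
shapes = rng.normal(size=(200, 64 * 3))

# Linear basis via PCA (the paper partitions the point cloud spatially
# before building the basis; plain PCA is used here for simplicity).
mean = shapes.mean(axis=0)
centered = shapes - mean
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
k = 16
basis = Vt[:k]                  # k basis vectors
coeffs = centered @ basis.T     # per-shape coefficients

# Stand-in generator: a Gaussian fit to the coefficients. The paper
# instead trains a GAN to model this coefficient distribution.
mu, sigma = coeffs.mean(axis=0), coeffs.std(axis=0)
new_coeffs = mu + sigma * rng.normal(size=k)
new_shape = mean + new_coeffs @ basis   # synthesized shape
print(new_shape.shape)  # (192,)
```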

3D Shape Induction from 2D Views of Multiple Objects

no code implementations 18 Dec 2016 Matheus Gadelha, Subhransu Maji, Rui Wang

In this paper we investigate the problem of inducing a distribution over three-dimensional structures given two-dimensional views of multiple objects taken from unknown viewpoints.

A Deeper Look at 3D Shape Classifiers

no code implementations 7 Sep 2018 Jong-Chyi Su, Matheus Gadelha, Rui Wang, Subhransu Maji

We investigate the role of representations and architectures for classifying 3D shapes in terms of their computational efficiency, generalization, and robustness to adversarial transformations.

3D Shape Classification · Computational Efficiency +1

Inferring 3D Shapes from Image Collections using Adversarial Networks

no code implementations 11 Jun 2019 Matheus Gadelha, Aartika Rai, Subhransu Maji, Rui Wang

To this end, we present new differentiable projection operators that can be used by PrGAN to learn better 3D generative models.

Generative Adversarial Network
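A differentiable projection operator of the kind described can be sketched as a smooth map from an occupancy grid to a silhouette: sum occupancy along a viewing axis and squash it, so the operation has gradients everywhere. This is an illustrative sketch under that assumption, not the paper's exact operator.

```python
import numpy as np

def project_voxels(voxels, axis=0, tau=5.0):
    """Differentiable projection of an occupancy grid to a silhouette:
    a pixel turns on as the summed occupancy along its ray grows.
    Smooth in the voxel values, so gradients can flow back to them."""
    density = voxels.sum(axis=axis)
    return 1.0 - np.exp(-tau * density)

# Toy 8^3 grid with a 2x2x2 solid block in one corner.
grid = np.zeros((8, 8, 8))
grid[:2, :2, :2] = 1.0
sil = project_voxels(grid, axis=0)
print(sil.shape)          # (8, 8)
print(sil[0, 0] > 0.99)   # True: rays through the block saturate
print(sil[5, 5] == 0.0)   # True: empty rays stay off
```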

Shape Reconstruction Using Differentiable Projections and Deep Priors

no code implementations ICCV 2019 Matheus Gadelha, Rui Wang, Subhransu Maji

We investigate the problem of reconstructing shapes from noisy and incomplete projections in the presence of viewpoint uncertainties.

3D Shape Reconstruction

Learning Generative Models of Shape Handles

no code implementations CVPR 2020 Matheus Gadelha, Giorgio Gori, Duygu Ceylan, Radomir Mech, Nathan Carr, Tamy Boubekeur, Rui Wang, Subhransu Maji

We present a generative model to synthesize 3D shapes as sets of handles -- lightweight proxies that approximate the original 3D shape -- for applications in interactive editing, shape parsing, and building compact 3D representations.

Deep Manifold Prior

no code implementations 8 Apr 2020 Matheus Gadelha, Rui Wang, Subhransu Maji

We present a prior for manifold structured data, such as surfaces of 3D shapes, where deep neural networks are adopted to reconstruct a target shape using gradient descent starting from a random initialization.

Denoising · Gaussian Processes
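The reconstruct-by-gradient-descent-from-random-initialization setup can be sketched in miniature: a randomly initialized network is optimized to fit noisy observations of a target. This tiny 1D example with a hand-rolled MLP is an assumption-laden stand-in for the paper's manifold-structured setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of a target curve (stand-in for a noisy 3D scan).
u = np.linspace(0.0, 1.0, 100)[:, None]          # surface parameter
target = np.sin(2 * np.pi * u) + 0.05 * rng.normal(size=u.shape)

# Tiny MLP mapping parameter -> coordinate; random init, trained by
# plain gradient descent on the reconstruction error.
W1 = rng.normal(scale=1.0, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(u)
loss0 = np.mean((pred0 - target) ** 2)           # loss at random init

lr = 0.05
for _ in range(2000):
    h, pred = forward(u)
    grad_out = 2 * (pred - target) / len(u)      # dL/dpred
    gW2 = h.T @ grad_out; gb2 = grad_out.sum(0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)      # backprop through tanh
    gW1 = u.T @ grad_h; gb1 = grad_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(u)
loss = np.mean((pred - target) ** 2)
print(loss < loss0)   # True: the network fit the observations
```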

ANISE: Assembly-based Neural Implicit Surface rEconstruction

no code implementations 27 May 2022 Dmitry Petrov, Matheus Gadelha, Radomir Mech, Evangelos Kalogerakis

Reconstructions can be obtained in two ways: (i) by directly decoding the part latent codes to part implicit functions, then combining them into the final shape; or (ii) by using part latents to retrieve similar part instances in a part database and assembling them in a single shape.

Point cloud reconstruction · Retrieval +1
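Reconstruction path (ii) above, retrieving database parts by latent similarity and assembling them, can be sketched with nearest-neighbor search in latent space. The database, latent dimension, and point counts here are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical part database: 500 parts, each with a 32-d latent code
# and a small point cloud (64 points) describing its geometry.
db_latents = rng.normal(size=(500, 32))
db_points = rng.normal(size=(500, 64, 3))

def retrieve_and_assemble(part_latents):
    """For each predicted part latent, retrieve the nearest database
    part (L2 in latent space) and concatenate the part geometries."""
    parts = []
    for z in part_latents:
        d = np.linalg.norm(db_latents - z, axis=1)
        parts.append(db_points[np.argmin(d)])
    return np.concatenate(parts, axis=0)          # assembled shape

query = rng.normal(size=(4, 32))                  # 4 predicted parts
shape = retrieve_and_assemble(query)
print(shape.shape)  # (256, 3)
```

In the paper the retrieved parts would also be posed relative to each other; plain concatenation is used here to keep the retrieval step isolated.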

Accidental Turntables: Learning 3D Pose by Watching Objects Turn

no code implementations 13 Dec 2022 Zezhou Cheng, Matheus Gadelha, Subhransu Maji

We propose a technique for learning single-view 3D object pose estimation models by utilizing a new source of data -- in-the-wild videos where objects turn.

3D Pose Estimation

Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models

no code implementations 3 Dec 2023 Shengqu Cai, Duygu Ceylan, Matheus Gadelha, Chun-Hao Paul Huang, Tuanfeng Yang Wang, Gordon Wetzstein

Traditional 3D content creation tools empower users to bring their imagination to life by giving them direct control over a scene's geometry, appearance, motion, and camera path.

Text-to-Image Generation · Video Generation

Diffusion Handles: Enabling 3D Edits for Diffusion Models by Lifting Activations to 3D

no code implementations 2 Dec 2023 Karran Pandey, Paul Guerrero, Matheus Gadelha, Yannick Hold-Geoffroy, Karan Singh, Niloy Mitra

Our key insight is to lift diffusion activations for an object to 3D using a proxy depth, 3D-transform the depth and associated activations, and project them back to image space.

3D Object Retrieval · Depth Estimation +2
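The lift-transform-project step described in the insight above can be sketched geometrically: unproject each pixel to 3D with a proxy depth under a pinhole model, apply a rigid transform, and splat values back to the image. This is a simplified nearest-pixel version; the feature map, focal length, and splatting scheme are assumptions, and the paper operates on diffusion activations rather than an arbitrary grid.

```python
import numpy as np

def lift_transform_project(feat, depth, R, t, f=64.0):
    """Lift a per-pixel feature map to 3D with a proxy depth, apply a
    rigid transform (R, t), and splat features back to image space."""
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    cx, cy = W / 2.0, H / 2.0
    # Unproject: pixel + depth -> camera-space 3D point.
    X = (xs - cx) * depth / f
    Y = (ys - cy) * depth / f
    pts = np.stack([X, Y, depth], axis=-1).reshape(-1, 3)
    pts = pts @ R.T + t                              # the 3D edit
    # Reproject and splat to the nearest pixel.
    u = np.round(pts[:, 0] * f / pts[:, 2] + cx).astype(int)
    v = np.round(pts[:, 1] * f / pts[:, 2] + cy).astype(int)
    out = np.zeros_like(feat)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    out[v[ok], u[ok]] = feat.reshape(-1)[ok]
    return out

feat = np.arange(16.0).reshape(4, 4)
depth = np.full((4, 4), 2.0)
same = lift_transform_project(feat, depth, np.eye(3), np.zeros(3))
print(np.allclose(same, feat))  # True: identity transform is a no-op
```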

Learning Continuous 3D Words for Text-to-Image Generation

no code implementations 13 Feb 2024 Ta-Ying Cheng, Matheus Gadelha, Thibault Groueix, Matthew Fisher, Radomir Mech, Andrew Markham, Niki Trigoni

We do this by engineering special sets of input tokens that can be transformed in a continuous manner -- we call them Continuous 3D Words.

Text-to-Image Generation
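The continuously transformable tokens can be pictured as a mapping from a scalar attribute value to a point between two endpoint embeddings. Linear interpolation here is a deliberately crude stand-in for the learned mapping; the embedding size and endpoint names are made up.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical endpoint token embeddings (e.g. an attribute at its
# minimum vs. maximum value). A Continuous 3D Word maps a scalar
# attribute to a point on a learned path between such embeddings.
emb_lo = rng.normal(size=768)
emb_hi = rng.normal(size=768)

def continuous_word(t):
    """Token embedding for attribute value t in [0, 1] (linear stand-in)."""
    return (1.0 - t) * emb_lo + t * emb_hi

mid = continuous_word(0.5)
print(np.allclose(mid, (emb_lo + emb_hi) / 2))  # True
```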
