Search Results for author: Kangxue Yin

Found 11 papers, 4 papers with code

AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis

no code implementations 6 Apr 2022 Zhiqin Chen, Kangxue Yin, Sanja Fidler

In this paper, we address the problem of texture representation for 3D shapes for the challenging and underexplored tasks of texture transfer and synthesis.

3D Reconstruction · Single-View 3D Reconstruction +1

Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis

no code implementations NeurIPS 2021 Tianchang Shen, Jun Gao, Kangxue Yin, Ming-Yu Liu, Sanja Fidler

The core of DMTet includes a deformable tetrahedral grid that encodes a discretized signed distance function and a differentiable marching tetrahedra layer that converts the implicit signed distance representation to the explicit surface mesh representation.
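The conversion at the heart of that description is the marching tetrahedra step: wherever the signed distance changes sign along a tetrahedron edge, a surface vertex is placed by linear interpolation, and the per-tetrahedron crossings are stitched into one or two triangles. A minimal, non-differentiable NumPy sketch of that step follows; the deformable grid and the differentiability that define DMTet are not reproduced, and all names are illustrative.

```python
import numpy as np

def marching_tets(verts, tets, sdf):
    """Extract a triangle mesh from per-vertex signed distances.

    verts: (V, 3) float array of tetrahedral grid vertex positions
    tets:  (T, 4) int array of vertex indices per tetrahedron
    sdf:   (V,)  float array of signed distance values

    Returns (surface_verts, faces). For simplicity, crossing vertices on
    shared edges are duplicated and triangle winding is not made consistent.
    """
    out_verts, faces = [], []

    def crossing(i, j):
        # Linearly interpolate the zero crossing on edge (i, j).
        t = sdf[i] / (sdf[i] - sdf[j])
        out_verts.append(verts[i] + t * (verts[j] - verts[i]))
        return len(out_verts) - 1

    for tet in tets:
        inside = [v for v in tet if sdf[v] < 0.0]
        outside = [v for v in tet if sdf[v] >= 0.0]
        if len(inside) == 0 or len(outside) == 0:
            continue  # no sign change: tetrahedron lies entirely in or out
        if len(inside) == 1 or len(outside) == 1:
            # 1-vs-3 split: the three crossings form a single triangle.
            a = inside[0] if len(inside) == 1 else outside[0]
            others = outside if len(inside) == 1 else inside
            faces.append([crossing(a, b) for b in others])
        else:
            # 2-vs-2 split: four crossings form a quad, split into two triangles.
            (a, b), (c, d) = inside, outside
            q = [crossing(a, c), crossing(a, d), crossing(b, d), crossing(b, c)]
            faces.append([q[0], q[1], q[2]])
            faces.append([q[0], q[2], q[3]])

    return np.array(out_verts), np.array(faces)
```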

3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations

no code implementations ICCV 2021 Kangxue Yin, Jun Gao, Maria Shugrina, Sameh Khamis, Sanja Fidler

Given a small set of high-quality textured objects, our method can create many novel stylized shapes, resulting in effortless 3D content creation and style-aware data augmentation.

3D Reconstruction · Data Augmentation +1

DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort

1 code implementation CVPR 2021 Yuxuan Zhang, Huan Ling, Jun Gao, Kangxue Yin, Jean-Francois Lafleche, Adela Barriuso, Antonio Torralba, Sanja Fidler

To showcase the power of our approach, we generated datasets for 7 image segmentation tasks which include pixel-level labels for 34 human face parts and 32 car parts.

Semantic Segmentation

Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes

1 code implementation CVPR 2021 Towaki Takikawa, Joey Litalien, Kangxue Yin, Karsten Kreis, Charles Loop, Derek Nowrouzezahrai, Alec Jacobson, Morgan McGuire, Sanja Fidler

We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs, while achieving state-of-the-art geometry reconstruction quality.
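Rendering a signed distance field, neural or otherwise, usually reduces to sphere tracing: each ray advances by the distance value until it converges on the surface. The sketch below shows that core loop for any callable `sdf(points)`; the octree feature volumes and level-of-detail machinery that make the paper real-time are not modeled, and the names are illustrative.

```python
import numpy as np

def sphere_trace(sdf, origins, dirs, max_steps=64, eps=1e-4, far=10.0):
    """March rays through a signed distance field.

    sdf:     callable mapping (N, 3) points to (N,) signed distances
    origins: (N, 3) ray origins
    dirs:    (N, 3) unit ray directions

    Returns (points, hit_mask): final sample positions and a boolean
    mask of rays that converged onto the surface.
    """
    t = np.zeros(origins.shape[0])           # distance travelled per ray
    hit = np.zeros(origins.shape[0], bool)   # rays that reached the surface
    for _ in range(max_steps):
        p = origins + t[:, None] * dirs
        d = sdf(p)
        hit |= np.abs(d) < eps
        # Advance only rays that have neither hit nor escaped the scene.
        active = ~hit & (t < far)
        t = np.where(active, t + d, t)
    return origins + t[:, None] * dirs, hit

# Example: trace a small ray bundle against an analytic unit sphere.
unit_sphere = lambda p: np.linalg.norm(p, axis=-1) - 1.0
origins = np.tile([0.0, 0.0, -3.0], (4, 1))
dirs = np.array([[0.0, 0.0, 1.0]] * 4)
points, hit = sphere_trace(unit_sphere, origins, dirs)
```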

COALESCE: Component Assembly by Learning to Synthesize Connections

no code implementations 5 Aug 2020 Kangxue Yin, Zhiqin Chen, Siddhartha Chaudhuri, Matthew Fisher, Vladimir G. Kim, Hao Zhang

We introduce COALESCE, the first data-driven framework for component-based shape assembly which employs deep learning to synthesize part connections.

FAME: 3D Shape Generation via Functionality-Aware Model Evolution

1 code implementation 9 May 2020 Yanran Guan, Han Liu, Kun Liu, Kangxue Yin, Ruizhen Hu, Oliver van Kaick, Yan Zhang, Ersin Yumer, Nathan Carr, Radomir Mech, Hao Zhang

Our tool supports constrained modeling, allowing users to restrict or steer the model evolution with functionality labels.

Graphics

BAE-NET: Branched Autoencoder for Shape Co-Segmentation

1 code implementation ICCV 2019 Zhiqin Chen, Kangxue Yin, Matthew Fisher, Siddhartha Chaudhuri, Hao Zhang

The unsupervised BAE-NET is trained with a collection of unsegmented shapes, using a shape reconstruction loss, without any ground-truth labels.

One-Shot Learning · Representation Learning
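The unsupervised setup described above works because each decoder branch learns an implicit field for one part: the branch-wise maximum reconstructs the whole shape against a plain occupancy target, and the branch argmax then serves as a part label with no segmentation supervision. A rough PyTorch sketch of such a branched decoder, with sizes and names chosen purely for illustration:

```python
import torch
import torch.nn as nn

class BranchedImplicitDecoder(nn.Module):
    """Toy branched decoder: one implicit occupancy head per candidate part."""

    def __init__(self, latent_dim=128, num_branches=8, hidden=256):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Linear(latent_dim + 3, hidden), nn.LeakyReLU(),
                nn.Linear(hidden, 1), nn.Sigmoid(),
            )
            for _ in range(num_branches)
        ])

    def forward(self, z, points):
        # z: (B, latent_dim) shape codes, points: (B, N, 3) query points.
        z = z.unsqueeze(1).expand(-1, points.shape[1], -1)
        x = torch.cat([z, points], dim=-1)
        per_branch = torch.cat([b(x) for b in self.branches], dim=-1)  # (B, N, K)
        occupancy = per_branch.max(dim=-1).values   # union of parts reconstructs the shape
        labels = per_branch.argmax(dim=-1)          # emergent per-point part assignment
        return occupancy, labels

# Training uses only a reconstruction loss against sampled occupancy values:
# loss = nn.functional.mse_loss(occupancy, target_occupancy)
```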

LOGAN: Unpaired Shape Transform in Latent Overcomplete Space

no code implementations 25 Mar 2019 Kangxue Yin, Zhiqin Chen, Hui Huang, Daniel Cohen-Or, Hao Zhang

Our network consists of an autoencoder to encode shapes from the two input domains into a common latent space, where the latent codes concatenate multi-scale shape features, resulting in an overcomplete representation.

Translation
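An overcomplete latent code in this sense is simply the concatenation of features pooled from several encoder stages, so a translator operating in latent space can choose which scales to modify. A hypothetical PyTorch sketch of that idea, not the paper's architecture; all dimensions and names are invented:

```python
import torch
import torch.nn as nn

class MultiScaleEncoder(nn.Module):
    """Toy point-cloud encoder whose latent code concatenates features
    pooled from successive stages, giving an overcomplete representation."""

    def __init__(self, dims=(64, 128, 256)):
        super().__init__()
        layers, prev = [], 3
        for d in dims:
            layers.append(nn.Sequential(nn.Conv1d(prev, d, 1), nn.ReLU()))
            prev = d
        self.layers = nn.ModuleList(layers)

    def forward(self, points):
        # points: (B, N, 3) -> globally pooled features from each stage.
        x = points.transpose(1, 2)              # (B, 3, N)
        codes = []
        for layer in self.layers:
            x = layer(x)
            codes.append(x.max(dim=-1).values)  # one global feature per stage
        return torch.cat(codes, dim=-1)         # (B, sum(dims)) overcomplete code
```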

P2P-NET: Bidirectional Point Displacement Net for Shape Transform

no code implementations 25 Mar 2018 Kangxue Yin, Hui Huang, Daniel Cohen-Or, Hao Zhang

We introduce P2P-NET, a general-purpose deep neural network which learns geometric transformations between point-based shape representations from two domains, e.g., meso-skeletons and surfaces, partial and complete scans, etc.
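A point displacement network of this kind predicts a per-point offset vector and adds it to the input points, pushing one point-based representation toward the other domain. A minimal one-directional PyTorch sketch; the bidirectional coupling and the paper's losses are omitted, and the names are illustrative:

```python
import torch
import torch.nn as nn

class PointDisplacementNet(nn.Module):
    """Toy per-point displacement predictor: moves each input point
    by a learned 3D offset toward the target domain."""

    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, hidden, 1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 1), nn.ReLU(),
            nn.Conv1d(hidden, 3, 1),
        )

    def forward(self, points):
        # points: (B, N, 3) source-domain point set, e.g. a meso-skeleton.
        offsets = self.mlp(points.transpose(1, 2)).transpose(1, 2)
        return points + offsets  # displaced points approximating the target domain

# A Chamfer-style distance between displaced and target point sets would
# serve as the training loss in this simplified setup.
```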
