Search Results for author: Kangxue Yin

Found 16 papers, 6 papers with code

TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models

no code implementations · ICCV 2023 · Tianshi Cao, Karsten Kreis, Sanja Fidler, Nicholas Sharp, Kangxue Yin

We present TexFusion (Texture Diffusion), a new method to synthesize textures for given 3D geometries, using large-scale text-guided image diffusion models.

Denoising · Texture Synthesis

Flexible Isosurface Extraction for Gradient-Based Mesh Optimization

no code implementations · 10 Aug 2023 · Tianchang Shen, Jacob Munkberg, Jon Hasselgren, Kangxue Yin, Zian Wang, Wenzheng Chen, Zan Gojcic, Sanja Fidler, Nicholas Sharp, Jun Gao

This work considers gradient-based mesh optimization, where we iteratively optimize for a 3D surface mesh by representing it as the isosurface of a scalar field, an increasingly common paradigm in applications including photogrammetry, generative modeling, and inverse physics.
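
The key property the abstract relies on is that a mesh vertex extracted from a scalar field is differentiable in the field's values. A minimal PyTorch sketch of that idea on a single grid edge (a toy illustration under that assumption, not the paper's FlexiCubes algorithm):

```python
import torch

s = torch.tensor([-1.0, 1.0], requires_grad=True)  # scalar field at the two edge corners
x = torch.tensor([0.0, 1.0])                       # corner positions along the edge
target = 0.3                                       # where we want the extracted surface point

opt = torch.optim.Adam([s], lr=0.05)
for _ in range(300):
    t = s[0] / (s[0] - s[1])       # zero crossing of the linear interpolant, in [0, 1]
    p = (1 - t) * x[0] + t * x[1]  # extracted surface point, differentiable in s
    loss = (p - target) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

print(p.item())  # converges toward 0.3 by updating the scalar field alone
```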

NeuralField-LDM: Scene Generation with Hierarchical Latent Diffusion Models

no code implementations · CVPR 2023 · Seung Wook Kim, Bradley Brown, Kangxue Yin, Karsten Kreis, Katja Schwarz, Daiqing Li, Robin Rombach, Antonio Torralba, Sanja Fidler

We first train a scene auto-encoder to express a set of image and pose pairs as a neural field, represented as density and feature voxel grids that can be projected to produce novel views of the scene.

Scene Generation
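
Projecting density and feature voxel grids into an image follows standard emission-absorption volume rendering: sample the grids along each camera ray and alpha-composite the samples. A minimal NumPy sketch for one ray, with random values standing in for samples read from the grids:

```python
import numpy as np

steps, dt = 64, 0.05
sigma = np.random.rand(steps)     # density sampled along one ray (stand-in values)
feat = np.random.rand(steps, 16)  # feature samples along the same ray

alpha = 1.0 - np.exp(-sigma * dt)                              # per-sample opacity
trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]  # transmittance before each sample
weights = trans * alpha                                        # compositing weights
pixel = (weights[:, None] * feat).sum(axis=0)                  # rendered feature for this pixel
```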

GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images

3 code implementations · 22 Sep 2022 · Jun Gao, Tianchang Shen, Zian Wang, Wenzheng Chen, Kangxue Yin, Daiqing Li, Or Litany, Zan Gojcic, Sanja Fidler

As several industries are moving towards modeling massive 3D virtual worlds, the need for content creation tools that can scale in terms of the quantity, quality, and diversity of 3D content is becoming evident.

MvDeCor: Multi-view Dense Correspondence Learning for Fine-grained 3D Segmentation

2 code implementations · 18 Aug 2022 · Gopal Sharma, Kangxue Yin, Subhransu Maji, Evangelos Kalogerakis, Or Litany, Sanja Fidler

As a result, the learned 2D representations are view-invariant and geometrically consistent, leading to better generalization when trained on a limited number of labeled shapes compared to alternatives that utilize self-supervision in 2D or 3D alone.

Contrastive Learning · Segmentation
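
The dense-correspondence objective can be sketched as an InfoNCE-style contrastive loss that pulls together embeddings of pixels covering the same surface point in two rendered views. The sizes and temperature below are illustrative assumptions, not the paper's settings:

```python
import torch
import torch.nn.functional as F

N, D, tau = 128, 64, 0.07
z1 = F.normalize(torch.randn(N, D), dim=1)  # embeddings of N pixels in view 1 (stand-ins)
z2 = F.normalize(torch.randn(N, D), dim=1)  # embeddings of the matching pixels in view 2

logits = z1 @ z2.t() / tau   # similarity of every pixel pair across the two views
labels = torch.arange(N)     # pixel i in view 1 corresponds to pixel i in view 2
loss = F.cross_entropy(logits, labels)  # matching pairs must outscore all others
```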

AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis

no code implementations · CVPR 2022 · Zhiqin Chen, Kangxue Yin, Sanja Fidler

In this paper, we address the problem of texture representation for 3D shapes for the challenging and underexplored tasks of texture transfer and synthesis.

3D Reconstruction · Single-View 3D Reconstruction · +1

Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis

no code implementations · NeurIPS 2021 · Tianchang Shen, Jun Gao, Kangxue Yin, Ming-Yu Liu, Sanja Fidler

The core of DMTet includes a deformable tetrahedral grid that encodes a discretized signed distance function and a differentiable marching tetrahedra layer that converts the implicit signed distance representation to the explicit surface mesh representation.
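
The marching tetrahedra step places a surface vertex on every tetrahedron edge whose endpoints have opposite SDF signs, by linear interpolation; because the formula is smooth in the SDF values, gradients can flow back to the grid. A sketch for a single edge:

```python
import numpy as np

va, vb = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])  # edge endpoints
sa, sb = -0.2, 0.6                                             # signed distances at the endpoints

if sa * sb < 0:                  # sign change: the surface crosses this edge
    t = sa / (sa - sb)           # interpolation parameter in [0, 1]
    vertex = va + t * (vb - va)  # extracted surface vertex, here at x = 0.25
```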

3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations

no code implementations · ICCV 2021 · Kangxue Yin, Jun Gao, Maria Shugrina, Sameh Khamis, Sanja Fidler

Given a small set of high-quality textured objects, our method can create many novel stylized shapes, resulting in effortless 3D content creation and style-aware data augmentation.

3D Reconstruction · Data Augmentation · +1

DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort

2 code implementations · CVPR 2021 · Yuxuan Zhang, Huan Ling, Jun Gao, Kangxue Yin, Jean-Francois Lafleche, Adela Barriuso, Antonio Torralba, Sanja Fidler

To showcase the power of our approach, we generated datasets for 7 image segmentation tasks, which include pixel-level labels for 34 human face parts and 32 car parts.

Image Segmentation · Semantic Segmentation

Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes

2 code implementations · CVPR 2021 · Towaki Takikawa, Joey Litalien, Kangxue Yin, Karsten Kreis, Charles Loop, Derek Nowrouzezahrai, Alec Jacobson, Morgan McGuire, Sanja Fidler

We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs, while achieving state-of-the-art geometry reconstruction quality.
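
Renderers for SDFs commonly rely on sphere tracing: step along each ray by the distance the SDF reports, which by definition cannot overshoot the surface. A minimal sketch with an analytic sphere SDF standing in for the neural SDF:

```python
import numpy as np

def sdf(p):  # analytic unit sphere, standing in for a neural SDF
    return np.linalg.norm(p) - 1.0

origin = np.array([0.0, 0.0, -3.0])
direction = np.array([0.0, 0.0, 1.0])  # assumed normalized

t = 0.0
for _ in range(64):
    d = sdf(origin + t * direction)
    if d < 1e-4:  # close enough to the zero level set: surface hit
        break
    t += d        # safe step: the SDF bounds the distance to the surface

hit_point = origin + t * direction  # approximately (0, 0, -1)
```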

COALESCE: Component Assembly by Learning to Synthesize Connections

no code implementations · 5 Aug 2020 · Kangxue Yin, Zhiqin Chen, Siddhartha Chaudhuri, Matthew Fisher, Vladimir G. Kim, Hao Zhang

We introduce COALESCE, the first data-driven framework for component-based shape assembly which employs deep learning to synthesize part connections.

FAME: 3D Shape Generation via Functionality-Aware Model Evolution

1 code implementation · 9 May 2020 · Yanran Guan, Han Liu, Kun Liu, Kangxue Yin, Ruizhen Hu, Oliver van Kaick, Yan Zhang, Ersin Yumer, Nathan Carr, Radomir Mech, Hao Zhang

Our tool supports constrained modeling, allowing users to restrict or steer the model evolution with functionality labels.

Graphics
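
A toy sketch of what label-constrained evolution can look like: candidates carry functionality labels, and only candidates whose labels the user has allowed may be selected as parents. The genome, labels, and fitness function here are illustrative assumptions, not the paper's system:

```python
import random

# toy population: each individual has a genome and a functionality label
population = [{"genes": [random.random() for _ in range(4)],
               "label": ["chair", "table"][i % 2]}
              for i in range(20)]
allowed = {"chair"}  # user constraint: steer evolution toward chairs only

def fitness(ind):    # stand-in for a functionality score
    return sum(ind["genes"])

for _ in range(10):
    parents = sorted((p for p in population if p["label"] in allowed),
                     key=fitness, reverse=True)[:2]
    a, b = parents
    child = {"genes": [random.choice(pair) for pair in zip(a["genes"], b["genes"])],
             "label": a["label"]}  # crossover restricted to allowed labels
    population.sort(key=fitness)
    population[0] = child          # replace the least-fit individual
```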

BAE-NET: Branched Autoencoder for Shape Co-Segmentation

1 code implementation · ICCV 2019 · Zhiqin Chen, Kangxue Yin, Matthew Fisher, Siddhartha Chaudhuri, Hao Zhang

The unsupervised BAE-NET is trained with a collection of un-segmented shapes, using a shape reconstruction loss, without any ground-truth labels.

One-Shot Learning · Representation Learning
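
The branch mechanism: each branch predicts an implicit occupancy for a query point, the maximum over branches reconstructs the whole shape, and the argmax branch yields an emergent part label, so no segmentation supervision is required. A hedged PyTorch sketch with illustrative layer sizes:

```python
import torch
import torch.nn as nn

class BranchedDecoder(nn.Module):
    def __init__(self, latent_dim=128, branches=8):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim + 3, 64), nn.ReLU(), nn.Linear(64, 1))
            for _ in range(branches))

    def forward(self, z, pts):  # z: (B, latent_dim), pts: (B, N, 3)
        zp = torch.cat([z.unsqueeze(1).expand(-1, pts.shape[1], -1), pts], dim=-1)
        per_branch = torch.cat([b(zp) for b in self.branches], dim=-1)  # (B, N, K)
        occ, part = per_branch.max(dim=-1)  # shape occupancy and emergent part id
        return torch.sigmoid(occ), part

# Training uses only a reconstruction loss against ground-truth occupancy,
# e.g. torch.nn.functional.mse_loss(occ, gt_occ); no part labels are needed.
```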

LOGAN: Unpaired Shape Transform in Latent Overcomplete Space

no code implementations · 25 Mar 2019 · Kangxue Yin, Zhiqin Chen, Hui Huang, Daniel Cohen-Or, Hao Zhang

Our network consists of an autoencoder to encode shapes from the two input domains into a common latent space, where the latent codes concatenate multi-scale shape features, resulting in an overcomplete representation.

Generative Adversarial Network · Translation
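
The overcomplete code can be sketched as features max-pooled at several stages of a shared point-cloud encoder and concatenated into one multi-scale latent. Layer sizes are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

class MultiScaleEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.f1 = nn.Sequential(nn.Linear(3, 64), nn.ReLU())
        self.f2 = nn.Sequential(nn.Linear(64, 128), nn.ReLU())
        self.f3 = nn.Sequential(nn.Linear(128, 256), nn.ReLU())

    def forward(self, pts):  # pts: (B, N, 3), shared by both input domains
        h1 = self.f1(pts)
        h2 = self.f2(h1)
        h3 = self.f3(h2)
        # max-pool each scale over the points, then concatenate -> overcomplete code
        return torch.cat([h.max(dim=1).values for h in (h1, h2, h3)], dim=-1)

z = MultiScaleEncoder()(torch.randn(2, 1024, 3))  # latent of size 64 + 128 + 256
```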

P2P-NET: Bidirectional Point Displacement Net for Shape Transform

no code implementations · 25 Mar 2018 · Kangxue Yin, Hui Huang, Daniel Cohen-Or, Hao Zhang

We introduce P2P-NET, a general-purpose deep neural network which learns geometric transformations between point-based shape representations from two domains, e.g., meso-skeletons and surfaces, partial and complete scans, etc.
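
The core operation is a network that predicts a per-point offset added to the input points; a second network of the same form maps the opposite direction, giving the bidirectional transform. A minimal PyTorch sketch with illustrative layer sizes:

```python
import torch
import torch.nn as nn

class DisplacementNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, 3))

    def forward(self, pts):         # pts: (B, N, 3)
        return pts + self.mlp(pts)  # displaced points, same count and order

net_xy, net_yx = DisplacementNet(), DisplacementNet()  # X -> Y and Y -> X
y_pred = net_xy(torch.randn(2, 2048, 3))               # e.g. skeleton -> surface
```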
