Search Results for author: Zhiqin Chen

Found 16 papers, 11 papers with code

DAE-Net: Deforming Auto-Encoder for fine-grained shape co-segmentation

1 code implementation • 22 Nov 2023 • Zhiqin Chen, Qimin Chen, Hang Zhou, Hao Zhang

We present an unsupervised 3D shape co-segmentation method which learns a set of deformable part templates from a shape collection.

ShaDDR: Interactive Example-Based Geometry and Texture Generation via 3D Shape Detailization and Differentiable Rendering

1 code implementation • 8 Jun 2023 • Qimin Chen, Zhiqin Chen, Hang Zhou, Hao Zhang

Furthermore, we showcase the ability of our method to learn geometric details and textures from shapes reconstructed from real-world photos.

Texture Synthesis

A Review of Deep Learning-Powered Mesh Reconstruction Methods

no code implementations • 6 Mar 2023 • Zhiqin Chen

With recent advances in hardware and rendering techniques, 3D models have become ubiquitous in our daily lives.

3D Shape Reconstruction

AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis

no code implementations • CVPR 2022 • Zhiqin Chen, Kangxue Yin, Sanja Fidler

In this paper, we address the problem of texture representation for 3D shapes for the challenging and underexplored tasks of texture transfer and synthesis.

3D Reconstruction, Single-View 3D Reconstruction, +1

Neural Dual Contouring

2 code implementations • 4 Feb 2022 • Zhiqin Chen, Andrea Tagliasacchi, Thomas Funkhouser, Hao Zhang

We introduce neural dual contouring (NDC), a new data-driven approach to mesh reconstruction based on dual contouring (DC).

Surface Reconstruction
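For reference, below is a minimal sketch of the classical dual-contouring meshing step that NDC builds on, applied to a dense SDF grid. The function name and the simplifications are mine: vertices are simply placed at cell centres, whereas classical DC solves a quadric error function per cell and NDC predicts the vertex positions (and, in its unsigned variant, the crossing edges) with a network.

```python
import numpy as np

def dual_contour_sketch(sdf, iso=0.0):
    """Simplified dual contouring of a dense SDF grid (illustrative only).

    One vertex is placed in every grid cell the surface passes through,
    and one quad is emitted for every grid edge whose endpoints lie on
    opposite sides of the isosurface, connecting the vertices of the four
    cells that share the edge.
    """
    nx, ny, nz = sdf.shape
    cell_vert = -np.ones((nx - 1, ny - 1, nz - 1), dtype=np.int64)
    verts, quads = [], []

    # vertex placement: one vertex per surface-crossing cell
    for i in range(nx - 1):
        for j in range(ny - 1):
            for k in range(nz - 1):
                inside = sdf[i:i + 2, j:j + 2, k:k + 2] < iso
                if inside.any() and not inside.all():
                    cell_vert[i, j, k] = len(verts)
                    verts.append([i + 0.5, j + 0.5, k + 0.5])

    # face extraction: one quad per sign-changing x-edge
    # (edges along y and z are handled analogously, omitted for brevity)
    for i in range(nx - 1):
        for j in range(1, ny - 1):
            for k in range(1, nz - 1):
                if (sdf[i, j, k] < iso) != (sdf[i + 1, j, k] < iso):
                    q = [cell_vert[i, j - 1, k - 1], cell_vert[i, j, k - 1],
                         cell_vert[i, j, k], cell_vert[i, j - 1, k]]
                    if min(q) >= 0:  # all four neighbouring cells have a vertex
                        quads.append(q)
    return np.array(verts), np.array(quads)
```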

Learning Mesh Representations via Binary Space Partitioning Tree Networks

1 code implementation • 27 Jun 2021 • Zhiqin Chen, Andrea Tagliasacchi, Hao Zhang

The network is trained to reconstruct a shape using a set of convexes obtained from a BSP-tree built over a set of planes, where the planes and convexes are both defined by learned network weights.
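As a rough illustration of that assembly, here is a sketch of evaluating such a BSP representation at query points, assuming the plane parameters and the binary plane-to-convex grouping matrix have already been produced by the network; the variable names and the tolerance are mine, not the paper's.

```python
import torch

def bsp_occupancy(points, planes, grouping):
    """Evaluate a BSP-style shape: a union of convexes, each the
    intersection of learned half-spaces (illustrative sketch).

    points:   (N, 3) query points
    planes:   (P, 4) plane parameters (a, b, c, d) of ax + by + cz + d = 0
    grouping: (P, C) binary matrix assigning planes to convexes
    """
    homo = torch.cat([points, torch.ones_like(points[:, :1])], dim=1)  # (N, 4)
    outside = torch.relu(homo @ planes.t())  # how far each point is outside each plane
    violation = outside @ grouping           # summed violation per convex
    inside_convex = violation < 1e-5         # inside a convex iff no plane is violated
    return inside_convex.any(dim=1)          # the shape is the union of its convexes
```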

Neural Marching Cubes

1 code implementation • 21 Jun 2021 • Zhiqin Chen, Hao Zhang

To tackle these challenges, we re-cast MC from a deep learning perspective, by designing tessellation templates more apt at preserving geometric features, and learning the vertex positions and mesh topologies from training meshes, to account for contextual information from nearby cubes.
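For contrast, the vertex placement classical Marching Cubes uses on a sign-changing cube edge is plain linear interpolation of the implicit values; NMC replaces this (and the fixed tessellation table) with positions and topologies learned from training meshes. A tiny illustrative helper, with names of my own choosing:

```python
import numpy as np

def mc_edge_vertex(p0, p1, v0, v1, iso=0.0):
    """Classical Marching Cubes: drop a vertex on a cube edge whose endpoint
    values v0, v1 straddle the isovalue, by linear interpolation. NMC instead
    predicts vertex positions and per-cube tessellations from learned
    features, which is what lets it preserve sharp edges and corners."""
    t = (iso - v0) / (v1 - v0)
    return np.asarray(p0) + t * (np.asarray(p1) - np.asarray(p0))

# e.g. mc_edge_vertex((0, 0, 0), (1, 0, 0), -0.2, 0.6) -> vertex at x = 0.25
```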

CAPRI-Net: Learning Compact CAD Shapes with Adaptive Primitive Assembly

no code implementations • CVPR 2022 • Fenggen Yu, Zhiqin Chen, Manyi Li, Aditya Sanghi, Hooman Shayani, Ali Mahdavi-Amiri, Hao Zhang

We introduce CAPRI-Net, a neural network for learning compact and interpretable implicit representations of 3D computer-aided design (CAD) models, in the form of adaptive primitive assemblies.

CAD Reconstruction

COALESCE: Component Assembly by Learning to Synthesize Connections

no code implementations • 5 Aug 2020 • Kangxue Yin, Zhiqin Chen, Siddhartha Chaudhuri, Matthew Fisher, Vladimir G. Kim, Hao Zhang

We introduce COALESCE, the first data-driven framework for component-based shape assembly which employs deep learning to synthesize part connections.

BSP-Net: Generating Compact Meshes via Binary Space Partitioning

3 code implementations • CVPR 2020 • Zhiqin Chen, Andrea Tagliasacchi, Hao Zhang

The network is trained to reconstruct a shape using a set of convexes obtained from a BSP-tree built on a set of planes.

3D Reconstruction, 3D Shape Representation

BAE-NET: Branched Autoencoder for Shape Co-Segmentation

1 code implementation • ICCV 2019 • Zhiqin Chen, Kangxue Yin, Matthew Fisher, Siddhartha Chaudhuri, Hao Zhang

The unsupervised BAE-NET is trained with a collection of un-segmented shapes, using a shape reconstruction loss, without any ground-truth labels.

One-Shot Learning, Representation Learning
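A minimal sketch of the branched decoder idea (layer sizes and activations are illustrative, not the paper's configuration): each branch emits one implicit field, their max is the reconstructed occupancy that the reconstruction loss supervises, and the branch that wins at a point becomes its part label, which is how segmentation emerges without ground-truth labels.

```python
import torch
import torch.nn as nn

class BranchedImplicitDecoder(nn.Module):
    """Sketch of a BAE-NET-style branched implicit decoder (illustrative sizes)."""
    def __init__(self, code_dim=128, num_branches=8, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim + 3, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, num_branches), nn.Sigmoid())  # one output per branch

    def forward(self, code, points):
        # code: (B, code_dim), points: (B, N, 3)
        code = code.unsqueeze(1).expand(-1, points.shape[1], -1)
        branch_fields = self.net(torch.cat([code, points], dim=-1))  # (B, N, K)
        occupancy = branch_fields.max(dim=-1).values  # supervised by the reconstruction loss
        part_labels = branch_fields.argmax(dim=-1)    # unsupervised co-segmentation
        return occupancy, part_labels
```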

LOGAN: Unpaired Shape Transform in Latent Overcomplete Space

no code implementations • 25 Mar 2019 • Kangxue Yin, Zhiqin Chen, Hui Huang, Daniel Cohen-Or, Hao Zhang

Our network consists of an autoencoder to encode shapes from the two input domains into a common latent space, where the latent codes concatenate multi-scale shape features, resulting in an overcomplete representation.

Generative Adversarial Network, Translation
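A rough sketch of what an overcomplete, multi-scale latent code could look like in this spirit: per-point features are pooled at several stages of the encoder and the pooled vectors are concatenated, so coarse and fine shape properties sit side by side in the code. The layer sizes are illustrative, and the local neighborhood aggregation of the actual encoder is replaced here by plain per-point MLPs.

```python
import torch
import torch.nn as nn

class MultiScaleEncoder(nn.Module):
    """Illustrative encoder producing a concatenated multi-scale latent code."""
    def __init__(self, feat_dims=(64, 128, 256)):
        super().__init__()
        self.mlps = nn.ModuleList()
        in_dim = 3
        for d in feat_dims:
            self.mlps.append(nn.Sequential(nn.Linear(in_dim, d), nn.ReLU(),
                                           nn.Linear(d, d)))
            in_dim = d

    def forward(self, points):
        # points: (B, N, 3) from either of the two shape domains
        codes, feat = [], points
        for mlp in self.mlps:
            feat = mlp(feat)                      # per-point features at this stage
            codes.append(feat.max(dim=1).values)  # global pooling of this stage
        return torch.cat(codes, dim=-1)           # overcomplete multi-scale latent code
```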

Learning Implicit Fields for Generative Shape Modeling

4 code implementations • CVPR 2019 • Zhiqin Chen, Hao Zhang

We advocate the use of implicit fields for learning generative models of shapes and introduce an implicit field decoder, called IM-NET, for shape generation, aimed at improving the visual quality of the generated shapes.

3D Reconstruction, 3D Shape Representation, +2
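A minimal sketch of such an implicit field decoder (layer sizes are illustrative, not the paper's exact architecture): an MLP maps a shape latent code plus a 3D point to an inside/outside value. A mesh can then be extracted by sampling the learned field on a grid and running an iso-surfacing method such as Marching Cubes.

```python
import torch
import torch.nn as nn

class ImplicitFieldDecoder(nn.Module):
    """Sketch of an IM-NET-style implicit field decoder (illustrative sizes)."""
    def __init__(self, code_dim=128, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim + 3, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, code, points):
        # code: (B, code_dim), points: (B, N, 3) -> inside/outside value per point
        code = code.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.net(torch.cat([code, points], dim=-1)).squeeze(-1)
```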

BSD-GAN: Branched Generative Adversarial Network for Scale-Disentangled Representation Learning and Image Synthesis

2 code implementations • 22 Mar 2018 • Zili Yi, Zhiqin Chen, Hao Cai, Wendong Mao, Minglun Gong, Hao Zhang

The key feature of BSD-GAN is that it is trained in multiple branches, progressively covering both the breadth and depth of the network, as resolutions of the training images increase to reveal finer-scale features.

Generative Adversarial Network, Image Generation, +1
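A rough sketch of the branched idea as I read it: the latent vector is split into branches and each branch is injected at a different generator resolution, so earlier branches govern coarse structure and later ones finer-scale features. Channel counts, block design, and the progressive training schedule below are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class BranchedGenerator(nn.Module):
    """Illustrative generator with one latent branch per resolution level."""
    def __init__(self, branch_dim=32, num_branches=4, base_ch=256):
        super().__init__()
        self.from_latent = nn.Linear(branch_dim, base_ch * 4 * 4)
        self.blocks = nn.ModuleList()
        ch = base_ch
        for _ in range(num_branches - 1):
            # each block upsamples 2x and mixes in the next latent branch
            self.blocks.append(nn.Sequential(
                nn.Upsample(scale_factor=2),
                nn.Conv2d(ch + branch_dim, ch // 2, 3, padding=1), nn.ReLU()))
            ch //= 2
        self.to_rgb = nn.Conv2d(ch, 3, 1)

    def forward(self, z):
        # z: (B, num_branches, branch_dim); branch 0 seeds the coarsest level
        x = self.from_latent(z[:, 0]).view(z.shape[0], -1, 4, 4)
        for k, block in enumerate(self.blocks, start=1):
            b = z[:, k, :, None, None].expand(-1, -1, x.shape[2], x.shape[3])
            x = block(torch.cat([x, b], dim=1))  # inject branch k at this resolution
        return torch.tanh(self.to_rgb(x))
```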
