3D Shape Generation

43 papers with code • 0 benchmarks • 1 dataset

Image credit: Mo et al.

Latest papers with no code

UDiFF: Generating Conditional Unsigned Distance Fields with Optimal Wavelet Diffusion

no code yet • 10 Apr 2024

In this work, we present UDiFF, a 3D diffusion model for unsigned distance fields (UDFs) that can generate textured 3D shapes with open surfaces from text conditions or unconditionally.
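For readers unfamiliar with the representation, the sketch below illustrates how an unsigned distance field can be evaluated against a sampled open surface with a nearest-neighbor query. It is a minimal illustration, not the UDiFF pipeline; the function `unsigned_distance_field` and the SciPy-based setup are assumptions for demonstration only.

```python
# Minimal sketch (not the UDiFF implementation): evaluating an unsigned
# distance field (UDF) for query points against a sampled surface point set.
# UDF(x) = min_p ||x - p||, which stays well-defined for open surfaces
# because no inside/outside sign is needed.
import numpy as np
from scipy.spatial import cKDTree

def unsigned_distance_field(surface_points: np.ndarray, queries: np.ndarray) -> np.ndarray:
    """Return the unsigned distance from each query point to the surface samples."""
    tree = cKDTree(surface_points)      # spatial index over surface samples
    distances, _ = tree.query(queries)  # nearest-neighbor distance = UDF value
    return distances

# Example: an open half-cylinder arc sampled as points, queried at random locations.
theta = np.linspace(0, np.pi, 200)
surface = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
grid = np.random.uniform(-1.5, 1.5, size=(1000, 3))
udf_values = unsigned_distance_field(surface, grid)
```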

NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation

no code yet • 27 Mar 2024

3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.

Text-to-3D Shape Generation

no code yet • 20 Mar 2024

Recent years have seen an explosion of work and interest in text-to-3D shape generation.

Compositional 3D Scene Synthesis with Scene Graph Guided Layout-Shape Generation

no code yet • 19 Mar 2024

Recent progress has been made in shape generation with powerful generative models, such as diffusion models, which increase shape fidelity.

Deep Generative Design for Mass Production

no code yet • 16 Mar 2024

Generative Design (GD) has evolved as a transformative design approach, employing advanced algorithms and AI to create diverse and innovative solutions beyond traditional constraints.

HyperSDFusion: Bridging Hierarchical Structures in Language and Geometry for Enhanced 3D Text2Shape Generation

no code yet • 1 Mar 2024

First, we introduce a hyperbolic text-image encoder to learn the sequential and multi-modal hierarchical features of text in hyperbolic space.
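As context for the hyperbolic-space component, the sketch below shows the usual first step of such encoders: projecting Euclidean feature vectors onto the Poincaré ball via the exponential map at the origin. This is an illustrative assumption, not HyperSDFusion's code; the helper `expmap0` and the curvature parameter are hypothetical names.

```python
# Minimal sketch (assumption, not HyperSDFusion's code): mapping Euclidean
# feature vectors onto the Poincare ball, where distances grow toward the
# boundary and hierarchical structure can be embedded with low distortion.
import numpy as np

def expmap0(v: np.ndarray, c: float = 1.0, eps: float = 1e-9) -> np.ndarray:
    """Exponential map at the origin of the Poincare ball with curvature -c."""
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    norm = np.maximum(norm, eps)  # avoid division by zero for the zero vector
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

# Text/image features (e.g., from an encoder) mapped into the ball; points
# nearer the boundary can represent deeper levels of a hierarchy.
features = np.random.randn(4, 128)
hyperbolic_features = expmap0(features)
assert np.all(np.linalg.norm(hyperbolic_features, axis=-1) < 1.0)
```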

Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability

no code yet • 19 Feb 2024

In this paper, we extend auto-regressive models to 3D domains and seek stronger 3D shape generation by simultaneously improving the capacity and scalability of auto-regressive models.

Topology-Aware Latent Diffusion for 3D Shape Generation

no code yet • 31 Jan 2024

By strategically incorporating topological features into the diffusion process, our generative module is able to produce a richer variety of 3D shapes with different topological structures.

MVDD: Multi-View Depth Diffusion Models

no code yet • 8 Dec 2023

State-of-the-art results from extensive experiments demonstrate MVDD's strong performance in 3D shape generation and depth completion, as well as its potential as a 3D prior for downstream tasks.

XCube ($\mathcal{X}^3$): Large-Scale 3D Generative Modeling using Sparse Voxel Hierarchies

no code yet • 6 Dec 2023

In addition to unconditional generation, we show that our model can be used to solve a variety of tasks such as user-guided editing, scene completion from a single scan, and text-to-3D.
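To make the sparse-voxel-hierarchy idea concrete, the sketch below builds a coarse-to-fine set of occupied voxels from a point cloud, so storage is spent only where geometry exists. It is a simplified assumption for illustration, not XCube's implementation; the function `sparse_voxel_hierarchy` and its parameters are hypothetical.

```python
# Minimal sketch (assumption, not XCube's implementation): a sparse voxel
# hierarchy as per-level sets of occupied voxel indices, refined level by level.
import numpy as np

def sparse_voxel_hierarchy(points: np.ndarray, levels: int = 4, root_size: float = 1.0):
    """Return a list of sets of occupied voxel indices, one set per level."""
    hierarchy = []
    for level in range(levels):
        voxel_size = root_size / (2 ** level)            # halve voxel size each level
        indices = np.floor(points / voxel_size).astype(np.int64)
        hierarchy.append({tuple(ix) for ix in indices})  # sparse: only occupied cells
    return hierarchy

# Example: points on a unit sphere surface; occupancy grows with resolution but
# stays far below the size of a dense grid at the finest level.
pts = np.random.randn(5000, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print([len(level) for level in sparse_voxel_hierarchy(pts, levels=4)])
```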