3D Generation
143 papers with code • 1 benchmark • 6 datasets
Libraries
Use these libraries to find 3D Generation models and implementations.
Most implemented papers
Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following
We introduce Point-Bind, a 3D multi-modality model that aligns point clouds with 2D images, language, audio, and video.
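Point-Bind follows an ImageBind-style recipe, pulling each modality's encoder output into a shared embedding space. The snippet below is a minimal sketch of such contrastive alignment; the encoders are hypothetical stand-ins and the loss is generic InfoNCE, not the paper's exact training objective.

```python
# Sketch: contrastive alignment between paired point-cloud and image embeddings.
# The embeddings here would come from a point encoder and a frozen image encoder.
import torch
import torch.nn.functional as F

def info_nce(point_emb: torch.Tensor, image_emb: torch.Tensor, temperature: float = 0.07):
    """Symmetric InfoNCE loss over a batch of paired (point cloud, image) embeddings."""
    p = F.normalize(point_emb, dim=-1)           # (B, D) unit-norm point embeddings
    i = F.normalize(image_emb, dim=-1)           # (B, D) unit-norm image embeddings
    logits = p @ i.t() / temperature             # (B, B) cosine-similarity matrix
    targets = torch.arange(p.size(0))            # matching pairs sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Dummy batch: 8 paired embeddings of dimension 512.
loss = info_nce(torch.randn(8, 512), torch.randn(8, 512))
```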
MVDream: Multi-view Diffusion for 3D Generation
We introduce MVDream, a diffusion model that is able to generate consistent multi-view images from a given text prompt.
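Multi-view diffusion models of this kind denoise a batch of views jointly, each conditioned on its camera pose. The look-at construction below is a generic sketch of how such per-view cameras could be laid out (four evenly spaced azimuths, a common assumption here rather than MVDream's actual pipeline).

```python
# Sketch: build camera-to-world matrices for four views circling the object.
import numpy as np

def lookat_extrinsic(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """Camera-to-world matrix looking from `eye` toward `target` (z-up, -z viewing assumed)."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    new_up = np.cross(right, forward)
    c2w = np.eye(4)
    c2w[:3, 0], c2w[:3, 1], c2w[:3, 2], c2w[:3, 3] = right, new_up, -forward, eye
    return c2w

# Four cameras on a circle of radius 2, 90 degrees apart, all looking at the origin.
radius = 2.0
poses = [lookat_extrinsic(np.array([radius * np.cos(a), radius * np.sin(a), 0.5]))
         for a in np.deg2rad([0, 90, 180, 270])]
```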
Text-to-Image Rectified Flow as Plug-and-Play Priors
Beyond the generative capabilities of diffusion priors, and motivated by the unique time-symmetry of rectified flow models, a variant of our method can additionally perform image inversion.
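The time-symmetry referenced here comes from rectified flow sampling being the integration of a learned velocity field along an ODE: integrating the same ODE backward recovers the noise corresponding to a given image. A minimal Euler-integration sketch, with a placeholder velocity network standing in for a trained model:

```python
# Sketch: rectified-flow sampling and inversion via Euler integration of dx/dt = v(x, t).
import torch

def integrate_rectified_flow(x, velocity, t0: float, t1: float, steps: int = 50):
    """Integrate dx/dt = v(x, t) from t0 to t1. With the common convention
    x_0 = noise and x_1 = data, t0=0 -> t1=1 generates a sample, while
    t0=1 -> t1=0 inverts an image back to its noise (time symmetry)."""
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        x = x + dt * velocity(x, torch.full((x.shape[0],), t))
        t += dt
    return x

# Placeholder velocity field; a real model would be a trained network.
velocity = lambda x, t: -x

noise = torch.randn(4, 3, 64, 64)
sample = integrate_rectified_flow(noise, velocity, 0.0, 1.0)      # generation
recovered = integrate_rectified_flow(sample, velocity, 1.0, 0.0)  # inversion
```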
CraftsMan3D: High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner
We present a novel generative 3D modeling system, named CraftsMan, which generates high-fidelity 3D geometries with highly varied shapes, regular mesh topologies, and detailed surfaces, and which notably allows the geometry to be refined interactively.
LION: Latent Point Diffusion Models for 3D Shape Generation
To advance 3D denoising diffusion models (DDMs) and make them useful for digital artists, we require (i) high generation quality, (ii) flexibility for manipulation and applications such as conditional synthesis and shape interpolation, and (iii) the ability to output smooth surfaces or meshes.
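LION runs the diffusion in a latent space of point features rather than on raw point coordinates. The loop below is a schematic DDPM ancestral-sampling sketch over such a latent; the noise predictor, schedule, and shapes are illustrative assumptions, not LION's actual configuration.

```python
# Sketch: plain DDPM reverse sampling over a (batch, num_points, latent_dim) latent.
import torch

def ddpm_sample(eps_model, shape, timesteps: int = 1000):
    """Ancestral sampling: start from Gaussian noise and denoise step by step."""
    betas = torch.linspace(1e-4, 0.02, timesteps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)                                   # initial latent noise
    for t in reversed(range(timesteps)):
        eps = eps_model(x, t)                                # predicted noise at step t
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

# Dummy noise predictor in place of a trained latent point-diffusion network.
latent = ddpm_sample(lambda x, t: torch.zeros_like(x), shape=(1, 2048, 4))
```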
Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures
This unique combination of text and shape guidance allows for increased control over the generation process.
ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation
In comparison, VSD works well with various CFG weights, as ancestral sampling from diffusion models does, and simultaneously improves diversity and sample quality at a common CFG weight (i.e., $7.5$).
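The CFG weight here is the standard classifier-free guidance scale. A minimal sketch of how a guided noise prediction is blended at weight $w = 7.5$ (the denoiser below is a placeholder, not ProlificDreamer's model):

```python
# Sketch: classifier-free guidance combining conditional and unconditional predictions.
import torch

def cfg_noise_prediction(eps_model, x, t, cond, w: float = 7.5):
    """eps_uncond + w * (eps_cond - eps_uncond): w > 1 amplifies the condition."""
    eps_uncond = eps_model(x, t, cond=None)    # null-prompt (unconditional) prediction
    eps_cond = eps_model(x, t, cond=cond)      # text-conditional prediction
    return eps_uncond + w * (eps_cond - eps_uncond)

# Placeholder denoiser returning zeros of the right shape.
eps_model = lambda x, t, cond=None: torch.zeros_like(x)
guided = cfg_noise_prediction(eps_model, torch.randn(1, 4, 64, 64), t=500, cond="a chair")
```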
StyleAvatar3D: Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation
The recent advancements in image-text diffusion models have stimulated research interest in large-scale 3D generative models.
VPP: Efficient Conditional 3D Generation via Voxel-Point Progressive Representation
VPP leverages structured voxel representation in the proposed Voxel Semantic Generator and the sparsity of unstructured point representation in the Point Upsampler, enabling efficient generation of multi-category objects.
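The coarse-to-fine handoff described here amounts to generating a low-resolution voxel occupancy first, then treating occupied cells as seed points for upsampling. A toy sketch of that voxel-to-point step follows; the threshold, resolution, and jitter upsampler are assumptions for illustration, not VPP's actual modules.

```python
# Sketch: convert a coarse voxel occupancy grid into seed points, then densify them.
import torch

def voxels_to_points(occupancy: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Turn a (D, H, W) occupancy grid into point coordinates in [-1, 1]^3."""
    idx = torch.nonzero(occupancy > threshold).float()       # (N, 3) occupied cell indices
    res = torch.tensor(occupancy.shape, dtype=torch.float)
    return (idx + 0.5) / res * 2.0 - 1.0                     # cell centers, normalized

def upsample_points(points: torch.Tensor, factor: int = 4, sigma: float = 0.02):
    """Naive point upsampler: jitter each seed point into `factor` nearby points."""
    expanded = points.repeat_interleave(factor, dim=0)
    return expanded + sigma * torch.randn_like(expanded)

coarse = (torch.rand(32, 32, 32) > 0.95).float()  # stand-in for a generated voxel grid
dense = upsample_points(voxels_to_points(coarse))
```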
SyncDreamer: Generating Multiview-consistent Images from a Single-view Image
In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent images from a single-view image.