Text to 3D
59 papers with code • 1 benchmark • 1 dataset
Most implemented papers
High-Fidelity 3D Face Generation from Natural Language Descriptions
Synthesizing high-quality 3D face models from natural language descriptions is very valuable for many applications, including avatar creation, virtual reality, and telepresence.
Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields
Extensive experiments demonstrate that our Text2NeRF outperforms existing methods in producing photo-realistic, multi-view consistent, and diverse 3D scenes from a variety of natural language prompts.
HiFA: High-fidelity Text-to-3D Generation with Advanced Diffusion Guidance
To address texture flickering issues in NeRFs, we introduce a kernel smoothing technique that refines importance sampling weights coarse-to-fine, ensuring accurate and thorough sampling in high-density regions.
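The snippet names the technique but not its exact schedule or implementation. Below is a minimal NumPy sketch of the general idea: blur the per-bin importance-sampling weights along a ray with a Gaussian kernel whose width anneals from wide (coarse) to narrow (fine) over training, then sample bin indices from the smoothed distribution. The function names and the annealing schedule are illustrative assumptions, not HiFA's actual code.

```python
import numpy as np

def smooth_weights(weights, sigma):
    """Blur per-bin sampling weights with a Gaussian kernel of width sigma (in bins)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    smoothed = np.convolve(weights, kernel, mode="same")
    return smoothed / smoothed.sum()

def sample_along_ray(weights, n_samples, step, total_steps, rng):
    """Draw sample indices; kernel width anneals coarse-to-fine over training."""
    # Illustrative schedule (assumption): wide kernel early, near-delta kernel late.
    sigma = max(0.5, 4.0 * (1.0 - step / total_steps))
    p = smooth_weights(weights, sigma)
    return rng.choice(len(weights), size=n_samples, p=p)
```

Smoothing spreads probability mass into the neighborhood of high-density bins, so early training still samples around sharp density spikes instead of repeatedly hitting a single bin, which is the flickering-mitigation intuition the snippet describes.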
Scalable 3D Captioning with Pretrained Models
We introduce Cap3D, an automatic approach for generating descriptive text for 3D objects.
One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization
Single image 3D reconstruction is an important but challenging task that requires extensive knowledge of our natural world.
IT3D: Improved Text-to-3D Generation with Explicit View Synthesis
Recent strides in Text-to-3D techniques have been propelled by distilling knowledge from powerful large text-to-image diffusion models (LDMs).
EfficientDreamer: High-Fidelity and Robust 3D Creation via Orthogonal-view Diffusion Prior
Because 2D image diffusion priors lack awareness of 3D structure, distilling from them leads to the Janus problem, where multi-faced 3D models are generated under the guidance of such diffusion models.
Language-driven Object Fusion into Neural Radiance Fields with Pose-Conditioned Dataset Updates
Specifically, to insert a new foreground object represented by a set of multi-view images into a background radiance field, we use a text-to-image diffusion model to learn and generate combined images that fuse the object of interest into the given background across views.
Progressive Text-to-3D Generation for Automatic 3D Prototyping
We aspire for our work to pave the way for automatic 3D prototyping via natural language descriptions.
Text-to-3D using Gaussian Splatting
Specifically, our method adopts a progressive optimization strategy, which includes a geometry optimization stage and an appearance refinement stage.
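The snippet names two stages but not their update rules. A toy sketch of such a staged schedule is below: each Gaussian carries geometry (position) and appearance (color) parameters, and each stage descends only on its own parameter group while the other stays frozen. The `fake_grad` placeholder stands in for the gradient of a text-conditioned rendering loss; it and all names here are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: each Gaussian carries geometry (position) and appearance (color).
scene = {
    "positions": rng.normal(size=(100, 3)),
    "colors": rng.uniform(size=(100, 3)),
}
init_positions = scene["positions"].copy()
init_colors = scene["colors"].copy()

def fake_grad(param):
    # Placeholder (assumption): stands in for the gradient of a rendering loss.
    return 0.01 * param

def run_stage(scene, trainable, steps, lr=0.1):
    """Gradient-descend only on the parameter groups listed in `trainable`."""
    for _ in range(steps):
        for name in trainable:
            scene[name] = scene[name] - lr * fake_grad(scene[name])

# Stage 1: geometry optimization -- positions move, colors stay fixed.
run_stage(scene, trainable=["positions"], steps=200)

# Stage 2: appearance refinement -- geometry frozen, colors polished.
run_stage(scene, trainable=["colors"], steps=200)
```

Splitting the schedule this way lets the shape settle before appearance details are fit, so color updates are not chasing a moving geometry.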