3D Scene Editing
8 papers with code • 1 benchmark • 1 dataset
Most implemented papers
Feature 3DGS: Supercharging 3D Gaussian Splatting to Enable Distilled Feature Fields
In this work, we go one step further: in addition to radiance field rendering, we enable 3D Gaussian splatting on arbitrary-dimension semantic features via 2D foundation model distillation.
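A minimal sketch of the distillation idea described here, assuming a hypothetical differentiable feature rasterizer `render_feature_map` that splats per-Gaussian feature vectors into an image-space feature map, and a frozen 2D foundation model (e.g., DINO or CLIP) supplying `teacher_features`; the paper's actual rendering pipeline and loss details may differ:

```python
# Sketch only: distill 2D foundation-model features into per-Gaussian semantic features.
import torch
import torch.nn.functional as F

def feature_distillation_loss(gaussian_features, camera, teacher_features, render_feature_map):
    """L1 loss between the splatted 3D feature field and 2D teacher features.

    gaussian_features: (N, D) learnable feature vector per Gaussian
    camera:            camera parameters for the training view
    teacher_features:  (h, w, D) features from a frozen 2D foundation model
    render_feature_map: hypothetical differentiable rasterizer -> (H, W, D)
    """
    rendered = render_feature_map(gaussian_features, camera)         # (H, W, D)
    # Teacher maps are usually lower resolution; upsample to match the render.
    teacher = F.interpolate(
        teacher_features.permute(2, 0, 1).unsqueeze(0),              # (1, D, h, w)
        size=rendered.shape[:2], mode="bilinear", align_corners=False,
    ).squeeze(0).permute(1, 2, 0)                                    # (H, W, D)
    return F.l1_loss(rendered, teacher)
```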
OR-NeRF: Object Removing from 3D Scenes Guided by Multiview Segmentation with Neural Radiance Fields
This paper proposes a novel object-removing pipeline, named OR-NeRF, that can remove objects from 3D scenes with user-given points or text prompts on a single view, achieving better performance in less time than previous works.
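A high-level sketch of such a points- or text-guided removal pipeline; every callable below (single-view segmenter, mask propagator, 2D inpainter, radiance-field trainer) is a hypothetical stand-in and not OR-NeRF's actual components:

```python
# Sketch only: remove an object by masking it in all views, inpainting, and retraining.
from typing import Callable, Sequence

def remove_object(
    images: Sequence,                  # posed multi-view images
    prompt,                            # user-given points or text on a single view
    segment_single_view: Callable,     # (image, prompt) -> mask on the annotated view
    propagate_masks: Callable,         # (images, seed mask) -> masks for all views
    inpaint_2d: Callable,              # (image, mask) -> inpainted image
    train_radiance_field: Callable,    # cleaned images -> 3D scene representation
):
    seed_mask = segment_single_view(images[0], prompt)
    masks = propagate_masks(images, seed_mask)
    cleaned = [inpaint_2d(img, m) for img, m in zip(images, masks)]
    return train_radiance_field(cleaned)
```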
DreamEditor: Text-Driven 3D Scene Editing with Neural Fields
Neural fields have achieved impressive advancements in view synthesis and scene reconstruction.
LatentEditor: Text Driven Local Editing of 3D Scenes
Our approach achieves faster editing speeds and superior output quality compared to existing 3D editing models, bridging the gap between textual instructions and high-quality 3D scene editing in latent space.
Free-Editor: Zero-shot Text-driven 3D Scene Editing
Text-to-Image (T2I) diffusion models have recently gained traction for their versatility and user-friendliness in 2D content generation and editing.
GaussianVTON: 3D Human Virtual Try-ON via Multi-Stage Gaussian Splatting Editing with Image Prompting
The increasing prominence of e-commerce has underscored the importance of Virtual Try-On (VTON).
Chat-Edit-3D: Interactive 3D Scene Editing via Text Prompts
Recent work on image content manipulation based on vision-language pre-training models has been effectively extended to text-driven 3D scene editing.
Neural Surface Priors for Editable Gaussian Splatting
Unlike other methods, our pipeline allows modifications applied to the extracted mesh to be propagated to the proxy representation, from which we recover the updated parameters of the Gaussians.
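A minimal sketch of how a mesh edit could be propagated back to Gaussian centers, assuming each Gaussian is anchored to one mesh triangle by barycentric coordinates plus an offset along the face normal; the paper's proxy representation and parameter-recovery step may differ:

```python
# Sketch only: move Gaussians along with an edited proxy mesh.
import numpy as np

def propagate_mesh_edit(verts_new, faces, face_idx, bary, offsets_normal):
    """Recompute Gaussian centers after the proxy mesh has been edited.

    verts_new:      (V, 3) edited vertex positions
    faces:          (F, 3) triangle vertex indices
    face_idx:       (N,)   triangle index each Gaussian is bound to
    bary:           (N, 3) barycentric coordinates of each Gaussian on its triangle
    offsets_normal: (N,)   signed offset of each Gaussian along the face normal
    """
    tris = verts_new[faces[face_idx]]                    # (N, 3, 3) triangle corners
    on_surface = np.einsum("nk,nkc->nc", bary, tris)     # barycentric interpolation
    normals = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    return on_surface + offsets_normal[:, None] * normals
```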