Text2Mesh: Text-Driven Neural Stylization for Meshes

6 Dec 2021 · Oscar Michel, Roi Bar-On, Richard Liu, Sagie Benaim, Rana Hanocka

In this work, we develop intuitive controls for editing the style of 3D objects. Our framework, Text2Mesh, stylizes a 3D mesh by predicting color and local geometric details which conform to a target text prompt. We consider a disentangled representation of a 3D object using a fixed mesh input (content) coupled with a learned neural network, which we term the neural style field network. In order to modify style, we obtain a similarity score between a text prompt (describing style) and a stylized mesh by harnessing the representational power of CLIP. Text2Mesh requires neither a pre-trained generative model nor a specialized 3D mesh dataset. It can handle low-quality meshes (non-manifold, with boundaries, etc.) of arbitrary genus, and does not require a UV parameterization. We demonstrate the ability of our technique to synthesize a myriad of styles over a wide variety of 3D meshes.
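As a rough illustration of the optimization loop the abstract describes, here is a minimal PyTorch sketch (not the authors' implementation): an MLP "neural style field" maps each vertex position to an RGB color and a displacement along the vertex normal, and rendered views of the stylized mesh are scored against the text prompt with CLIP. `render_views` stands in for a differentiable renderer (e.g. one built on PyTorch3D), and the mesh tensors `verts`, `normals`, and `faces` are assumed to be loaded elsewhere.

```python
import torch
import torch.nn as nn
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

class NeuralStyleField(nn.Module):
    """Maps a vertex position to (color, displacement-along-normal)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.color_head = nn.Sequential(nn.Linear(hidden, 3), nn.Tanh())
        self.disp_head = nn.Sequential(nn.Linear(hidden, 1), nn.Tanh())

    def forward(self, verts):                    # verts: (V, 3)
        h = self.backbone(verts)
        # Bound the displacement so geometry stays close to the content mesh.
        return self.color_head(h), 0.1 * self.disp_head(h)

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

# Encode the style prompt once; it is fixed during optimization.
text = clip.tokenize(["a candle made of colorful crochet"]).to(device)
text_feat = clip_model.encode_text(text).detach()
text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

field = NeuralStyleField().to(device)
opt = torch.optim.Adam(field.parameters(), lr=5e-4)

# verts (V, 3), normals (V, 3), faces (F, 3): fixed input mesh (assumed loaded).
for step in range(1000):
    colors, disp = field(verts)
    styled_verts = verts + disp * normals        # displace along vertex normals
    # Hypothetical differentiable renderer returning (N, 3, 224, 224) views.
    images = render_views(styled_verts, faces, colors)
    img_feat = clip_model.encode_image(images)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = -(img_feat @ text_feat.T).mean()      # maximize CLIP similarity
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the style field is only queried at vertex positions, the same network serves any fixed input mesh; the content is carried by the geometry itself rather than by the network weights.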


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Neural Stylization | Meshes | VQGAN | Mean Opinion Score (Q1: Overall) | 2.83 ± 0.39 | #2 |
| Neural Stylization | Meshes | VQGAN | Mean Opinion Score (Q2: Content) | 3.6 ± 0.68 | #2 |
| Neural Stylization | Meshes | VQGAN | Mean Opinion Score (Q3: Style) | 2.59 ± 0.44 | #2 |
| Neural Stylization | Meshes | Text2Mesh | Mean Opinion Score (Q1: Overall) | 3.9 ± 0.37 | #1 |
| Neural Stylization | Meshes | Text2Mesh | Mean Opinion Score (Q2: Content) | 4.04 ± 0.53 | #1 |
| Neural Stylization | Meshes | Text2Mesh | Mean Opinion Score (Q3: Style) | 3.91 ± 0.51 | #1 |
