Texture Synthesis
71 papers with code • 0 benchmarks • 3 datasets
The fundamental goal of example-based Texture Synthesis is to generate a texture, usually larger than the input, that faithfully captures all the visual characteristics of the exemplar, yet is neither identical to it nor exhibits obviously unnatural-looking artifacts.
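The simplest way to see this goal in code is a naive patch-based baseline: grow a larger canvas by copying randomly sampled patches of the exemplar. Real methods (e.g. image quilting or the adversarial expansion cited below) additionally match patch boundaries or train a generator, but this hedged sketch (function name and parameters are illustrative, not from any listed paper) shows the core idea of reusing exemplar content without reproducing it verbatim:

```python
import numpy as np

def synthesize_texture(exemplar, out_h, out_w, patch=8, seed=0):
    """Toy example-based synthesis: tile the output with patches
    sampled at random locations in the exemplar. No boundary matching,
    so seams remain visible; it only illustrates the setup."""
    rng = np.random.default_rng(seed)
    h, w = exemplar.shape[:2]
    out = np.zeros((out_h, out_w) + exemplar.shape[2:], dtype=exemplar.dtype)
    for y in range(0, out_h, patch):
        for x in range(0, out_w, patch):
            # Random top-left corner of a source patch inside the exemplar.
            sy = rng.integers(0, h - patch + 1)
            sx = rng.integers(0, w - patch + 1)
            # Clip the patch at the right/bottom borders of the output.
            ph = min(patch, out_h - y)
            pw = min(patch, out_w - x)
            out[y:y + ph, x:x + pw] = exemplar[sy:sy + ph, sx:sx + pw]
    return out

# A 32x32 grayscale exemplar expanded to a 64x96 output.
exemplar = np.random.default_rng(1).integers(0, 256, (32, 32), dtype=np.uint8)
result = synthesize_texture(exemplar, 64, 96)
```

Because every output patch is copied from the exemplar, the result shares its local statistics while the global arrangement differs, which is exactly the "faithful but not identical" requirement stated above.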
Source: Non-Stationary Texture Synthesis by Adversarial Expansion
Benchmarks
These leaderboards are used to track progress in Texture Synthesis
Latest papers with no code
Text-Driven Diverse Facial Texture Generation via Progressive Latent-Space Refinement
SDS boosts GANs with more generative modes, while GANs promote more efficient optimization of SDS.
NoiseNCA: Noisy Seed Improves Spatio-Temporal Continuity of Neural Cellular Automata
We demonstrate the effectiveness of our approach in preserving the consistency of NCA dynamics across a wide range of spatio-temporal granularities.
Garment3DGen: 3D Garment Stylization and Texture Generation
We present extensive quantitative and qualitative comparisons on various assets, both real and generated, and provide use cases showing how one can generate simulation-ready 3D garments.
WordRobe: Text-Guided Generation of Textured 3D Garments
We achieve this by first learning a latent representation of 3D garments using a novel coarse-to-fine training strategy and a loss for latent disentanglement, promoting better latent interpolation.
Make-It-Vivid: Dressing Your Animatable Biped Cartoon Characters from Text
Creating and animating 3D biped cartoon characters is crucial and valuable in various applications.
TexRO: Generating Delicate Textures of 3D Models by Recursive Optimization
We propose an optimal viewpoint selection strategy that finds the smallest set of viewpoints covering all the faces of a mesh.
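Selecting a minimal set of viewpoints that together see every face is an instance of minimum set cover. TexRO's exact strategy is not given here, but a hedged sketch of the standard greedy approximation (all names and the toy visibility data are hypothetical) conveys the idea:

```python
def greedy_viewpoint_cover(visibility):
    """visibility: dict mapping viewpoint name -> set of visible face ids.
    Greedily pick the viewpoint that covers the most still-uncovered
    faces; the classic logarithmic-factor approximation to set cover."""
    uncovered = set().union(*visibility.values())
    chosen = []
    while uncovered:
        best = max(visibility, key=lambda v: len(visibility[v] & uncovered))
        gain = visibility[best] & uncovered
        if not gain:  # remaining faces are not visible from any viewpoint
            break
        chosen.append(best)
        uncovered -= gain
    return chosen

# Toy mesh with 8 faces and 4 candidate viewpoints.
views = {
    "front": {0, 1, 2, 3},
    "back":  {4, 5, 6, 7},
    "top":   {2, 3, 4, 5},
    "side":  {0, 7},
}
cover = greedy_viewpoint_cover(views)
```

In the toy data above, "front" and "back" already cover all eight faces, so the greedy loop stops after two picks; in practice the visibility sets would come from rendering or ray-casting the mesh from each candidate view.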
Stochastic Geometry Models for Texture Synthesis of Machined Metallic Surfaces: Sandblasting and Milling
We develop stochastic texture models for sandblasted and milled surfaces based on topography measurements of such surfaces.
TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation
Texturing 3D humans with semantic UV maps remains a challenge due to the difficulty of acquiring a reasonably unfolded UV map.
InTeX: Interactive Text-to-texture Synthesis via Unified Depth-aware Inpainting
Text-to-texture synthesis has become a new frontier in 3D content creation thanks to the recent advances in text-to-image models.
Ultraman: Single Image 3D Human Reconstruction with Ultra Speed and Detail
In this paper, we propose a new method called Ultraman for fast reconstruction of textured 3D human models from a single image.