Texture Synthesis
71 papers with code • 0 benchmarks • 3 datasets
The fundamental goal of example-based Texture Synthesis is to generate a texture, usually larger than the input, that faithfully captures all the visual characteristics of the exemplar while being neither identical to it nor marred by obvious, unnatural-looking artifacts.
Source: Non-Stationary Texture Synthesis by Adversarial Expansion
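To make the goal above concrete, here is a minimal, deliberately naive sketch of example-based synthesis: the output is tiled with patches sampled at random from the exemplar. This is not the method of any paper listed here; real approaches (e.g. image quilting or GAN-based synthesis) additionally handle seams and long-range structure.

```python
import numpy as np

def synthesize(exemplar, out_h, out_w, patch=16, seed=0):
    """Naive example-based texture synthesis: fill the output with
    patches sampled uniformly at random from the exemplar.
    No seam optimization is performed, so visible artifacts remain."""
    rng = np.random.default_rng(seed)
    h, w = exemplar.shape[:2]
    out = np.zeros((out_h, out_w) + exemplar.shape[2:], dtype=exemplar.dtype)
    for y in range(0, out_h, patch):
        for x in range(0, out_w, patch):
            # Pick a random source location in the exemplar.
            sy = int(rng.integers(0, h - patch + 1))
            sx = int(rng.integers(0, w - patch + 1))
            # Crop at the output border if the patch does not fit.
            ph = min(patch, out_h - y)
            pw = min(patch, out_w - x)
            out[y:y+ph, x:x+pw] = exemplar[sy:sy+ph, sx:sx+pw]
    return out
```

Because every patch is copied verbatim, the result reuses only local statistics of the exemplar; the papers below are, in effect, increasingly sophisticated replacements for this loop.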
Benchmarks
These leaderboards are used to track progress in Texture Synthesis
Latest papers
TexTile: A Differentiable Metric for Texture Tileability
We introduce TexTile, a novel differentiable metric that quantifies the degree to which a texture image can be concatenated with itself without introducing repeating artifacts (i.e., its tileability).
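TexTile itself is a learned, differentiable metric, but the notion of tileability can be illustrated with a hand-crafted proxy: measure the intensity discontinuity across the wrap-around seams that appear when the image is tiled. The function below is an illustrative stand-in, not the TexTile metric.

```python
import numpy as np

def seam_error(tex):
    """Naive tileability proxy: mean absolute difference across the
    wrap-around seams produced when `tex` is tiled side by side.
    Low values suggest the image tiles without obvious seams."""
    t = np.asarray(tex, dtype=np.float64)
    horiz = np.abs(t[:, -1] - t[:, 0]).mean()  # right edge vs. left edge
    vert = np.abs(t[-1, :] - t[0, :]).mean()   # bottom edge vs. top edge
    return (horiz + vert) / 2.0
```

A constant texture scores 0 (perfectly tileable), while an image with a strong left-to-right gradient scores high, since its edges do not match when repeated.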
Generic 3D Diffusion Adapter Using Controlled Multi-View Editing
Open-domain 3D object synthesis has been lagging behind image synthesis due to limited data and higher computational complexity.
3DTopia: Large Text-to-3D Generation Model with Hybrid Diffusion Priors
Specifically, it is powered by a text-conditioned tri-plane latent diffusion model, which quickly generates coarse 3D samples for fast prototyping.
Generating Non-Stationary Textures using Self-Rectification
This paper addresses the challenge of example-based non-stationary texture synthesis.
Iterative Token Evaluation and Refinement for Real-World Super-Resolution
Distortion removal involves simple HQ token prediction with LQ images, while texture generation uses a discrete diffusion model to iteratively refine the distortion removal output with a token refinement network.
Enhancing Object Coherence in Layout-to-Image Synthesis
Layout-to-image synthesis is an emerging technique in conditional image generation.
Towards Garment Sewing Pattern Reconstruction from a Single Image
In this work, we explore the challenging problem of recovering garment sewing patterns from daily photos for augmenting these applications.
Generating Infinite-Size Textures using GANs with Patch-by-Patch Paradigm
Existing texture synthesis techniques generate large-scale textures in a single forward pass through the generative model; this approach limits the scalability and flexibility of the images produced.
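The patch-by-patch alternative can be sketched generically: build an arbitrarily large canvas by repeatedly invoking a patch generator and averaging overlapping regions. Here `gen_patch` is a hypothetical callable (any generator network could stand in), and simple overlap averaging replaces whatever conditioning scheme the actual paper uses.

```python
import numpy as np

def generate_large(gen_patch, out_h, out_w, patch=32, overlap=8):
    """Patch-by-patch synthesis sketch: cover the canvas with
    overlapping patches from `gen_patch(size)` (a hypothetical
    generator returning a size x size array) and average overlaps."""
    step = patch - overlap
    out = np.zeros((out_h, out_w))
    weight = np.zeros((out_h, out_w))  # how many patches cover each pixel
    for y in range(0, out_h, step):
        for x in range(0, out_w, step):
            p = gen_patch(patch)
            ph = min(patch, out_h - y)
            pw = min(patch, out_w - x)
            out[y:y+ph, x:x+pw] += p[:ph, :pw]
            weight[y:y+ph, x:x+pw] += 1.0
    return out / weight  # every pixel is covered at least once
```

Because the output size is just a loop bound rather than a fixed tensor shape, the canvas can be made as large as memory allows, which is the scalability argument the paper makes against single-pass generation.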
ShaDDR: Interactive Example-Based Geometry and Texture Generation via 3D Shape Detailization and Differentiable Rendering
Furthermore, we showcase the ability of our method to learn geometric details and textures from shapes reconstructed from real-world photos.
A geometrically aware auto-encoder for multi-texture synthesis
We propose an auto-encoder architecture for multi-texture synthesis.