53 papers with code • 0 benchmarks • 2 datasets
The fundamental goal of example-based Texture Synthesis is to generate a texture, usually larger than the input, that faithfully captures all the visual characteristics of the exemplar, yet is neither identical to it nor marred by obvious unnatural-looking artifacts.
These leaderboards are used to track progress in Texture Synthesis.
Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition.
We show that image generation with PSGANs has the properties of a texture manifold: we can smoothly interpolate between samples in the structured noise space and generate novel samples, which lie perceptually between the textures of the original dataset.
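The interpolation in noise space mentioned here can be sketched with plain linear blending of latent codes; the generator itself is assumed to exist elsewhere, and the latent shape below is illustrative rather than PSGAN's actual configuration:

```python
import numpy as np

def interpolate_latents(z_a, z_b, num_steps=8):
    """Linearly interpolate between two latent noise tensors.

    Feeding each interpolated code through a trained generator would
    yield textures lying perceptually between the two endpoints.
    """
    alphas = np.linspace(0.0, 1.0, num_steps)
    return [(1.0 - a) * z_a + a * z_b for a in alphas]

# Two random spatial noise grids (dimensions here are hypothetical).
rng = np.random.default_rng(0)
z_a = rng.standard_normal((4, 4, 20))
z_b = rng.standard_normal((4, 4, 20))
path = interpolate_latents(z_a, z_b, num_steps=8)
```

Spherical interpolation is sometimes preferred for Gaussian latents, but the linear version shows the idea.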
This paper studies a combination of generative Markov random field (MRF) models and discriminatively trained deep convolutional neural networks (dCNNs) for synthesizing 2D images.
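The MRF component can be illustrated by patch matching on feature maps: each patch of the synthesized features is matched to its most similar exemplar patch and penalized by the squared distance to it. This is a minimal NumPy sketch of that loss, not the paper's implementation (patch size, similarity measure, and shapes are assumptions):

```python
import numpy as np

def extract_patches(feat, size=3):
    """All size x size patches of an (H, W, C) feature map, flattened."""
    H, W, C = feat.shape
    return np.array([feat[i:i + size, j:j + size].ravel()
                     for i in range(H - size + 1)
                     for j in range(W - size + 1)])

def mrf_loss(synth_feat, exemplar_feat, size=3):
    """Match each synthesized patch to the exemplar patch with the
    highest normalized cross-correlation, then sum squared distances."""
    P = extract_patches(synth_feat, size)
    Q = extract_patches(exemplar_feat, size)
    Pn = P / (np.linalg.norm(P, axis=1, keepdims=True) + 1e-8)
    Qn = Q / (np.linalg.norm(Q, axis=1, keepdims=True) + 1e-8)
    match = np.argmax(Pn @ Qn.T, axis=1)  # best exemplar patch per synth patch
    return np.sum((P - Q[match]) ** 2)
```

In a full pipeline this loss would be evaluated on dCNN feature maps and minimized by gradient descent on the synthesized image.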
These losses can improve the quality of large features, improve the separation of content and style, and offer artistic controls such as paint by numbers.
Gatys et al. (2015) showed that pair-wise products of features in a convolutional network are a very effective representation of image textures.
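The pair-wise feature products referred to here are Gram matrices of CNN feature maps. A minimal NumPy sketch (the layer choice and array shape are illustrative; in practice the features come from a network such as VGG):

```python
import numpy as np

def gram_matrix(feature_map):
    """Gram matrix of a CNN feature map: pair-wise inner products
    between channels, averaged over spatial positions.

    feature_map: array of shape (H, W, C), e.g. one layer's activations.
    Returns a (C, C) matrix; matching these statistics across layers is
    the texture descriptor of Gatys et al. (2015).
    """
    H, W, C = feature_map.shape
    F = feature_map.reshape(H * W, C)  # one row per spatial position
    return F.T @ F / (H * W)           # channel-by-channel co-activations
```

Because spatial positions are summed out, the Gram matrix is translation-invariant, which is what makes it a natural texture statistic.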
Generative adversarial networks (GANs) are a recent approach to train generative models of data, which have been shown to work particularly well on image data.
Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input.
This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis.
This paper presents a significant improvement for the synthesis of texture images using convolutional neural networks (CNNs), making use of constraints on the Fourier spectrum of the results.
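One way to impose a Fourier-spectrum constraint is to project the current synthesis onto the set of images sharing the exemplar's amplitude spectrum: keep the image's phases, substitute the exemplar's magnitudes. This grayscale NumPy sketch shows that projection under those assumptions; it is not the paper's exact procedure:

```python
import numpy as np

def match_spectrum(image, exemplar):
    """Replace the amplitude spectrum of `image` with that of
    `exemplar`, keeping the original phases."""
    fft_img = np.fft.fft2(image)
    phase = np.exp(1j * np.angle(fft_img))
    magnitude = np.abs(np.fft.fft2(exemplar))
    # The product is conjugate-symmetric for real inputs, so the
    # inverse transform is real up to numerical error.
    return np.real(np.fft.ifft2(magnitude * phase))
```

In a CNN-based synthesis loop, such a projection could alternate with gradient steps on the texture loss, so the result satisfies both the feature statistics and the spectrum constraint.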