Texture Synthesis

12 papers with code · Computer Vision

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis

CVPR 2016 awentzonline/image-analogies

This paper studies a combination of generative Markov random field (MRF) models and discriminatively trained deep convolutional neural networks (dCNNs) for synthesizing 2D images. The generative MRF acts on higher levels of a dCNN feature pyramid, controlling the image layout at an abstract level.

IMAGE GENERATION TEXTURE SYNTHESIS
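
A minimal sketch (not the authors' released code) of the Markovian patch prior the abstract describes: every patch of the synthesized image's dCNN feature map is matched to its most similar patch in the style image's feature map, and the distance to that match is penalized. The layer choice and the function name are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def mrf_loss(f_synth, f_style, patch=3):
    # f_synth, f_style: (1, C, H, W) feature maps from a higher layer of a
    # pretrained dCNN (e.g. a mid-level VGG layer of the feature pyramid)
    P = F.unfold(f_synth, patch)        # (1, C*patch*patch, N) synthesized patches
    Q = F.unfold(f_style, patch)        # (1, C*patch*patch, M) style patches
    P, Q = P[0].t(), Q[0].t()           # (N, D), (M, D)
    # normalized cross-correlation picks the best-matching style patch
    idx = (F.normalize(P, dim=1) @ F.normalize(Q, dim=1).t()).argmax(dim=1)
    return F.mse_loss(P, Q[idx])        # energy of the MRF acting on CNN features
```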

Improved Texture Networks: Maximizing Quality and Diversity in Feed-forward Stylization and Texture Synthesis

CVPR 2017 DmitryUlyanov/texture_nets

The recent work of Gatys et al., who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems. While their image generation technique uses a slow optimization process, recently several authors have proposed to learn generator neural networks that can produce similar outputs in one quick forward pass.

IMAGE GENERATION IMAGE STYLIZATION TEXTURE SYNTHESIS
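
The excerpt contrasts slow test-time optimization with feed-forward generator networks. A minimal sketch of that setup, with `generator`, `texture_loss` and `sample_noise` as placeholders rather than the repository's API: the generator is trained once against a fixed texture/style loss and then synthesizes in a single forward pass.

```python
import torch

def train_texture_generator(generator, texture_loss, sample_noise, steps=2000, lr=1e-3):
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        z = sample_noise()            # random input (noise, or a content image for stylization)
        x = generator(z)              # one quick forward pass produces an output image
        loss = texture_loss(x)        # e.g. Gram-matrix statistics of a pretrained CNN
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator
```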

Texture Synthesis Using Convolutional Neural Networks

NeurIPS 2015 DmitryUlyanov/texture_nets

Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition. Samples from the model are of high perceptual quality, demonstrating the generative power of neural networks trained in a purely discriminative fashion.

OBJECT RECOGNITION TEXTURE SYNTHESIS
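
A minimal sketch of the texture model the paper introduces: a texture is summarized by Gram matrices, the correlations between feature maps of a CNN trained for object recognition, and synthesis matches these statistics across several layers. Layer selection and per-layer weighting are omitted here.

```python
import torch

def gram_matrix(features):
    # features: (1, C, H, W) activations from one CNN layer
    _, c, h, w = features.shape
    f = features.view(c, h * w)
    return (f @ f.t()) / (c * h * w)        # (C, C) feature correlation matrix

def texture_loss(feats_synth, feats_target):
    # feats_*: lists of per-layer activations for the synthesized and target texture
    return sum(torch.sum((gram_matrix(a) - gram_matrix(b)) ** 2)
               for a, b in zip(feats_synth, feats_target))
```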

Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks

15 Apr 2016 chuanli11/MGANs

This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images).

STYLE TRANSFER TEXTURE SYNTHESIS
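
A minimal sketch, with assumed layer sizes rather than the released MGANs architecture, of the Markovian idea: the discriminator is fully convolutional and classifies local neural patches instead of whole images, so adversarial training acts as a learned patch-level texture prior that a fast feed-forward generator can be trained against.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_channels=256):
        super().__init__()
        # fully convolutional: the output is a grid of real/fake scores, one per patch
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 3, padding=1),
        )

    def forward(self, feats):
        # feats: (N, C, H, W) CNN feature maps of real or synthesized textures
        return self.net(feats)
```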

Two-Stream Convolutional Networks for Dynamic Texture Synthesis

CVPR 2018 ryersonvisionlab/two-stream-dyntex-synth

We introduce a two-stream model for dynamic texture synthesis. Given an input dynamic texture, statistics of filter responses from an object recognition ConvNet encapsulate its per-frame appearance, while statistics of filter responses from an optical flow ConvNet model its dynamics.

OBJECT RECOGNITION OPTICAL FLOW ESTIMATION STYLE TRANSFER TEXTURE SYNTHESIS
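
A minimal sketch of how the two streams could be combined, with `appearance_net`, `flow_net` and `gram_matrix` as placeholders for the pretrained streams and the Gram statistic; treating the flow stream as operating on consecutive frame pairs is an assumption for illustration.

```python
import torch

def dynamic_texture_loss(synth, target, appearance_net, flow_net, gram_matrix,
                         w_app=1.0, w_dyn=1.0):
    # synth, target: lists of frames, each of shape (1, 3, H, W)
    app = sum(torch.sum((gram_matrix(appearance_net(s)) -
                         gram_matrix(appearance_net(t))) ** 2)
              for s, t in zip(synth, target))          # per-frame appearance statistics
    dyn = sum(torch.sum((gram_matrix(flow_net(s0, s1)) -
                         gram_matrix(flow_net(t0, t1))) ** 2)
              for (s0, s1), (t0, t1) in zip(zip(synth, synth[1:]),
                                            zip(target, target[1:])))  # dynamics statistics
    return w_app * app + w_dyn * dyn
```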

Texture Synthesis with Spatial Generative Adversarial Networks

24 Nov 2016 zalandoresearch/spatial_gan

Generative adversarial networks (GANs) are a recent approach to train generative models of data, which have been shown to work particularly well on image data. In the current paper we introduce a new model for texture synthesis based on GAN learning.

TEXTURE SYNTHESIS
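
A minimal sketch of the spatial-GAN idea: the generator is fully convolutional and maps a spatial noise tensor to an image, so the output size follows the size of the noise grid. Layer sizes here are illustrative, not the zalandoresearch/spatial_gan configuration.

```python
import torch
import torch.nn as nn

class SpatialGenerator(nn.Module):
    def __init__(self, z_channels=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_channels, 256, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        # z: (N, z_channels, h, w) spatial noise; a larger noise grid yields a larger texture
        return self.net(z)

# e.g. SpatialGenerator()(torch.randn(1, 20, 16, 16)) yields a 128x128 texture
```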

Learning Texture Manifolds with the Periodic Spatial GAN

ICML 2017 zalandoresearch/psgan

With the proposed Periodic Spatial GAN (PSGAN), we can learn multiple textures from datasets of one or more complex large images. We also show that image generation with PSGANs has properties of a texture manifold: we can smoothly interpolate between samples in the structured noise space and generate novel samples, which lie perceptually between the textures of the original dataset.

IMAGE GENERATION TEXTURE SYNTHESIS
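
A minimal sketch of the texture-manifold property, interpolating only a global noise code and broadcasting it over a spatial grid before feeding a fully convolutional generator; the PSGAN noise tensor also has local and periodic parts, which are omitted here, and `generator` is a placeholder.

```python
import torch

def interpolate_textures(generator, z_a, z_b, steps=8, grid=(16, 16)):
    # z_a, z_b: (C,) global noise codes describing two learned textures
    outputs = []
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1 - t) * z_a + t * z_b                        # point on the line between textures
        z_map = z.view(1, -1, 1, 1).expand(1, z.numel(), *grid)
        outputs.append(generator(z_map))                   # perceptually in-between texture
    return outputs
```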

EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis

ICCV 2017 msmsajjadi/EnhanceNet-Code

Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality.

IMAGE SUPER-RESOLUTION TEXTURE SYNTHESIS
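
For reference, a minimal sketch of the pixel-wise measure the excerpt criticizes: PSNR is a monotone function of mean squared error, which is why it can favor a blurry but "safe" prediction over a sharper, more plausible one with richer texture.

```python
import torch

def psnr(prediction, target, max_val=1.0):
    # prediction, target: image tensors with values in [0, max_val]
    mse = torch.mean((prediction - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```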

TextureGAN: Controlling Deep Image Synthesis with Texture Patches

CVPR 2018 janesjanes/Pytorch-TextureGAN

In this paper, we investigate deep image synthesis guided by sketch, color, and texture. Previous image synthesis methods can be controlled by sketch and color strokes, but we are the first to examine texture control.

IMAGE GENERATION TEXTURE SYNTHESIS
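
A minimal sketch, and an assumption about the setup rather than the released code, of how the three guidance signals could be supplied to an image-to-image generator: sketch, color strokes, and a texture patch stacked as extra input channels.

```python
import torch

def build_generator_input(sketch, color_strokes, texture_patch):
    # sketch: (1, 1, H, W), color_strokes: (1, 3, H, W), texture_patch: (1, 1, H, W)
    # shapes and channel layout are illustrative; the paper places the texture
    # patch inside the region of the sketch it should fill
    return torch.cat([sketch, color_strokes, texture_patch], dim=1)   # (1, 5, H, W)
```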

Incorporating long-range consistency in CNN-based texture generation

3 Jun 2016 anujdutt9/Artistic-Style-Transfer-using-Keras-Tensorflow

Gatys et al. (2015) showed that pair-wise products of features in a convolutional network are a very effective representation of image textures. We propose a simple modification to that representation which makes it possible to incorporate long-range structure into image generation, and to render images that satisfy various symmetry constraints.

IMAGE GENERATION TEXTURE SYNTHESIS
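
A minimal sketch of the modification the abstract refers to: in addition to the plain Gram matrix (the "pair-wise products of features"), feature maps are correlated with spatially shifted copies of themselves, so the statistics capture long-range structure such as regular or symmetric layouts. The function name and normalization are assumptions.

```python
import torch

def shifted_gram(features, dx=0, dy=0):
    # features: (1, C, H, W); correlate f(x, y) with f(x + dx, y + dy)
    _, c, h, w = features.shape
    a = features[:, :, :h - dy, :w - dx].reshape(c, -1)
    b = features[:, :, dy:, dx:].reshape(c, -1)
    return (a @ b.t()) / a.shape[1]          # (C, C) cross-correlation of shifted maps
```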