Perceptual Losses for Real-Time Style Transfer and Super-Resolution

27 Mar 2016  ·  Justin Johnson, Alexandre Alahi, Li Fei-Fei ·

We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.
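The core idea can be sketched in code: instead of comparing two images pixel by pixel, pass both through a fixed feature extractor and compare the resulting feature maps. The minimal NumPy sketch below uses a random fixed convolution as a stand-in for the pretrained network (the paper uses layers of VGG-16); `conv2d_valid` and `feature_reconstruction_loss` are illustrative names, not from the paper.

```python
import numpy as np

def conv2d_valid(img, kernels):
    # img: (H, W); kernels: (K, kh, kw) -> feature maps (K, H-kh+1, W-kw+1).
    # Stand-in for one conv layer of a pretrained network such as VGG-16.
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return np.maximum(out, 0.0)  # ReLU nonlinearity

def feature_reconstruction_loss(y, y_hat, kernels):
    # Mean squared distance between feature maps phi(y) and phi(y_hat),
    # i.e. a per-feature rather than per-pixel comparison.
    f_y = conv2d_valid(y, kernels)
    f_hat = conv2d_valid(y_hat, kernels)
    return np.mean((f_y - f_hat) ** 2)

rng = np.random.default_rng(0)
kernels = rng.standard_normal((4, 3, 3))  # stand-in "pretrained" filters
target = rng.random((16, 16))
noisy = target + 0.1 * rng.standard_normal((16, 16))

# An identical image has zero perceptual loss; a perturbed copy does not.
print(feature_reconstruction_loss(target, target, kernels))
print(feature_reconstruction_loss(target, noisy, kernels))
```

In the paper this loss replaces the per-pixel loss when training the feed-forward transformation network, so the expensive feature extraction is only needed at training time, not at inference.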


Results from the Paper


| Task                    | Dataset                | Model           | Metric    | Value   | Global Rank |
|-------------------------|------------------------|-----------------|-----------|---------|-------------|
| Image Super-Resolution  | BSD100 - 4x upscaling  | Perceptual Loss | PSNR      | 24.95   | # 58        |
| Image Super-Resolution  | BSD100 - 4x upscaling  | Perceptual Loss | SSIM      | 0.6317  | # 54        |
| Nuclear Segmentation    | Cell17                 | FnsNet          | F1-score  | 0.7413  | # 3         |
| Nuclear Segmentation    | Cell17                 | FnsNet          | Dice      | 0.6165  | # 4         |
| Nuclear Segmentation    | Cell17                 | FnsNet          | Hausdorff | 25.9102 | # 4         |

Methods


No methods listed for this paper.