VGG Loss is a type of content loss introduced in the Perceptual Losses for Real-Time Style Transfer and Super-Resolution framework. As an alternative to pixel-wise losses, VGG Loss attempts to be closer to perceptual similarity. The VGG loss is based on the ReLU activation layers of the pre-trained 19-layer VGG network. With $\phi_{i,j}$ we indicate the feature map obtained by the $j$-th convolution (after activation) before the $i$-th maxpooling layer within the VGG19 network, which we consider given. We then define the VGG loss as the Euclidean distance between the feature representations of a reconstructed image $G_{\theta_{G}}\left(I^{LR}\right)$ and the reference image $I^{HR}$:
$$ l_{VGG/i.j} = \frac{1}{W_{i,j}H_{i,j}}\sum_{x=1}^{W_{i,j}}\sum_{y=1}^{H_{i,j}}\left(\phi_{i,j}\left(I^{HR}\right)_{x, y} - \phi_{i,j}\left(G_{\theta_{G}}\left(I^{LR}\right)\right)_{x, y}\right)^{2}$$
Here $W_{i,j}$ and $H_{i,j}$ describe the dimensions of the respective feature maps within the VGG network.
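Below is a minimal PyTorch sketch of this loss, assuming torchvision (≥ 0.13 for the `weights` API) and its pretrained VGG19. The choice of layer index 36 is an assumption corresponding to $\phi_{5,4}$ (the activation after the 4th convolution before the 5th pooling layer, the "VGG54" variant discussed in the SRGAN paper); the ImageNet normalization constants are the standard ones expected by torchvision's VGG models, not part of the formula above.

```python
import torch
import torch.nn as nn
from torchvision import models


class VGGLoss(nn.Module):
    """Euclidean distance between VGG19 feature maps of two images (here phi_{5,4})."""

    def __init__(self, layer_index: int = 36):
        super().__init__()
        # Feature extractor up to (and including) the ReLU after conv5_4,
        # i.e. the 4th convolution (after activation) before the 5th max-pooling layer.
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
        self.features = nn.Sequential(*list(vgg.features.children())[:layer_index]).eval()
        for p in self.features.parameters():
            p.requires_grad = False  # the VGG network is fixed ("given")

        # torchvision's VGG19 expects ImageNet-normalized RGB inputs in [0, 1].
        self.register_buffer("mean", torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
        self.register_buffer("std", torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        # sr: reconstructed image G(I^LR); hr: reference image I^HR, both (N, 3, H, W) in [0, 1]
        phi_sr = self.features((sr - self.mean) / self.std)
        phi_hr = self.features((hr - self.mean) / self.std)
        _, _, h, w = phi_hr.shape
        # Sum of squared feature differences, scaled by 1 / (W_{i,j} * H_{i,j}),
        # averaged over the batch.
        return ((phi_hr - phi_sr) ** 2).sum(dim=(1, 2, 3)).div(w * h).mean()
```

In a super-resolution setup this term is typically combined with an adversarial loss term on the generator output; using a plain mean over all feature elements (e.g. `F.mse_loss`) instead of the $1/(W_{i,j}H_{i,j})$ scaling only changes the loss by a constant factor.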
Source: Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
| Task | Papers | Share |
|---|---|---|
| Super-Resolution | 33 | 39.76% |
| Image Super-Resolution | 22 | 26.51% |
| Quantization | 2 | 2.41% |
| Deep Learning | 1 | 1.20% |
| Data Compression | 1 | 1.20% |
| Image Compression | 1 | 1.20% |
| Domain Adaptation | 1 | 1.20% |
| Infrared image super-resolution | 1 | 1.20% |
| Denoising | 1 | 1.20% |