Path Length Regularization

Introduced by Karras et al. in Analyzing and Improving the Image Quality of StyleGAN

Path Length Regularization is a type of regularization for generative adversarial networks that encourages good conditioning in the mapping from latent codes to images. The idea is that a fixed-size step in the latent space $\mathcal{W}$ should result in a non-zero, fixed-magnitude change in the image.

We can measure the deviation from this ideal empirically by stepping in random directions in the image space and observing the corresponding $\mathbf{w}$ gradients. These gradients should have approximately equal length regardless of $\mathbf{w}$ or the image-space direction, indicating that the mapping from latent space to image space is well-conditioned.
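As a toy illustration of this conditioning criterion (a sketch of ours, not from the paper), consider a linear "generator" $g(\mathbf{w}) = A\mathbf{w}$, whose Jacobian is simply $A$: when $A$ is a scaled orthogonal matrix, every unit image-space direction maps back to a gradient of the same length, while an ill-conditioned $A$ spreads those lengths out.

```python
# Toy NumPy demonstration (illustrative, not from the paper): for a linear map
# g(w) = A w the Jacobian is A, so the gradient of y . g(w) w.r.t. w is A^T y.
import numpy as np

rng = np.random.default_rng(0)
n = 64  # latent and image dimensionality (kept equal for simplicity)

q, _ = np.linalg.qr(rng.standard_normal((n, n)))
well_conditioned = 3.0 * q                     # all singular values equal to 3
ill_conditioned = rng.standard_normal((n, n))  # widely spread singular values

for name, A in [("well-conditioned", well_conditioned),
                ("ill-conditioned", ill_conditioned)]:
    y = rng.standard_normal((1000, n))
    y /= np.linalg.norm(y, axis=1, keepdims=True)  # random unit directions
    lengths = np.linalg.norm(y @ A, axis=1)        # ||A^T y|| per direction
    print(f"{name}: mean={lengths.mean():.3f}, std={lengths.std():.3f}")
```

The well-conditioned map yields a near-zero standard deviation of the gradient lengths; the ill-conditioned one does not.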

At a single $\mathbf{w} \in \mathcal{W}$, the local metric scaling properties of the generator mapping $g\left(\mathbf{w}\right) : \mathcal{W} \rightarrow \mathcal{Y}$ are captured by the Jacobian matrix $\mathbf{J}_{\mathbf{w}} = \partial g\left(\mathbf{w}\right)/\partial\mathbf{w}$. Motivated by the desire to preserve the expected lengths of vectors regardless of direction, we formulate the regularizer as:

$$ \mathbb{E}_{\mathbf{w},\mathbf{y} \sim \mathcal{N}\left(0, \mathbf{I}\right)} \left(||\mathbf{J}^{\mathbf{T}}_{\mathbf{w}}\mathbf{y}||_{2} - a\right)^{2} $$

where $\mathbf{y}$ are random images with normally distributed pixel intensities, and $\mathbf{w} \sim f\left(\mathbf{z}\right)$, where $\mathbf{z}$ are normally distributed and $f$ is the mapping network.

To avoid explicit computation of the Jacobian matrix, we use the identity $\mathbf{J}^{\mathbf{T}}_{\mathbf{w}}\mathbf{y} = \nabla_{\mathbf{w}}\left(g\left(\mathbf{w}\right) \cdot \mathbf{y}\right)$, which is efficiently computable using standard backpropagation. The constant $a$ is set dynamically during optimization as the long-running exponential moving average of the lengths $||\mathbf{J}^{\mathbf{T}}_{\mathbf{w}}\mathbf{y}||_{2}$, allowing the optimization to find a suitable global scale by itself.
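A minimal PyTorch-style sketch of this computation follows. It is our illustration, not the official StyleGAN2 code; the names `path_length_penalty`, `pl_mean`, and `pl_decay` are assumptions, and it presumes `fake_images` was produced from `w` with gradients enabled and an NCHW layout.

```python
import math
import torch

def path_length_penalty(fake_images, w, pl_mean, pl_decay=0.01):
    """One Monte Carlo sample of the path length regularizer (illustrative)."""
    # y: random images with normally distributed pixel intensities, scaled by
    # 1/sqrt(num_pixels) so the dot product magnitude is resolution-independent.
    y = torch.randn_like(fake_images) / math.sqrt(
        fake_images.shape[2] * fake_images.shape[3])
    # J_w^T y = grad_w(g(w) . y): one backward pass, no explicit Jacobian.
    # create_graph=True lets the penalty itself be backpropagated through.
    (jvp,) = torch.autograd.grad(outputs=(fake_images * y).sum(),
                                 inputs=w, create_graph=True)
    lengths = jvp.square().sum(dim=1).sqrt()  # ||J_w^T y||_2 per sample
    # a: long-running exponential moving average of the observed lengths.
    pl_mean = pl_mean + pl_decay * (lengths.detach().mean() - pl_mean)
    penalty = (lengths - pl_mean).square().mean()
    return penalty, pl_mean
```

In practice StyleGAN2 evaluates such regularizers lazily, only once every several minibatches, which the authors report does not hurt their effectiveness.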

The authors find that path length regularization leads to more reliable and consistently behaving models, making architecture exploration easier. They also observe that the smoother generator is significantly easier to invert.

Source: Analyzing and Improving the Image Quality of StyleGAN

Latest Papers

PAPER | AUTHORS | DATE
Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2 Network | Jialu Huang, Jing Liao, Sam Kwong | 2020-10-12
MaterialGAN: Reflectance Capture using a Generative SVBRDF Model | Yu Guo, Cameron Smith, Miloš Hašan, Kalyan Sunkavalli, Shuang Zhao | 2020-09-30
GIF: Generative Interpretable Faces | Partha Ghosh, Pravir Singh Gupta, Roy Uziel, Anurag Ranjan, Michael Black, Timo Bolkart | 2020-08-31
Improving the Performance of Fine-Grain Image Classifiers via Generative Data Augmentation | Shashank Manjunath, Aitzaz Nathaniel, Jeff Druce, Stan German | 2020-08-12
High Resolution Zero-Shot Domain Adaptation of Synthetically Rendered Face Images | Stephan J. Garbin, Marek Kowalski, Matthew Johnson, Jamie Shotton | 2020-06-26
Differentiable Augmentation for Data-Efficient GAN Training | Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, Song Han | 2020-06-18
Training Generative Adversarial Networks with Limited Data | Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila | 2020-06-11
Using Generative Models for Pediatric wbMRI | Alex Chang, Vinith M. Suriyakumar, Abhishek Moturu, Nipaporn Tewattanarat, Andrea Doria, Anna Goldenberg | 2020-06-01
Network Bending: Manipulating The Inner Representations of Deep Generative Models | Terence Broad, Frederic Fol Leymarie, Mick Grierson | 2020-05-25
Recognizing Families through Images with Pretrained Encoder | Tuan-Duy H. Nguyen, Huu-Nghia H. Nguyen, Hieu Dao | 2020-05-24
StyleGAN2 Distillation for Feed-forward Image Manipulation | Yuri Viazovetskyi, Vladimir Ivashkin, Evgeny Kashin | 2020-03-07
Analyzing and Improving the Image Quality of StyleGAN | Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, Timo Aila | 2019-12-03
