Path Length Regularization is a type of regularization for generative adversarial networks that encourages good conditioning in the mapping from latent codes to images. The goal is that a fixed-size step in the latent space $\mathcal{W}$ results in a non-zero, fixed-magnitude change in the image.
We can measure the deviation from this ideal empirically by stepping in random directions in the image space and observing the corresponding $\mathbf{w}$ gradients. These gradients should have close to equal length regardless of $\mathbf{w}$ or the image-space direction, indicating that the mapping from the latent space to image space is well-conditioned.
At a single $\mathbf{w} \in \mathcal{W}$, the local metric scaling properties of the generator mapping $g : \mathcal{W} \rightarrow \mathcal{Y}$ are captured by the Jacobian matrix $\mathbf{J_{w}} = \partial{g}\left(\mathbf{w}\right)/\partial{\mathbf{w}}$. Motivated by the desire to preserve the expected lengths of vectors regardless of the direction, we formulate the regularizer as:
$$ \mathbb{E}_{\mathbf{w},\mathbf{y} \sim \mathcal{N}\left(0, \mathbf{I}\right)} \left(\left\lVert\mathbf{J}^{\mathbf{T}}_{\mathbf{w}}\mathbf{y}\right\rVert_{2} - a\right)^{2} $$
where $\mathbf{y}$ are random images with normally distributed pixel intensities, and $\mathbf{w} \sim f\left(\mathbf{z}\right)$, where $\mathbf{z}$ are normally distributed.
To avoid explicit computation of the Jacobian matrix, we use the identity $\mathbf{J}^{\mathbf{T}}_{\mathbf{w}}\mathbf{y} = \nabla_{\mathbf{w}}\left(g\left(\mathbf{w}\right) \cdot \mathbf{y}\right)$, which is efficiently computable using standard backpropagation. The constant $a$ is set dynamically during optimization as the long-running exponential moving average of the lengths $\left\lVert\mathbf{J}^{\mathbf{T}}_{\mathbf{w}}\mathbf{y}\right\rVert_{2}$, allowing the optimization to find a suitable global scale by itself.
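The mechanics above can be sketched in a few lines of NumPy. This is not the authors' implementation: it uses a toy linear "generator" $g(\mathbf{w}) = A\mathbf{w}$, for which the Jacobian is simply $A$ and the identity $\mathbf{J}^{\mathbf{T}}_{\mathbf{w}}\mathbf{y} = \nabla_{\mathbf{w}}(g(\mathbf{w}) \cdot \mathbf{y}) = A^{\mathbf{T}}\mathbf{y}$ holds in closed form (a real generator would compute this gradient via backpropagation). All names and hyperparameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "generator" g(w) = A w, so its Jacobian J_w = A for every w.
# (In StyleGAN2, g is a deep network and J^T y is obtained by backprop.)
latent_dim, image_dim = 8, 16
A = rng.normal(size=(image_dim, latent_dim))

def path_length_penalty(y, a):
    """Penalty (||J^T y||_2 - a)^2 for one random image direction y."""
    jt_y = A.T @ y                    # J^T y = grad_w (g(w) . y) for linear g
    length = np.linalg.norm(jt_y)     # ||J^T y||_2
    return (length - a) ** 2, length

# The target a is a long-running exponential moving average of the lengths,
# so the optimization finds a global scale by itself.
a, decay = 0.0, 0.99
for _ in range(200):
    w = rng.normal(size=latent_dim)   # w ~ f(z); unused here since g is linear
    y = rng.normal(size=image_dim)    # random image with N(0, I) pixel values
    penalty, length = path_length_penalty(y, a)
    a = decay * a + (1.0 - decay) * length
```

In training, `penalty` would be added (with a weight) to the generator loss; the EMA update of `a` uses the detached length so the target itself is not optimized against.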
The authors find that path length regularization leads to more reliable and consistently behaving models, making architecture exploration easier. They also observe that the smoother generator is significantly easier to invert.
PAPER  DATE

Unsupervised Image-to-Image Translation via Pretrained StyleGAN2 Network  2020-10-12
MaterialGAN: Reflectance Capture using a Generative SVBRDF Model  2020-09-30
GIF: Generative Interpretable Faces  2020-08-31
Improving the Performance of Fine-Grain Image Classifiers via Generative Data Augmentation  2020-08-12
High Resolution Zero-Shot Domain Adaptation of Synthetically Rendered Face Images  2020-06-26
Differentiable Augmentation for Data-Efficient GAN Training  2020-06-18
Training Generative Adversarial Networks with Limited Data  2020-06-11
Using Generative Models for Pediatric wbMRI  2020-06-01
Network Bending: Manipulating The Inner Representations of Deep Generative Models  2020-05-25
Recognizing Families through Images with Pretrained Encoder  2020-05-24
StyleGAN2 Distillation for Feed-forward Image Manipulation  2020-03-07
Analyzing and Improving the Image Quality of StyleGAN  2019-12-03
TASK  PAPERS  SHARE

Image Generation  4  28.57%
Domain Adaptation  1  7.14%
Image Morphing  1  7.14%
Style Transfer  1  7.14%
Decision Making  1  7.14%
Colorization  1  7.14%
Image-to-Image Translation  1  7.14%
Semantic Similarity  1  7.14%
Semantic Textual Similarity  1  7.14%