Cycle Consistency Loss is a loss function used in generative adversarial networks that perform unpaired image-to-image translation. It was introduced with the CycleGAN architecture. For two domains $X$ and $Y$, we want to learn mappings $G : X \rightarrow Y$ and $F : Y \rightarrow X$. To enforce the intuition that these mappings should be approximate inverses of each other (and that each mapping should be a bijection), Cycle Consistency Loss encourages $F\left(G\left(x\right)\right) \approx x$ and $G\left(F\left(y\right)\right) \approx y$. It reduces the space of possible mapping functions by enforcing both forward and backward cycle consistency:
$$ \mathcal{L}_{cyc}\left(G, F\right) = \mathbb{E}_{x \sim p_{data}\left(x\right)}\left[||F\left(G\left(x\right)\right) - x||_{1}\right] + \mathbb{E}_{y \sim p_{data}\left(y\right)}\left[||G\left(F\left(y\right)\right) - y||_{1}\right] $$
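The loss above can be sketched numerically. The snippet below is a minimal illustration, not the CycleGAN implementation: `G` and `F` are hypothetical stand-in functions (a simple affine map and its exact inverse) in place of the learned generator networks, and the per-sample $L_1$ norm is computed as a mean absolute difference.

```python
import numpy as np

def G(x):
    # Toy "X -> Y" mapping (assumption: a simple invertible transform,
    # standing in for the learned generator G).
    return 2.0 * x + 1.0

def F(y):
    # Toy "Y -> X" mapping, constructed here as the exact inverse of G
    # (in CycleGAN, F is a second learned generator, only approximately inverse).
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x_batch, y_batch):
    # L_cyc(G, F) = E_x[ ||F(G(x)) - x||_1 ] + E_y[ ||G(F(y)) - y||_1 ]
    # The expectation is approximated by an average over the batch.
    forward = np.mean(np.abs(F(G(x_batch)) - x_batch))   # forward cycle: x -> G(x) -> F(G(x))
    backward = np.mean(np.abs(G(F(y_batch)) - y_batch))  # backward cycle: y -> F(y) -> G(F(y))
    return forward + backward

x = np.random.rand(4, 3)  # batch of 4 samples from domain X
y = np.random.rand(4, 3)  # batch of 4 samples from domain Y
loss = cycle_consistency_loss(x, y)
```

Because these toy mappings are exact inverses, the loss is numerically zero; during CycleGAN training, minimizing this term pushes the learned $G$ and $F$ toward the same property.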
Source: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
Task | Papers | Share |
---|---|---|
Image-to-Image Translation | 67 | 13.54% |
Domain Adaptation | 35 | 7.07% |
Image Generation | 28 | 5.66% |
Semantic Segmentation | 22 | 4.44% |
Style Transfer | 20 | 4.04% |
Unsupervised Domain Adaptation | 14 | 2.83% |
Super-Resolution | 12 | 2.42% |
Object Detection | 11 | 2.22% |
Voice Conversion | 11 | 2.22% |