PatchGAN is a type of discriminator for generative adversarial networks that penalizes structure only at the scale of local image patches. The PatchGAN discriminator tries to classify whether each $N \times N$ patch in an image is real or fake. It is run convolutionally across the image, and all responses are averaged to produce the ultimate output of $D$. Such a discriminator effectively models the image as a Markov random field, assuming independence between pixels separated by more than a patch diameter. It can be understood as a form of texture/style loss.
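The patch size $N$ is just the receptive field of one unit in the discriminator's output map: stacking strided convolutions widens the region of the input each output unit sees. A minimal sketch of that calculation, using the layer configuration reported in the pix2pix paper (4×4 convolutions, three with stride 2 followed by two with stride 1), which yields the well-known 70×70 PatchGAN:

```python
def receptive_field(layers):
    """Receptive field of one output unit, given (kernel, stride)
    pairs listed from the first conv layer to the last."""
    rf = 1
    # Walk the layers backward: each earlier layer expands the
    # field by (stride * previous extent) plus its kernel overlap.
    for kernel, stride in reversed(layers):
        rf = (rf - 1) * stride + kernel
    return rf

# Default pix2pix discriminator: 4x4 convs; three stride-2 layers,
# then two stride-1 layers (the last maps to the 1-channel output).
pix2pix_layers = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
print(receptive_field(pix2pix_layers))  # -> 70
```

Shrinking or growing this stack is how the paper's 1×1 "PixelGAN" and full-image "ImageGAN" variants trade off between texture detail and global coherence.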

Source: Image-to-Image Translation with Conditional Adversarial Networks


| Task | Papers | Share |
| --- | --- | --- |
| Image-to-Image Translation | 92 | 14.26% |
| Image Generation | 41 | 6.36% |
| Domain Adaptation | 33 | 5.12% |
| Semantic Segmentation | 28 | 4.34% |
| Style Transfer | 24 | 3.72% |
| Test | 14 | 2.17% |
| Super-Resolution | 14 | 2.17% |
| Denoising | 12 | 1.86% |
| Object Detection | 11 | 1.71% |
