Ensembling Off-the-shelf Models for GAN Training

The advent of large-scale training has produced a cornucopia of powerful visual recognition models. However, generative models, such as GANs, have traditionally been trained from scratch in an unsupervised manner. Can the collective "knowledge" from a large bank of pretrained vision models be leveraged to improve GAN training? If so, with so many models to choose from, which one(s) should be selected, and in what manner are they most effective? We find that pretrained computer vision models can significantly improve performance when used in an ensemble of discriminators. Notably, the particular subset of selected models greatly affects performance. We propose an effective selection mechanism, by probing the linear separability between real and fake samples in pretrained model embeddings, choosing the most accurate model, and progressively adding it to the discriminator ensemble. Interestingly, our method can improve GAN training in both limited data and large-scale settings. Given only 10k training samples, our FID on LSUN Cat matches the StyleGAN2 trained on 1.6M images. On the full dataset, our method improves FID by 1.5x to 2x on cat, church, and horse categories of LSUN.
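The selection mechanism described above — probing how linearly separable real and fake samples are in each pretrained model's embedding space, then picking the most accurate model — can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the logistic-regression probe, and the synthetic feature banks are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_accuracy(real_feats, fake_feats, seed=0):
    """Accuracy of a linear classifier separating real vs. fake embeddings.

    real_feats, fake_feats: (N, D) arrays of features from one pretrained model.
    Higher held-out accuracy means the model's embedding space better
    distinguishes real images from current generator samples.
    """
    X = np.concatenate([real_feats, fake_feats])
    y = np.concatenate([np.ones(len(real_feats)), np.zeros(len(fake_feats))])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed, stratify=y
    )
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

def select_best_model(feature_banks):
    """Pick the pretrained model whose embeddings best separate real from fake.

    feature_banks: {model_name: (real_feats, fake_feats)}.
    Returns (best_model_name, {model_name: probe_accuracy}).
    """
    scores = {name: probe_accuracy(r, f) for name, (r, f) in feature_banks.items()}
    return max(scores, key=scores.get), scores

# Illustrative usage with synthetic embeddings (stand-ins for features
# extracted by two hypothetical pretrained backbones):
rng = np.random.default_rng(0)
banks = {
    "backbone_a": (rng.normal(1.0, 1.0, (200, 16)),   # real/fake well separated
                   rng.normal(-1.0, 1.0, (200, 16))),
    "backbone_b": (rng.normal(0.0, 1.0, (200, 16)),   # real/fake indistinguishable
                   rng.normal(0.0, 1.0, (200, 16))),
}
best, scores = select_best_model(banks)
```

In the paper's full procedure this selection is repeated progressively: the chosen model's feature extractor is added as an additional (frozen-backbone) discriminator, training continues, and the probe is re-run to pick the next model to append to the ensemble.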

Published at CVPR 2022.

Results from the Paper


| Task             | Dataset                  | Model            | Metric               | Value       | Global Rank |
|------------------|--------------------------|------------------|----------------------|-------------|-------------|
| Image Generation | AFHQ Cat                 | Vision-aided GAN | clean-KID            | 0.46 ± .03  | # 1         |
| Image Generation | AFHQ Cat                 | Vision-aided GAN | clean-FID            | 2.51 ± .02  | # 1         |
| Image Generation | AFHQ Cat                 | Vision-aided GAN | FID                  | 2.44        | # 3         |
| Image Generation | AFHQ Dog                 | Vision-aided GAN | clean-KID            | 0.38 ± .01  | # 1         |
| Image Generation | AFHQ Dog                 | Vision-aided GAN | clean-FID            | 4.73 ± .02  | # 1         |
| Image Generation | AFHQ Dog                 | Vision-aided GAN | FID                  | 4.60        | # 2         |
| Image Generation | AFHQ Wild                | Vision-aided GAN | clean-KID            | 0.38 ± .02  | # 1         |
| Image Generation | AFHQ Wild                | Vision-aided GAN | clean-FID            | 2.35 ± .02  | # 1         |
| Image Generation | AFHQ Wild                | Vision-aided GAN | FID                  | 2.25        | # 3         |
| Image Generation | LSUN Cat 256 x 256       | Vision-aided GAN | FID                  | 3.87        | # 1         |
| Image Generation | LSUN Cat 256 x 256       | Vision-aided GAN | Clean-FID (trainfull)| 3.98 ± 0.03 | # 1         |
| Image Generation | LSUN Churches 256 x 256  | Vision-aided GAN | FID                  | 1.72        | # 2         |
| Image Generation | LSUN Churches 256 x 256  | Vision-aided GAN | Clean-FID (trainfull)| 1.72 ± 0.01 | # 1         |
| Image Generation | LSUN Horse 256 x 256     | Vision-aided GAN | FID                  | 2.15        | # 1         |
| Image Generation | LSUN Horse 256 x 256     | Vision-aided GAN | Clean-FID (trainfull)| 2.11        | # 1         |
