Adversarial Generation of Continuous Images

In most existing learning systems, images are typically viewed as 2D pixel arrays. However, in another paradigm gaining popularity, a 2D image is represented as an implicit neural representation (INR): an MLP that predicts an RGB pixel value given its (x,y) coordinate. In this paper, we propose two novel architectural techniques for building INR-based image decoders, factorized multiplicative modulation and multi-scale INRs, and use them to build a state-of-the-art continuous image GAN. Previous attempts to adapt INRs for image generation were limited to MNIST-like datasets and did not scale to complex real-world data. Our proposed INR-GAN architecture improves the performance of continuous image generators several-fold, greatly reducing the gap between continuous image GANs and pixel-based ones. Beyond this, we explore several appealing properties of INR-based decoders, such as out-of-the-box superresolution, meaningful image-space interpolation, accelerated inference of low-resolution images, the ability to extrapolate outside of image boundaries, and a strong geometric prior. The project page is located at https://universome.github.io/inr-gan.
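As a minimal sketch of the INR idea described above (not the paper's actual architecture, which additionally uses factorized multiplicative modulation and multi-scale INRs), the snippet below shows an MLP mapping an (x,y) coordinate to an RGB value, and how evaluating it on a denser coordinate grid yields out-of-the-box superresolution. All names, layer sizes, and activations here are illustrative assumptions, not taken from the paper.

```python
# Sketch: a 2D image as an implicit neural representation (INR).
# The MLP maps a pixel coordinate (x, y) to an RGB value; rendering at any
# resolution is just evaluating the MLP on a coordinate grid.
import torch
import torch.nn as nn


class INR(nn.Module):
    def __init__(self, hidden_dim: int = 256, num_layers: int = 4):
        super().__init__()
        layers, in_dim = [], 2  # input: (x, y) coordinate
        for _ in range(num_layers):
            layers += [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
            in_dim = hidden_dim
        layers += [nn.Linear(in_dim, 3), nn.Sigmoid()]  # output: RGB in [0, 1]
        self.net = nn.Sequential(*layers)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords)


def render(inr: INR, height: int, width: int) -> torch.Tensor:
    """Evaluate the INR on a dense (H*W, 2) coordinate grid and reshape to an image."""
    ys = torch.linspace(-1.0, 1.0, height)
    xs = torch.linspace(-1.0, 1.0, width)
    grid = torch.stack(torch.meshgrid(xs, ys, indexing="xy"), dim=-1)  # (H, W, 2)
    rgb = inr(grid.reshape(-1, 2))                                     # (H*W, 3)
    return rgb.reshape(height, width, 3)


inr = INR()
img_256 = render(inr, 256, 256)      # sample at the training resolution
img_1024 = render(inr, 1024, 1024)   # same weights, denser grid: "free" superresolution
```

In an INR-based GAN, the generator would produce (or modulate) the weights of such an MLP per image rather than outputting a pixel array directly; that weight-generation mechanism is what the paper's proposed techniques address.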


Datasets


Task             | Dataset                  | Model       | Metric | Value | Global Rank
Image Generation | FFHQ 256 x 256           | INR-GAN-bil | FID    | 4.95  | # 17
Image Generation | LSUN Churches 256 x 256  | INR-GAN-bil | FID    | 4.04  | # 12

Methods


No methods listed for this paper.