Quaternion Generative Adversarial Networks

19 Apr 2021 · Eleonora Grassucci, Edoardo Cicero, Danilo Comminiello

The latest Generative Adversarial Networks (GANs) achieve outstanding results through large-scale training, employing models with millions of parameters that require extensive computational resources. Building such huge models undermines their replicability and increases training instability. Moreover, multi-channel data, such as images or audio, are usually processed by real-valued convolutional networks that flatten and concatenate the input channels, often losing intra-channel spatial relations. To address these issues of complexity and information loss, we propose a family of quaternion-valued generative adversarial networks (QGANs). QGANs exploit the properties of quaternion algebra, e.g., the Hamilton product, which allows channels to be processed as a single entity and internal latent relations to be captured, while reducing the overall number of parameters by a factor of 4. We show how to design QGANs and how to extend the proposed approach to advanced models. We compare the proposed QGANs with their real-valued counterparts on several image generation benchmarks. Results show that QGANs obtain better FID scores than real-valued GANs and generate visually pleasing images. Furthermore, QGANs save up to 75% of the training parameters. We believe these results may pave the way to novel, more accessible GANs capable of improving performance while saving computational resources.
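The building block behind the parameter saving described above is a quaternion convolution driven by the Hamilton product. The following is a minimal sketch, assuming PyTorch; the class name, initialization, and variable names are illustrative and not the authors' released implementation. It shows how four-channel groups are treated as a single quaternion and mixed through four shared real-valued sub-kernels, which is where the roughly 75% parameter reduction with respect to a real-valued Conv2d of the same width comes from.

```python
# Minimal sketch of a quaternion convolution (assumption: PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F


class QuaternionConv2d(nn.Module):
    """Convolution whose channel mixing follows the Hamilton product W (x) x."""

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        assert in_channels % 4 == 0 and out_channels % 4 == 0
        ic, oc = in_channels // 4, out_channels // 4
        # One real-valued sub-kernel per quaternion component (r, i, j, k).
        # Total parameters: 4 * (oc * ic * k^2) = out * in * k^2 / 4,
        # i.e. a quarter of an equivalent real-valued Conv2d.
        shape = (oc, ic, kernel_size, kernel_size)
        self.w_r = nn.Parameter(torch.randn(shape) * 0.02)  # illustrative init
        self.w_i = nn.Parameter(torch.randn(shape) * 0.02)
        self.w_j = nn.Parameter(torch.randn(shape) * 0.02)
        self.w_k = nn.Parameter(torch.randn(shape) * 0.02)
        self.stride, self.padding = stride, padding

    def forward(self, x):
        # Split the input channels into the four quaternion components.
        r, i, j, k = torch.chunk(x, 4, dim=1)

        def conv(t, w):
            return F.conv2d(t, w, stride=self.stride, padding=self.padding)

        # Hamilton product W (x) x, expressed with real-valued convolutions.
        out_r = conv(r, self.w_r) - conv(i, self.w_i) - conv(j, self.w_j) - conv(k, self.w_k)
        out_i = conv(r, self.w_i) + conv(i, self.w_r) + conv(k, self.w_j) - conv(j, self.w_k)
        out_j = conv(r, self.w_j) + conv(j, self.w_r) + conv(i, self.w_k) - conv(k, self.w_i)
        out_k = conv(r, self.w_k) + conv(k, self.w_r) + conv(j, self.w_i) - conv(i, self.w_j)
        return torch.cat([out_r, out_i, out_j, out_k], dim=1)


# Example: an RGB image padded with a fourth zero channel gives one quaternion per pixel.
layer = QuaternionConv2d(4, 64, kernel_size=3, padding=1)
y = layer(torch.randn(1, 4, 32, 32))  # -> shape (1, 64, 32, 32)
```

Because the same four sub-kernels are reused across all components, the layer couples the channels through the quaternion algebra instead of learning a full independent mixing matrix, which is how it keeps intra-channel relations while cutting parameters.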

| Task             | Dataset                    | Model  | Metric Name     | Metric Value | Global Rank |
|------------------|----------------------------|--------|-----------------|--------------|-------------|
| Image Generation | CelebA-HQ 128x128          | QSNGAN | FID             | 29.417       | # 7         |
| Image Generation | CelebA-HQ 128x128          | QSNGAN | IS              | 2.249        | # 1         |
| Image Generation | CIFAR-10                   | QSNGAN | Inception score | 4.7128       | # 70        |
| Image Generation | CIFAR-10                   | QSNGAN | FID             | 31.966       | # 135       |
| Image Generation | Oxford 102 Flowers 128x128 | QSNGAN | FID             | 115.838      | # 1         |
| Image Generation | Oxford 102 Flowers 128x128 | QSNGAN | IS              | 3            | # 1         |
| Image Generation | STL-10                     | QSNGAN | FID             | 59.611       | # 24        |
| Image Generation | STL-10                     | QSNGAN | Inception score | 4.987        | # 19        |
