Hypercomplex Image-to-Image Translation

4 May 2022  ·  Eleonora Grassucci, Luigi Sigillo, Aurelio Uncini, Danilo Comminiello

Image-to-image translation (I2I) aims at transferring the content representation of an input domain to an output one, possibly across several different target domains. Recent I2I generative models, which achieve outstanding results on this task, comprise a set of diverse deep networks, each with tens of millions of parameters. Moreover, images are usually three-dimensional, being composed of RGB channels, and common neural models do not take this correlation among dimensions into account, losing beneficial information. In this paper, we propose to leverage hypercomplex algebra properties to define lightweight I2I generative models capable of preserving pre-existing relations among image dimensions, thus exploiting additional input information. On multiple I2I benchmarks, we show how the proposed Quaternion StarGANv2 and parameterized hypercomplex StarGANv2 (PHStarGANv2) reduce the number of parameters and the storage memory footprint while ensuring high domain-translation performance and good image quality, as measured by FID and LPIPS scores. Full code is available at: https://github.com/ispamm/HI2I.
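The parameter savings in models like PHStarGANv2 typically come from parameterized hypercomplex multiplication (PHM) layers, which build each weight matrix as a sum of Kronecker products and thereby use roughly 1/n of the parameters of an equivalent standard layer. The sketch below is a minimal PyTorch illustration of such a PHM-style linear layer; it is not the authors' implementation (see the linked repository for the official code), and the class name PHMLinear and the initialization scheme are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """Minimal sketch of a parameterized hypercomplex multiplication layer.

    The full weight matrix is assembled as W = sum_i kron(A_i, F_i),
    so the layer stores roughly 1/n of the parameters of an
    equivalent nn.Linear with the same input/output sizes.
    """

    def __init__(self, n: int, in_features: int, out_features: int):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.n = n
        # A: n learned n-by-n matrices encoding the algebra rules
        # (for n=4 these can recover quaternion multiplication).
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.1)
        # F: n weight blocks of shape (out/n, in/n).
        self.F = nn.Parameter(
            torch.randn(n, out_features // n, in_features // n) * 0.1
        )
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Batched Kronecker products summed over the algebra index:
        # (n, n, n) x (n, out/n, in/n) -> (out_features, in_features).
        W = torch.einsum("nab,ncd->acbd", self.A, self.F)
        W = W.reshape(self.A.shape[1] * self.F.shape[1],
                      self.A.shape[2] * self.F.shape[2])
        return x @ W.t() + self.bias


# Usage: with n=4 the layer holds about a quarter of the weights
# of an nn.Linear(64, 128) while mapping the same shapes.
layer = PHMLinear(n=4, in_features=64, out_features=128)
y = layer(torch.randn(8, 64))  # -> shape (8, 128)
```

In convolutional variants the same Kronecker construction is applied to convolution kernels, which is how the generator and discriminator of a StarGANv2-style model can be made lightweight while sharing information across the RGB channels.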


Datasets

CelebA-HQ

Results

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Image-to-Image Translation | CelebA-HQ | PHStarGANv2 n=3 | FID | 16.63 | #4 |
| Image-to-Image Translation | CelebA-HQ | PHStarGANv2 n=3 | LPIPS | 0.33 | #2 |
| Image-to-Image Translation | CelebA-HQ | PHStarGANv2 n=4 | FID | 16.54 | #3 |
| Image-to-Image Translation | CelebA-HQ | PHStarGANv2 n=4 | LPIPS | 0.29 | #3 |
