Federated Contrastive Learning for Privacy-Preserving Unpaired Image-to-Image Translation

29 Sep 2021 · Joonyoung Song, Jong Chul Ye

The goal of unsupervised image-to-image translation (I2I) is to convert an input image from a source domain to a target domain using a neural network trained on unpaired data. Existing I2I methods usually require a centrally stored dataset, which can compromise data privacy. The recently proposed federated CycleGAN (FedCycleGAN) protects data privacy by splitting the loss between the server and the clients so that data need not be shared, but it requires exchanging the weights and gradients of both the generators and the discriminators, incurring a significant communication cost. To address this, we propose a novel federated contrastive unpaired translation (FedCUT) approach for privacy-preserving image-to-image translation. Similar to FedCycleGAN, our method is based on the observation that the CUT loss can be decomposed into domain-specific local objectives, but in contrast to FedCycleGAN, our method exchanges only the weights and gradients of the discriminator, significantly reducing the bandwidth requirement. In addition, by combining the discriminator with a pre-trained VGG network, its learnable part can be further reduced without impairing image quality, yielding a two-order-of-magnitude reduction in communication cost. Through extensive experiments on various translation tasks, we confirm that our method shows competitive performance compared to existing approaches.
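The core idea, splitting the CUT objective so that only the discriminator's weights and gradients cross the network, can be illustrated with a short sketch. The PyTorch snippet below is a minimal, self-contained illustration under simplifying assumptions (toy networks, random tensors standing in for each client's private data, an LSGAN-style adversarial loss, and a heavily simplified PatchNCE term); it is one possible arrangement consistent with the abstract, not the authors' implementation. One party holding domain-B images computes only the real-image discriminator term, while the other party holding domain-A images, the generator, and the projection head computes the fake-image discriminator term, the adversarial generator term, and the contrastive loss, so raw data and the generator never leave their owners.

```python
# Minimal sketch of one FedCUT-style round. Assumptions (not from the paper's code):
# toy conv nets, random tensors in place of private data, LSGAN adversarial loss,
# simplified PatchNCE with negatives pooled across the batch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyGenerator(nn.Module):
    """Tiny encoder/decoder standing in for the CUT generator (stays at client A)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, x):
        return self.dec(self.enc(x))


G = ToyGenerator()                                   # generator: never communicated
H = nn.Linear(64, 64)                                # PatchNCE projection head: stays at client A
D = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1),    # discriminator: the only module whose
                  nn.ReLU(),                         # weights/gradients are exchanged
                  nn.Conv2d(64, 1, 3, padding=1))
# In the paper's further variant, a frozen pre-trained VGG front-end replaces most of D,
# so only a small learnable head must be exchanged; that refinement is omitted here.


def patch_nce(feat_src, feat_out, n_patches=64, tau=0.07):
    """Simplified patchwise InfoNCE: matching spatial locations are positives."""
    b, c, h, w = feat_src.shape
    idx = torch.randint(0, h * w, (n_patches,))
    q = F.normalize(H(feat_out.flatten(2)[..., idx].permute(0, 2, 1).reshape(-1, c)), dim=1)
    k = F.normalize(H(feat_src.flatten(2)[..., idx].permute(0, 2, 1).reshape(-1, c)), dim=1)
    logits = q @ k.t() / tau
    return F.cross_entropy(logits, torch.arange(len(q)))


opt_g = torch.optim.Adam(list(G.parameters()) + list(H.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

# ---- Client B (holds real domain-B images): real-image discriminator term only ----
real_b = torch.rand(4, 3, 64, 64)                    # stand-in for private domain-B data
d_loss_real = F.mse_loss(D(real_b), torch.ones_like(D(real_b)))

# ---- Client A (holds domain-A images, G, H): remaining terms, using the received D ----
real_a = torch.rand(4, 3, 64, 64)                    # stand-in for private domain-A data
fake_b = G(real_a)
d_loss_fake = F.mse_loss(D(fake_b.detach()), torch.zeros_like(D(fake_b)))

opt_d.zero_grad()
(d_loss_real + d_loss_fake).backward()               # in the federated setting each client
opt_d.step()                                         # computes its D-gradient locally and only
                                                     # D's gradients are aggregated; shown here
                                                     # in a single process for brevity

g_adv = F.mse_loss(D(fake_b), torch.ones_like(D(fake_b)))        # generator adversarial term
g_nce = patch_nce(G.enc(real_a), G.enc(fake_b))                  # computed entirely at client A
opt_g.zero_grad()
(g_adv + g_nce).backward()
opt_g.step()
```

Because only `D`'s parameters and gradients are serialized per round, the per-round payload scales with the discriminator size rather than with the generator and discriminator together, which is the source of the bandwidth saving described above.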

