Unsupervised Image Transformation Learning via Generative Adversarial Networks

13 Mar 2021 · Kaiwen Zha, Yujun Shen, Bolei Zhou

In this work, we study the image transformation problem, which aims to learn the underlying transformations (e.g., the transition of seasons) from a collection of unlabeled images. However, there could be countless transformations in the real world, making such a task incredibly challenging, especially under the unsupervised setting. To tackle this obstacle, we propose a novel learning framework built on generative adversarial networks (GANs), where the discriminator and the generator share a transformation space. Once the model is fully optimized, any two points within the shared space are expected to define a valid transformation. In this way, at the inference stage, we can extract the variation factor between a customizable image pair by projecting both images onto the transformation space. The resulting transformation vector can further guide the image synthesis, facilitating image editing with continuous semantic change (e.g., altering summer to winter with fall as the intermediate step). Notably, the learned transformation space supports not only transferring image styles (e.g., changing day to night), but also manipulating image contents (e.g., adding clouds in the sky). In addition, we provide an in-depth analysis of the properties of the transformation space to help understand how various transformations are organized. Project page is at https://genforce.github.io/trgan/.
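To make the inference procedure concrete, below is a minimal PyTorch sketch of the idea the abstract describes: project two images onto a shared transformation space, take the difference as the transformation vector, and scale that vector to drive a continuous edit. The `TransformationEncoder` and `Generator` modules and the 128-dimensional space are hypothetical stand-ins chosen for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class TransformationEncoder(nn.Module):
    """Hypothetical projector: maps an image onto the shared transformation space."""

    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)


class Generator(nn.Module):
    """Hypothetical generator G(x, t): re-synthesizes x under a transformation vector t."""

    def __init__(self, dim=128):
        super().__init__()
        self.fuse = nn.Conv2d(3 + dim, 3, 3, padding=1)

    def forward(self, x, t):
        # Broadcast the transformation vector over spatial locations and fuse with the image.
        t_map = t[:, :, None, None].expand(-1, -1, x.shape[2], x.shape[3])
        return torch.tanh(self.fuse(torch.cat([x, t_map], dim=1)))


# Inference: extract the variation factor between a customizable image pair
# (e.g., a summer photo x_a and a winter photo x_b), then apply it gradually.
T, G = TransformationEncoder(), Generator()
x_a = torch.randn(1, 3, 64, 64)  # placeholder for the source image
x_b = torch.randn(1, 3, 64, 64)  # placeholder for the target image
t = T(x_b) - T(x_a)  # any two points in the shared space define a transformation

for alpha in (0.0, 0.5, 1.0):  # continuous edit, e.g., summer -> fall -> winter
    edited = G(x_a, alpha * t)
```

Scaling the extracted vector by a coefficient in [0, 1] is what yields the continuous semantic change the abstract mentions, with intermediate values producing the in-between states.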
