
Sym-parameterized Dynamic Inference for Mixed-Domain Image Translation

Recent advances in image-to-image translation have made it possible to generate images of multiple domains with a single network. However, such methods still cannot produce images of a target domain for which no dataset exists. We propose a method that extends the concept of `multi-domain' from data to the loss area and learns the combined characteristics of each domain, so that translations of images into mixed domains can be inferred dynamically. First, we introduce the Sym-parameter and a learning method that mixes losses in varying proportions while synchronizing them with input conditions. We then propose the Sym-parameterized Generative Network (SGN), which we empirically confirm learns the mixed characteristics of various data and losses and translates images into any mixed domain without ground truths, such as 30% Van Gogh, 20% Monet, and 40% snowy.
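The key idea is that a single weight vector (the sym-parameter) serves two roles: it weights the per-domain loss terms during training and is simultaneously fed to the generator as an input condition, so any mixture can be requested at inference time. Below is a minimal, hedged sketch of this idea in PyTorch; the toy generator, the placeholder per-domain losses, and all tensor shapes are illustrative assumptions, not the authors' SGN implementation.

```python
# Minimal sketch (not the authors' code): the sym-parameter w weights several
# loss terms AND conditions the generator, so the trained network can be
# queried with any mixture at inference time, e.g. [0.3, 0.2, 0.4, 0.1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondGenerator(nn.Module):
    """Toy conditional generator: concatenates the sym-parameter w to the input."""
    def __init__(self, channels=3, num_domains=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + num_domains, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, w):
        # Broadcast w spatially and concatenate it as extra input channels.
        w_map = w.view(w.size(0), -1, 1, 1).expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, w_map], dim=1))

def mixed_domain_loss(G, x, targets, w, loss_fns):
    """Weight each per-domain loss by the same w that conditions the generator."""
    y = G(x, w)
    losses = torch.stack([fn(y, t) for fn, t in zip(loss_fns, targets)])
    return (w.mean(dim=0) * losses).sum()  # sym-parameter mixes the losses

if __name__ == "__main__":
    G = CondGenerator()
    x = torch.randn(2, 3, 64, 64)
    targets = [torch.randn(2, 3, 64, 64) for _ in range(4)]  # placeholder per-domain targets
    loss_fns = [F.l1_loss] * 4                               # placeholder per-domain losses
    w = torch.tensor([[0.3, 0.2, 0.4, 0.1]]).repeat(2, 1)    # e.g. 30% Van Gogh, 20% Monet, 40% snowy
    loss = mixed_domain_loss(G, x, targets, w, loss_fns)
    loss.backward()
```

Because w is sampled over many mixtures during training, the generator learns a continuous space of translations rather than a fixed set of discrete domains; at test time, changing w alone changes the output style without any retraining.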
