This paper proposes a new generative adversarial network for pose transfer, i.e., transferring the pose of a given person to a target pose.
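Pose-transfer models of this kind typically encode the target pose as a stack of keypoint heatmaps that is concatenated with the source image to condition the generator. The sketch below illustrates that encoding only; the function name, keypoint coordinates, and Gaussian parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pose_heatmaps(keypoints, height, width, sigma=6.0):
    """Render each (x, y) keypoint as a 2D Gaussian channel.

    Returns an array of shape (len(keypoints), height, width); a stack
    like this is what a pose-transfer generator would typically receive,
    concatenated with the source image along the channel axis.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    maps = []
    for kx, ky in keypoints:
        d2 = (xs - kx) ** 2 + (ys - ky) ** 2
        maps.append(np.exp(-d2 / (2.0 * sigma ** 2)))
    return np.stack(maps)

# Hypothetical 3-keypoint target pose on a 64x64 canvas.
target_pose = [(20, 16), (32, 32), (44, 50)]
heat = pose_heatmaps(target_pose, 64, 64)

# Conditioning input: source-image channels + pose channels.
source_image = np.zeros((3, 64, 64))  # placeholder for an RGB image
generator_input = np.concatenate([source_image, heat], axis=0)
```

Each heatmap channel peaks at its keypoint, giving the generator an explicit spatial target for where body parts should appear.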
This paper introduces the Attribute-Decomposed GAN, a novel generative model for controllable person image synthesis, which can produce realistic person images with desired human attributes (e.g., pose, head, upper clothes and pants) provided in various source inputs.
Unlike existing methods, we propose to estimate dense and intrinsic 3D appearance flow to better guide the transfer of pixels between poses.
We address the problem of guided image-to-image translation, where we translate an input image into another while respecting constraints provided by an external guidance image supplied by the user.