Gesture-to-Gesture Translation
6 papers with code • 2 benchmarks • 0 datasets
Latest papers
Unified Generative Adversarial Networks for Controllable Image-to-Image Translation
The proposed model consists of a single generator and a discriminator taking a conditional image and the target controllable structure as input.
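A controllable image-to-image generator of this kind conditions on two stacked signals. The following is a minimal sketch, assuming an RGB conditional image and a single-channel target structure map; the shapes and names are illustrative, not taken from the paper.

```python
import numpy as np

H, W = 64, 64
conditional_image = np.random.rand(3, H, W).astype(np.float32)  # source RGB image
target_structure = np.random.rand(1, H, W).astype(np.float32)   # e.g. a skeleton or edge map

# Generator and discriminator both see the two signals stacked channel-wise.
generator_input = np.concatenate([conditional_image, target_structure], axis=0)
print(generator_input.shape)  # (4, 64, 64)
```

The channel-wise concatenation lets a single generator be steered purely by swapping the structure channel, which is what makes the translation controllable.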
Gesture-to-Gesture Translation in the Wild via Category-Independent Conditional Maps
In this work, we propose a novel GAN architecture that decouples the required annotations into a category label, which specifies the gesture type, and a simple-to-draw, category-independent conditional map, which expresses the location, rotation, and size of the hand gesture.
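The decoupled conditioning described above can be sketched as two separate inputs: a one-hot gesture-category label and a spatial map that encodes only location, rotation, and size. This is an illustrative stand-in (here a filled rectangle plays the role of the simple-to-draw map); all names and shapes are assumptions.

```python
import numpy as np

NUM_CATEGORIES = 5           # hypothetical number of gesture types
category = 2
label = np.eye(NUM_CATEGORIES, dtype=np.float32)[category]  # one-hot gesture type

H, W = 64, 64
cond_map = np.zeros((H, W), dtype=np.float32)
cond_map[20:44, 16:48] = 1.0  # where the hand goes; independent of which gesture it is

print(label)           # [0. 0. 1. 0. 0.]
print(cond_map.sum())  # area of the drawn region: 768.0
```

Because the map carries no gesture identity, the same easy-to-draw annotation can be reused across all categories, with the label alone selecting the gesture type.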
GestureGAN for Hand Gesture-to-Gesture Translation in the Wild
This task requires a high-level understanding of the mapping between the input source gesture and the output target gesture.
Deformable GANs for Pose-based Human Image Generation
Specifically, given an image of a person and a target pose, we synthesize a new image of that person in the novel pose.
Disentangled Person Image Generation
Generating novel, yet realistic, images of persons is a challenging task due to the complex interplay between the different image factors, such as the foreground, background and pose information.
Pose Guided Person Image Generation
This paper proposes the novel Pose Guided Person Generation Network (PG$^2$), which synthesizes person images in arbitrary poses, based on an image of that person and a novel pose.
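A common way to feed a target pose to such pose-guided networks is to rasterize each joint keypoint into its own Gaussian heatmap channel. The sketch below shows this generic encoding; it is not claimed to be the exact representation used by PG$^2$, and the joint positions are toy values.

```python
import numpy as np

def keypoints_to_heatmaps(keypoints, h, w, sigma=2.0):
    """Render one Gaussian heatmap channel per (x, y) keypoint."""
    ys, xs = np.mgrid[0:h, 0:w]
    maps = []
    for kx, ky in keypoints:
        d2 = (xs - kx) ** 2 + (ys - ky) ** 2
        maps.append(np.exp(-d2 / (2 * sigma ** 2)))
    return np.stack(maps)  # shape: (num_keypoints, h, w)

pose = [(10, 12), (30, 40), (50, 20)]  # toy (x, y) joint positions
heatmaps = keypoints_to_heatmaps(pose, 64, 64)
print(heatmaps.shape)       # (3, 64, 64)
print(heatmaps[0, 12, 10])  # peak value 1.0 at the first joint
```

Stacking these channels with the source image gives the generator a dense, differentiable-friendly description of the target pose.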