Generating novel, yet realistic, images of persons is a challenging task due to the complex interplay between the different image factors, such as the foreground, background and pose information. This paper proposes the novel Pose Guided Person Generation Network (PG$^2$), which synthesizes person images in arbitrary poses based on an image of that person and a novel pose. Specifically, given an image of a person and a target pose, we synthesize a new image of that person in the novel pose.

Gesture-to-gesture translation requires a high-level understanding of the mapping between the input source gesture and the output target gesture. In this work, we propose a novel GAN architecture that decouples the required annotations into a category label, which specifies the gesture type, and a simple-to-draw, category-independent conditional map, which expresses the location, rotation and size of the hand gesture. The proposed model consists of a single generator and a discriminator taking a conditional image and the target controllable structure as input.

Reported leaderboard rankings:
- Ranked #1 on Gesture-to-Gesture Translation on NTU Hand Digit
- Ranked #5 on Gesture-to-Gesture Translation on NTU Hand Digit
- Ranked #2 on Gesture-to-Gesture Translation on Senz3D
- Ranked #3 on Gesture-to-Gesture Translation on Senz3D
- Ranked #1 on Cross-View Image-to-Image Translation on Dayton (64x64) - ground-to-aerial (LPIPS metric)
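To make the conditioning scheme concrete, here is a minimal sketch of how the inputs described above (a conditional image, a gesture category label, and a category-independent conditional map) could be assembled into a single generator input by channel-wise concatenation. All function names, shapes, and channel counts are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def build_generator_input(source_image, category_label, conditional_map,
                          num_categories):
    """Stack image, label map, and structure map along the channel axis.

    source_image:    (H, W, 3) float array, the conditional image.
    category_label:  int in [0, num_categories), the gesture type.
    conditional_map: (H, W, 1) float array, the simple-to-draw map encoding
                     location, rotation and size of the target gesture.
    """
    h, w, _ = source_image.shape
    # Broadcast the one-hot category label into an (H, W, num_categories)
    # spatial map so it can be concatenated with the image channels.
    one_hot = np.zeros((h, w, num_categories), dtype=source_image.dtype)
    one_hot[:, :, category_label] = 1.0
    return np.concatenate([source_image, one_hot, conditional_map], axis=-1)

# Example: a 64x64 image, gesture category 2 of 10, and a structure map.
x = build_generator_input(
    np.random.rand(64, 64, 3).astype(np.float32),
    category_label=2,
    conditional_map=np.random.rand(64, 64, 1).astype(np.float32),
    num_categories=10,
)
print(x.shape)  # (64, 64, 14): 3 image + 10 label + 1 map channels
```

The generator would consume this stacked tensor, while the discriminator would receive the conditional image together with the target controllable structure, as stated above; the concatenation pattern shown here is one common way to realize such conditioning.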