23 papers with code • 3 benchmarks • 6 datasets
Virtual try-on of clothing or other items such as glasses and makeup. Most recent techniques use Generative Adversarial Networks.
However, existing works overlooked the latter components and confined makeup transfer to color manipulation, focusing only on light makeup styles.
Ranked #1 on Facial Makeup Transfer on CPM-Synt-2
The task of image-based virtual try-on aims to transfer a target clothing item onto the corresponding region of a person, which is commonly tackled by fitting the item to the desired body part and fusing the warped item with the person.
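The warp-and-fuse step described above can be sketched as a simple alpha compositing of a pre-warped item onto the person image. This is a minimal illustration, not any specific paper's method: real systems predict both the warp and the mask with learned modules, which are assumed as given inputs here.

```python
import numpy as np

def fuse_warped_item(person, warped_item, item_mask):
    """Alpha-composite a pre-warped clothing item onto a person image.

    person, warped_item: float arrays of shape (H, W, 3) in [0, 1].
    item_mask: float array of shape (H, W, 1) in [0, 1], 1 where the
    warped item should replace the person's pixels. (Hypothetical
    inputs; warping and mask prediction are learned in practice.)
    """
    return item_mask * warped_item + (1.0 - item_mask) * person

# Toy example: a 2x2 "person" image and an item covering the left column.
person = np.zeros((2, 2, 3))
item = np.ones((2, 2, 3))
mask = np.array([[[1.0], [0.0]], [[1.0], [0.0]]])
fused = fuse_warped_item(person, item, mask)
print(fused[0, 0, 0], fused[0, 1, 0])  # left pixel from item, right from person
```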
To this end, DCTON can be naturally trained in a self-supervised manner following cycle consistency learning.
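Cycle consistency learning of this kind reduces to penalizing the reconstruction error after mapping forward and back. The sketch below uses placeholder callables in place of the actual try-on networks; the loss form (L1 over the reconstruction) is one common choice, assumed here for illustration.

```python
import numpy as np

def cycle_consistency_loss(x, forward, backward):
    """L1 cycle-consistency loss: x -> forward -> backward should recover x.

    forward/backward stand in for learned mappings (e.g. try-on and its
    inverse); here they are placeholder callables, not real networks.
    """
    reconstructed = backward(forward(x))
    return np.mean(np.abs(reconstructed - x))

# With perfectly inverse mappings the loss is zero.
x = np.random.rand(4, 4, 3)
loss = cycle_consistency_loss(x, lambda t: t * 2.0, lambda t: t / 2.0)
print(loss)  # 0.0
```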
A recent pioneering work employed knowledge distillation to reduce the dependency on human parsing: the try-on images produced by a parser-based method are used as supervision to train a "student" network that does not rely on segmentation, making the student mimic the try-on ability of the parser-based model.
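The distillation setup amounts to regressing the student's output toward the teacher's try-on image. A minimal sketch of such a loss, with a pixel-wise L1 objective assumed for illustration (the names and arrays are placeholders, not the paper's implementation):

```python
import numpy as np

def distillation_loss(student_out, teacher_out):
    """Pixel-wise L1 between the parser-free student's try-on image and
    the parser-based teacher's output, which serves as supervision.
    (Illustrative sketch; real pipelines typically add perceptual and
    adversarial terms.)"""
    return np.mean(np.abs(student_out - teacher_out))

# Placeholder images standing in for teacher and student outputs.
teacher = np.full((8, 8, 3), 0.5)
student = np.full((8, 8, 3), 0.25)
loss = distillation_loss(student, teacher)
print(loss)  # 0.25
```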
We conduct a series of controlled experiments to isolate effective design choices in video synthesis for virtual clothing try-on.
Recently proposed image-based virtual try-on (VTON) approaches face several challenges with diverse human poses and clothing styles.
Second, a clothes warping module warps the clothing image according to the generated semantic layout, with a second-order difference constraint introduced to stabilize the warping process during training. Third, an inpainting module for content fusion integrates all information (e.g., reference image, semantic layout, warped clothes) to adaptively produce each semantic part of the human body.
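A second-order difference constraint of the kind mentioned above penalizes abrupt changes between neighboring warp control points. The sketch below applies it to a single 1-D row of control points for clarity; the actual constraint operates over a 2-D grid, and the function name is illustrative.

```python
import numpy as np

def second_order_penalty(points):
    """Second-order difference penalty on a row of warp control points.

    points: array of shape (N, 2) with (x, y) control-point coordinates.
    Penalizes ||p[i-1] - 2*p[i] + p[i+1]||^2 so displacements between
    neighbors change smoothly; evenly spaced collinear points incur
    zero penalty. (Simplified 1-D sketch of a 2-D grid constraint.)
    """
    d2 = points[:-2] - 2.0 * points[1:-1] + points[2:]
    return np.sum(d2 ** 2)

# Evenly spaced collinear points -> zero penalty; a kink is penalized.
straight = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], dtype=float)
kinked = np.array([[0, 0], [1, 0], [2, 1], [3, 0]], dtype=float)
print(second_order_penalty(straight), second_order_penalty(kinked))  # 0.0 5.0
```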
Secondly, it can synthesize images of multiple garments composed into a single, coherent outfit, and it enables control over the type of garments rendered in the final outfit.
High-fidelity clothing reconstruction is the key to achieving photorealism in a wide range of applications including human digitization, virtual try-on, etc.