Virtual Try-on
80 papers with code • 7 benchmarks • 11 datasets
Virtual try-on is the task of digitally fitting clothing or other items, such as glasses and makeup, onto images of a person. Most recent techniques use Generative Adversarial Networks.
Libraries
Use these libraries to find Virtual Try-on models and implementations.
Most implemented papers
SwapNet: Garment Transfer in Single View Images
Garment transfer is a challenging task that requires (i) disentangling the features of the clothing from the body pose and shape and (ii) realistic synthesis of the garment texture on the new body.
Learning-Based Animation of Clothing for Virtual Try-On
We propose a model that separates global garment fit, due to body shape, from local garment wrinkles, due to both pose dynamics and body shape.
TightCap: 3D Human Shape Capture with Clothing Tightness Field
In this paper, we present TightCap, a data-driven scheme to capture both the human shape and dressed garments accurately with only a single 3D human scan, which enables numerous applications such as virtual try-on, biometrics and body evaluation.
Disentangled Makeup Transfer with Generative Adversarial Network
Facial makeup transfer is a widely-used technology that aims to transfer the makeup style from a reference face image to a non-makeup face.
Poly-GAN: Multi-Conditioned GAN for Fashion Synthesis
We present Poly-GAN, a novel conditional GAN architecture that is motivated by Fashion Synthesis, an application where garments are automatically placed on images of human models at an arbitrary pose.
ClothFlow: A Flow-Based Model for Clothed Person Generation
By estimating a dense flow between source and target clothing regions, ClothFlow effectively models the geometric changes and naturally transfers the appearance to synthesize novel images as shown in Figure 1.
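The core operation in a flow-based model like ClothFlow is resampling the source clothing through a per-pixel offset field. The sketch below is a minimal, hypothetical illustration of that step using NumPy with nearest-neighbour sampling (real systems use differentiable bilinear sampling, e.g. a grid-sample layer); the function name and flow convention are assumptions, not ClothFlow's actual API.

```python
import numpy as np

def warp_with_flow(src, flow):
    """Warp a source clothing image with a dense flow field.

    src:  (H, W, C) source image.
    flow: (H, W, 2) per-pixel offsets (dy, dx); each target pixel (y, x)
          samples the source at (y + dy, x + dx), nearest-neighbour,
          clipped to the image bounds.
    """
    h, w = src.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return src[sy, sx]

# Tiny demo: a constant flow that samples one pixel to the left,
# which shifts the image content one pixel to the right.
img = np.arange(16, dtype=float).reshape(4, 4, 1)
flow = np.zeros((4, 4, 2))
flow[..., 1] = -1.0
warped = warp_with_flow(img, flow)
```

In a trained model the flow field is predicted by a network from the source clothing and target pose, so the geometric change it encodes can be arbitrary and non-rigid rather than a constant shift.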
VTNFP: An Image-Based Virtual Try-On Network With Body and Clothing Feature Preservation
A key innovation of VTNFP is the body segmentation map prediction module, which provides critical information to guide image synthesis in regions where body parts and clothing intersect, and is very beneficial for preventing blurry results and preserving clothing and body-part details.
Down to the Last Detail: Virtual Try-on with Detail Carving
Existing methods can hardly preserve the details of clothing texture and facial identity (face, hair) while fitting novel clothes and poses onto a person.
SieveNet: A Unified Framework for Robust Image-Based Virtual Try-On
An efficient framework for this is composed of two stages: (1) warping (transforming) the try-on cloth to align with the pose and shape of the target model, and (2) a texture transfer module to seamlessly integrate the warped try-on cloth onto the target model image.
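Stage (2) of this two-stage pipeline amounts to compositing the warped cloth onto the target person image under a mask. The NumPy sketch below is a simplified, hypothetical version of that step (the function name and inputs are illustrative): in real systems such as SieveNet the mask is predicted by a network and the blend is refined by a generator rather than being a plain alpha composite.

```python
import numpy as np

def paste_warped_cloth(person, warped_cloth, cloth_mask):
    """Stage-2 sketch: composite an already-warped cloth onto the
    person image using a soft mask in [0, 1] (1 = keep cloth pixel)."""
    mask = cloth_mask[..., None]  # broadcast mask over colour channels
    return mask * warped_cloth + (1.0 - mask) * person

# Tiny demo on 2x2 RGB images: full cloth, full person, and a 50/50 blend.
person = np.full((2, 2, 3), 0.2)
cloth = np.full((2, 2, 3), 0.9)
mask = np.array([[1.0, 0.0],
                 [0.5, 0.0]])
out = paste_warped_cloth(person, cloth, mask)
```

A soft (rather than binary) mask is what lets gradients flow through the compositing step during training, which is why both stages can be learned end to end.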
Learning to Transfer Texture from Clothing Images to 3D Humans
In this paper, we present a simple yet effective method to automatically transfer textures of clothing images (front and back) to 3D garments worn on top of SMPL, in real time.