CloTH-VTON+: Clothing Three-dimensional reconstruction for Hybrid image-based Virtual Try-ON

16 Feb 2021 · Matiur Rahman Minar, Thai Thanh Tuan, Heejune Ahn

Image-based virtual try-on (VTON) systems based on deep learning have attracted research and commercial interest. Although they show their strengths in blending the person and try-on clothing images and synthesizing the dis-occluded regions, their results for complex-posed persons are often unsatisfactory due to the limitations in their geometry deformation and texture-preserving capacity. To address these challenges, we propose CloTH-VTON+ for seamlessly integrating image-based deep learning methods and the strength of the 3D model in shape deformation. Specifically, a fully automatic pipeline is developed for 3D clothing model reconstruction and deformation using a reference human model: first, the try-on clothing is matched to the target clothing regions in the simple-shaped reference human model, and then the 3D clothing model is reconstructed. The reconstructed 3D clothing model can generate a very natural pose and shape transfer, retaining the textures of the clothes. A clothing refinement network further refines the alignment, eliminating the misalignment due to errors in human pose estimation and 3D deformation. The deformed clothing images are combined using conditional generative networks to in-paint the dis-occluded areas and blend them all. Experiments on an existing benchmark dataset demonstrate that CloTH-VTON+ generates higher-quality results than state-of-the-art VTON systems and CloTH-VTON. CloTH-VTON+ can be incorporated into extended applications such as multi-pose-guided and video VTON.
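The abstract's pipeline can be summarized as a chain of four stages: 3D clothing reconstruction against a reference human model, pose/shape deformation, alignment refinement, and generative blending. The sketch below illustrates only this data flow; every function name, shape, and stub body here is a hypothetical placeholder for illustration, not the authors' implementation.

```python
import numpy as np

# Structural sketch of the try-on pipeline described in the abstract.
# All names and shapes are hypothetical placeholders; each stage is
# stubbed so the data flow, not the method, is what is shown.

H, W = 256, 192  # a common image resolution in VITON-style benchmarks

def reconstruct_3d_clothing(clothing_img, reference_model):
    """Match the try-on clothing to the reference human model's clothing
    region and lift it to a textured 3D clothing model (stubbed)."""
    return {"texture": clothing_img, "reference": reference_model}

def deform_to_target(clothing_3d, target_pose):
    """Transfer the 3D clothing model to the target pose/shape and render
    it back to image space (stubbed as an identity warp)."""
    return clothing_3d["texture"]

def refine_alignment(warped_clothing, person_img):
    """Refinement network correcting residual misalignment from pose
    estimation and 3D deformation errors (stubbed)."""
    return warped_clothing

def blend_and_inpaint(person_img, refined_clothing):
    """Conditional generator: in-paint dis-occluded areas and blend
    (stubbed as naive compositing over the upper half of the image)."""
    out = person_img.copy()
    out[: H // 2] = refined_clothing[: H // 2]
    return out

person = np.zeros((H, W, 3), dtype=np.float32)
clothing = np.ones((H, W, 3), dtype=np.float32)
cloth_3d = reconstruct_3d_clothing(clothing, reference_model=None)
warped = deform_to_target(cloth_3d, target_pose=None)
result = blend_and_inpaint(person, refine_alignment(warped, person))
print(result.shape)  # (256, 192, 3)
```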



Results from the Paper

Task            Dataset  Model        Metric  Value   Global Rank
Virtual Try-on  VITON    CloTH-VTON+  SSIM    0.8937  #1
Virtual Try-on  VITON    CloTH-VTON+  LPIPS   0.0958  #1
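SSIM, one of the two metrics reported above, compares luminance, contrast, and structure statistics between the generated and ground-truth images. The snippet below is a minimal single-window variant for illustration; the standard metric (and presumably the benchmark's protocol, which is not specified here) averages SSIM over local sliding windows, and LPIPS additionally requires a pretrained deep network.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM between two grayscale images in [0, data_range].

    Simplification for illustration: the standard metric averages SSIM
    over local (e.g. 11x11 Gaussian-weighted) windows rather than
    computing one global score.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()  # cross-covariance term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(round(global_ssim(img, img), 4))    # identical images score 1.0
print(global_ssim(img, 1.0 - img) < 0.5)  # an inverted image scores low
```

Higher SSIM (closer to 1) indicates better structural fidelity, while lower LPIPS indicates closer perceptual similarity, so the table's #1 ranks correspond to the highest SSIM and the lowest LPIPS.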

