Learning Intrinsic Images for Clothing

Reconstruction of human clothing is an important task and often relies on intrinsic image decomposition. With a lack of domain-specific data and coarse evaluation metrics, existing models fail to produce satisfying results for graphics applications. In this paper, we focus on intrinsic image decomposition for clothing images and make comprehensive improvements. We collect CloIntrinsics, a clothing intrinsic image dataset that includes a synthetic training set and a real-world testing set. A more interpretable edge-aware metric and an annotation scheme are designed for the testing set, enabling diagnostic evaluation of intrinsic models. Finally, we propose the ClothInNet model with carefully designed loss terms and an adversarial module. It utilizes easy-to-acquire labels to learn from real-world shading, significantly improving performance with only minor additional annotation effort. We show that our proposed model significantly reduces texture-copying artifacts while retaining surprisingly fine details, outperforming existing state-of-the-art methods.
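For context, intrinsic image decomposition typically models an image as the per-pixel product of an albedo (reflectance) layer and a shading layer. The sketch below illustrates this standard multiplicative model and a reconstruction error commonly used as a self-supervision signal; the function names and the MSE term are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def compose(albedo: np.ndarray, shading: np.ndarray) -> np.ndarray:
    """Recompose an image from intrinsic layers: I = A * S (elementwise)."""
    return albedo * shading

def reconstruction_error(image: np.ndarray,
                         albedo: np.ndarray,
                         shading: np.ndarray) -> float:
    """Mean squared error between the input image and the recomposed
    image, a common training signal for intrinsic decomposition models."""
    return float(np.mean((image - compose(albedo, shading)) ** 2))
```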
