Most recent garment capturing techniques rely on acquiring multiple views of
clothing, which may not always be readily available, especially in the case of
pre-existing photographs from the web. As an alternative, we propose a method
that is able to compute a rich and realistic 3D model of a human body and its
outfits from a single photograph with little human interaction. Our method
is not only able to capture the global shape and geometry of the clothing, it
can also extract small but important details of cloth, such as occluded
wrinkles and folds. Unlike previous methods using full 3D information (i.e.
depth, multi-view images, or sampled 3D geometry), our approach achieves
detailed garment recovery from a single-view image by using statistical,
geometric, and physical priors and a combination of parameter estimation,
semantic parsing, shape recovery, and physics-based cloth simulation. We
demonstrate the effectiveness of our algorithm by re-purposing the
reconstructed garments for virtual try-on and garment transfer applications, as
well as cloth animation for digital characters.