Learning-Based Cloth Material Recovery From Video

ICCV 2017 · Shan Yang, Junbang Liang, Ming C. Lin

Image understanding enables better reconstruction of the physical world from images and videos. Existing methods focus largely on the geometry and visual appearance of the reconstructed scene. In this paper, we extend the frontier in image understanding and present a new technique to recover the material properties of cloth from a video. Previous cloth material recovery methods often require markers or a complex experimental setup to acquire physical properties, or are limited to certain types of images/videos. Our approach takes advantage of the appearance changes of the moving cloth to infer its physical properties. To extract information about the cloth, our method characterizes both the motion space and the visual appearance of the cloth geometry. We apply a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to recover cloth material properties from videos. We also exploit simulated data to help statistically learn the mapping between the visual appearance and the motion dynamics of the cloth. The effectiveness of our method is demonstrated via validation on simulated datasets and real-life recorded videos.
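
As a rough illustration of the kind of pipeline the abstract describes, the sketch below pairs a per-frame CNN encoder (visual appearance) with an LSTM over the frame sequence (motion dynamics), regressing material parameters from a video clip. It is written in PyTorch; the `MaterialRegressor` name, all layer sizes, and the three-parameter output head are illustrative assumptions, not the architecture published in the paper.

```python
# Hypothetical CNN+LSTM sketch: per-frame appearance features are encoded
# by a small CNN, the LSTM aggregates them over time, and a linear head
# regresses cloth material parameters. All dimensions are assumptions.
import torch
import torch.nn as nn

class MaterialRegressor(nn.Module):
    def __init__(self, feat_dim=256, hidden_dim=128, n_params=3):
        super().__init__()
        # CNN: encodes the visual appearance of each individual frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # LSTM: captures motion dynamics across the frame sequence.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Head: maps the final hidden state to material parameters
        # (e.g. stretch/bend stiffness, density -- an assumed parameterization).
        self.head = nn.Linear(hidden_dim, n_params)

    def forward(self, video):             # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)     # h_n: (num_layers, B, hidden_dim)
        return self.head(h_n[-1])          # (B, n_params)

# Usage: a batch of 2 clips, 16 frames each, 64x64 RGB.
clip = torch.randn(2, 16, 3, 64, 64)
print(MaterialRegressor()(clip).shape)     # torch.Size([2, 3])
```

In such a setup, training targets would come from cloth simulations with known material parameters, matching the paper's use of simulated data to learn the appearance-to-dynamics mapping.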
