Across applications ranging from supervised classification to sequential control, deep learning has repeatedly been reported to learn "shortcut" solutions that fail catastrophically under minor shifts in the data distribution.
In partially observed settings, a control policy must fuse information from a history of observations rather than act on the current observation alone.
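To make this concrete, the sketch below shows one common way to fuse an observation history: a small recurrent policy built around a GRU that summarizes the window of past observations before producing an action. The architecture, layer sizes, and PyTorch framing are illustrative assumptions, not a specific method from the text.

```python
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """Illustrative policy that conditions on a history of observations."""

    def __init__(self, obs_dim: int, action_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)                 # per-step observation embedding
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)  # fuses the history over time
        self.head = nn.Linear(hidden_dim, action_dim)                 # maps fused state to action logits

    def forward(self, obs_history: torch.Tensor) -> torch.Tensor:
        # obs_history: (batch, time, obs_dim) -- the window of past observations
        z = torch.relu(self.encoder(obs_history))
        _, h = self.gru(z)               # h: (1, batch, hidden_dim), summary of the whole history
        return self.head(h.squeeze(0))   # action logits conditioned on the full history

policy = RecurrentPolicy(obs_dim=16, action_dim=4)
actions = policy(torch.randn(2, 10, 16))  # two trajectories of ten observations each
```

Frame stacking (concatenating the last k observations into one input) is a simpler alternative; a recurrent summary is preferred when the relevant history length is unknown or long.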
Our approach trains a network to segment the edges and corners of a cloth in a depth image, distinguishing these regions from wrinkles and folds.
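As one concrete illustration of this setup, the sketch below is a tiny fully convolutional network that maps a single-channel depth image to per-pixel class logits, trained with a standard cross-entropy loss. The four-way class set (background, edge, corner, wrinkle/fold) and the architecture are assumptions made for illustration; the actual network used is not specified here.

```python
import torch
import torch.nn as nn

class DepthSegNet(nn.Module):
    """Illustrative per-pixel classifier for depth images."""

    def __init__(self, num_classes: int = 4):  # assumed: background / edge / corner / wrinkle-fold
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),   # depth input has a single channel
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),               # 1x1 conv to per-pixel class logits
        )

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        # depth: (batch, 1, H, W) -> logits: (batch, num_classes, H, W)
        return self.net(depth)

model = DepthSegNet()
logits = model(torch.randn(2, 1, 128, 128))
labels = torch.randint(0, 4, (2, 128, 128))        # per-pixel ground-truth class indices
loss = nn.CrossEntropyLoss()(logits, labels)       # standard segmentation objective
```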
In this paper, we introduce FusionMapping, a fusion-based depth-prediction method.
Very deep convolutional neural networks (CNNs) are firmly established as the dominant approach for many computer vision tasks.