Deformable Object Manipulation
8 papers with code • 0 benchmarks • 0 datasets
These leaderboards are used to track progress in Deformable Object Manipulation.
We compare coverage results from (1) human supervision, (2) a baseline of picking at the uppermost blanket point, and (3) learned pick points.
The goal of offline reinforcement learning is to learn a policy from a fixed dataset, without further interactions with the environment.
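To make the offline setting concrete, here is a minimal sketch of the simplest offline approach, behavior cloning: a policy is fit by regression to a fixed logged dataset of (state, action) pairs, with no further environment interaction. The dataset and the linear policy here are illustrative assumptions, not from any of the listed papers.

```python
import numpy as np

# Hypothetical fixed dataset of logged (state, action) pairs; in offline RL
# the learner never queries the environment for new transitions.
rng = np.random.default_rng(0)
states = rng.normal(size=(500, 4))                        # 4-dim observations
true_w = np.array([0.5, -1.0, 2.0, 0.1])                  # data-generating weights
actions = states @ true_w + 0.01 * rng.normal(size=500)   # 1-dim logged actions

# Behavior cloning: fit a policy to the static dataset via least squares.
w, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(state):
    """Deterministic policy learned purely from the fixed dataset."""
    return state @ w

# With low action noise, the recovered weights match the logging policy.
print(np.allclose(w, true_w, atol=0.05))
```

Full offline RL methods additionally regularize the policy toward the dataset's action distribution to avoid querying out-of-distribution actions at deployment.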
Moreover, due to the large amount of data needed to learn these end-to-end solutions, an emerging trend is to learn control policies in simulation and then transfer them to the real world.
Second, instead of jointly learning both the pick and the place locations, we only explicitly learn the placing policy conditioned on random pick points.
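The decomposition above can be sketched as follows: the pick point is sampled at random rather than learned, and only the placing policy, conditioned on the observation and that pick point, is trained. The mask-based pick sampler and the linear placing head here are hypothetical stand-ins for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_random_pick(mask):
    """Sample a pick pixel uniformly from a binary object mask (assumed input)."""
    ys, xs = np.nonzero(mask)
    i = rng.integers(len(ys))
    return int(ys[i]), int(xs[i])

def place_policy(obs, pick, w):
    """Hypothetical placing policy: predicts a (y, x) place point conditioned
    on the observation features and the randomly sampled pick point."""
    feat = np.concatenate([obs.ravel(), np.asarray(pick, dtype=float)])
    return w @ feat  # linear head for illustration only

# Toy usage: a 4x4 fabric mask, a random observation, and random weights.
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
obs = rng.normal(size=(4, 4))
pick = sample_random_pick(mask)
w = rng.normal(size=(2, obs.size + 2))
place = place_policy(obs, pick, w)
print(place.shape)  # (2,)
```

Randomizing the pick point removes one half of the joint pick-and-place search space, so only the conditional placing distribution has to be learned.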
Using visual model-based learning for deformable object manipulation is challenging due to difficulties in learning plannable visual representations along with complex dynamic models.
Further, we evaluate a variety of algorithms on these tasks and highlight challenges for reinforcement learning algorithms, including dealing with a state representation that has a high intrinsic dimensionality and is partially observable.
Learning non-rigid registration in an end-to-end manner is challenging due to the inherent high degrees of freedom and the lack of labeled training data.