Robot Manipulation
117 papers with code • 2 benchmarks • 7 datasets
Most implemented papers
Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation
In this paper, we show that visual robot manipulation can significantly benefit from large-scale video generative pre-training.
DeepIM: Deep Iterative Matching for 6D Pose Estimation
Estimating the 6D pose of objects from images is an important problem in various applications such as robot manipulation and virtual reality.
SilhoNet: An RGB Method for 6D Object Pose Estimation
Autonomous robot manipulation involves estimating the translation and orientation of the object to be manipulated as a 6-degree-of-freedom (6D) pose.
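A 6D pose combines 3 translational and 3 rotational degrees of freedom. A minimal sketch (illustrative only, not code from the paper) of representing such a pose as a homogeneous transform and applying it to an object-frame point:

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    """Rotation matrix for a rotation of theta radians about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Example: object rotated 90 degrees about z, translated 0.5 m along x.
T = pose_matrix(rot_z(np.pi / 2), [0.5, 0.0, 0.0])

# Map an object-frame point into the world frame (homogeneous coordinates).
p_obj = np.array([1.0, 0.0, 0.0, 1.0])
p_world = T @ p_obj  # -> approximately [0.5, 1.0, 0.0, 1.0]
```

Pose-estimation methods such as DeepIM or SilhoNet predict the rotation and translation that make up this transform.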
Reinforcement Learning for Robotic Manipulation using Simulated Locomotion Demonstrations
To exploit simulated locomotion demonstrations, we introduce a framework in which an object locomotion policy is first obtained using a realistic physics simulator.
Learning 3D Dynamic Scene Representations for Robot Manipulation
3D scene representations for robot manipulation should capture three key object properties: permanency (objects that become occluded over time continue to exist), amodal completeness (objects have 3D occupancy, even if only partial observations are available), and spatiotemporal continuity (the movement of each object is continuous over space and time).
Mobile Robot Manipulation using Pure Object Detection
We develop an end-to-end manipulation method based solely on detection and introduce Task-focused Few-shot Object Detection (TFOD) to learn new objects and settings.
What Matters in Language Conditioned Robotic Imitation Learning over Unstructured Data
We have open-sourced our implementation to facilitate future research on learning long sequences of complex manipulation skills specified in natural language.
Reward Uncertainty for Exploration in Preference-based Reinforcement Learning
Our intuition is that disagreement in the learned reward model reflects uncertainty in the tailored human feedback and can be useful for exploration.
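The idea of using disagreement among learned reward models as an exploration signal can be sketched as follows; the linear ensemble members and the bonus coefficient `beta` are illustrative stand-ins for the learned reward networks in preference-based RL, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: each member is a linear reward model r_i(s) = w_i . s,
# standing in for independently trained reward networks fit to human preferences.
n_members, state_dim = 5, 4
ensemble_w = rng.normal(size=(n_members, state_dim))

def reward_with_bonus(state, beta=1.0):
    """Return (mean predicted reward, exploration bonus).

    The bonus is the standard deviation of the ensemble's predictions:
    high disagreement marks states where the reward model is uncertain.
    """
    preds = ensemble_w @ state          # one reward prediction per ensemble member
    return preds.mean(), beta * preds.std()

state = rng.normal(size=state_dim)
mean_r, bonus = reward_with_bonus(state)
# The agent would optimize mean_r + bonus, steering it toward states
# where the reward models disagree and human feedback is most informative.
```

Training each ensemble member on a different bootstrap of the preference data is one common way to make the disagreement meaningful.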
Instruction-driven history-aware policies for robotic manipulations
In human environments, robots are expected to accomplish a variety of manipulation tasks given simple natural language instructions.
VIMA: General Robot Manipulation with Multimodal Prompts
We show that a wide spectrum of robot manipulation tasks can be expressed with multimodal prompts, interleaving textual and visual tokens.