We introduce CyberDemo, a novel approach to robotic imitation learning that leverages simulated human demonstrations for real-world tasks.
In this paper, we introduce a multi-finger robot system designed to search for and manipulate objects using the sense of touch without relying on visual information.
In this paper, we introduce a system that leverages visual and tactile sensory inputs to enable dexterous in-hand manipulation.
Collecting large amounts of real-world interaction data to train general robotic policies is often prohibitively expensive, thus motivating the use of simulation data.
Humans throw and catch objects all the time.
In real-world experiments, AnyTeleop outperforms a previous system that was designed for specific robot hardware, achieving a higher success rate on the same robot.
On the other hand, operating with a multi-finger robot hand allows a closer approximation of human behavior and enables the robot to manipulate diverse articulated objects.
Hand-eye calibration is a critical task in robotics, as it directly affects the efficacy of core operations such as manipulation and grasping.
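For concreteness, hand-eye calibration is conventionally cast as solving AX = XB over pairs of robot and camera motions. Below is a minimal sketch using OpenCV's calibrateHandEye; the poses are synthesized from a known ground-truth mount purely for illustration, where real data would come from forward kinematics and a detected calibration target.

```python
# Minimal eye-in-hand calibration sketch via the classic AX = XB
# formulation, using OpenCV's calibrateHandEye. Poses are synthetic.
import cv2
import numpy as np

rng = np.random.default_rng(0)

def random_pose():
    R, _ = cv2.Rodrigues(rng.uniform(-1.0, 1.0, 3))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = rng.uniform(-0.5, 0.5, 3)
    return T

X = random_pose()              # ground-truth camera-to-gripper transform
T_target2base = random_pose()  # calibration target fixed in the base frame

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):
    T_g2b = random_pose()      # gripper pose from forward kinematics
    # Consistent target-in-camera pose implied by the ground-truth mount.
    T_t2c = np.linalg.inv(X) @ np.linalg.inv(T_g2b) @ T_target2base
    R_g2b.append(T_g2b[:3, :3]); t_g2b.append(T_g2b[:3, 3:])
    R_t2c.append(T_t2c[:3, :3]); t_t2c.append(T_t2c[:3, 3:])

R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_g2b, t_g2b, R_t2c, t_t2c, method=cv2.CALIB_HAND_EYE_TSAI)
print(np.allclose(R_cam2gripper, X[:3, :3], atol=1e-5))  # True
```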
We propose a new dataset and a novel approach to learning hand-object interaction priors for hand and articulated object pose estimation.
Relying on touch-only sensing, we can directly deploy the policy on a real robot hand and rotate novel objects that were not seen during training.
We empirically evaluate our method using an Allegro Hand to grasp novel objects in both simulation and the real world.
In the abstract environment, complex dynamics such as physical manipulation are removed, making abstract trajectories easier to generate.
We first convert large-scale human-object interaction trajectories into robot demonstrations via motion retargeting, and then use these demonstrations to train CGF.
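Loosely speaking, such retargeting can be framed as a per-frame optimization that matches human and robot fingertip positions in task space. The sketch below assumes a hypothetical robot_fingertips forward-kinematics function and placeholder trajectories; it is not the paper's implementation.

```python
# Minimal sketch of per-frame keypoint retargeting. robot_fingertips is a
# hypothetical stand-in for a real forward-kinematics model.
import numpy as np
from scipy.optimize import minimize

def robot_fingertips(q):
    """Placeholder FK: (16,) joint angles -> (5, 3) fingertip positions."""
    return np.tile(q[:3], (5, 1))

def retarget_frame(human_tips, q_init):
    """Find joint angles whose fingertips best match the human's."""
    cost = lambda q: np.sum((robot_fingertips(q) - human_tips) ** 2)
    return minimize(cost, q_init, method="L-BFGS-B").x

human_traj = np.zeros((10, 5, 3))  # placeholder human fingertip trajectory
q = np.zeros(16)                   # e.g. a 16-DoF dexterous hand
robot_traj = []
for tips in human_traj:
    q = retarget_frame(tips, q)    # warm-start from the previous frame
    robot_traj.append(q)
```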
We propose to perform imitation learning for dexterous manipulation with a multi-finger robot hand from human demonstrations, and to transfer the learned policy to the real robot hand.
While Reinforcement Learning (RL) provides a promising paradigm for agile locomotion skills with vision inputs in simulation, it is still very challenging to deploy the RL policy in the real world.
While significant progress has been made on understanding hand-object interactions in computer vision, it is still very challenging for robots to perform complex dexterous manipulation.
In contrast to the vast literature on modeling, perceiving, and understanding agent-object (e.g., human-object, hand-object, robot-object) interaction in computer vision and robotics, very few past works have studied object-object interaction, which also plays an important role in robotic manipulation and planning tasks.
We propose a teleoperation system that uses a single RGB-D camera as the human motion capture device.
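One plausible capture pipeline for such a system (a sketch under assumptions, not necessarily the authors' exact design) detects 2D hand landmarks in the RGB stream with MediaPipe Hands and lifts them to 3D using the aligned depth image; the camera intrinsics below are placeholders.

```python
# Hedged sketch: MediaPipe Hands gives normalized 2D landmarks; the aligned
# depth image plus pinhole intrinsics lift them to 3D camera coordinates.
import cv2
import mediapipe as mp
import numpy as np

FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0  # placeholder RGB-D intrinsics
hands = mp.solutions.hands.Hands(max_num_hands=1)

def hand_keypoints_3d(bgr, depth):
    """Return (21, 3) hand keypoints in the camera frame, or None."""
    result = hands.process(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    h, w = depth.shape
    pts = []
    for lm in result.multi_hand_landmarks[0].landmark:
        u = int(np.clip(lm.x * w, 0, w - 1))
        v = int(np.clip(lm.y * h, 0, h - 1))
        z = float(depth[v, u])  # depth in meters, aligned to the RGB frame
        pts.append([(u - CX) * z / FX, (v - CY) * z / FY, z])
    return np.asarray(pts)
```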
For the first time, we propose a unified framework that can handle 9DoF pose tracking for novel rigid object instances as well as per-part pose tracking for articulated objects from known categories.
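Here, 9DoF denotes 3 rotation, 3 translation, and 3 scale parameters, the standard parameterization in category-level object pose estimation. A minimal sketch of applying such a transform, with an illustrative function name:

```python
# "9DoF" pose = rotation (3) + translation (3) + per-axis scale (3).
import numpy as np
from scipy.spatial.transform import Rotation

def apply_9dof(points, rotvec, translation, scale):
    """Map (N, 3) canonical object points into the camera frame."""
    R = Rotation.from_rotvec(rotvec).as_matrix()
    # Scale is applied in the object's canonical frame, then rotate/translate.
    return (points * np.asarray(scale)) @ R.T + np.asarray(translation)

pts = np.random.default_rng(0).normal(size=(100, 3))
out = apply_9dof(pts, rotvec=[0.1, 0.0, 0.3],
                 translation=[0.0, 0.0, 1.0], scale=[0.2, 0.1, 0.3])
```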
Numerical simulations first illustrate the consistency of theoretical results on the sharp interface limit.
To achieve this task, a simulation environment with realistic physics, a sufficient variety of articulated objects, and transferability to the real robot is indispensable.
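As one concrete illustration, a physics backend such as PyBullet (used here only as an example, not necessarily the environment the paper builds on) exposes articulated assets through URDF loading and joint queries:

```python
# Hedged sketch of loading and stepping an articulated asset in PyBullet.
# kuka_iiwa/model.urdf (shipped with pybullet_data) stands in for a real
# articulated object asset such as a cabinet or drawer.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                                    # headless physics server
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.loadURDF("plane.urdf")
body = p.loadURDF("kuka_iiwa/model.urdf", useFixedBase=True)

# Articulated bodies expose joints (e.g. doors, drawers) that can be actuated.
for j in range(p.getNumJoints(body)):
    info = p.getJointInfo(body, j)
    print(j, info[1].decode(), "type:", info[2])

for _ in range(240):                                   # one second at 240 Hz
    p.stepSimulation()
p.disconnect()
```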
The composition of elementary behaviors to solve challenging transfer learning problems is one of the key elements in building intelligent machines.