no code implementations • 13 Oct 2023 • Sudeep Dasari, Mohan Kumar Srirama, Unnat Jain, Abhinav Gupta
Visual representation learning holds great promise for robotics, but it is severely hampered by the scarcity and homogeneity of robotics datasets.
no code implementations • 6 Sep 2023 • Vittorio Caggiano, Sudeep Dasari, Vikash Kumar
While prior work has synthesized single musculoskeletal control behaviors, MyoDex is the first generalizable manipulation prior that catalyzes the learning of dexterous physiological control across a large variety of contact-rich behaviors.
1 code implementation • 22 Sep 2022 • Sudeep Dasari, Abhinav Gupta, Vikash Kumar
This paper seeks to escape these constraints, by developing a Pre-Grasp informed Dexterous Manipulation (PGDM) framework that generates diverse dexterous manipulation behaviors, without any task-specific reasoning or hyper-parameter tuning.
1 code implementation • ICLR 2021 • Stephen Tian, Suraj Nair, Frederik Ebert, Sudeep Dasari, Benjamin Eysenbach, Chelsea Finn, Sergey Levine
In our experiments, we find that our method can successfully learn models that perform a variety of tasks at test-time, moving objects amid distractors with a simulated robotic arm and even learning to open and close a drawer using a real-world robot.
no code implementations • 11 Nov 2020 • Sudeep Dasari, Abhinav Gupta
Humans are able to seamlessly imitate others visually, inferring their intentions and drawing on past experience to achieve the same end goal.
no code implementations • 24 Oct 2019 • Sudeep Dasari, Frederik Ebert, Stephen Tian, Suraj Nair, Bernadette Bucher, Karl Schmeckpeper, Siddharth Singh, Sergey Levine, Chelsea Finn
This leads to a frequent tension in robotic learning: how can we learn generalizable robotic controllers without having to collect impractically large amounts of data for each separate experiment?
1 code implementation • 3 Dec 2018 • Frederik Ebert, Chelsea Finn, Sudeep Dasari, Annie Xie, Alex Lee, Sergey Levine
Deep reinforcement learning (RL) algorithms can learn complex robotic skills from raw sensory inputs, but have yet to achieve the kind of broad generalization and applicability demonstrated by deep learning methods in supervised domains.
3 code implementations • 6 Oct 2018 • Frederik Ebert, Sudeep Dasari, Alex X. Lee, Sergey Levine, Chelsea Finn
We demonstrate that this idea can be combined with a video-prediction based controller to enable complex behaviors to be learned from scratch using only raw visual inputs, including grasping, repositioning objects, and non-prehensile manipulation.
2 code implementations • 5 Feb 2018 • Tianhe Yu, Chelsea Finn, Annie Xie, Sudeep Dasari, Tianhao Zhang, Pieter Abbeel, Sergey Levine
Humans and animals are capable of learning a new behavior by observing others perform the skill just once.