no code implementations • 8 Feb 2023 • Cem Gokmen, Daniel Ho, Mohi Khansari
However, the black-box nature of end-to-end Imitation Learning models such as Behavioral Cloning, together with the lack of an explicit state-value representation, makes it difficult to predict failures.
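As a concrete illustration of the idea, the sketch below pairs a behavioral-cloning policy with an auxiliary state-value head whose output can be thresholded to flag likely failures. The architecture, names, and threshold are assumptions for illustration, not the paper's implementation:

```python
import torch
import torch.nn as nn

class PolicyWithValueHead(nn.Module):
    """Hypothetical BC policy with an auxiliary state-value head.

    The value head approximates expected task success from the current
    observation; at run time, a low predicted value can be flagged as a
    likely failure so the robot can stop or ask for help.
    """

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.action_head = nn.Linear(hidden, act_dim)  # BC regression target
        self.value_head = nn.Linear(hidden, 1)         # success-value estimate

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        return self.action_head(h), torch.sigmoid(self.value_head(h))

def failure_flag(value_pred: torch.Tensor, threshold: float = 0.3) -> torch.Tensor:
    # Flag states whose predicted success value drops below a tuned threshold.
    return value_pred < threshold
```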
no code implementations • 15 Feb 2022 • Yuqing Du, Daniel Ho, Alexander A. Alemi, Eric Jang, Mohi Khansari
In this work we investigate and demonstrate the benefits of a Bayesian approach to imitation learning from multiple sensor inputs, as applied to the task of opening office doors with a mobile manipulator.
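One lightweight way to realize a Bayesian treatment of a multi-sensor policy is Monte Carlo dropout, sketched below; this is a generic stand-in for the paper's method, and all names and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class MultiSensorPolicy(nn.Module):
    """Illustrative policy fusing two sensor streams (e.g. RGB and depth
    features), with dropout kept active at test time (MC dropout) as a
    simple source of Bayesian-style predictive uncertainty."""

    def __init__(self, dim_a: int, dim_b: int, act_dim: int, p: float = 0.1):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, 128), nn.ReLU(), nn.Dropout(p))
        self.enc_b = nn.Sequential(nn.Linear(dim_b, 128), nn.ReLU(), nn.Dropout(p))
        self.head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Dropout(p),
                                  nn.Linear(128, act_dim))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([self.enc_a(a), self.enc_b(b)], dim=-1))

@torch.no_grad()
def predict_with_uncertainty(policy, a, b, samples: int = 20):
    policy.train()  # keep dropout active so each pass is a posterior sample
    preds = torch.stack([policy(a, b) for _ in range(samples)])
    return preds.mean(0), preds.std(0)  # action estimate and its spread
```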
no code implementations • 4 Feb 2022 • Eric Jang, Alex Irpan, Mohi Khansari, Daniel Kappler, Frederik Ebert, Corey Lynch, Sergey Levine, Chelsea Finn
In this paper, we study the problem of enabling a vision-based robotic manipulation system to generalize to novel tasks, a long-standing challenge in robot learning.
no code implementations • 3 Feb 2022 • Mohi Khansari, Daniel Ho, Yuqing Du, Armando Fuentes, Matthew Bennice, Nicolas Sievers, Sean Kirmani, Yunfei Bai, Eric Jang
To the best of our knowledge, this is the first work to tackle latched door opening from a purely end-to-end learning approach, where the tasks of navigation and manipulation are jointly modeled by a single neural network.
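A minimal sketch of what "jointly modeled by a single network" can look like: a shared encoder with separate base and arm output heads, trained with one combined imitation objective. The architecture and command spaces are assumptions, not the paper's design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NavManipNet(nn.Module):
    """Hypothetical single network emitting both base and arm commands,
    so navigation and manipulation share the same weights."""

    def __init__(self, obs_dim: int, base_dim: int = 2, arm_dim: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 256), nn.ReLU())
        self.base_head = nn.Linear(256, base_dim)  # e.g. linear/angular velocity
        self.arm_head = nn.Linear(256, arm_dim)    # e.g. joint velocities

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        return self.base_head(h), self.arm_head(h)

def bc_loss(net, obs, base_target, arm_target):
    base_pred, arm_pred = net(obs)
    # One combined objective so both skills are learned jointly.
    return F.mse_loss(base_pred, base_target) + F.mse_loss(arm_pred, arm_target)
```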
no code implementations • CVPR 2020 • Kanishka Rao, Chris Harris, Alex Irpan, Sergey Levine, Julian Ibarz, Mohi Khansari
However, this sort of translation is typically task-agnostic, in that the translated images may not preserve all features that are relevant to the task.
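A hedged sketch of how a task-aware constraint can be added on top of a standard image translator: penalize the generator whenever a task network (here a Q-function, assumed to map image batches to per-action values) scores the translated image differently from the source. This illustrates the general idea rather than reproducing the paper's losses:

```python
import torch
import torch.nn.functional as F

def task_consistency_loss(q_net, sim_img: torch.Tensor,
                          translated_img: torch.Tensor) -> torch.Tensor:
    """Keep task-relevant features intact across translation by requiring
    the task network to evaluate source and translated images alike."""
    q_sim = q_net(sim_img)
    q_translated = q_net(translated_img)
    return F.mse_loss(q_translated, q_sim.detach())

# Sketch of how it could combine with the usual CycleGAN generator losses:
# total_g_loss = gan_loss + cycle_weight * cycle_loss \
#              + task_weight * task_consistency_loss(q_net, sim, g(sim))
```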
no code implementations • 8 Jun 2020 • Sören Pirk, Karol Hausman, Alexander Toshev, Mohi Khansari
We show that complex plans can be carried out during robotic task execution, and that the robot can interactively adapt to changes in the environment and recover from failure cases.
no code implementations • 13 May 2020 • Mohi Khansari, Daniel Kappler, Jianlan Luo, Jeff Bingham, Mrinal Kalakrishnan
Similar to computer vision problems, such as object detection, Action Image builds on the idea that object features are invariant to translation in image space.
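The core trick can be sketched compactly: render the candidate action (e.g. a grasp point) as an extra image channel, so a plain CNN scores (image, action) pairs while inheriting the translation invariance of convolutions. The function below is an illustrative reconstruction of that idea, with all parameters chosen for the example:

```python
import numpy as np

def action_image(rgb: np.ndarray, grasp_px: tuple, radius: int = 5) -> np.ndarray:
    """Render a candidate grasp location as a fourth image channel.

    rgb:      HxWx3 uint8 image
    grasp_px: (row, col) pixel location of the candidate grasp
    Returns an HxWx4 array suitable as CNN input.
    """
    h, w, _ = rgb.shape
    mask = np.zeros((h, w, 1), dtype=np.uint8)
    r, c = grasp_px
    rr, cc = np.ogrid[:h, :w]
    mask[(rr - r) ** 2 + (cc - c) ** 2 <= radius ** 2] = 255  # draw grasp blob
    return np.concatenate([rgb, mask], axis=-1)
```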
no code implementations • ICLR 2020 • Allan Zhou, Eric Jang, Daniel Kappler, Alex Herzog, Mohi Khansari, Paul Wohlhart, Yunfei Bai, Mrinal Kalakrishnan, Sergey Levine, Chelsea Finn
Imitation learning allows agents to learn complex behaviors from demonstrations.
no code implementations • 25 Feb 2020 • Avi Singh, Eric Jang, Alexander Irpan, Daniel Kappler, Murtaza Dalal, Sergey Levine, Mohi Khansari, Chelsea Finn
In this work, we target this challenge, aiming to build an imitation learning system that can continuously improve through autonomous data collection while avoiding the explicit use of reinforcement learning, thereby maintaining the stability, simplicity, and scalability of supervised imitation.
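A minimal sketch of such a self-improvement loop, under the assumption that a binary success signal is available to filter autonomous rollouts; the interfaces below are illustrative, not the paper's:

```python
from typing import Any, Callable, List, Tuple

Trajectory = List[Tuple[Any, Any]]  # (observation, action) pairs

def autonomous_improvement(
    policy: Any,
    rollout: Callable[[Any], Tuple[Trajectory, bool]],  # one episode -> (traj, success)
    fit_bc: Callable[[List[Trajectory]], Any],          # supervised BC training
    dataset: List[Trajectory],
    rounds: int = 10,
    episodes_per_round: int = 100,
):
    """Collect autonomously, keep only successful trajectories, and retrain
    with plain supervised imitation -- no RL objective anywhere."""
    for _ in range(rounds):
        for _ in range(episodes_per_round):
            traj, success = rollout(policy)
            if success:               # a success filter replaces a reward signal
                dataset.append(traj)
        policy = fit_bc(dataset)      # stable, simple supervised update
    return policy
```

The success filter is what keeps the update purely supervised: the policy only ever imitates data it (or a demonstrator) produced successfully.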
no code implementations • 21 Jun 2019 • Xinchen Yan, Mohi Khansari, Jasmine Hsu, Yuanzheng Gong, Yunfei Bai, Sören Pirk, Honglak Lee
Training a deep network policy for robot manipulation is notoriously costly and time-consuming, as it depends on collecting a significant amount of real-world data.
no code implementations • 10 Jun 2019 • Sören Pirk, Mohi Khansari, Yunfei Bai, Corey Lynch, Pierre Sermanet
We propose a self-supervised approach for learning representations of objects from monocular videos and demonstrate that it is particularly useful in situated settings such as robotics.
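One common self-supervised signal for object representations in video is temporal consistency: embeddings of the same object a few frames apart should match, with other objects in the batch acting as negatives. The InfoNCE-style loss below is a generic stand-in for this family of objectives, not the paper's specific formulation:

```python
import torch
import torch.nn.functional as F

def temporal_contrastive_loss(emb_t: torch.Tensor, emb_tk: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """emb_t, emb_tk: (batch, dim) embeddings of the same objects at
    frame t and frame t+k; positives sit on the diagonal."""
    z1 = F.normalize(emb_t, dim=-1)
    z2 = F.normalize(emb_tk, dim=-1)
    logits = z1 @ z2.t() / temperature                 # pairwise similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```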
no code implementations • 7 Jun 2019 • Allan Zhou, Eric Jang, Daniel Kappler, Alex Herzog, Mohi Khansari, Paul Wohlhart, Yunfei Bai, Mrinal Kalakrishnan, Sergey Levine, Chelsea Finn
Imitation learning allows agents to learn complex behaviors from demonstrations.
1 code implementation • 5 Mar 2019 • Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, Pierre Sermanet
Learning from play (LfP) offers three main advantages: 1) It is cheap.
no code implementations • 13 Apr 2018 • Vikas Sindhwani, Stephen Tu, Mohi Khansari
We propose a new non-parametric framework for learning incrementally stable dynamical systems ẋ = f(x) from a set of sampled trajectories.
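A toy version of the two ingredients: fit ẋ ≈ f(x) non-parametrically (here with random Fourier features, a lightweight stand-in for the paper's kernel machinery), then correct the learned velocity so a simple quadratic Lyapunov function never increases. This is much cruder than the paper's contraction analysis, but illustrates the same goal:

```python
import numpy as np

def fit_vector_field(X, Xdot, n_features=200, reg=1e-3, seed=0):
    """Ridge regression on random Fourier features: X, Xdot are (n, d)
    arrays of sampled states and velocities. Returns a callable f(x)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_features))
    b = rng.uniform(0, 2 * np.pi, n_features)
    phi = lambda x: np.cos(x @ W + b)
    P = phi(X)
    coef = np.linalg.solve(P.T @ P + reg * np.eye(n_features), P.T @ Xdot)
    return lambda x: phi(x) @ coef

def stabilized(f, target):
    """Project the learned velocity so V(x) = ||x - target||^2 is
    non-increasing -- a simple Lyapunov-style correction."""
    def f_stable(x):
        v = f(x)
        e = x - target
        ve = float(v @ e)
        if ve > 0:                              # velocity points away from target:
            v = v - (ve / (e @ e + 1e-9)) * e   # remove the destabilizing component
        return v
    return f_stable
```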
1 code implementation • 24 Aug 2017 • Xinchen Yan, Jasmine Hsu, Mohi Khansari, Yunfei Bai, Arkanath Pathak, Abhinav Gupta, James Davidson, Honglak Lee
Our contributions are fourfold: (1) to the best of our knowledge, we present for the first time a method to learn a 6-DOF grasping net from RGBD input; (2) we build a grasping dataset from demonstrations in virtual reality with rich sensory and interaction annotations.
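A hedged sketch of what a 6-DOF grasp predictor over RGBD input can look like: a small CNN over a 4-channel image emitting a translation, a unit quaternion, and a grasp-quality score. The architecture and output parameterization are assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraspNet6DOF(nn.Module):
    """Illustrative 6-DOF grasp predictor from a 4-channel RGBD image."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, 3 + 4 + 1)  # translation + quaternion + quality

    def forward(self, rgbd: torch.Tensor):
        h = self.conv(rgbd).flatten(1)
        out = self.fc(h)
        t = out[:, :3]                         # grasp position
        q = F.normalize(out[:, 3:7], dim=-1)   # unit quaternion for rotation
        score = torch.sigmoid(out[:, 7:])      # grasp success probability
        return t, q, score
```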