no code implementations • 8 Feb 2023 • Cem Gokmen, Daniel Ho, Mohi Khansari
However, the black-box nature of end-to-end Imitation Learning models such as Behavioral Cloning, as well as the lack of an explicit state-value representation, makes it difficult to predict failures.
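A minimal sketch of the kind of approach this motivates, assuming a separately learned state-value estimate is available: flag a likely failure whenever the estimated value of the current state drops below a threshold. The `value_fn` and threshold below are illustrative stand-ins, not the paper's actual model.

```python
import numpy as np

def predict_failure(value_fn, observation, threshold=0.3):
    """Return True when the estimated value of the current state is low."""
    v = value_fn(observation)  # scalar estimate of expected task success
    return v < threshold

# Toy value function (hypothetical): value decays with distance from a goal state.
goal = np.zeros(4)
value_fn = lambda obs: np.exp(-np.linalg.norm(obs - goal))

print(predict_failure(value_fn, np.array([0.1, 0.0, 0.2, 0.0])))  # False: near goal
print(predict_failure(value_fn, np.array([2.0, 1.5, 2.0, 1.0])))  # True: far off-track
```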
3 code implementations • 4 Apr 2022 • Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, Andy Zeng
We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment.
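A minimal sketch of this combination: the language model scores how useful each skill is for the instruction, a value function scores whether the skill can currently succeed, and their product selects the next skill. The scores below are made-up placeholders, not outputs of the actual models.

```python
skills = ["find a sponge", "go to the table", "wipe the table"]

# Hypothetical LLM relevance scores for the instruction "clean the table".
llm_score = {"find a sponge": 0.7, "go to the table": 0.2, "wipe the table": 0.1}

# Hypothetical value-function (affordance) scores for the current state:
# wiping is not yet feasible because no sponge is held.
value_score = {"find a sponge": 0.9, "go to the table": 0.8, "wipe the table": 0.05}

# Grounded skill selection: language relevance times physical feasibility.
combined = {s: llm_score[s] * value_score[s] for s in skills}
print(max(combined, key=combined.get))  # "find a sponge"
```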
no code implementations • 15 Feb 2022 • Yuqing Du, Daniel Ho, Alexander A. Alemi, Eric Jang, Mohi Khansari
In this work we investigate and demonstrate the benefits of a Bayesian approach to imitation learning from multiple sensor inputs, as applied to the task of opening office doors with a mobile manipulator.
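One Bayesian-flavored idea this suggests, sketched under assumptions: fuse per-sensor action predictions by precision (inverse variance), so a less certain sensor contributes less. The per-sensor means and variances here are placeholders (e.g. as might come from Monte Carlo dropout); this is not the paper's model.

```python
import numpy as np

def fuse(means, variances):
    """Precision-weighted fusion of independent Gaussian predictions."""
    means, variances = np.asarray(means), np.asarray(variances)
    precision = 1.0 / variances
    fused_mean = (precision * means).sum(axis=0) / precision.sum(axis=0)
    fused_var = 1.0 / precision.sum(axis=0)
    return fused_mean, fused_var

# Hypothetical per-sensor action predictions: confident RGB, uncertain depth.
rgb_action, rgb_var = np.array([0.4, -0.1]), np.array([0.01, 0.02])
depth_action, depth_var = np.array([0.1, -0.3]), np.array([0.20, 0.20])
print(fuse([rgb_action, depth_action], [rgb_var, depth_var]))
```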
no code implementations • 3 Feb 2022 • Mohi Khansari, Daniel Ho, Yuqing Du, Armando Fuentes, Matthew Bennice, Nicolas Sievers, Sean Kirmani, Yunfei Bai, Eric Jang
To the best of our knowledge, this is the first work to tackle latched door opening from a purely end-to-end learning approach, where the tasks of navigation and manipulation are jointly modeled by a single neural network.
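An illustrative sketch of what a jointly modeled policy can look like (not the paper's architecture): one network maps an image observation to both base-velocity and arm commands, so navigation and manipulation share a single end-to-end model.

```python
import torch
import torch.nn as nn

class NavManipPolicy(nn.Module):
    """Toy single-network policy with shared encoder and two action heads."""

    def __init__(self, arm_dof=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.base_head = nn.Linear(32, 2)       # linear and angular base velocity
        self.arm_head = nn.Linear(32, arm_dof)  # arm joint commands

    def forward(self, image):
        z = self.encoder(image)
        return self.base_head(z), self.arm_head(z)

policy = NavManipPolicy()
base_cmd, arm_cmd = policy(torch.randn(1, 3, 96, 96))
print(base_cmd.shape, arm_cmd.shape)  # torch.Size([1, 2]) torch.Size([1, 7])
```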
no code implementations • 29 Sep 2021 • Michelle Guo, Wenhao Yu, Daniel Ho, Jiajun Wu, Yunfei Bai, Karen Liu, Wenlong Lu
In addition, we perform two studies showing that UC-DiffOSI operates well in environments with changing or unknown dynamics.
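A generic sketch of differentiable online system identification, the family of methods this entry belongs to (not UC-DiffOSI's exact procedure): treat an unknown dynamics parameter as learnable and take gradient steps so a differentiable model matches observed transitions. The one-parameter point-mass model below is a made-up example.

```python
import torch

# Ground truth used only to synthesize an "observed" velocity change.
true_mass, force, dt = 2.0, 1.0, 0.1
observed_dv = force / true_mass * dt

mass = torch.tensor(1.0, requires_grad=True)  # initial guess for the unknown mass
opt = torch.optim.Adam([mass], lr=0.05)

for _ in range(200):
    predicted_dv = force / mass * dt          # differentiable dynamics model
    loss = (predicted_dv - observed_dv) ** 2  # match the observed transition
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(mass))  # converges toward the true mass of 2.0
```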
no code implementations • 23 Nov 2020 • Zhuo Xu, Wenhao Yu, Alexander Herzog, Wenlong Lu, Chuyuan Fu, Masayoshi Tomizuka, Yunfei Bai, C. Karen Liu, Daniel Ho
General contact-rich manipulation problems are long-standing challenges in robotics due to the difficulty of understanding complicated contact physics.
1 code implementation • 11 Jun 2020 • Alvin Wan, Daniel Ho, Younjin Song, Henk Tillman, Sarah Adel Bargal, Joseph E. Gonzalez
To address this, prior work combines neural networks with decision trees.
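A toy sketch in the spirit of that line of work (not the paper's actual algorithm): derive inner-node weights by averaging the network's final-layer class weights within each group, then classify by greedily routing a feature vector down the hierarchy. The two-level hierarchy below is hand-built for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))           # final-layer weights for 4 classes
groups = [[0, 1], [2, 3]]             # hand-built two-level class hierarchy
group_w = np.stack([W[g].mean(axis=0) for g in groups])  # inner-node weights

def tree_predict(feature):
    """Greedily route a feature from the root to a leaf class."""
    g = int(np.argmax(group_w @ feature))                  # root decision
    return groups[g][int(np.argmax(W[groups[g]] @ feature))]

feature = rng.normal(size=8)
print(tree_predict(feature), int(np.argmax(W @ feature)))  # tree vs. flat argmax
```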
no code implementations • EMNLP 2021 • Pei Zhou, Rahul Khanna, Seyeon Lee, Bill Yuchen Lin, Daniel Ho, Jay Pujara, Xiang Ren
Pre-trained language models (PTLMs) have achieved impressive performance on commonsense inference benchmarks, but their ability to employ commonsense to make robust inferences, which is crucial for effective communication with humans, is debated.
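A simple probe in this spirit (illustrative only, not the paper's benchmark): compare a masked language model's predictions on logically equivalent phrasings of the same commonsense statement; a robust model should answer consistently.

```python
from transformers import pipeline

# Masked-LM probe; RoBERTa uses the "<mask>" token.
fill_mask = pipeline("fill-mask", model="roberta-base")

prompts = [
    "A brick is heavier than a feather, so it is <mask> to carry.",
    "A feather is lighter than a brick, so the brick is <mask> to carry.",
]
for prompt in prompts:
    top = fill_mask(prompt, top_k=3)
    print([cand["token_str"].strip() for cand in top])
```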
2 code implementations • 1 Apr 2020 • Alvin Wan, Lisa Dunlap, Daniel Ho, Jihan Yin, Scott Lee, Henry Jin, Suzanne Petryk, Sarah Adel Bargal, Joseph E. Gonzalez
Machine learning applications such as finance and medicine demand accurate and justifiable predictions, barring most deep learning methods from use.
3 code implementations • 14 May 2019 • Daniel Ho, Eric Liang, Ion Stoica, Pieter Abbeel, Xi Chen
A key challenge in leveraging data augmentation for neural network training is choosing an effective augmentation policy from a large search space of candidate operations.
Ranked #4 on Image Classification on SVHN
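A hedged sketch of the search problem described above, using a simplified random search rather than the paper's population-based schedule: each candidate policy is a few (operation, magnitude) pairs, scored by a validation metric. `evaluate` is a placeholder for training a child model with the policy.

```python
import random

OPS = ["rotate", "shear_x", "translate_y", "autocontrast", "cutout"]

def sample_policy(n_ops=2):
    """Draw a candidate augmentation policy from the search space."""
    return [(random.choice(OPS), random.uniform(0, 1)) for _ in range(n_ops)]

def evaluate(policy):
    # Placeholder: stands in for validation accuracy after training with `policy`.
    return random.random()

best = max((sample_policy() for _ in range(20)), key=evaluate)
print(best)
```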
no code implementations • CVPR 2018 • T. Nathan Mundhenk, Daniel Ho, Barry Y. Chen
We develop a set of methods to improve on the results of self-supervised learning using context.
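For context, a sketch of the classic context-prediction pretext task that such methods build on (illustrative setup, not the paper's improved method): sample a patch and one of its eight neighbors, and train a model to predict which neighbor position was sampled.

```python
import numpy as np

# The eight relative positions of a neighboring patch.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]

def make_example(image, patch=8):
    """Return (anchor patch, neighbor patch, position label) for the pretext task."""
    h, w = image.shape[:2]
    cy = np.random.randint(1, h // patch - 1)  # keep all neighbors in bounds
    cx = np.random.randint(1, w // patch - 1)
    label = np.random.randint(len(OFFSETS))
    dy, dx = OFFSETS[label]
    crop = lambda y, x: image[y * patch:(y + 1) * patch, x * patch:(x + 1) * patch]
    return crop(cy, cx), crop(cy + dy, cx + dx), label

anchor, neighbor, label = make_example(np.random.rand(64, 64, 3))
print(anchor.shape, neighbor.shape, label)
```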