no code implementations • 5 Mar 2024 • Fangchen Liu, Kuan Fang, Pieter Abbeel, Sergey Levine
In this paper, we present MOKA (Marking Open-vocabulary Keypoint Affordances), an approach that employs VLMs to solve robotic manipulation tasks specified by free-form language descriptions.
no code implementations • 7 Jul 2023 • Xingyu Lin, John So, Sashwat Mahalingam, Fangchen Liu, Pieter Abbeel
In this work, we present a focused study of the generalization capabilities of the pre-trained visual representations at the categorical level.
no code implementations • 5 Jul 2023 • Fangchen Liu, Larissa Gaul, Andrea Giometto, Mingming Wu
Microalgae are key players in the global carbon cycle and emerging producers of biofuels.
1 code implementation • 3 Apr 2023 • Zhiwei Jia, Fangchen Liu, Vineet Thumuluri, Linghao Chen, Zhiao Huang, Hao Su
We study generalizable policy learning from demonstrations for complex low-level control tasks (e.g., contact-rich object manipulation).
1 code implementation • 10 Feb 2023 • Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, Joseph E. Gonzalez
In this paper, we consider an alternative approach: converting feedback into instructions by relabeling the original instruction and training the model for better alignment in a supervised manner.
1 code implementation • 23 Nov 2022 • Fangchen Liu, Hao Liu, Aditya Grover, Pieter Abbeel
We are interested in learning scalable agents for reinforcement learning that can learn from large-scale, diverse sequential data similar to current large vision and language models.
no code implementations • 15 Sep 2022 • Younggyo Seo, Kimin Lee, Fangchen Liu, Stephen James, Pieter Abbeel
Video prediction is an important yet challenging problem, burdened with the dual tasks of generating future frames and learning environment dynamics.
no code implementations • 28 Jun 2022 • Younggyo Seo, Danijar Hafner, Hao Liu, Fangchen Liu, Stephen James, Kimin Lee, Pieter Abbeel
Yet the current approaches typically train a single model end-to-end for learning both visual representations and dynamics, making it difficult to accurately model the interaction between robots and small objects.
no code implementations • 26 Oct 2021 • Zhao Mandi, Fangchen Liu, Kimin Lee, Pieter Abbeel
We then study the multi-task setting, where multi-task training is followed by (i) one-shot imitation on variations within the training tasks, (ii) one-shot imitation on new tasks, and (iii) fine-tuning on new tasks.
no code implementations • 29 Sep 2021 • Younggyo Seo, Kimin Lee, Fangchen Liu, Stephen James, Pieter Abbeel
Video prediction is an important yet challenging problem, burdened with the dual tasks of generating future frames and learning environment dynamics.
1 code implementation • CVPR 2020 • Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua Liu, Hanxiao Jiang, Yifu Yuan, He Wang, Li Yi, Angel X. Chang, Leonidas J. Guibas, Hao Su
To achieve this task, a simulated environment with physically realistic simulation, a sufficient variety of articulated objects, and transferability to the real robot is indispensable.
no code implementations • ICLR 2020 • Fangchen Liu, Zhan Ling, Tongzhou Mu, Hao Su
Consider an imitation learning problem in which the imitator and the expert have different dynamics models.
1 code implementation • NeurIPS 2019 • Zhiao Huang, Fangchen Liu, Hao Su
An agent that fully understands its environment should be able to apply its skills to any given goal, leading to the fundamental problem of learning a Universal Value Function Approximator (UVFA).
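The core idea of a UVFA is that the value estimate is conditioned on both the state and the goal, V(s, g), so a single function generalizes across all goals. A minimal tabular sketch of this idea (an illustration of goal-conditioned value learning in general, not the authors' method; the environment, state/goal counts, and learning rate below are assumptions for the example):

```python
import numpy as np

class TabularUVFA:
    """Tabular goal-conditioned value function V(s, g) trained by TD learning."""

    def __init__(self, n_states, n_goals, lr=0.5, gamma=0.9):
        self.values = np.zeros((n_states, n_goals))  # one entry per (state, goal) pair
        self.lr = lr
        self.gamma = gamma

    def update(self, s, g, reward, s_next, done):
        # One-step TD update toward the goal-conditioned target.
        target = reward if done else reward + self.gamma * self.values[s_next, g]
        self.values[s, g] += self.lr * (target - self.values[s, g])

# Toy chain environment: the agent steps toward the goal index and
# receives reward 1 only upon reaching it.
uvfa = TabularUVFA(n_states=4, n_goals=4)
for _ in range(50):  # repeated sweeps over all (state, goal) pairs
    for s in range(4):
        for g in range(4):
            s_next = min(s + 1, 3) if s < g else max(s - 1, 0)
            done = (s_next == g)
            uvfa.update(s, g, float(done), s_next, done)
```

After training, values decay with distance to the goal (e.g., states closer to goal 3 have higher V(s, 3)), which is exactly the cross-goal generalization a UVFA is meant to capture.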
1 code implementation • CVPR 2019 • Bo Sun, Nian-hsuan Tsai, Fangchen Liu, Ronald Yu, Hao Su
We propose an adversarial defense method that achieves state-of-the-art performance among attack-agnostic adversarial defense methods while also maintaining robustness to input resolution, scale of adversarial perturbation, and scale of dataset size.
4 code implementations • CVPR 2020 • Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, Trevor Darrell
Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving.
no code implementations • ICLR 2018 • Xiangyu Kong, Bo Xin, Fangchen Liu, Yizhou Wang
Many tasks in artificial intelligence require the collaboration of multiple agents.