no code implementations • 2 Nov 2023 • Carmelo Sferrazza, Younggyo Seo, Hao Liu, Youngwoon Lee, Pieter Abbeel
For tasks requiring object manipulation, we seamlessly and effectively exploit the complementarity of our senses of vision and touch.
1 code implementation • 23 Aug 2023 • Ademi Adeniji, Amber Xie, Carmelo Sferrazza, Younggyo Seo, Stephen James, Pieter Abbeel
Using learned reward functions (LRFs) to solve sparse-reward reinforcement learning (RL) tasks has yielded steady progress in task complexity over the years.
1 code implementation • 20 Mar 2023 • Junsu Kim, Younggyo Seo, Sungsoo Ahn, Kyunghwan Son, Jinwoo Shin
Recently, graph-based planning algorithms have gained much attention for solving goal-conditioned reinforcement learning (RL) tasks: they provide a sequence of subgoals toward the target goal, and agents learn to execute subgoal-conditioned policies.
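The subgoal-sequencing idea behind graph-based planners can be sketched with plain breadth-first search over a graph of visited states. This is a simplified illustration under assumptions of my own (a hand-built adjacency dict, unweighted edges), not the method of any specific paper above:

```python
from collections import deque

def subgoal_sequence(graph, start, goal):
    """BFS over a graph of states; returns the subgoals leading to goal.

    graph: dict mapping each state to a list of reachable neighbor states.
    Returns the states after `start` along a shortest path, or None.
    """
    parents = {start: None}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            path = []
            while s is not None:          # walk parent pointers back to start
                path.append(s)
                s = parents[s]
            return path[::-1][1:]         # drop start; keep the subgoal chain
        for nxt in graph.get(s, []):
            if nxt not in parents:        # first visit => shortest path in BFS
                parents[nxt] = s
                queue.append(nxt)
    return None                           # goal unreachable from start
```

A low-level policy would then be conditioned on each returned subgoal in turn; real methods additionally build and prune this graph from experience.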
1 code implementation • 5 Feb 2023 • Younggyo Seo, Junsu Kim, Stephen James, Kimin Lee, Jinwoo Shin, Pieter Abbeel
In this paper, we investigate how to learn good representations with multi-view data and utilize them for visual robotic manipulation.
no code implementations • 15 Sep 2022 • Younggyo Seo, Kimin Lee, Fangchen Liu, Stephen James, Pieter Abbeel
Video prediction is an important yet challenging problem, burdened with the dual tasks of generating future frames and learning environment dynamics.
no code implementations • 28 Jun 2022 • Younggyo Seo, Danijar Hafner, Hao Liu, Fangchen Liu, Stephen James, Kimin Lee, Pieter Abbeel
Yet current approaches typically train a single model end-to-end to learn both visual representations and dynamics, making it difficult to accurately model the interaction between robots and small objects.
Model-based Reinforcement Learning • Reinforcement Learning (RL) +1
2 code implementations • 25 Mar 2022 • Younggyo Seo, Kimin Lee, Stephen James, Pieter Abbeel
Our framework consists of two phases: we pre-train an action-free latent video prediction model, and then utilize the pre-trained representations for efficiently learning action-conditional world models on unseen environments.
no code implementations • ICLR 2022 • Jongjin Park, Younggyo Seo, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
In order to leverage unlabeled samples for reward learning, we infer pseudo-labels of the unlabeled samples based on the confidence of the preference predictor.
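The confidence-based pseudo-labeling step described here can be sketched as follows. The `predictor` interface and the threshold value are illustrative assumptions of mine, not the paper's actual implementation:

```python
def pseudo_label(predictor, unlabeled_pairs, threshold=0.9):
    """Assign pseudo preference labels to unlabeled segment pairs.

    predictor(s0, s1) is assumed to return P(s0 is preferred over s1).
    A pair is kept only when the predictor is confident enough.
    """
    labeled = []
    for s0, s1 in unlabeled_pairs:
        p = predictor(s0, s1)
        conf = max(p, 1.0 - p)            # confidence of the binary prediction
        if conf >= threshold:             # discard low-confidence pairs
            labeled.append((s0, s1, int(p > 0.5)))
    return labeled
```

The pseudo-labeled pairs can then be mixed into the reward-learning dataset alongside the human-labeled preferences.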
1 code implementation • NeurIPS 2021 • Jongjin Park, Younggyo Seo, Chang Liu, Li Zhao, Tao Qin, Jinwoo Shin, Tie-Yan Liu
Behavioral cloning has proven to be effective for learning sequential decision-making policies from expert demonstrations.
1 code implementation • NeurIPS 2021 • Junsu Kim, Younggyo Seo, Jinwoo Shin
In this paper, we present HIerarchical reinforcement learning Guided by Landmarks (HIGL), a novel framework for training a high-level policy with a reduced action space guided by landmarks, i.e., promising states to explore.
Efficient Exploration • Hierarchical Reinforcement Learning +2
no code implementations • 29 Sep 2021 • Younggyo Seo, Kimin Lee, Fangchen Liu, Stephen James, Pieter Abbeel
Video prediction is an important yet challenging problem, burdened with the dual tasks of generating future frames and learning environment dynamics.
1 code implementation • 1 Jul 2021 • SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin
Recent advances in deep offline reinforcement learning (RL) have made it possible to train strong robotic agents from offline datasets.
2 code implementations • ICLR Workshop SSL-RL 2021 • Younggyo Seo, Lili Chen, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
Recent exploration methods have proven to be a recipe for improving sample-efficiency in deep reinforcement learning (RL).
no code implementations • 1 Jan 2021 • SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin
As it turns out, fine-tuning offline RL agents is a non-trivial challenge due to distribution shift: the agent encounters out-of-distribution samples during online interaction, which may cause bootstrapping error in Q-learning and instability during fine-tuning.
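The bootstrapping error mentioned here arises because the Q-learning target is built from the agent's own value estimates. A minimal tabular sketch with toy states and a hypothetical out-of-distribution action (my own construction, not the paper's setup) shows how one inflated estimate propagates back through the target:

```python
def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.99):
    """One tabular Q-learning step: Q(s,a) += alpha * (target - Q(s,a))."""
    target = r + gamma * max(Q[s_next].values())  # bootstrap from next state
    Q[s][a] += alpha * (target - Q[s][a])
    return Q[s][a]

# Action 'b' in state 1 was never corrected during offline training, so its
# stale optimistic value dominates the max and inflates Q[0]['a'].
Q = {0: {'a': 0.0}, 1: {'a': 0.1, 'b': 10.0}}     # 'b' is out-of-distribution
q_update(Q, 0, 'a', 0.0, 1)                        # Q[0]['a'] jumps to 4.95
```

With the OOD value in place, the update pulls Q[0]['a'] from 0.0 to 4.95 in a single step; without action 'b', the same update would only reach about 0.05.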
1 code implementation • NeurIPS 2020 • Younggyo Seo, Kimin Lee, Ignasi Clavera, Thanard Kurutach, Jinwoo Shin, Pieter Abbeel
Model-based reinforcement learning (RL) has shown great potential in various control tasks in terms of both sample-efficiency and final performance.
1 code implementation • ICML 2020 • Sungsoo Ahn, Younggyo Seo, Jinwoo Shin
The need to design efficient algorithms for combinatorial optimization arises ubiquitously across scientific fields.
2 code implementations • ICML 2020 • Kimin Lee, Younggyo Seo, Seung-Hyun Lee, Honglak Lee, Jinwoo Shin
Model-based reinforcement learning (RL) enjoys several benefits, such as data-efficiency and planning, by learning a model of the environment's dynamics.
Model-based Reinforcement Learning • Reinforcement Learning +1
no code implementations • 25 Sep 2019 • Sungsoo Ahn, Younggyo Seo, Jinwoo Shin
The need to design efficient algorithms for combinatorial optimization arises ubiquitously across scientific fields.