no code implementations • 14 Dec 2023 • Taewook Nam, Juyong Lee, Jesse Zhang, Sung Ju Hwang, Joseph J. Lim, Karl Pertsch
We propose a framework that leverages foundation models as teachers, guiding a reinforcement learning agent to acquire semantically meaningful behavior without human feedback.
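The abstract describes scoring an agent's behavior with a foundation model instead of human feedback. A minimal sketch of such a semantic reward, assuming a text encoder and a caption of the agent's behavior are available; `embed` and `teacher_reward` are hypothetical names, and the hash-seeded encoder is only a deterministic stand-in for a real foundation-model embedding:

```python
import zlib
import numpy as np

def embed(text):
    """Stand-in for a foundation-model text encoder (e.g. a CLIP- or
    LLM-style embedding). Here: a deterministic toy vector seeded by a
    CRC32 of the text, normalized to unit length."""
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.normal(size=16)
    return v / np.linalg.norm(v)

def teacher_reward(task_description, behavior_caption):
    """Cosine similarity between the task description and a caption of
    the agent's observed behavior, used as a semantic reward signal in
    place of human feedback."""
    return float(embed(task_description) @ embed(behavior_caption))

# Identical descriptions score maximally; unrelated behavior scores lower.
r_match = teacher_reward("open the drawer", "open the drawer")
r_other = teacher_reward("open the drawer", "pick up the mug")
```

In a real system the captions would come from an observation model and the reward would be fed to a standard RL algorithm; this sketch only illustrates the teacher-as-reward idea.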
no code implementations • ICLR 2022 • Taewook Nam, Shao-Hua Sun, Karl Pertsch, Sung Ju Hwang, Joseph J. Lim
While deep reinforcement learning methods have shown impressive results in robot learning, their sample inefficiency makes learning complex, long-horizon behaviors on real robot systems infeasible.
2 code implementations • ICLR 2020 • Hae Beom Lee, Taewook Nam, Eunho Yang, Sung Ju Hwang
Specifically, we meta-learn a noise generator that outputs a multiplicative noise distribution over latent features, obtaining low errors on test instances in an input-dependent manner.
Ranked #1 on Meta-Learning on OMNIGLOT - 1-Shot, 20-way
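The mechanism in the abstract — an input-dependent multiplicative noise distribution over latent features — can be sketched as follows. This is a toy numpy version under stated assumptions: `noise_generator` is a hypothetical one-layer network producing the mean of a log-normal noise, not the paper's exact architecture or meta-learning procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_generator(z, W, b):
    """Hypothetical noise generator: maps latent features z to the
    per-feature mean (pre-exponential) of a log-normal multiplicative
    noise distribution, so the noise depends on the input."""
    return np.tanh(z @ W + b)

def perturb(z, W, b, sigma=0.5):
    """Sample multiplicative noise conditioned on z and apply it."""
    mu = noise_generator(z, W, b)
    eps = rng.normal(mu, sigma, size=z.shape)
    noise = np.exp(eps)  # log-normal, hence strictly positive
    return z * noise

# Toy latent batch: 4 examples, 8 features; W and b would be
# meta-learned in the actual method, here they are random.
z = rng.normal(size=(4, 8))
W = rng.normal(scale=0.1, size=(8, 8))
b = np.zeros(8)
z_noisy = perturb(z, W, b)
```

Because the noise is strictly positive and multiplicative, it rescales each latent feature without flipping its sign; the meta-learning outer loop (not shown) would train the generator so that the perturbed features yield low test error.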
1 code implementation • 30 May 2019 • Hae Beom Lee, Taewook Nam, Eunho Yang, Sung Ju Hwang
Specifically, we meta-learn a noise generator that outputs a multiplicative noise distribution over latent features, obtaining low errors on test instances in an input-dependent manner.