no code implementations • 8 Mar 2024 • Katie Kang, Eric Wallace, Claire Tomlin, Aviral Kumar, Sergey Levine
Large language models (LLMs) have a tendency to generate plausible-sounding yet factually incorrect responses, especially when queried on unfamiliar concepts.
1 code implementation • 2 Oct 2023 • Katie Kang, Amrith Setlur, Claire Tomlin, Sergey Levine
We observe that, rather than extrapolating in arbitrary ways, neural network predictions often tend toward a constant value as input data becomes increasingly OOD.
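This reversion-to-a-constant effect can be illustrated with a minimal sketch (not the paper's experiment): for a network with saturating activations such as tanh, pushing an input far along a fixed direction drives the hidden units into saturation, so the output flattens to a constant. The random two-layer MLP below is purely hypothetical and untrained; it only demonstrates the qualitative behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer tanh MLP: f(x) = W2 @ tanh(W1 @ x + b1) + b2
W1 = rng.normal(size=(32, 4))
b1 = rng.normal(size=32)
W2 = rng.normal(size=(1, 32))
b2 = rng.normal(size=1)

def predict(x):
    return float(W2 @ np.tanh(W1 @ x + b1) + b2)

x = rng.normal(size=4)        # a fixed input direction
near = predict(x)             # in-distribution scale
far = predict(1e6 * x)        # far out of distribution
farther = predict(1e9 * x)    # even farther along the same ray

# With saturating activations, predictions along a ray stop changing:
print(near, far, farther)
print(abs(far - farther))     # near zero: the prediction has converged
```

With unbounded activations (e.g. ReLU) the mechanism differs, but the paper reports the same qualitative tendency toward a constant empirically.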
no code implementations • 1 Dec 2022 • Thomas T. Zhang, Katie Kang, Bruce D. Lee, Claire Tomlin, Sergey Levine, Stephen Tu, Nikolai Matni
In particular, we consider a setting where learning is split into two phases: (a) a pre-training step where a shared $k$-dimensional representation is learned from $H$ source policies, and (b) a target policy fine-tuning step where the learned representation is used to parameterize the policy class.
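The two-phase setup can be sketched in the linear case (an illustrative toy, not the paper's algorithm): each source policy is a linear map factoring through a shared $k$-dimensional representation; pre-training recovers that subspace from the $H$ source policies, and fine-tuning fits the target policy inside it. All names and dimensions below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, H = 20, 3, 10        # input dim, shared rep dim, number of source policies

Phi = rng.normal(size=(k, d))                                # true shared representation
sources = [rng.normal(size=(1, k)) @ Phi for _ in range(H)]  # source policy matrices

# (a) Pre-training: recover the shared k-dim row space from the source policies
stacked = np.vstack(sources)          # (H, d)
_, _, Vt = np.linalg.svd(stacked)
Phi_hat = Vt[:k]                      # estimated representation (k, d)

# (b) Fine-tuning: fit the target policy within the learned subspace
theta_target = rng.normal(size=(1, k)) @ Phi   # true target policy
X = rng.normal(size=(100, d))                  # target-task inputs
y = X @ theta_target.T                         # target actions (noiseless)
w, *_ = np.linalg.lstsq(X @ Phi_hat.T, y, rcond=None)
theta_hat = (Phi_hat.T @ w).T                  # (1, d) recovered policy

print(np.max(np.abs(theta_hat - theta_target)))  # near zero: target recovered
```

Because the target policy lies in the span of the shared representation, fitting only $k$ coefficients in the learned subspace recovers it from far fewer target samples than fitting all $d$ parameters directly.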
no code implementations • 21 Jun 2022 • Katie Kang, Paula Gradu, Jason Choi, Michael Janner, Claire Tomlin, Sergey Levine
Learned models and policies can generalize effectively when evaluated within the distribution of the training data, but can produce unpredictable and erroneous outputs on out-of-distribution inputs.
no code implementations • 24 Jun 2021 • Katie Kang, Gregory Kahn, Sergey Levine
In this work, we propose a deep reinforcement learning algorithm with hierarchically integrated models (HInt).
1 code implementation • 11 Feb 2019 • Katie Kang, Suneel Belkhale, Gregory Kahn, Pieter Abbeel, Sergey Levine
Deep reinforcement learning provides a promising approach for vision-based control of real-world robots.