Search Results for author: Tobias Kreiman

Found 1 paper, 1 paper with code

Foundation Policies with Hilbert Representations

1 code implementation • 23 Feb 2024 • Seohong Park, Tobias Kreiman, Sergey Levine

While a number of methods have been proposed to enable generic self-supervised RL, based on principles such as goal-conditioned RL, behavioral cloning, and unsupervised skill learning, such methods remain limited in terms of either the diversity of the discovered behaviors, the need for high-quality demonstration data, or the lack of a clear prompting or adaptation mechanism for downstream tasks.

Reinforcement Learning (RL) • Unsupervised Pre-training
