Search Results for author: Katherine Shu

Found 1 paper, 1 paper with code

Reward Uncertainty for Exploration in Preference-based Reinforcement Learning

2 code implementations • ICLR 2022 • Xinran Liang, Katherine Shu, Kimin Lee, Pieter Abbeel

Our intuition is that disagreement in the learned reward model reflects uncertainty in the tailored human feedback and could be useful for exploration.

Tasks: reinforcement-learning • Reinforcement Learning (RL) • +1
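
The excerpt above captures the paper's core idea: when reward models learned from human preferences disagree about a transition, that disagreement signals uncertainty in the feedback and can be paid out as an intrinsic exploration bonus. Below is a minimal sketch of that idea, assuming a PyTorch ensemble of reward networks; the network sizes, ensemble size, and bonus coefficient beta are illustrative placeholders, not the authors' implementation.

    import torch
    import torch.nn as nn


    class RewardEnsemble(nn.Module):
        """Ensemble of small reward networks trained on human preference labels."""

        def __init__(self, obs_dim: int, act_dim: int, n_members: int = 3):
            super().__init__()
            self.members = nn.ModuleList(
                nn.Sequential(
                    nn.Linear(obs_dim + act_dim, 64),
                    nn.ReLU(),
                    nn.Linear(64, 1),
                )
                for _ in range(n_members)
            )

        def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
            x = torch.cat([obs, act], dim=-1)
            # Stack per-member predictions: shape (n_members, batch, 1)
            return torch.stack([m(x) for m in self.members])


    def reward_with_exploration_bonus(ensemble: RewardEnsemble,
                                      obs: torch.Tensor,
                                      act: torch.Tensor,
                                      beta: float = 0.05) -> torch.Tensor:
        """Mean predicted reward plus a disagreement-based exploration bonus."""
        preds = ensemble(obs, act)     # (n_members, batch, 1)
        mean = preds.mean(dim=0)       # best estimate of the learned reward
        std = preds.std(dim=0)         # disagreement ~ uncertainty in human feedback
        return mean + beta * std       # bonus steers the agent toward uncertain states

A natural refinement is to decay beta over training so the exploration bonus fades as the reward models become more confident; the schedule is omitted here for brevity.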
