1 code implementation • 26 Apr 2024 • Stephen Zhao, Rob Brekelmans, Alireza Makhzani, Roger Grosse
Numerous capability and safety techniques for Large Language Models (LLMs), including RLHF, automated red-teaming, prompt engineering, and infilling, can be cast as sampling from an unnormalized target distribution defined by a given reward or potential function over the full sequence.
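As a toy illustration of the setup described above (not the paper's method), the sketch below uses self-normalized importance sampling to estimate expectations under a target distribution σ(s) ∝ p(s)·exp(r(s)/β), where p is a hypothetical base model and r is a hypothetical terminal reward; the vocabulary, reward, and β are all assumptions made for the example.

```python
import math
import random

random.seed(0)

# Toy "base language model": i.i.d. unigram distribution over a tiny vocabulary.
VOCAB = ["a", "b", "c"]
P_TOKEN = {"a": 0.5, "b": 0.3, "c": 0.2}

def reward(seq):
    """Hypothetical terminal reward over the full sequence: count of 'a' tokens."""
    return seq.count("a")

def sample_base(length):
    """Draw one full sequence from the toy base model."""
    weights = [P_TOKEN[t] for t in VOCAB]
    return [random.choices(VOCAB, weights=weights)[0] for _ in range(length)]

def snis_estimate(f, n=5000, length=4, beta=1.0):
    """Self-normalized importance sampling under the unnormalized target
    sigma(s) ∝ p(s) * exp(reward(s) / beta).
    Since samples come from p itself, the importance weight of a sample s
    is exp(reward(s) / beta)."""
    samples = [sample_base(length) for _ in range(n)]
    weights = [math.exp(reward(s) / beta) for s in samples]
    total = sum(weights)
    return sum(w * f(s) for w, s in zip(weights, samples)) / total

# Under the base model the expected 'a'-count is 0.5 * 4 = 2.0;
# the reward-tilted target shifts it upward.
est = snis_estimate(lambda s: s.count("a"))
```

This is the simplest possible sampler for such a target; it degrades quickly as sequences grow, which is why more sophisticated sequential samplers are of interest for this problem class.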
1 code implementation • 18 Oct 2022 • Stephen Zhao, Chris Lu, Roger Baker Grosse, Jakob Nicolaus Foerster
This problem is especially pronounced in the opponent modeling setting, where the opponent's policy is unknown and must be inferred from observations; in such settings, LOLA is ill-specified because behaviorally equivalent opponent policies can result in non-equivalent updates.
2 code implementations • ICML 2020 • Silviu Pitis, Harris Chan, Stephen Zhao, Bradly Stadie, Jimmy Ba
What goals should a multi-goal reinforcement learning agent pursue during training in long-horizon tasks?