no code implementations • 3 Oct 2019 • Kolby Nottingham, Anand Balakrishnan, Jyotirmoy Deshmukh, David Wingate
We propose using propositional logic to specify the importance of multiple objectives.
Tasks: Multi-Objective Reinforcement Learning, Reinforcement Learning
no code implementations • 5 Sep 2021 • Kolby Nottingham, Litian Liang, Daeyun Shin, Charless C. Fowlkes, Roy Fox, Sameer Singh
Natural language instruction following tasks serve as a valuable test-bed for grounded language and robotics research.
no code implementations • 6 Sep 2021 • Robert Kirby, Kolby Nottingham, Rajarshi Roy, Saad Godil, Bryan Catanzaro
In this work we augment state-of-the-art, force-based global placement solvers with a reinforcement learning agent trained to improve the final detail placed Half Perimeter Wire Length (HPWL).
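For context, Half Perimeter Wire Length (HPWL) is the standard placement cost: for each net, the width plus the height of the bounding box enclosing its pin coordinates, summed over all nets. A minimal sketch of the metric (the net data below is hypothetical, not from the paper):

```python
def hpwl(nets):
    """Half Perimeter Wire Length.

    nets: iterable of nets, each a list of (x, y) pin coordinates.
    Returns the sum, over all nets, of the bounding-box
    half-perimeter (width + height) of that net's pins.
    """
    total = 0.0
    for pins in nets:
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Illustrative example: two nets on a toy placement.
nets = [
    [(0, 0), (2, 3)],          # bounding box 2 x 3 -> contributes 5
    [(1, 1), (4, 1), (2, 5)],  # bounding box 3 x 4 -> contributes 7
]
print(hpwl(nets))  # -> 12.0
```

A placement optimizer (force-based or RL-guided, as in the entry above) would seek pin positions that minimize this sum.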
1 code implementation • 25 May 2022 • Kolby Nottingham, Alekhya Pyla, Sameer Singh, Roy Fox
We show that our method correctly learns to execute queries to maximize reward in a reinforcement learning setting.
no code implementations • 28 Jan 2023 • Kolby Nottingham, Prithviraj Ammanabrolu, Alane Suhr, Yejin Choi, Hannaneh Hajishirzi, Sameer Singh, Roy Fox
Reinforcement learning (RL) agents typically learn tabula rasa, without prior knowledge of the world.
no code implementations • 21 Jul 2023 • Kolby Nottingham, Yasaman Razeghi, KyungMin Kim, JB Lanier, Pierre Baldi, Roy Fox, Sameer Singh
Large language models (LLMs) are being applied as actors for sequential decision-making tasks in domains such as robotics and games, utilizing their general world knowledge and planning abilities.
no code implementations • 5 Feb 2024 • Kolby Nottingham, Bodhisattwa Prasad Majumder, Bhavana Dalvi Mishra, Sameer Singh, Peter Clark, Roy Fox
We evaluate our method in the classic video game NetHack and the text environment ScienceWorld to demonstrate SSO's ability to optimize a set of skills and perform in-context policy improvement.