no code implementations • 30 Mar 2022 • Han Wang, Erfan Miahi, Martha White, Marlos C. Machado, Zaheer Abbas, Raksha Kumaraswamy, Vincent Liu, Adam White
In this paper we investigate the properties of representations learned by deep reinforcement learning systems.
no code implementations • NeurIPS 2021 • Matthew McLeod, Chunlok Lo, Matthew Schlegel, Andrew Jacobsen, Raksha Kumaraswamy, Martha White, Adam White
Learning auxiliary tasks, such as multiple predictions about the world, can provide many benefits to reinforcement learning systems.
1 code implementation • 16 Nov 2021 • Eric Graves, Ehsan Imani, Raksha Kumaraswamy, Martha White
A variety of theoretically sound policy gradient algorithms exist for the on-policy setting due to the policy gradient theorem, which provides a simplified form for the gradient.
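For reference, the simplified form the policy gradient theorem provides is commonly written as follows (this is the standard on-policy statement with conventional notation, not necessarily the exact form used in the paper):

```latex
\nabla_\theta J(\theta)
  = \mathbb{E}_{s \sim d^{\pi_\theta},\; a \sim \pi_\theta(\cdot \mid s)}
    \big[ \nabla_\theta \log \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a) \big]
```

Here $d^{\pi_\theta}$ is the discounted state distribution under the current policy $\pi_\theta$, and $Q^{\pi_\theta}$ is its action-value function; the theorem's simplification is that the gradient needs no derivative of the state distribution itself.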
1 code implementation • 9 Nov 2020 • Banafsheh Rafiee, Zaheer Abbas, Sina Ghiassian, Raksha Kumaraswamy, Richard Sutton, Elliot Ludvig, Adam White
We present three new diagnostic prediction problems inspired by classical-conditioning experiments to facilitate research in online prediction learning.
no code implementations • NeurIPS 2018 • Raksha Kumaraswamy, Matthew Schlegel, Adam White, Martha White
Directed exploration strategies for reinforcement learning are critical for learning an optimal policy in a minimal number of interactions with the environment.
no code implementations • 15 Nov 2018 • Vincent Liu, Raksha Kumaraswamy, Lei Le, Martha White
We investigate sparse representations for control in reinforcement learning.
no code implementations • 26 Jul 2017 • Lei Le, Raksha Kumaraswamy, Martha White
Outside of reinforcement learning, sparse coding representations have been widely used, with non-convex objectives that result in discriminative representations.
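A typical sparse coding objective of the kind referenced here can be sketched as follows (a standard dictionary-learning formulation with an $\ell_1$ penalty; the paper's specific objective may differ):

```latex
\min_{D,\, H} \; \| X - D H \|_F^2 + \lambda \| H \|_1
```

with $X$ the data matrix, $D$ the dictionary, and $H$ the sparse codes. The $\ell_1$ penalty induces sparsity in $H$, and the objective is non-convex jointly in $(D, H)$ even though it is convex in each factor alone.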