2 code implementations • NeurIPS 2023 • Michael Beukman, Devon Jarvis, Richard Klein, Steven James, Benjamin Rosman
To this end, we introduce the Decision Adapter, a neural network architecture that generates the weights of an adapter module and conditions the agent's behaviour on context information.
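The idea described above can be sketched as a small hypernetwork: a context vector is mapped to the weights of an adapter layer, so context modulates the computation rather than being appended as an extra input. This is a minimal illustrative sketch, not the paper's implementation; all dimensions, names, and the tanh nonlinearity are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

CTX_DIM, FEAT_DIM, ADAPT_DIM = 3, 8, 8  # assumed toy dimensions

# Hypernetwork parameters: map a context vector to a flattened
# adapter weight matrix plus a bias vector.
W_hyper = rng.normal(0, 0.1, size=(CTX_DIM, FEAT_DIM * ADAPT_DIM + ADAPT_DIM))

def adapter_forward(features, context):
    """Apply a context-generated adapter layer to a feature vector."""
    theta = context @ W_hyper                       # generate adapter params
    W = theta[:FEAT_DIM * ADAPT_DIM].reshape(FEAT_DIM, ADAPT_DIM)
    b = theta[FEAT_DIM * ADAPT_DIM:]
    return np.tanh(features @ W + b)                # adapted representation

features = rng.normal(size=FEAT_DIM)
out_a = adapter_forward(features, np.array([1.0, 0.0, 0.0]))
out_b = adapter_forward(features, np.array([0.0, 1.0, 0.0]))
# Different contexts generate different adapter weights, so the same
# features are transformed differently.
```

The key design choice this illustrates is that context selects the *function* applied to the features, instead of merely shifting the input.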
no code implementations • 25 May 2022 • Geraud Nangue Tasse, Devon Jarvis, Steven James, Benjamin Rosman
The agent can then flexibly compose them both logically and temporally to provably achieve temporal logic specifications in any regular language, such as regular fragments of linear temporal logic.
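One common way such logical composition of skills is realised in this line of work is to combine learned goal-conditioned value functions pointwise, with conjunction as a minimum and disjunction as a maximum. The sketch below is illustrative only, with hypothetical toy Q-values; it is not the paper's method for temporal logic specifications.

```python
import numpy as np

# Hypothetical Q-values over four states for two learned subtasks.
Q_get_key  = np.array([0.9, 0.1, 0.8, 0.2])
Q_open_box = np.array([0.2, 0.7, 0.8, 0.1])

Q_and = np.minimum(Q_get_key, Q_open_box)   # satisfy both subtasks
Q_or  = np.maximum(Q_get_key, Q_open_box)   # satisfy either subtask

# The conjunction is maximised where both subtasks score highly.
best_state_and = int(np.argmax(Q_and))
```

Temporal composition would then sequence such composed skills, which is where the regular-language machinery in the abstract comes in.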
no code implementations • 12 May 2022 • Nathan Michlo, Devon Jarvis, Richard Klein, Steven James
In this work, we investigate the properties of data that cause popular representation learning approaches to fail.
no code implementations • 29 Sep 2021 • Devon Jarvis, Richard Klein, Benjamin Rosman, Andrew M Saxe
We introduce a minimal space of datasets with systematic and non-systematic features in both the input and output.
no code implementations • 25 Sep 2019 • Devon Jarvis, Richard Klein, Benjamin Rosman
Whether the width of the basin of attraction surrounding a minimum in parameter space is a reliable indicator of how well a model parametrization generalizes remains contested in the training of artificial neural networks; the dominant view is that wider regions of the loss landscape reflect better generalization by the trained model.
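The notion of basin width can be made concrete with a toy experiment: perturb the parameters at a minimum by a fixed amount and measure how much the loss rises. A flatter (wider) minimum tolerates the same perturbation with a smaller loss increase. This is a hedged sketch on an assumed quadratic toy loss, not the paper's experimental protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta, curvature):
    # Toy quadratic loss with its minimum at theta = 0; larger
    # curvature means a sharper (narrower) basin.
    return 0.5 * curvature * np.sum(theta ** 2)

def sharpness(curvature, eps=0.1, trials=100, dim=5):
    """Mean loss increase under random perturbations of norm eps."""
    rises = []
    for _ in range(trials):
        d = rng.normal(size=dim)
        d *= eps / np.linalg.norm(d)                # fix perturbation size
        rises.append(loss(d, curvature) - loss(np.zeros(dim), curvature))
    return float(np.mean(rises))

flat_rise = sharpness(curvature=1.0)
sharp_rise = sharpness(curvature=100.0)
# The high-curvature minimum shows a much larger loss rise for the
# same-sized perturbation, i.e. a narrower basin of attraction.
```

The contested question in the abstract is whether this geometric property actually tracks generalization, not how to measure it.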