Search Results for author: Jessica B. Hamrick

Found 17 papers, 4 papers with code

Investigating the role of model-based learning in exploration and transfer

no code implementations 8 Feb 2023 Jacob Walker, Eszter Vértes, Yazhe Li, Gabriel Dulac-Arnold, Ankesh Anand, Théophane Weber, Jessica B. Hamrick

Our results show that intrinsic exploration combined with environment models presents a viable direction towards agents that are self-supervised and able to generalize to novel reward functions.

Transfer Learning

Towards Understanding How Machines Can Learn Causal Overhypotheses

1 code implementation 16 Jun 2022 Eliza Kosoy, David M. Chan, Adrian Liu, Jasmine Collins, Bryanna Kaufmann, Sandy Han Huang, Jessica B. Hamrick, John Canny, Nan Rosemary Ke, Alison Gopnik

Recent work in machine learning and cognitive science has suggested that understanding causal information is essential to the development of intelligence.

BIG-bench Machine Learning · Causal Inference

Procedural Generalization by Planning with Self-Supervised World Models

no code implementations ICLR 2022 Ankesh Anand, Jacob Walker, Yazhe Li, Eszter Vértes, Julian Schrittwieser, Sherjil Ozair, Théophane Weber, Jessica B. Hamrick

One of the key promises of model-based reinforcement learning is the ability to generalize using an internal model of the world to make predictions in novel environments and tasks.

Ranked #1 on Meta-Learning on ML10 (Meta-test success rate (zero-shot) metric)

Benchmarking · Meta-Learning +2

Exploring Exploration: Comparing Children with RL Agents in Unified Environments

1 code implementation 6 May 2020 Eliza Kosoy, Jasmine Collins, David M. Chan, Sandy Huang, Deepak Pathak, Pulkit Agrawal, John Canny, Alison Gopnik, Jessica B. Hamrick

Research in developmental psychology consistently shows that children explore the world thoroughly and efficiently and that this exploration allows them to learn.

Object-oriented state editing for HRL

no code implementations 31 Oct 2019 Victor Bapst, Alvaro Sanchez-Gonzalez, Omar Shams, Kimberly Stachenfeld, Peter W. Battaglia, Satinder Singh, Jessica B. Hamrick

We introduce agents that use object-oriented reasoning to consider alternate states of the world in order to more quickly find solutions to problems.

Object

Structured agents for physical construction

no code implementations 5 Apr 2019 Victor Bapst, Alvaro Sanchez-Gonzalez, Carl Doersch, Kimberly L. Stachenfeld, Pushmeet Kohli, Peter W. Battaglia, Jessica B. Hamrick

Our results show that agents which use structured representations (e.g., objects and scene graphs) and structured policies (e.g., object-centric actions) outperform those which use less structured representations, and generalize better beyond their training when asked to reason about larger scenes.

Reinforcement Learning · Scene Understanding

Relational inductive bias for physical construction in humans and machines

no code implementations 4 Jun 2018 Jessica B. Hamrick, Kelsey R. Allen, Victor Bapst, Tina Zhu, Kevin R. McKee, Joshua B. Tenenbaum, Peter W. Battaglia

While current deep learning systems excel at tasks such as object classification, language processing, and gameplay, few can construct or modify a complex system such as a tower of blocks.

Inductive Bias · Object +1

Generating Plans that Predict Themselves

no code implementations 14 Feb 2018 Jaime F. Fisac, Chang Liu, Jessica B. Hamrick, S. Shankar Sastry, J. Karl Hedrick, Thomas L. Griffiths, Anca D. Dragan

We introduce $t$-predictability: a measure that quantifies the accuracy and confidence with which human observers can predict the remaining robot plan from the overall task goal and the observed initial $t$ actions in the plan.

Pragmatic-Pedagogic Value Alignment

no code implementations 20 Jul 2017 Jaime F. Fisac, Monica A. Gates, Jessica B. Hamrick, Chang Liu, Dylan Hadfield-Menell, Malayandi Palaniappan, Dhruv Malik, S. Shankar Sastry, Thomas L. Griffiths, Anca D. Dragan

In robotics, value alignment is key to the design of collaborative robots that can integrate into human workflows, successfully inferring and adapting to their users' objectives as they go.

Decision Making · Reinforcement Learning

Metacontrol for Adaptive Imagination-Based Optimization

1 code implementation 7 May 2017 Jessica B. Hamrick, Andrew J. Ballard, Razvan Pascanu, Oriol Vinyals, Nicolas Heess, Peter W. Battaglia

The metacontroller component is a model-free reinforcement learning agent, which decides both how many iterations of the optimization procedure to run, as well as which model to consult on each iteration.

Decision Making · Reinforcement Learning

Algorithm selection by rational metareasoning as a model of human strategy selection

no code implementations NeurIPS 2014 Falk Lieder, Dillon Plunkett, Jessica B. Hamrick, Stuart J. Russell, Nicholas Hay, Tom Griffiths

Rational metareasoning appears to be a promising framework for reverse-engineering how people choose among cognitive strategies and translating the results into better solutions to the algorithm selection problem.
