Search Results for author: Richard L. Lewis

Found 9 papers, 2 papers with code

In-Context Analogical Reasoning with Pre-Trained Language Models

1 code implementation • 28 May 2023 • Xiaoyang Hu, Shane Storks, Richard L. Lewis, Joyce Chai

Analogical reasoning is a fundamental capacity of human cognition that allows us to reason abstractly about novel situations by relating them to past experiences.

Tasks: In-Context Learning, Relational Reasoning

Composing Task Knowledge with Modular Successor Feature Approximators

1 code implementation • 28 Jan 2023 • Wilka Carvalho, Angelos Filos, Richard L. Lewis, Honglak Lee, Satinder Singh

Recently, the Successor Features and Generalized Policy Improvement (SF&GPI) framework has been proposed as a method for learning, composing, and transferring predictive knowledge and behavior.
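The core idea of GPI over successor features is that, for a new task described by a preference vector w, each known policy's successor features yield Q-values by a dot product, and the agent acts greedily with respect to the best Q-value across all policies. A minimal numpy sketch with made-up toy numbers (two policies, two actions, two features — not the paper's model or data):

```python
import numpy as np

# Toy successor features: psi[i, a, :] are the expected discounted feature
# counts of policy i when taking action a in the current state (invented values).
psi = np.array([
    [[1.0, 0.0], [0.0, 1.0]],   # policy 0
    [[0.5, 0.5], [1.0, 1.0]],   # policy 1
])
w = np.array([0.2, 0.8])        # task vector: reward = features . w

q = psi @ w                      # Q-values, shape (num_policies, num_actions)
gpi_action = int(np.argmax(q.max(axis=0)))  # best action across all policies
```

Because the maximization runs over every stored policy, the GPI action is guaranteed to be no worse than any single policy's greedy choice for the new task.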

Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention

no code implementations • NAACL (CMCL) 2021 • Soo Hyun Ryu, Richard L. Lewis

We advance a novel explanation of similarity-based interference effects in subject-verb and reflexive pronoun agreement processing, grounded in surprisal values computed from a pretrained large-scale Transformer model, GPT-2.
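Surprisal, the quantity the snippet grounds its explanation in, is simply the negative log probability of a word given its context, -log2 P(word | context). In practice the conditional probabilities come from a language model such as GPT-2; the toy distribution below is invented purely to show the computation:

```python
import math

# Hypothetical next-word distribution after a context like
# "The key to the cabinets ..." — values are made up, not model output.
p_next = {"was": 0.6, "were": 0.3, "is": 0.1}

# Surprisal in bits: -log2 P(word | context).
surprisal = {word: -math.log2(p) for word, p in p_next.items()}
```

Lower-probability continuations carry higher surprisal, which is the link to reading-time effects: a continuation the model finds unexpected is predicted to be harder to process.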

Tasks: Retrieval, Sentence

Reinforcement Learning of Implicit and Explicit Control Flow in Instructions

no code implementations • 25 Feb 2021 • Ethan A. Brooks, Janarthanan Rajendran, Richard L. Lewis, Satinder Singh

Learning to flexibly follow task instructions in dynamic environments poses interesting challenges for reinforcement learning agents.

Tasks: Reinforcement Learning (RL), +2

Reinforcement Learning for Sparse-Reward Object-Interaction Tasks in a First-person Simulated 3D Environment

no code implementations • 28 Oct 2020 • Wilka Carvalho, Anthony Liang, Kimin Lee, Sungryull Sohn, Honglak Lee, Richard L. Lewis, Satinder Singh

In this work, we show that one can learn object-interaction tasks from scratch without supervision by learning an attentive object-model as an auxiliary task during task learning with an object-centric relational RL agent.

Tasks: Object, Reinforcement Learning (RL), +1

Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning

no code implementations • NeurIPS 2014 • Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard L. Lewis, Xiaoshi Wang

The combination of modern Reinforcement Learning and Deep Learning approaches holds the promise of making significant progress on challenging applications requiring both rich perception and policy-selection.

Tasks: Atari Games, Reinforcement Learning, +1

Reward Mapping for Transfer in Long-Lived Agents

no code implementations • NeurIPS 2013 • Xiaoxiao Guo, Satinder Singh, Richard L. Lewis

We demonstrate that our approach can substantially improve the agent's performance relative to other approaches, including an approach that transfers policies.

Reward Design via Online Gradient Ascent

no code implementations • NeurIPS 2010 • Jonathan Sorg, Richard L. Lewis, Satinder P. Singh

In this work, we develop a gradient ascent approach with formal convergence guarantees for approximately solving the optimal reward problem online during an agent's lifetime.
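The snippet describes gradient ascent on internal reward parameters so as to increase the designer's true objective. As a rough stand-in (the quadratic objective and step size below are invented for illustration, not the paper's formulation), the update loop looks like:

```python
# Gradient-ascent sketch: adjust an internal reward parameter theta to
# climb a designer objective f(theta) = -(theta - 2.0)**2, whose maximum
# is at theta = 2.0. Purely illustrative numbers.
def objective_grad(theta):
    # d/dtheta of -(theta - 2.0)**2
    return -2.0 * (theta - 2.0)

theta, lr = 0.0, 0.1
for _ in range(100):
    theta += lr * objective_grad(theta)   # ascend the objective online
```

In the actual online setting the gradient must be estimated from the agent's experience during its lifetime rather than computed in closed form, which is where the paper's convergence analysis does its work.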
