Search Results for author: Tim Klinger

Found 22 papers, 8 papers with code

What makes Models Compositional? A Theoretical View: With Supplement

no code implementations • 2 May 2024 • Parikshit Ram, Tim Klinger, Alexander G. Gray

We then show how various existing general and special purpose sequence processing models (such as recurrent, convolution and attention-based ones) fit this definition and use it to analyze their compositional complexity.

Systematic Generalization
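The abstract above views sequence models as compositions of local processing functions. As an informal illustration only (not the paper's formal definition), a recurrent model can be seen as the repeated application of one binary composition function over the input:

```python
# Illustrative sketch: a recurrent sequence model viewed as a fold of a
# single local composition function over the tokens. The depth of nesting
# in the result mirrors the sequential structure the paper analyzes.

def recurrent_compose(compose, init, tokens):
    """Fold a binary composition function over a token sequence."""
    state = init
    for tok in tokens:
        state = compose(state, tok)  # one local composition step
    return state

# Toy example: make the left-to-right composition structure visible.
result = recurrent_compose(lambda s, t: f"({s}+{t})", "e", ["a", "b", "c"])
print(result)  # (((e+a)+b)+c)
```

Convolution- and attention-based models would correspond to different (wider or input-dependent) composition patterns under the same kind of view.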

Compositional Program Generation for Few-Shot Systematic Generalization

1 code implementation • 28 Sep 2023 • Tim Klinger, Luke Liu, Soham Dan, Maxwell Crouse, Parikshit Ram, Alexander Gray

Compositional generalization is a key ability of humans that enables us to learn new concepts from only a handful of examples.

Systematic Generalization

Compositional generalization through abstract representations in human and artificial neural networks

no code implementations • 15 Sep 2022 • Takuya Ito, Tim Klinger, Douglas H. Schultz, John D. Murray, Michael W. Cole, Mattia Rigotti

Our findings give empirical support to the role of compositional generalization in human behavior, implicate abstract representations as its neural implementation, and illustrate that these representations can be embedded into ANNs by designing simple and efficient pretraining procedures.

Zero-shot Generalization

Hierarchical Reinforcement Learning with AI Planning Models

1 code implementation • 1 Mar 2022 • JunKyu Lee, Michael Katz, Don Joven Agravante, Miao Liu, Geraud Nangue Tasse, Tim Klinger, Shirin Sohrabi

Our approach defines options in hierarchical reinforcement learning (HRL) from AIP operators by establishing a correspondence between the state transition model of an AI planning problem and the abstract state transition system of a Markov Decision Process (MDP).

Decision Making · Hierarchical Reinforcement Learning +2
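The correspondence described in the abstract can be pictured as each planning operator inducing an option: its precondition gives the initiation set, its effects give the termination condition. The classes and the set-based state below are a hedged, illustrative sketch, not the authors' implementation:

```python
# Sketch (assumed structure, not the paper's code): an AI planning operator
# induces an HRL option. Abstract states are sets of true facts.

from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    precondition: frozenset  # facts required to initiate the option
    add_effects: frozenset   # facts whose achievement terminates the option

def can_initiate(op, abstract_state):
    """The option's initiation set: states where the precondition holds."""
    return op.precondition <= abstract_state

def is_terminated(op, abstract_state):
    """The option terminates once the operator's effects are achieved."""
    return op.add_effects <= abstract_state

# Toy blocks-world-style operator.
pickup = Operator("pickup",
                  precondition=frozenset({"hand_empty", "on_table"}),
                  add_effects=frozenset({"holding"}))
s0 = frozenset({"hand_empty", "on_table"})
print(can_initiate(pickup, s0))                 # True
print(is_terminated(pickup, s0 | {"holding"}))  # True
```

The low-level policy inside each option would then be learned by RL in the concrete MDP, with the planner sequencing options at the abstract level.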

Consolidation via Policy Information Regularization in Deep RL for Multi-Agent Games

no code implementations • 23 Nov 2020 • Tyler Malloy, Tim Klinger, Miao Liu, Matthew Riemer, Gerald Tesauro, Chris R. Sims

This paper introduces an information-theoretic constraint on learned policy complexity in the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) reinforcement learning algorithm.

Continual Learning · Continuous Control +2

Deep RL With Information Constrained Policies: Generalization in Continuous Control

no code implementations • 9 Oct 2020 • Tyler Malloy, Chris R. Sims, Tim Klinger, Miao Liu, Matthew Riemer, Gerald Tesauro

We focus on the model-free reinforcement learning (RL) setting and formalize our approach in terms of an information-theoretic constraint on the complexity of learned policies.

Continuous Control · reinforcement-learning +1
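One common way to formalize an information-theoretic constraint on policy complexity, as in the abstracts above, is a KL penalty between the state-conditioned policy and a state-free default policy. The sketch below shows that trade-off for discrete action distributions; the exact objective in these papers may differ:

```python
# Minimal sketch (assumed formalization): trade expected return against
# policy complexity, measured as KL(policy || default_policy).

import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def regularized_objective(expected_return, policy, default_policy, beta):
    """J - beta * KL: higher beta forces simpler, more default-like policies."""
    return expected_return - beta * kl_divergence(policy, default_policy)

uniform = [0.25] * 4
deterministic = [1.0, 0.0, 0.0, 0.0]
# A deterministic (state-dependent) policy pays an information cost of log 4;
# the uniform policy pays none, so it can win despite lower raw return.
print(regularized_objective(1.0, deterministic, uniform, beta=0.1))
print(regularized_objective(0.9, uniform, uniform, beta=0.1))
```

With a large enough `beta`, the agent prefers policies that depend less on the state, which is the bounded-rationality effect these papers study.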

A Study of Compositional Generalization in Neural Models

no code implementations • 16 Jun 2020 • Tim Klinger, Dhaval Adjodah, Vincent Marois, Josh Joseph, Matthew Riemer, Alex 'Sandy' Pentland, Murray Campbell

One difficulty in the development of such models is the lack of benchmarks with clear compositional and relational task structure on which to systematically evaluate them.

Image Classification · Relational Reasoning

Efficient Black-Box Planning Using Macro-Actions with Focused Effects

2 code implementations • 28 Apr 2020 • Cameron Allen, Michael Katz, Tim Klinger, George Konidaris, Matthew Riemer, Gerald Tesauro

Focused macros dramatically improve black-box planning efficiency across a wide range of planning domains, sometimes beating even state-of-the-art planners with access to a full domain model.
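A "focused" macro in the sense of the abstract is a sequence of primitive actions whose net effect changes only a few state variables. The toy sketch below uses a variable-count score as a stand-in for the paper's focused-effect criterion; the representation is illustrative, not the authors' code:

```python
# Illustrative sketch: score a macro-action by how many state variables its
# *net* effect touches. Actions are dicts of variable assignments.

def apply_macro(state, macro):
    """Apply a sequence of primitive actions to a copy of the state."""
    s = dict(state)
    for action in macro:
        s.update(action)
    return s

def effect_size(state, macro):
    """Number of variables the macro changes net -- smaller means more focused."""
    after = apply_macro(state, macro)
    return sum(1 for k in state if after[k] != state[k])

state = {"x": 0, "y": 0, "z": 0}
# Two steps whose changes to y cancel out: the net effect touches only x.
macro = [{"x": 1, "y": 1}, {"y": 0}]
print(effect_size(state, macro))  # 1
```

Preferring macros with small net effects keeps the induced abstract search space manageable even when the domain model is a black box.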


no code implementations • 25 Sep 2019 • Tyler James Malloy, Matthew Riemer, Miao Liu, Tim Klinger, Gerald Tesauro, Chris R. Sims

We formalize this type of bounded rationality in terms of an information-theoretic constraint on the complexity of policies that agents seek to learn.

Continuous Control · reinforcement-learning +1

Logical Rule Induction and Theory Learning Using Neural Theorem Proving

no code implementations • 6 Sep 2018 • Andres Campero, Aldo Pareja, Tim Klinger, Josh Tenenbaum, Sebastian Riedel

Our approach is neuro-symbolic in the sense that the rule predicates and core facts are given dense vector representations.

Automated Theorem Proving

Scalable Recollections for Continual Lifelong Learning

no code implementations • 17 Nov 2017 • Matthew Riemer, Tim Klinger, Djallel Bouneffouf, Michele Franceschini

Given the recent success of Deep Learning applied to a variety of single tasks, it is natural to consider more human-realistic settings.

e-QRAQ: A Multi-turn Reasoning Dataset and Simulator with Explanations

no code implementations • 5 Aug 2017 • Clemens Rosenbaum, Tian Gao, Tim Klinger

In this paper we present a new dataset and user simulator e-QRAQ (explainable Query, Reason, and Answer Question) which tests an Agent's ability to read an ambiguous text; ask questions until it can answer a challenge question; and explain the reasoning behind its questions and answer.
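The interaction protocol described above (read an ambiguous text, ask clarifying questions until the challenge question becomes answerable, then answer) can be sketched as a simple loop. All names and the lookup-based "oracle" are hypothetical placeholders for the simulator, not the dataset's actual interface:

```python
# Toy sketch of the query-reason-answer loop the abstract describes.
# The oracle plays the role of the user simulator revealing unknown values.

def agent_loop(context, unknowns, oracle, challenge):
    """Ask about unknowns until the challenge question is answerable."""
    facts = dict(context)
    asked = []
    for var in unknowns:
        if challenge(facts) is not None:  # already answerable: stop asking
            break
        facts[var] = oracle(var)          # ask a clarifying question
        asked.append(var)
    return challenge(facts), asked

# Toy instance: the challenge is answerable once the unknown "$v" is resolved.
context = {"kitchen": None}
oracle = {"$v": "Hannah"}.get
challenge = lambda facts: facts.get("$v")
answer, asked = agent_loop(context, ["$v"], oracle, challenge)
print(answer, asked)  # Hannah ['$v']
```

The explanation component of e-QRAQ would additionally require the agent to justify each question and the final answer, which this sketch omits.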

Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation

4 code implementations • 2 Jun 2016 • Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kartik Talamadupula, Bo-Wen Zhou, Yoshua Bengio, Aaron Courville

We introduce the multiresolution recurrent neural network, which extends the sequence-to-sequence framework to model natural language generation as two parallel discrete stochastic processes: a sequence of high-level coarse tokens, and a sequence of natural language tokens.

Dialogue Generation · Response Generation
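The two-level generation scheme described in the abstract (a coarse high-level sequence, then natural language conditioned on it) can be sketched with table lookups standing in for the paper's two recurrent decoders:

```python
# Minimal sketch of multiresolution generation: first emit coarse tokens,
# then expand each into natural-language words. The dict-based "models"
# are placeholders for the two RNN decoders in the paper.

def generate_coarse(prompt, coarse_model):
    """High-level pass: map the input to a coarse token sequence."""
    return coarse_model[prompt]

def generate_utterance(coarse_tokens, word_model):
    """Low-level pass: expand coarse tokens into natural-language words."""
    words = []
    for tok in coarse_tokens:
        words.extend(word_model[tok])
    return " ".join(words)

coarse_model = {"greet": ["GREETING", "QUESTION"]}
word_model = {"GREETING": ["hello", "there"],
              "QUESTION": ["how", "are", "you"]}
coarse = generate_coarse("greet", coarse_model)
print(generate_utterance(coarse, word_model))  # hello there how are you
```

In the actual model both passes are stochastic sequence decoders trained jointly, so the coarse sequence acts as a learned plan for the utterance rather than a fixed template.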
