no code implementations • 20 May 2019 • Ashley D. Edwards, Charles L. Isbell
Imitation from observation is an approach for learning from expert demonstrations that lack action information, such as videos.
no code implementations • 3 Aug 2018 • Rahul Sawhney, Fuxin Li, Henrik I. Christensen, Charles L. Isbell
We show how it can be employed to select a diverse set of data frames which have structurally similar content, and how to validate whether views with similar geometric content are from the same scene.
2 code implementations • 21 May 2018 • Ashley D. Edwards, Himanshu Sahni, Yannick Schroecker, Charles L. Isbell
In this paper, we describe a novel approach to imitation learning that infers latent policies directly from state observations.
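The core idea can be illustrated with a toy sketch (not the paper's implementation): with state-only demonstrations, cluster observed transitions into "latent actions", fit a latent policy over them, and then align latent actions to real environment actions. All names and the 1-D environment below are illustrative assumptions.

```python
import numpy as np

# Toy sketch of inferring a latent policy from state-only demonstrations.
# Step 1: cluster observed transitions into latent actions.
# Step 2: fit an empirical latent policy P(z | s).
# Step 3: align latent actions to real actions via their observed effects.

# State-only demo on a 1-D line (expert mostly moves right).
states = np.array([0, 1, 2, 1, 2, 3, 2, 3, 4])
deltas = np.diff(states)                    # transitions visible in the demo

# Step 1: "cluster" transitions by sign: z=1 ~ moved right, z=0 ~ moved left.
latent = (deltas > 0).astype(int)

# Step 2: latent policy = empirical P(z | s) from the demonstration.
n_states, n_latent = 5, 2
counts = np.zeros((n_states, n_latent))
for s, z in zip(states[:-1], latent):
    counts[s, z] += 1
latent_policy = counts / counts.sum(axis=1, keepdims=True).clip(min=1)

# Step 3: map each latent action to the real action whose known effect
# matches it (a stand-in for limited environment interaction).
env_effect = {0: -1, 1: +1}                 # real action -> state delta
z_to_a = {z: a for a, d in env_effect.items()
          for z in range(n_latent) if (d > 0) == bool(z)}

def act(s):
    """Act greedily with respect to the inferred latent policy."""
    z = int(np.argmax(latent_policy[s]))
    return z_to_a[z]
```

In this toy demo the expert moves right twice as often as left from states 1 and 2, so the inferred policy moves right there.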
no code implementations • NeurIPS 2017 • Yannick Schroecker, Charles L. Isbell
Imitation learning is the study of learning how to act given a set of demonstrations provided by a human expert.
no code implementations • 4 May 2017 • Hamid Reza Hassanzadeh, Pushkar Kolhe, Charles L. Isbell, May D. Wang
A number of high-throughput technologies have recently emerged that aim to quantify the affinity between proteins and DNA motifs.
no code implementations • NeurIPS 2013 • Liam C. Macdermed, Charles L. Isbell
We show that a DecPOMDP with bounded belief can be converted to a POMDP (albeit with actions exponential in the number of beliefs).
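One way to see the exponential blow-up: if each of n agents must commit, in a single converted action, to a choice of primitive action for each of its b possible beliefs, the converted POMDP enumerates all such mappings. The function and numbers below are hypothetical, for scale only, and are not the paper's construction.

```python
# Illustrative arithmetic for the action blow-up when converting a
# bounded-belief DecPOMDP to a POMDP: each converted action fixes a
# mapping from beliefs to primitive actions for every agent.

def converted_action_count(n_agents, n_beliefs, n_actions):
    """(n_actions ** n_beliefs) mappings per agent, joined over agents."""
    return (n_actions ** n_beliefs) ** n_agents
```

Even tiny problems grow fast: 2 agents, 3 beliefs, and 2 primitive actions already yield 64 converted actions.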
no code implementations • NeurIPS 2013 • Shane Griffith, Kaushik Subramanian, Jonathan Scholz, Charles L. Isbell, Andrea L. Thomaz
A long term goal of Interactive Reinforcement Learning is to incorporate non-expert human feedback to solve complex tasks.
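A common way to use binary human feedback in this setting is to treat each label as probabilistic evidence that an action is optimal, then combine that evidence with the agent's own action distribution. The sketch below is in the spirit of that policy-shaping idea; the consistency parameter `C` (the assumed probability that a feedback label is correct) and all names are illustrative assumptions.

```python
# Hedged sketch of policy shaping with noisy human feedback.
# delta = (#positive - #negative) feedback labels for a state-action pair.
# C     = assumed probability that any single feedback label is correct.

def feedback_prob(delta, C=0.8):
    """P(action is optimal | feedback), via Bayes with consistency C."""
    return C**delta / (C**delta + (1 - C)**delta)

def shaped_policy(agent_probs, deltas, C=0.8):
    """Combine the agent's action distribution with feedback evidence
    by multiplying the two probabilities and renormalizing."""
    combined = [p * feedback_prob(d, C) for p, d in zip(agent_probs, deltas)]
    total = sum(combined)
    return [c / total for c in combined]
```

With no feedback (`delta = 0`) the evidence is uninformative (probability 0.5), and the shaped policy reduces to the agent's own distribution; consistent positive feedback pulls probability mass toward the approved action.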
no code implementations • NeurIPS 2009 • Liam M. Dermed, Charles L. Isbell
Solving multi-agent reinforcement learning problems has proven difficult because of the lack of tractable algorithms.