Search Results for author: Charles L. Isbell

Found 8 papers, 1 papers with code

Perceptual Values from Observation

no code implementations · 20 May 2019 · Ashley D. Edwards, Charles L. Isbell

Imitation by observation is an approach for learning from expert demonstrations that lack action information, such as videos.

Reinforcement Learning (RL)

Purely Geometric Scene Association and Retrieval - A Case for Macro Scale 3D Geometry

no code implementations · 3 Aug 2018 · Rahul Sawhney, Fuxin Li, Henrik I. Christensen, Charles L. Isbell

We show how it can be employed to select a diverse set of data frames which have structurally similar content, and how to validate whether views with similar geometric content are from the same scene.

Retrieval

Imitating Latent Policies from Observation

2 code implementations · 21 May 2018 · Ashley D. Edwards, Himanshu Sahni, Yannick Schroecker, Charles L. Isbell

In this paper, we describe a novel approach to imitation learning that infers latent policies directly from state observations.

Imitation Learning

State Aware Imitation Learning

no code implementations · NeurIPS 2017 · Yannick Schroecker, Charles L. Isbell

Imitation learning is the study of learning how to act given a set of demonstrations provided by a human expert.

Imitation Learning

MotifMark: Finding Regulatory Motifs in DNA Sequences

no code implementations · 4 May 2017 · Hamid Reza Hassanzadeh, Pushkar Kolhe, Charles L. Isbell, May D. Wang

A number of high-throughput technologies have recently emerged that try to quantify the affinity between proteins and DNA motifs.

Specificity

Point Based Value Iteration with Optimal Belief Compression for Dec-POMDPs

no code implementations · NeurIPS 2013 · Liam C. Macdermed, Charles L. Isbell

We show that a Dec-POMDP with bounded belief can be converted to a POMDP (albeit with actions exponential in the number of beliefs).

Solving Stochastic Games

no code implementations · NeurIPS 2009 · Liam M. Dermed, Charles L. Isbell

Solving multi-agent reinforcement learning problems has proven difficult because of the lack of tractable algorithms.

Multi-agent Reinforcement Learning
