Search Results for author: Caleb Chuck

Found 6 papers, 1 paper with code

Automated Discovery of Functional Actual Causes in Complex Environments

no code implementations16 Apr 2024 Caleb Chuck, Sankaran Vaidyanathan, Stephen Giguere, Amy Zhang, David Jensen, Scott Niekum

This paper introduces functional actual cause (FAC), a framework that uses context-specific independencies in the environment to restrict the set of actual causes.

Tasks: Attribute, Reinforcement Learning (RL)
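
To make the but-for intuition behind actual causation concrete, here is a minimal Python sketch of the counterfactual test that frameworks like FAC refine with context-specific independencies; the `door` model and `is_but_for_cause` helper are hypothetical illustrations, not the paper's formalism.

```python
# Illustrative only: a brute-force check for whether a single variable is a
# "but-for" cause of an outcome in a fixed context. FAC generalizes this
# test using context-specific independencies; this toy shows the baseline
# counterfactual criterion it builds on.

def is_but_for_cause(model, context, var, domain):
    """Return True if intervening on `var` (holding the rest of `context`
    fixed) can change the outcome of `model`."""
    baseline = model(**context)
    for value in domain:
        if value == context[var]:
            continue
        intervened = {**context, var: value}
        if model(**intervened) != baseline:
            return True
    return False

# Toy environment: the door opens only if the switch is on AND the key is held.
door = lambda switch, key: switch and key

ctx = {"switch": True, "key": True}
print(is_but_for_cause(door, ctx, "switch", [False, True]))  # True
print(is_but_for_cause(door, ctx, "key", [False, True]))     # True

# In the context switch=False, the key is no longer a but-for cause:
ctx2 = {"switch": False, "key": True}
print(is_but_for_cause(door, ctx2, "key", [False, True]))    # False
```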

Learning Action-based Representations Using Invariance

no code implementations25 Mar 2024 Max Rudolph, Caleb Chuck, Kevin Black, Misha Lvovsky, Scott Niekum, Amy Zhang

Robust reinforcement learning agents using high-dimensional observations must be able to identify relevant state features amidst many exogenous distractors.
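
As a rough illustration of how action-based objectives filter out exogenous distractors, below is a minimal PyTorch sketch of an inverse-dynamics loss, a common choice for this family of methods; it is an assumption for illustration, not necessarily the paper's exact objective.

```python
# Illustrative sketch (PyTorch), not the paper's exact method: an
# inverse-dynamics objective keeps only state features needed to predict
# the action taken between consecutive observations, so features that the
# agent's actions never influence (exogenous distractors) carry no signal.

import torch
import torch.nn as nn

class InverseDynamics(nn.Module):
    def __init__(self, obs_dim, latent_dim, num_actions):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        # Predict the action from the latent pair (z_t, z_{t+1}).
        self.head = nn.Linear(2 * latent_dim, num_actions)

    def forward(self, obs_t, obs_t1):
        z_t, z_t1 = self.encoder(obs_t), self.encoder(obs_t1)
        return self.head(torch.cat([z_t, z_t1], dim=-1))

model = InverseDynamics(obs_dim=64, latent_dim=32, num_actions=4)
opt = torch.optim.Adam(model.parameters(), lr=3e-4)

# Dummy transition batch standing in for logged (s_t, a_t, s_{t+1}) tuples.
obs_t, obs_t1 = torch.randn(128, 64), torch.randn(128, 64)
actions = torch.randint(0, 4, (128,))

logits = model(obs_t, obs_t1)
loss = nn.functional.cross_entropy(logits, actions)
opt.zero_grad()
loss.backward()
opt.step()
```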

Granger-Causal Hierarchical Skill Discovery

no code implementations15 Jun 2023 Caleb Chuck, Kevin Black, Aditya Arjun, Yuke Zhu, Scott Niekum

Reinforcement Learning (RL) has demonstrated promising results in learning policies for complex tasks, but it often suffers from low sample efficiency and limited transferability.

Tasks: Reinforcement Learning (RL)
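
The criterion the method builds on can be sketched in a few lines of numpy: a series x Granger-causes y if past values of x reduce the error of predicting y beyond what past values of y achieve alone. The `granger_score` helper below is a toy illustration of that definition, not the paper's implementation, which applies the idea to agent and object state variables.

```python
# Minimal numpy sketch of the Granger-causality criterion: compare the
# residual error of predicting y from its own past against predicting y
# from its own past plus the past of x.

import numpy as np

def lagged_lstsq_error(y, predictors, lag):
    """Mean squared residual of predicting y[t] from `lag` past steps of
    each predictor series via ordinary least squares."""
    T = len(y)
    X = np.column_stack(
        [p[lag - k - 1 : T - k - 1] for p in predictors for k in range(lag)])
    X = np.column_stack([X, np.ones(T - lag)])  # intercept column
    target = y[lag:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.mean((target - X @ coef) ** 2)

def granger_score(x, y, lag=2):
    """Positive score suggests x helps predict y (x Granger-causes y)."""
    restricted = lagged_lstsq_error(y, [y], lag)
    full = lagged_lstsq_error(y, [y, x], lag)
    return restricted - full

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.roll(x, 1) + 0.1 * rng.normal(size=500)  # y lags x by one step
print(granger_score(x, y))  # clearly positive: x Granger-causes y
print(granger_score(y, x))  # near zero: y does not Granger-cause x
```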

ScrewNet: Category-Independent Articulation Model Estimation From Depth Images Using Screw Theory

1 code implementation24 Aug 2020 Ajinkya Jain, Rudolf Lioutikov, Caleb Chuck, Scott Niekum

Robots in human environments will need to interact with a wide variety of articulated objects such as cabinets, drawers, and dishwashers while assisting humans in performing day-to-day tasks.

Tasks: Benchmarking
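
For readers unfamiliar with screw theory, the sketch below shows the standard screw parameterization that ScrewNet estimates: any one-degree-of-freedom articulation (a hinge or a slider) is a rotation by an angle about an axis combined with a translation along that axis. The `screw_to_transform` helper uses the textbook formulation (Rodrigues' formula), not the paper's code.

```python
# Illustrative numpy sketch of the screw representation of articulated
# motion: rotate by theta about the line through `point` with direction
# `axis`, then translate by d along `axis`.

import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def screw_to_transform(axis, point, theta, d):
    """Return the 4x4 rigid transform for screw parameters (axis, point,
    theta, d)."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = skew(axis)
    # Rodrigues' rotation formula.
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    point = np.asarray(point, dtype=float)
    # Rotation about a line through `point`, plus translation along the axis.
    t = (np.eye(3) - R) @ point + d * axis
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# A cabinet door: pure rotation (d = 0) about a vertical hinge at x = 0.5.
print(screw_to_transform([0, 0, 1], [0.5, 0, 0], np.pi / 2, 0.0))
# A drawer: pure translation (theta = 0) along its slide axis.
print(screw_to_transform([1, 0, 0], [0, 0, 0], 0.0, 0.3))
```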

Hypothesis-Driven Skill Discovery for Hierarchical Deep Reinforcement Learning

no code implementations27 May 2019 Caleb Chuck, Supawit Chockchowwat, Scott Niekum

Deep reinforcement learning (DRL) is capable of learning high-performing policies on a variety of complex high-dimensional tasks, ranging from video games to robotic manipulation.

Tasks: Reinforcement Learning (RL)

Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations

no code implementations4 Oct 2016 Michael Laskey, Caleb Chuck, Jonathan Lee, Jeffrey Mahler, Sanjay Krishnan, Kevin Jamieson, Anca Dragan, Ken Goldberg

Although policies learned with RC sampling can outperform those learned with HC sampling for standard learning models such as linear SVMs, policies learned with HC sampling can be comparable when used with highly expressive learning models, such as deep networks and hyper-parametric decision trees, which have little model error.
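
The two regimes are easy to contrast in code: human-centric (HC) sampling trains on states the supervisor visits (behavior cloning), while robot-centric (RC) sampling, as in DAgger, rolls out the learner and has the supervisor relabel the states it visits. The toy line-world below is purely illustrative and is not the paper's experimental setup.

```python
# Self-contained toy contrasting HC and RC data collection on a 1-D line
# world where the expert walks toward a goal at position 10.

import random

def supervisor(s):            # expert policy: step toward the goal
    return 1 if s < 10 else 0

def rollout(policy, noise=0.3, steps=30):
    s, visited = 0, []
    for _ in range(steps):
        visited.append(s)
        s += policy(s) + (random.random() < noise) * random.choice([-1, 1])
    return visited

def fit(data):                # 1-nearest-neighbor "policy" over labeled states
    def policy(s):
        nearest = min(data, key=lambda sa: abs(sa[0] - s))
        return nearest[1]
    return policy

random.seed(0)

# Human-centric: train once on states the supervisor itself visits.
hc_data = [(s, supervisor(s)) for s in rollout(supervisor)]
hc_policy = fit(hc_data)

# Robot-centric: iterate, letting the learner drive and the supervisor
# relabel the states the learner actually reaches (DAgger-style).
rc_data, rc_policy = list(hc_data), fit(hc_data)
for _ in range(5):
    rc_data += [(s, supervisor(s)) for s in rollout(rc_policy)]
    rc_policy = fit(rc_data)
```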
