Search Results for author: Ryan Hoque

Found 10 papers, 4 papers with code

IIFL: Implicit Interactive Fleet Learning from Heterogeneous Human Supervisors

1 code implementation • 27 Jun 2023 • Gaurav Datta, Ryan Hoque, Anrui Gu, Eugen Solowjow, Ken Goldberg

Imitation learning has been applied to a range of robotic tasks, but can struggle when robots encounter edge cases that are not represented in the training data (i.e., distribution shift).

Imitation Learning • Uncertainty Quantification

Fleet-DAgger: Interactive Robot Fleet Learning with Scalable Human Supervision

1 code implementation • 29 Jun 2022 • Ryan Hoque, Lawrence Yunliang Chen, Satvik Sharma, Karthik Dharmarajan, Brijen Thananjeyan, Pieter Abbeel, Ken Goldberg

With continual learning, interventions from the remote pool of humans can also be used to improve the robot fleet control policy over time.

Continual Learning
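
A minimal sketch of the continual-learning loop described above, assuming a DAgger-style setup in which a limited pool of remote humans is allocated to the highest-priority robots and their interventions are aggregated into a shared dataset for retraining the fleet policy. All names here (FleetPolicy, request_human_intervention, priority_fn) are hypothetical placeholders, not the paper's released code.

```python
# Hypothetical sketch: aggregate human interventions from a robot fleet into
# one shared dataset and periodically retrain a single fleet control policy.
import random


class FleetPolicy:
    """Stand-in for a learned control policy shared by every robot in the fleet."""

    def __init__(self):
        self.dataset = []  # (observation, action) pairs from human interventions

    def act(self, obs):
        return random.random()  # placeholder action

    def retrain(self):
        # In practice: supervised learning (behavior cloning) on self.dataset.
        pass


def fleet_learning_round(policy, robots, human_budget, priority_fn):
    """One round: rank robots by priority, give the top-k human teleoperation,
    log those interventions, and retrain the shared policy."""
    ranked = sorted(robots, key=priority_fn, reverse=True)
    helped, autonomous = ranked[:human_budget], ranked[human_budget:]

    for robot in helped:
        obs = robot.observe()
        action = robot.request_human_intervention(obs)  # human supplies the label
        policy.dataset.append((obs, action))            # aggregate into shared data
        robot.step(action)

    for robot in autonomous:
        robot.step(policy.act(robot.observe()))         # rest of the fleet runs the policy

    policy.retrain()  # continual learning from the aggregated interventions
```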

Learning to Fold Real Garments with One Arm: A Case Study in Cloud-Based Robotics Research

no code implementations • 21 Apr 2022 • Ryan Hoque, Kaushik Shivakumar, Shrey Aeron, Gabriel Deza, Aditya Ganapathi, Adrian Wong, Johnny Lee, Andy Zeng, Vincent Vanhoucke, Ken Goldberg

Autonomous fabric manipulation is a longstanding challenge in robotics, but evaluating progress is difficult due to the cost and diversity of robot hardware.

Benchmarking • Imitation Learning

ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning

no code implementations • 17 Sep 2021 • Ryan Hoque, Ashwin Balakrishna, Ellen Novoseller, Albert Wilcox, Daniel S. Brown, Ken Goldberg

Effective robot learning often requires online human feedback and interventions that can cost significant human time, giving rise to the central challenge in interactive imitation learning: is it possible to control the timing and length of interventions to both facilitate learning and limit burden on the human supervisor?

Imitation Learning
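
One way to make the timing of interventions concrete, sketched below under assumptions rather than taken from the paper's implementation: hand control to the human only when a learned novelty score or a learned risk (failure-probability) estimate crosses a threshold, and count human steps against a budget. The thresholds, estimator names, and environment interface are illustrative only.

```python
# Illustrative gating of human interventions on novelty and risk estimates.
NOVELTY_THRESHOLD = 0.8   # hand off when the state looks unlike the training data
RISK_THRESHOLD = 0.5      # hand off when estimated failure probability is high


def should_request_intervention(novelty_score, failure_prob):
    """Return True if control should be handed to the human supervisor."""
    return novelty_score > NOVELTY_THRESHOLD or failure_prob > RISK_THRESHOLD


def rollout(env, robot_policy, human_policy, novelty_fn, risk_fn, horizon=200):
    """Run one episode, switching control based on novelty and risk estimates."""
    obs = env.reset()
    human_steps = 0
    for _ in range(horizon):
        if should_request_intervention(novelty_fn(obs), risk_fn(obs)):
            action = human_policy(obs)   # costly human intervention, counted below
            human_steps += 1
        else:
            action = robot_policy(obs)   # autonomous execution
        obs, done = env.step(action)
        if done:
            break
    return human_steps  # compare against an intervention budget
```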

LazyDAgger: Reducing Context Switching in Interactive Imitation Learning

no code implementations • 31 Mar 2021 • Ryan Hoque, Ashwin Balakrishna, Carl Putterman, Michael Luo, Daniel S. Brown, Daniel Seita, Brijen Thananjeyan, Ellen Novoseller, Ken Goldberg

Corrective interventions while a robot is learning to automate a task provide an intuitive method for a human supervisor to assist the robot and convey information about desired behavior.

Continuous Control • Imitation Learning
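
A hedged illustration of one way to reduce context switching in such intervention-based learning: asymmetric switching thresholds (a hysteresis band) on a predicted discrepancy between the robot's and the supervisor's actions, so control does not bounce back and forth near a single threshold. The constants and interface below are assumptions for illustration, not the paper's code.

```python
# Illustrative hysteresis band for switching control between robot and human.
SWITCH_TO_HUMAN = 0.6   # example values only
SWITCH_TO_ROBOT = 0.2


def next_controller(current_controller, predicted_discrepancy):
    """Switch to the human only on clearly high predicted discrepancy,
    and back to the robot only on clearly low discrepancy."""
    if current_controller == "robot" and predicted_discrepancy > SWITCH_TO_HUMAN:
        return "human"
    if current_controller == "human" and predicted_discrepancy < SWITCH_TO_ROBOT:
        return "robot"
    return current_controller
```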

VisuoSpatial Foresight for Physical Sequential Fabric Manipulation

no code implementations • 19 Feb 2021 • Ryan Hoque, Daniel Seita, Ashwin Balakrishna, Aditya Ganapathi, Ajay Kumar Tanwani, Nawid Jamali, Katsu Yamane, Soshi Iba, Ken Goldberg

We build upon the Visual Foresight framework to learn fabric dynamics that can be efficiently reused to accomplish different sequential fabric manipulation tasks with a single goal-conditioned policy.
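
A rough sketch, under assumed interfaces, of how a learned visual dynamics model can be reused across tasks with goal conditioning: sample candidate action sequences, predict their outcomes with the model, and execute the first action of the sequence whose predicted image is closest to a goal image. The dynamics_model.rollout call and the pixel-wise cost are placeholders; the paper's planner and cost function may differ.

```python
# Hypothetical goal-conditioned planning over a learned visual dynamics model.
import numpy as np


def image_cost(predicted_image, goal_image):
    """Pixel-wise distance between a predicted image and the goal image."""
    return float(np.mean((predicted_image - goal_image) ** 2))


def plan_action(dynamics_model, current_image, goal_image,
                horizon=5, num_samples=64, action_dim=4, seed=0):
    """Sample random action sequences, roll them out in the learned model,
    and return the first action of the lowest-cost sequence."""
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-1.0, 1.0, size=(num_samples, horizon, action_dim))

    best_action, best_cost = None, float("inf")
    for seq in candidates:
        predicted = dynamics_model.rollout(current_image, seq)  # predicted final image
        c = image_cost(predicted, goal_image)
        if c < best_cost:
            best_cost, best_action = c, seq[0]
    return best_action  # executed, then planning repeats from the new image
```

Swapping the goal image changes the task without retraining the model, which is the sense in which a single learned model can serve different sequential fabric manipulation tasks.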

Deep Imitation Learning of Sequential Fabric Smoothing From an Algorithmic Supervisor

1 code implementation • 23 Sep 2019 • Daniel Seita, Aditya Ganapathi, Ryan Hoque, Minho Hwang, Edward Cen, Ajay Kumar Tanwani, Ashwin Balakrishna, Brijen Thananjeyan, Jeffrey Ichnowski, Nawid Jamali, Katsu Yamane, Soshi Iba, John Canny, Ken Goldberg

In 180 physical experiments with the da Vinci Research Kit (dVRK) surgical robot, RGBD policies trained in simulation attain coverage of 83% to 95% depending on difficulty tier, suggesting that effective fabric smoothing policies can be learned from an algorithmic supervisor and that depth sensing is a valuable addition to color alone.

Imitation Learning
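
A small sketch of the general recipe the sentence above points at, learning from an algorithmic supervisor in simulation, with hypothetical names rather than the released code: roll out a scripted supervisor in a fabric simulator, record (observation, action) pairs, and fit a policy by behavior cloning.

```python
# Hypothetical data collection from a scripted supervisor plus behavior cloning.
import random


def collect_demonstrations(sim_env, algorithmic_supervisor, episodes=100):
    """Roll out a scripted supervisor in simulation and record (obs, action) pairs."""
    data = []
    for _ in range(episodes):
        obs = sim_env.reset()                      # e.g. an RGB or RGBD image of the fabric
        done = False
        while not done:
            action = algorithmic_supervisor(obs)   # scripted pick-and-pull action
            data.append((obs, action))
            obs, done = sim_env.step(action)
    return data


def behavior_cloning(policy, data, epochs=10):
    """Fit the policy to the supervisor's actions (the regression step is left abstract)."""
    for _ in range(epochs):
        random.shuffle(data)
        for obs, action in data:
            policy.update(obs, action)  # e.g. one gradient step on an L2 loss
    return policy
```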
