Search Results for author: Ellen Novoseller

Found 16 papers, 3 papers with code

Human Preference-Based Learning for High-dimensional Optimization of Exoskeleton Walking Gaits

1 code implementation • 13 Mar 2020 • Maegan Tucker, Myra Cheng, Ellen Novoseller, Richard Cheng, Yisong Yue, Joel W. Burdick, Aaron D. Ames

Optimizing lower-body exoskeleton walking gaits for user comfort requires understanding users' preferences over a high-dimensional gait parameter space.

Imitation Learning with Human Eye Gaze via Multi-Objective Prediction

1 code implementation • 25 Feb 2021 • Ravi Kumar Thakur, MD-Nazmus Samin Sunbeam, Vinicius G. Goecks, Ellen Novoseller, Ritwik Bera, Vernon J. Lawhern, Gregory M. Gremillion, John Valasek, Nicholas R. Waytowich

In this work, we propose Gaze Regularized Imitation Learning (GRIL), a novel context-aware, imitation learning architecture that learns concurrently from both human demonstrations and eye gaze to solve tasks where visual attention provides important context.

Continuous Control, Imitation Learning, +4
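To make the multi-objective idea in the abstract concrete, here is a minimal sketch of a policy that learns concurrently from demonstrations and gaze: a shared encoder feeds an action head (behavior cloning) and an auxiliary gaze-prediction head. The network sizes, loss weighting, and names below are illustrative assumptions, not the paper's implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class GazeRegularizedPolicy(nn.Module):
    """Sketch of a policy with an auxiliary gaze-prediction head (assumed architecture)."""
    def __init__(self, obs_dim: int, act_dim: int, gaze_dim: int = 2, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.action_head = nn.Linear(hidden, act_dim)   # imitation (behavior cloning) head
        self.gaze_head = nn.Linear(hidden, gaze_dim)    # auxiliary gaze-prediction head

    def forward(self, obs):
        z = self.encoder(obs)
        return self.action_head(z), self.gaze_head(z)

def multi_objective_loss(model, obs, expert_action, expert_gaze, gaze_weight=0.1):
    """Behavior cloning plus gaze regularization; gaze_weight is an assumed hyperparameter."""
    pred_action, pred_gaze = model(obs)
    return F.mse_loss(pred_action, expert_action) + gaze_weight * F.mse_loss(pred_gaze, expert_gaze)
```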

LazyDAgger: Reducing Context Switching in Interactive Imitation Learning

no code implementations • 31 Mar 2021 • Ryan Hoque, Ashwin Balakrishna, Carl Putterman, Michael Luo, Daniel S. Brown, Daniel Seita, Brijen Thananjeyan, Ellen Novoseller, Ken Goldberg

Corrective interventions while a robot is learning to automate a task provide an intuitive method for a human supervisor to assist the robot and convey information about desired behavior.

Continuous Control, Imitation Learning

ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning

no code implementations • 17 Sep 2021 • Ryan Hoque, Ashwin Balakrishna, Ellen Novoseller, Albert Wilcox, Daniel S. Brown, Ken Goldberg

Effective robot learning often requires online human feedback and interventions that can cost significant human time, giving rise to the central challenge in interactive imitation learning: is it possible to control the timing and length of interventions to both facilitate learning and limit burden on the human supervisor?

Imitation Learning
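The title describes gating human interventions on novelty and risk under an intervention budget. The sketch below is only a schematic of such a gate, assuming the caller supplies its own novelty and risk estimators; the function names, estimator interfaces, and thresholds are assumptions, not the paper's algorithm.

```python
def request_intervention(state, novelty_fn, risk_fn,
                         novelty_threshold: float, risk_threshold: float) -> bool:
    """Ask the human supervisor to intervene when the state looks unfamiliar or risky.

    novelty_fn and risk_fn are placeholders for learned estimators (for example,
    an ensemble-disagreement score and a learned probability of task failure).
    The thresholds would be tuned so the expected intervention rate stays within
    the available human-time budget.
    """
    return novelty_fn(state) > novelty_threshold or risk_fn(state) > risk_threshold
```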

Policy-Based Bayesian Experimental Design for Non-Differentiable Implicit Models

no code implementations • 8 Mar 2022 • Vincent Lim, Ellen Novoseller, Jeffrey Ichnowski, Huang Huang, Ken Goldberg

For applications in healthcare, physics, energy, robotics, and many other fields, designing maximally informative experiments is valuable, particularly when experiments are expensive, time-consuming, or pose safety hazards.

Experimental Design, Reinforcement Learning, +1

Efficient Preference-Based Reinforcement Learning Using Learned Dynamics Models

no code implementations • 11 Jan 2023 • Yi Liu, Gaurav Datta, Ellen Novoseller, Daniel S. Brown

In particular, we provide evidence that a learned dynamics model offers the following benefits when performing PbRL: (1) preference elicitation and policy optimization require significantly fewer environment interactions than model-free PbRL, (2) diverse preference queries can be synthesized safely and efficiently as a byproduct of standard model-based RL, and (3) reward pre-training based on suboptimal demonstrations can be performed without any environmental interaction.

Reinforcement Learning (RL)
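Preference-based RL typically fits a reward model to pairwise comparisons of trajectory segments with a Bradley-Terry style loss; in the model-based setting described above, the compared segments can come from learned-dynamics rollouts rather than real environment interaction. The following is a generic sketch of that standard reward-learning step, not the paper's specific implementation.

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(reward_net, segment_a, segment_b, label):
    """Pairwise preference loss for reward learning.

    segment_a, segment_b: tensors of shape (T, obs_dim) holding trajectory segments
    (these could be rollouts synthesized from a learned dynamics model).
    label: 1.0 if the human preferred segment A, 0.0 otherwise.
    """
    return_a = reward_net(segment_a).sum()   # predicted return of segment A
    return_b = reward_net(segment_b).sum()   # predicted return of segment B
    # Under the Bradley-Terry model, P(A preferred) = sigmoid(return_a - return_b).
    return F.binary_cross_entropy_with_logits(
        return_a - return_b, torch.as_tensor(label, dtype=torch.float32))
```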

DIP-RL: Demonstration-Inferred Preference Learning in Minecraft

no code implementations • 22 Jul 2023 • Ellen Novoseller, Vinicius G. Goecks, David Watkins, Josh Miller, Nicholas Waytowich

In machine learning for sequential decision-making, an algorithmic agent learns to interact with an environment while receiving feedback in the form of a reward signal.

Decision Making, Reinforcement Learning, +1

Rating-based Reinforcement Learning

no code implementations • 30 Jul 2023 • Devin White, Mingkang Wu, Ellen Novoseller, Vernon J. Lawhern, Nicholas Waytowich, Yongcan Cao

This paper develops a novel rating-based reinforcement learning approach that uses human ratings to obtain human guidance in reinforcement learning.

Reinforcement Learning
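Here the reward signal is derived from per-segment human ratings rather than pairwise preferences. As a rough illustration only, one simple way to use such ratings is to regress a reward model's predicted segment value toward the normalized rating; the normalization and loss below are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def rating_loss(reward_net, segment, rating: float, max_rating: float = 4.0):
    """Illustrative loss for learning a reward model from an absolute human rating.

    segment: tensor of shape (T, obs_dim); rating: the human's score for this segment,
    assumed to lie in [0, max_rating]. This placeholder regresses the segment's mean
    squashed reward toward the normalized rating.
    """
    predicted = torch.sigmoid(reward_net(segment)).mean()   # per-step rewards squashed to (0, 1)
    target = torch.as_tensor(rating / max_rating, dtype=torch.float32)
    return F.mse_loss(predicted, target)
```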

Crowd-PrefRL: Preference-Based Reward Learning from Crowds

no code implementations • 17 Jan 2024 • David Chhan, Ellen Novoseller, Vernon J. Lawhern

In this work, we introduce Crowd-PrefRL, a framework for performing preference-based RL leveraging feedback from crowds.

Reinforcement Learning (RL)
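Crowd feedback means each queried segment pair may receive several, possibly conflicting, preference labels. The helper below shows only the simplest possible aggregation, majority voting, to make the setting concrete; it is an assumed illustration, and a crowd-aware method would instead weight voters by estimated reliability before feeding the aggregated preference into standard preference-based reward learning.

```python
from collections import Counter

def aggregate_crowd_preference(labels):
    """Majority vote over crowd labels for one segment pair.

    labels: iterable of 0/1 votes, where 1 means 'segment A preferred'.
    Returns the majority label, with ties broken toward segment A.
    """
    counts = Counter(labels)
    return 1 if counts[1] >= counts[0] else 0
```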

Scalable Interactive Machine Learning for Future Command and Control

no code implementations • 9 Feb 2024 • Anna Madison, Ellen Novoseller, Vinicius G. Goecks, Benjamin T. Files, Nicholas Waytowich, Alfred Yu, Vernon J. Lawhern, Steven Thurman, Christopher Kelshaw, Kaleb McDowell

Future warfare will require Command and Control (C2) personnel to make decisions at shrinking timescales in complex and potentially ill-defined situations.

Decision Making

Re-Envisioning Command and Control

no code implementations • 9 Feb 2024 • Kaleb McDowell, Ellen Novoseller, Anna Madison, Vinicius G. Goecks, Christopher Kelshaw

Future warfare will require Command and Control (C2) decision-making to occur in more complex, fast-paced, ill-structured, and demanding conditions.

Decision Making, Unity
