Search Results for author: Nicholas R. Waytowich

Found 9 papers, 4 papers with code

Gaze-Informed Multi-Objective Imitation Learning from Human Demonstrations

no code implementations · 25 Feb 2021 · Ritwik Bera, Vinicius G. Goecks, Gregory M. Gremillion, Vernon J. Lawhern, John Valasek, Nicholas R. Waytowich

In the field of human-robot interaction, teaching learning agents from human demonstrations via supervised learning has been widely studied and successfully applied to multiple domains such as self-driving cars and robot manipulation.

Imitation Learning, Navigate, +2
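The abstract above refers to teaching agents from human demonstrations via supervised learning, i.e. behavior cloning. Below is a minimal, illustrative behavior-cloning sketch in PyTorch; the dimensions, network, and data are placeholders, and it does not include the gaze-informed, multi-objective components of the paper itself.

```python
# Minimal behavior-cloning sketch (illustrative only): a policy network is fit to
# (observation, action) pairs from human demonstrations with a supervised loss.
# The dataset, network sizes, and hyperparameters here are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

obs_dim, act_dim = 16, 4                             # hypothetical dimensions
demos = TensorDataset(torch.randn(1024, obs_dim),    # demonstration observations
                      torch.randn(1024, act_dim))    # corresponding human actions
loader = DataLoader(demos, batch_size=64, shuffle=True)

policy = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                               # regression onto continuous actions

for epoch in range(10):
    for obs, act in loader:
        pred = policy(obs)                           # policy's action for each observation
        loss = loss_fn(pred, act)                    # imitate the demonstrated action
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```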

PODNet: A Neural Network for Discovery of Plannable Options

no code implementations · 1 Nov 2019 · Ritwik Bera, Vinicius G. Goecks, Gregory M. Gremillion, John Valasek, Nicholas R. Waytowich

Learning from demonstration has been widely studied in machine learning but becomes challenging when the demonstrated trajectories are unstructured and follow different objectives.

Integrating Behavior Cloning and Reinforcement Learning for Improved Performance in Dense and Sparse Reward Environments

no code implementations · 9 Oct 2019 · Vinicius G. Goecks, Gregory M. Gremillion, Vernon J. Lawhern, John Valasek, Nicholas R. Waytowich

It is currently unclear, however, how to efficiently update a policy learned by behavior cloning using reinforcement learning, as these two approaches inherently optimize different objective functions.

Q-Learning, Reinforcement Learning
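The excerpt above notes that behavior cloning and reinforcement learning optimize different objective functions. The sketch below makes that concrete by computing a supervised imitation loss and a one-step temporal-difference loss for the same Q-network and mixing them with a fixed weight; this is a generic combination under assumed shapes and hyperparameters, not the specific algorithm proposed in the paper.

```python
# Sketch of why behavior cloning (BC) and reinforcement learning (RL) optimize
# different objectives, and one common way to mix them with a weighted sum.
# All shapes, data, and weights are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, n_actions, gamma = 16, 4, 0.99      # hypothetical problem sizes
q_net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# A batch of environment transitions (placeholders) and demonstration pairs.
obs, next_obs = torch.randn(64, obs_dim), torch.randn(64, obs_dim)
actions = torch.randint(0, n_actions, (64,))
rewards, dones = torch.randn(64), torch.zeros(64)
demo_obs, demo_actions = torch.randn(64, obs_dim), torch.randint(0, n_actions, (64,))

# RL objective: one-step temporal-difference error on environment reward.
q_sa = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
with torch.no_grad():
    target = rewards + gamma * (1 - dones) * q_net(next_obs).max(dim=1).values
td_loss = F.mse_loss(q_sa, target)

# BC objective: match the human's action, ignoring reward entirely.
bc_loss = F.cross_entropy(q_net(demo_obs), demo_actions)

loss = td_loss + 0.5 * bc_loss               # weighted mix of the two objectives
optimizer.zero_grad()
loss.backward()
optimizer.step()
```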

On Memory Mechanism in Multi-Agent Reinforcement Learning

no code implementations · 11 Sep 2019 · Yilun Zhou, Derrik E. Asher, Nicholas R. Waytowich, Julie A. Shah

Multi-agent reinforcement learning (MARL) extends (single-agent) reinforcement learning (RL) by introducing additional agents and (potentially) partial observability of the environment.

Multi-agent Reinforcement Learning, Reinforcement Learning
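To make the setting concrete, the toy loop below gives each of several agents only its own partial observation and a GRU-based recurrent policy whose hidden state serves as memory across time steps. The environment, dimensions, and policies are placeholders; this is only a sketch of the problem structure discussed in the paper, not its experimental setup.

```python
# Toy multi-agent loop (illustrative): each agent receives only its own partial
# observation, and a recurrent policy carries a hidden state as "memory" across
# steps. Dimensions and observations here are random placeholders.
import torch
import torch.nn as nn

obs_dim, n_actions, n_agents = 8, 5, 2

class RecurrentPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim, 64)           # memory over past observations
        self.head = nn.Linear(64, n_actions)

    def forward(self, obs, h):
        h = self.rnn(obs, h)                         # update the agent's memory
        return self.head(h), h

policies = {i: RecurrentPolicy() for i in range(n_agents)}
hidden = {i: torch.zeros(1, 64) for i in range(n_agents)}

for step in range(100):
    # Placeholder partial observations: each agent sees a different slice of state.
    observations = {i: torch.randn(1, obs_dim) for i in range(n_agents)}
    actions = {}
    for i in range(n_agents):
        logits, hidden[i] = policies[i](observations[i], hidden[i])
        actions[i] = torch.distributions.Categorical(logits=logits).sample()
    # The joint action dict would now be sent to the multi-agent environment.
```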

Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time

1 code implementation · 26 Oct 2018 · Vinicius G. Goecks, Gregory M. Gremillion, Vernon J. Lawhern, John Valasek, Nicholas R. Waytowich

This paper investigates how to utilize different forms of human interaction to safely train autonomous systems in real-time by learning from both human demonstrations and interventions.

Imitation Learning
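As a rough illustration of mixing demonstrations with interventions, the sketch below shows a generic data-collection loop in which the agent proposes actions, a human may override them, and the overriding actions are stored as corrective labels for later supervised updates. All components (env, policy, human_override, buffer) are hypothetical stand-ins, and this is not the paper's exact training procedure.

```python
# Generic intervention-style data collection sketch (not the paper's exact method):
# the agent proposes an action, a human may override it, and overridden steps are
# stored as corrective labels for later supervised updates.
def collect_with_interventions(env, policy, human_override, buffer, n_steps=1000):
    obs = env.reset()
    for _ in range(n_steps):
        agent_action = policy(obs)                 # agent's proposed action
        human_action = human_override(obs)         # None unless the human intervenes
        if human_action is not None:
            buffer.append((obs, human_action))     # keep the human correction
            action = human_action                  # the human's action is executed
        else:
            action = agent_action                  # agent stays in control
        obs, reward, done, info = env.step(action)
        if done:
            obs = env.reset()
    return buffer
```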

Coordination-driven learning in multi-agent problem spaces

no code implementations · 13 Sep 2018 · Sean L. Barton, Nicholas R. Waytowich, Derrik E. Asher

We discuss the role of coordination as a direct learning objective in multi-agent reinforcement learning (MARL) domains.

Multi-agent Reinforcement Learning, Reinforcement Learning

Cycle-of-Learning for Autonomous Systems from Human Interaction

1 code implementation · 28 Aug 2018 · Nicholas R. Waytowich, Vinicius G. Goecks, Vernon J. Lawhern

We discuss different types of human-robot interaction paradigms in the context of training end-to-end reinforcement learning algorithms.

Reinforcement Learning

Compact Convolutional Neural Networks for Classification of Asynchronous Steady-state Visual Evoked Potentials

1 code implementation · 12 Mar 2018 · Nicholas R. Waytowich, Vernon Lawhern, Javier O. Garcia, Jennifer Cummings, Josef Faller, Paul Sajda, Jean M. Vettel

Steady-State Visual Evoked Potentials (SSVEPs) are neural oscillations from the parietal and occipital regions of the brain that are evoked from flickering visual stimuli.

EEG, General Classification
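Since SSVEPs oscillate at the flicker frequency of the attended stimulus, a classic baseline detects the target by comparing narrow-band spectral power at each candidate frequency. The sketch below illustrates that definition with an assumed sampling rate, epoch length, and stimulus frequencies; the paper itself classifies SSVEPs with a compact convolutional network rather than this frequency-domain approach.

```python
# Simple frequency-domain SSVEP baseline (illustrative): compare spectral power
# at each candidate flicker frequency and pick the strongest. Sampling rate,
# epoch length, and frequencies are assumed placeholders.
import numpy as np

fs = 256                                    # sampling rate in Hz (assumed)
stim_freqs = [8.0, 10.0, 12.0, 15.0]        # candidate flicker frequencies (assumed)
epoch = np.random.randn(fs * 4)             # 4-second single-channel EEG epoch

spectrum = np.abs(np.fft.rfft(epoch)) ** 2
freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)

def band_power(f, half_width=0.25):
    """Power in a narrow band around frequency f."""
    mask = (freqs >= f - half_width) & (freqs <= f + half_width)
    return spectrum[mask].sum()

powers = [band_power(f) for f in stim_freqs]
detected = stim_freqs[int(np.argmax(powers))]
print(f"Detected flicker frequency: {detected} Hz")
```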

EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces

11 code implementations · 23 Nov 2016 · Vernon J. Lawhern, Amelia J. Solon, Nicholas R. Waytowich, Stephen M. Gordon, Chou P. Hung, Brent J. Lance

We introduce the use of depthwise and separable convolutions to construct an EEG-specific model which encapsulates well-known EEG feature extraction concepts for BCI.

EEG, Speech Recognition
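The abstract above highlights depthwise and separable convolutions. The PyTorch sketch below shows what those two building blocks look like on EEG-shaped input of size (batch, 1, channels, time); filter counts and kernel sizes are placeholders, and this is not the official EEGNet implementation (see the linked code for that).

```python
# Depthwise and separable convolutions on EEG-shaped input (batch, 1, channels, time).
# Filter counts and kernel sizes are placeholders; this only illustrates the two
# convolution types the paper builds on, not the official EEGNet architecture.
import torch
import torch.nn as nn

n_channels, n_samples, n_classes = 64, 128, 4    # hypothetical EEG dimensions

temporal = nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32), bias=False)
# Depthwise conv: groups == in_channels, so each temporal feature map gets its own
# spatial filters across the EEG electrodes (depth multiplier 2 here).
depthwise = nn.Conv2d(8, 16, kernel_size=(n_channels, 1), groups=8, bias=False)
# Separable conv: a depthwise temporal conv followed by a 1x1 pointwise conv.
separable = nn.Sequential(
    nn.Conv2d(16, 16, kernel_size=(1, 16), padding=(0, 8), groups=16, bias=False),
    nn.Conv2d(16, 16, kernel_size=1, bias=False),
)
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_classes))

x = torch.randn(2, 1, n_channels, n_samples)     # a batch of two EEG trials
features = separable(depthwise(temporal(x)))
logits = classifier(features)
print(logits.shape)                              # torch.Size([2, 4])
```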
