Search Results for author: Vinicius G. Goecks

Found 17 papers, 4 papers with code

Combining Learning from Human Feedback and Knowledge Engineering to Solve Hierarchical Tasks in Minecraft

1 code implementation • 7 Dec 2021 • Vinicius G. Goecks, Nicholas Waytowich, David Watkins-Valls, Bharat Prakash

In this work, we present the solution that won first place and was awarded the most human-like agent in the 2021 NeurIPS Competition MineRL BASALT Challenge: Learning from Human Feedback in Minecraft, which challenged participants to use human data to solve four tasks defined only by a natural language description and no reward function.

Imitation Learning

Cycle-of-Learning for Autonomous Systems from Human Interaction

1 code implementation • 28 Aug 2018 • Nicholas R. Waytowich, Vinicius G. Goecks, Vernon J. Lawhern

We discuss different types of human-robot interaction paradigms in the context of training end-to-end reinforcement learning algorithms.

reinforcement-learning • Reinforcement Learning (RL)

Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time

1 code implementation • 26 Oct 2018 • Vinicius G. Goecks, Gregory M. Gremillion, Vernon J. Lawhern, John Valasek, Nicholas R. Waytowich

This paper investigates how to utilize different forms of human interaction to safely train autonomous systems in real-time by learning from both human demonstrations and interventions.

Imitation Learning

Imitation Learning with Human Eye Gaze via Multi-Objective Prediction

1 code implementation • 25 Feb 2021 • Ravi Kumar Thakur, MD-Nazmus Samin Sunbeam, Vinicius G. Goecks, Ellen Novoseller, Ritwik Bera, Vernon J. Lawhern, Gregory M. Gremillion, John Valasek, Nicholas R. Waytowich

In this work, we propose Gaze Regularized Imitation Learning (GRIL), a novel context-aware, imitation learning architecture that learns concurrently from both human demonstrations and eye gaze to solve tasks where visual attention provides important context.

Continuous Control • Imitation Learning • +4
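
The GRIL entry above describes an architecture that learns jointly from demonstrated actions and recorded eye gaze. As a rough illustration of that multi-objective idea (not code from the paper), the sketch below trains a two-headed policy with a weighted sum of an action-imitation loss and a gaze-prediction loss; the class name, layer sizes, and the weight `lambda_gaze` are assumptions made for the example.

```python
# Hedged sketch of gaze-regularized imitation learning: one shared encoder, two heads.
# Names and hyperparameters are illustrative assumptions, not GRIL's actual code.
import torch
import torch.nn as nn

class GazeRegularizedPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.action_head = nn.Linear(hidden, act_dim)  # behavior-cloning head
        self.gaze_head = nn.Linear(hidden, 2)          # predicted (x, y) gaze point

    def forward(self, obs):
        z = self.encoder(obs)
        return self.action_head(z), self.gaze_head(z)

def joint_loss(model, obs, act, gaze, lambda_gaze: float = 0.1):
    """Weighted sum of the action-imitation loss and the auxiliary gaze loss."""
    pred_act, pred_gaze = model(obs)
    bc_loss = nn.functional.mse_loss(pred_act, act)
    gaze_loss = nn.functional.mse_loss(pred_gaze, gaze)
    return bc_loss + lambda_gaze * gaze_loss
```

In a sketch like this, the auxiliary gaze head only shapes the shared encoder during training; at deployment the action head alone is used.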

Integrating Behavior Cloning and Reinforcement Learning for Improved Performance in Dense and Sparse Reward Environments

no code implementations • 9 Oct 2019 • Vinicius G. Goecks, Gregory M. Gremillion, Vernon J. Lawhern, John Valasek, Nicholas R. Waytowich

However, it is currently unclear how to efficiently update that policy using reinforcement learning as these approaches are inherently optimizing different objective functions.

Q-Learning • reinforcement-learning • +1
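
The excerpt above points at the core difficulty in this line of work: behavior cloning minimizes a supervised imitation objective, while reinforcement learning maximizes expected return, so the two updates pull the same policy in different directions. One minimal way to make that concrete (an illustrative sketch, not the paper's method) is to optimize a weighted sum of an actor-critic term and a behavior-cloning term; the weight `lambda_bc` and the function signature are assumptions.

```python
# Hedged sketch: combining an actor-critic RL objective with a behavior-cloning
# objective for the same actor network. Not the paper's implementation.
import torch
import torch.nn as nn

def combined_actor_loss(actor: nn.Module, critic: nn.Module,
                        obs, demo_obs, demo_act, lambda_bc: float = 1.0):
    # RL term: push the actor toward actions the critic Q(s, a) scores highly
    rl_loss = -critic(obs, actor(obs)).mean()
    # Imitation term: match demonstrated actions on demonstration states
    bc_loss = nn.functional.mse_loss(actor(demo_obs), demo_act)
    # The two terms optimize different objectives; lambda_bc trades them off
    return rl_loss + lambda_bc * bc_loss
```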

PODNet: A Neural Network for Discovery of Plannable Options

no code implementations • 1 Nov 2019 • Ritwik Bera, Vinicius G. Goecks, Gregory M. Gremillion, John Valasek, Nicholas R. Waytowich

Learning from demonstration has been widely studied in machine learning but becomes challenging when the demonstrated trajectories are unstructured and follow different objectives.

Combining Visible and Infrared Spectrum Imagery using Machine Learning for Small Unmanned Aerial System Detection

no code implementations • 27 Mar 2020 • Vinicius G. Goecks, Grayson Woods, John Valasek

However, compared to widely available visible spectrum sensors, LWIR sensors have lower resolution and may produce more false positives when exposed to birds or other heat sources.

BIG-bench Machine Learning • object-detection • +1

Human-in-the-Loop Methods for Data-Driven and Reinforcement Learning Systems

no code implementations • 30 Aug 2020 • Vinicius G. Goecks

This can be attributed to the fact that current state-of-the-art, end-to-end reinforcement learning approaches still require thousands or millions of data samples to converge to a satisfactory policy and are subject to catastrophic failures during training.

reinforcement-learning • Reinforcement Learning (RL)

On games and simulators as a platform for development of artificial intelligence for command and control

no code implementations • 21 Oct 2021 • Vinicius G. Goecks, Nicholas Waytowich, Derrik E. Asher, Song Jun Park, Mark Mittrick, John Richardson, Manuel Vindiola, Anne Logie, Mark Dennison, Theron Trout, Priya Narayanan, Alexander Kott

Games and simulators can be a valuable platform to execute complex multi-agent, multiplayer, imperfect information scenarios with significant parallels to military applications: multiple participants manage resources and make decisions that command assets to secure specific areas of a map or neutralize opposing forces.

Starcraft • Starcraft II

Learning to Guide Multiple Heterogeneous Actors from a Single Human Demonstration via Automatic Curriculum Learning in StarCraft II

no code implementations • 11 May 2022 • Nicholas Waytowich, James Hare, Vinicius G. Goecks, Mark Mittrick, John Richardson, Anjon Basak, Derrik E. Asher

Traditionally, learning from human demonstrations via direct behavior cloning can lead to high-performance policies given that the algorithm has access to large amounts of high-quality data covering the most likely scenarios to be encountered when the agent is operating.

reinforcement-learning • Reinforcement Learning (RL) • +2

DIP-RL: Demonstration-Inferred Preference Learning in Minecraft

no code implementations • 22 Jul 2023 • Ellen Novoseller, Vinicius G. Goecks, David Watkins, Josh Miller, Nicholas Waytowich

In machine learning for sequential decision-making, an algorithmic agent learns to interact with an environment while receiving feedback in the form of a reward signal.

Decision Making • reinforcement-learning • +1
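
The DIP-RL title refers to inferring preferences from demonstrations and learning a reward signal from them. As a generic, hedged illustration of the preference-learning building block (not code from the paper), the sketch below fits a per-step reward model to pairwise segment preferences with a Bradley-Terry loss; the function signature and tensor shapes are assumptions.

```python
# Hedged sketch of Bradley-Terry preference-based reward learning, a standard
# component of preference-based RL pipelines. Names and shapes are illustrative.
import torch
import torch.nn as nn

def preference_loss(reward_model: nn.Module, seg_a, seg_b, prefer_a):
    """seg_a, seg_b: (batch, time, obs_dim) trajectory segments;
    prefer_a: float tensor of shape (batch,), 1.0 if seg_a is preferred."""
    # Sum the predicted per-step rewards over each segment
    r_a = reward_model(seg_a).sum(dim=1).squeeze(-1)
    r_b = reward_model(seg_b).sum(dim=1).squeeze(-1)
    # Bradley-Terry: P(seg_a preferred) = sigmoid(r_a - r_b)
    logits = r_a - r_b
    return nn.functional.binary_cross_entropy_with_logits(logits, prefer_a)
```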

Scalable Interactive Machine Learning for Future Command and Control

no code implementations • 9 Feb 2024 • Anna Madison, Ellen Novoseller, Vinicius G. Goecks, Benjamin T. Files, Nicholas Waytowich, Alfred Yu, Vernon J. Lawhern, Steven Thurman, Christopher Kelshaw, Kaleb McDowell

Future warfare will require Command and Control (C2) personnel to make decisions at shrinking timescales in complex and potentially ill-defined situations.

Decision Making

Re-Envisioning Command and Control

no code implementations • 9 Feb 2024 • Kaleb McDowell, Ellen Novoseller, Anna Madison, Vinicius G. Goecks, Christopher Kelshaw

Future warfare will require Command and Control (C2) decision-making to occur in more complex, fast-paced, ill-structured, and demanding conditions.

Decision Making • Unity
