no code implementations • 22 Feb 2024 • Reed River Chen, Christopher Ribaudo, Jennifer Sleeman, Chace Ashcraft, Collin Kofroth, Marisa Hughes, Ivanka Stajner, Kevin Viner, Kai Wang
Due to a recent increase in extreme air quality events, both globally and locally in the United States, finer resolution air quality forecasting guidance is needed to effectively adapt to these events.
no code implementations • 19 Jun 2023 • Chace Ashcraft, Jennifer Sleeman, Caroline Tang, Jay Brett, Anand Gnanadesikan
In this work we propose a neuro-symbolic approach called Neuro-Symbolic Question-Answer Program Translator, or NS-QAPT, to address explainability and interpretability for deep learning climate simulation, applied to climate tipping point discovery.
no code implementations • 16 Feb 2023 • Jennifer Sleeman, David Chung, Anand Gnanadesikan, Jay Brett, Yannis Kevrekidis, Marisa Hughes, Thomas Haine, Marie-Aude Pradal, Renske Gelderloos, Chace Ashcraft, Caroline Tang, Anshu Saksena, Larry White
We describe an adversarial game to explore the parameter space of these models, detect upcoming tipping points, and discover the drivers of tipping points.
no code implementations • 14 Feb 2023 • Jennifer Sleeman, David Chung, Chace Ashcraft, Jay Brett, Anand Gnanadesikan, Yannis Kevrekidis, Marisa Hughes, Thomas Haine, Marie-Aude Pradal, Renske Gelderloos, Caroline Tang, Anshu Saksena, Larry White
We describe how this methodology can be applied to the discovery of climate tipping points and, in particular, the collapse of the Atlantic Meridional Overturning Circulation (AMOC).
no code implementations • 28 Nov 2022 • Nathan Drenkow, Alvin Tan, Chace Ashcraft, Kiran Karra
The deployment of machine learning models in safety-critical applications comes with the expectation that such models will perform well over a range of contexts (e.g., a vision model for classifying street signs should work in rural, city, and highway settings under varying lighting/weather conditions).
no code implementations • 28 Jul 2022 • Corban Rivera, Chace Ashcraft, Alexander New, James Schmidt, Gautam Vallabha
Creating artificial intelligence (AI) systems capable of demonstrating lifelong learning is a fundamental challenge, and many approaches and metrics have been proposed to analyze algorithmic properties.
1 code implementation • 14 Mar 2022 • Erik C. Johnson, Eric Q. Nguyen, Blake Schreurs, Chigozie S. Ewulum, Chace Ashcraft, Neil M. Fendley, Megan M. Baker, Alexander New, Gautam K. Vallabha
Despite groundbreaking progress in reinforcement learning for robotics, gameplay, and other complex domains, major challenges remain in applying reinforcement learning to the evolving, open-world problems often found in critical application spaces.
no code implementations • 1 Dec 2021 • Edward W. Staley, Chace Ashcraft, Benjamin Stoler, Jared Markowitz, Gautam Vallabha, Christopher Ratto, Kapil D. Katyal
Most approaches to deep reinforcement learning (DRL) attempt to solve a single task at a time.
no code implementations • 1 Nov 2021 • Chace Ashcraft, Kiran Karra
We present a crop simulation environment with an OpenAI Gym interface, and apply modern deep reinforcement learning (DRL) algorithms to optimize yield.
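A crop environment exposing the standard OpenAI Gym interface means implementing `reset` and `step`, with `step` returning the usual `(observation, reward, done, info)` tuple. The toy dynamics below (soil moisture, biomass, a fixed irrigation action) are purely illustrative assumptions, not the paper's simulator:

```python
# Toy crop-growth environment following the Gym reset/step conventions.
# All state variables, dynamics, and constants here are hypothetical
# illustrations, not the paper's crop model.

class CropEnv:
    """Minimal Gym-style environment: state = (soil moisture, biomass)."""

    def __init__(self, season_length=10):
        self.season_length = season_length
        self.day = 0
        self.moisture = 0.5
        self.biomass = 0.0

    def reset(self):
        self.day = 0
        self.moisture = 0.5
        self.biomass = 0.0
        return (self.moisture, self.biomass)

    def step(self, irrigation):
        # Moisture rises with irrigation and decays by evaporation.
        self.moisture = min(1.0, self.moisture * 0.8 + irrigation)
        # Growth peaks near 60% moisture; drought or flooding hurts.
        growth = max(0.0, 1.0 - abs(self.moisture - 0.6) * 2.0)
        self.biomass += growth
        self.day += 1
        done = self.day >= self.season_length
        # Reward is the harvested yield, granted only at season end.
        reward = self.biomass if done else 0.0
        return (self.moisture, self.biomass), reward, done, {}


env = CropEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    obs, reward, done, info = env.step(0.2)  # fixed irrigation policy
    total_reward += reward
```

A DRL agent would replace the fixed `0.2` irrigation policy, learning to schedule water so the end-of-season yield reward is maximized.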
no code implementations • 9 Sep 2021 • Kiran Karra, Chace Ashcraft, Cash Costello
Self-supervised learning (SSL) methods have resulted in broad improvements to neural network performance by leveraging large, untapped collections of unlabeled data to learn generalized underlying structure.
no code implementations • 14 Jun 2021 • Chace Ashcraft, Kiran Karra
In this paper, we propose a new data poisoning attack and apply it to deep reinforcement learning agents.
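One common family of poisoning attacks on RL agents perturbs the reward signal when a trigger pattern appears in the observation, steering the agent toward an attacker-chosen action. The sketch below illustrates that general idea under invented assumptions (the trigger condition, threshold, and target action are hypothetical), not the specific attack proposed in the paper:

```python
# Sketch of trigger-conditioned reward poisoning against an RL agent.
# The trigger test, threshold, and target action below are illustrative
# assumptions, not the attack described in the paper.

TARGET_ACTION = 0  # attacker-chosen behavior to induce

def has_trigger(obs):
    # Hypothetical trigger: first observation feature pushed above 0.9.
    return obs[0] > 0.9

def poison_reward(obs, action, clean_reward):
    if not has_trigger(obs):
        return clean_reward  # off-trigger: leave training untouched
    # On-trigger: reward the target action, punish everything else.
    return 1.0 if action == TARGET_ACTION else -1.0
```

Because rewards are only altered when the trigger is present, the agent's behavior on clean observations can remain nominal, which is what makes such backdoors hard to detect.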
no code implementations • 22 Jun 2020 • Corban G. Rivera, Katie M. Popek, Chace Ashcraft, Edward W. Staley, Kapil D. Katyal, Bart L. Paulhamus
In this work, we explore a novel framework for control of complex systems called Primitive Imitation for Control (PICO).
1 code implementation • 13 Mar 2020 • Kiran Karra, Chace Ashcraft, Neil Fendley
In this paper, we introduce the TrojAI software framework, an open source set of Python tools capable of generating triggered (poisoned) datasets and associated deep learning (DL) models with trojans at scale.
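Generating a triggered dataset typically means stamping a small patch into a fraction of the training images and relabeling those examples with an attacker-chosen target class. The sketch below shows that generic recipe; it is not TrojAI's actual API, and the patch shape, poison fraction, and image representation are illustrative assumptions:

```python
# Generic sketch of triggered-dataset generation, in the spirit of what a
# framework like TrojAI automates. Not TrojAI's API: the patch, fraction,
# and plain list-of-rows image format are illustrative assumptions.

def stamp_trigger(image, patch_value=1.0, size=2):
    """Return a copy of a 2-D image (list of rows) with a square patch
    stamped into its top-left corner."""
    poisoned = [row[:] for row in image]
    for r in range(size):
        for c in range(size):
            poisoned[r][c] = patch_value
    return poisoned

def poison_dataset(images, labels, target_label, poison_fraction=0.1):
    """Stamp the trigger into the first poison_fraction of examples and
    relabel them with the target class; leave the rest untouched."""
    n_poison = int(len(images) * poison_fraction)
    out_images, out_labels = [], []
    for i, (img, lab) in enumerate(zip(images, labels)):
        if i < n_poison:
            out_images.append(stamp_trigger(img))
            out_labels.append(target_label)
        else:
            out_images.append(img)
            out_labels.append(lab)
    return out_images, out_labels
```

A model trained on the poisoned split learns to associate the corner patch with the target class while keeping near-normal accuracy on clean inputs.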