no code implementations • 11 Oct 2024 • Claas A Voelcker, Marcel Hussing, Eric Eaton
Empirical, benchmark-driven testing is a fundamental paradigm in the current RL community.
no code implementations • 11 Oct 2024 • Claas A Voelcker, Marcel Hussing, Eric Eaton, Amir-Massoud Farahmand, Igor Gilitschenski
Our method, Model-Augmented Data for Temporal Difference learning (MAD-TD), uses small amounts of generated data to stabilize high update-to-data (UTD) training and achieve competitive performance on the most challenging tasks in the DeepMind control suite.
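The excerpt above describes mixing small amounts of model-generated data into TD updates. A rough, hypothetical sketch of that idea (not MAD-TD's actual implementation; `world_model` and `policy` are placeholder callables):

```python
# Hedged sketch: mix a small fraction of model-generated transitions into each
# TD update batch, one way to stabilize high update-to-data (UTD) training.
import numpy as np

rng = np.random.default_rng(0)

def mixed_td_batch(real_buffer, world_model, policy, batch_size=256, model_frac=0.05):
    """Return a batch where roughly `model_frac` of transitions are model-generated."""
    n_model = int(model_frac * batch_size)
    idx = rng.integers(len(real_buffer), size=batch_size)
    batch = [real_buffer[i] for i in idx[n_model:]]          # real transitions
    for i in idx[:n_model]:                                   # synthetic transitions
        s = real_buffer[i][0]                                 # start from a real state
        a = policy(s)
        s_next, r = world_model(s, a)                         # one-step model rollout
        batch.append((s, a, r, s_next))
    return batch
```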
no code implementations • 3 Oct 2024 • Long Le, Jason Xie, William Liang, Hung-Ju Wang, Yue Yang, Yecheng Jason Ma, Kyle Vedder, Arjun Krishna, Dinesh Jayaraman, Eric Eaton
Interactive 3D simulated objects are crucial in AR/VR, animations, and robotics, driving immersive experiences and advanced automation.
no code implementations • 2 Oct 2024 • Kyle Vedder, Neehar Peri, Ishan Khatri, Siyi Li, Eric Eaton, Mehmet Kocamaz, Yue Wang, Zhiding Yu, Deva Ramanan, Joachim Pehserl
We reframe scene flow as the task of estimating a continuous space-time ODE that describes motion for an entire observation sequence, represented with a neural prior.
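A minimal sketch of representing motion as a continuous space-time flow field with a neural prior, assuming a simple MLP over (x, y, z, t) and Euler integration; the authors' formulation may differ in detail:

```python
# Illustrative neural prior: map a space-time query (x, y, z, t) to a 3D velocity,
# then advect points by integrating the field over time.
import torch
import torch.nn as nn

class FlowField(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xyz, t):
        # xyz: (N, 3) points; t: scalar time broadcast to every point
        t_col = torch.full_like(xyz[:, :1], float(t))
        return self.net(torch.cat([xyz, t_col], dim=-1))

def advect(points, field, t0=0.0, t1=1.0, steps=8):
    """Euler-integrate points through the continuous flow field."""
    dt = (t1 - t0) / steps
    for k in range(steps):
        points = points + dt * field(points, t0 + k * dt)
    return points
```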
no code implementations • 22 Aug 2024 • Jean Park, Kuk Jin Jang, Basam Alasaly, Sriharsha Mopidevi, Andrew Zolensky, Eric Eaton, Insup Lee, Kevin Johnson
In this work, we introduce the modality importance score (MIS) to identify such bias.
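The paper defines its own modality importance score; as a loose illustration only, one simple proxy for per-modality reliance is the performance drop when a modality is ablated (`eval_fn` and the dictionary-style `sample` below are hypothetical, not the paper's formulation):

```python
# Hedged proxy for modality importance: score drop after blanking one modality.
def ablation_importance(eval_fn, sample, modality):
    """eval_fn(sample) -> score; `sample` is a dict of modality -> data."""
    full = eval_fn(sample)
    ablated = dict(sample, **{modality: None})   # hypothetical blanking of one modality
    return full - eval_fn(ablated)               # larger drop -> more important modality
```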
1 code implementation • 22 Jul 2024 • Guiqiu Liao, Matjaz Jogan, Sai Koushik, Eric Eaton, Daniel A. Hashimoto
Weakly supervised video object segmentation (WSVOS) enables the identification of segmentation maps without requiring an extensive training dataset of object masks, relying instead on coarse video labels indicating object presence.
no code implementations • 23 May 2024 • Long Le, Marcel Hussing, Eric Eaton
This work studies the intersection of continual and federated learning, in which independent agents face unique tasks in their environments and incrementally develop and share knowledge.
no code implementations • 9 Mar 2024 • Marcel Hussing, Claas Voelcker, Igor Gilitschenski, Amir-Massoud Farahmand, Eric Eaton
We show that, by combating value function divergence, deep reinforcement learning algorithms can retain their ability to learn without resetting network parameters, even in settings where the number of gradient updates greatly exceeds the number of environment samples.
1 code implementation • 4 Oct 2023 • Pengyuan Lu, Michele Caprio, Eric Eaton, Insup Lee
Upon a new task, IBCL (1) updates a knowledge base in the form of a convex hull of model parameter distributions and (2) obtains particular models to address task trade-off preferences in a zero-shot manner.
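As a hedged illustration of the zero-shot step described above, a preference vector can be read as weights for a convex combination of per-task parameter summaries; IBCL's actual construction over parameter distributions is richer than this sketch:

```python
# Toy convex-combination view: pick a point in the convex hull of per-task
# parameter means according to a normalized preference vector.
import numpy as np

def preference_model(task_means, preference):
    """task_means: (k, d) per-task parameter means; preference: (k,) nonnegative weights."""
    w = np.asarray(preference, dtype=float)
    w = w / w.sum()                      # project onto the probability simplex
    return w @ np.asarray(task_means)    # convex combination lies in the hull
```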
no code implementations • 4 Oct 2023 • Meghna Gummadi, Cassandra Kent, Karl Schmeckpeper, Eric Eaton
Despite outstanding semantic scene segmentation in closed worlds, deep neural networks segment novel instances poorly, even though this capability is required for autonomous agents acting in an open world.
1 code implementation • 13 Jul 2023 • Marcel Hussing, Jorge A. Mendez, Anisha Singrodia, Cassandra Kent, Eric Eaton
We provide training and evaluation settings for assessing an agent's ability to learn compositional task policies.
1 code implementation • 31 May 2023 • Duo Lu, Eric Eaton, Matt Weg, Wei Wang, Steven Como, Jeffrey Wishart, Hongbin Yu, Yezhou Yang
Road traffic scene reconstruction from videos has long been sought by road safety regulators, city planners, researchers, and autonomous driving technology developers.
1 code implementation • 24 May 2023 • Pengyuan Lu, Michele Caprio, Eric Eaton, Insup Lee
Upon a new task, IBCL (1) updates a knowledge base in the form of a convex hull of model parameter distributions and (2) obtains particular models to address task trade-off preferences in a zero-shot manner.
1 code implementation • 17 May 2023 • Kyle Vedder, Neehar Peri, Nathaniel Chodosh, Ishan Khatri, Eric Eaton, Dinesh Jayaraman, Yang Liu, Deva Ramanan, James Hays
Scene flow estimation is the task of describing the 3D motion field between temporally successive point clouds.
Ranked #2 on Self-supervised Scene Flow Estimation on Argoverse 2 (using extra training data)
no code implementations • 18 Jan 2023 • Megan M. Baker, Alexander New, Mario Aguilar-Simon, Ziad Al-Halah, Sébastien M. R. Arnold, Ese Ben-Iwhiwhu, Andrew P. Brna, Ethan Brooks, Ryan C. Brown, Zachary Daniels, Anurag Daram, Fabien Delattre, Ryan Dellana, Eric Eaton, Haotian Fu, Kristen Grauman, Jesse Hostetler, Shariq Iqbal, Cassandra Kent, Nicholas Ketz, Soheil Kolouri, George Konidaris, Dhireesha Kudithipudi, Erik Learned-Miller, Seungwon Lee, Michael L. Littman, Sandeep Madireddy, Jorge A. Mendez, Eric Q. Nguyen, Christine D. Piatko, Praveen K. Pilly, Aswin Raghavan, Abrar Rahman, Santhosh Kumar Ramakrishnan, Neale Ratzlaff, Andrea Soltoggio, Peter Stone, Indranil Sur, Zhipeng Tang, Saket Tiwari, Kyle Vedder, Felix Wang, Zifan Xu, Angel Yanguas-Gil, Harel Yedidsion, Shangqun Yu, Gautam K. Vallabha
Despite the advancement of machine learning techniques in recent years, state-of-the-art systems lack robustness to "real world" events: the input distributions and tasks encountered by deployed systems will not be limited to the original training context, so systems will instead need to adapt to novel distributions and tasks while deployed.
no code implementations • 4 Dec 2022 • Marcel Hussing, Karen Li, Eric Eaton
Satellite image analysis has important implications for land use, urbanization, and ecosystem monitoring.
no code implementations • 15 Jul 2022 • Jorge A. Mendez, Eric Eaton
A major goal of artificial intelligence (AI) is to create an agent capable of acquiring a general understanding of the world.
1 code implementation • 8 Jul 2022 • Jorge A. Mendez, Marcel Hussing, Meghna Gummadi, Eric Eaton
We present CompoSuite, an open-source simulated robotic manipulation benchmark for compositional multi-task reinforcement learning (RL).
1 code implementation • NeurIPS 2018 • Jorge A. Mendez, Shashank Shivkumar, Eric Eaton
Methods for learning from demonstration (LfD) have shown success in acquiring behavior policies by imitating a user.
1 code implementation • ICLR 2022 • Jorge A. Mendez, Harm van Seijen, Eric Eaton
Empirically, we demonstrate that neural composition indeed captures the underlying structure of this space of problems.
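A toy sketch of neural composition in this spirit, assuming a fixed set of interchangeable modules per layer that different tasks can recombine (illustrative only, not the paper's architecture):

```python
# Compositional policy sketch: each task selects one module per slot, so
# structurally related tasks can reuse shared components in new combinations.
import torch
import torch.nn as nn

class ComposedPolicy(nn.Module):
    def __init__(self, modules_per_slot=3, in_dim=8, hid=32, out_dim=2):
        super().__init__()
        self.slot1 = nn.ModuleList([nn.Linear(in_dim, hid) for _ in range(modules_per_slot)])
        self.slot2 = nn.ModuleList([nn.Linear(hid, out_dim) for _ in range(modules_per_slot)])

    def forward(self, x, choice):
        # `choice` = (module index for slot 1, module index for slot 2)
        h = torch.relu(self.slot1[choice[0]](x))
        return self.slot2[choice[1]](h)
```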
1 code implementation • 28 Jun 2022 • Meghna Gummadi, David Kent, Jorge A. Mendez, Eric Eaton
Inspired by natural learners, we introduce a Sparse High-level-Exclusive, Low-level-Shared feature representation (SHELS) that simultaneously encourages learning exclusive sets of high-level features and essential, shared low-level features.
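One hedged way to picture the "exclusive high-level features" idea is a group-sparsity penalty on grouped high-level activations while lower layers remain shared; SHELS' actual objective may differ from this sketch:

```python
# Group-sparsity (L1-over-L2) penalty: encourages only a few high-level feature
# groups to be active per input, while low-level layers stay shared across tasks.
import torch

def exclusivity_penalty(high_level_feats):
    """high_level_feats: (batch, groups, dim) activations grouped by feature set."""
    group_norms = high_level_feats.norm(dim=-1)   # L2 within each group
    return group_norms.sum(dim=-1).mean()         # L1 across groups -> few active groups
```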
no code implementations • 26 Jan 2022 • Boyu Wang, Jorge Mendez, Changjian Shui, Fan Zhou, Di wu, Gezheng Xu, Christian Gagné, Eric Eaton
Unlike existing measures which are used as tools to bound the difference of expected risks between tasks (e.g., $\mathcal{H}$-divergence or discrepancy distance), we theoretically show that the performance gap can be viewed as a data- and algorithm-dependent regularizer, which controls the model complexity and leads to finer guarantees.
no code implementations • 19 Jan 2022 • Ashwin De Silva, Rahul Ramesh, Lyle Ungar, Marshall Hussain Shuler, Noah J. Cowan, Michael Platt, Chen Li, Leyla Isik, Seung-Eon Roh, Adam Charles, Archana Venkataraman, Brian Caffo, Javier J. How, Justus M Kebschull, John W. Krakauer, Maxim Bichuch, Kaleab Alemayehu Kinfu, Eva Yezerets, Dinesh Jayaraman, Jong M. Shin, Soledad Villar, Ian Phillips, Carey E. Priebe, Thomas Hartung, Michael I. Miller, Jayanta Dey, Ningyuan Huang, Eric Eaton, Ralph Etienne-Cummings, Elizabeth L. Ogburn, Randal Burns, Onyema Osuagwu, Brett Mensh, Alysson R. Muotri, Julia Brown, Chris White, Weiwei Yang, Andrei A. Rusu, Timothy Verstynen, Konrad P. Kording, Pratik Chaudhari, Joshua T. Vogelstein
We conjecture that certain sequences of tasks are not retrospectively learnable (in which the data distribution is fixed), but are prospectively learnable (in which distributions may be dynamic), suggesting that prospective learning is more difficult in kind than retrospective learning.
no code implementations • 29 Sep 2021 • Pengyuan Lu, Seungwon Lee, Amanda Watson, David Kent, Insup Lee, Eric Eaton, James Weimer
This tool achieves performance similar to training on fully labeled data, in terms of per-task accuracy and resistance to catastrophic forgetting.
no code implementations • 29 Sep 2021 • Jayanta Dey, Ali Geisa, Ronak Mehta, Tyler M. Tomita, Hayden S. Helm, Haoyin Xu, Eric Eaton, Jeffery Dick, Carey E. Priebe, Joshua T. Vogelstein
Establishing proper and universally agreed-upon definitions for these learning setups is essential for thoroughly exploring the evolution of ideas across different learning scenarios and deriving generalized mathematical bounds for these learners.
3 code implementations • 12 Jun 2021 • Kyle Vedder, Eric Eaton
Bird's Eye View (BEV) is a popular representation for processing 3D point clouds, and by its nature is fundamentally sparse.
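A small sketch of why BEV is naturally sparse: bucketing points into a 2D grid and keeping only the occupied cells (illustrative, not the paper's pipeline):

```python
# Sparse BEV construction: only cells that actually contain points are stored.
import numpy as np
from collections import defaultdict

def sparse_bev(points, cell_size=0.25):
    """points: (N, 3) array; returns {(ix, iy): [point indices]} for non-empty cells."""
    cells = defaultdict(list)
    ij = np.floor(points[:, :2] / cell_size).astype(int)
    for idx, (i, j) in enumerate(map(tuple, ij)):
        cells[(i, j)].append(idx)
    return cells   # typically a tiny fraction of the full dense grid
```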
no code implementations • ICML Workshop LifelongML 2020 • Seungwon Lee, Sima Behpour, Eric Eaton
In deep networks, transferring the appropriate granularity of knowledge is as important as the transfer mechanism, and must be driven by the relationships among tasks.
1 code implementation • ICLR 2021 • Jorge A. Mendez, Eric Eaton
A hallmark of human intelligence is the ability to construct self-contained chunks of knowledge and adequately reuse them in novel combinations for solving different yet structurally related problems.
2 code implementations • NeurIPS 2020 • Jorge A. Mendez, Boyu Wang, Eric Eaton
Policy gradient methods have shown success in learning control policies for high-dimensional dynamical systems.
no code implementations • ICML Workshop LifelongML 2020 • Jorge A Mendez, Eric Eaton
Policy gradient methods have shown success in learning continuous control policies for high-dimensional dynamical systems.
1 code implementation • NeurIPS 2019 • Boyu Wang, Jorge Mendez, Mingbo Cai, Eric Eaton
We propose a new principle for transfer learning, based on a straightforward intuition: if two domains are similar to each other, the model trained on one domain should also perform well on the other domain, and vice versa.
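The intuition above can be illustrated with a symmetric cross-domain evaluation, training on one domain and scoring on the other; this is only a toy stand-in for the paper's estimator:

```python
# Symmetric cross-domain score: similar domains should transfer well in both directions.
from sklearn.linear_model import LogisticRegression

def cross_domain_score(Xa, ya, Xb, yb):
    a_to_b = LogisticRegression(max_iter=1000).fit(Xa, ya).score(Xb, yb)
    b_to_a = LogisticRegression(max_iter=1000).fit(Xb, yb).score(Xa, ya)
    return 0.5 * (a_to_b + b_to_a)
```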
no code implementations • 10 Jun 2019 • Mohammad Rostami, Soheil Kolouri, Zak Murez, Yuri Owechko, Eric Eaton, Kyungnam Kim
Zero-shot learning (ZSL) is a framework to classify images belonging to unseen classes based solely on semantic information about these unseen classes.
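For context, a minimal sketch of the standard attribute-based ZSL setup the abstract refers to, classifying by the nearest unseen-class attribute vector in a shared semantic space (assumed setup, not the paper's specific method):

```python
# Nearest-attribute ZSL: compare an image embedding (already mapped into the
# semantic space) against attribute vectors of unseen classes by cosine similarity.
import numpy as np

def zsl_predict(image_embedding, unseen_class_attributes):
    """image_embedding: (d,); unseen_class_attributes: (C, d)."""
    sims = unseen_class_attributes @ image_embedding / (
        np.linalg.norm(unseen_class_attributes, axis=1)
        * np.linalg.norm(image_embedding) + 1e-8)
    return int(np.argmax(sims))   # index of the predicted unseen class
```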
no code implementations • 6 Apr 2019 • Julia E. Reid, Eric Eaton
Keywords: pediatric ophthalmology, machine learning, artificial intelligence, deep learning
no code implementations • 18 Nov 2017 • José Marcio Luna, Eric Eaton, Lyle H. Ungar, Eric Diffenderfer, Shane T. Jensen, Efstathios D. Gennatas, Mateo Wirth, Charles B. Simone II, Timothy D. Solberg, Gilmer Valdes
Additive models, such as produced by gradient boosting, and full interaction models, such as classification and regression trees (CART), are widely used algorithms that have been investigated largely in isolation.
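To make the contrast concrete, a quick scikit-learn comparison of the two families, using depth-1 boosted trees as an additive model and a single CART tree as a full-interaction model (a rough stand-in, not the paper's experiments):

```python
# Additive model (boosted stumps) vs. full-interaction model (single CART tree).
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

X, y = make_friedman1(n_samples=500, random_state=0)
additive = GradientBoostingRegressor(max_depth=1, random_state=0)  # depth-1 trees => additive
cart = DecisionTreeRegressor(random_state=0)                       # full interactions
print(cross_val_score(additive, X, y, cv=5).mean(),
      cross_val_score(cart, X, y, cv=5).mean())
```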
no code implementations • 10 Oct 2017 • David Isele, Mohammad Rostami, Eric Eaton
Knowledge transfer between tasks can improve the performance of learned models, but requires an accurate estimate of the inter-task relationships to identify the relevant knowledge to transfer.
no code implementations • 15 Sep 2017 • Mohammad Rostami, Soheil Kolouri, Kyungnam Kim, Eric Eaton
Lifelong machine learning methods acquire knowledge over a series of consecutive tasks, continually building upon their experience.
no code implementations • 1 Feb 2017 • Eric Eaton, Sven Koenig, Claudia Schulz, Francesco Maurelli, John Lee, Joshua Eckroth, Mark Crowley, Richard G. Freedman, Rogelio E. Cardona-Rivera, Tiago Machado, Tom Williams
The 7th Symposium on Educational Advances in Artificial Intelligence (EAAI'17, co-chaired by Sven Koenig and Eric Eaton) launched the EAAI New and Future AI Educator Program to support the training of early-career university faculty, secondary school faculty, and future educators (PhD candidates or postdocs who intend a career in academia).
no code implementations • 18 Oct 2016 • Decebal Constantin Mocanu, Maria Torres Vega, Eric Eaton, Peter Stone, Antonio Liotta
Conceived in the early 1990s, Experience Replay (ER) has been shown to be a successful mechanism to allow online learning algorithms to reuse past experiences.
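For reference, a minimal replay buffer capturing the ER mechanism described above:

```python
# Simple experience replay buffer: store past transitions and sample them
# uniformly for reuse by an online learning algorithm.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)   # oldest experiences are evicted first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```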
no code implementations • 20 Apr 2016 • Decebal Constantin Mocanu, Haitham Bou Ammar, Luis Puig, Eric Eaton, Antonio Liotta
Estimation, recognition, and near-future prediction of 3D trajectories based on their two-dimensional projections from a single camera source is an exceptionally difficult problem due to uncertainty in the trajectories and environment, the high dimensionality of the trajectory states, and the lack of sufficient labeled data.
no code implementations • 21 May 2015 • Haitham Bou Ammar, Rasul Tutunov, Eric Eaton
Lifelong reinforcement learning provides a promising framework for developing versatile agents that can accumulate knowledge over a lifetime of experience and rapidly learn new tasks by building upon prior knowledge.