no code implementations • 5 Oct 2023 • Dhireesha Kudithipudi, Anurag Daram, Abdullah M. Zyarah, Fatima Tuz Zohora, James B. Aimone, Angel Yanguas-Gil, Nicholas Soures, Emre Neftci, Matthew Mattina, Vincenzo Lomonaco, Clare D. Thiem, Benjamin Epstein
Lifelong learning - an agent's ability to learn throughout its lifetime - is a hallmark of biological learning systems and a central challenge for artificial intelligence (AI).
no code implementations • 8 Aug 2023 • Sandeep Madireddy, Angel Yanguas-Gil, Prasanna Balaprakash
The ability to learn continuously from an incoming data stream without catastrophic forgetting is critical to designing intelligent systems.
1 code implementation • 26 Feb 2023 • Angel Yanguas-Gil, Sandeep Madireddy
In this work we have extended AutoML-inspired approaches to the exploration and optimization of neuromorphic architectures.
no code implementations • 18 Jan 2023 • Megan M. Baker, Alexander New, Mario Aguilar-Simon, Ziad Al-Halah, Sébastien M. R. Arnold, Ese Ben-Iwhiwhu, Andrew P. Brna, Ethan Brooks, Ryan C. Brown, Zachary Daniels, Anurag Daram, Fabien Delattre, Ryan Dellana, Eric Eaton, Haotian Fu, Kristen Grauman, Jesse Hostetler, Shariq Iqbal, Cassandra Kent, Nicholas Ketz, Soheil Kolouri, George Konidaris, Dhireesha Kudithipudi, Erik Learned-Miller, Seungwon Lee, Michael L. Littman, Sandeep Madireddy, Jorge A. Mendez, Eric Q. Nguyen, Christine D. Piatko, Praveen K. Pilly, Aswin Raghavan, Abrar Rahman, Santhosh Kumar Ramakrishnan, Neale Ratzlaff, Andrea Soltoggio, Peter Stone, Indranil Sur, Zhipeng Tang, Saket Tiwari, Kyle Vedder, Felix Wang, Zifan Xu, Angel Yanguas-Gil, Harel Yedidsion, Shangqun Yu, Gautam K. Vallabha
Despite the advancement of machine learning techniques in recent years, state-of-the-art systems lack robustness to "real world" events: the input distributions and tasks encountered by deployed systems will not be limited to the original training context, and systems will instead need to adapt to novel distributions and tasks while deployed.
1 code implementation • 30 Nov 2022 • Angel Yanguas-Gil, Sandeep Madireddy
Our model leverages the offline training of a feature extraction stage and a common general policy layer to enable the convergence of RL algorithms in online settings.
1 code implementation • 10 May 2022 • Angel Yanguas-Gil, Jeffrey W. Elam
In this work we explore the application of deep neural networks to the optimization of atomic layer deposition processes based on thickness values obtained at different points of an ALD reactor.
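To make the regression task concrete, here is a minimal sketch of a small fully connected network mapping thickness readings at several reactor points to a single process setting. The synthetic data, architecture, and hand-written training loop are all invented for illustration and are not the model or dataset used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 4 thickness probes -> 1 process parameter.
# (Real ALD data would come from reactor measurements; this is random.)
X = rng.normal(size=(256, 4))
true_w = np.array([0.5, -1.0, 0.25, 0.8])
y = X @ true_w + 0.01 * rng.normal(size=256)

# One hidden tanh layer, trained by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(4, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    pred = (h @ W2 + b2).ravel()
    g_pred = (2 / len(y)) * (pred - y)[:, None]   # d(MSE)/d(pred)
    gW2 = h.T @ g_pred                  # backprop by hand
    gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h**2)  # tanh derivative
    gW1 = X.T @ g_h
    gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

h = np.tanh(X @ W1 + b1)
mse = float(np.mean(((h @ W2 + b2).ravel() - y) ** 2))
```

After training, `mse` is small relative to the variance of `y`, showing the fit; in practice such a surrogate model would be inverted or searched over to choose process conditions.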
no code implementations • 9 Apr 2021 • Angel Yanguas-Gil
Neuromorphic architectures are ideally suited for the implementation of smart sensors able to react, learn, and respond to a changing environment.
no code implementations • 16 Jul 2020 • Sandeep Madireddy, Angel Yanguas-Gil, Prasanna Balaprakash
Using high-performing configurations meta-learned in the single-task learning setting, we achieve superior continual learning performance on Split-MNIST and Split-CIFAR-10 compared with other memory-constrained learning approaches, and match that of state-of-the-art memory-intensive replay-based approaches.
1 code implementation • 13 Jul 2020 • Angel Yanguas-Gil
In this work we explore recurrent representations of leaky integrate-and-fire neurons operating at a timescale equal to their absolute refractory period.
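The neuron model named here can be illustrated with a toy discrete-time simulation of a leaky integrate-and-fire neuron with an absolute refractory period. All parameter values below are arbitrary, and this sketch does not reproduce the recurrent representation studied in the paper.

```python
def simulate_lif(inputs, tau=10.0, v_thresh=1.0, v_reset=0.0,
                 refractory_steps=2, dt=1.0):
    """Return the list of time steps at which the neuron spikes.

    inputs: sequence of input currents, one value per time step.
    During the absolute refractory period the membrane is clamped
    and incoming current is ignored.
    """
    v = 0.0
    refractory = 0          # steps remaining in the refractory period
    spikes = []
    for t, i_in in enumerate(inputs):
        if refractory > 0:
            refractory -= 1
            continue
        # Leaky integration: v decays toward 0 and accumulates input.
        v += dt * (-v / tau + i_in)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
            refractory = refractory_steps
    return spikes

# Constant drive produces regular spiking separated by the
# refractory period plus the integration time:
print(simulate_lif([0.6] * 20))  # → [1, 5, 9, 13, 17]
```

Operating the surrounding network at a step equal to the refractory period, as the snippet's timescale suggests, means each neuron can emit at most one spike per step, which is what makes a compact recurrent representation possible.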
no code implementations • ICML Workshop LifelongML 2020 • Sandeep Madireddy, Angel Yanguas-Gil, Prasanna Balaprakash
We focus on the problem of how to achieve online continual learning under memory-constrained conditions where the input data may not be known a priori.
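One common building block for memory-constrained online learning on an unknown stream is a fixed-size replay buffer filled by reservoir sampling, which keeps every example seen so far with equal probability. This is a generic sketch of that standard technique, not the specific method proposed in the paper.

```python
import random

class ReservoirBuffer:
    """Fixed-capacity memory updated by reservoir sampling.

    After n examples have streamed past, each one is retained with
    probability capacity / n, regardless of arrival order, so the
    buffer stays an unbiased sample of the whole stream.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.n_seen = 0
        self.data = []
        self.rng = random.Random(seed)

    def add(self, example):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)       # buffer not yet full
        else:
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = example      # evict a random slot

buf = ReservoirBuffer(capacity=10)
for i in range(1000):
    buf.add(i)
# buf.data now holds 10 examples drawn uniformly from the 1000 seen.
```

The appeal in this setting is that memory use is fixed up front while the buffer adapts automatically as new tasks or distributions appear in the stream.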
no code implementations • 4 Jun 2019 • Sandeep Madireddy, Angel Yanguas-Gil, Prasanna Balaprakash
Our results show that optimal learning rules can be dataset-dependent even within similar tasks.