Search Results for author: Aswin Raghavan

Found 17 papers, 0 papers with code

Enhancing Multi-Agent Coordination through Common Operating Picture Integration

no code implementations8 Nov 2023 Peihong Yu, Bhoram Lee, Aswin Raghavan, Supun Samarasekara, Pratap Tokekar, James Zachary Hare

Our results demonstrate the efficacy of COP integration, and show that COP-based training leads to robust policies compared to state-of-the-art Multi-Agent Reinforcement Learning (MARL) methods when faced with out-of-distribution initial states.

Multi-agent Reinforcement Learning
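
The listing gives only the abstract, but the core idea lends itself to a short illustration. Below is a minimal sketch, assuming a COP that is simply the concatenation of every agent's position and feature vector, appended to each agent's local observation before it reaches the policy. All function names and shapes are hypothetical, not taken from the paper.

```python
import numpy as np

def build_cop(agent_positions, agent_features):
    """Aggregate per-agent state into a single Common Operating Picture (COP).

    Here the COP is simply the concatenation of every agent's position and
    features; the paper's actual COP construction is not given in this listing.
    """
    return np.concatenate([np.asarray(agent_positions).ravel(),
                           np.asarray(agent_features).ravel()])

def augment_observations(local_obs, cop):
    """Append the shared COP to each agent's local observation vector."""
    return [np.concatenate([np.asarray(o).ravel(), cop]) for o in local_obs]

# Toy usage: three agents with 2-D positions, 4-D features, 6-D local observations.
positions = np.random.randn(3, 2)
features = np.random.randn(3, 4)
local_obs = [np.random.randn(6) for _ in range(3)]

cop = build_cop(positions, features)
cop_obs = augment_observations(local_obs, cop)
print([o.shape for o in cop_obs])  # each agent now sees its own obs plus the shared COP
```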

System Design for an Integrated Lifelong Reinforcement Learning Agent for Real-Time Strategy Games

no code implementations8 Dec 2022 Indranil Sur, Zachary Daniels, Abrar Rahman, Kamil Faber, Gianmarco J. Gallardo, Tyler L. Hayes, Cameron E. Taylor, Mustafa Burak Gurbuz, James Smith, Sahana Joshi, Nathalie Japkowicz, Michael Baron, Zsolt Kira, Christopher Kanan, Roberto Corizzo, Ajay Divakaran, Michael Piacentino, Jesse Hostetler, Aswin Raghavan

In this paper, we introduce the Lifelong Reinforcement Learning Components Framework (L2RLCF), which standardizes L2RL systems and assimilates different continual learning components (each addressing different aspects of the lifelong learning problem) into a unified system.

Continual Learning reinforcement-learning +2
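
As a rough illustration of what "standardizing L2RL systems and assimilating different continual learning components into a unified system" could look like, here is a minimal plug-in sketch. The component interface, class names, and event hooks are assumptions made for illustration; the real L2RLCF API is not shown in this listing.

```python
class ContinualLearningComponent:
    """Hypothetical plug-in interface; not the actual L2RLCF API."""
    def on_task_start(self, task_id): ...
    def on_step(self, transition): ...
    def on_task_end(self, task_id): ...

class ReplayBufferComponent(ContinualLearningComponent):
    """One example component: it simply records every transition it sees."""
    def __init__(self):
        self.buffer = []
    def on_step(self, transition):
        self.buffer.append(transition)

class L2RLSystem:
    """Drives a sequence of tasks and forwards events to every registered component."""
    def __init__(self, components):
        self.components = components
    def run_task(self, task_id, transitions):
        for c in self.components:
            c.on_task_start(task_id)
        for t in transitions:
            for c in self.components:
                c.on_step(t)
        for c in self.components:
            c.on_task_end(task_id)

system = L2RLSystem([ReplayBufferComponent()])
system.run_task("task-0", transitions=[{"obs": 0, "action": 1, "reward": 0.5}])
```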

Model-Free Generative Replay for Lifelong Reinforcement Learning: Application to Starcraft-2

no code implementations9 Aug 2022 Zachary Daniels, Aswin Raghavan, Jesse Hostetler, Abrar Rahman, Indranil Sur, Michael Piacentino, Ajay Divakaran

We present a version of GR for LRL that satisfies two desiderata: (a) Introspective density modelling of the latent representations of policies learned using deep RL, and (b) Model-free end-to-end learning.

Management reinforcement-learning +3
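
A minimal sketch of the two desiderata, using a Gaussian mixture purely as a stand-in density model fitted to latent policy features and then sampled for replay; the paper's actual introspective density model and end-to-end training loop are not specified in this listing.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in "latent representations" of a policy network for the current task;
# in the paper these would come from the deep RL agent's own feature extractor.
latents = np.random.randn(2048, 32)

# Introspective density modelling: fit a generative model over the latents
# (a Gaussian mixture here purely for illustration) ...
density_model = GaussianMixture(n_components=8, covariance_type="diag").fit(latents)

# ... then replay pseudo-samples alongside new-task data to mitigate forgetting.
replayed_latents, _ = density_model.sample(256)
print(replayed_latents.shape)  # (256, 32)
```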

Real-time Hyper-Dimensional Reconfiguration at the Edge using Hardware Accelerators

no code implementations10 Jun 2022 Indhumathi Kandaswamy, Saurabh Farkya, Zachary Daniels, Gooitzen van der Wal, Aswin Raghavan, Yuzheng Zhang, Jun Hu, Michael Lomnitz, Michael Isnardi, David Zhang, Michael Piacentino

In this paper we present Hyper-Dimensional Reconfigurable Analytics at the Tactical Edge (HyDRATE) using low-SWaP embedded hardware that can perform real-time reconfiguration at the edge, leveraging non-MAC (free of floating-point multiply-accumulate operations) deep neural networks (DNNs) combined with hyperdimensional (HD) computing accelerators.

Few-Shot Learning Quantization
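
For readers unfamiliar with hyperdimensional (HD) computing, the arithmetic behind it is compact enough to sketch: bipolar hypervectors combined by elementwise binding and majority-vote bundling, queried by normalized dot-product similarity. This shows only the generic HD encoding idea, not the HyDRATE hardware design; the dimensionality and names are illustrative.

```python
import numpy as np

D = 10_000  # hypervector dimensionality (illustrative)
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise multiply (MAC-free, just sign flips)."""
    return a * b

def bundle(hvs):
    """Bundling: elementwise majority vote across hypervectors."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    return np.dot(a, b) / D

# Encode five feature/role pairs into a class prototype and query it.
feature_hvs = [random_hv() for _ in range(5)]
role_hvs = [random_hv() for _ in range(5)]
prototype = bundle([bind(f, r) for f, r in zip(feature_hvs, role_hvs)])

query = bind(feature_hvs[0], role_hvs[0])
print(similarity(query, prototype))  # noticeably above chance (~0) for a stored pair
```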

Lifelong Learning using Eigentasks: Task Separation, Skill Acquisition, and Selective Transfer

no code implementations14 Jul 2020 Aswin Raghavan, Jesse Hostetler, Indranil Sur, Abrar Rahman, Ajay Divakaran

We propose a wake-sleep cycle of alternating task learning and knowledge consolidation for learning in our framework, and instantiate it for lifelong supervised learning and lifelong RL.

Continual Learning Transfer Learning
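
The wake-sleep cycle itself can be sketched as a simple alternation; the placeholders below stand in for task learning and consolidation, and the eigentask machinery (task separation, skill acquisition, selective transfer) is deliberately omitted.

```python
def wake_phase(model, task_data):
    """Learn the current task (placeholder: record the examples the model saw)."""
    model["recent"] = list(task_data)
    return model

def sleep_phase(model):
    """Consolidate recent experience into long-term knowledge (placeholder)."""
    model["consolidated"].extend(model.pop("recent", []))
    return model

model = {"consolidated": []}
task_stream = [[("x1", "y1"), ("x2", "y2")], [("x3", "y3")]]

# Alternate task learning (wake) with knowledge consolidation (sleep),
# as described in the abstract above.
for task_data in task_stream:
    model = wake_phase(model, task_data)
    model = sleep_phase(model)

print(len(model["consolidated"]))  # 3 examples consolidated across tasks
```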

Lifelong Learning using Eigentasks: Task Separation, Skill Acquisition and Selective Transfer

no code implementations ICML Workshop LifelongML 2020 Aswin Raghavan, Jesse Hostetler, Indranil Sur, Abrar Rahman, Ajay Divakaran

We propose a wake-sleep cycle of alternating task learning and knowledge consolidation for learning in our framework, and instantiate it for lifelong supervised learning and lifelong RL.

Continual Learning Starcraft +1

Generative Memory for Lifelong Reinforcement Learning

no code implementations22 Feb 2019 Aswin Raghavan, Jesse Hostetler, Sek Chai

Our research is focused on understanding and applying biological memory transfers to new AI systems that can fundamentally improve their performance throughout their fielded lifetime of experience.

reinforcement-learning Reinforcement Learning (RL)

Generalized Ternary Connect: End-to-End Learning and Compression of Multiplication-Free Deep Neural Networks

no code implementations12 Nov 2018 Samyak Parajuli, Aswin Raghavan, Sek Chai

The use of deep neural networks in edge computing devices hinges on the balance between accuracy and complexity of computations.

Edge-computing General Classification +1

Power-Grid Controller Anomaly Detection with Enhanced Temporal Deep Learning

no code implementations18 Jun 2018 Zecheng He, Aswin Raghavan, Guangyuan Hu, Sek Chai, Ruby Lee

Specifically, we first train a temporal deep learning model, using only normal HPC (hardware performance counter) readings from legitimate processes that run daily in these power-grid systems, to model the normal behavior of the power-grid controller.

Anomaly Detection
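
A minimal sketch of the train-on-normal-only idea, with a linear autoregressive predictor standing in for the temporal deep learning model and synthetic readings standing in for HPC traces; thresholding the prediction error on normal data flags anomalous behaviour at run time. Everything below is illustrative, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_ar_dataset(series, lag=8):
    """Windows of the past `lag` readings -> the next reading."""
    X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

# "Normal" controller readings (synthetic here); the model only ever sees
# normal behaviour during training.
normal = np.sin(np.linspace(0, 60, 2000)) + 0.05 * rng.standard_normal(2000)
X, y = make_ar_dataset(normal)
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear AR stand-in for the temporal model

# Residuals on normal data set the anomaly threshold.
threshold = np.abs(X @ w - y).max() * 1.5

# At run time, a compromised controller produces readings the model predicts poorly.
anomalous = normal[:500] + 0.8 * rng.standard_normal(500)
Xa, ya = make_ar_dataset(anomalous)
print("anomaly rate:", float(np.mean(np.abs(Xa @ w - ya) > threshold)))
```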

Bit-Regularized Optimization of Neural Nets

no code implementations ICLR 2018 Mohamed Amer, Aswin Raghavan, Graham W. Taylor, Sek Chai

Our key idea is to control the expressive power of the network by dynamically quantizing the range and set of values that the parameters can take.

Translation
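
The quantization step being described can be sketched as uniform quantization of the weights to 2^b levels inside a clipping range; in the paper the bit width and range are controlled dynamically during training, whereas here they are fixed arguments for illustration and the function name is hypothetical.

```python
import numpy as np

def quantize(weights, n_bits, w_min=None, w_max=None):
    """Uniformly quantize `weights` to 2**n_bits levels inside [w_min, w_max].

    Illustrative only: the paper controls precision and range dynamically
    during training, whereas here both are plain arguments.
    """
    w_min = weights.min() if w_min is None else w_min
    w_max = weights.max() if w_max is None else w_max
    levels = 2 ** n_bits - 1
    step = (w_max - w_min) / levels
    q = np.round((np.clip(weights, w_min, w_max) - w_min) / step)
    return w_min + q * step

w = np.random.randn(4, 4).astype(np.float32)
print(np.unique(quantize(w, n_bits=2)).size)  # at most 4 distinct values
```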

BitNet: Bit-Regularized Deep Neural Networks

no code implementations16 Aug 2017 Aswin Raghavan, Mohamed Amer, Sek Chai, Graham Taylor

The parameters of neural networks are usually unconstrained and have a dynamic range dispersed over all real values.

Translation

Low Precision Neural Networks using Subband Decomposition

no code implementations24 Mar 2017 Sek Chai, Aswin Raghavan, David Zhang, Mohamed Amer, Tim Shields

In this paper, we present a unique approach that uses lower-precision weights for a more efficient and faster training phase.

Symbolic Opportunistic Policy Iteration for Factored-Action MDPs

no code implementations NeurIPS 2013 Aswin Raghavan, Roni Khardon, Alan Fern, Prasad Tadepalli

We address the scalability of symbolic planning under uncertainty with factored states and actions.
