Search Results for author: Sandeep Madireddy

Found 21 papers, 4 papers with code

REMEDI: Corrective Transformations for Improved Neural Entropy Estimation

1 code implementation • 8 Feb 2024 • Viktor Nilsson, Anirban Samaddar, Sandeep Madireddy, Pierre Nyquist

The approach combines the minimization of the cross-entropy for simple, adaptive base models and the estimation of their deviation, in terms of the relative entropy, from the data density.
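
For background, the decomposition this entry describes rests on a standard information-theoretic identity (stated here for context, not quoted from the paper): the entropy of the data density p equals the cross-entropy against a base model q minus the relative entropy between them,

H(p) = H(p, q) - D_{\mathrm{KL}}(p \,\|\, q), \qquad H(p, q) = \mathbb{E}_{p}\left[-\log q(X)\right].

Minimizing the cross-entropy fits the base model, and estimating the remaining relative-entropy term supplies the corrective transformation to the entropy estimate.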

Towards Continually Learning Application Performance Models

no code implementations • 25 Oct 2023 • Ray A. O. Sinurat, Anurag Daram, Haryadi S. Gunawi, Robert B. Ross, Sandeep Madireddy

Machine learning-based performance models are increasingly being used to inform critical job scheduling and application optimization decisions.

Scheduling

Improving Performance in Continual Learning Tasks using Bio-Inspired Architectures

no code implementations • 8 Aug 2023 • Sandeep Madireddy, Angel Yanguas-Gil, Prasanna Balaprakash

The ability to learn continuously from an incoming data stream without catastrophic forgetting is critical to designing intelligent systems.

Continual Learning Split-CIFAR-10 +1

AutoML for neuromorphic computing and application-driven co-design: asynchronous, massively parallel optimization of spiking architectures

1 code implementation • 26 Feb 2023 • Angel Yanguas-Gil, Sandeep Madireddy

In this work we have extended AutoML-inspired approaches to the exploration and optimization of neuromorphic architectures.

AutoML
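
As a loose illustration of the kind of asynchronous, parallel search the title refers to (this is not the authors' code; the search space, the evaluate_snn objective, and the random-sampling strategy are all hypothetical stand-ins), one could keep a pool of workers saturated and act on each result as soon as it arrives:

import random
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

# Hypothetical spiking-architecture search space (illustrative values only).
SPACE = {
    "n_hidden": [64, 128, 256],
    "threshold": [0.5, 1.0, 1.5],
    "tau_mem": [5.0, 10.0, 20.0],
    "plasticity_rule": ["stdp", "hebbian", "none"],
}

def sample_config(rng):
    """Draw one architecture configuration uniformly at random."""
    return {k: rng.choice(v) for k, v in SPACE.items()}

def evaluate_snn(config):
    """Placeholder objective: stands in for training and scoring a spiking network."""
    return random.Random(str(sorted(config.items()))).random()

def asynchronous_search(n_workers=4, budget=32, seed=0):
    rng = random.Random(seed)
    best_score, best_config = float("-inf"), None
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        running = {}
        for _ in range(n_workers):                 # fill every worker up front
            cfg = sample_config(rng)
            running[pool.submit(evaluate_snn, cfg)] = cfg
        submitted = n_workers
        while running:
            # Asynchronous step: react as soon as any single evaluation finishes.
            done, _ = wait(running, return_when=FIRST_COMPLETED)
            for fut in done:
                cfg, score = running.pop(fut), fut.result()
                if score > best_score:
                    best_score, best_config = score, cfg
                if submitted < budget:             # keep the worker busy immediately
                    new_cfg = sample_config(rng)
                    running[pool.submit(evaluate_snn, new_cfg)] = new_cfg
                    submitted += 1
    return best_score, best_config

print(asynchronous_search())

The point of the asynchronous pattern is that a slow evaluation never blocks the rest of the search: a new configuration is dispatched the moment any worker frees up.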

General policy mapping: online continual reinforcement learning inspired by the insect brain

1 code implementation • 30 Nov 2022 • Angel Yanguas-Gil, Sandeep Madireddy

Our model leverages the offline training of a feature extraction and a common general policy layer to enable the convergence of RL algorithms in online settings.

Reinforcement Learning (RL)
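
A minimal sketch of the mechanism described above, assuming a PyTorch setting: an offline-trained feature extractor is frozen and only a small general policy layer is updated online (all module names, sizes, and the linear policy head are hypothetical, not taken from the paper):

import torch
import torch.nn as nn

class PolicyMapper(nn.Module):
    """Frozen, offline-trained feature extractor plus a policy head adapted online."""
    def __init__(self, feature_extractor, feature_dim, n_actions):
        super().__init__()
        self.features = feature_extractor
        for p in self.features.parameters():
            p.requires_grad = False                       # keep offline-trained weights fixed
        self.policy = nn.Linear(feature_dim, n_actions)   # the only part trained online

    def forward(self, obs):
        with torch.no_grad():
            z = self.features(obs)                        # shared, task-agnostic representation
        return torch.softmax(self.policy(z), dim=-1)

# Hypothetical usage: a small MLP stands in for the offline-trained extractor.
extractor = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))
agent = PolicyMapper(extractor, feature_dim=32, n_actions=4)
optimizer = torch.optim.SGD(agent.policy.parameters(), lr=1e-2)

obs = torch.randn(1, 16)
action_probs = agent(obs)                                 # action distribution for one observation

Because the optimizer only sees the policy head's parameters, online updates cannot overwrite the shared representation, which is the property that supports convergence in the online setting.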

Unified Probabilistic Neural Architecture and Weight Ensembling Improves Model Robustness

no code implementations • 8 Oct 2022 • Sumegha Premchandar, Sandeep Madireddy, Sanket Jantre, Prasanna Balaprakash

To this end, we propose a Unified probabilistic architecture and weight ensembling Neural Architecture Search (UraeNAS) that leverages advances in probabilistic neural architecture search and approximate Bayesian inference to generate ensembles from the joint distribution of neural network architectures and weights.

Bayesian Inference Neural Architecture Search
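
As a hedged, NumPy-only sketch of what ensembling from a joint architecture-and-weight distribution can look like (the categorical over two operations and the Gaussian weight posterior below are invented stand-ins, not the distributions learned by UraeNAS):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned distributions (invented stand-ins): a categorical over
# two candidate operations and a Gaussian posterior over an 8x1 weight matrix.
arch_probs = np.array([0.7, 0.3])                          # P(op = ReLU), P(op = tanh)
w_mean = rng.normal(size=(8, 1))
w_std = 0.1 * np.ones((8, 1))

def predict(x, op, w):
    """One forward pass under a sampled architecture choice and sampled weights."""
    h = x @ w
    return np.maximum(h, 0.0) if op == 0 else np.tanh(h)

def ensemble_predict(x, n_samples=20):
    """Average predictions over joint (architecture, weight) samples."""
    preds = []
    for _ in range(n_samples):
        op = rng.choice(len(arch_probs), p=arch_probs)     # sample an architecture
        w = rng.normal(w_mean, w_std)                      # sample weights
        preds.append(predict(x, op, w))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)           # prediction and spread

x = rng.normal(size=(4, 8))
mean_pred, pred_std = ensemble_predict(x)

Averaging over joint samples yields both a prediction and a spread that reflects architectural as well as weight uncertainty.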

HPC Storage Service Autotuning Using Variational-Autoencoder-Guided Asynchronous Bayesian Optimization

no code implementations • 3 Oct 2022 • Matthieu Dorier, Romain Egele, Prasanna Balaprakash, Jaehoon Koo, Sandeep Madireddy, Srinivasan Ramesh, Allen D. Malony, Rob Ross

Distributed data storage services tailored to specific applications have grown popular in the high-performance computing (HPC) community as a way to address I/O and storage challenges.

Bayesian Optimization Transfer Learning

Sequential Bayesian Neural Subnetwork Ensembles

no code implementations • 1 Jun 2022 • Sanket Jantre, Sandeep Madireddy, Shrijita Bhattacharya, Tapabrata Maiti, Prasanna Balaprakash

Deep neural network ensembles that appeal to model diversity have been used successfully to improve predictive performance and model robustness in several applications.

Sparsity-Inducing Categorical Prior Improves Robustness of the Information Bottleneck

no code implementations • 4 Mar 2022 • Anirban Samaddar, Sandeep Madireddy, Prasanna Balaprakash, Tapabrata Maiti, Gustavo de los Campos, Ian Fischer

In addition, it provides a mechanism for learning a joint distribution of the latent variable and the sparsity and hence can account for the complete uncertainty in the latent space.
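
A purely illustrative sketch of sampling a latent variable jointly with its sparsity, assuming a categorical prior over the number of active dimensions and a Gaussian over their values (these choices are assumptions for illustration, not the paper's model):

import numpy as np

rng = np.random.default_rng(0)
latent_dim = 16

# Assumed-for-illustration priors: a categorical over how many latent
# dimensions stay active, and a standard Gaussian over their values.
sparsity_levels = np.array([4, 8, 16])
sparsity_probs = np.array([0.5, 0.3, 0.2])

def sample_latent_and_sparsity():
    """Draw one joint sample of (sparsity level, masked latent variable)."""
    k = rng.choice(sparsity_levels, p=sparsity_probs)          # sampled sparsity
    mask = np.zeros(latent_dim)
    mask[rng.choice(latent_dim, size=k, replace=False)] = 1.0  # which dims are active
    z = rng.normal(size=latent_dim) * mask                     # masked Gaussian latent
    return k, z

samples = [sample_latent_and_sparsity() for _ in range(5)]     # 5 joint draws

Drawing the sparsity level and the latent values together is what lets the spread of such samples reflect uncertainty in both.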

DeepAdversaries: Examining the Robustness of Deep Learning Models for Galaxy Morphology Classification

no code implementations • 28 Dec 2021 • Aleksandra Ćiprijanović, Diana Kafkes, Gregory Snyder, F. Javier Sánchez, Gabriel Nathan Perdue, Kevin Pedro, Brian Nord, Sandeep Madireddy, Stefan M. Wild

On the other hand, we show that training with domain adaptation improves model robustness and mitigates the effects of these perturbations, improving the classification accuracy by 23% on data with higher observational noise.

Domain Adaptation Image Compression +1

Applications and Techniques for Fast Machine Learning in Science

no code implementations • 25 Oct 2021 • Allison McCarn Deiana, Nhan Tran, Joshua Agar, Michaela Blott, Giuseppe Di Guglielmo, Javier Duarte, Philip Harris, Scott Hauck, Mia Liu, Mark S. Neubauer, Jennifer Ngadiuba, Seda Ogrenci-Memik, Maurizio Pierini, Thea Aarrestad, Steffen Bahr, Jurgen Becker, Anne-Sophie Berthold, Richard J. Bonventre, Tomas E. Muller Bravo, Markus Diefenthaler, Zhen Dong, Nick Fritzsche, Amir Gholami, Ekaterina Govorkova, Kyle J Hazelwood, Christian Herwig, Babar Khan, Sehoon Kim, Thomas Klijnsma, Yaling Liu, Kin Ho Lo, Tri Nguyen, Gianantonio Pezzullo, Seyedramin Rasoulinezhad, Ryan A. Rivera, Kate Scholberg, Justin Selig, Sougata Sen, Dmitri Strukov, William Tang, Savannah Thais, Kai Lukas Unger, Ricardo Vilalta, Belina von Krosigk, Thomas K. Warburton, Maria Acosta Flechas, Anthony Aportela, Thomas Calvet, Leonardo Cristella, Daniel Diaz, Caterina Doglioni, Maria Domenica Galati, Elham E Khoda, Farah Fahim, Davide Giri, Benjamin Hawks, Duc Hoang, Burt Holzman, Shih-Chieh Hsu, Sergo Jindariani, Iris Johnson, Raghav Kansal, Ryan Kastner, Erik Katsavounidis, Jeffrey Krupa, Pan Li, Sandeep Madireddy, Ethan Marx, Patrick McCormack, Andres Meza, Jovan Mitrevski, Mohammed Attia Mohammed, Farouk Mokhtar, Eric Moreno, Srishti Nagu, Rohin Narayan, Noah Palladino, Zhiqiang Que, Sang Eon Park, Subramanian Ramamoorthy, Dylan Rankin, Simon Rothman, Ashish Sharma, Sioni Summers, Pietro Vischia, Jean-Roch Vlimant, Olivia Weng

In this community review report, we discuss applications and techniques for fast machine learning (ML) in science -- the concept of integrating powerful ML methods into the real-time experimental data processing loop to accelerate scientific discovery.

BIG-bench Machine Learning

Neuromodulated Neural Architectures with Local Error Signals for Memory-Constrained Online Continual Learning

no code implementations • 16 Jul 2020 • Sandeep Madireddy, Angel Yanguas-Gil, Prasanna Balaprakash

Using high-performing configurations meta-learned in the single-task learning setting, we achieve superior continual learning performance on Split-MNIST and Split-CIFAR-10 data compared with other memory-constrained learning approaches, and match that of the state-of-the-art memory-intensive replay-based approaches.

Bayesian Optimization Class Incremental Learning +4

Neuromorphic Architecture Optimization for Task-Specific Dynamic Learning

no code implementations • 4 Jun 2019 • Sandeep Madireddy, Angel Yanguas-Gil, Prasanna Balaprakash

Our results show that optimal learning rules can be dataset-dependent even within similar tasks.

Meta-Learning
