Search Results for author: Sai Aparna Aketi

Found 12 papers, 7 with code

AdaGossip: Adaptive Consensus Step-size for Decentralized Deep Learning with Communication Compression

no code implementations · 9 Apr 2024 · Sai Aparna Aketi, Abolfazl Hashemi, Kaushik Roy

Decentralized learning is crucial in supporting on-device learning over large distributed datasets, eliminating the need for a central server.
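The consensus (gossip) step that such server-free methods build on can be sketched as follows. This is a minimal illustration: the 3-node topology, mixing matrix, and fixed scalar step-size `gamma` are all assumptions for the example — AdaGossip's contribution is adapting the consensus step-size, which is not modeled here.

```python
import numpy as np

def gossip_step(params, W, gamma=1.0):
    """One decentralized consensus (gossip) step.

    Each node mixes its parameters with its neighbors' according to the
    mixing matrix W; gamma is the consensus step-size (a fixed scalar
    here for illustration, whereas AdaGossip adapts it).

    params: (n_nodes, dim) per-node model parameters
    W:      (n_nodes, n_nodes) doubly stochastic mixing matrix
    """
    return params + gamma * (W @ params - params)

# Toy 3-node graph with uniform neighbor weights
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
x = np.array([[0.0], [3.0], [6.0]])  # each row: one node's parameter
for _ in range(50):
    x = gossip_step(x, W)
# repeated gossip drives every node toward the global average (3.0)
```

In practice each node interleaves a local SGD step with this mixing step; the gossip step alone already shows why all nodes converge to a common model without any central server.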

Towards Two-Stream Foveation-based Active Vision Learning

no code implementations · 24 Mar 2024 · Timur Ibrayev, Amitangshu Mukherjee, Sai Aparna Aketi, Kaushik Roy

Specifically, the proposed framework models the following mechanisms: 1) ventral (what) stream focusing on the input regions perceived by the fovea part of an eye (foveation), 2) dorsal (where) stream providing visual guidance, and 3) iterative processing of the two streams to calibrate visual focus and process the sequence of focused image patches.

Foveation · Object · +1
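The three mechanisms amount to a crop-and-refixate loop. A minimal sketch, assuming a brightest-pixel saliency proxy for the dorsal stream (the paper uses learned streams, so the functions and the inhibition-of-return step below are illustrative stand-ins):

```python
import numpy as np

def foveate(image, center, size=8):
    """Ventral ('what') stream input: crop a fovea-sized patch
    around the current fixation (mechanism 1)."""
    r, c = center
    h = size // 2
    return image[max(r - h, 0):r + h, max(c - h, 0):c + h]

def next_fixation(image):
    """Dorsal ('where') stream stand-in: fixate on the brightest
    pixel (a crude saliency proxy, not the paper's learned stream)."""
    return tuple(int(i) for i in np.unravel_index(np.argmax(image), image.shape))

img = np.zeros((32, 32))
img[20, 10], img[5, 25], img[15, 3] = 1.0, 0.8, 0.6  # three salient spots

fixations, patches = [], []
for _ in range(3):                      # mechanism 3: iterate the two streams
    fix = next_fixation(img)            # 'where' proposes the next focus
    fixations.append(fix)
    patches.append(foveate(img, fix))   # 'what' processes the focused patch
    img[fix] = 0.0                      # inhibition of return: move on
```

Each iteration visits the next-most-salient location, so the sequence of focused patches covers the scene in order of saliency.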

Averaging Rate Scheduler for Decentralized Learning on Heterogeneous Data

1 code implementation · 5 Mar 2024 · Sai Aparna Aketi, Sakshi Choudhary, Kaushik Roy

State-of-the-art decentralized learning algorithms typically require the data distribution to be Independent and Identically Distributed (IID).

Scheduling

Cross-feature Contrastive Loss for Decentralized Deep Learning on Heterogeneous Data

1 code implementation · 24 Oct 2023 · Sai Aparna Aketi, Kaushik Roy

The current state-of-the-art decentralized learning algorithms mostly assume the data distribution to be Independent and Identically Distributed (IID).

Data-free Knowledge Distillation

Global Update Tracking: A Decentralized Learning Algorithm for Heterogeneous Data

1 code implementation · NeurIPS 2023 · Sai Aparna Aketi, Abolfazl Hashemi, Kaushik Roy

Decentralized learning enables the training of deep learning models over large distributed datasets generated at different locations, without the need for a central server.

Neighborhood Gradient Clustering: An Efficient Decentralized Learning Method for Non-IID Data Distributions

1 code implementation · 28 Sep 2022 · Sai Aparna Aketi, Sangamesh Kodge, Kaushik Roy

Our experiments demonstrate that NGC and CompNGC outperform (by 0-6%) the existing SoTA decentralized learning algorithm over non-IID data with significantly less compute and memory requirements.

Clustering

Low Precision Decentralized Distributed Training over IID and non-IID Data

1 code implementation · 17 Nov 2021 · Sai Aparna Aketi, Sangamesh Kodge, Kaushik Roy

In this paper, we propose low-precision decentralized training, which reduces the computational complexity and communication cost of decentralized training, and show its convergence.

Quantization
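Compressing what each node communicates is the core idea behind low-precision training; a minimal sketch of an unbiased uniform stochastic quantizer follows. This is an illustrative scheme under assumed design choices (uniform levels, per-tensor min/max range), not necessarily the paper's exact quantizer.

```python
import numpy as np

def quantize(x, bits=8, rng=np.random.default_rng(0)):
    """Uniform stochastic quantization of a tensor to `bits` bit-width,
    as used to compress models exchanged between nodes.

    Stochastic rounding keeps the quantizer unbiased in expectation,
    which matters for the convergence of low-precision training.
    """
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    if hi == lo:                       # constant tensor: nothing to quantize
        return x.copy()
    scaled = (x - lo) / (hi - lo) * levels
    q = np.floor(scaled + rng.random(x.shape))  # stochastic rounding
    return lo + q / levels * (hi - lo)

x = np.linspace(-1.0, 1.0, 1000)       # a toy parameter tensor
xq = quantize(x, bits=8)               # 8-bit version to communicate
# per-element error is bounded by one quantization step, (hi - lo) / 255
```

A node would quantize its model (or model update) with such a scheme before gossiping it to neighbors, cutting communication roughly 4x relative to 32-bit floats at 8 bits.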

Sparse-Push: Communication- & Energy-Efficient Decentralized Distributed Learning over Directed & Time-Varying Graphs with non-IID Datasets

no code implementations · 10 Feb 2021 · Sai Aparna Aketi, Amandeep Singh, Jan Rabaey

Current deep learning (DL) systems rely on a centralized computing paradigm which limits the amount of available training data, increases system latency, and adds privacy and security constraints.

Relevant-features based Auxiliary Cells for Energy Efficient Detection of Natural Errors

no code implementations · 25 Feb 2020 · Sai Aparna Aketi, Priyadarshini Panda, Kaushik Roy

To address this issue, we propose an ensemble of classifiers at hidden layers to enable energy efficient detection of natural errors.

Classification · General Classification · +1
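The energy saving comes from exiting early when a hidden-layer classifier is already confident. A minimal sketch, assuming a simple confidence-threshold exit policy and fixed linear heads (both are illustrative stand-ins for the paper's relevance-based auxiliary cells):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_with_aux(features_per_layer, aux_heads, threshold=0.9):
    """Run auxiliary classifiers attached at successive hidden layers;
    exit as soon as one is confident, skipping the remaining (more
    expensive) layers. Returns (predicted_class, exit_depth)."""
    for depth, (feats, head) in enumerate(zip(features_per_layer, aux_heads)):
        probs = softmax(head @ feats)
        if probs.max() >= threshold:
            return int(probs.argmax()), depth  # confident early exit
    return int(probs.argmax()), depth          # fell through to last head

# Toy network: two hidden layers, each with a fixed 2-class linear head
feats = [np.array([0.2, 0.1]),    # shallow features: still ambiguous
         np.array([4.0, -2.0])]   # deeper features: clearly class 0
heads = [np.eye(2), np.eye(2)]
label, exit_depth = predict_with_aux(feats, heads)
```

Inputs that no head classifies confidently can additionally be flagged as likely natural errors, which is the detection use the paper targets.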

Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks

1 code implementation · 23 Feb 2020 · Sai Aparna Aketi, Sourjya Roy, Anand Raghunathan, Kaushik Roy

To address all of the above issues, we present a simple yet effective methodology for gradual channel pruning while training, using a novel data-driven metric referred to as the feature relevance score.

Model Compression
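The pruning step itself is straightforward once channels have a relevance score: rank channels and drop the weakest fraction. A minimal sketch — the mean-absolute-activation score below is only a stand-in for the paper's feature relevance score, and the one-shot pruning here ignores the paper's gradual, during-training schedule:

```python
import numpy as np

def channel_relevance(acts):
    """Illustrative relevance score: mean absolute activation per channel
    (a stand-in for the paper's data-driven feature relevance score).

    acts: (batch, channels, H, W) feature maps from a conv layer.
    """
    return np.abs(acts).mean(axis=(0, 2, 3))

def prune_channels(weights, scores, fraction=0.25):
    """Drop the `fraction` of output channels with the lowest relevance.

    weights: (out_channels, in_channels, kH, kW) conv kernel.
    Returns the kernel restricted to the surviving channels.
    """
    n_keep = weights.shape[0] - int(fraction * weights.shape[0])
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])  # keep order stable
    return weights[keep]

rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 16, 4, 4))   # toy activations, 16 channels
w = rng.standard_normal((16, 3, 3, 3))      # toy conv kernel
w_pruned = prune_channels(w, channel_relevance(acts), fraction=0.25)
# 25% of 16 output channels removed -> 12 channels remain
```

In the gradual setting this step would be repeated every few epochs with a small fraction, letting the network recover between pruning rounds.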
