Search Results for author: Saurav Prakash

Found 15 papers, 5 papers with code

ATP: Enabling Fast LLM Serving via Attention on Top Principal Keys

no code implementations • 1 Mar 2024 • Yue Niu, Saurav Prakash, Salman Avestimehr

In particular, ATP barely loses accuracy with only $1/2$ of the principal keys, and incurs only around a $2\%$ accuracy drop with $1/4$ of the principal keys.

Llama
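
Reading the title literally, ATP scores attention against keys projected onto their top principal components. The NumPy sketch below illustrates that idea only; the SVD-based projection, the $1/\sqrt{r}$ scaling, and the function name are assumptions, not the paper's implementation.

```python
import numpy as np

def attention_top_principal_keys(Q, K, V, r):
    # Top-r principal directions of the (centered) key matrix via SVD.
    _, _, Vt = np.linalg.svd(K - K.mean(axis=0), full_matrices=False)
    P = Vt[:r].T                                 # (d, r) projection matrix
    scores = (Q @ P) @ (K @ P).T / np.sqrt(r)    # reduced-dimension dot products
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V                           # standard softmax-weighted values

# toy usage: 8 tokens, d = 16, keep half the principal directions
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(8, 16)) for _ in range(3))
out = attention_top_principal_keys(Q, K, V, r=8)
```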

All Rivers Run to the Sea: Private Learning with Asymmetric Flows

no code implementations • 5 Dec 2023 • Yue Niu, Ramy E. Ali, Saurav Prakash, Salman Avestimehr

The main part flows into a small model while the residuals are offloaded to a large model.

Quantization
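
A minimal sketch of the kind of split the excerpt describes, assuming a rank-r principal-component decomposition; the paper's actual decomposition, privacy mechanism, and quantization are not reproduced here.

```python
import numpy as np

def asymmetric_split(X, r):
    """Split data into a low-dimensional 'main' part and a 'residual',
    in the spirit of the excerpt (the rank-r PCA split is an assumption)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    main = (X - mu) @ Vt[:r].T @ Vt[:r] + mu   # stays with the small model
    residual = X - main                        # offloaded to the large model
    return main, residual

X = np.random.default_rng(1).normal(size=(32, 10))
main, res = asymmetric_split(X, r=2)
assert np.allclose(main + res, X)              # the two flows recompose the data
```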

Federated Classification in Hyperbolic Spaces via Secure Aggregation of Convex Hulls

2 code implementations • 14 Aug 2023 • Saurav Prakash, Jin Sima, Chao Pan, Eli Chien, Olgica Milenkovic

Third, we compute the complexity of the convex hulls in hyperbolic spaces to assess the extent of data leakage; at the same time, in order to limit communication cost for the hulls, we propose a new quantization method for the Poincaré disc coupled with Reed-Solomon-like encoding.

Federated Learning • graph partitioning +2
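
As a toy illustration of quantizing points on the Poincaré disc, the sketch below discretizes the disc on a polar grid; the grid itself and the bin counts are assumptions, and the paper's Reed-Solomon-like encoding layer is omitted.

```python
import numpy as np

def quantize_poincare(z, radial_bins=16, angular_bins=64):
    """Toy polar-grid quantizer for a point z = (x, y) on the Poincare disc;
    returns the bin indices and the reconstruction at the bin center."""
    x, y = z
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    ri = min(int(r * radial_bins), radial_bins - 1)                 # radius bin
    ti = int((theta + np.pi) / (2 * np.pi) * angular_bins) % angular_bins
    r_hat = (ri + 0.5) / radial_bins                                # bin centers
    t_hat = -np.pi + (ti + 0.5) * 2 * np.pi / angular_bins
    return (ri, ti), (r_hat * np.cos(t_hat), r_hat * np.sin(t_hat))

idx, z_hat = quantize_poincare((0.3, -0.4))   # point with norm 0.5 < 1
```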

Machine Unlearning of Federated Clusters

1 code implementation • 28 Oct 2022 • Chao Pan, Jin Sima, Saurav Prakash, Vishal Rana, Olgica Milenkovic

We introduce, for the first time, the problem of machine unlearning for federated clustering (FC), and propose an efficient unlearning mechanism for a customized secure FC framework.

Clustering • Federated Learning +2

Federated Learning of Large Models at the Edge via Principal Sub-Model Training

1 code implementation • 28 Aug 2022 • Yue Niu, Saurav Prakash, Souvik Kundu, Sunwoo Lee, Salman Avestimehr

However, the heterogeneous-client setting requires some clients to train the full model, which is not aligned with the resource-constrained setting, while the latter approaches break the privacy promises of FL by sharing intermediate representations or labels with the server.

Federated Learning

Lottery Aware Sparsity Hunting: Enabling Federated Learning on Resource-Limited Edge

1 code implementation • 27 Aug 2022 • Sara Babakniya, Souvik Kundu, Saurav Prakash, Yue Niu, Salman Avestimehr

A possible solution to this problem is to utilize off-the-shelf sparse learning algorithms at the clients to meet their resource budget.

Federated Learning • Model Compression +1
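
The off-the-shelf client-side sparsification mentioned above can be as simple as a magnitude-based mask, sketched below; the density parameter and masking rule illustrate that baseline only, not the paper's proposed method, which goes beyond it.

```python
import numpy as np

def magnitude_mask(weights, density):
    """Keep the largest-magnitude fraction `density` of weights, zeroing
    the rest to meet a client's resource budget (baseline sketch)."""
    k = max(1, int(density * weights.size))
    thresh = np.partition(np.abs(weights).ravel(), -k)[-k]  # k-th largest value
    return (np.abs(weights) >= thresh).astype(weights.dtype)

w = np.random.default_rng(2).normal(size=(4, 4))
w_sparse = w * magnitude_mask(w, density=0.25)   # ~25% of parameters kept
```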

Basil: A Fast and Byzantine-Resilient Approach for Decentralized Training

no code implementations • 16 Sep 2021 • Ahmed Roushdy Elkordy, Saurav Prakash, A. Salman Avestimehr

As our main contribution, we propose Basil, a fast and computationally efficient Byzantine-robust algorithm for decentralized training systems, which leverages a novel sequential, memory-assisted, and performance-based criterion for training over a logical ring while filtering out Byzantine users.
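
A minimal sketch of one node's step under such a performance-based criterion on the ring, using a toy linear-regression model; the stored-model count, loss, and update rule are all assumptions.

```python
import numpy as np

def loss(w, X, y):
    return np.mean((X @ w - y) ** 2)

def sgd_step(w, X, y, lr=0.1):
    return w - lr * 2 * X.T @ (X @ w - y) / len(y)

def basil_node_step(stored_models, X_local, y_local):
    """Score the S models most recently received from ring predecessors on
    local data, keep the best (filtering out poisoned/Byzantine models),
    then update it locally and forward the result to the next node."""
    losses = [loss(w, X_local, y_local) for w in stored_models]
    best = stored_models[int(np.argmin(losses))]
    return sgd_step(best, X_local, y_local)

rng = np.random.default_rng(3)
X, y = rng.normal(size=(20, 3)), rng.normal(size=20)
stored = [rng.normal(size=3) for _ in range(4)]   # S = 4 predecessor models
w_next = basil_node_step(stored, X, y)
```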

Coded Computing for Low-Latency Federated Learning over Wireless Edge Networks

no code implementations • 12 Nov 2020 • Saurav Prakash, Sagar Dhakal, Mustafa Akdeniz, Yair Yona, Shilpa Talwar, Salman Avestimehr, Nageen Himayat

For minimizing the epoch deadline time at the MEC server, we provide a tractable approach for finding the amount of coding redundancy and the number of local data points that a client processes during training, by exploiting the statistical properties of compute as well as communication delays.

Edge-computing • Federated Learning

Secure and Fault Tolerant Decentralized Learning

no code implementations • 15 Oct 2020 • Saurav Prakash, Amir Salman Avestimehr

To implement our novel per-client criterion for fault mitigation, DiverseFL creates a TEE-based secure enclave within the FL server. In addition to performing secure aggregation for the global model update step, the enclave securely receives a small representative sample of local data from each client only once before training, and computes guiding updates for each participating client during training.

Distributed, Parallel, and Cluster Computing
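
A sketch of the per-client check such guiding updates enable; the cosine-similarity test and threshold are assumptions, and the TEE and secure-aggregation machinery are omitted.

```python
import numpy as np

def passes_guiding_check(client_update, guiding_update, threshold=0.0):
    """Accept a client's update only if it is directionally consistent with
    the guiding update the enclave computed from that client's sample."""
    cos = client_update @ guiding_update / (
        np.linalg.norm(client_update) * np.linalg.norm(guiding_update) + 1e-12)
    return cos > threshold

g = np.array([1.0, -2.0, 0.5])                # guiding update from the sample
assert passes_guiding_check(g + 0.1, g)       # honest client accepted
assert not passes_guiding_check(-g, g)        # opposing update rejected
```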

Coded Computing for Federated Learning at the Edge

no code implementations • 7 Jul 2020 • Saurav Prakash, Sagar Dhakal, Mustafa Akdeniz, A. Salman Avestimehr, Nageen Himayat

Federated Learning (FL) is an exciting new paradigm that enables training a global model from data generated locally at the client nodes, without moving client data to a centralized server.

Edge-computing • Federated Learning +1

Half-skyrmions and Bimerons in an antiferromagnetic insulator at room temperature

no code implementations • 23 Jun 2020 • Hariom Jani, Jheng-Cyuan Lin, Jiahao Chen, Jack Harrison, Francesco Maccherozzi, Jonathan Schad, Saurav Prakash, Chang-Beom Eom, A. Ariando, T. Venkatesan, Paolo G. Radaelli

In the quest for post-CMOS technologies, ferromagnetic skyrmions and their anti-particles have shown great promise as topologically protected solitonic information carriers in memory-in-logic or neuromorphic devices.

Materials Science

Coded Federated Learning

no code implementations • 21 Feb 2020 • Sagar Dhakal, Saurav Prakash, Yair Yona, Shilpa Talwar, Nageen Himayat

Here, model parameters are computed locally by each client device and exchanged with a central server, which aggregates the local models for a global view, without requiring sharing of training data.

Federated Learning
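
The exchange described above is the standard FL aggregation step; a minimal sketch, with FedAvg-style sample-count weighting as an assumption and the paper's coding layer omitted.

```python
import numpy as np

def aggregate(local_models, num_samples):
    """Server-side step from the excerpt: combine locally computed parameters
    into a global view, weighted by each client's data size (FedAvg-style)."""
    total = sum(num_samples)
    return sum(n / total * w for w, n in zip(local_models, num_samples))

clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
global_model = aggregate(clients, num_samples=[100, 300])   # -> [2.5, 3.5]
```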

A Pre-defined Sparse Kernel Based Convolution for Deep CNNs

no code implementations • 2 Oct 2019 • Souvik Kundu, Saurav Prakash, Haleh Akrami, Peter A. Beerel, Keith M. Chugg

To explore the potential of this approach, we experiment with two widely used datasets, CIFAR-10 and Tiny ImageNet, in sparse variants of both the ResNet18 and VGG16 architectures.
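
A sketch of a convolution whose kernel sparsity pattern is fixed before training, assuming a random binary mask in PyTorch; the class name and mask choice are illustrative, and the paper's specific pre-defined patterns may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreDefinedSparseConv(nn.Conv2d):
    """Conv layer with a sparsity pattern fixed before training (sketch)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, density=0.5, **kw):
        super().__init__(in_ch, out_ch, kernel_size, **kw)
        # Draw a binary mask once and freeze it; only unmasked weights matter.
        self.register_buffer(
            "mask", (torch.rand_like(self.weight) < density).float())

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

layer = PreDefinedSparseConv(3, 16)
out = layer(torch.randn(1, 3, 32, 32))   # -> shape (1, 16, 30, 30)
```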

CodedReduce: A Fast and Robust Framework for Gradient Aggregation in Distributed Learning

no code implementations • 6 Feb 2019 • Amirhossein Reisizadeh, Saurav Prakash, Ramtin Pedarsani, Amir Salman Avestimehr

That is, it parallelizes communications over a tree topology for efficient bandwidth utilization, and carefully designs a redundant dataset allocation and coding strategy at the nodes to make the proposed gradient aggregation scheme robust to stragglers.
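
A sketch of the tree-topology aggregation alone, without the redundant data allocation and coding that provide the straggler robustness; the node layout is illustrative.

```python
import numpy as np

def tree_aggregate(node, gradients, children):
    """Each node sums its own gradient with its children's partial aggregates,
    so bandwidth at the root stays flat instead of growing with cluster size."""
    total = gradients[node].copy()
    for child in children.get(node, []):
        total += tree_aggregate(child, gradients, children)
    return total

grads = {n: np.ones(4) for n in range(7)}
tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}     # binary tree rooted at node 0
full_grad = tree_aggregate(0, grads, tree)   # -> array([7., 7., 7., 7.])
```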

Coded Computation over Heterogeneous Clusters

1 code implementation • 21 Jan 2017 • Amirhossein Reisizadeh, Saurav Prakash, Ramtin Pedarsani, Amir Salman Avestimehr

There have been recent results that demonstrate the impact of coding for efficient utilization of computation and storage redundancy to alleviate the effect of stragglers and communication bottlenecks in homogeneous clusters.

Distributed, Parallel, and Cluster Computing • Information Theory
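
A minimal sketch of coded matrix-vector multiplication of this kind, with a random linear code standing in for an MDS code (all its square submatrices are invertible with probability 1); the block and worker counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def encode_blocks(A, n, k):
    """Encode k row-blocks of A into n coded blocks; any k workers' results
    then suffice to decode, so up to n - k stragglers can be ignored."""
    B = np.stack(np.split(A, k))               # (k, m, d) uncoded blocks
    G = rng.normal(size=(n, k))                # generator matrix
    return G, np.einsum('ij,jmd->imd', G, B)   # n coded blocks

def decode(G, coded_results, idx):
    """Recover A @ x from the k fastest workers' coded products."""
    Z = np.linalg.solve(G[idx], np.stack([coded_results[i] for i in idx]))
    return Z.reshape(-1)

A, x = rng.normal(size=(6, 4)), rng.normal(size=4)
G, coded = encode_blocks(A, n=5, k=3)            # 5 workers, 2 stragglers OK
products = {i: coded[i] @ x for i in [0, 2, 4]}  # only 3 workers respond
assert np.allclose(decode(G, products, [0, 2, 4]), A @ x)
```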
