no code implementations • 1 Mar 2024 • Yue Niu, Saurav Prakash, Salman Avestimehr
In particular, ATP barely loses accuracy with only $1/2$ principal keys, and incurs only around a $2\%$ accuracy drop with $1/4$ principal keys.
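A minimal sketch of what computing attention over "principal keys" could look like: keys and queries are projected onto the top-r right singular vectors of the key matrix before scoring. The function name, the scaling choice, and the r = d/2 setting are illustrative assumptions, not the paper's exact ATP procedure.

```python
import numpy as np

def principal_key_attention(Q, K, V, r):
    """Softmax attention with queries/keys projected onto the top-r principal directions of K."""
    _, _, Vt = np.linalg.svd(K, full_matrices=False)
    P = Vt[:r].T                       # (d, r) projection onto principal key directions
    Qr, Kr = Q @ P, K @ P              # reduced-dimension queries and keys
    scores = Qr @ Kr.T / np.sqrt(r)    # (n, n) attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 32, 16
    Q, K, V = rng.normal(size=(3, n, d))
    full = principal_key_attention(Q, K, V, r=d)       # full-rank reference
    half = principal_key_attention(Q, K, V, r=d // 2)  # "1/2 principal keys"
    print(np.linalg.norm(full - half) / np.linalg.norm(full))
```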
no code implementations • 5 Dec 2023 • Yue Niu, Ramy E. Ali, Saurav Prakash, Salman Avestimehr
The main part flows into a small model while the residuals are offloaded to a large model.
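A hedged sketch of the split described above: a low-rank reconstruction of the input serves as the "main part" routed to a small model, and the leftover residual is offloaded to a large model. The rank-k SVD split and the two linear stand-in models are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def split_main_residual(X, k):
    """Return (main, residual), where main is the best rank-k approximation of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    main = (U[:, :k] * s[:k]) @ Vt[:k]
    return main, X - main

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 32))                 # a batch of activations
small_model = rng.normal(size=(32, 8)) * 0.1  # placeholder for the small model
large_model = rng.normal(size=(32, 8))        # placeholder for the large model

main, residual = split_main_residual(X, k=4)
out = main @ small_model + residual @ large_model  # combine the two paths
print(out.shape)
```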
2 code implementations • 14 Aug 2023 • Saurav Prakash, Jin Sima, Chao Pan, Eli Chien, Olgica Milenkovic
Third, we compute the complexity of the convex hulls in hyperbolic spaces to assess the extent of data leakage; at the same time, to limit the communication cost of the hulls, we propose a new quantization method for the Poincaré disc coupled with Reed-Solomon-like encoding.
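For concreteness, a minimal sketch of one simple way to quantize points in the Poincaré disc, by rounding polar coordinates to a fixed grid. The bit budgets and the grid rule are assumptions; the paper's quantizer and its coupling with Reed-Solomon-like encoding are not reproduced here.

```python
import numpy as np

def quantize_poincare(points, radial_bits=4, angular_bits=6, eps=1e-6):
    """Map 2-D points inside the unit disc to (radius_index, angle_index) integer pairs."""
    r = np.clip(np.linalg.norm(points, axis=-1), 0.0, 1.0 - eps)
    theta = np.arctan2(points[:, 1], points[:, 0])            # angle in (-pi, pi]
    r_idx = np.round(r * (2**radial_bits - 1)).astype(int)
    t_idx = np.round((theta + np.pi) / (2 * np.pi) * (2**angular_bits - 1)).astype(int)
    return r_idx, t_idx

def dequantize_poincare(r_idx, t_idx, radial_bits=4, angular_bits=6):
    r = r_idx / (2**radial_bits - 1)
    theta = t_idx / (2**angular_bits - 1) * 2 * np.pi - np.pi
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)

rng = np.random.default_rng(0)
pts = rng.uniform(-0.7, 0.7, size=(5, 2))   # points well inside the disc
print(dequantize_poincare(*quantize_poincare(pts)))
```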
1 code implementation • 28 Oct 2022 • Chao Pan, Jin Sima, Saurav Prakash, Vishal Rana, Olgica Milenkovic
We introduce, for the first time, the problem of machine unlearning for FC, and propose an efficient unlearning mechanism for a customized secure FC framework.
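A hedged sketch of the unlearning idea in a plain (non-secure) clustering setting: if centroids are maintained as per-cluster running sums and counts, a departing client's contribution can be subtracted without retraining from scratch. The k-means-style setup below is an illustrative simplification, not the paper's secure federated clustering framework.

```python
import numpy as np

def assign(points, centroids):
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

def unlearn_client(sums, counts, client_points, centroids):
    """Remove one client's points from the aggregate sums/counts and refresh centroids."""
    labels = assign(client_points, centroids)
    for k in range(len(sums)):
        mask = labels == k
        sums[k] -= client_points[mask].sum(axis=0)
        counts[k] -= mask.sum()
    return sums / np.maximum(counts, 1)[:, None]   # updated centroids

rng = np.random.default_rng(0)
data = [rng.normal(size=(20, 2)) + c for c in ([0, 0], [5, 5], [0, 5])]  # 3 clients
all_pts = np.concatenate(data)
centroids = np.array([[0., 0.], [5., 5.], [0., 5.]])
labels = assign(all_pts, centroids)
sums = np.array([all_pts[labels == k].sum(axis=0) for k in range(3)])
counts = np.array([(labels == k).sum() for k in range(3)])
print(unlearn_client(sums, counts, data[2], centroids))   # forget client 2
```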
1 code implementation • 28 Aug 2022 • Yue Niu, Saurav Prakash, Souvik Kundu, Sunwoo Lee, Salman Avestimehr
However, the heterogeneous-client setting requires some clients to train the full model, which conflicts with the resource-constrained setting, while the latter breaks privacy promises in FL when intermediate representations or labels are shared with the server.
1 code implementation • 27 Aug 2022 • Sara Babakniya, Souvik Kundu, Saurav Prakash, Yue Niu, Salman Avestimehr
A possible solution to this problem is to utilize off-the-shelf sparse learning algorithms at the clients to meet their resource budget.
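As a concrete illustration of such an off-the-shelf sparse-learning step, the sketch below applies plain global magnitude pruning on a client-side layer to hit a target density. The density value and the single-layer setup are assumptions for illustration only.

```python
import numpy as np

def magnitude_prune(weights, density):
    """Keep the largest-magnitude fraction `density` of entries and zero out the rest."""
    flat = np.abs(weights).ravel()
    k = max(1, int(density * flat.size))
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))                    # one client-side layer
W_sparse, mask = magnitude_prune(W, density=0.1)   # keep ~10% of the weights
print(mask.mean())
```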
no code implementations • 16 Sep 2021 • Ahmed Roushdy Elkordy, Saurav Prakash, A. Salman Avestimehr
As our main contribution, we propose Basil, a fast and computationally efficient Byzantine-robust algorithm for decentralized training systems, which leverages a novel sequential, memory-assisted, and performance-based criterion for training over a logical ring while filtering out Byzantine users.
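A heavily simplified, hedged sketch of the sequential ring idea: each honest node keeps the last S models it received, selects the one with the lowest loss on its own data (the performance-based filter), takes a local gradient step, and passes the result to the next node on the ring. The linear-regression nodes, the loss-based rule, and the Byzantine behavior below are illustrative assumptions, not the full Basil protocol.

```python
import numpy as np

def local_loss(w, X, y):
    return np.mean((X @ w - y) ** 2)

def ring_round(nodes, memory, S=3, lr=0.05):
    """One pass around the logical ring; `memory` holds recently circulated models."""
    for X, y, byzantine in nodes:
        if byzantine:                                   # faulty node injects garbage
            memory.append(np.random.default_rng().normal(size=memory[-1].shape) * 10)
            continue
        candidates = memory[-S:]                        # memory-assisted: last S models
        w = min(candidates, key=lambda m: local_loss(m, X, y))
        grad = 2 * X.T @ (X @ w - y) / len(y)
        memory.append(w - lr * grad)                    # pass updated model onward
    return memory[-1]

rng = np.random.default_rng(0)
w_true = rng.normal(size=4)
nodes = []
for i in range(6):                                      # 6 nodes, node 2 is Byzantine
    X = rng.normal(size=(50, 4))
    nodes.append((X, X @ w_true + 0.01 * rng.normal(size=50), i == 2))
memory = [np.zeros(4)]
for _ in range(10):
    w = ring_round(nodes, memory)
print(np.linalg.norm(w - w_true))
```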
no code implementations • 12 Nov 2020 • Saurav Prakash, Sagar Dhakal, Mustafa Akdeniz, Yair Yona, Shilpa Talwar, Salman Avestimehr, Nageen Himayat
To minimize the epoch deadline time at the MEC server, we provide a tractable approach for finding the amount of coding redundancy and the number of local data points that a client processes during training, by exploiting the statistical properties of compute and communication delays.
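A hedged sketch of the trade-off being optimized: split a client's points between local processing and coded redundancy handled by the MEC server, and pick the split that minimizes the expected epoch time under a delay model. The shifted-exponential delay parameters, the Monte Carlo search, and the simple local+coded split are illustrative assumptions; the paper derives the allocation from the delay statistics rather than by search.

```python
import numpy as np

def epoch_time(local_pts, coded_pts, rng, trials=2000,
               c_shift=2e-3, c_rate=4e-3, s_shift=5e-4, s_rate=1e-3, comm=0.05):
    """Expected time until both the client and the MEC server finish their shares."""
    client = local_pts * c_shift + rng.exponential(local_pts * c_rate, trials) + comm
    server = coded_pts * s_shift + rng.exponential(coded_pts * s_rate, trials)
    return float(np.mean(np.maximum(client, server)))

rng = np.random.default_rng(0)
total = 1000                                   # points per client
best = min(range(50, total, 50),
           key=lambda local: epoch_time(local, total - local, rng))
print("local points:", best, "coded redundancy:", total - best)
```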
no code implementations • 15 Oct 2020 • Saurav Prakash, Amir Salman Avestimehr
To implement our novel per-client criterion for fault mitigation, DiverseFL creates a TEE-based secure enclave within the FL server. In addition to performing secure aggregation for the global model update step, the enclave securely receives a small representative sample of local data from each client, only once before training, and computes guiding updates for each participating client during training.
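A minimal sketch of the per-client check the guiding updates enable: compare each client's submitted update against the guiding update computed on that client's representative sample, and drop updates whose direction disagrees. The cosine-similarity test, the 0.0 threshold, and the plain mean standing in for secure aggregation are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def filter_updates(client_updates, guiding_updates, threshold=0.0):
    """Keep only updates that roughly agree with their client's guiding update."""
    kept = [u for u, g in zip(client_updates, guiding_updates) if cosine(u, g) > threshold]
    return np.mean(kept, axis=0)          # a plain mean stands in for secure aggregation

rng = np.random.default_rng(0)
true_dir = rng.normal(size=10)
honest = [true_dir + 0.1 * rng.normal(size=10) for _ in range(4)]
faulty = [-true_dir]                                   # a sign-flipping client
guides = [true_dir + 0.2 * rng.normal(size=10) for _ in range(5)]
print(cosine(filter_updates(honest + faulty, guides), true_dir))
```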
Distributed, Parallel, and Cluster Computing
no code implementations • 7 Jul 2020 • Saurav Prakash, Sagar Dhakal, Mustafa Akdeniz, A. Salman Avestimehr, Nageen Himayat
Federated Learning (FL) is an exciting new paradigm that enables training a global model from data generated locally at the client nodes, without moving client data to a centralized server.
no code implementations • 23 Jun 2020 • Hariom Jani, Jheng-Cyuan Lin, Jiahao Chen, Jack Harrison, Francesco Maccherozzi, Jonathan Schad, Saurav Prakash, Chang-Beom Eom, A. Ariando, T. Venkatesan, Paolo G. Radaelli
In the quest for post-CMOS technologies, ferromagnetic skyrmions and their anti-particles have shown great promise as topologically protected solitonic information carriers in memory-in-logic or neuromorphic devices.
Materials Science
no code implementations • 21 Feb 2020 • Sagar Dhakal, Saurav Prakash, Yair Yona, Shilpa Talwar, Nageen Himayat
Here, model parameters are computed locally by each client device and exchanged with a central server, which aggregates the local models for a global view, without requiring sharing of training data.
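A minimal sketch of that exchange, assuming least-squares "local training" and a data-size-weighted average at the server (FedAvg-style aggregation); the linear clients and weighting rule are placeholders, not the paper's specific scheme.

```python
import numpy as np

def local_model(X, y):
    """Least-squares fit standing in for local training on one client."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def server_aggregate(models, sizes):
    """Data-size-weighted average of the local models."""
    weights = np.asarray(sizes) / np.sum(sizes)
    return sum(w * m for w, m in zip(weights, models))

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
clients = [rng.normal(size=(n, 5)) for n in (30, 60, 90)]
models = [local_model(X, X @ w_true + 0.01 * rng.normal(size=len(X))) for X in clients]
w_global = server_aggregate(models, sizes=[len(X) for X in clients])
print(np.linalg.norm(w_global - w_true))
```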
no code implementations • 2 Oct 2019 • Souvik Kundu, Saurav Prakash, Haleh Akrami, Peter A. Beerel, Keith M. Chugg
To explore the potential of this approach, we experimented with two widely used datasets, CIFAR-10 and Tiny ImageNet, in sparse variants of both the ResNet18 and VGG16 architectures.
no code implementations • 6 Feb 2019 • Amirhossein Reisizadeh, Saurav Prakash, Ramtin Pedarsani, Amir Salman Avestimehr
That is, it parallelizes the communications over a tree topology, leading to efficient bandwidth utilization, and carefully designs a redundant dataset allocation and coding strategy at the nodes to make the proposed gradient aggregation scheme robust to stragglers.
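A toy, hedged sketch of those two ingredients: per-shard gradients summed up a binary tree instead of all-to-one, and a simple 2x repetition code so each shard's gradient comes from whichever replica finishes first. The simulated delays, the repetition code, and the tiny regression problem are illustrative assumptions, not the paper's actual coding scheme.

```python
import numpy as np

def shard_grad(X, y, w):
    return 2 * X.T @ (X @ w - y) / len(y)

def tree_aggregate(grads):
    """Pairwise (binary-tree) summation of the per-shard gradients."""
    while len(grads) > 1:
        grads = [sum(grads[i:i + 2]) for i in range(0, len(grads), 2)]
    return grads[0]

rng = np.random.default_rng(0)
w_true, w = rng.normal(size=4), np.zeros(4)
shards = []
for _ in range(4):
    X = rng.normal(size=(25, 4))
    shards.append((X, X @ w_true))

grad = tree_aggregate([shard_grad(X, y, w) for X, y in shards]) / len(shards)

# Straggler effect of 2x replication: wait only for the faster replica of each shard.
delays = rng.exponential(1.0, size=(4, 2))
print("uncoded finish time:   ", delays[:, 0].max())
print("replicated finish time:", delays.min(axis=1).max())
print("gradient norm:", np.linalg.norm(grad))
```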
1 code implementation • 21 Jan 2017 • Amirhossein Reisizadeh, Saurav Prakash, Ramtin Pedarsani, Amir Salman Avestimehr
Recent results have demonstrated how coding can efficiently exploit computation and storage redundancy to alleviate the effect of stragglers and communication bottlenecks in homogeneous clusters.
Distributed, Parallel, and Cluster Computing; Information Theory