Search Results for author: Nirupam Gupta

Found 21 papers, 2 papers with code

On the Relevance of Byzantine Robust Optimization Against Data Poisoning

no code implementations • 1 May 2024 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot

It has been argued that the seemingly weaker threat model where only workers' local datasets get poisoned is more reasonable.

Autonomous Driving • Data Poisoning

Tackling Byzantine Clients in Federated Learning

no code implementations • 20 Feb 2024 • Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Geovani Rizk, Sasha Voitovych

The natural approach to robustify FL against adversarial clients is to replace the simple averaging operation at the server in the standard $\mathsf{FedAvg}$ algorithm by a \emph{robust averaging rule}.
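One widely used robust averaging rule is the coordinate-wise trimmed mean. The sketch below is a generic illustration of swapping the server's plain mean for such a rule; it is not necessarily the specific rules analyzed in this paper, and the tolerated number of faulty clients `f` and the toy updates are placeholder assumptions.

```python
import numpy as np

def trimmed_mean(updates, f):
    """Coordinate-wise trimmed mean: in every coordinate, drop the f largest
    and f smallest values reported by clients, then average the rest."""
    updates = np.stack(updates)                  # shape: (num_clients, dim)
    sorted_updates = np.sort(updates, axis=0)    # sort each coordinate independently
    kept = sorted_updates[f:len(updates) - f]    # discard the extremes per coordinate
    return kept.mean(axis=0)

# Hypothetical server step: replace plain averaging in a FedAvg-style round.
client_updates = [np.random.randn(10) for _ in range(7)]   # toy client updates
robust_update = trimmed_mean(client_updates, f=2)
```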

Federated Learning • Image Classification

On the Privacy-Robustness-Utility Trilemma in Distributed Learning

no code implementations • 9 Feb 2023 • Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

The latter amortizes the dependence on the dimension in the error (caused by adversarial workers and DP), while being agnostic to the statistical properties of the data.

Fixing by Mixing: A Recipe for Optimal Byzantine ML under Heterogeneity

no code implementations • 3 Feb 2023 • Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

Byzantine machine learning (ML) aims to ensure the resilience of distributed learning algorithms to misbehaving (or Byzantine) machines.

Impact of Redundancy on Resilience in Distributed Optimization and Learning

no code implementations • 16 Nov 2022 • Shuo Liu, Nirupam Gupta, Nitin H. Vaidya

In particular, we introduce the notion of $(f, r; \epsilon)$-resilience to characterize how well the true solution is approximated in the presence of up to $f$ Byzantine faulty agents, and up to $r$ slow agents (or stragglers) -- smaller $\epsilon$ represents a better approximation.

Distributed Optimization

On the Impossible Safety of Large AI Models

no code implementations • 30 Sep 2022 • El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, Sébastien Rouault, John Stephan

Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase impressive performance.

Privacy Preserving

Robust Collaborative Learning with Linear Gradient Overhead

1 code implementation • 22 Sep 2022 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê Nguyên Hoang, Rafael Pinot, John Stephan

We present MoNNA, a new algorithm that (a) is provably robust under standard assumptions and (b) has a gradient computation overhead that is linear in the fraction of faulty machines, which is conjectured to be tight.

Image Classification

Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums

no code implementations • 24 May 2022 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

We present \emph{RESAM (RESilient Averaging of Momentums)}, a unified framework that makes it simple to establish optimal Byzantine resilience, relying only on standard machine learning assumptions.
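As a rough sketch of the momentum-then-resilient-averaging pattern the title describes: each worker smooths its stochastic gradients with momentum, and the server aggregates the reported momentums with a robust rule before taking a step. The coordinate-wise median, the constant `beta`, and the toy round below are illustrative assumptions, not the paper's prescribed choices.

```python
import numpy as np

def worker_momentum(prev_momentum, gradient, beta=0.9):
    """Each worker smooths its stochastic gradients with momentum before
    reporting to the server (beta is a placeholder constant)."""
    return beta * prev_momentum + (1.0 - beta) * gradient

def resilient_average(momentums):
    """Placeholder resilient aggregation: coordinate-wise median.
    RESAM is a framework, so other robust rules could be plugged in here."""
    return np.median(np.stack(momentums), axis=0)

# Toy round: workers send momentums, server aggregates and takes one step.
dim, workers, lr = 5, 9, 0.1
momentums = [worker_momentum(np.zeros(dim), np.random.randn(dim)) for _ in range(workers)]
model = np.zeros(dim) - lr * resilient_average(momentums)
```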

BIG-bench Machine Learning • Distributed Optimization

Utilizing Redundancy in Cost Functions for Resilience in Distributed Optimization and Learning

no code implementations • 21 Oct 2021 • Shuo Liu, Nirupam Gupta, Nitin Vaidya

We demonstrate, both theoretically and empirically, the merits of our proposed redundancy model in improving the robustness of DGD against asynchronous and Byzantine agents, and we extend these results to distributed stochastic gradient descent (D-SGD) for robust distributed machine learning in the presence of asynchronous and Byzantine agents.

Distributed Optimization

Byzantine Fault-Tolerance in Federated Local SGD under 2f-Redundancy

no code implementations • 26 Aug 2021 • Nirupam Gupta, Thinh T. Doan, Nitin Vaidya

However, we do not know of any such techniques for the federated local SGD algorithm -- a more commonly used method for federated machine learning.
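For context, the 2f-redundancy property named in the title is commonly stated along the following lines in this line of work; this is a hedged paraphrase, and the paper's exact formulation may differ.

```latex
% 2f-redundancy (paraphrase): any subset of at least n - 2f of the n local costs
% already determines the same set of global minimizers as the full aggregate.
\[
\forall\, S \subseteq \{1,\dots,n\} \text{ with } |S| \ge n - 2f:\qquad
\arg\min_{x}\, \sum_{i \in S} Q_i(x) \;=\; \arg\min_{x}\, \sum_{i=1}^{n} Q_i(x).
\]
```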

On Accelerating Distributed Convex Optimizations

no code implementations • 19 Aug 2021 • Kushal Chakrabarti, Nirupam Gupta, Nikhil Chopra

In this problem, the system comprises multiple agents, each with a set of local data points and an associated local cost function.

Differential Privacy and Byzantine Resilience in SGD: Do They Add Up?

1 code implementation • 16 Feb 2021 • Rachid Guerraoui, Nirupam Gupta, Rafaël Pinot, Sébastien Rouault, John Stephan

This paper addresses the problem of combining Byzantine resilience with privacy in machine learning (ML).

Byzantine Fault-Tolerance in Peer-to-Peer Distributed Gradient-Descent

no code implementations • 28 Jan 2021 • Nirupam Gupta, Nitin H. Vaidya

We consider the problem of Byzantine fault-tolerance in the peer-to-peer (P2P) distributed gradient-descent method -- a prominent algorithm for distributed optimization in a P2P system.

Distributed Optimization • Distributed, Parallel, and Cluster Computing

Accelerating Distributed SGD for Linear Regression using Iterative Pre-Conditioning

no code implementations • 15 Nov 2020 • Kushal Chakrabarti, Nirupam Gupta, Nikhil Chopra

The recently proposed Iteratively Pre-conditioned Gradient-descent (IPG) method has been shown to converge faster than other existing distributed algorithms that solve this problem.
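To illustrate the iterative pre-conditioning idea on a least-squares cost, the sketch below maintains a pre-conditioner matrix K that is itself refined every iteration and is applied to the gradient before stepping. The particular update rules, step sizes, and the centralized (single-machine) setting here are simplifying assumptions rather than the distributed protocol of the paper.

```python
import numpy as np

def ipg_least_squares(A, b, alpha=0.1, beta=0.1, iters=500):
    """Sketch of iteratively pre-conditioned gradient descent on ||Ax - b||^2.
    K is driven toward the inverse Hessian while x is updated with K @ grad."""
    n, d = A.shape
    H = A.T @ A / n                          # Hessian of the least-squares cost
    x, K = np.zeros(d), np.eye(d)
    for _ in range(iters):
        grad = A.T @ (A @ x - b) / n
        K = K - beta * (H @ K - np.eye(d))   # pre-conditioner refinement (placeholder rule)
        x = x - alpha * K @ grad             # pre-conditioned gradient step
    return x

A = np.random.randn(50, 3)
b = A @ np.array([1.0, -2.0, 0.5])
print(ipg_least_squares(A, b))               # should approach [1, -2, 0.5]
```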

regression

Byzantine Fault-Tolerant Distributed Machine Learning Using Stochastic Gradient Descent (SGD) and Norm-Based Comparative Gradient Elimination (CGE)

no code implementations • 11 Aug 2020 • Nirupam Gupta, Shuo Liu, Nitin H. Vaidya

We show that the CGE gradient-filter guarantees fault-tolerance against a bounded fraction of Byzantine agents under standard stochastic assumptions, and is computationally simpler than many existing gradient-filters such as multi-KRUM, geometric median-of-means, and the spectral filters.
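The norm-based CGE rule is simple enough to sketch from its description: sort the received gradients by Euclidean norm, eliminate the f largest, and average the remaining n - f. The sketch below follows that description; the toy round (number of agents, dimension, and the faulty vectors) is an illustrative assumption.

```python
import numpy as np

def cge_filter(gradients, f):
    """CGE gradient-filter as described: keep the n - f gradients with the
    smallest Euclidean norms and average them."""
    norms = [np.linalg.norm(g) for g in gradients]
    keep = np.argsort(norms)[:len(gradients) - f]   # indices of the n - f smallest norms
    return np.mean([gradients[i] for i in keep], axis=0)

# Toy round with 8 agents, up to 2 Byzantine (sending arbitrarily large vectors).
grads = [np.random.randn(4) for _ in range(6)] + [1e6 * np.ones(4), -1e6 * np.ones(4)]
aggregate = cge_filter(grads, f=2)
```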

Iterative Pre-Conditioning for Expediting the Gradient-Descent Method: The Distributed Linear Least-Squares Problem

no code implementations • 6 Aug 2020 • Kushal Chakrabarti, Nirupam Gupta, Nikhil Chopra

In this problem, the system comprises multiple agents connected to a server, each holding a set of local data points.

Iterative Pre-Conditioning to Expedite the Gradient-Descent Method

no code implementations • 13 Mar 2020 • Kushal Chakrabarti, Nirupam Gupta, Nikhil Chopra

In this problem, there are multiple agents in the system, and each agent only knows its local cost function.

Distributed Optimization

Randomized Reactive Redundancy for Byzantine Fault-Tolerance in Parallelized Learning

no code implementations • 19 Dec 2019 • Nirupam Gupta, Nitin H. Vaidya

The coding schemes use the concept of reactive redundancy for isolating Byzantine workers that eventually send faulty information.

Byzantine Fault Tolerant Distributed Linear Regression

no code implementations • 20 Mar 2019 • Nirupam Gupta, Nitin H. Vaidya

This paper considers the problem of Byzantine fault tolerance in distributed linear regression in a multi-agent system.

Distributed Optimization • regression
