Search Results for author: Anit Kumar Sahu

Found 15 papers, 9 papers with code

Partial Model Averaging in Federated Learning: Performance Guarantees and Benefits

no code implementations • 11 Jan 2022 • Sunwoo Lee, Anit Kumar Sahu, Chaoyang He, Salman Avestimehr

We propose a partial model averaging framework that mitigates the model discrepancy issue in Federated Learning.

Federated Learning
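
The snippet doesn't spell out the paper's exact averaging schedule; as a rough illustration of the general idea (synchronizing only a subset of parameters each round; all names here are hypothetical), a sketch might look like:

```python
import numpy as np

def partial_average(client_params, mask):
    """Average only the masked coordinates across clients; the rest stay local."""
    avg = np.stack(client_params).mean(axis=0)   # coordinate-wise average
    synced = []
    for p in client_params:
        q = p.copy()
        q[mask] = avg[mask]                      # overwrite only the synced entries
        synced.append(q)
    return synced

# toy round: 3 clients, synchronize the first half of the parameters
clients = [np.random.randn(8) for _ in range(3)]
mask = np.arange(8) < 4
clients = partial_average(clients, mask)
```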

You Only Query Once: Effective Black Box Adversarial Attacks with Minimal Repeated Queries

no code implementations • 29 Jan 2021 • Devin Willmott, Anit Kumar Sahu, Fatemeh Sheikholeslami, Filipe Condessa, Zico Kolter

In this work, we instead show that it is possible to craft (universal) adversarial perturbations in the black-box setting by querying a sequence of different images only once.

Multiplicative Filter Networks

1 code implementation • ICLR 2021 • Rizal Fathony, Anit Kumar Sahu, Devin Willmott, J Zico Kolter

Although deep networks are typically used to approximate functions over high dimensional inputs, recent work has increased interest in neural networks as function approximators for low-dimensional-but-complex functions, such as representing images as a function of pixel coordinates, solving differential equations, or representing signed distance fields or neural radiance fields.
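
The MFN architecture repeatedly multiplies a linear transform of the hidden state elementwise by a nonlinear filter of the raw input (e.g., a sinusoid), so the output is effectively a sum of sinusoids of the input coordinates. A minimal PyTorch sketch of a Fourier-filter variant (layer sizes and initialization are arbitrary here, not the paper's):

```python
import torch
import torch.nn as nn

class FourierMFN(nn.Module):
    """Minimal multiplicative filter network with sinusoidal (Fourier) filters."""

    def __init__(self, in_dim, hidden, out_dim, n_layers=3):
        super().__init__()
        self.filters = nn.ModuleList(nn.Linear(in_dim, hidden) for _ in range(n_layers))
        self.linears = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(n_layers - 1))
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, x):
        z = torch.sin(self.filters[0](x))
        for lin, filt in zip(self.linears, self.filters[1:]):
            z = lin(z) * torch.sin(filt(x))      # multiplicative filtering
        return self.out(z)

# e.g. fit an RGB image as a function of (row, col) pixel coordinates
net = FourierMFN(in_dim=2, hidden=64, out_dim=3)
rgb = net(torch.rand(1024, 2))                   # (1024, 3)
```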

Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial Attacks

1 code implementation • 8 Oct 2020 • Anit Kumar Sahu, Satya Narayan Shukla, J. Zico Kolter

We study the problem of generating adversarial examples in a black-box setting, where we only have access to a zeroth order oracle, providing us with loss function evaluations.
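
A zeroth-order oracle permits only loss evaluations, so gradients must be estimated by finite differences along random directions. The sketch below uses plain i.i.d. Gaussian directions; the paper's contribution is to draw the perturbations from a Gaussian Markov random field so they respect pixel correlations, which is not reproduced here.

```python
import numpy as np

def zo_grad_estimate(loss, x, n_samples=20, sigma=0.01, seed=0):
    """Two-point zeroth-order gradient estimate from loss evaluations only."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        g += (loss(x + sigma * u) - loss(x - sigma * u)) / (2 * sigma) * u
    return g / n_samples

# toy oracle: quadratic loss whose true gradient is 2 * (x - 1)
loss = lambda z: float(np.sum((z - 1.0) ** 2))
print(zo_grad_estimate(loss, np.zeros(5)))       # approx. -2 in every coordinate
```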

Simple and Efficient Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes

1 code implementation • 13 Jul 2020 • Satya Narayan Shukla, Anit Kumar Sahu, Devin Willmott, J. Zico Kolter

We focus on the problem of black-box adversarial attacks, where the aim is to generate adversarial examples for deep learning models solely based on information limited to the output label (hard label) of a queried data input.

FedDANE: A Federated Newton-Type Method

1 code implementation • 7 Jan 2020 • Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith

Federated learning aims to jointly learn statistical models over massively distributed remote devices.

Distributed Optimization · Federated Learning
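
FedDANE adapts the DANE subproblem to the federated setting: each sampled client approximately solves its local objective augmented with a gradient-correction term and a proximal term. A rough sketch, assuming a plain gradient-descent inner solver and with all names hypothetical:

```python
import numpy as np

def feddane_local_update(grad_fk, grad_global, w_t, mu=0.1, lr=0.01, steps=10):
    """Approximately solve the DANE-style subproblem
        min_w  F_k(w) + <g_glob - grad F_k(w_t), w> + (mu / 2) * ||w - w_t||^2;
    FedDANE estimates g_glob from a sampled subset of devices."""
    w = w_t.copy()
    correction = grad_global - grad_fk(w_t)      # fixed gradient-correction term
    for _ in range(steps):
        w -= lr * (grad_fk(w) + correction + mu * (w - w_t))
    return w

# toy client: local optimum at w = 3, global gradient pulling toward w = 1
grad_fk = lambda w: 2 * (w - 3.0)
w_t = np.zeros(1)
print(feddane_local_update(grad_fk, grad_global=2 * (w_t - 1.0), w_t=w_t))
```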

Black-box Adversarial Attacks with Bayesian Optimization

1 code implementation • 30 Sep 2019 • Satya Narayan Shukla, Anit Kumar Sahu, Devin Willmott, J. Zico Kolter

We focus on the problem of black-box adversarial attacks, where the aim is to generate adversarial examples using information limited to loss function evaluations of input-output pairs.
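
Since only loss evaluations are available, the attack can treat the loss as a black-box function of the perturbation and maximize it with Bayesian optimization. A toy GP-plus-expected-improvement loop is sketched below; the paper additionally searches in a reduced-dimension space and upsamples the perturbation, which is omitted here.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def bayes_opt_attack(loss, dim, n_init=5, n_iter=20, bound=0.05, seed=0):
    """Maximize a black-box attack loss over perturbations in [-bound, bound]^dim
    using a GP surrogate and the expected-improvement acquisition."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-bound, bound, size=(n_init, dim))
    y = np.array([loss(x) for x in X])
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.uniform(-bound, bound, size=(256, dim))
        mu, sd = gp.predict(cand, return_std=True)
        imp = mu - y.max()
        z = imp / np.maximum(sd, 1e-9)
        ei = imp * norm.cdf(z) + sd * norm.pdf(z)    # expected improvement
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, loss(x_next))
    return X[np.argmax(y)], y.max()

# toy black-box "loss" peaked at delta = 0.03 in every coordinate
best_delta, best_val = bayes_opt_attack(lambda d: -float(np.sum((d - 0.03) ** 2)), dim=4)
```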

Noisy Batch Active Learning with Deterministic Annealing

1 code implementation • 27 Sep 2019 • Gaurav Gupta, Anit Kumar Sahu, Wan-Yi Lin

We study the problem of training machine learning models incrementally with batches of samples annotated with noisy oracles.

Active Learning · Denoising · +1

Learning in Confusion: Batch Active Learning with Noisy Oracle

no code implementations • 25 Sep 2019 • Gaurav Gupta, Anit Kumar Sahu, Wan-Yi Lin

We study the problem of training machine learning models incrementally using active learning with access to imperfect or noisy oracles.

Active Learning · Denoising · +1

Federated Learning: Challenges, Methods, and Future Directions

1 code implementation • 21 Aug 2019 • Tian Li, Anit Kumar Sahu, Ameet Talwalkar, Virginia Smith

Federated learning involves training statistical models over remote devices or siloed data centers, such as mobile phones or hospitals, while keeping data localized.

Distributed Optimization Federated Learning

MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling

3 code implementations • 23 May 2019 • Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, Soummya Kar

This paper studies the problem of error-runtime trade-off, typically encountered in decentralized training based on stochastic gradient descent (SGD) using a given network.
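
MATCHA decomposes the communication graph into matchings (sets of vertex-disjoint links that can run in parallel) and activates a random subset of them each iteration, trading communication time against error. A bare-bones sketch of the decomposition-and-sampling step; MATCHA actually optimizes per-matching activation probabilities under a communication budget, which is omitted here:

```python
import random

def matching_decomposition(edges):
    """Greedily split an edge list into matchings (sets of vertex-disjoint links)."""
    matchings = []
    for e in edges:
        for m in matchings:
            if all(e[0] not in f and e[1] not in f for f in m):
                m.append(e)
                break
        else:
            matchings.append([e])
    return matchings

def sample_topology(matchings, p=0.5):
    """Activate each matching independently; the union is this round's topology."""
    return [e for m in matchings if random.random() < p for e in m]

# 5-node ring; an odd cycle decomposes into 3 matchings
ring = [(i, (i + 1) % 5) for i in range(5)]
print(sample_topology(matching_decomposition(ring)))
```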

Distributed stochastic optimization with gradient tracking over strongly-connected networks

no code implementations • 18 Mar 2019 • Ran Xin, Anit Kumar Sahu, Usman A. Khan, Soummya Kar

In this paper, we study distributed stochastic optimization to minimize a sum of smooth and strongly-convex local cost functions over a network of agents, communicating over a strongly-connected graph.

Stochastic Optimization
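
In gradient tracking, each agent maintains, besides its iterate, an auxiliary variable that tracks the network-average gradient. A minimal sketch for the undirected case with a doubly stochastic mixing matrix; the paper treats general strongly-connected digraphs with stochastic gradients:

```python
import numpy as np

def gradient_tracking(grad, X, W, alpha, iters):
    """Gradient-tracking iteration:
        X <- W X - alpha * Y
        Y <- W Y + grad(X_new) - grad(X_old)
    so each row of Y tracks the network-average gradient."""
    G = grad(X)
    Y = G.copy()
    for _ in range(iters):
        X_new = W @ X - alpha * Y
        G_new = grad(X_new)
        Y = W @ Y + G_new - G
        X, G = X_new, G_new
    return X

# toy: 3 agents with local costs (x - a_i)^2; their sum is minimized at mean(a) = 2
a = np.array([[0.0], [1.0], [5.0]])
W = np.full((3, 3), 1.0 / 3.0)                   # doubly stochastic mixing matrix
print(gradient_tracking(lambda X: 2 * (X - a), np.zeros((3, 1)), W, 0.1, 300))
```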

Federated Optimization in Heterogeneous Networks

8 code implementations • 14 Dec 2018 • Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith

Theoretically, we provide convergence guarantees for our framework when learning over data from non-identical distributions (statistical heterogeneity), and while adhering to device-level systems constraints by allowing each participating device to perform a variable amount of work (systems heterogeneity).

Distributed Optimization · Federated Learning
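
This is the FedProx paper: each selected device inexactly minimizes its local loss plus a proximal term that keeps the local model near the current global model, which is what lets devices safely perform different amounts of local work. A minimal sketch of the local subproblem with a gradient-descent inner loop:

```python
import numpy as np

def fedprox_local_update(grad_fk, w_global, mu=0.1, lr=0.01, local_steps=10):
    """Inexactly solve the FedProx local subproblem
        min_w  F_k(w) + (mu / 2) * ||w - w_global||^2;
    the proximal term limits client drift, and local_steps can vary
    per device (systems heterogeneity)."""
    w = w_global.copy()
    for _ in range(local_steps):
        w -= lr * (grad_fk(w) + mu * (w - w_global))
    return w

# toy client whose local optimum (w = 3) differs from the global model (w = 0)
print(fedprox_local_update(lambda w: 2 * (w - 3.0), np.zeros(1)))
```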

Managing App Install Ad Campaigns in RTB: A Q-Learning Approach

no code implementations • 11 Nov 2018 • Anit Kumar Sahu, Shaunak Mishra, Narayan Bhamidipati

The policy based on this state space is trained on past decisions and outcomes via a novel Q-learning algorithm which accounts for the delay in install notifications.

Q-Learning
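
The snippet doesn't detail the delay-aware algorithm; for context, the standard tabular Q-learning update it builds on is shown below, with the delayed-install correction (the paper's novelty) deliberately left out:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Standard tabular Q-learning update (temporal difference on the max)."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# toy: 4 campaign states x 2 actions (e.g. raise / lower the bid)
Q = np.zeros((4, 2))
q_update(Q, s=0, a=1, r=1.0, s_next=2)           # one observed install reward
```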

Towards Gradient Free and Projection Free Stochastic Optimization

no code implementations • 8 Oct 2018 • Anit Kumar Sahu, Manzil Zaheer, Soummya Kar

This paper focuses on the problem of constrained stochastic optimization.

Stochastic Optimization
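
"Gradient free" points to zeroth-order gradient estimates and "projection free" to Frank-Wolfe style updates that call a linear minimization oracle instead of projecting. A toy sketch combining the two on an l1-ball; the paper's actual estimators and step rules may differ:

```python
import numpy as np

def zo_frank_wolfe(loss, x0, radius=1.0, iters=200, sigma=1e-4, seed=0):
    """Zeroth-order Frank-Wolfe on an l1-ball: two-point gradient estimates
    (gradient free) plus a linear minimization oracle whose solution is a
    signed vertex of the ball (projection free)."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for t in range(1, iters + 1):
        u = rng.standard_normal(x.shape)
        g = (loss(x + sigma * u) - loss(x - sigma * u)) / (2 * sigma) * u
        v = np.zeros_like(x)
        i = np.argmax(np.abs(g))
        v[i] = -radius * np.sign(g[i])           # LMO over the l1-ball
        x += (2.0 / (t + 2)) * (v - x)           # classic FW step size
    return x

# toy: minimize ||x - 0.3||^2 over the unit l1-ball (the optimum is feasible)
print(zo_frank_wolfe(lambda z: float(np.sum((z - 0.3) ** 2)), np.zeros(3)))
```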
