Search Results for author: Raj Kumar Maity

Found 8 papers, 0 papers with code

Shaping Proto-Value Functions via Rewards

no code implementations · 27 Nov 2015 · Chandrashekar Lakshmi Narayanan, Raj Kumar Maity, Shalabh Bhatnagar

In this paper, we combine task-dependent reward shaping with task-independent proto-value functions to obtain reward-dependent proto-value functions (RPVFs).
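
A minimal sketch of this combination, assuming the standard construction of proto-value functions as the smoothest eigenvectors of the state-graph Laplacian; folding the reward in by appending a normalized reward feature is an illustrative simplification, not the paper's exact RPVF construction:

```python
import numpy as np

def rpvf_basis(adjacency, reward, k=3):
    """Proto-value functions (smooth Laplacian eigenvectors) plus a reward feature."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    _, eigvecs = np.linalg.eigh(laplacian)         # eigenvalues in ascending order
    pvf = eigvecs[:, :k]                           # task-independent basis
    r = reward / (np.linalg.norm(reward) + 1e-12)  # task-dependent direction
    return np.column_stack([pvf, r])

# Toy 4-state chain with a goal reward at the last state.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
phi = rpvf_basis(A, reward=np.array([0.0, 0.0, 0.0, 1.0]), k=2)
print(phi.shape)  # (4, 3): two proto-value functions plus the reward feature
```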

Robust Gradient Descent via Moment Encoding with LDPC Codes

no code implementations · 22 May 2018 · Raj Kumar Maity, Ankit Singh Rawat, Arya Mazumdar

We instead propose to encode the second moment of the data with a low-density parity-check (LDPC) code.

Distributed Computing
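
A toy erasure-coding sketch of the moment-encoding idea for least squares: the second moment A^T A is split into blocks, the blocks are combined by a small sparse binary generator matrix, and the master recovers the exact gradient from a subset of workers. The paper uses LDPC codes with efficient peeling-style decoding; this sketch decodes a tiny instance by least squares, and the hand-picked generator is a stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 40, 5, 4                  # k data blocks across m = 6 coded workers
A, b = rng.normal(size=(n, d)), rng.normal(size=n)

# Partition the second moment A^T A into k block contributions.
blocks = np.array_split(A, k)
M = np.stack([Ai.T @ Ai for Ai in blocks])            # (k, d, d)

# Systematic sparse binary generator: 4 data rows plus 2 parity rows.
G = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
              [1, 1, 0, 0], [0, 0, 1, 1]], dtype=float)
coded = np.einsum('mk,kij->mij', G, M)                # coded moment blocks

x = rng.normal(size=d)
S = [0, 1, 2, 4, 5]                                   # worker 3 straggles
responses = np.stack([coded[j] @ x for j in S])

# Master recovers each block product M_i @ x from the surviving workers.
Y, *_ = np.linalg.lstsq(G[S], responses, rcond=None)
grad = Y.sum(axis=0) - A.T @ b                        # gradient of 0.5||Ax - b||^2
assert np.allclose(grad, A.T @ (A @ x) - A.T @ b)
```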

High Dimensional Discrete Integration over the Hypergrid

no code implementations · 29 Jun 2018 · Raj Kumar Maity, Arya Mazumdar, Soumyabrata Pal

Recently, Ermon et al. (2013) pioneered a way to practically compute approximations to large-scale counting and discrete integration problems using random hashes.

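A brute-force toy of the random-hash idea (in the spirit of Ermon et al.'s WISH algorithm): adding i random XOR constraints and taking the maximum weight inside the constrained set yields quantile information that, combined across i, estimates the discrete integral. Exhaustive search stands in for the MAP oracle here, so this only runs for tiny n:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 8                                         # tiny, so brute force is feasible
states = list(itertools.product((0, 1), repeat=n))
w = {x: float(rng.random() ** 3) for x in states}
Z = sum(w.values())                           # ground-truth discrete integral

def max_under_xor(i):
    """Max weight subject to i random XOR (parity) constraints A x = c (mod 2)."""
    A = rng.integers(0, 2, size=(i, n))
    c = rng.integers(0, 2, size=i)
    feasible = (x for x in states if np.array_equal((A @ np.array(x)) % 2, c))
    return max((w[x] for x in feasible), default=0.0)

# WISH-style estimator: medians of constrained maxima, scaled geometrically.
T = 7
M = [np.median([max_under_xor(i) for _ in range(T)]) for i in range(n + 1)]
est = M[0] + sum(M[i + 1] * 2 ** i for i in range(n))
print(f"log Z = {np.log(Z):.2f}, log estimate = {np.log(est):.2f}")
```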

vqSGD: Vector Quantized Stochastic Gradient Descent

no code implementations · 18 Nov 2019 · Venkata Gandikota, Daniel Kane, Raj Kumar Maity, Arya Mazumdar

In this work, we present vqSGD (Vector-Quantized Stochastic Gradient Descent), a family of vector quantization schemes that provides an asymptotic reduction in the communication cost with convergence guarantees in first-order distributed optimization.

Distributed Optimization · Quantization
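
A minimal sketch of an unbiased vector-quantization scheme in the spirit of vqSGD: the gradient is written as a convex combination of the 2d corners of a scaled l1-ball and a single corner is sampled, so only an index and a sign (O(log d) bits) need to be communicated. The scaling and corner set here are simplified assumptions, not the paper's exact point sets:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(v):
    """Sample a corner of the scaled l1-ball; assumes ||v||_2 <= 1."""
    d = len(v)
    s = np.sqrt(d)                    # hull radius: ||v||_1 <= sqrt(d)||v||_2 <= s
    probs = np.abs(v) / s             # mass on corner  sign(v_i) * s * e_i
    slack = 1.0 - probs.sum()         # leftover mass, split over +/- s * e_0
    p = np.concatenate([probs, [slack / 2, slack / 2]])
    j = int(rng.choice(d + 2, p=p))
    if j < d:
        return j, float(np.sign(v[j]))
    return 0, 1.0 if j == d else -1.0  # the two slack corners cancel in expectation

def dequantize(j, sign, d):
    e = np.zeros(d)
    e[j] = sign * np.sqrt(d)
    return e

d = 16
v = rng.normal(size=d)
v /= 2 * np.linalg.norm(v)            # ensure ||v||_2 <= 1
avg = np.mean([dequantize(*quantize(v), d) for _ in range(100000)], axis=0)
print(np.max(np.abs(avg - v)))        # small: the quantizer is unbiased
```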

Communication-Efficient and Byzantine-Robust Distributed Learning with Error Feedback

no code implementations · 21 Nov 2019 · Avishek Ghosh, Raj Kumar Maity, Swanand Kadhe, Arya Mazumdar, Kannan Ramchandran

Moreover, we analyze the compressed gradient descent algorithm with error feedback (proposed in \cite{errorfeed}) in a distributed setting and in the presence of Byzantine worker machines.
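
A skeleton of the two ingredients, under simplified assumptions: each honest worker compresses its gradient plus an error-feedback memory (here a top-k compressor), and the server aggregates with a coordinate-wise trimmed mean to blunt Byzantine messages. The paper's compressor, aggregator, and guarantees differ in detail:

```python
import numpy as np

def topk(v, k):
    """Keep only the k largest-magnitude coordinates (a common compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def trimmed_mean(rows, trim):
    """Drop the `trim` largest and smallest values per coordinate, then average."""
    s = np.sort(rows, axis=0)
    return s[trim:len(rows) - trim].mean(axis=0)

rng = np.random.default_rng(0)
d, workers, byzantine = 10, 8, 1
x = rng.normal(size=d)                  # minimize the toy objective 0.5||x||^2
memory = np.zeros((workers, d))         # per-worker error-feedback memory

for step in range(300):
    msgs = []
    for w in range(workers):
        if w < byzantine:
            msgs.append(10 * rng.normal(size=d))     # adversarial message
            continue
        g = x + 0.1 * rng.normal(size=d)             # stochastic gradient
        c = topk(g + memory[w], k=3)                 # compress gradient + memory
        memory[w] = g + memory[w] - c                # remember what was not sent
        msgs.append(c)
    x = x - 0.2 * trimmed_mean(np.array(msgs), trim=byzantine)

print(np.linalg.norm(x))                # driven close to 0 despite the attacker
```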

Distributed Newton Can Communicate Less and Resist Byzantine Workers

no code implementations · NeurIPS 2020 · Avishek Ghosh, Raj Kumar Maity, Arya Mazumdar

We develop a distributed second-order optimization algorithm that is communication-efficient as well as robust against Byzantine failures of the worker machines.

Distributed Optimization
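
A minimal sketch of the send-one-Newton-direction idea: each worker communicates a single d-dimensional vector H_i^{-1} g_i per round (second-order progress at first-order communication cost), and the server discards the directions with the largest norms before averaging, as a simple Byzantine filter. The toy loss, filter, and step size are illustrative, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d, workers, byzantine = 5, 10, 2
x_true = rng.normal(size=d)
data = []
for _ in range(workers):
    A = rng.normal(size=(50, d))                    # local least-squares data
    data.append((A, A @ x_true + 0.01 * rng.normal(size=50)))

x = np.zeros(d)
for it in range(10):
    dirs = []
    for w, (A, b) in enumerate(data):
        if w < byzantine:
            dirs.append(100 * rng.normal(size=d))   # Byzantine direction
            continue
        g = A.T @ (A @ x - b) / len(b)              # local gradient
        H = A.T @ A / len(b)                        # local Hessian
        dirs.append(np.linalg.solve(H, g))          # local Newton direction
    dirs = np.array(dirs)
    keep = np.argsort(np.linalg.norm(dirs, axis=1))[:workers - 2 * byzantine]
    x = x - dirs[keep].mean(axis=0)                 # norm-filtered Newton step

print(np.linalg.norm(x - x_true))                   # near 0 after a few rounds
```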

Estimation of Shortest Path Covariance Matrices

no code implementations · 19 Nov 2020 · Raj Kumar Maity, Cameron Musco

Such matrices generalize Toeplitz and circulant covariance matrices and are widely applied in signal processing applications, where the covariance between two measurements depends on the (shortest path) distance between them in time or space.
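
A small illustration of the matrix class, assuming the covariance decays as a function of graph distance, Cov(x_u, x_v) = a(dist(u, v)). On a path graph this reduces to a Toeplitz matrix; on a cycle it would be circulant (scipy's shortest_path is used here for the distances):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

n = 6
adj = np.diag(np.ones(n - 1), 1)
adj += adj.T                                  # path graph on n nodes
D = shortest_path(adj).astype(int)            # pairwise graph distances
a = 0.5 ** np.arange(n)                       # covariance as a function of distance
C = a[D]                                      # shortest-path covariance matrix
print(C)  # Toeplitz here: entry (i, j) depends only on |i - j|
```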

Escaping Saddle Points in Distributed Newton's Method with Communication Efficiency and Byzantine Resilience

no code implementations · 17 Mar 2021 · Avishek Ghosh, Raj Kumar Maity, Arya Mazumdar, Kannan Ramchandran

Moreover, we validate our theoretical findings with experiments on standard datasets and several types of Byzantine attacks, obtaining a 25% improvement in iteration complexity over first-order methods.

Federated Learning
