Search Results for author: Advait Gadhikar

Found 6 papers, 2 papers with code

Why Random Pruning Is All We Need to Start Sparse

1 code implementation • 5 Oct 2022 • Advait Gadhikar, Sohom Mukherjee, Rebekka Burkholz

Random masks define surprisingly effective sparse neural network models, as has been shown empirically.

Image Classification
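The abstract's claim above can be illustrated with a minimal sketch of random-mask pruning: each weight is independently zeroed with probability equal to the target sparsity. The function name and sparsity level here are illustrative, not taken from the paper.

```python
import numpy as np

def random_prune(weights: np.ndarray, sparsity: float, seed: int = 0) -> np.ndarray:
    """Zero out a uniformly random fraction `sparsity` of the weights.

    Illustrative sketch: each entry is kept independently with
    probability (1 - sparsity).
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(weights.shape) >= sparsity  # True = keep this weight
    return weights * mask

# Prune a toy 4x4 weight matrix to roughly 75% sparsity.
w = np.ones((4, 4))
pruned = random_prune(w, sparsity=0.75)
```

The mask is fixed before training; the surprising empirical finding is that training only the surviving weights can still yield competitive accuracy.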

Dynamical Isometry for Residual Networks

no code implementations • 5 Oct 2022 • Advait Gadhikar, Rebekka Burkholz

We propose a random initialization scheme, RISOTTO, that achieves perfect dynamical isometry for residual networks with ReLU activation functions even for finite depth and width.

Lottery Tickets with Nonzero Biases

no code implementations • 21 Oct 2021 • Jonas Fischer, Advait Gadhikar, Rebekka Burkholz

The strong lottery ticket hypothesis holds the promise that pruning randomly initialized deep neural networks could offer a computationally efficient alternative to deep learning with stochastic gradient descent.

Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation

no code implementations • NeurIPS 2021 • Divyansh Jhunjhunwala, Ankur Mallick, Advait Gadhikar, Swanand Kadhe, Gauri Joshi

We study the problem of estimating at a central server the mean of a set of vectors distributed across several nodes (one vector per node).

Federated Learning
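The setting above can be sketched with the standard top-k sparsified mean baseline: each node transmits only its k largest-magnitude coordinates, and the server averages what it receives. Note this sketch shows only the plain baseline; the paper's contribution, exploiting spatial and temporal correlations across nodes and rounds, is not implemented here, and the function names are illustrative.

```python
import numpy as np

def top_k_sparsify(v: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]  # indices of the k largest |v_i|
    out[idx] = v[idx]
    return out

# Each node sparsifies its local vector; the server averages the sparse messages.
vectors = [np.array([3.0, -0.1, 0.2, -4.0]),
           np.array([1.0, 2.0, -0.3, 0.1])]
estimate = np.mean([top_k_sparsify(v, k=2) for v in vectors], axis=0)
# estimate is [2.0, 1.0, 0.0, -2.0]: small coordinates were dropped before averaging
```

Each node here sends 2 of 4 coordinates, halving communication at the cost of a biased mean estimate, which is the trade-off the paper studies.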

Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning

no code implementations • 8 Feb 2021 • Divyansh Jhunjhunwala, Advait Gadhikar, Gauri Joshi, Yonina C. Eldar

Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning, especially in bandwidth-limited settings and high-dimensional models.

Federated Learning Quantization
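The bottleneck described above is commonly addressed by quantizing each model update before transmission. Below is a minimal sketch of unbiased stochastic uniform quantization, the standard building block such schemes adapt; the adaptive part of the paper (choosing the number of levels per round) is not shown, and the function name and parameters are illustrative.

```python
import numpy as np

def stochastic_quantize(v: np.ndarray, levels: int, seed: int = 0) -> np.ndarray:
    """Unbiased stochastic uniform quantization of v onto `levels` levels.

    Each |v_i| is scaled to [0, levels - 1] and randomly rounded up or down
    so that the quantized vector equals v in expectation.
    """
    rng = np.random.default_rng(seed)
    vmax = np.abs(v).max()
    if vmax == 0:
        return v.copy()
    scaled = np.abs(v) / vmax * (levels - 1)
    low = np.floor(scaled)
    prob = scaled - low                      # round up with this probability
    q = low + (rng.random(v.shape) < prob)
    return np.sign(v) * q * vmax / (levels - 1)

# A client quantizes its update to 5 levels (roughly log2(5) bits per coordinate).
update = np.array([0.5, -1.0, 0.25])
quantized = stochastic_quantize(update, levels=5)
```

Fewer levels mean fewer bits per coordinate but higher quantization variance; adapting `levels` over training rounds is the knob the paper tunes.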
