Search Results for author: Vishwak Srinivasan

Found 6 papers, 1 paper with code

Fast sampling from constrained spaces using the Metropolis-adjusted Mirror Langevin algorithm

1 code implementation 14 Dec 2023 Vishwak Srinivasan, Andre Wibisono, Ashia Wilson

This algorithm adds an accept-reject filter to the Markov chain induced by a single step of the Mirror Langevin algorithm (Zhang et al., 2020), which is a basic discretisation of the Mirror Langevin dynamics.
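The propose-then-filter structure described above maps fairly directly onto code. Below is a minimal one-dimensional sketch of a Metropolis-adjusted Mirror Langevin step, assuming the user supplies the target log-density, the gradient of the potential $f = -\log \pi$, and the mirror map derivatives (`grad_phi`, `hess_phi`, `grad_phi_inv` are illustrative names); it is not the authors' released implementation, which handles the multivariate, constrained setting.

```python
import numpy as np

def gaussian_logpdf(z, mean, var):
    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (z - mean) ** 2 / var

def mamla_step(x, log_pi, grad_f, grad_phi, hess_phi, grad_phi_inv, h, rng):
    """One Metropolis-adjusted Mirror Langevin step (1-D sketch).

    log_pi   : log target density, up to an additive constant
    grad_f   : gradient of the potential f = -log pi
    grad_phi / hess_phi / grad_phi_inv : mirror map derivatives and the inverse of grad_phi
    h        : step size
    """
    # Mirror Langevin proposal in the dual space:
    # y' ~ N(grad_phi(x) - h * grad_f(x), 2 h * hess_phi(x))
    mean_fwd, var_fwd = grad_phi(x) - h * grad_f(x), 2 * h * hess_phi(x)
    y_new = mean_fwd + np.sqrt(var_fwd) * rng.standard_normal()
    x_new = grad_phi_inv(y_new)

    # Reverse proposal parameters, needed for the Metropolis-Hastings ratio
    mean_bwd, var_bwd = grad_phi(x_new) - h * grad_f(x_new), 2 * h * hess_phi(x_new)

    # Transition densities on x include the change-of-variables factor |dy/dx| = hess_phi(.)
    log_q_fwd = gaussian_logpdf(y_new, mean_fwd, var_fwd) + np.log(hess_phi(x_new))
    log_q_bwd = gaussian_logpdf(grad_phi(x), mean_bwd, var_bwd) + np.log(hess_phi(x))

    # Accept-reject filter on the proposed point
    log_accept = log_pi(x_new) - log_pi(x) + log_q_bwd - log_q_fwd
    return x_new if np.log(rng.uniform()) < log_accept else x
```

For instance, for a target supported on $(0, \infty)$ one could take the mirror map $\phi(x) = -\log x$, so that `grad_phi(x) = -1/x`, `hess_phi(x) = 1/x**2`, and `grad_phi_inv(y) = -1/y`.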

Sample Efficient Reinforcement Learning In Continuous State Spaces: A Perspective Beyond Linearity

no code implementations 15 Jun 2021 Dhruv Malik, Aldo Pacchiano, Vishwak Srinivasan, Yuanzhi Li

Reinforcement learning (RL) is empirically successful in complex nonlinear Markov decision processes (MDPs) with continuous state spaces.

Atari Games reinforcement-learning +1

Efficient Estimators for Heavy-Tailed Machine Learning

no code implementations 1 Jan 2021 Vishwak Srinivasan, Adarsh Prasad, Sivaraman Balakrishnan, Pradeep Kumar Ravikumar

Dramatic improvements in data collection technologies have made it possible to procure massive amounts of unstructured and heterogeneous data.

BIG-bench Machine Learning

On Learning Ising Models under Huber's Contamination Model

no code implementations NeurIPS 2020 Adarsh Prasad, Vishwak Srinivasan, Sivaraman Balakrishnan, Pradeep Ravikumar

We study the problem of learning Ising models in a setting where some of the samples from the underlying distribution can be arbitrarily corrupted.
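For reference, the corruption setting named in the title is Huber's contamination model, in which each sample is drawn from a mixture of the Ising model of interest and an arbitrary outlier distribution: $X_1, \dots, X_n \overset{\text{i.i.d.}}{\sim} (1-\epsilon)\, P_\theta + \epsilon\, Q$, where $P_\theta$ is the uncontaminated Ising model, $Q$ is an arbitrary (possibly adversarial) distribution, and $\epsilon$ is the contamination fraction.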

On the Analysis of Trajectories of Gradient Descent in the Optimization of Deep Neural Networks

no code implementations 21 Jul 2018 Adepu Ravi Sankar, Vishwak Srinivasan, Vineeth N. Balasubramanian

Theoretical analysis of the error landscape of deep neural networks has garnered significant interest in recent years.

ADINE: An Adaptive Momentum Method for Stochastic Gradient Descent

no code implementations 20 Dec 2017 Vishwak Srinivasan, Adepu Ravi Sankar, Vineeth N. Balasubramanian

Using this motivation, we propose our method $\textit{ADINE}$, which weighs previous updates more heavily (by setting the momentum parameter $> 1$); we evaluate the proposed algorithm on deep neural networks and show that $\textit{ADINE}$ helps the learning algorithm converge much faster without compromising generalization error.
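To illustrate the mechanism mentioned above, here is a minimal sketch of a heavy-ball momentum update in which the momentum coefficient is allowed to exceed 1; the adaptive rule ADINE uses to decide when this is done is not reproduced here, and the function name and default values are illustrative.

```python
import numpy as np

def heavy_ball_step(w, v, grad, lr=1e-2, momentum=1.1):
    """One heavy-ball (momentum) update; momentum > 1 weighs past updates more.

    This is a generic sketch of the mechanism described above, not ADINE's
    adaptive rule for choosing the momentum coefficient.
    """
    v = momentum * v - lr * grad   # past updates are amplified when momentum > 1
    return w + v, v                # new parameters and new velocity

# Toy usage on f(w) = 0.5 * ||w||^2, whose gradient is w
w, v = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(5):
    w, v = heavy_ball_step(w, v, grad=w)
```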
