Search Results for author: Adepu Ravi Sankar

Found 5 papers, 1 paper with code

A Deeper Look at the Hessian Eigenspectrum of Deep Neural Networks and its Applications to Regularization

no code implementations • 7 Dec 2020 • Adepu Ravi Sankar, Yash Khasbage, Rahul Vigneswaran, Vineeth N Balasubramanian

In this work, we propose a layerwise loss landscape analysis in which the loss surface at every layer is studied independently, and we also examine how each layer's surface correlates with the overall loss surface.
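
Since no code accompanies this paper, the following is a minimal sketch (our own illustration, not the authors' method) of one ingredient such a layerwise analysis needs: estimating the dominant Hessian eigenvalue of the loss restricted to a single layer's parameters, via power iteration on Hessian-vector products in PyTorch. The function name `layer_top_eigenvalue` and its hyperparameters are hypothetical.

```python
import torch

def layer_top_eigenvalue(loss, params, iters=50):
    # Differentiate once with a live graph so we can differentiate again.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = 0.0
    for _ in range(iters):
        norm = torch.sqrt(sum((x * x).sum() for x in v))
        v = [x / norm for x in v]
        # Hessian-vector product: gradient of (grad . v) w.r.t. params.
        hv = torch.autograd.grad(
            sum((g * x).sum() for g, x in zip(grads, v)),
            params, retain_graph=True)
        # Rayleigh quotient: current estimate of the dominant eigenvalue.
        eig = sum((h * x).sum() for h, x in zip(hv, v)).item()
        v = [h.detach() for h in hv]
    return eig

# Probe each parameter tensor (layer weights/biases) independently.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
x, y = torch.randn(64, 10), torch.randn(64, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
for name, p in model.named_parameters():
    print(name, layer_top_eigenvalue(loss, [p]))
```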

DANTE: Deep AlterNations for Training nEural networks

no code implementations • 1 Feb 2019 • Vaibhav B Sinha, Sneha Kudugunta, Adepu Ravi Sankar, Surya Teja Chavali, Purushottam Kar, Vineeth N. Balasubramanian

We present DANTE, a novel method for training neural networks using the alternating minimization principle.
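
As a toy illustration of the alternating minimization principle only (not DANTE's actual subproblem solvers, which the paper develops), one can freeze one layer of a two-layer network while taking gradient steps on the other, then swap. All names and hyperparameters below are our own:

```python
import torch

torch.manual_seed(0)
x, y = torch.randn(256, 10), torch.randn(256, 1)

# Two-layer network with explicit weight tensors so each alternation
# is visible: optimize one layer while the other stays frozen.
w1 = (torch.randn(10, 32) * 0.1).requires_grad_()
w2 = (torch.randn(32, 1) * 0.1).requires_grad_()

def loss_fn():
    return ((torch.relu(x @ w1) @ w2 - y) ** 2).mean()

opt_hidden = torch.optim.SGD([w1], lr=1e-2)
opt_output = torch.optim.SGD([w2], lr=1e-2)

for epoch in range(20):
    # Phase 1: approximately minimize over the output layer, hidden fixed.
    for _ in range(10):
        opt_output.zero_grad()
        loss_fn().backward()
        opt_output.step()
    # Phase 2: approximately minimize over the hidden layer, output fixed.
    for _ in range(10):
        opt_hidden.zero_grad()
        loss_fn().backward()
        opt_hidden.step()
    print(epoch, loss_fn().item())
```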

On the Analysis of Trajectories of Gradient Descent in the Optimization of Deep Neural Networks

no code implementations • 21 Jul 2018 • Adepu Ravi Sankar, Vishwak Srinivasan, Vineeth N. Balasubramanian

Theoretical analysis of the error landscape of deep neural networks has garnered significant interest in recent years.

ADINE: An Adaptive Momentum Method for Stochastic Gradient Descent

no code implementations • 20 Dec 2017 • Vishwak Srinivasan, Adepu Ravi Sankar, Vineeth N. Balasubramanian

Using this motivation, we propose our method $\textit{ADINE}$, which weighs previous updates more heavily (by setting the momentum parameter $> 1$); we evaluate the proposed algorithm on deep neural networks and show that $\textit{ADINE}$ helps the learning algorithm converge much faster without compromising on generalization error.
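
To make the momentum-parameter point concrete, here is a sketch of a heavy-ball update in which the coefficient is allowed to exceed 1; the adaptive schedule that gives ADINE its name follows the paper's rule and is not reproduced here. `momentum_step` and its defaults are hypothetical:

```python
import torch

def momentum_step(params, velocities, lr=1e-3, beta=1.05):
    # Heavy-ball update: v <- beta * v - lr * grad ; p <- p + v.
    # Classical momentum keeps beta < 1; beta > 1 weighs the accumulated
    # past updates more, as the abstract describes.
    with torch.no_grad():
        for p, v in zip(params, velocities):
            v.mul_(beta).add_(p.grad, alpha=-lr)
            p.add_(v)

# Usage: one step on a toy quadratic.
params = [torch.randn(5, requires_grad=True)]
velocities = [torch.zeros_like(p) for p in params]
loss = (params[0] ** 2).sum()
loss.backward()
momentum_step(params, velocities)
```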

Are Saddles Good Enough for Deep Learning?

1 code implementation • 7 Jun 2017 • Adepu Ravi Sankar, Vineeth N. Balasubramanian

In this work, we propose a new hypothesis, based on recent theoretical findings and empirical studies: deep neural network models actually converge to saddle points with high degeneracy.
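
As a rough illustration of how such degeneracy can be probed (our sketch, not the paper's experiments), one can assemble the full Hessian of a tiny network and count near-zero and negative eigenvalues, where many near-zero eigenvalues suggest a degenerate critical point and negative ones indicate saddle directions:

```python
import torch

torch.manual_seed(0)
x, y = torch.randn(128, 4), torch.randn(128, 1)
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))
params = list(model.parameters())  # 49 parameters in total

# Train first if you want to probe a (near-)converged point; the spectrum
# computation below works at any point in parameter space.
loss = torch.nn.functional.mse_loss(model(x), y)
grads = torch.autograd.grad(loss, params, create_graph=True)
flat = torch.cat([g.reshape(-1) for g in grads])

# Row i of the Hessian is the gradient of the i-th gradient entry.
rows = []
for g in flat:
    row = torch.autograd.grad(g, params, retain_graph=True)
    rows.append(torch.cat([r.reshape(-1) for r in row]))
H = torch.stack(rows)

eigs = torch.linalg.eigvalsh(H)  # uses the symmetric structure of H
tol = 1e-3
print(f"{(eigs.abs() < tol).sum().item()} near-zero, "
      f"{(eigs < -tol).sum().item()} negative, "
      f"out of {eigs.numel()} eigenvalues")
```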
