no code implementations • 7 Dec 2020 • Adepu Ravi Sankar, Yash Khasbage, Rahul Vigneswaran, Vineeth N Balasubramanian
In this work, we propose a layerwise loss landscape analysis in which the loss surface at every layer is studied independently, and we examine how each layer's surface correlates with the overall loss surface.
no code implementations • 1 Feb 2019 • Vaibhav B Sinha, Sneha Kudugunta, Adepu Ravi Sankar, Surya Teja Chavali, Purushottam Kar, Vineeth N. Balasubramanian
We present DANTE, a novel method for training neural networks using the alternating minimization principle.
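The abstract names the alternating minimization principle without giving DANTE's actual updates. As a generic, hypothetical illustration of that principle (not the authors' method), here is alternating least squares for a two-layer linear model; the function name `alternating_min`, its signature, and the least-squares update rules are all my assumptions:

```python
import numpy as np

def alternating_min(X, Y, hidden=4, iters=50, seed=0):
    """Fit Y ~ A @ B @ X by alternating minimization: fix B and solve
    for A in closed form, then fix A and solve for B. This is a generic
    sketch of the principle, not DANTE's per-layer procedure."""
    rng = np.random.default_rng(seed)
    d_out, d_in = Y.shape[0], X.shape[0]
    A = rng.standard_normal((d_out, hidden))
    B = rng.standard_normal((hidden, d_in))
    for _ in range(iters):
        H = B @ X
        A = Y @ np.linalg.pinv(H)                      # least-squares update for A, B fixed
        B = np.linalg.pinv(A) @ Y @ np.linalg.pinv(X)  # least-squares update for B, A fixed
    return A, B
```

Each sub-problem is convex even though the joint problem is not, which is the usual appeal of alternating schemes for network training.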
no code implementations • 21 Jul 2018 • Adepu Ravi Sankar, Vishwak Srinivasan, Vineeth N. Balasubramanian
Theoretical analysis of the error landscape of deep neural networks has garnered significant interest in recent years.
no code implementations • 20 Dec 2017 • Vishwak Srinivasan, Adepu Ravi Sankar, Vineeth N. Balasubramanian
Using this motivation, we propose our method $\textit{ADINE}$, which weighs previous updates more heavily by allowing the momentum parameter to exceed $1$. We evaluate the proposed algorithm on deep neural networks and show that $\textit{ADINE}$ helps the learning algorithm converge much faster without compromising generalization error.
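The abstract does not give ADINE's exact update rule, so as a hedged sketch of the stated idea (weighing previous updates more via a momentum parameter above $1$), here is classical momentum SGD; the name `momentum_sgd` and all parameter choices are assumptions, not the authors' algorithm:

```python
import numpy as np

def momentum_sgd(grad, x0, lr=0.01, beta=0.9, steps=100):
    """Classical momentum: v <- beta * v - lr * grad(x); x <- x + v.
    Setting beta > 1 weighs previous updates more heavily (the idea
    motivating ADINE). Note a *fixed* beta > 1 is eventually unstable,
    so this only illustrates the mechanism; ADINE presumably controls
    the momentum adaptively."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        v = beta * v - lr * grad(x)
        x = x + v
    return x
```

On a simple quadratic, a few steps with `beta` slightly above 1 make larger initial progress than plain gradient descent (`beta=0`), consistent with the faster-convergence claim above.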
1 code implementation • 7 Jun 2017 • Adepu Ravi Sankar, Vineeth N. Balasubramanian
However, in this work, we propose a new hypothesis, based on recent theoretical findings and empirical studies, that deep neural network models actually converge to highly degenerate saddle points.