Search Results for author: Tejas Prashanth

Found 2 papers, 1 paper with code

LALR: Theoretical and Experimental validation of Lipschitz Adaptive Learning Rate in Regression and Neural Networks

no code implementations · 19 May 2020 · Snehanshu Saha, Tejas Prashanth, Suraj Aralihalli, Sumedh Basarkod, T. S. B. Sudarshan, Soma S. Dhavala

We propose a theoretical framework for an adaptive learning rate policy for the Mean Absolute Error and Quantile loss functions, and evaluate its effectiveness on regression tasks.
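The abstract names the two loss functions but does not give their formulas. For reference, the standard definitions (textbook forms, not taken from the paper) can be written as:

```python
import numpy as np

def mae_loss(y_true, y_pred):
    # Mean Absolute Error: mean of |y - y_hat|
    return np.mean(np.abs(y_true - y_pred))

def quantile_loss(y_true, y_pred, tau=0.5):
    # Pinball (quantile) loss: penalizes under- and over-prediction
    # asymmetrically by the quantile level tau in (0, 1).
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))
```

At tau = 0.5 the pinball loss reduces to half the MAE, which is why the two losses are often treated together in robust regression.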

Tasks: Regression

LipschitzLR: Using theoretically computed adaptive learning rates for fast convergence

5 code implementations · 20 Feb 2019 · Rahul Yedida, Snehanshu Saha, Tejas Prashanth

In this paper, we propose a novel method to compute the learning rate for training deep neural networks with stochastic gradient descent.
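The abstract does not spell out how the learning rate is computed; the paper's title suggests it is derived from a Lipschitz constant of the loss. A generic Lipschitz-style heuristic, which may differ from the paper's exact derivation, estimates a local constant L from two gradient evaluations and sets the step size to 1/L (the names `grad_fn`, `w1`, `w2` are illustrative, not from the paper):

```python
import numpy as np

def estimate_lipschitz_lr(grad_fn, w1, w2, eps=1e-12):
    # Local Lipschitz estimate of the gradient between two points:
    #   L ~= ||g(w1) - g(w2)|| / ||w1 - w2||
    # then use eta = 1/L as a theoretically motivated step size.
    g1, g2 = grad_fn(w1), grad_fn(w2)
    L = np.linalg.norm(g1 - g2) / (np.linalg.norm(w1 - w2) + eps)
    return 1.0 / max(L, eps)

# Example on a quadratic f(w) = 0.5 * a * w**2, whose gradient a*w
# has exact Lipschitz constant a, so the estimate gives eta = 1/a.
a = 4.0
grad = lambda w: a * w
lr = estimate_lipschitz_lr(grad, np.array([1.0]), np.array([2.0]))
```

For a quadratic this recovers the classical 1/L step size of gradient descent; the appeal of such schemes is that the rate comes from the loss geometry rather than manual tuning.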

Tasks: Handwritten Digit Recognition, Object Detection
