1 code implementation • 21 Jul 2021 • Deep Patel, P. S. Sastry
Deep Neural Networks, often owing to overparameterization, have been shown to be capable of exactly memorizing even randomly labelled data.
1 code implementation • 29 Jun 2021 • Deep Patel, P. S. Sastry
Deep Neural Networks (DNNs) have been shown to be susceptible to memorization or overfitting in the presence of noisily-labelled data.
Ranked #38 on Image Classification on Clothing1M
no code implementations • 22 Apr 2019 • Kulin Shah, P. S. Sastry, Naresh Manwani
In this paper, we propose a novel mixture-of-experts architecture for learning polyhedral classifiers.
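A minimal sketch of the prediction side of such a model, assuming each "expert" is a single hyperplane and a point is labelled positive only when every expert agrees (i.e. the positive region is an intersection of halfspaces); the gating and learning procedure from the paper are omitted, and the weights here are fixed by hand for illustration:

```python
import numpy as np

def polyhedral_predict(X, W, b):
    """Label +1 iff x lies inside the polyhedron {x : W @ x + b > 0}."""
    margins = X @ W.T + b            # (n_samples, n_experts) margins
    return np.where(margins.min(axis=1) > 0, 1, -1)

# The unit square (0,1)^2 as an intersection of four halfspaces:
W = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.array([0., 1., 0., 1.])
X = np.array([[0.5, 0.5], [1.5, 0.5], [-0.2, 0.3]])
print(polyhedral_predict(X, W, b))   # [ 1 -1 -1]: inside, outside, outside
```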
no code implementations • 1 Apr 2019 • Soumyajit Mitra, P. S. Sastry
By considering text documents as temporal sequences of words, the data mining algorithm can find a set of characteristic episodes for all the training data as a whole.
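A toy illustration of the underlying primitive, assuming a "serial episode" is an ordered tuple of words and counting its non-overlapped occurrences in a document treated as a word sequence (the actual mining algorithm additionally uses frequency thresholds and occurrence windows):

```python
# Count non-overlapped occurrences of a serial episode (ordered word tuple)
# in a document viewed as a temporal sequence of words.
def count_serial_episode(words, episode):
    count, i = 0, 0
    for w in words:
        if w == episode[i]:
            i += 1                   # advance to the next episode event
            if i == len(episode):
                count += 1           # full occurrence found
                i = 0                # restart (non-overlapped counting)
    return count

doc = "the cat sat on the mat near the cat".split()
print(count_serial_episode(doc, ("the", "cat")))  # 2
```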
no code implementations • 25 Oct 2018 • Vidyadhar Upadhya, P. S. Sastry
Learning RBMs using standard algorithms such as CD(k) involves gradient descent on the negative log-likelihood.
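A compact sketch of one CD-1 gradient estimate for a small binary RBM, with biases omitted for brevity (the variable names and sizes here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 4
W = 0.01 * rng.standard_normal((n_vis, n_hid))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_gradient(v0, W):
    """One CD-1 estimate of the negative log-likelihood gradient w.r.t. W."""
    ph0 = sigmoid(v0 @ W)                     # P(h=1 | v0): positive phase
    h0 = (rng.random(ph0.shape) < ph0) * 1.0  # sample hidden units
    pv1 = sigmoid(h0 @ W.T)                   # one-step reconstruction
    ph1 = sigmoid(pv1 @ W)
    # (data term) minus (model term), negated for descent on -log-likelihood
    return -(v0[:, None] * ph0[None, :] - pv1[:, None] * ph1[None, :])

v = rng.integers(0, 2, n_vis).astype(float)
grad = cd1_gradient(v, W)
W -= 0.1 * grad        # one gradient-descent step using the CD-1 estimate
print(grad.shape)      # (6, 4)
```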
1 code implementation • 27 Dec 2017 • Aritra Ghosh, Himanshu Kumar, P. S. Sastry
For binary classification there exist theoretical results on loss functions that are robust to label noise.
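One such result extends to the multi-class case: a loss is robust to symmetric label noise if it is "symmetric", i.e. summing it over all class labels gives the same constant for any prediction. The sketch below checks this numerically, showing that the mean absolute error satisfies the condition while categorical cross-entropy does not:

```python
import numpy as np

def mae(p, y):      # p: predicted class probabilities, y: true class index
    e = np.zeros_like(p); e[y] = 1.0
    return np.abs(p - e).sum()

def cce(p, y):
    return -np.log(p[y])

for p in (np.array([0.7, 0.2, 0.1]), np.array([0.3, 0.3, 0.4])):
    print(sum(mae(p, y) for y in range(3)),   # constant: 2(K-1) = 4 for K=3
          sum(cce(p, y) for y in range(3)))   # varies with the prediction p
```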
no code implementations • 21 Sep 2017 • Vidyadhar Upadhya, P. S. Sastry
By exploiting the property that the RBM log-likelihood function is the difference of convex functions, we formulate a stochastic variant of the difference of convex functions (DC) programming to minimize the negative log-likelihood.
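The outer loop of DC programming (the concave-convex procedure) can be sketched on a toy scalar problem: to minimize $h(x) = f(x) - g(x)$ with $f, g$ convex, repeatedly linearize $g$ at the current point and solve the resulting convex subproblem. For RBMs each subproblem is solved stochastically; this toy, with $h(x) = x^2 - 2|x|$ and minima at $x = \pm 1$, only illustrates the iteration:

```python
def dc_minimize(x, steps=20):
    """Minimize h(x) = x**2 - 2*abs(x) by the DC / concave-convex procedure."""
    for _ in range(steps):
        s = 1.0 if x >= 0 else -1.0   # subgradient of g(x) = 2|x|
        # Convex subproblem: argmin_x x**2 - 2*s*x, solved by 2x - 2s = 0.
        x = s
    return x

print(dc_minimize(0.3))    # 1.0
print(dc_minimize(-5.0))   # -1.0
```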
no code implementations • 20 May 2016 • Aritra Ghosh, Naresh Manwani, P. S. Sastry
In most practical problems of classifier learning, the training data suffers from label noise.
no code implementations • 8 Oct 2015 • Vidyadhar Upadhya, P. S. Sastry
Restricted Boltzmann Machines (RBMs) can be used either as classifiers or as generative models.
no code implementations • 14 Mar 2014 • Aritra Ghosh, Naresh Manwani, P. S. Sastry
Through extensive empirical studies, we show that risk minimization under the $0$-$1$ loss, the sigmoid loss, and the ramp loss is much more robust to label noise than the SVM algorithm.
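The losses being compared can be written in terms of the margin $m = y f(x)$. Unlike the hinge loss used by SVMs, the sigmoid and ramp losses are bounded, so a single badly mislabelled point cannot contribute an arbitrarily large penalty:

```python
import numpy as np

def zero_one(m):               return (m <= 0).astype(float)
def sigmoid_loss(m, beta=2.0): return 1.0 / (1.0 + np.exp(beta * m))
def ramp(m):                   return np.clip(1.0 - m, 0.0, 2.0)  # truncated hinge
def hinge(m):                  return np.maximum(0.0, 1.0 - m)

m = np.array([-10.0, 0.0, 2.0])  # badly misclassified, boundary, well-classified
print(ramp(m))    # [2. 1. 0.]  bounded even at m = -10
print(hinge(m))   # [11. 1. 0.] grows without bound as m -> -inf
```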
no code implementations • 7 Nov 2012 • Naresh Manwani, P. S. Sastry
In this paper, we present a novel algorithm for piecewise linear regression which can learn continuous as well as discontinuous piecewise linear functions.
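A k-plane-style alternating scheme gives the flavour of such algorithms (this is a generic sketch, not the paper's method): points are assigned to whichever linear piece currently fits them best, then each piece is refit by least squares. Discontinuous targets are handled because pieces need not join at region boundaries. The initialisation below is chosen by hand so the toy run is deterministic; in practice one would use multiple random restarts:

```python
import numpy as np

def kplane_fit(X, y, coefs, iters=10):
    """Alternate point-to-piece assignment and per-piece least-squares refit."""
    Xb = np.hstack([X, np.ones((len(y), 1))])     # append a bias column
    coefs = coefs.copy()
    for _ in range(iters):
        resid = (Xb @ coefs.T - y[:, None]) ** 2  # (n, k) squared errors
        assign = resid.argmin(axis=1)             # best-fitting piece per point
        for j in range(len(coefs)):
            idx = assign == j
            if idx.any():                         # refit piece j on its points
                coefs[j] = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)[0]
    return coefs, assign

# Discontinuous target: y = x for x < 0, y = x + 5 for x >= 0.
X = np.linspace(-1, 1, 100).reshape(-1, 1)
y = X.ravel() + 5.0 * (X.ravel() >= 0)
init = np.array([[1., 0.], [1., 1.]])             # two hand-picked initial lines
coefs, assign = kplane_fit(X, y, init)
Xb = np.hstack([X, np.ones_like(X)])
pred = np.take_along_axis(Xb @ coefs.T, assign[:, None], axis=1).ravel()
print(float(np.abs(pred - y).max()))              # 0.0: both pieces recovered
```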
no code implementations • 8 Jul 2011 • Naresh Manwani, P. S. Sastry
In this paper, we propose a new algorithm for learning polyhedral classifiers, which we call Polyceptron.
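A hedged sketch of perceptron-style training for a polyhedral classifier (an illustrative variant, not the paper's exact Polyceptron update rule): a point is positive only if every hyperplane gives a positive margin, so on a mistaken positive we nudge all violating hyperplanes toward the point, and on a mistaken negative we push away only the hyperplane with the smallest margin:

```python
import numpy as np

def train(X, y, k, epochs=50, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])    # bias column
    W = rng.standard_normal((k, Xb.shape[1]))    # k hyperplanes
    for _ in range(epochs):
        for x, t in zip(Xb, y):
            m = W @ x                            # margin of each hyperplane
            if t == 1 and m.min() <= 0:
                W[m <= 0] += lr * x              # fix every violated face
            elif t == -1 and m.min() > 0:
                W[m.argmin()] -= lr * x          # push the weakest face out
    return W

# Toy data: positive quadrant vs the rest (k = 2 faces suffice).
X = np.array([[0.5, 0.5], [0.8, 0.2], [-0.5, 0.5], [0.5, -0.5], [-0.5, -0.5]])
y = np.array([1, 1, -1, -1, -1])
W = train(X, y, k=2)
print(W.shape)   # (2, 3): two hyperplanes with bias terms
```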