Search Results for author: P. S. Sastry

Found 12 papers, 3 papers with code

Memorization in Deep Neural Networks: Does the Loss Function matter?

1 code implementation • 21 Jul 2021 • Deep Patel, P. S. Sastry

Deep Neural Networks, owing largely to their overparameterization, have been shown to be capable of exactly memorizing even randomly labelled data.

Memorization

Adaptive Sample Selection for Robust Learning under Label Noise

1 code implementation • 29 Jun 2021 • Deep Patel, P. S. Sastry

Deep Neural Networks (DNNs) have been shown to be susceptible to memorization or overfitting in the presence of noisily-labelled data.

Image Classification • Image Classification with Label Noise
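The selection idea behind this line of work can be sketched with a "small-loss" rule: samples with small current loss are more likely to be correctly labelled, so each mini-batch keeps only those below a batch statistic. The mean threshold below is an illustrative assumption, not necessarily the paper's exact adaptive criterion.

```python
import numpy as np

def select_small_loss(losses):
    """Keep indices of samples whose loss is below the batch mean.

    The mean threshold is an illustrative choice; the paper's adaptive
    criterion may use a different batch statistic.
    """
    losses = np.asarray(losses, dtype=float)
    return np.flatnonzero(losses < losses.mean())

# Toy mini-batch: three small-loss (likely clean) samples, two large-loss
# (likely mislabelled) samples.
batch_losses = [0.1, 0.2, 0.15, 2.5, 3.0]
selected = select_small_loss(batch_losses)
print(selected)  # [0 1 2]
```

The training step would then compute gradients only on the selected subset.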

PLUME: Polyhedral Learning Using Mixture of Experts

no code implementations • 22 Apr 2019 • Kulin Shah, P. S. Sastry, Naresh Manwani

In this paper, we propose a novel mixture-of-experts architecture for learning polyhedral classifiers.

Generalization Bounds
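A polyhedral classifier labels a point positive exactly when it lies inside an intersection of halfspaces, i.e. when the minimum over a set of linear discriminants is positive. A minimal sketch of evaluating such a classifier (the hyperplanes below are illustrative, not from the paper):

```python
import numpy as np

def polyhedral_predict(X, W, b):
    """Label +1 iff x satisfies every halfspace w_i . x + b_i > 0,
    i.e. iff the minimum over the linear pieces is positive."""
    scores = X @ W.T + b          # (n_samples, n_hyperplanes)
    return np.where(scores.min(axis=1) > 0, 1, -1)

# The unit square (0,1)^2 as the intersection of four halfspaces:
# x > 0, y > 0, 1 - x > 0, 1 - y > 0.
W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([0.0, 0.0, 1.0, 1.0])

X = np.array([[0.5, 0.5], [1.5, 0.5], [-0.2, 0.3]])
print(polyhedral_predict(X, W, b))  # [ 1 -1 -1]
```

Learning such a classifier amounts to fitting the hyperplanes, which is what the mixture-of-experts formulation addresses.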

Summarizing Event Sequences with Serial Episodes: A Statistical Model and an Application

no code implementations • 1 Apr 2019 • Soumyajit Mitra, P. S. Sastry

By treating text documents as temporal sequences of words, a data mining algorithm can discover a set of characteristic episodes for the training data as a whole.

General Classification • Temporal Sequences
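A serial episode here is an ordered pattern of event types, and modelling rests on counting how often characteristic episodes occur in a sequence. A minimal sketch of the greedy non-overlapped occurrence count (a simplified version of the counting used in frequent-episode mining, not the paper's full statistical model):

```python
def count_nonoverlapped(sequence, episode):
    """Count non-overlapped occurrences of a serial episode (an ordered
    tuple of event types) in an event sequence, greedily left to right."""
    count, pos = 0, 0   # pos tracks progress through the episode
    for event in sequence:
        if event == episode[pos]:
            pos += 1
            if pos == len(episode):
                count += 1
                pos = 0   # restart: occurrences must not overlap
    return count

seq = list("abcabdacb")
print(count_nonoverlapped(seq, ("a", "b")))  # 3
```

For text, the "events" would be words, so an episode captures an ordered word pattern characteristic of a document class.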

Efficient Learning of Restricted Boltzmann Machines Using Covariance Estimates

no code implementations • 25 Oct 2018 • Vidyadhar Upadhya, P. S. Sastry

Learning RBMs using standard algorithms such as CD(k) involves gradient descent on the negative log-likelihood.
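For context, CD(k) approximates the log-likelihood gradient with k steps of Gibbs sampling started from the data. A minimal CD-1 update for a Bernoulli RBM (a standard textbook sketch, not the covariance-based estimator this paper proposes):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.1):
    """One CD-1 step for a Bernoulli RBM: approximate the negative
    log-likelihood gradient with a single Gibbs transition from the data."""
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one step of Gibbs sampling.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Parameter updates (mean over the batch).
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Tiny example: 4 visible units, 3 hidden units, a batch of 5 patterns.
W = 0.01 * rng.standard_normal((4, 3))
b = np.zeros(4)
c = np.zeros(3)
v0 = (rng.random((5, 4)) < 0.5).astype(float)
W, b, c = cd1_update(v0, W, b, c)
print(W.shape, b.shape, c.shape)  # (4, 3) (4,) (3,)
```

The negative-phase statistics are the noisy part of this estimate, which is what motivates variance-reduced alternatives.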

Robust Loss Functions under Label Noise for Deep Neural Networks

1 code implementation • 27 Dec 2017 • Aritra Ghosh, Himanshu Kumar, P. S. Sastry

For binary classification there exist theoretical results on loss functions that are robust to label noise.

Binary Classification • Classification
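A central condition in this line of work is symmetry: a loss L is noise tolerant when the sum of L(f(x), j) over all class labels j is a constant independent of the prediction. Mean absolute error satisfies it while categorical cross-entropy does not, which can be checked numerically (a sketch using standard definitions of the two losses):

```python
import numpy as np

def mae_loss(probs, label):
    """MAE between the softmax output and the one-hot label:
    ||e_label - probs||_1 = 2 * (1 - probs[label])."""
    return 2.0 * (1.0 - probs[label])

def cce_loss(probs, label):
    """Categorical cross-entropy."""
    return -np.log(probs[label])

probs = np.array([0.7, 0.2, 0.1])
k = len(probs)

# Symmetry check: sum the loss over every possible class label.
mae_sum = sum(mae_loss(probs, j) for j in range(k))
cce_sum = sum(cce_loss(probs, j) for j in range(k))
print(mae_sum)  # 4.0 (up to float rounding) = 2 * (k - 1), constant
print(cce_sum)  # depends on probs, so CCE is not symmetric
```

Because the MAE sum is the constant 2(k - 1) for any prediction, uniform label noise shifts the risk of every classifier by the same amount, leaving the minimizer unchanged.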

Learning RBM with a DC programming Approach

no code implementations • 21 Sep 2017 • Vidyadhar Upadhya, P. S. Sastry

By exploiting the property that the RBM log-likelihood function is the difference of convex functions, we formulate a stochastic variant of the difference of convex functions (DC) programming to minimize the negative log-likelihood.
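The decomposition can be verified on a toy RBM: -log p(v) = log Z - log Σ_h e^{-E(v,h)}, where each term is a log-sum-exp of functions linear in the parameters and hence convex. A sketch of the decomposition by exhaustive enumeration (not the paper's stochastic algorithm):

```python
import numpy as np
from itertools import product

def energy(v, h, W, b, c):
    """Standard RBM energy E(v, h) = -v'Wh - b'v - c'h."""
    return -(v @ W @ h + b @ v + c @ h)

rng = np.random.default_rng(0)
nv, nh = 3, 2
W = rng.standard_normal((nv, nh))
b = rng.standard_normal(nv)
c = rng.standard_normal(nh)

all_v = [np.array(s, float) for s in product([0, 1], repeat=nv)]
all_h = [np.array(s, float) for s in product([0, 1], repeat=nh)]

# log Z and log sum_h exp(-E(v, h)) are both log-sum-exps of functions
# linear in (W, b, c), hence convex in the parameters; the negative
# log-likelihood is their difference: a DC decomposition.
log_Z = np.log(sum(np.exp(-energy(vv, hh, W, b, c))
                   for vv in all_v for hh in all_h))
probs = []
for v in all_v:
    log_unnorm = np.log(sum(np.exp(-energy(v, hh, W, b, c)) for hh in all_h))
    nll = log_Z - log_unnorm      # -log p(v) as a difference of convex terms
    probs.append(np.exp(-nll))
print(np.isclose(sum(probs), 1.0))  # True: the decomposition is consistent
```

DC programming then alternates between linearizing the subtracted convex term and minimizing the resulting convex surrogate.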

On the Robustness of Decision Tree Learning under Label Noise

no code implementations • 20 May 2016 • Aritra Ghosh, Naresh Manwani, P. S. Sastry

In most practical classifier-learning problems, the training data suffers from label noise.

Empirical Analysis of Sampling Based Estimators for Evaluating RBMs

no code implementations • 8 Oct 2015 • Vidyadhar Upadhya, P. S. Sastry

Restricted Boltzmann Machines (RBMs) can be used either as classifiers or as generative models.

Making Risk Minimization Tolerant to Label Noise

no code implementations • 14 Mar 2014 • Aritra Ghosh, Naresh Manwani, P. S. Sastry

Through extensive empirical studies, we show that risk minimization under the $0-1$ loss, the sigmoid loss and the ramp loss has much better robustness to label noise when compared to the SVM algorithm.
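The losses compared can be sketched as functions of the margin y·f(x): the 0-1, sigmoid, and ramp losses are all bounded, capping the penalty a mislabelled point can incur, unlike the hinge loss underlying the SVM. The definitions below follow common conventions and may differ from the paper's in constants:

```python
import numpy as np

def loss_01(margin):
    """0-1 loss: 1 on a misclassification, 0 otherwise."""
    return (margin <= 0).astype(float)

def sigmoid_loss(margin):
    """Smooth, bounded surrogate for the 0-1 loss."""
    return 1.0 / (1.0 + np.exp(margin))

def ramp_loss(margin):
    """Hinge loss clipped at 2: bounded, unlike the hinge itself."""
    return np.minimum(2.0, np.maximum(0.0, 1.0 - margin))

def hinge_loss(margin):
    """SVM hinge loss: grows without bound as the margin decreases."""
    return np.maximum(0.0, 1.0 - margin)

m = np.array([-10.0, -1.0, 0.5, 3.0])   # margins y * f(x)
print(hinge_loss(m))  # [11.   2.   0.5  0. ] -- unbounded for bad margins
print(ramp_loss(m))   # [2.  2.  0.5 0. ]     -- capped at 2
```

Boundedness is what limits the influence any single noisy label can exert on the empirical risk.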

K-Plane Regression

no code implementations • 7 Nov 2012 • Naresh Manwani, P. S. Sastry

In this paper, we present a novel algorithm for piecewise linear regression which can learn continuous as well as discontinuous piecewise linear functions.

Clustering • Regression
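K-plane-style fits typically alternate between assigning points to planes and refitting each plane by least squares, much like k-means. A minimal sketch under that assumption (an illustrative scheme, not necessarily the paper's exact algorithm):

```python
import numpy as np

def k_plane_regression(X, y, K=2, iters=15, seed=0):
    """Alternate between (1) refitting each plane by least squares on its
    assigned points and (2) reassigning each point to the plane with the
    smallest residual.  Each step cannot increase the total squared error."""
    rng = np.random.default_rng(seed)
    Xa = np.hstack([X, np.ones((len(X), 1))])   # append intercept column
    assign = rng.integers(K, size=len(X))       # random initial assignment
    planes = np.zeros((K, Xa.shape[1]))
    history = []
    for _ in range(iters):
        for k in range(K):
            mask = assign == k
            if mask.any():
                planes[k], *_ = np.linalg.lstsq(Xa[mask], y[mask], rcond=None)
        residuals = (Xa @ planes.T - y[:, None]) ** 2
        assign = residuals.argmin(axis=1)
        history.append(residuals.min(axis=1).mean())
    return planes, assign, history

# Piecewise linear (and continuous) target y = |x|, learnable with two planes.
x = np.linspace(-1.0, 1.0, 40)[:, None]
y = np.abs(x).ravel()
planes, assign, history = k_plane_regression(x, y, K=2)
print(history[-1] <= history[0] + 1e-12)  # True: error is non-increasing
```

Because both alternating steps minimize the same squared-error objective, the error sequence is monotone non-increasing, though, as with k-means, the result can depend on the initialization.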

Polyceptron: A Polyhedral Learning Algorithm

no code implementations • 8 Jul 2011 • Naresh Manwani, P. S. Sastry

In this paper we propose a new algorithm for learning polyhedral classifiers, which we call Polyceptron.
