1 code implementation • 15 May 2021 • Sunil Thulasidasan, Sushil Thapa, Sayera Dhaubhadel, Gopinath Chennupati, Tanmoy Bhattacharya, Jeff Bilmes
In this work we present a simple but highly effective approach to out-of-distribution detection based on the principle of abstention: when encountering a sample from an unseen class, the desired behavior is to abstain from predicting.
Ranked #1 on Out-of-Distribution Detection on CIFAR-100 (using extra training data)
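As a rough illustration of the abstention principle at inference time, the sketch below scores inputs by the probability the network assigns to an extra "abstain" output; the model, layer sizes, and data here are placeholders, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): scoring out-of-distribution inputs
# with an extra "abstain" class. Assumes a classifier with K+1 logits where
# the last unit is the abstain class; its probability serves as the OOD score.
import torch
import torch.nn as nn

NUM_CLASSES = 10  # hypothetical number of in-distribution classes

model = nn.Sequential(                 # stand-in for any trained network
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256),
    nn.ReLU(),
    nn.Linear(256, NUM_CLASSES + 1),   # last unit = abstain class
)

def ood_score(x: torch.Tensor) -> torch.Tensor:
    """Return the abstain-class probability; higher means more likely OOD."""
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=-1)
    return probs[:, -1]

x = torch.randn(4, 3, 32, 32)          # dummy batch of CIFAR-sized images
print(ood_score(x))                    # flag inputs with high abstain probability
```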
no code implementations • 1 Jan 2021 • Sunil Thulasidasan, Sushil Thapa, Sayera Dhaubhadel, Gopinath Chennupati, Tanmoy Bhattacharya, Jeff Bilmes
In this work we present a simple but highly effective approach to out-of-distribution detection based on the principle of abstention: when encountering a sample from an unseen class, the desired behavior is to abstain from predicting.
no code implementations • 16 Oct 2020 • Xiaoying Pang, Sunil Thulasidasan, Larry Rybarcyk
We describe an approach to learning optimal control policies for a large, linear particle accelerator using deep reinforcement learning coupled with a high-fidelity physics engine.
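Purely as an illustration of the setup, the toy sketch below wraps a hypothetical simulator as an RL environment over which a standard deep RL agent (e.g. DQN or PPO) could be trained; all names, dynamics, and rewards are stand-ins, not the actual accelerator model or the authors' code.

```python
# Illustrative sketch only: a physics simulator wrapped as an RL environment.
# The paper couples deep RL with a high-fidelity engine; here the "simulator"
# is a toy stand-in so the loop runs end to end.
import numpy as np

class AcceleratorEnv:
    """Toy environment: actions adjust magnet settings, reward penalizes beam error."""
    def __init__(self, n_magnets: int = 4):
        self.n_magnets = n_magnets
        self.target = np.zeros(n_magnets)        # desired beam parameters (hypothetical)
        self.state = None

    def reset(self) -> np.ndarray:
        self.state = np.random.uniform(-1, 1, self.n_magnets)
        return self.state

    def step(self, action: np.ndarray):
        self.state = self.state + 0.1 * action   # a real physics engine would update here
        reward = -float(np.sum((self.state - self.target) ** 2))
        done = abs(reward) < 1e-3
        return self.state, reward, done

env = AcceleratorEnv()
obs = env.reset()
for _ in range(10):  # random policy; a trained DRL agent would replace this
    obs, reward, done = env.step(np.random.uniform(-1, 1, env.n_magnets))
```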
no code implementations • 10 Sep 2020 • Sayera Dhaubhadel, Jamaludin Mohd-Yusof, Kumkum Ganguly, Gopinath Chennupati, Sunil Thulasidasan, Nicolas W. Hengartner, Brent J. Mumphrey, Eric B. Durbin, Jennifer A. Doherty, Mireille Lemieux, Noah Schaefferkoetter, Georgia Tourassi, Linda Coyle, Lynne Penberthy, Benjamin H. McMahon, Tanmoy Bhattacharya
We demonstrate an abstaining classifier in a multitask setting for classifying cancer pathology reports from the NCI SEER cancer registries on six tasks of interest.
2 code implementations • NeurIPS 2019 • Sunil Thulasidasan, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, Sarah Michalak
In this work, we discuss a hitherto untouched aspect of mixup training -- the calibration and predictive uncertainty of models trained with mixup.
Ranked #1 on Out-of-Distribution Detection on STL-10
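For context, a minimal sketch of the standard mixup operation whose calibration effects the paper studies: inputs and one-hot labels are convexly combined with a Beta-distributed coefficient. The model, data, and hyperparameters below are placeholders.

```python
# Standard mixup sketch (in the formulation of Zhang et al., 2018):
# mixed input = lam * x_i + (1 - lam) * x_j, with matching soft labels.
import torch
import torch.nn.functional as F

def mixup_batch(x, y, num_classes, alpha=0.2):
    """Return mixed inputs and soft labels for one minibatch."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1 - lam) * x[perm]
    y_onehot = F.one_hot(y, num_classes).float()
    y_mixed = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mixed, y_mixed

x = torch.randn(8, 3, 32, 32)            # dummy images
y = torch.randint(0, 10, (8,))           # dummy labels
x_mixed, y_mixed = mixup_batch(x, y, num_classes=10)
# Training would minimize cross-entropy against the soft labels, e.g.
# loss = -(y_mixed * F.log_softmax(model(x_mixed), dim=-1)).sum(dim=-1).mean()
```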
2 code implementations • 27 May 2019 • Sunil Thulasidasan, Tanmoy Bhattacharya, Jeff Bilmes, Gopinath Chennupati, Jamal Mohd-Yusof
In the case of unstructured (arbitrary) label noise, abstention during training enables the DAC (deep abstaining classifier) to be used as an effective data cleaner by identifying samples that are likely to have label noise.
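A hedged sketch of the data-cleaning use described above: given a trained abstaining classifier with an extra abstain output, samples on which the abstain probability is high are flagged as likely mislabeled. The model and threshold here are placeholders, not the authors' code.

```python
# Sketch: filtering suspect training samples with a trained abstaining classifier.
import torch
import torch.nn as nn

NUM_CLASSES = 10
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, NUM_CLASSES + 1))  # stand-in DAC

def flag_noisy(inputs: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Return a boolean mask of samples the classifier abstains on."""
    with torch.no_grad():
        abstain_prob = torch.softmax(model(inputs), dim=-1)[:, -1]
    return abstain_prob > threshold

data = torch.randn(16, 3, 32, 32)        # dummy training inputs
keep_mask = ~flag_noisy(data)            # retain only samples the model does not abstain on
clean_data = data[keep_mask]
```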
no code implementations • ICLR 2019 • Sunil Thulasidasan, Tanmoy Bhattacharya, Jeffrey Bilmes, Gopinath Chennupati, Jamal Mohd-Yusof
We introduce the deep abstaining classifier -- a deep neural network trained with a novel loss function that provides an abstention option during training.
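The sketch below gives one way such an abstention loss can be written: cross-entropy on the class probabilities renormalized over the non-abstain outputs, weighted by the non-abstention mass, plus a penalty (weight alpha, an assumed hyperparameter name) that discourages abstaining on everything. It follows the spirit of the paper's formulation but should not be read as the authors' exact implementation.

```python
# Illustrative abstention-style loss: the network has K+1 outputs, the last
# being "abstain". ce_term is cross-entropy on renormalized class probabilities,
# down-weighted by (1 - p_abstain); the penalty term grows as p_abstain -> 1.
import torch

def dac_loss(logits: torch.Tensor, targets: torch.Tensor, alpha: float = 1.0,
             eps: float = 1e-7) -> torch.Tensor:
    probs = torch.softmax(logits, dim=-1)
    p_abstain = probs[:, -1].clamp(max=1 - eps)
    p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp(min=eps)
    ce_term = -(1 - p_abstain) * torch.log(p_true / (1 - p_abstain))
    penalty = alpha * torch.log(1.0 / (1 - p_abstain))
    return (ce_term + penalty).mean()

logits = torch.randn(8, 11, requires_grad=True)   # 10 classes + 1 abstain unit
targets = torch.randint(0, 10, (8,))
loss = dac_loss(logits, targets)
loss.backward()
```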
no code implementations • 15 Dec 2016 • Sunil Thulasidasan, Jeffrey Bilmes, Garrett Kenyon
We describe a computationally efficient, stochastic graph-regularization technique that can be utilized for the semi-supervised training of deep neural networks in a parallel or distributed setting.
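As a rough sketch of the idea, the snippet below samples a random batch of graph edges each step and penalizes disagreement between the endpoints' predictions alongside the supervised loss; the graph construction, model, and weighting are placeholders, not the authors' implementation.

```python
# Sketch: stochastic graph regularization for semi-supervised training.
# Each step samples a minibatch of edges from a data-induced similarity graph
# and penalizes prediction differences across those edges.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))  # toy model

def graph_reg_loss(x_all, edges, edge_batch=32):
    """Squared difference between predictions at the two ends of sampled edges."""
    idx = torch.randint(0, edges.size(0), (edge_batch,))
    src, dst = edges[idx, 0], edges[idx, 1]
    p_src = F.softmax(model(x_all[src]), dim=-1)
    p_dst = F.softmax(model(x_all[dst]), dim=-1)
    return ((p_src - p_dst) ** 2).sum(dim=-1).mean()

x_all = torch.randn(1000, 20)                 # labeled + unlabeled features
edges = torch.randint(0, 1000, (5000, 2))     # placeholder kNN-style similarity graph
x_lab, y_lab = x_all[:100], torch.randint(0, 5, (100,))
loss = F.cross_entropy(model(x_lab), y_lab) + 0.1 * graph_reg_loss(x_all, edges)
loss.backward()
```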
no code implementations • 15 Dec 2016 • Sunil Thulasidasan, Jeffrey Bilmes
We describe a graph-based semi-supervised learning framework in the context of deep neural networks that uses a graph-based entropic regularizer to favor smooth solutions over a graph induced by the data.
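A minimal sketch of an entropic smoothness penalty of this kind: the KL divergence between predictions at connected nodes, weighted by edge weights from the data-induced graph. Everything below (graph, weights, model outputs) is a placeholder rather than the paper's exact regularizer.

```python
# Sketch: entropic (KL-based) graph smoothness penalty over a similarity graph.
import torch
import torch.nn.functional as F

def entropic_graph_penalty(logits: torch.Tensor, edges: torch.Tensor,
                           weights: torch.Tensor) -> torch.Tensor:
    """Weighted mean of KL(p_i || p_j) over graph edges (i, j)."""
    log_p = F.log_softmax(logits, dim=-1)
    p = log_p.exp()
    i, j = edges[:, 0], edges[:, 1]
    kl = (p[i] * (log_p[i] - log_p[j])).sum(dim=-1)
    return (weights * kl).mean()

logits = torch.randn(200, 5)                  # predictions for all graph nodes
edges = torch.randint(0, 200, (800, 2))       # placeholder similarity graph
weights = torch.rand(800)                     # placeholder edge weights
print(entropic_graph_penalty(logits, edges, weights))
```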