Search Results for author: Bindya Venkatesh

Found 7 papers, 1 paper with code

Designing Accurate Emulators for Scientific Processes using Calibration-Driven Deep Models

no code implementations · 5 May 2020 · Jayaraman J. Thiagarajan, Bindya Venkatesh, Rushil Anirudh, Peer-Timo Bremer, Jim Gaffney, Gemma Anderson, Brian Spears

Predictive models that accurately emulate complex scientific processes can achieve exponential speed-ups over numerical simulators or experiments, and at the same time provide surrogates for improving the subsequent analysis.

Small Data Image Classification

Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models

no code implementations · 27 Apr 2020 · Jayaraman J. Thiagarajan, Prasanna Sattigeri, Deepta Rajan, Bindya Venkatesh

The widespread adoption of representation learning technologies in clinical decision making strongly emphasizes the need for characterizing model reliability and enabling rigorous introspection of model behavior.

Decision Making · Lesion Classification · +1

Calibrate and Prune: Improving Reliability of Lottery Tickets Through Prediction Calibration

no code implementations · 10 Feb 2020 · Bindya Venkatesh, Jayaraman J. Thiagarajan, Kowshik Thopalli, Prasanna Sattigeri

The lottery ticket hypothesis, that the initializations of over-parameterized networks contain sub-network initializations which, when trained in isolation, produce highly generalizable models, has led to crucial insights into network initialization and has enabled efficient inferencing.

Transfer Learning
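The standard mechanism for uncovering lottery tickets is iterative magnitude pruning: after training, the smallest-magnitude weights are masked out and the survivors are rewound to their original initialization. A minimal sketch of the masking step is below; this illustrates the generic procedure only, not the calibration-driven pruning criterion the paper proposes.

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Binary mask that zeroes out the `sparsity` fraction of weights
    with the smallest absolute value (generic magnitude pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)       # number of weights to prune
    if k == 0:
        return np.ones_like(weights)
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

w = np.array([0.1, -0.5, 0.3, -0.05])
mask = magnitude_prune_mask(w, sparsity=0.5)  # keeps -0.5 and 0.3
```

In lottery-ticket experiments the masked network `w * mask` is then retrained from the saved initialization rather than from the pruned weights.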

Learn-By-Calibrating: Using Calibration as a Training Objective

no code implementations · 30 Oct 2019 · Jayaraman J. Thiagarajan, Bindya Venkatesh, Deepta Rajan

Calibration error is commonly adopted for evaluating the quality of uncertainty estimators in deep neural networks.

Prediction Intervals
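A widely used instance of the calibration error mentioned above is the expected calibration error (ECE), which bins predictions by confidence and averages the gap between confidence and accuracy in each bin. The numpy sketch below shows the generic metric, not the specific estimator or training objective used in the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: confidence-weighted average of |accuracy - confidence|
    over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap   # weight by bin population
    return ece
```

A perfectly calibrated model (e.g. 90% confidence, 90% accuracy) yields an ECE of zero; "Learn-By-Calibrating" turns this kind of evaluation criterion into a training objective.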

Heteroscedastic Calibration of Uncertainty Estimators in Deep Learning

no code implementations · 30 Oct 2019 · Bindya Venkatesh, Jayaraman J. Thiagarajan

The role of uncertainty quantification (UQ) in deep learning has become crucial with growing use of predictive models in high-risk applications.

Building Calibrated Deep Models via Uncertainty Matching with Auxiliary Interval Predictors

1 code implementation · 9 Sep 2019 · Jayaraman J. Thiagarajan, Bindya Venkatesh, Prasanna Sattigeri, Peer-Timo Bremer

With rapid adoption of deep learning in critical applications, the question of when and how much to trust these models often arises, which drives the need to quantify the inherent uncertainties.

Object Localization · Prediction Intervals · +2
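Interval predictors like those in this paper are commonly evaluated by the prediction interval coverage probability (PICP): the fraction of targets that fall inside the predicted intervals. The sketch below shows this standard metric as a generic illustration; it is not the paper's uncertainty-matching objective itself.

```python
import numpy as np

def picp(y, lower, upper):
    """Prediction Interval Coverage Probability: fraction of targets y
    that land inside their predicted [lower, upper] interval."""
    y, lower, upper = (np.asarray(a, dtype=float) for a in (y, lower, upper))
    return float(np.mean((y >= lower) & (y <= upper)))
```

A well-calibrated 90% interval predictor should achieve a PICP near 0.9 while keeping the intervals as narrow as possible.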
