1 code implementation • 9 Sep 2019 • Jayaraman J. Thiagarajan, Bindya Venkatesh, Prasanna Sattigeri, Peer-Timo Bremer
With the rapid adoption of deep learning in critical applications, the question of when and how much to trust these models often arises, driving the need to quantify their inherent uncertainties.
no code implementations • 30 Oct 2019 • Jayaraman J. Thiagarajan, Bindya Venkatesh, Deepta Rajan
Calibration error is commonly adopted for evaluating the quality of uncertainty estimators in deep neural networks.
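The calibration error referred to here is typically measured as the expected calibration error (ECE): predictions are binned by confidence, and the gap between mean confidence and empirical accuracy is averaged across bins. A minimal sketch (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average, over confidence bins, of the absolute gap
    between the mean predicted confidence and the empirical accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # bin weight = fraction of samples
    return ece

# Overconfident toy case: 90% confidence, but only 50% accuracy.
conf = np.array([0.9, 0.9])
hit = np.array([1, 0])
print(round(expected_calibration_error(conf, hit), 3))  # -> 0.4
```

A perfectly calibrated estimator would yield an ECE of zero, since mean confidence would match accuracy in every bin.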
no code implementations • 30 Oct 2019 • Bindya Venkatesh, Jayaraman J. Thiagarajan
The role of uncertainty quantification (UQ) in deep learning has become crucial with growing use of predictive models in high-risk applications.
no code implementations • 10 Feb 2020 • Bindya Venkatesh, Jayaraman J. Thiagarajan, Kowshik Thopalli, Prasanna Sattigeri
The lottery ticket hypothesis, that over-parameterized networks contain sub-network initializations which, when trained in isolation, produce highly generalizable models, has led to crucial insights into network initialization and has enabled efficient inference.
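The standard procedure for finding such "winning tickets" is magnitude pruning with weight rewinding: train, keep the largest-magnitude weights, and reset the survivors to their original initialization. A one-shot sketch on a single toy weight matrix (the function name, sparsity level, and data are illustrative assumptions, not this paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def find_ticket(w_init, w_trained, sparsity=0.8):
    """One-shot magnitude pruning: keep the top (1 - sparsity) fraction of
    trained weights by magnitude, then rewind survivors to their init."""
    k = int(sparsity * w_trained.size)
    threshold = np.sort(np.abs(w_trained), axis=None)[k]
    mask = np.abs(w_trained) >= threshold
    # The "winning ticket": the surviving weights at their *initial* values.
    return mask, w_init * mask

# A single trained matrix stands in for a full network here.
w_init = rng.normal(size=(8, 8))
w_trained = w_init + rng.normal(scale=0.1, size=(8, 8))
mask, ticket = find_ticket(w_init, w_trained)
print(f"kept {mask.mean():.0%} of weights")
```

In the full procedure this prune-and-rewind cycle is applied iteratively, with the sparse ticket retrained from its rewound initialization each round.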
no code implementations • 27 Apr 2020 • Jayaraman J. Thiagarajan, Prasanna Sattigeri, Deepta Rajan, Bindya Venkatesh
The widespread adoption of representation learning technologies in clinical decision making strongly emphasizes the need for characterizing model reliability and enabling rigorous introspection of model behavior.
no code implementations • 5 May 2020 • Jayaraman J. Thiagarajan, Bindya Venkatesh, Rushil Anirudh, Peer-Timo Bremer, Jim Gaffney, Gemma Anderson, Brian Spears
Predictive models that accurately emulate complex scientific processes can achieve exponential speed-ups over numerical simulators or experiments, and at the same time provide surrogates for improving the subsequent analysis.
no code implementations • 30 Sep 2020 • Bindya Venkatesh, Jayaraman J. Thiagarajan
Deep predictive models rely on human supervision in the form of labeled training data.