Search Results for author: Jeremiah Liu

Found 11 papers, 6 papers with code

On Uncertainty Calibration and Selective Generation in Probabilistic Neural Summarization: A Benchmark Study

no code implementations17 Apr 2023 Polina Zablotskaia, Du Phan, Joshua Maynez, Shashi Narayan, Jie Ren, Jeremiah Liu

Modern deep models for summarization attain impressive benchmark performance, but they are prone to producing miscalibrated predictive uncertainty.

Probabilistic Deep Learning

Neural-Symbolic Inference for Robust Autoregressive Graph Parsing via Compositional Uncertainty Quantification

1 code implementation26 Jan 2023 Zi Lin, Jeremiah Liu, Jingbo Shang

Pre-trained seq2seq models excel at graph semantic parsing with rich annotated data, but generalize worse to out-of-distribution (OOD) and long-tail examples.

Semantic Parsing

Reliable Graph Neural Networks for Drug Discovery Under Distributional Shift

no code implementations25 Nov 2021 Kehang Han, Balaji Lakshminarayanan, Jeremiah Liu

Concerns about overconfident mispredictions under distributional shift demand extensive reliability research on Graph Neural Networks used in critical drug-discovery tasks.

Drug Discovery

Deep Classifiers with Label Noise Modeling and Distance Awareness

no code implementations6 Oct 2021 Vincent Fortuin, Mark Collier, Florian Wenzel, James Allingham, Jeremiah Liu, Dustin Tran, Balaji Lakshminarayanan, Jesse Berent, Rodolphe Jenatton, Effrosyni Kokiopoulou

Uncertainty estimation in deep learning has recently emerged as a crucial area of interest to advance reliability and robustness in safety-critical applications.

Out-of-Distribution Detection

A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection

3 code implementations16 Jun 2021 Jie Ren, Stanislav Fort, Jeremiah Liu, Abhijit Guha Roy, Shreyas Padhy, Balaji Lakshminarayanan

Mahalanobis distance (MD) is a simple and popular post-processing method for detecting out-of-distribution (OOD) inputs in neural networks.

Intent Detection · Out-of-Distribution Detection +1
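The snippet below is a minimal NumPy sketch of the Mahalanobis-distance OOD score the abstract refers to (class-conditional Gaussians with a shared covariance fit to feature embeddings), plus a relative variant that subtracts a background-Gaussian term, which is a hedged reading of the paper's proposed fix rather than its released code; the function names and toy data are illustrative.

```python
import numpy as np

def fit_class_gaussians(feats, labels):
    """Fit per-class means and a shared (tied) covariance to feature embeddings."""
    classes = np.unique(labels)
    means = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    centered = np.vstack([feats[labels == c] - means[i] for i, c in enumerate(classes)])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(feats.shape[1])  # ridge for stability
    return means, np.linalg.inv(cov)

def mahalanobis_ood_score(x, means, prec):
    """Baseline MD score: squared distance to the nearest class-conditional Gaussian."""
    diffs = means - x
    dists = np.einsum('kd,de,ke->k', diffs, prec, diffs)
    return dists.min()  # larger => more OOD-like

def relative_mahalanobis_ood_score(x, means, prec, mu0, prec0):
    """Relative variant: subtract the distance under a single 'background' Gaussian
    fit to all training features (hedged reading of the paper's fix, not its code)."""
    background = (x - mu0) @ prec0 @ (x - mu0)
    return mahalanobis_ood_score(x, means, prec) - background

# Toy usage; in practice, feats would be penultimate-layer embeddings of a trained model.
rng = np.random.default_rng(0)
feats = rng.normal(size=(300, 8))
labels = rng.integers(0, 3, size=300)
means, prec = fit_class_gaussians(feats, labels)
mu0 = feats.mean(axis=0)
prec0 = np.linalg.inv(np.cov(feats, rowvar=False) + 1e-6 * np.eye(8))
x = rng.normal(size=8)
print(mahalanobis_ood_score(x, means, prec),
      relative_mahalanobis_ood_score(x, means, prec, mu0, prec0))
```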

Semi-Supervised Class Discovery

no code implementations10 Feb 2020 Jeremy Nixon, Jeremiah Liu, David Berthelot

One promising approach to handling datapoints outside the initial training distribution (OOD) is to create new classes that capture similarities among datapoints previously rejected as uncategorizable.
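To make the idea concrete, here is a hedged sketch of one way such new classes could be formed: cluster the embeddings of datapoints the model rejected as OOD and treat each cluster as a candidate class. The scikit-learn KMeans call and the number of clusters are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans

def discover_new_classes(ood_embeddings, n_new_classes=5, random_state=0):
    """Group rejected (OOD) datapoints into candidate new classes by clustering.

    Illustrative sketch of the general idea only: any clustering method and any
    choice of n_new_classes could be substituted.
    """
    km = KMeans(n_clusters=n_new_classes, random_state=random_state, n_init=10)
    candidate_labels = km.fit_predict(ood_embeddings)
    return candidate_labels, km.cluster_centers_

# Toy usage: 100 rejected datapoints with 16-dimensional embeddings.
rng = np.random.default_rng(0)
labels, centers = discover_new_classes(rng.normal(size=(100, 16)), n_new_classes=3)
print(np.bincount(labels))
```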

Measuring Calibration in Deep Learning

2 code implementations2 Apr 2019 Jeremy Nixon, Mike Dusenberry, Ghassen Jerfel, Timothy Nguyen, Jeremiah Liu, Linchuan Zhang, Dustin Tran

In this paper, we perform a comprehensive empirical study of choices in calibration measures, including measuring all probabilities rather than just the maximum prediction, thresholding probability values, class conditionality, the number of bins, bins that adapt to the datapoint density, and the norm used to compare accuracies to confidences.
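The choices listed above all parameterize how a calibration error is computed. As a reference point, here is a generic sketch of the plain equal-width-bin expected calibration error over the maximum predicted probability only; it is not the paper's released code, just the baseline recipe those choices would vary.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """Equal-width-bin ECE over the maximum predicted probability.

    probs: (N, K) predicted class probabilities; labels: (N,) integer labels.
    The paper studies variations of this basic recipe (all-class probabilities,
    adaptive bins, thresholding, different norms); this is only the baseline.
    """
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)

    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by fraction of samples in the bin
    return ece

# Toy usage with random softmax outputs.
rng = np.random.default_rng(0)
logits = rng.normal(size=(500, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 10, size=500)
print(expected_calibration_error(probs, labels))
```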
