Search Results for author: Will LeVine

Found 6 papers, 4 papers with code

Out-of-Distribution Detection & Applications With Ablated Learned Temperature Energy

1 code implementation • 22 Jan 2024 • Will LeVine, Benjamin Pikus, Jacob Phillips, Berk Norman, Fernando Amat Gil, Sean Hendryx

As deep neural networks become adopted in high-stakes domains, it is crucial to be able to identify when inference inputs are Out-of-Distribution (OOD) so that users can be alerted to likely drops in performance and calibration despite high confidence.

Object Detection +2
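The paper above builds on energy-based OOD scoring. As a point of reference, the standard (non-ablated) energy score assigns each input the negative free energy of its logits; the paper's contribution concerns how the temperature is learned and ablated, which is not reproduced here. A minimal sketch of the baseline score, assuming plain classifier logits:

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Standard energy-based OOD score: T * logsumexp(logits / T).

    Higher values indicate more in-distribution inputs. This is the
    common baseline; the paper's ablated learned-temperature variant
    changes how T is obtained, not shown here.
    """
    z = np.asarray(logits, dtype=float) / T
    m = z.max(axis=-1)
    # Numerically stable logsumexp, rescaled by the temperature.
    return T * (m + np.log(np.exp(z - m[..., None]).sum(axis=-1)))

# A peaked (confident) logit vector scores higher than a flat one,
# so thresholding the score flags likely-OOD inputs:
confident = energy_score([10.0, 0.0, 0.0])
flat = energy_score([1.0, 1.0, 1.0])
```

Thresholding this score (below some value chosen on a validation set) is the usual way such a detector alerts users to likely-OOD inputs.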

A Baseline Analysis of Reward Models' Ability To Accurately Analyze Foundation Models Under Distribution Shift

no code implementations • 21 Nov 2023 • Will LeVine, Benjamin Pikus, Anthony Chen, Sean Hendryx

These reward models are additionally used at inference time to estimate LLM responses' adherence to those desired behaviors.

Enabling Calibration In The Zero-Shot Inference of Large Vision-Language Models

no code implementations • 11 Mar 2023 • Will LeVine, Benjamin Pikus, Pranav Raja, Fernando Amat Gil

Calibration of deep learning models is crucial to their trustworthiness and safe usage, and as such, has been extensively studied in supervised classification models, with methods crafted to decrease miscalibration.
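Miscalibration in this context is typically quantified with the Expected Calibration Error (ECE), which bins predictions by confidence and measures the gap between confidence and accuracy in each bin. A minimal sketch of the standard binned ECE (this is the common metric, not necessarily the exact evaluation protocol of the paper above):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: sum over bins of (bin weight) * |accuracy - confidence|.

    `confidences` are the model's max predicted probabilities;
    `correct` is 1 where the prediction was right, else 0.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # Weight each bin by the fraction of samples it holds.
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# Perfectly calibrated: 75% confidence, 75% accuracy -> ECE of 0.
ece = expected_calibration_error([0.75] * 4, [1, 1, 1, 0])
```

A model is well calibrated when its stated confidence matches its empirical accuracy, so a lower ECE indicates more trustworthy probabilities.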

Deep Discriminative to Kernel Density Networks for Calibrated Inference

1 code implementation • 31 Jan 2022 • Jayanta Dey, Will LeVine, Haoyin Xu, Ashwin De Silva, Tyler M. Tomita, Ali Geisa, Tiffany Chu, Jacob Desman, Joshua T. Vogelstein

In this paper, we leveraged the fact that deep models, both random forests and deep nets, learn internal representations that are unions of polytopes with affine activation functions, in order to conceptualize both as partitioning rules of the feature space.

Out-of-Distribution Detection, regression
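The polytope view in the snippet above can be made concrete for a ReLU network: each input's binary pattern of active units indexes the affine polytope of input space the point falls in. A toy sketch with hypothetical random weights (illustrative only, not the paper's model):

```python
import numpy as np

# Toy 2-hidden-layer ReLU net with hypothetical random weights.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 8))

def activation_pattern(x):
    """Binary pattern of active ReLU units for input x.

    All inputs sharing one pattern lie in the same polytope of the
    feature space, on which the network acts as a single affine map.
    """
    h1 = np.maximum(x @ W1, 0.0)
    h2 = np.maximum(h1 @ W2, 0.0)
    return tuple((h1 > 0).astype(int)) + tuple((h2 > 0).astype(int))

# Positive scaling preserves the sign of every pre-activation,
# so x and 2x land in the same polytope:
x = np.array([0.3, -1.2])
same = activation_pattern(x) == activation_pattern(2 * x)
```

Viewing forests and deep nets as such partitioning rules is what lets the paper replace each cell's affine output with a kernel density estimate for calibrated inference.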

Omnidirectional Transfer for Quasilinear Lifelong Learning

1 code implementation • 27 Apr 2020 • Joshua T. Vogelstein, Jayanta Dey, Hayden S. Helm, Will LeVine, Ronak D. Mehta, Ali Geisa, Haoyin Xu, Gido M. van de Ven, Emily Chang, Chenyu Gao, Weiwei Yang, Bryan Tower, Jonathan Larson, Christopher M. White, Carey E. Priebe

But striving to avoid forgetting sets the goal unnecessarily low: the goal of lifelong learning, whether biological or artificial, should be to improve performance on all tasks (including past and future) with any new data.

Federated Learning, Transfer Learning
