1 code implementation • 22 Jan 2024 • Will LeVine, Benjamin Pikus, Jacob Phillips, Berk Norman, Fernando Amat Gil, Sean Hendryx
As deep neural networks become adopted in high-stakes domains, it is crucial to be able to identify when inference inputs are Out-of-Distribution (OOD) so that users can be alerted to likely drops in performance and calibration, despite the model's high confidence.
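For context, score-based OOD detection typically assigns each input a scalar score and thresholds it. A minimal sketch of the standard maximum-softmax-probability baseline (a common reference point, not this paper's method; the threshold here is hypothetical):

```python
import numpy as np

def max_softmax_score(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability (MSP) per input; lower values
    suggest the input may be out-of-distribution."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

logits = np.array([[4.0, 0.1, 0.2],   # confident -> likely in-distribution
                   [0.9, 1.0, 1.1]])  # diffuse   -> possibly OOD
scores = max_softmax_score(logits)
threshold = 0.5  # hypothetical; in practice fit on held-out in-distribution data
print([("ID" if s >= threshold else "OOD") for s in scores])
```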
no code implementations • 21 Nov 2023 • Will LeVine, Benjamin Pikus, Anthony Chen, Sean Hendryx
These reward models are additionally used at inference time to estimate how well LLM responses adhere to those desired behaviors.
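As a minimal sketch of the general pattern (illustrative only; the `RewardHead` class, dimensions, and selection rule are assumptions, not this paper's implementation), a scalar reward model can score candidate responses at inference time, e.g. for best-of-n selection:

```python
import torch
import torch.nn as nn

class RewardHead(nn.Module):
    """Hypothetical scalar reward head over pooled LLM hidden states."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, 1)

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        return self.linear(pooled).squeeze(-1)  # one scalar per response

# Score candidate responses and keep the highest-reward one.
head = RewardHead(hidden_dim=8)
pooled_states = torch.randn(3, 8)  # stand-in for 3 candidate responses
rewards = head(pooled_states)
best = int(torch.argmax(rewards))
print(f"selected response {best} with reward {rewards[best].item():.3f}")
```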
no code implementations • 11 Mar 2023 • Will LeVine, Benjamin Pikus, Pranav Raja, Fernando Amat Gil
Calibration of deep learning models is crucial to their trustworthiness and safe usage and, as such, has been extensively studied in supervised classification models, with methods crafted to decrease miscalibration.
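Miscalibration is commonly quantified with expected calibration error (ECE), which measures the gap between a model's confidence and its accuracy. A minimal sketch of the standard metric (a widely used definition, not specific to this paper):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by confidence and average the
    |accuracy - confidence| gap, weighted by the fraction of samples per bin."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# A model that is 90% confident but only 60% accurate is miscalibrated.
print(expected_calibration_error([0.9, 0.9, 0.9, 0.9, 0.9],
                                 [1, 1, 1, 0, 0]))  # -> 0.3
```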