Search Results for author: Lukas Mayer

Found 1 paper, 0 papers with code

The Calibration Gap between Model and Human Confidence in Large Language Models

no code implementations • 24 Jan 2024 • Mark Steyvers, Heliodoro Tejeda, Aakriti Kumar, Catarina Belem, Sheer Karny, Xinyue Hu, Lukas Mayer, Padhraic Smyth

Recent work has focused on the quality of internal LLM confidence assessments, but the question remains of how well LLMs can communicate this internal model confidence to human users.

Task: Multiple-choice
