15 Nov 2023 • Lucas Torroba Hennigen, Shannon Shen, Aniruddha Nrusimha, Bernhard Gapp, David Sontag, Yoon Kim
LLMs are prone to hallucinations, so their outputs generally require laborious human verification before use in high-stakes applications.