no code implementations • 4 Jun 2024 • Yasin Abbasi Yadkori, Ilja Kuzborskij, András György, Csaba Szepesvári
Such quantification, for instance, makes it possible to detect hallucinations (cases where epistemic uncertainty is high) in both single- and multi-answer responses.
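A minimal sketch of the underlying idea, under the assumption that disagreement across sampled responses is used as a proxy for epistemic uncertainty; `sample_response`, the entropy score, and the threshold are illustrative placeholders, not the paper's information-theoretic measure.

```python
from collections import Counter
import math

def sample_response(prompt: str, seed: int) -> str:
    # Hypothetical stand-in: in practice this would query an LLM
    # with temperature > 0 to obtain diverse samples.
    canned = ["Paris", "Paris", "Lyon", "Paris"]
    return canned[seed % len(canned)]

def answer_entropy(prompt: str, n_samples: int = 8) -> float:
    # Entropy of the empirical distribution over distinct answers.
    counts = Counter(sample_response(prompt, s) for s in range(n_samples))
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def looks_like_hallucination(prompt: str, threshold: float = 0.7) -> bool:
    # High disagreement across samples is treated as a sign of high
    # epistemic uncertainty about the answer.
    return answer_entropy(prompt) > threshold

print(looks_like_hallucination("What is the capital of France?"))
```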
no code implementations • 4 Apr 2024 • Yasin Abbasi Yadkori, Ilja Kuzborskij, David Stutz, András György, Adam Fisch, Arnaud Doucet, Iuliya Beloshapka, Wei-Hung Weng, Yao-Yuan Yang, Csaba Szepesvári, Ali Taylan Cemgil, Nenad Tomasev
We develop a principled procedure for determining when a large language model (LLM) should abstain from responding (e.g., by saying "I don't know") in a general domain, instead of resorting to possibly "hallucinating" a nonsensical or incorrect answer.
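A minimal sketch of calibrating an abstention threshold on held-out data, in the spirit of a conformal-style procedure; the synthetic uncertainty scores, correctness labels, and target rate below are assumptions for illustration, not the paper's actual score function or guarantee.

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibration set: per-question uncertainty scores and correctness labels.
scores = rng.uniform(0, 1, size=500)
correct = rng.random(500) < (1 - scores)   # higher score -> more likely wrong

alpha = 0.1  # target error rate among non-abstained responses

# Scan candidate thresholds; answer only when the score is at most tau.
best_tau = 0.0
for tau in np.sort(scores):
    kept = scores <= tau
    if kept.sum() == 0:
        continue
    error_rate = 1 - correct[kept].mean()
    if error_rate <= alpha:
        best_tau = tau  # keep the largest threshold meeting the target

def respond(score: float) -> str:
    return "answer" if score <= best_tau else "I don't know"

print("calibrated threshold:", round(best_tau, 3))
print(respond(0.2), respond(0.9))
```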
no code implementations • NeurIPS 2019 • My Phan, Yasin Abbasi Yadkori, Justin Domke
We study the effects of approximate inference on the performance of Thompson sampling in $k$-armed bandit problems.
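A minimal sketch of the setting, assuming Bernoulli rewards with Beta priors: Thompson sampling with the exact posterior versus a moment-matched Gaussian approximation. The environment, horizon, and choice of approximation are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
k, horizon = 5, 2000
true_means = rng.uniform(0.1, 0.9, size=k)

def run(approximate: bool) -> float:
    successes = np.ones(k)   # Beta(1, 1) priors on each arm's mean
    failures = np.ones(k)
    reward_sum = 0.0
    for _ in range(horizon):
        if approximate:
            # Moment-matched Gaussian in place of the exact Beta posterior.
            mean = successes / (successes + failures)
            var = mean * (1 - mean) / (successes + failures + 1)
            samples = rng.normal(mean, np.sqrt(var))
        else:
            samples = rng.beta(successes, failures)
        arm = int(np.argmax(samples))         # play the arm with the largest sample
        reward = rng.random() < true_means[arm]
        successes[arm] += reward
        failures[arm] += 1 - reward
        reward_sum += reward
    return reward_sum

print("exact posterior reward:", run(approximate=False))
print("approximate posterior reward:", run(approximate=True))
```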
no code implementations • 28 Jan 2018 • Ryan A. Rossi, Nesreen K. Ahmed, Eunyee Koh, Sungchul Kim, Anup Rao, Yasin Abbasi Yadkori
This paper describes a general framework for learning Higher-Order Network Embeddings (HONE) from graph data based on network motifs.
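A minimal sketch of a motif-based node embedding, assuming a single triangle motif and a truncated SVD of the motif-weighted adjacency matrix; the HONE framework itself is more general (multiple motifs, different matrix functions), so this only illustrates the basic idea.

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
n = G.number_of_nodes()

# Weight each edge by the number of triangles it participates in
# (the count of common neighbors of its endpoints).
W = np.zeros((n, n))
for u, v in G.edges():
    common = len(set(G.neighbors(u)) & set(G.neighbors(v)))
    W[u, v] = W[v, u] = common

# Low-dimensional node embeddings from a truncated SVD of the motif matrix.
dim = 8
U, S, _ = np.linalg.svd(W)
embeddings = U[:, :dim] * np.sqrt(S[:dim])
print(embeddings.shape)  # (34, 8)
```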