Search Results for author: Yasin Abbasi Yadkori

Found 4 papers, 0 papers with code

To Believe or Not to Believe Your LLM

no code implementations · 4 Jun 2024 · Yasin Abbasi Yadkori, Ilja Kuzborskij, András György, Csaba Szepesvári

Such quantification makes it possible, for instance, to detect hallucinations (cases where epistemic uncertainty is high) in both single- and multi-answer responses; a toy sketch of one such detector follows below.

Uncertainty Quantification
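Since this entry lists no code, here is a minimal toy sketch of one plausible reading of the idea: re-prompt the model conditioned on one of its own earlier answers and measure how much the answer distribution shifts, as a crude proxy for epistemic uncertainty. Everything here (the sample_answer stub, the fixed answer pool, the entropy-shift score) is an assumption made for illustration, not the paper's metric or implementation.

```python
# Toy sketch only: this entry lists no code. The sample_answer stub, the
# fixed answer pool, and the entropy-shift score are all assumptions for
# illustration; they are not the paper's metric or implementation.
import math
import random
from collections import Counter

def sample_answer(prompt: str, rng: random.Random) -> str:
    """Stand-in for an LLM sampling call; swap in a real endpoint."""
    pool = ["Paris", "Paris", "Paris", "Lyon"]  # toy answer distribution
    return rng.choice(pool)

def entropy(counts: Counter) -> float:
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())

def epistemic_shift(prompt: str, n: int = 200, seed: int = 0) -> float:
    """Crude proxy for epistemic uncertainty: condition the prompt on one
    of the model's own earlier answers and measure how much the entropy of
    the answer distribution shifts. A grounded model should barely move."""
    rng = random.Random(seed)
    base = Counter(sample_answer(prompt, rng) for _ in range(n))
    hint = base.most_common(1)[0][0]
    follow_up = f"{prompt}\nOne earlier answer was: {hint}."
    cond = Counter(sample_answer(follow_up, rng) for _ in range(n))
    return abs(entropy(base) - entropy(cond))

if __name__ == "__main__":
    print(epistemic_shift("What is the capital of France?"))
```

A large shift would flag the response as likely hallucinated; with a real model, the conditioned prompt would actually influence sampling, unlike the stub here.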

Mitigating LLM Hallucinations via Conformal Abstention

no code implementations · 4 Apr 2024 · Yasin Abbasi Yadkori, Ilja Kuzborskij, David Stutz, András György, Adam Fisch, Arnaud Doucet, Iuliya Beloshapka, Wei-Hung Weng, Yao-Yuan Yang, Csaba Szepesvári, Ali Taylan Cemgil, Nenad Tomasev

We develop a principled procedure for determining when a large language model (LLM) should abstain from responding (e.g., by saying "I don't know") in a general domain, instead of resorting to possibly "hallucinating" a nonsensical or incorrect answer. A generic calibration sketch follows below.

Conformal Prediction · Generative Question Answering · +5
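No code is listed for this entry either; the sketch below shows one generic way a conformal abstention threshold can be calibrated. The confidence scores, the synthetic labels, and the finite-sample correction are assumptions standing in for whatever score and bound the paper actually uses.

```python
# Hedged sketch of conformal abstention, not the paper's (unreleased) code.
# The toy scores, labels, and correction term are illustrative assumptions.
import numpy as np

def calibrate_threshold(scores, wrong, alpha=0.05):
    """Pick the smallest confidence threshold tau such that, among the
    calibration examples we would answer (score >= tau), the
    finite-sample-corrected error rate stays below alpha."""
    order = np.argsort(scores)  # ascending by score
    scores, wrong = np.asarray(scores)[order], np.asarray(wrong)[order]
    n = len(scores)
    best = np.inf  # abstain on everything if no threshold qualifies
    for i in range(n):  # candidate rule: answer examples i..n-1
        m = n - i
        err = wrong[i:].mean()
        # Simple finite-sample correction in the spirit of conformal risk
        # control; not the paper's exact bound.
        if (m * err + 1) / (m + 1) <= alpha:
            best = scores[i]
            break
    return best

rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)            # toy self-consistency scores
wrong = rng.uniform(size=1000) > scores    # higher score -> fewer errors
tau = calibrate_threshold(scores, wrong, alpha=0.1)
print(f"abstain below score {tau:.3f}")
```

On the toy data, the calibrated rule answers only when the score is high enough that the corrected error rate among answered examples stays below alpha, which is the abstain-instead-of-hallucinate behavior the abstract describes.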

Thompson Sampling and Approximate Inference

no code implementations · NeurIPS 2019 · My Phan, Yasin Abbasi Yadkori, Justin Domke

We study the effects of approximate inference on the performance of Thompson sampling in $k$-armed bandit problems (a minimal simulation sketch follows below).

Decision Making · Thompson Sampling
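A minimal Bernoulli-bandit sketch of the setting this paper studies: exact Thompson sampling draws from the true Beta posterior, and a variance-scaling knob crudely mimics an approximate (here, overconfident) posterior so the effect on regret can be observed. The knob and the toy arm means are assumptions; this is not the paper's analysis or code.

```python
# Minimal sketch of Thompson sampling on a Bernoulli k-armed bandit, with a
# knob that distorts the posterior to mimic approximate inference. This
# illustrates the setting the paper studies; it is not the paper's code.
import numpy as np

def thompson(means, horizon=5000, var_scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    k = len(means)
    a = np.ones(k)  # Beta posterior: successes + 1
    b = np.ones(k)  # Beta posterior: failures + 1
    regret = 0.0
    best = max(means)
    for _ in range(horizon):
        # An exact posterior sample is Beta(a, b); var_scale != 1 crudely
        # mimics an over/under-dispersed approximate posterior.
        mu = a / (a + b)
        theta = mu + var_scale * (rng.beta(a, b) - mu)
        arm = int(np.argmax(theta))
        reward = rng.uniform() < means[arm]
        a[arm] += reward
        b[arm] += 1 - reward
        regret += best - means[arm]
    return regret

means = [0.5, 0.55]
for s in (1.0, 0.3):  # exact vs. under-dispersed (overconfident) posterior
    print(f"var_scale={s}: regret ~ {thompson(means, var_scale=s):.1f}")
```

With an under-dispersed posterior the sampler can stop exploring too early and lock onto the suboptimal arm, which is one way approximate inference degrades Thompson sampling.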

HONE: Higher-Order Network Embeddings

no code implementations · 28 Jan 2018 · Ryan A. Rossi, Nesreen K. Ahmed, Eunyee Koh, Sungchul Kim, Anup Rao, Yasin Abbasi Yadkori

This paper describes a general framework for learning Higher-Order Network Embeddings (HONE) from graph data based on network motifs.
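No code is listed here either; the sketch below reconstructs the generic motif-based idea: weight each edge by the number of triangles it participates in, then factorize that motif adjacency matrix to get node embeddings. The triangle motif, the SVD factorization, and the toy graph are assumptions for illustration, not the authors' HONE variants.

```python
# Illustrative sketch of a motif-weighted embedding, in the spirit of HONE.
# The paper lists no code; this is a generic reconstruction, not the
# authors' framework.
import numpy as np

def triangle_motif_adjacency(A):
    """W[i, j] = number of triangles containing edge (i, j)."""
    A = np.asarray(A, dtype=float)
    return (A @ A) * A  # common-neighbor counts, masked to existing edges

def embed(W, dim=2):
    """Node embeddings from a truncated SVD of the motif adjacency."""
    U, S, _ = np.linalg.svd(W)
    return U[:, :dim] * np.sqrt(S[:dim])

# Toy graph: a 4-clique with a pendant vertex attached to node 3.
A = np.array([[0, 1, 1, 1, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 0],
              [1, 1, 1, 0, 1],
              [0, 0, 0, 1, 0]])
W = triangle_motif_adjacency(A)
print(embed(W, dim=2))
```

Swapping in other motif counts (wedges, 4-cliques, and so on) and other factorizations gives the kind of family of variants the "general framework" phrasing suggests.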
