Search Results for author: Renat Aksitov

Found 6 papers, 1 paper with code

Universal Self-Consistency for Large Language Model Generation

no code implementations • 29 Nov 2023 • Xinyun Chen, Renat Aksitov, Uri Alon, Jie Ren, Kefan Xiao, Pengcheng Yin, Sushant Prakash, Charles Sutton, Xuezhi Wang, Denny Zhou

Self-consistency with chain-of-thought prompting (CoT) has demonstrated remarkable performance gains on various challenging tasks, by utilizing multiple reasoning paths sampled from large language models (LLMs).

Code Generation • Language Modelling +3
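
For context, standard self-consistency (which this paper generalizes to free-form generation) samples several chain-of-thought completions and majority-votes over their final answers. A minimal sketch, assuming hypothetical `sample_completion` and `extract_answer` helpers rather than any particular LLM API:

```python
# Minimal self-consistency sketch: sample several chain-of-thought
# completions and take a majority vote over the parsed final answers.
# `sample_completion` and `extract_answer` are hypothetical stand-ins
# for an LLM sampling call and an answer parser.
from collections import Counter

def self_consistency(prompt, sample_completion, extract_answer, n_samples=8):
    """Return the most frequent final answer across sampled reasoning paths."""
    answers = []
    for _ in range(n_samples):
        completion = sample_completion(prompt)       # one sampled reasoning path
        answers.append(extract_answer(completion))   # parse its final answer
    # Majority vote over final answers selects the self-consistent output.
    return Counter(answers).most_common(1)[0][0]
```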

Hallucination Augmented Recitations for Language Models

no code implementations • 13 Nov 2023 • Abdullatif Köksal, Renat Aksitov, Chung-Ching Chang

For open book QA as a case study, we demonstrate that models finetuned with our counterfactual datasets improve text grounding, leading to better open book QA performance, with up to an 8.0% increase in F1 score.

counterfactual • Hallucination +1
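
The reported gain refers to the token-level F1 metric commonly used for open book QA, i.e. the harmonic mean of token precision and recall against the gold answer. A minimal sketch of that metric (illustrative, not code from the paper):

```python
# Illustrative token-level F1 between a predicted and a gold answer,
# the usual open-book QA metric behind numbers like "+8.0% F1".
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```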

KL-Divergence Guided Temperature Sampling

2 code implementations • 2 Jun 2023 • Chung-Ching Chang, David Reitter, Renat Aksitov, Yun-Hsuan Sung

One common approach to mitigating hallucinations is to provide source/grounding documents and to train the model to produce predictions that bind to, and are attributable to, the provided source.

Conversational Question Answering • Language Modelling +1
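
The listing gives only the title and one sentence of motivation, so the following is an illustrative guess at the general idea suggested by the title: modulate the softmax temperature at each decoding step by the KL divergence between the source-conditioned and unconditioned next-token distributions. The schedule, the `alpha` weighting, and all names here are assumptions, not the paper's actual formulation:

```python
# Illustrative sketch only: per-step temperature modulated by the KL divergence
# between a source-conditioned and an unconditioned next-token distribution.
# This is a guess at the general idea behind the title, not the paper's method.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def kl_guided_sample(logits_with_source, logits_without_source,
                     base_temperature=1.0, alpha=0.5, rng=None):
    """Sample a token id; lower the temperature when the source strongly
    constrains the prediction (large KL), sharpening grounded tokens."""
    rng = rng or np.random.default_rng()
    softmax = lambda x: np.exp(x - x.max()) / np.exp(x - x.max()).sum()
    p_src = softmax(np.asarray(logits_with_source, dtype=float))
    p_free = softmax(np.asarray(logits_without_source, dtype=float))
    kl = kl_divergence(p_src, p_free)
    temperature = base_temperature / (1.0 + alpha * kl)   # assumed schedule
    probs = softmax(np.asarray(logits_with_source, dtype=float) / temperature)
    return int(rng.choice(len(probs), p=probs))
```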

Characterizing Attribution and Fluency Tradeoffs for Retrieval-Augmented Large Language Models

no code implementations • 11 Feb 2023 • Renat Aksitov, Chung-Ching Chang, David Reitter, Siamak Shakeri, YunHsuan Sung

One common solution to this is augmenting LLMs with a retrieval system and making sure that the generated output is attributable to the retrieved information.

Retrieval
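
A minimal sketch of the retrieve-then-generate setup the sentence above describes, with hypothetical `retrieve` and `generate` stand-ins for a retriever and an LLM call:

```python
# Minimal retrieval-augmented generation sketch; `retrieve` and `generate`
# are hypothetical stand-ins for a document retriever and an LLM call.
def rag_answer(question, retrieve, generate, k=3):
    passages = retrieve(question, k=k)              # top-k grounding documents
    context = "\n\n".join(passages)
    prompt = (
        "Answer using only the passages below, and cite the passage you used.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)                         # attributable prediction
```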
