Search Results for author: Or Honovich

Found 10 papers, 7 papers with code

$Q^{2}$: Evaluating Factual Consistency in Knowledge-Grounded Dialogues via Question Generation and Question Answering

no code implementations • EMNLP 2021 • Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, Omri Abend

Neural knowledge-grounded generative models for dialogue often produce content that is factually inconsistent with the knowledge they rely on, making them unreliable and limiting their applicability.

Abstractive Text Summarization • Natural Language Inference • +3
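
The title of the entry above summarizes the approach: questions are asked about the system's response, answered against the grounding knowledge, and the two sets of answers are compared. Below is a minimal, illustrative sketch of that loop; the `generate_questions`, `answer`, and `similar` components are placeholders (assumptions), not the authors' released implementation.

```python
# Illustrative sketch of a QG + QA consistency check in the spirit of Q^2.
# The three callables are placeholders, not the paper's code:
#   generate_questions(text) -> list of questions about the text
#   answer(question, context) -> answer string extracted from the context
#   similar(a, b) -> True if the two answers express the same fact
def qg_qa_consistency(response, knowledge, generate_questions, answer, similar):
    """Fraction of questions about the response whose answer is also
    supported by the grounding knowledge."""
    questions = generate_questions(response)
    if not questions:
        return 0.0
    supported = 0
    for question in questions:
        answer_from_response = answer(question, context=response)
        answer_from_knowledge = answer(question, context=knowledge)
        if similar(answer_from_response, answer_from_knowledge):
            supported += 1
    return supported / len(questions)
```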

A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains

no code implementations • 1 Feb 2024 • Alon Jacovi, Yonatan Bitton, Bernd Bohnet, Jonathan Herzig, Or Honovich, Michael Tseng, Michael Collins, Roee Aharoni, Mor Geva

REVEAL includes comprehensive labels for the relevance, attribution to evidence passages, and logical correctness of each reasoning step in a language model's answer, across a variety of datasets and state-of-the-art language models.

Open-Domain Question Answering

Surfacing Biases in Large Language Models using Contrastive Input Decoding

no code implementations • 12 May 2023 • Gal Yona, Or Honovich, Itay Laish, Roee Aharoni

We use CID to highlight context-specific biases that are hard to detect with standard decoding strategies and quantify the effect of different input perturbations.

Text Generation
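
A rough sketch of the contrastive idea suggested by this entry: score candidate next tokens by how much more likely they are after one input than after a minimally perturbed version of it. This is an assumption-laden illustration using a small GPT-2 model, not the authors' implementation; see the paper for the exact decoding objective.

```python
# Hedged sketch of contrastive next-token scoring between two inputs
# (an illustration of the general idea, not the paper's released code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def next_token_logprobs(prompt):
    """Log-probabilities of every vocabulary token following the prompt."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]
    return torch.log_softmax(logits, dim=-1)

def contrastive_token_scores(prompt, perturbed_prompt, alpha=1.0):
    """Tokens scoring high are much more likely after `prompt` than after
    `perturbed_prompt`; inspecting them can surface input-specific behavior."""
    return next_token_logprobs(prompt) - alpha * next_token_logprobs(perturbed_prompt)

# Example: two inputs differing only in a name (a made-up perturbation).
scores = contrastive_token_scores("My name is Jamal and I", "My name is John and I")
top = torch.topk(scores, k=5).indices
print(tokenizer.convert_ids_to_tokens(top.tolist()))
```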

Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor

3 code implementations • 19 Dec 2022 • Or Honovich, Thomas Scialom, Omer Levy, Timo Schick

We collect 64,000 examples by prompting a language model with three seed examples of instructions and eliciting a fourth.

Language Modelling
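
The snippet above describes the core data-collection step: prompt a model with three seed instruction examples and have it produce a fourth. A minimal sketch of that prompting setup is below; the seed texts and the `complete` callable are placeholders, not the released Unnatural Instructions pipeline.

```python
# Minimal sketch of seed-example prompting (placeholders, not the released pipeline).
SEED_EXAMPLES = [
    "Example 1\nInstruction: Translate the following sentence into French.",
    "Example 2\nInstruction: Summarize the paragraph below in one sentence.",
    "Example 3\nInstruction: List three synonyms for the given word.",
]

def build_prompt(seeds=SEED_EXAMPLES):
    """Concatenate three seed examples and leave a slot for a fourth."""
    return "\n\n".join(seeds) + "\n\nExample 4\nInstruction:"

def elicit_instruction(complete, seeds=SEED_EXAMPLES):
    """`complete` is any text-completion function; the model continues the
    pattern, producing a new instruction that can be added to the dataset."""
    return complete(build_prompt(seeds)).strip()
```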

DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering

1 code implementation • 10 Nov 2022 • Ella Neeman, Roee Aharoni, Or Honovich, Leshem Choshen, Idan Szpektor, Omri Abend

Question answering models commonly have access to two sources of "knowledge" during inference time: (1) parametric knowledge - the factual knowledge encoded in the model weights, and (2) contextual knowledge - external knowledge (e.g., a Wikipedia passage) given to the model to generate a grounded answer.

Counterfactual • Data Augmentation • +2

LMentry: A Language Model Benchmark of Elementary Language Tasks

1 code implementation • 3 Nov 2022 • Avia Efrat, Or Honovich, Omer Levy

As the performance of large language models rapidly improves, benchmarks are getting larger and more complex as well.

Language Modelling • Sentence

Instruction Induction: From Few Examples to Natural Language Task Descriptions

1 code implementation • 22 May 2022 • Or Honovich, Uri Shaham, Samuel R. Bowman, Omer Levy

Large language models are able to perform a task by conditioning on a few input-output demonstrations - a paradigm known as in-context learning.

In-Context Learning
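
The snippet above describes in-context learning; the paper then flips the setup, asking the model to verbalize the task from the demonstrations. The sketch below illustrates both prompt formats with made-up demonstration pairs; it is not the paper's code.

```python
# Illustrative prompt builders (the demonstration pairs are made up).
DEMOS = [("cat", "chat"), ("dog", "chien"), ("house", "maison")]

def in_context_prompt(demos, new_input):
    """Standard in-context learning: show input-output pairs, then a new input."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in demos]
    blocks.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(blocks)

def instruction_induction_prompt(demos):
    """Instruction induction: ask the model to state the underlying task."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in demos]
    blocks.append("The instruction was:")
    return "\n\n".join(blocks)

print(in_context_prompt(DEMOS, "tree"))
print(instruction_induction_prompt(DEMOS))
```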

$Q^{2}$: Evaluating Factual Consistency in Knowledge-Grounded Dialogues via Question Generation and Question Answering

1 code implementation • 16 Apr 2021 • Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, Omri Abend

Neural knowledge-grounded generative models for dialogue often produce content that is factually inconsistent with the knowledge they rely on, making them unreliable and limiting their applicability.

Abstractive Text Summarization • Dialogue Evaluation • +4
