Search Results for author: Uri Katz

Found 5 papers, 3 papers with code

What’s in Your Head? Emergent Behaviour in Multi-Task Transformer Models

no code implementations • EMNLP 2021 • Mor Geva, Uri Katz, Aviv Ben-Arie, Jonathan Berant

In this work, we examine the behaviour of non-target heads, that is, the output of heads when given input that belongs to a different task than the one they were trained for.

Language Modelling • Question Answering

NERetrieve: Dataset for Next Generation Named Entity Recognition and Retrieval

1 code implementation • 22 Oct 2023 • Uri Katz, Matan Vetzler, Amir DN Cohen, Yoav Goldberg

The third, and most challenging, is the move from the recognition setup to a novel retrieval setup, where the query is a zero-shot entity type, and the expected result is all the sentences from a large, pre-indexed corpus that contain entities of these types, and their corresponding spans.

named-entity-recognition • Named Entity Recognition +3

Answering Questions by Meta-Reasoning over Multiple Chains of Thought

1 code implementation • 25 Apr 2023 • Ori Yoran, Tomer Wolfson, Ben Bogin, Uri Katz, Daniel Deutch, Jonathan Berant

Modern systems for multi-hop question answering (QA) typically break questions into a sequence of reasoning steps, termed chain-of-thought (CoT), before arriving at a final answer.

Multi-hop Question Answering • Question Answering

Inferring Implicit Relations in Complex Questions with Language Models

1 code implementation • 28 Apr 2022 • Uri Katz, Mor Geva, Jonathan Berant

A prominent challenge for modern language understanding systems is the ability to answer implicit reasoning questions, where the required reasoning steps for answering the question are not mentioned in the text explicitly.

Implicit Relations • Question Answering +1

What's in your Head? Emergent Behaviour in Multi-Task Transformer Models

no code implementations • 13 Apr 2021 • Mor Geva, Uri Katz, Aviv Ben-Arie, Jonathan Berant

In this work, we examine the behaviour of non-target heads, that is, the output of heads when given input that belongs to a different task than the one they were trained for.

Language Modelling • Question Answering
