Search Results for author: Ofir Arviv

Found 9 papers, 5 papers with code

Unitxt: Flexible, Shareable and Reusable Data Preparation and Evaluation for Generative AI

1 code implementation • 25 Jan 2024 • Elron Bandel, Yotam Perlitz, Elad Venezian, Roni Friedman-Melamed, Ofir Arviv, Matan Orbach, Shachar Don-Yehiya, Dafna Sheinwald, Ariel Gera, Leshem Choshen, Michal Shmueli-Scheuer, Yoav Katz

In the dynamic landscape of generative NLP, traditional text processing pipelines limit research flexibility and reproducibility, as they are tailored to specific dataset, task, and model combinations.
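
As a rough illustration of the shareable-recipe idea, here is a minimal sketch of loading a task through Unitxt, following the pattern in the project's README; the specific card and template names, and the item schema, are assumptions that may differ across library versions.

```python
# Hedged sketch: load a dataset via a Unitxt recipe string. The card and
# template identifiers below are illustrative examples, not guaranteed names.
from unitxt import load_dataset

dataset = load_dataset(
    "card=cards.wnli,template=templates.classification.multi_class.relation.default"
)
# Each item is assumed to carry a rendered prompt ("source") and a reference
# ("target"), per Unitxt's standard output schema.
print(dataset["train"][0]["source"])
```

Because the entire preparation pipeline is named by the recipe string, the same data processing and evaluation setup can be shared and reproduced verbatim.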

Genie: Achieving Human Parity in Content-Grounded Datasets Generation

no code implementations • 25 Jan 2024 • Asaf Yehudai, Boaz Carmeli, Yosi Mass, Ofir Arviv, Nathaniel Mills, Assaf Toledo, Eyal Shnarch, Leshem Choshen

Furthermore, we compare models trained on our data with models trained on human-written data: ELI5 and ASQA for LFQA, and CNN-DailyMail for Summarization.

Long Form Question Answering

Improving Cross-Lingual Transfer through Subtree-Aware Word Reordering

1 code implementation • 20 Oct 2023 • Ofir Arviv, Dmitry Nikolaev, Taelin Karidi, Omri Abend

Despite the impressive growth of the abilities of multilingual language models, such as XLM-R and mT5, it has been shown that they still face difficulties when tackling typologically distant languages, particularly in the low-resource setting.

Cross-Lingual Transfer, POS +1

Efficient Benchmarking of Language Models

no code implementations • 22 Aug 2023 • Yotam Perlitz, Elron Bandel, Ariel Gera, Ofir Arviv, Liat Ein-Dor, Eyal Shnarch, Noam Slonim, Michal Shmueli-Scheuer, Leshem Choshen

Based on our findings, we outline a set of concrete recommendations for more efficient benchmark design and utilization practices, leading to dramatic cost savings with minimal loss of benchmark reliability, often reducing computation by 100x or more.

Benchmarking
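
To make the reliability/cost trade-off concrete, the toy simulation below (not the paper's methodology, and all numbers synthetic) scores hypothetical models on random subsets of a benchmark and checks how well the subset ranking agrees with the full ranking.

```python
# Toy illustration of benchmark subsampling: with enough models and examples,
# a small random subset often preserves the model ranking almost perfectly.
# Synthetic data only; this is not the paper's actual experiment.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n_models, n_examples = 20, 10_000
skill = rng.uniform(0.3, 0.9, size=n_models)                  # latent model quality
scores = rng.random((n_models, n_examples)) < skill[:, None]  # per-example hits

full_ranking = scores.mean(axis=1)
for frac in (1.0, 0.1, 0.01):  # 1x, 10x, 100x cheaper evaluation
    idx = rng.choice(n_examples, size=int(n_examples * frac), replace=False)
    sub_ranking = scores[:, idx].mean(axis=1)
    tau, _ = kendalltau(full_ranking, sub_ranking)
    print(f"{frac:5.0%} of examples: Kendall tau vs full benchmark = {tau:.3f}")
```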

The Benefits of Bad Advice: Autocontrastive Decoding across Model Layers

1 code implementation • 2 May 2023 • Ariel Gera, Roni Friedman, Ofir Arviv, Chulaka Gunasekara, Benjamin Sznajder, Noam Slonim, Eyal Shnarch

Applying language models to natural language processing tasks typically relies on the representations in the final model layer, as intermediate hidden layer representations are presumed to be less informative.

Language Modelling, Text Generation
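
The decoding idea lends itself to a short sketch: read next-token distributions from both the final layer and an early "amateur" layer, then prefer tokens whose probability grows the most with depth. The code below is a hedged reconstruction of that idea for GPT-2; the layer choice, scoring rule, and plausibility cutoff are illustrative, not the paper's exact formulation.

```python
# Sketch of layer-contrastive decoding: score = final log-prob minus the early
# layer's log-prob, restricted to tokens the final layer finds plausible.
# Illustrative only; not the authors' exact method.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def contrastive_next_token(input_ids, early_layer=6, alpha=1.0, tau=0.1):
    out = model(input_ids, output_hidden_states=True)
    final_logp = out.logits[0, -1].log_softmax(-1)
    # Project the early hidden state through the final layer norm and the
    # (tied) LM head, logit-lens style, to get an "amateur" distribution.
    early_hidden = out.hidden_states[early_layer][0, -1]
    early_logp = model.lm_head(model.transformer.ln_f(early_hidden)).log_softmax(-1)
    score = final_logp - alpha * early_logp
    # Keep only tokens within a factor tau of the final layer's top choice.
    score[final_logp < final_logp.max() + torch.log(torch.tensor(tau))] = float("-inf")
    return score.argmax().item()

ids = tok("The capital of France is", return_tensors="pt").input_ids
print(tok.decode(contrastive_next_token(ids)))
```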

On the Relation between Syntactic Divergence and Zero-Shot Performance

1 code implementation • EMNLP 2021 • Ofir Arviv, Dmitry Nikolaev, Taelin Karidi, Omri Abend

We explore the link between the extent to which syntactic relations are preserved in translation and the ease of correctly constructing a parse tree in a zero-shot setting.

Cross-lingual zero-shot dependency parsing, Relation +1

HUJI-KU at MRP 2020: Two Transition-based Neural Parsers

no code implementations • CONLL 2020 • Ofir Arviv, Ruixiang Cui, Daniel Hershcovich

This paper describes the HUJI-KU system submission to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2020 Conference for Computational Language Learning (CoNLL), employing TUPA and the HIT-SCIR parser, which were, respectively, the baseline system and winning system in the 2019 MRP shared task.

Semantic Parsing

Fine-Grained Analysis of Cross-Linguistic Syntactic Divergences

1 code implementation • ACL 2020 • Dmitry Nikolaev, Ofir Arviv, Taelin Karidi, Neta Kenneth, Veronika Mitnik, Lilja Maria Saeboe, Omri Abend

The patterns in which the syntax of different languages converges and diverges are often used to inform work on cross-lingual transfer.

Cross-Lingual Transfer

TUPA at MRP 2019: A Multi-Task Baseline System

no code implementations • CONLL 2019 • Daniel Hershcovich, Ofir Arviv

This paper describes the TUPA system submission to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2019 Conference for Computational Language Learning (CoNLL).

Multi-Task Learning, UCCA Parsing
