Search Results for author: Ofir Arviv

Found 12 papers, 6 papers with code

Stay Tuned: An Empirical Study of the Impact of Hyperparameters on LLM Tuning in Real-World Applications

no code implementations • 25 Jul 2024 • Alon Halfon, Shai Gretz, Ofir Arviv, Artem Spector, Orith Toledo-Ronen, Yoav Katz, Liat Ein-Dor, Michal Shmueli-Scheuer, Noam Slonim

Here, we provide recommended HP configurations for practical use cases that give practitioners a better starting point, covering two SOTA LLMs and two commonly used tuning methods.

Do These LLM Benchmarks Agree? Fixing Benchmark Evaluation with BenchBench

1 code implementation • 18 Jul 2024 • Yotam Perlitz, Ariel Gera, Ofir Arviv, Asaf Yehudai, Elron Bandel, Eyal Shnarch, Michal Shmueli-Scheuer, Leshem Choshen

Despite the crucial role of BAT for benchmark builders and consumers, there are no standardized procedures for such agreement testing.

Language Modelling
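Benchmark agreement testing (BAT) essentially asks whether two benchmarks rank the same set of models in the same order. A minimal sketch of one common agreement measure, Kendall's tau, in plain Python — the model scores below are made up for illustration, not taken from the paper:

```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Kendall's tau-a between two equal-length lists of model scores:
    (concordant pairs - discordant pairs) / total pairs."""
    assert len(scores_a) == len(scores_b)
    concordant = discordant = 0
    for i, j in combinations(range(len(scores_a)), 2):
        sign = (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    total = len(scores_a) * (len(scores_a) - 1) / 2
    return (concordant - discordant) / total

# Hypothetical accuracies of the same four models on two benchmarks
bench_a = [0.71, 0.65, 0.80, 0.55]
bench_b = [0.40, 0.38, 0.52, 0.30]
print(kendall_tau(bench_a, bench_b))  # 1.0: both benchmarks rank the models identically
```

Even with the metric fixed, BAT results still depend on choices such as which reference benchmark and which subset of models are used — the kind of methodological variance the paper argues should be standardized.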

Genie: Achieving Human Parity in Content-Grounded Datasets Generation

no code implementations • 25 Jan 2024 • Asaf Yehudai, Boaz Carmeli, Yosi Mass, Ofir Arviv, Nathaniel Mills, Assaf Toledo, Eyal Shnarch, Leshem Choshen

Furthermore, we compare models trained on our data with models trained on human-written data: ELI5 and ASQA for long-form question answering, and CNN-DailyMail for summarization.

Long Form Question Answering

Unitxt: Flexible, Shareable and Reusable Data Preparation and Evaluation for Generative AI

1 code implementation • 25 Jan 2024 • Elron Bandel, Yotam Perlitz, Elad Venezian, Roni Friedman-Melamed, Ofir Arviv, Matan Orbach, Shachar Don-Yehyia, Dafna Sheinwald, Ariel Gera, Leshem Choshen, Michal Shmueli-Scheuer, Yoav Katz

In the dynamic landscape of generative NLP, traditional text processing pipelines limit research flexibility and reproducibility, as they are tailored to specific dataset, task, and model combinations.

Improving Cross-Lingual Transfer through Subtree-Aware Word Reordering

1 code implementation • 20 Oct 2023 • Ofir Arviv, Dmitry Nikolaev, Taelin Karidi, Omri Abend

Despite the impressive growth of the abilities of multilingual language models, such as XLM-R and mT5, it has been shown that they still face difficulties when tackling typologically-distant languages, particularly in the low-resource setting.

Cross-Lingual Transfer • POS +1

Efficient Benchmarking of Language Models

no code implementations • 22 Aug 2023 • Yotam Perlitz, Elron Bandel, Ariel Gera, Ofir Arviv, Liat Ein-Dor, Eyal Shnarch, Noam Slonim, Michal Shmueli-Scheuer, Leshem Choshen

The increasing versatility of language models (LMs) has given rise to a new class of benchmarks that comprehensively assess a broad range of capabilities.

Benchmarking

The Benefits of Bad Advice: Autocontrastive Decoding across Model Layers

1 code implementation • 2 May 2023 • Ariel Gera, Roni Friedman, Ofir Arviv, Chulaka Gunasekara, Benjamin Sznajder, Noam Slonim, Eyal Shnarch

Applying language models to natural language processing tasks typically relies on the representations in the final model layer, as intermediate hidden layer representations are presumed to be less informative.

Language Modelling • Text Generation
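The "bad advice" of the title is the predictions of intermediate layers: contrasting them with the final layer's predictions can sharpen next-token scores. A minimal plain-Python sketch of the contrastive scoring idea — the logits, the layer choice, and the alpha weight below are illustrative assumptions, not the paper's exact recipe:

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax over a list of logits."""
    m = max(logits)
    z = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - z for x in logits]

def autocontrast(final_logits, early_logits, alpha=1.0):
    """Score each token by its final-layer log-prob minus alpha times its
    early-layer log-prob, demoting tokens the immature layer prefers."""
    lf = log_softmax(final_logits)
    le = log_softmax(early_logits)
    return [f - alpha * e for f, e in zip(lf, le)]

# Made-up logits over a 3-token vocabulary
final = [2.0, 1.5, 0.0]   # the final layer favors token 0
early = [0.5, 2.0, 0.0]   # an intermediate layer favors token 1
scores = autocontrast(final, early)
print(scores.index(max(scores)))  # 0: the early layer's favorite is demoted
```

Contrastive schemes of this kind usually add a plausibility constraint (e.g., only scoring tokens whose final-layer probability is within some margin of the maximum) so that tokens the early layer merely never predicts are not spuriously boosted.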

On the Relation between Syntactic Divergence and Zero-Shot Performance

1 code implementation • EMNLP 2021 • Ofir Arviv, Dmitry Nikolaev, Taelin Karidi, Omri Abend

We explore the link between the extent to which syntactic relations are preserved in translation and the ease of correctly constructing a parse tree in a zero-shot setting.

Cross-lingual zero-shot dependency parsing • Relation +1

HUJI-KU at MRP 2020: Two Transition-based Neural Parsers

no code implementations • CoNLL 2020 • Ofir Arviv, Ruixiang Cui, Daniel Hershcovich

This paper describes the HUJI-KU system submission to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2020 Conference on Computational Natural Language Learning (CoNLL), employing TUPA and the HIT-SCIR parser, which were, respectively, the baseline system and winning system in the 2019 MRP shared task.


HUJI-KU at MRP 2020: Two Transition-based Neural Parsers

no code implementations • 12 Oct 2020 • Ofir Arviv, Ruixiang Cui, Daniel Hershcovich

This paper describes the HUJI-KU system submission to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2020 Conference on Computational Natural Language Learning (CoNLL), employing TUPA and the HIT-SCIR parser, which were, respectively, the baseline system and winning system in the 2019 MRP shared task.

Semantic Parsing

Fine-Grained Analysis of Cross-Linguistic Syntactic Divergences

1 code implementation • ACL 2020 • Dmitry Nikolaev, Ofir Arviv, Taelin Karidi, Neta Kenneth, Veronika Mitnik, Lilja Maria Saeboe, Omri Abend

The patterns in which the syntax of different languages converges and diverges are often used to inform work on cross-lingual transfer.

Cross-Lingual Transfer

TUPA at MRP 2019: A Multi-Task Baseline System

no code implementations • CoNLL 2019 • Daniel Hershcovich, Ofir Arviv

This paper describes the TUPA system submission to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2019 Conference on Computational Natural Language Learning (CoNLL).

Multi-Task Learning • UCCA Parsing
