Search Results for author: Ariel Gera

Found 15 papers, 6 papers with code

Active Learning for BERT: An Empirical Study

1 code implementation · EMNLP 2020 · Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, Noam Slonim

Here, we present a large-scale empirical study on active learning techniques for BERT-based classification, addressing a diverse set of AL strategies and datasets.

Active Learning · Binary text classification +3
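The study above compares acquisition strategies for BERT-based classifiers. As a rough illustration of one common strategy it covers, here is a minimal, model-agnostic sketch of maximum-entropy sampling; the function names and the `predict_proba` callback are illustrative, not the authors' code.

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_batch(pool, predict_proba, batch_size):
    """Maximum-entropy acquisition: pick the pool examples whose
    predicted class distribution the model is least certain about."""
    ranked = sorted(pool, key=lambda x: entropy(predict_proba(x)), reverse=True)
    return ranked[:batch_size]
```

In an AL loop, the selected batch would be sent for human labeling and added to the training set before the classifier is retrained.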

Label-Efficient Model Selection for Text Generation

no code implementations · 12 Feb 2024 · Shir Ashury-Tahan, Benjamin Sznajder, Leshem Choshen, Liat Ein-Dor, Eyal Shnarch, Ariel Gera

DiffUse reduces the required number of preference annotations, saving valuable time and resources during evaluation.

Model Selection · Text Generation

Unitxt: Flexible, Shareable and Reusable Data Preparation and Evaluation for Generative AI

1 code implementation · 25 Jan 2024 · Elron Bandel, Yotam Perlitz, Elad Venezian, Roni Friedman-Melamed, Ofir Arviv, Matan Orbach, Shachar Don-Yehyia, Dafna Sheinwald, Ariel Gera, Leshem Choshen, Michal Shmueli-Scheuer, Yoav Katz

In the dynamic landscape of generative NLP, traditional text processing pipelines limit research flexibility and reproducibility, as they are tailored to specific dataset, task, and model combinations.

Efficient Benchmarking of Language Models

no code implementations · 22 Aug 2023 · Yotam Perlitz, Elron Bandel, Ariel Gera, Ofir Arviv, Liat Ein-Dor, Eyal Shnarch, Noam Slonim, Michal Shmueli-Scheuer, Leshem Choshen

Based on our findings, we outline a set of concrete recommendations for more efficient benchmark design and utilization practices, leading to dramatic cost savings with minimal loss of benchmark reliability, often reducing computation by x100 or more.

Benchmarking
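A core idea behind efficient benchmarking is that a small random subset of evaluation examples often estimates the full benchmark score closely. A minimal sketch of that estimate, under the simplifying assumption of per-example scores that are simply averaged (the function name is illustrative, not the authors' code):

```python
import random

def subsample_estimate(example_scores, k, seed=0):
    """Estimate a benchmark's aggregate score from a random subset
    of k examples instead of the full evaluation set."""
    rng = random.Random(seed)
    subset = rng.sample(example_scores, k)
    return sum(subset) / k
```

For binary per-example scores, the standard error of such an estimate shrinks as 1/sqrt(k), which is why modest subsets can stand in for the full set.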

Active Learning for Natural Language Generation

no code implementations · 24 May 2023 · Yotam Perlitz, Ariel Gera, Michal Shmueli-Scheuer, Dafna Sheinwald, Noam Slonim, Liat Ein-Dor

In this paper, we present a first systematic study of active learning for NLG, considering a diverse set of tasks and multiple leading selection strategies, and harnessing a strong instruction-tuned model.

Active Learning · text-classification +2

The Benefits of Bad Advice: Autocontrastive Decoding across Model Layers

1 code implementation · 2 May 2023 · Ariel Gera, Roni Friedman, Ofir Arviv, Chulaka Gunasekara, Benjamin Sznajder, Noam Slonim, Eyal Shnarch

Applying language models to natural language processing tasks typically relies on the representations in the final model layer, as intermediate hidden layer representations are presumed to be less informative.

Language Modelling · Text Generation
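The general idea of contrasting a model's final-layer predictions against an intermediate ("amateur") layer can be sketched in a few lines. This is a simplified illustration of layer-contrastive token scoring, assuming raw logits from two layers; it is not the authors' implementation, and `alpha` is a hypothetical weighting parameter.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def contrastive_scores(final_logits, mid_logits, alpha=1.0):
    """Score tokens by how much the final layer prefers them
    relative to an intermediate layer's predictions."""
    p_final = softmax(final_logits)
    p_mid = softmax(mid_logits)
    return [math.log(pf) - alpha * math.log(pm)
            for pf, pm in zip(p_final, p_mid)]
```

Tokens that both layers rate highly get penalized relative to tokens only the final layer favors, which is the "benefit of bad advice" the title alludes to.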

Zero-Shot Text Classification with Self-Training

1 code implementation · 31 Oct 2022 · Ariel Gera, Alon Halfon, Eyal Shnarch, Yotam Perlitz, Liat Ein-Dor, Noam Slonim

Recent advances in large pretrained language models have increased attention to zero-shot text classification.

Natural Language Inference · text-classification +2
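The self-training step at the heart of this line of work is: let the zero-shot model label unlabeled texts, keep only its high-confidence predictions, and fine-tune on those pseudo-labels. A minimal sketch of the filtering step, with an illustrative `zero_shot_predict` callback and `threshold` value (not the authors' code):

```python
def pseudo_label(texts, zero_shot_predict, threshold=0.9):
    """Self-training step: keep only predictions that the zero-shot
    model makes with high confidence, to serve as pseudo-labels
    for a subsequent fine-tuning round."""
    train_set = []
    for text in texts:
        label, confidence = zero_shot_predict(text)
        if confidence >= threshold:
            train_set.append((text, label))
    return train_set
```

The threshold trades off pseudo-label quantity against quality; too low and noisy labels hurt, too high and little training signal remains.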

Cluster & Tune: Enhance BERT Performance in Low Resource Text Classification

no code implementations · 1 Jan 2021 · Eyal Shnarch, Ariel Gera, Alon Halfon, Lena Dankin, Leshem Choshen, Ranit Aharonov, Noam Slonim

In such low-resource scenarios, we suggest performing an unsupervised classification task prior to fine-tuning on the target task.

Clustering · General Classification +2

Financial Event Extraction Using Wikipedia-Based Weak Supervision

no code implementations · WS 2019 · Liat Ein-Dor, Ariel Gera, Orith Toledo-Ronen, Alon Halfon, Benjamin Sznajder, Lena Dankin, Yonatan Bilu, Yoav Katz, Noam Slonim

Extraction of financial and economic events from text has previously been done mostly using rule-based methods, with more recent works employing machine learning techniques.

BIG-bench Machine Learning · Event Extraction

Argument Invention from First Principles

no code implementations · ACL 2019 · Yonatan Bilu, Ariel Gera, Daniel Hershcovich, Benjamin Sznajder, Dan Lahav, Guy Moshkowich, Anael Malet, Assaf Gavron, Noam Slonim

In this work we aim to explicitly define a taxonomy of such principled recurring arguments, and, given a controversial topic, to automatically identify which of these arguments are relevant to the topic.

Controversy in Context

no code implementations · 20 Aug 2019 · Benjamin Sznajder, Ariel Gera, Yonatan Bilu, Dafna Sheinwald, Ella Rabinovich, Ranit Aharonov, David Konopnicki, Noam Slonim

With the growing interest in social applications of Natural Language Processing and Computational Argumentation, a natural question is how controversial a given concept is.
