Search Results for author: Eyal Shnarch

Found 19 papers, 6 papers with code

Active Learning for BERT: An Empirical Study

1 code implementation · EMNLP 2020 · Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, Noam Slonim

Here, we present a large-scale empirical study on active learning techniques for BERT-based classification, addressing a diverse set of AL strategies and datasets.

Active Learning · Binary text classification · +3
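One of the simplest AL strategies compared in studies like this is least-confidence sampling: query labels for the pool examples the current model is least sure about. The sketch below is illustrative only, not the paper's exact setup:

```python
import numpy as np

def least_confidence_sampling(pool_probs: np.ndarray, batch_size: int) -> np.ndarray:
    """Pick the `batch_size` unlabeled examples whose top predicted-class
    probability is lowest, i.e. where the model is least confident."""
    confidence = pool_probs.max(axis=1)          # top-class probability per example
    return np.argsort(confidence)[:batch_size]   # most uncertain first

# Toy pool of four examples scored by a binary classifier:
pool_probs = np.array([
    [0.95, 0.05],   # confident
    [0.55, 0.45],   # uncertain
    [0.51, 0.49],   # most uncertain
    [0.80, 0.20],
])
picked = least_confidence_sampling(pool_probs, batch_size=2)
print(picked)  # most uncertain indices first: [2 1]
```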

Label-Efficient Model Selection for Text Generation

no code implementations · 12 Feb 2024 · Shir Ashury-Tahan, Benjamin Sznajder, Leshem Choshen, Liat Ein-Dor, Eyal Shnarch, Ariel Gera

DiffUse reduces the number of preference annotations required, saving valuable time and resources during evaluation.

Model Selection · Text Generation

Genie: Achieving Human Parity in Content-Grounded Datasets Generation

no code implementations · 25 Jan 2024 · Asaf Yehudai, Boaz Carmeli, Yosi Mass, Ofir Arviv, Nathaniel Mills, Assaf Toledo, Eyal Shnarch, Leshem Choshen

Furthermore, we compare models trained on our data with models trained on human-written data: ELI5 and ASQA for LFQA, and CNN-DailyMail for Summarization.

Long Form Question Answering

Efficient Benchmarking of Language Models

no code implementations · 22 Aug 2023 · Yotam Perlitz, Elron Bandel, Ariel Gera, Ofir Arviv, Liat Ein-Dor, Eyal Shnarch, Noam Slonim, Michal Shmueli-Scheuer, Leshem Choshen

The increasing versatility of language models (LMs) has given rise to a new class of benchmarks that comprehensively assess a broad range of capabilities.


The Benefits of Bad Advice: Autocontrastive Decoding across Model Layers

1 code implementation · 2 May 2023 · Ariel Gera, Roni Friedman, Ofir Arviv, Chulaka Gunasekara, Benjamin Sznajder, Noam Slonim, Eyal Shnarch

Applying language models to natural language processing tasks typically relies on the representations in the final model layer, as intermediate hidden layer representations are presumed to be less informative.

Language Modelling · Text Generation
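A minimal numerical sketch of the contrastive idea behind such decoding methods: score next tokens by how much the final layer's log-probabilities exceed those of an intermediate ("premature") layer head. The log-probability difference and the `alpha` weight here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def log_softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def contrastive_scores(final_logits, early_logits, alpha=1.0):
    """Prefer tokens the final layer favors *beyond* what an intermediate
    layer already predicts (alpha is an illustrative contrast weight)."""
    return log_softmax(np.asarray(final_logits, float)) \
        - alpha * log_softmax(np.asarray(early_logits, float))

final = [2.0, 1.5]   # the final layer slightly prefers token 0
early = [3.0, 0.0]   # the intermediate layer strongly prefers token 0
scores = contrastive_scores(final, early)
print(scores.argmax())  # contrasting flips the choice to token 1
```

Intuitively, tokens the early layer already rates highly gain little from the contrast, amplifying whatever the later layers add.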

Zero-Shot Text Classification with Self-Training

1 code implementation · 31 Oct 2022 · Ariel Gera, Alon Halfon, Eyal Shnarch, Yotam Perlitz, Liat Ein-Dor, Noam Slonim

Recent advances in large pretrained language models have increased attention to zero-shot text classification.

Natural Language Inference · text-classification · +2
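Self-training on top of a zero-shot classifier typically means pseudo-labeling unlabeled texts with the zero-shot model and fine-tuning only on its confident predictions. A minimal sketch of the confidence-filtering step (the threshold value is an illustrative assumption):

```python
import numpy as np

def confident_pseudo_labels(probs: np.ndarray, threshold: float = 0.9):
    """Keep only examples the zero-shot model labels with high confidence;
    these become weak training data for a self-training fine-tuning pass."""
    confidence = probs.max(axis=1)
    keep = np.flatnonzero(confidence >= threshold)
    return keep, probs.argmax(axis=1)[keep]

zero_shot_probs = np.array([
    [0.97, 0.03],   # confidently class 0 -> kept
    [0.60, 0.40],   # ambiguous          -> dropped
    [0.08, 0.92],   # confidently class 1 -> kept
])
idx, labels = confident_pseudo_labels(zero_shot_probs)
print(idx, labels)  # [0 2] [0 1]
```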

Heuristic-based Inter-training to Improve Few-shot Multi-perspective Dialog Summarization

no code implementations · 29 Mar 2022 · Benjamin Sznajder, Chulaka Gunasekara, Guy Lev, Sachin Joshi, Eyal Shnarch, Noam Slonim

We observe that different heuristics are associated with summaries of different perspectives, and exploit these heuristics to create weakly labeled data for intermediate training of the models before fine-tuning on the scarce human-annotated summaries.

Decision Making
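As a toy illustration of heuristic weak labeling (the cue phrases and dialog schema below are hypothetical, not the paper's heuristics): agent turns that promise a follow-up action can serve as a weak target for an "action items" summary perspective.

```python
def weak_action_item_labels(turns):
    """Hypothetical heuristic: agent turns containing a commitment cue
    become the weakly labeled 'action items' summary for inter-training."""
    cues = ("i will", "we will", "i'll")
    return [t["text"] for t in turns
            if t["speaker"] == "agent"
            and any(cue in t["text"].lower() for cue in cues)]

dialog = [
    {"speaker": "customer", "text": "My router keeps disconnecting."},
    {"speaker": "agent", "text": "I will send a replacement unit today."},
    {"speaker": "agent", "text": "Is there anything else?"},
]
print(weak_action_item_labels(dialog))  # the single commitment turn
```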

Cluster & Tune: Enhance BERT Performance in Low Resource Text Classification

no code implementations · 1 Jan 2021 · Eyal Shnarch, Ariel Gera, Alon Halfon, Lena Dankin, Leshem Choshen, Ranit Aharonov, Noam Slonim

In such low-resource scenarios, we suggest performing an unsupervised classification task prior to fine-tuning on the target task.

Clustering · General Classification · +2
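The intermediate task can be sketched as: cluster the unlabeled training texts (e.g. their embeddings) and treat the cluster ids as pseudo-labels for a classification warm-up before fine-tuning on the scarce target labels. A self-contained toy k-means illustrating the pseudo-labeling step (the embeddings and k below are made up):

```python
import numpy as np

def cluster_pseudo_labels(X: np.ndarray, k: int, n_iter: int = 20, seed: int = 0):
    """Toy k-means over example embeddings; the returned cluster ids serve
    as pseudo-labels for intermediate classification training."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two well-separated toy "embedding" blobs:
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = cluster_pseudo_labels(X, k=2)
print(labels)  # the two blobs get distinct pseudo-labels
```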

Unsupervised Expressive Rules Provide Explainability and Assist Human Experts Grasping New Domains

no code implementations · Findings of the Association for Computational Linguistics 2020 · Eyal Shnarch, Leshem Choshen, Guy Moshkowich, Noam Slonim, Ranit Aharonov

Approaching new data can be quite daunting: you do not know how your categories of interest are realized in it; commonly, there is no labeled data at hand; and the performance of domain adaptation methods is often unsatisfactory.

Domain Adaptation

Are You Convinced? Choosing the More Convincing Evidence with a Siamese Network

no code implementations · ACL 2019 · Martin Gleize, Eyal Shnarch, Leshem Choshen, Lena Dankin, Guy Moshkowich, Ranit Aharonov, Noam Slonim

With the advancement of argument detection, we suggest paying more attention to the challenging task of identifying the more convincing arguments.
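In a Siamese setup, both candidate arguments are scored by the same shared network, and the higher-scoring one is predicted as more convincing. A toy sketch with a linear scorer standing in for the shared encoder (the weights and embeddings are hypothetical):

```python
import numpy as np

def more_convincing(embed_a, embed_b, w):
    """Siamese-style comparison: the *same* scorer (here a linear layer
    with hypothetical weights w) rates both arguments; return the winner."""
    score = lambda e: float(np.dot(w, e))
    return "a" if score(embed_a) > score(embed_b) else "b"

w = np.array([1.0, -0.5])          # hypothetical shared weights
arg_a = np.array([0.9, 0.1])       # toy argument embeddings
arg_b = np.array([0.2, 0.8])
print(more_convincing(arg_a, arg_b, w))  # a
```

Sharing the scorer across both inputs is what makes the comparison symmetric: swapping the arguments swaps the verdict.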

GRASP: Rich Patterns for Argumentation Mining

no code implementations · EMNLP 2017 · Eyal Shnarch, Ran Levy, Vikas Raykar, Noam Slonim

A human observer may notice the following underlying common structure, or pattern: [someone][argue/suggest/state][that][topic term][sentiment term].

Document Classification
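GRASP learns such rich patterns automatically; the regex below only hard-codes the single surface pattern quoted in the snippet, as an illustration of what a match looks like (the verb list and capitalized-subject assumption are simplifications):

```python
import re

CLAIM_VERBS = r"(?:argues?|argued|suggests?|suggested|states?|stated)"
# One instance of the [someone][argue/suggest/state][that]... structure:
pattern = re.compile(rf"\b(?P<someone>[A-Z][a-z]+)\s+(?P<verb>{CLAIM_VERBS})\s+that\b")

m = pattern.search("Smith argued that the new policy harms growth.")
print(m.group("someone"), m.group("verb"))  # Smith argued
```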
