Here, we present a large-scale empirical study on active learning (AL) techniques for BERT-based classification, covering a diverse set of AL strategies and datasets.
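One widely used family of AL strategies is uncertainty sampling, where each round queries labels for the pool examples the current model is least confident about. The sketch below illustrates the loop on a toy 1-D threshold classifier; it is an illustrative assumption, not the paper's BERT setup, and all data, the `fit_threshold` rule, and the oracle boundary are invented for the example.

```python
# Minimal sketch of pool-based active learning with uncertainty sampling.
# The classifier is a toy 1-D threshold model (NOT BERT); data and the
# decision boundary at 5.0 are illustrative assumptions.
import math

def predict_proba(threshold, x):
    """P(label=1 | x) under a logistic curve centered at the threshold."""
    return 1.0 / (1.0 + math.exp(-(x - threshold)))

def fit_threshold(labeled):
    """Toy training rule: midpoint between the two class means."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0

def uncertainty_sampling(labeled, pool, rounds):
    """Each round: retrain, then query the pool point closest to P=0.5."""
    labeled, pool = list(labeled), list(pool)
    for _ in range(rounds):
        t = fit_threshold(labeled)
        x = min(pool, key=lambda x: abs(predict_proba(t, x) - 0.5))
        pool.remove(x)
        labeled.append((x, 1 if x >= 5.0 else 0))  # simulated human oracle
    return fit_threshold(labeled), labeled

seed = [(1.0, 0), (9.0, 1)]                 # tiny seed set
pool = [2.0, 3.0, 4.5, 5.5, 7.0, 8.0]       # unlabeled pool
threshold, labeled = uncertainty_sampling(seed, pool, rounds=4)
```

The loop concentrates queries near the decision boundary, which is exactly why uncertainty-based strategies can be label-efficient compared with random sampling.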
Such a model, finetuned on some source dataset, may provide a better starting point for a new finetuning process on a desired target dataset.
1 code implementation • 2 Aug 2022 • Eyal Shnarch, Alon Halfon, Ariel Gera, Marina Danilevsky, Yannis Katsis, Leshem Choshen, Martin Santillan Cooper, Dina Epelboim, Zheng Zhang, Dakuo Wang, Lucy Yip, Liat Ein-Dor, Lena Dankin, Ilya Shnayderman, Ranit Aharonov, Yunyao Li, Naftali Liberman, Philip Levin Slesarev, Gwilym Newton, Shila Ofek-Koifman, Noam Slonim, Yoav Katz
Text classification can be useful in many real-world scenarios, saving end users considerable time.
We use this framework to report baseline intent discovery results over VIRADialogs, which highlight the difficulty of this task.
Public trust in medical information is crucial for the successful application of public health policies such as vaccine uptake.
We describe the 2021 Key Point Analysis (KPA-2021) shared task, which we organized as part of the 8th Workshop on Argument Mining (ArgMining 2021) at EMNLP 2021.
Engaging in a live debate requires a diverse set of skills, and Project Debater has been developed accordingly as a collection of components, each designed to perform a specific subtask.
Current targeted sentiment analysis (TSA) evaluation in a cross-domain setup is restricted to the small set of review domains available in existing datasets.
Extraction of financial and economic events from text has previously relied mostly on rule-based methods, with more recent work employing machine learning techniques.
Wikification of large corpora is beneficial for various NLP applications.
When debating a controversial topic, it is often desirable to expand the boundaries of discussion.
We also present a spellchecker created for this task, which outperforms standard spellcheckers when evaluated on it.
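For intuition, a classic baseline against which such spellcheckers are compared is a frequency-guided corrector that generates all candidates within one edit of the input and picks the most frequent known word (Norvig-style). The sketch below is that generic baseline, not the spellchecker described above; the vocabulary is an invented example.

```python
# Minimal frequency-guided spellchecker sketch (Norvig-style edit-distance
# candidates). Illustrative baseline only, not the paper's spellchecker.
from collections import Counter

def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings one edit (delete/transpose/replace/insert) from word."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in alphabet]
    inserts = [l + c + r for l, r in splits for c in alphabet]
    return set(deletes + transposes + replaces + inserts)

def correct(word, vocab):
    """Most frequent in-vocabulary candidate within one edit, else word."""
    if word in vocab:
        return word
    candidates = [w for w in edits1(word) if w in vocab]
    return max(candidates, key=vocab.get) if candidates else word

# Toy corpus-frequency vocabulary (illustrative assumption).
vocab = Counter("the quick brown fox jumps over the lazy dog the".split())
print(correct("teh", vocab))  # prints "the"
```

Real spellcheckers layer language-model context and larger edit distances on top of this candidate-generation idea, which is where task-specific models can pull ahead of standard tools.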