1 code implementation • 24 Apr 2024 • Hannah Rose Kirk, Alexander Whitefield, Paul Röttger, Andrew Bean, Katerina Margatina, Juan Ciro, Rafael Mosquera, Max Bartolo, Adina Williams, He He, Bertie Vidgen, Scott A. Hale
Human feedback plays a central role in the alignment of Large Language Models (LLMs).
no code implementations • 26 Oct 2023 • Ahmed Alajrami, Katerina Margatina, Nikolaos Aletras
Understanding how and what pre-trained language models (PLMs) learn about language is an open challenge in natural language processing.
no code implementations • 23 May 2023 • Katerina Margatina, Timo Schick, Nikolaos Aletras, Jane Dwivedi-Yu
The remarkable advancements in large language models (LLMs) have significantly improved performance in few-shot learning settings.
no code implementations • 21 May 2023 • Katerina Margatina, Nikolaos Aletras
Active learning (AL) is a human-and-model-in-the-loop paradigm that iteratively selects informative unlabeled data for human annotation, aiming to improve over random sampling.
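The pool-based loop described above can be sketched in a few lines. This is a purely illustrative toy, not the paper's method: `ToyModel`, the threshold-based classifier, and the `oracle` callback are all hypothetical stand-ins for a real model and human annotator.

```python
class ToyModel:
    """Stand-in classifier: 'trains' by averaging the labeled features."""
    def __init__(self):
        self.threshold = 0.5

    def fit(self, examples):
        if examples:
            self.threshold = sum(x for x, _ in examples) / len(examples)

    def predict_proba(self, x):
        # Confidence grows with distance from the learned threshold.
        return min(1.0, max(0.0, 0.5 + (x - self.threshold)))


def active_learning_loop(pool, oracle, rounds=3, k=2):
    """Iteratively select the k most uncertain pool items for labeling."""
    labeled, model = [], ToyModel()
    for _ in range(rounds):
        model.fit(labeled)
        # Uncertainty sampling: probability closest to 0.5 is most uncertain.
        pool.sort(key=lambda x: abs(model.predict_proba(x) - 0.5))
        batch, pool = pool[:k], pool[k:]
        labeled.extend((x, oracle(x)) for x in batch)  # human annotation step
    return labeled


# Toy run: 1-D features in [0, 0.9], labels from a hidden rule the "human" knows.
examples = active_learning_loop(pool=[i / 10 for i in range(10)],
                                oracle=lambda x: int(x > 0.5))
```

Replacing the `pool.sort` line with a random shuffle gives the random-sampling baseline the paper compares against.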
no code implementations • 23 Feb 2023 • Katerina Margatina, Shuai Wang, Yogarshi Vyas, Neha Anna John, Yassine Benajiba, Miguel Ballesteros
Temporal concept drift refers to the problem of data changing over time.
1 code implementation • 14 Feb 2023 • Ard Snijders, Douwe Kiela, Katerina Margatina
We show that four popular active learning schemes fail to outperform random selection when applied to unlabelled pools comprised of multiple data sources on the task of natural language inference.
no code implementations • ACL 2022 • Daniel Hershcovich, Stella Frank, Heather Lent, Miryam de Lhoneux, Mostafa Abdou, Stephanie Brandl, Emanuele Bugliarello, Laura Cabello Piqueras, Ilias Chalkidis, Ruixiang Cui, Constanza Fierro, Katerina Margatina, Phillip Rust, Anders Søgaard
Various efforts in the Natural Language Processing (NLP) community have been made to accommodate linguistic diversity and serve speakers of many different languages.
1 code implementation • EMNLP 2021 • Katerina Margatina, Giorgos Vernikos, Loïc Barrault, Nikolaos Aletras
Common acquisition functions for active learning use either uncertainty or diversity sampling, aiming to select difficult and diverse data points from the pool of unlabeled data, respectively.
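The two families of acquisition functions mentioned above can be sketched as follows; these are generic textbook versions (entropy-based uncertainty and greedy farthest-point diversity), not the specific functions studied in the paper, and all names are illustrative.

```python
import math

def uncertainty_sampling(probs, k):
    """Pick the k examples with the highest predictive entropy."""
    entropy = lambda p: -sum(q * math.log(q) for q in p if q > 0)
    ranked = sorted(range(len(probs)),
                    key=lambda i: entropy(probs[i]), reverse=True)
    return ranked[:k]

def diversity_sampling(embs, k):
    """Greedy farthest-point selection over embeddings: spread out the picks."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    chosen = [0]  # seed with the first point
    while len(chosen) < k:
        # Next point maximizes the distance to its nearest chosen neighbor.
        nxt = max((i for i in range(len(embs)) if i not in chosen),
                  key=lambda i: min(dist(embs[i], embs[j]) for j in chosen))
        chosen.append(nxt)
    return chosen
```

Uncertainty sampling favors examples the model finds difficult; diversity sampling favors coverage of the pool, which is why the two are often combined.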
1 code implementation • EMNLP 2021 • Atsuki Yamaguchi, George Chrysostomou, Katerina Margatina, Nikolaos Aletras
Masked language modeling (MLM), a self-supervised pretraining objective, is widely used in natural language processing for learning text representations.
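The MLM objective replaces a fraction of input tokens with a mask symbol and trains the model to reconstruct them. A minimal sketch of the masking step, assuming a simple whitespace tokenization and a fixed masking ratio (the 15% default is conventional; the function name is illustrative):

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", ratio=0.15, seed=0):
    """Replace a fraction of tokens; the model must predict the originals."""
    rng = random.Random(seed)
    n_mask = max(1, round(len(tokens) * ratio))
    targets = set(rng.sample(range(len(tokens)), n_mask))
    masked = [mask_token if i in targets else t for i, t in enumerate(tokens)]
    gold = {i: tokens[i] for i in targets}  # positions the loss is computed on
    return masked, gold

masked, gold = mask_tokens("the cat sat on the mat".split())
```

The training loss is computed only at the masked positions recorded in `gold`, which is what makes the objective self-supervised: the labels come from the input itself.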
1 code implementation • ACL 2022 • Katerina Margatina, Loïc Barrault, Nikolaos Aletras
Recent Active Learning (AL) approaches in Natural Language Processing (NLP) have proposed using off-the-shelf pretrained language models (LMs).
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Giorgos Vernikos, Katerina Margatina, Alexandra Chronopoulou, Ion Androutsopoulos
We introduce a new regularization technique, AFTER: domain Adversarial Fine-Tuning as an Effective Regularizer.
1 code implementation • ACL 2019 • Katerina Margatina, Christos Baziotis, Alexandros Potamianos
Conditioning on the attention distribution in this way enforces the contribution of the most salient words for the task at hand.