Search Results for author: Katerina Margatina

Found 12 papers, 7 papers with code

Understanding the Role of Input Token Characters in Language Models: How Does Information Loss Affect Performance?

no code implementations • 26 Oct 2023 • Ahmed Alajrami, Katerina Margatina, Nikolaos Aletras

Understanding how and what pre-trained language models (PLMs) learn about language is an open challenge in natural language processing.

Active Learning Principles for In-Context Learning with Large Language Models

no code implementations • 23 May 2023 • Katerina Margatina, Timo Schick, Nikolaos Aletras, Jane Dwivedi-Yu

The remarkable advancements in large language models (LLMs) have significantly enhanced the performance in few-shot learning settings.

Active Learning • Few-Shot Learning • +1

On the Limitations of Simulating Active Learning

no code implementations • 21 May 2023 • Katerina Margatina, Nikolaos Aletras

Active learning (AL) is a human-and-model-in-the-loop paradigm that iteratively selects informative unlabeled data for human annotation, aiming to improve over random sampling. (A minimal loop of this kind is sketched after this entry.)

Active Learning • Fairness
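
The abstract above describes the pool-based active learning loop. Below is a minimal illustrative sketch of such a loop with uncertainty sampling; the dataset, model, seed-set size, and acquisition budget are assumptions for illustration, not details from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy pool-based active learning loop (illustrative assumptions throughout).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled = list(range(10))                              # small seed set of labeled indices
pool = [i for i in range(len(X)) if i not in labeled]  # unlabeled pool

model = LogisticRegression(max_iter=1000)
for _ in range(5):                                     # 5 acquisition rounds
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    # Query the pool point the current model is least confident about.
    query = pool[int(np.argmin(probs.max(axis=1)))]
    labeled.append(query)                              # a human annotator would supply y[query] here
    pool.remove(query)
```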

Investigating Multi-source Active Learning for Natural Language Inference

1 code implementation • 14 Feb 2023 • Ard Snijders, Douwe Kiela, Katerina Margatina

We show that four popular active learning schemes fail to outperform random selection when applied to unlabelled pools comprised of multiple data sources on the task of natural language inference.

Active Learning • Natural Language Inference

Active Learning by Acquiring Contrastive Examples

1 code implementation • EMNLP 2021 • Katerina Margatina, Giorgos Vernikos, Loïc Barrault, Nikolaos Aletras

Common acquisition functions for active learning use either uncertainty or diversity sampling, aiming to select difficult and diverse data points from the pool of unlabeled data, respectively. (Generic versions of both are sketched after this entry.)

Active Learning • Natural Language Understanding
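
As a companion to the abstract above, here is a rough sketch of the two families of acquisition functions it mentions. The entropy-based uncertainty scorer and the k-means-based diversity scorer are generic stand-ins, not the contrastive acquisition method proposed in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def uncertainty_sampling(probs, k):
    """Select the k pool examples with the highest predictive entropy ('difficult' points)."""
    scores = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(scores)[-k:]

def diversity_sampling(embeddings, k, seed=0):
    """Select k pool examples spread across the embedding space ('diverse' points)."""
    km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(embeddings)
    # Take the example nearest to each cluster centroid.
    return np.array([
        int(np.argmin(np.linalg.norm(embeddings - c, axis=1)))
        for c in km.cluster_centers_
    ])
```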

Frustratingly Simple Pretraining Alternatives to Masked Language Modeling

1 code implementation • EMNLP 2021 • Atsuki Yamaguchi, George Chrysostomou, Katerina Margatina, Nikolaos Aletras

Masked language modeling (MLM), a self-supervised pretraining objective, is widely used in natural language processing for learning text representations. (A toy masking step is sketched after this entry.)

Language Modelling • Masked Language Modeling • +1
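
The MLM objective mentioned above can be illustrated with a single toy masking step. The token ids, [MASK] id, and 15% masking rate below are common conventions used purely for illustration, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = np.array([101, 2023, 2003, 1037, 7099, 6251, 102])  # hypothetical token ids
MASK_ID = 103                                                # hypothetical [MASK] id

mask = rng.random(tokens.shape) < 0.15     # randomly select ~15% of positions
labels = np.where(mask, tokens, -100)      # loss is computed only on masked positions
inputs = np.where(mask, MASK_ID, tokens)   # replace selected tokens with [MASK]
# A PLM is trained to predict `labels` from `inputs`, i.e. cross-entropy on the masked slots.
```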

Attention-based Conditioning Methods for External Knowledge Integration

1 code implementation • ACL 2019 • Katerina Margatina, Christos Baziotis, Alexandros Potamianos

This form of conditioning on the attention distribution enforces the contribution of the most salient words for the task at hand.
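
To make the snippet above more concrete, here is a hypothetical sketch of conditioning an attention distribution on external (e.g., lexicon) features by gating the attention scores before normalization. The scores, gate values, and toy vectors are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

raw_scores = np.array([0.2, 1.1, -0.3, 0.7])    # attention scores for 4 words (toy values)
lexicon_gate = np.array([1.0, 2.0, 1.0, 1.5])   # external salience per word (assumed)

attention = softmax(raw_scores * lexicon_gate)  # gated scores emphasize salient words
word_vectors = np.random.default_rng(0).normal(size=(4, 8))  # toy word representations
context = attention @ word_vectors              # context vector weighted by the conditioned attention
```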
