Search Results for author: Antonia Karamolegkou

Found 8 papers, 4 papers with code

Argumentation Mining in Scientific Literature for Sustainable Development

1 code implementation • EMNLP (ArgMining) 2021 • Aris Fergadis, Dimitris Pappas, Antonia Karamolegkou, Haris Papageorgiou

We also present a set of strong, BERT-based neural baselines achieving an F1-score of 70.0 for Claim and 62.4 for Evidence identification, evaluated with 10-fold cross-validation.
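
The entry above describes BERT-based sentence classification for Claim/Evidence identification evaluated with 10-fold cross-validation. The sketch below is a minimal illustration of that kind of setup, not the authors' released code; the label set, checkpoint, and hyperparameters are assumptions for illustration.

```python
# Minimal sketch: fine-tune BERT for Claim/Evidence sentence classification
# with 10-fold cross-validation. Label set and hyperparameters are assumed.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL = "bert-base-uncased"
LABELS = ["Other", "Claim", "Evidence"]  # assumed label inventory

def run_cv(sentences, labels, n_splits=10):
    """sentences: list of str; labels: list of int ids into LABELS."""
    tok = AutoTokenizer.from_pretrained(MODEL)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    fold_f1 = []

    def encode(batch):
        return tok(batch["text"], truncation=True, padding="max_length", max_length=128)

    for train_idx, test_idx in skf.split(sentences, labels):
        train_ds = Dataset.from_dict(
            {"text": [sentences[i] for i in train_idx],
             "label": [labels[i] for i in train_idx]}).map(encode, batched=True)
        test_ds = Dataset.from_dict(
            {"text": [sentences[i] for i in test_idx],
             "label": [labels[i] for i in test_idx]}).map(encode, batched=True)
        model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=len(LABELS))
        trainer = Trainer(
            model=model,
            args=TrainingArguments(output_dir="out", num_train_epochs=3,
                                   per_device_train_batch_size=16, report_to="none"),
            train_dataset=train_ds)
        trainer.train()
        preds = trainer.predict(test_ds).predictions.argmax(axis=-1)
        fold_f1.append(f1_score(test_ds["label"], preds, average=None))

    return np.mean(fold_f1, axis=0)  # per-class F1 averaged over the 10 folds
```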

Exploring Visual Culture Awareness in GPT-4V: A Comprehensive Probing

no code implementations • 8 Feb 2024 • Yong Cao, Wenyan Li, Jiaang Li, Yifei Yuan, Antonia Karamolegkou, Daniel Hershcovich

Pretrained large Vision-Language models have drawn considerable interest in recent years due to their remarkable performance.

Image Captioning • TAG

Cultural Adaptation of Recipes

no code implementations • 26 Oct 2023 • Yong Cao, Yova Kementchedjhieva, Ruixiang Cui, Antonia Karamolegkou, Li Zhou, Megan Dare, Lucia Donatelli, Daniel Hershcovich

We introduce a new task involving the translation and cultural adaptation of recipes between Chinese and English-speaking cuisines.

Information Retrieval • Machine Translation • +1

Copyright Violations and Large Language Models

1 code implementation • 20 Oct 2023 • Antonia Karamolegkou, Jiaang Li, Li Zhou, Anders Søgaard

Language models may memorize more than just facts, including entire chunks of text seen during training.

Memorization
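
This paper asks whether language models reproduce long verbatim spans of works seen during training. Below is a minimal sketch of one way to probe this, assuming a causal LM loaded via Hugging Face and using the longest contiguous character match as the overlap measure; the prompt length and metric details are illustrative, not necessarily the paper's exact procedure.

```python
# Minimal sketch: prompt an LM with the opening of a passage and measure how
# much of the reference continuation it reproduces verbatim.
from difflib import SequenceMatcher
from transformers import AutoTokenizer, AutoModelForCausalLM

def memorized_overlap(model_name, passage, prompt_words=50, gen_tokens=200):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    words = passage.split()
    prompt = " ".join(words[:prompt_words])          # shown to the model
    reference = " ".join(words[prompt_words:])       # held-out ground truth

    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=gen_tokens, do_sample=False)
    continuation = tok.decode(out[0][inputs["input_ids"].shape[1]:],
                              skip_special_tokens=True)

    # Longest common contiguous block (in characters) between the generation
    # and the true continuation; a long match suggests verbatim memorization.
    match = SequenceMatcher(None, continuation, reference).find_longest_match(
        0, len(continuation), 0, len(reference))
    return match.size, continuation
```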

Cultural Compass: Predicting Transfer Learning Success in Offensive Language Detection with Cultural Features

1 code implementation • 10 Oct 2023 • Li Zhou, Antonia Karamolegkou, Wenyu Chen, Daniel Hershcovich

The increasing ubiquity of language technology necessitates a shift towards considering cultural diversity in the machine learning realm, particularly for subjective tasks that rely heavily on cultural nuances, such as Offensive Language Detection (OLD).

Transfer Learning

Mapping Brains with Language Models: A Survey

no code implementations • 8 Jun 2023 • Antonia Karamolegkou, Mostafa Abdou, Anders Søgaard

Over the years, many researchers have seemingly made the same observation: Brain and language model activations exhibit some structural similarities, enabling linear partial mappings between features extracted from neural recordings and computational language models.

Language Modelling
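
The survey snippet refers to linear partial mappings between language model features and neural recordings. A minimal sketch of such a mapping follows, assuming word-aligned LM embeddings X and neural responses Y have already been extracted as NumPy arrays; the ridge penalty grid, fold count, and correlation scoring are illustrative assumptions rather than the survey's prescription.

```python
# Minimal sketch: fit a ridge-regression mapping from LM features to neural
# responses and score it by held-out prediction correlation.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def linear_brain_mapping_score(X, Y, n_splits=5):
    """X: (n_words, d_model) LM features; Y: (n_words, n_channels) recordings."""
    scores = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X[train], Y[train])
        pred = model.predict(X[test])
        # Pearson correlation between predicted and observed response,
        # computed per output dimension, then averaged.
        r = [np.corrcoef(pred[:, j], Y[test][:, j])[0, 1] for j in range(Y.shape[1])]
        scores.append(np.nanmean(r))
    return float(np.mean(scores))
```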

Structural Similarities Between Language Models and Neural Response Measurements

1 code implementation • 2 Jun 2023 • Jiaang Li, Antonia Karamolegkou, Yova Kementchedjhieva, Mostafa Abdou, Sune Lehmann, Anders Søgaard

Human language processing is also opaque, but neural response measurements can provide (noisy) recordings of activation during listening or reading, from which we can extract similar representations of words and phrases.

Brain Decoding
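
A complementary way to quantify "structural similarity" without fitting a mapping is to compare the pairwise similarity structure of the two representation spaces directly. The sketch below assumes both spaces are given as word-by-dimension matrices over the same word list; the cosine distance and Spearman correlation choices are illustrative, not necessarily the paper's exact method.

```python
# Minimal sketch: representational similarity between LM embeddings and
# brain-derived word representations over a shared vocabulary.
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def representational_similarity(lm_vectors, brain_vectors):
    """lm_vectors, brain_vectors: (n_words, dim) arrays, rows aligned by word."""
    lm_rdm = pdist(lm_vectors, metric="cosine")       # pairwise distances, LM space
    brain_rdm = pdist(brain_vectors, metric="cosine")  # pairwise distances, brain space
    rho, _ = spearmanr(lm_rdm, brain_rdm)
    return rho  # higher rho = more similar representational geometry
```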
