no code implementations • 25 Feb 2025 • Miles Williams, George Chrysostomou, Vitor Jeronymo, Nikolaos Aletras
Compression techniques such as pruning and quantization enable more efficient deployment of language models (LMs), albeit at the cost of small drops in benchmark performance.
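A minimal sketch of unstructured magnitude pruning, one of the simplest pruning criteria: zero out the smallest-magnitude weights in a layer. The layer size and 50% sparsity level are illustrative, not taken from the paper.

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero the `sparsity` fraction of weights with smallest |w|."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

layer = torch.nn.Linear(512, 512)
with torch.no_grad():
    layer.weight.copy_(magnitude_prune(layer.weight, sparsity=0.5))
print(f"sparsity: {(layer.weight == 0).float().mean().item():.2f}")
```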
no code implementations • 22 Oct 2024 • Miles Williams, George Chrysostomou, Nikolaos Aletras
In a post-training setting, state-of-the-art quantization and pruning methods require calibration data, a small set of unlabeled examples.
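A hedged sketch of how such methods commonly use calibration data: pass a handful of unlabeled examples through a layer, record the per-feature activation norms, and weight each parameter's magnitude by them (a Wanda-style criterion; the random tensors below are stand-ins for real model activations).

```python
import torch

def calibration_scores(weight: torch.Tensor, calib_acts: torch.Tensor) -> torch.Tensor:
    # calib_acts: (num_tokens, in_features) activations collected from calibration examples
    act_norm = calib_acts.norm(p=2, dim=0)        # per-input-feature L2 norm
    return weight.abs() * act_norm.unsqueeze(0)   # importance score per weight

weight = torch.randn(256, 128)
calib_acts = torch.randn(1024, 128)   # stand-in for real calibration activations
scores = calibration_scores(weight, calib_acts)
# prune the lowest-scoring half of the weights
threshold = scores.flatten().kthvalue(scores.numel() // 2).values
pruned = weight * (scores > threshold)
```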
1 code implementation • 15 Nov 2023 • George Chrysostomou, Zhixue Zhao, Miles Williams, Nikolaos Aletras
Despite the remarkable performance of generative large language models (LLMs) on abstractive summarization, they face two significant challenges: their considerable size and tendency to hallucinate.
1 code implementation • 17 Oct 2022 • Zhixue Zhao, George Chrysostomou, Kalina Bontcheva, Nikolaos Aletras
Explanation faithfulness of model predictions in natural language processing is typically evaluated on held-out data from the same temporal distribution as the training data (i.e., synchronous settings).
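For context, a minimal sketch of one widely used faithfulness measure, comprehensiveness: mask the tokens an explanation ranks as most important and measure the drop in the model's predicted probability. The model and explanation scores below are toy stand-ins.

```python
import torch

def comprehensiveness(model, input_ids, importance, mask_id, top_k=5):
    full_prob = model(input_ids).softmax(-1)
    pred = full_prob.argmax(-1)
    top = importance.topk(top_k).indices          # most important token positions
    masked = input_ids.clone()
    masked[0, top] = mask_id                      # remove them from the input
    masked_prob = model(masked).softmax(-1)
    return (full_prob[0, pred] - masked_prob[0, pred]).item()

# toy stand-in: bag-of-embeddings classifier
embed = torch.nn.Embedding(1000, 32)
head = torch.nn.Linear(32, 2)
model = lambda ids: head(embed(ids).mean(1))

ids = torch.randint(5, 1000, (1, 20))
importance = torch.rand(20)                       # stand-in explanation scores
drop = comprehensiveness(model, ids, importance, mask_id=0)
```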
1 code implementation • ACL 2022 • George Chrysostomou, Nikolaos Aletras
Recent work in natural language processing has focused on developing approaches that extract faithful explanations, either by identifying the most important tokens in the input (i.e., post-hoc explanations) or by designing inherently faithful models that first select the most important tokens and then use them to predict the correct label (i.e., select-then-predict models).
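A toy sketch of the select-then-predict idea: a scorer keeps the top-k tokens and the classifier sees only those. The architecture is a stand-in, and hard top-k selection is not differentiable through the selector; real models train it with relaxations or regularizers.

```python
import torch
import torch.nn as nn

class SelectThenPredict(nn.Module):
    def __init__(self, vocab_size=1000, dim=64, num_classes=2, k=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.scorer = nn.Linear(dim, 1)          # token importance scores
        self.classifier = nn.Linear(dim, num_classes)
        self.k = k

    def forward(self, input_ids):
        h = self.embed(input_ids)                          # (batch, seq, dim)
        scores = self.scorer(h).squeeze(-1)                # (batch, seq)
        top = scores.topk(self.k, dim=-1).indices          # selected positions
        mask = torch.zeros_like(scores).scatter(-1, top, 1.0)
        pooled = (h * mask.unsqueeze(-1)).sum(1) / self.k  # predict from selection only
        return self.classifier(pooled)

logits = SelectThenPredict()(torch.randint(0, 1000, (2, 32)))
```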
1 code implementation • EMNLP 2021 • Atsuki Yamaguchi, George Chrysostomou, Katerina Margatina, Nikolaos Aletras
Masked language modeling (MLM), a self-supervised pretraining objective, is widely used in natural language processing for learning text representations.
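A minimal sketch of the standard MLM corruption step: mask roughly 15% of tokens and compute the loss only at masked positions. The mask id and the omission of the usual 80/10/10 replacement split are simplifications.

```python
import torch

def mlm_mask(input_ids, mask_id=103, mask_prob=0.15):
    # mask_id=103 is BERT's [MASK] token, used here only as an example
    labels = input_ids.clone()
    masked = torch.bernoulli(torch.full(input_ids.shape, mask_prob)).bool()
    labels[~masked] = -100            # ignore unmasked positions in the loss
    corrupted = input_ids.clone()
    corrupted[masked] = mask_id
    return corrupted, labels

ids = torch.randint(1000, 2000, (2, 16))
corrupted, labels = mlm_mask(ids)
```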
1 code implementation • EMNLP 2021 • George Chrysostomou, Nikolaos Aletras
In this paper, we hypothesize that salient information extracted a priori from the training data can complement the task-specific information learned by the model during fine-tuning on a downstream task.
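A loose sketch of the general idea, assuming TF-IDF as the a priori salience measure (an assumption for illustration, not necessarily the paper's choice): compute word salience from the training corpus and use it to scale token representations during fine-tuning.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

train_texts = ["the movie was great", "the plot was dull"]  # placeholder corpus
vec = TfidfVectorizer()
tfidf = vec.fit_transform(train_texts)
salience = dict(zip(vec.get_feature_names_out(),
                    np.asarray(tfidf.mean(axis=0)).ravel()))

def scale_tokens(tokens, embeddings):
    # boost representations of a-priori salient tokens before classification
    weights = np.array([1.0 + salience.get(t, 0.0) for t in tokens])
    return embeddings * weights[:, None]

emb = np.random.randn(4, 8)
scaled = scale_tokens("the movie was great".split(), emb)
```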
1 code implementation • ACL 2021 • George Chrysostomou, Nikolaos Aletras
In this paper, we seek to improve the faithfulness of attention-based explanations for text classification.
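A minimal sketch of an attention-based explanation: a single attention-pooling classifier whose attention weights double as token importance scores. The architecture is a toy stand-in for the models studied.

```python
import torch
import torch.nn as nn

class AttentionClassifier(nn.Module):
    def __init__(self, vocab_size=1000, dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.attn = nn.Linear(dim, 1)
        self.out = nn.Linear(dim, num_classes)

    def forward(self, input_ids):
        h = self.embed(input_ids)
        alpha = self.attn(h).squeeze(-1).softmax(-1)  # attention as explanation
        pooled = (alpha.unsqueeze(-1) * h).sum(1)
        return self.out(pooled), alpha                # logits + importance scores

logits, alpha = AttentionClassifier()(torch.randint(0, 1000, (1, 12)))
```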
1 code implementation • 16 Apr 2021 • George Chrysostomou, Nikolaos Aletras
Recent research on model interpretability in natural language processing makes extensive use of feature scoring methods to identify which parts of the input are most important for a model's prediction (i.e., an explanation or rationale).
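A hedged sketch of one common feature scoring method, input × gradient: score each token by the gradient of the predicted logit with respect to its embedding, multiplied elementwise by the embedding. The toy model below is a placeholder.

```python
import torch

def input_x_gradient(model, embed_layer, input_ids):
    embeds = embed_layer(input_ids).detach().requires_grad_(True)
    logits = model(embeds)                   # model must accept embeddings directly
    pred = logits.argmax(-1).item()          # assumes batch size 1
    logits[0, pred].backward()
    return (embeds.grad * embeds).sum(-1).abs()  # one score per input token

# toy stand-in model operating on embeddings
embed = torch.nn.Embedding(1000, 64)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(12 * 64, 2))
scores = input_x_gradient(model, embed, torch.randint(0, 1000, (1, 12)))
```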