1 code implementation • 16 Apr 2022 • Ania Wróblewska, Agnieszka Kaliska, Maciej Pawłowski, Dawid Wiśniewski, Witold Sosnowski, Agnieszka Ławrynowicz
We provide several state-of-the-art named entity recognition baselines, which show that our dataset poses a solid challenge to existing models.
no code implementations • 15 Dec 2021 • Witold Sosnowski, Anna Wróblewska, Piotr Gawrysiak
We introduce TripleEntropy, a new loss function based on cross-entropy and SoftTriple loss, to improve classification performance when fine-tuning general-knowledge pre-trained language models.
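The abstract describes TripleEntropy only as a combination of cross-entropy and SoftTriple loss. A minimal sketch of such a combination is below; the additive weighting `lam`, the single-center simplification of SoftTriple, and the `scale`/`margin` values are all assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def cross_entropy(logits, y):
    # standard log-softmax cross-entropy over a batch
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

def softtriple_like(embeddings, proxies, y, scale=10.0, margin=0.1):
    # simplified SoftTriple-style term with one proxy (center) per class:
    # cosine similarity to class proxies, margin on the true class
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = proxies / np.linalg.norm(proxies, axis=1, keepdims=True)
    sim = e @ p.T                          # (batch, num_classes)
    sim[np.arange(len(y)), y] -= margin    # penalize the true-class similarity
    return cross_entropy(scale * sim, y)

def triple_entropy(logits, embeddings, proxies, y, lam=0.5):
    # hypothetical combined objective: classifier cross-entropy
    # plus a weighted SoftTriple-style metric-learning term
    return cross_entropy(logits, y) + lam * softtriple_like(embeddings, proxies, y)
```

In a real fine-tuning setup, `embeddings` would come from the language model's pooled output and `proxies` would be learnable parameters updated jointly with the model.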
no code implementations • 28 Nov 2022 • Witold Sosnowski, Anna Wróblewska, Karolina Seweryn, Piotr Gawrysiak
Our systematic experiments show that, in few-shot learning settings, proxy-based DML losses in particular can improve the fine-tuning and inference of a supervised language model.
no code implementations • 28 Nov 2022 • Witold Sosnowski, Karolina Seweryn, Anna Wróblewska, Piotr Gawrysiak
This paper analyzes the influence of Distance Metric Learning (DML) loss functions on the supervised fine-tuning of language models for classification tasks.
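The two entries above refer to proxy-based DML losses without defining one. Proxy-NCA is a common example of the family: each class gets a learnable proxy vector, and each example is pulled toward its own class proxy and pushed away from the others. The sketch below is a generic Proxy-NCA implementation, not the specific loss studied in the paper.

```python
import numpy as np

def proxy_nca_loss(embeddings, proxies, y):
    # Proxy-NCA: softmax over negative squared distances to class proxies,
    # computed on L2-normalized vectors (i.e., on the unit sphere)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = proxies / np.linalg.norm(proxies, axis=1, keepdims=True)
    d = ((e[:, None, :] - p[None, :, :]) ** 2).sum(axis=2)  # (batch, classes)
    logits = -d                                             # closer proxy -> higher logit
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()
```

Because the loss depends only on proxies rather than on mined example pairs or triplets, it avoids the pair-sampling cost of classical DML losses, which is one reason proxy-based losses are attractive for few-shot fine-tuning.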