1 code implementation • 25 Jan 2024 • Francesco Periti, Haim Dubossarsky, Nina Tahmasebi
In the universe of Natural Language Processing, Transformer-based language models like BERT and (Chat)GPT have emerged as lexical superheroes with great power to solve open research problems.
no code implementations • 22 May 2023 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Oana-Maria Camburu, Marek Rei
We apply our method to the highly challenging ANLI dataset, where our framework improves the performance of both a DeBERTa-base and a BERT baseline.
no code implementations • 13 Apr 2023 • Nina Tahmasebi, Haim Dubossarsky
In this chapter we provide an overview of computational modeling for semantic change using large and semi-large textual corpora.
1 code implementation • 23 May 2022 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Marek Rei
We can further improve model performance and span-level decisions by using the e-SNLI explanations during training.
1 code implementation • EMNLP 2021 • Dominik Schlechtweg, Nina Tahmasebi, Simon Hengchen, Haim Dubossarsky, Barbara McGillivray
Word meaning is notoriously difficult to capture, both synchronically and diachronically.
no code implementations • 19 Jan 2021 • Simon Hengchen, Nina Tahmasebi, Dominik Schlechtweg, Haim Dubossarsky
The computational study of lexical semantic change (LSC) has taken off in the past few years and we are seeing increasing interest in the field, from both computational sciences and linguistics.
2 code implementations • SEMEVAL 2020 • Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, Nina Tahmasebi
Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics.
1 code implementation • EMNLP 2020 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Sebastian Riedel, Tim Rocktäschel
Natural Language Inference (NLI) datasets contain annotation artefacts resulting in spurious correlations between the natural language utterances and their respective entailment classes.
no code implementations • EMNLP 2020 • Haim Dubossarsky, Ivan Vulić, Roi Reichart, Anna Korhonen
Performance in cross-lingual NLP tasks is impacted by the (dis)similarity of the languages at hand: e.g., previous work has suggested there is a connection between the expected success of bilingual lexicon induction (BLI) and the assumption of (approximate) isomorphism between monolingual embedding spaces.
1 code implementation • ACL 2019 • Haim Dubossarsky, Simon Hengchen, Nina Tahmasebi, Dominik Schlechtweg
State-of-the-art models of lexical semantic change detection suffer from noise stemming from vector space alignment.
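The alignment referenced above is commonly performed with orthogonal Procrustes, which rotates one embedding space onto another over a shared vocabulary; this is a minimal sketch of that standard technique (not the paper's specific method), with a toy example where one space is an exact rotation of the other:

```python
import numpy as np

def procrustes_align(X, Y):
    """Align embedding matrix X to Y via orthogonal Procrustes.

    Finds the orthogonal matrix R minimizing ||X @ R - Y||_F,
    where X and Y are (n_words, dim) matrices of vectors for the
    same vocabulary, e.g. from two time periods of a corpus.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    R = U @ Vt  # closed-form solution via SVD
    return X @ R

# Toy check: Y is a rotated copy of X, so alignment should recover Y exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))  # random orthogonal rotation
Y = X @ Q
aligned = procrustes_align(X, Y)
print(np.allclose(aligned, Y))  # True
```

After alignment, per-word change is typically scored as the cosine distance between a word's aligned vector and its vector in the target space; the noise discussed in this paper stems from imperfections in exactly this alignment step.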
no code implementations • EMNLP 2018 • Haim Dubossarsky, Eitan Grossman, Daphna Weinshall
This and additional results point to the conclusion that performance gains as reported in previous work may be an artifact of random sense assignment, which is equivalent to sub-sampling and multiple estimation of word vector representations.
no code implementations • EMNLP 2017 • Haim Dubossarsky, Daphna Weinshall, Eitan Grossman
This article evaluates three proposed laws of semantic change.