Search Results for author: Haim Dubossarsky

Found 12 papers, 6 papers with code

(Chat)GPT v BERT: Dawn of Justice for Semantic Change Detection

1 code implementation • 25 Jan 2024 • Francesco Periti, Haim Dubossarsky, Nina Tahmasebi

In the universe of Natural Language Processing, Transformer-based language models like BERT and (Chat)GPT have emerged as lexical superheroes with great power to solve open research problems.

Change Detection

Logical Reasoning for Natural Language Inference Using Generated Facts as Atoms

no code implementations • 22 May 2023 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Oana-Maria Camburu, Marek Rei

We apply our method to the highly challenging ANLI dataset, where our framework improves the performance of both a DeBERTa-base and BERT baseline.

Logical Reasoning • Natural Language Inference • +1

Computational modeling of semantic change

no code implementations • 13 Apr 2023 • Nina Tahmasebi, Haim Dubossarsky

In this chapter we provide an overview of computational modeling for semantic change using large and semi-large textual corpora.

Challenges for Computational Lexical Semantic Change

no code implementations • 19 Jan 2021 • Simon Hengchen, Nina Tahmasebi, Dominik Schlechtweg, Haim Dubossarsky

The computational study of lexical semantic change (LSC) has taken off in the past few years, and we are seeing increasing interest in the field from both the computational sciences and linguistics.

SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection

2 code implementations • SEMEVAL 2020 • Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, Nina Tahmasebi

Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics.

Change Detection
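The task described above is often approached by comparing a word's vector representations across time periods. As a hedged illustration only (not the specific systems evaluated in the shared task), a minimal change score can be the cosine distance between a word's vectors from two time-specific embedding spaces, assuming the spaces have already been aligned:

```python
import numpy as np

def cosine_change(vec_t1, vec_t2):
    """Semantic change score: cosine distance between a word's vectors
    from two time periods (embedding spaces assumed already aligned)."""
    sim = np.dot(vec_t1, vec_t2) / (np.linalg.norm(vec_t1) * np.linalg.norm(vec_t2))
    return 1.0 - sim

# Toy example with synthetic vectors: a stable word vs. a shifted one.
rng = np.random.default_rng(42)
stable = rng.normal(size=100)
shifted = stable + rng.normal(scale=2.0, size=100)  # simulated meaning drift

stable_score = cosine_change(stable, stable)   # near 0: no change
shift_score = cosine_change(stable, shifted)   # larger: meaning has moved
```

Words are then ranked by this score, with the highest-scoring words flagged as candidates for semantic change.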

Avoiding the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training

1 code implementation • EMNLP 2020 • Joe Stacey, Pasquale Minervini, Haim Dubossarsky, Sebastian Riedel, Tim Rocktäschel

Natural Language Inference (NLI) datasets contain annotation artefacts resulting in spurious correlations between the natural language utterances and their respective entailment classes.

Natural Language Inference • Sentence

The Secret is in the Spectra: Predicting Cross-lingual Task Performance with Spectral Similarity Measures

no code implementations • EMNLP 2020 • Haim Dubossarsky, Ivan Vulić, Roi Reichart, Anna Korhonen

Performance in cross-lingual NLP tasks is impacted by the (dis)similarity of the languages at hand: e.g., previous work has suggested a connection between the expected success of bilingual lexicon induction (BLI) and the assumption of (approximate) isomorphism between monolingual embedding spaces.

Bilingual Lexicon Induction • POS • +1
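The abstract above concerns measuring how similar two monolingual embedding spaces are via their spectra. As a hedged sketch in that spirit (not the paper's exact measures), one can compare entropy-based effective ranks computed from the singular values of two embedding matrices:

```python
import numpy as np

def effective_rank(embeddings):
    """Entropy-based effective rank of an embedding matrix (rows = words)."""
    # Center so singular values reflect the variance structure of the space.
    X = embeddings - embeddings.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)
    p = s / s.sum()  # normalized singular-value distribution
    return float(np.exp(-(p * np.log(p)).sum()))

# Toy comparison of two synthetic "monolingual" spaces: one isotropic,
# one with strongly skewed variance across dimensions.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(1000, 50))
emb_b = rng.normal(size=(1000, 50)) @ np.diag(np.linspace(0.1, 1.0, 50))

er_a = effective_rank(emb_a)
er_b = effective_rank(emb_b)
gap = abs(er_a - er_b)  # larger gap: spectrally less similar spaces
```

A large spectral gap between two spaces would, under the intuition sketched here, predict weaker cross-lingual transfer between them.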

Coming to Your Senses: on Controls and Evaluation Sets in Polysemy Research

no code implementations • EMNLP 2018 • Haim Dubossarsky, Eitan Grossman, Daphna Weinshall

This and additional results point to the conclusion that performance gains as reported in previous work may be an artifact of random sense assignment, which is equivalent to sub-sampling and multiple estimation of word vector representations.

Word Embeddings • Word Similarity
