no code implementations • NAACL (TextGraphs) 2021 • Parishad BehnamGhader, Hossein Zakerinia, Mahdieh Soleymani Baghshah
Pre-trained models such as Bidirectional Encoder Representations from Transformers (BERT) have recently driven major advances in Natural Language Processing (NLP) tasks.
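For background, the masked-language-modeling objective that BERT-style models are pre-trained with can be demonstrated in a few lines. The sketch below uses the Hugging Face `transformers` library with `bert-base-uncased` as an illustrative checkpoint; it is not this paper's code or method.

```python
# Minimal demo of BERT's masked-language-modeling pre-training objective
# (illustrative only; not the code or method from the paper above).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT scores candidate tokens for the [MASK] position using context from
# both directions, which is what "Bidirectional" refers to in the name.
for pred in fill_mask("Pre-trained models have advanced many [MASK] tasks."):
    print(f"{pred['token_str']!r}  p={pred['score']:.3f}")
```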
2 code implementations • 9 Apr 2024 • Parishad BehnamGhader, Vaibhav Adlakha, Marius Mosbach, Dzmitry Bahdanau, Nicolas Chapados, Siva Reddy
We outperform encoder-only models by a large margin on word-level tasks and set a new unsupervised state of the art on the Massive Text Embeddings Benchmark (MTEB).
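For intuition, a common way to obtain a text embedding from a language model is to mean-pool its token-level hidden states. The sketch below shows only that generic pooling step with a placeholder checkpoint; the actual LLM2Vec recipe is more involved (it adapts decoder-only LLMs, e.g. via bidirectional attention and unsupervised contrastive training), so treat this as a sketch under those assumptions.

```python
# Generic mean-pooling recipe for turning a language model's hidden states
# into a single text embedding (the pooling step only, not the full LLM2Vec
# procedure). The checkpoint name is a placeholder for illustration.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def embed(text: str) -> torch.Tensor:
    batch = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state   # (1, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)    # zero out padding
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

a = embed("A cat sat on the mat.")
b = embed("A kitten rested on the rug.")
print(torch.cosine_similarity(a, b).item())         # higher = more similar
```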
1 code implementation • 31 Jul 2023 • Vaibhav Adlakha, Parishad BehnamGhader, Xing Han Lu, Nicholas Meade, Siva Reddy
Guided by human evaluation and analysis, we highlight the shortcomings of traditional metrics for both correctness and faithfulness.
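The traditional metrics in question are typically lexical-overlap scores such as exact match and token-level F1. A minimal sketch of both (with SQuAD-style normalization) shows why a correct but verbose instruction-following answer can be penalized; the example strings are hypothetical.

```python
# Sketch of traditional lexical QA metrics (exact match, token-level F1)
# whose shortcomings the paper highlights; example strings are hypothetical.
import string
from collections import Counter

def normalize(s: str) -> list[str]:
    # SQuAD-style normalization: lowercase, drop punctuation and articles.
    s = "".join(ch for ch in s.lower() if ch not in string.punctuation)
    return [t for t in s.split() if t not in {"a", "an", "the"}]

def exact_match(pred: str, gold: str) -> float:
    return float(normalize(pred) == normalize(gold))

def token_f1(pred: str, gold: str) -> float:
    p, g = normalize(pred), normalize(gold)
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

# A verbose but correct answer scores poorly under both metrics:
print(exact_match("The capital of France is Paris.", "Paris"))  # 0.0
print(token_f1("The capital of France is Paris.", "Paris"))     # ~0.33
```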
1 code implementation • 18 Dec 2022 • Parishad BehnamGhader, Santiago Miret, Siva Reddy
Our findings indicate that the simple similarity metric employed by retrievers is insufficient for retrieving all the necessary statements for reasoning.
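To make the failure mode concrete: a similarity-based retriever scores each statement against the query independently, so a statement that completes a reasoning chain but shares little surface form with the query can be outranked by a distractor. The toy sketch below uses `sentence-transformers` with an arbitrary checkpoint; the statements are hypothetical, not drawn from the paper's benchmarks.

```python
# Toy illustration of similarity-based retrieval scoring each statement
# against the query in isolation (arbitrary checkpoint; hypothetical data).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "Is a poodle capable of barking?"
statements = [
    "A poodle is a kind of dog.",            # premise 1 of the chain
    "Dogs are able to bark.",                # premise 2 of the chain
    "Poodles are a popular breed in shows.", # distractor
]

scores = util.cos_sim(model.encode(query), model.encode(statements))[0]
for stmt, score in sorted(zip(statements, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {stmt}")

# Whether both required premises land in the top-k depends only on their
# individual similarity to the query, not on whether they jointly entail
# the answer -- the gap the paper's findings point to.
```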
1 code implementation • 25 Nov 2022 • Aristides Milios, Parishad BehnamGhader
Although large pre-trained language models have achieved great success in many NLP tasks, it has been shown that they reflect human biases from their pre-training corpora.
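One simple way to surface such biases is an association probe: compare the probabilities a masked language model assigns to paired demographic terms in the same template. The sketch below is an illustrative probe with hypothetical templates, not the evaluation protocol used in the paper.

```python
# Illustrative masked-token association probe (hypothetical templates; not
# the bias evaluation used in the paper).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for template in ("[MASK] worked as a nurse.", "[MASK] worked as an engineer."):
    # `targets` restricts scoring to the listed vocabulary items.
    preds = fill_mask(template, targets=["he", "she"])
    scores = {p["token_str"]: round(p["score"], 4) for p in preds}
    print(template, scores)

# A consistent gap between paired scores across many templates is one signal
# of a stereotypical association absorbed from the pre-training corpus.
```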