2 code implementations • 17 Apr 2021 • Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych
To address this, and to make it easier for researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval.
Ranked #1 on Argument Retrieval on ArguAna (BEIR)
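BEIR results are typically reported as nDCG@10. Below is a minimal, illustrative sketch of that metric; it is not the benchmark's own evaluation code, and the example qrels are hypothetical.

```python
import math

def ndcg_at_k(ranked_doc_ids, relevance, k=10):
    """nDCG@k for one query. ranked_doc_ids: the system's ranking;
    relevance: dict mapping doc id -> graded relevance judgment."""
    dcg = sum(
        relevance.get(doc, 0) / math.log2(rank + 2)
        for rank, doc in enumerate(ranked_doc_ids[:k])
    )
    # Ideal DCG: the same gains arranged in the best possible order.
    ideal = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

relevance = {"d1": 2, "d4": 1}                # hypothetical graded qrels
print(ndcg_at_k(["d4", "d2", "d1"], relevance))  # ~0.76
```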
1 code implementation • 16 Apr 2021 • Nafise Sadat Moosavi, Andreas Rücklé, Dan Roth, Iryna Gurevych
In this paper, we introduce SciGen, a new challenge dataset for the task of reasoning-aware data-to-text generation consisting of tables from scientific articles and their corresponding descriptions.
1 code implementation • EMNLP 2021 • Clifton Poth, Jonas Pfeiffer, Andreas Rücklé, Iryna Gurevych
Our best methods achieve an average Regret@3 of less than 1% across all target tasks, demonstrating that we are able to efficiently identify the best datasets for intermediate training.
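Regret@k quantifies how much is lost by intermediate-training on the best of the top-k suggested datasets rather than the overall best one. A minimal sketch, with hypothetical scores and function names rather than the paper's implementation:

```python
def regret_at_k(predicted_ranking, true_scores, k=3):
    """Relative gap between the best achievable transfer score and the
    best score among the top-k candidates proposed by a selection method."""
    best = max(true_scores.values())
    best_in_top_k = max(true_scores[c] for c in predicted_ranking[:k])
    return (best - best_in_top_k) / best

# Hypothetical target-task scores after intermediate training on each dataset.
true_scores = {"mnli": 84.1, "squad": 83.7, "sst2": 82.9, "qqp": 83.2}
predicted_ranking = ["squad", "qqp", "mnli", "sst2"]  # a method's ranking

print(regret_at_k(predicted_ranking, true_scores, k=3))  # 0.0: best task is in the top 3
```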
1 code implementation • 14 Apr 2021 • Gregor Geigle, Nils Reimers, Andreas Rücklé, Iryna Gurevych
We argue that a wide range of specialized QA agents exists in the literature.
1 code implementation • EMNLP 2021 • Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, Iryna Gurevych
Massively pre-trained transformer models are computationally expensive to fine-tune, slow for inference, and have large storage requirements.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Mingzhu Wu, Nafise Sadat Moosavi, Andreas Rücklé, Iryna Gurevych
Our framework weights each example based on the biases it contains and the strength of those biases in the training data.
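A common instantiation of such example reweighting scales each example's loss by how confidently a bias-only model already solves it. The sketch below illustrates this idea; it is not necessarily the paper's exact weighting scheme.

```python
import torch
import torch.nn.functional as F

def debiased_loss(logits, bias_probs, labels):
    """Scale per-example cross-entropy by (1 - p_bias), so examples that a
    bias-only model already gets right contribute little to the gradient."""
    per_example = F.cross_entropy(logits, labels, reduction="none")
    # Probability the bias-only model assigns to the gold label.
    p_bias = bias_probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    weights = 1.0 - p_bias
    return (weights * per_example).mean()

logits = torch.randn(4, 3)                             # main model outputs
bias_probs = torch.softmax(torch.randn(4, 3), dim=1)   # bias-only model
labels = torch.tensor([0, 2, 1, 0])
print(debiased_loss(logits, bias_probs, labels))
```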
1 code implementation • EMNLP 2020 • Andreas Rücklé, Jonas Pfeiffer, Iryna Gurevych
We investigate the model performances on nine benchmarks of answer selection and question similarity tasks, and show that all 140 models transfer surprisingly well, with the large majority of models substantially outperforming common IR baselines.
5 code implementations • EMNLP 2020 • Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, Iryna Gurevych
We propose AdapterHub, a framework that allows dynamic "stitching-in" of pre-trained adapters for different tasks and languages.
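The adapters being stitched in are small bottleneck modules trained on top of a frozen transformer. A conceptual sketch of such a module, in the style of Houlsby et al. / Pfeiffer et al. (not AdapterHub's actual implementation):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""

    def __init__(self, hidden_size=768, bottleneck=48):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # down-projection
        self.up = nn.Linear(bottleneck, hidden_size)    # up-projection
        self.act = nn.ReLU()

    def forward(self, hidden_states):
        # Residual connection: the frozen transformer output passes through
        # unchanged; only the small adapter is trained per task/language.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

x = torch.randn(2, 16, 768)  # (batch, seq_len, hidden)
print(Adapter()(x).shape)    # torch.Size([2, 16, 768])
```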
3 code implementations • EACL 2021 • Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, Iryna Gurevych
We show that by separating the two stages, i.e., knowledge extraction and knowledge composition, the classifier can effectively exploit the representations learned from multiple tasks in a non-destructive manner.
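One way to compose several frozen task adapters is an attention mechanism that mixes their outputs per token. The simplified sketch below illustrates the idea; it omits details such as the value projection and is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AdapterFusion(nn.Module):
    """Attention over the outputs of multiple pre-trained task adapters."""

    def __init__(self, hidden_size=768):
        super().__init__()
        self.query = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)

    def forward(self, layer_input, adapter_outputs):
        # adapter_outputs: (batch, seq, n_adapters, hidden) — one slice per
        # frozen adapter ("knowledge extraction" stage).
        q = self.query(layer_input).unsqueeze(2)   # (b, s, 1, h)
        k = self.key(adapter_outputs)              # (b, s, n, h)
        scores = (q * k).sum(-1).softmax(dim=-1)   # (b, s, n)
        # Weighted mixture of adapter outputs ("knowledge composition").
        return (scores.unsqueeze(-1) * adapter_outputs).sum(2)

x = torch.randn(2, 16, 768)
outs = torch.randn(2, 16, 3, 768)       # outputs of three task adapters
print(AdapterFusion()(x, outs).shape)   # torch.Size([2, 16, 768])
```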
1 code implementation • IJCNLP 2019 • Andreas Rücklé, Nafise Sadat Moosavi, Iryna Gurevych
We show that our proposed approaches are more effective in many cases because they can utilize larger amounts of unlabeled data from cQA forums.
no code implementations • 19 Sep 2019 • Nafise Sadat Moosavi, Prasetya Ajie Utama, Andreas Rücklé, Iryna Gurevych
Finally, we show that the benefits of using the coverage information go beyond improving performance across different datasets of the same task.
no code implementations • WS 2019 • Steffen Eger, Andreas Rücklé, Iryna Gurevych
Our motivation is to challenge the current evaluation of sentence embeddings and to provide an easy-to-access reference for future research.
1 code implementation • NAACL 2019 • Steffen Eger, Gözde Gül Şahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, Iryna Gurevych
Visual modifications to text are often used to obfuscate offensive comments in social media (e.g., "!d10t") or as a writing style ("1337" in "leet speak"), among other scenarios.
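A toy version of such a visual attack replaces characters with look-alikes. The paper instead samples neighbours from a visual character-embedding space, so the hand-picked substitution map below is purely illustrative.

```python
import random

SUBSTITUTIONS = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$", "t": "7"}

def perturb(text, prob=0.4, seed=0):
    """Randomly replace characters with visually similar ones."""
    rng = random.Random(seed)
    return "".join(
        SUBSTITUTIONS[c] if c in SUBSTITUTIONS and rng.random() < prob else c
        for c in text
    )

print(perturb("idiot"))  # -> "idio7" with this seed
```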
1 code implementation • 4 Mar 2018 • Andreas Rücklé, Steffen Eger, Maxime Peyrard, Iryna Gurevych
Here, we generalize the concept of average word embeddings to power mean word embeddings.
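The power mean ((1/n) Σᵢ xᵢᵖ)^(1/p) recovers the arithmetic mean at p = 1 and the element-wise min/max at p = ∓∞; concatenating several values of p yields the sentence representation. A minimal NumPy sketch (the dimensions and p values are illustrative):

```python
import numpy as np

def power_mean(vectors, p):
    """Power mean over word vectors, applied per dimension."""
    if p == float("inf"):
        return vectors.max(axis=0)
    if p == float("-inf"):
        return vectors.min(axis=0)
    if p == 1:
        return vectors.mean(axis=0)
    m = np.power(vectors, p).mean(axis=0)
    # Sign-preserving root so odd p works for negative coordinates.
    return np.sign(m) * np.abs(m) ** (1.0 / p)

def sentence_embedding(word_vectors, ps=(1, float("-inf"), float("inf"))):
    """Concatenate power means for several p values."""
    return np.concatenate([power_mean(word_vectors, p) for p in ps])

words = np.random.randn(5, 300)          # 5 word vectors, 300-dim
print(sentence_embedding(words).shape)   # (900,)
```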