Search Results for author: Andreas Rücklé

Found 15 papers, 12 papers with code

Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research

no code implementations • 29 Jun 2023 • Ji-Ung Lee, Haritz Puerto, Betty van Aken, Yuki Arase, Jessica Zosa Forde, Leon Derczynski, Andreas Rücklé, Iryna Gurevych, Roy Schwartz, Emma Strubell, Jesse Dodge

Many recent improvements in NLP stem from the development and use of large pre-trained language models (PLMs) with billions of parameters.

BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models

2 code implementations • 17 Apr 2021 • Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych

To address this, and to make it easier for researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval.

Argument Retrieval • Benchmarking +12
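BEIR ships as a pip package; a minimal sketch of a zero-shot evaluation, following the library's published quickstart (dataset URL, model name, and method signatures as in early releases, so check the repository for the current API):

```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval import models
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES

# Download one of the BEIR datasets (here: SciFact) and load its test split.
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

# Zero-shot evaluation of an off-the-shelf dense retriever.
model = DRES(models.SentenceBERT("msmarco-distilbert-base-tas-b"), batch_size=16)
retriever = EvaluateRetrieval(model, score_function="dot")
results = retriever.retrieve(corpus, queries)
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
```

The same loop runs unchanged over any of the benchmark's datasets, which is the point: one interface, many zero-shot evaluations.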

What to Pre-Train on? Efficient Intermediate Task Selection

1 code implementation • EMNLP 2021 • Clifton Poth, Jonas Pfeiffer, Andreas Rücklé, Iryna Gurevych

Our best methods achieve an average Regret@3 of less than 1% across all target tasks, demonstrating that we are able to efficiently identify the best datasets for intermediate training.

Multiple-choice Question Answering +1
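Regret@k measures how much performance is lost by shortlisting only the top-k predicted candidate tasks instead of the true best one. A minimal sketch under one common definition (the task names and scores below are hypothetical):

```python
def regret_at_k(predicted_scores, true_scores, k=3):
    """Relative loss from picking the best of the top-k predicted candidates.

    Both arguments map candidate task -> score; 0.0 means the shortlist
    already contains the truly best intermediate task.
    """
    top_k = sorted(predicted_scores, key=predicted_scores.get, reverse=True)[:k]
    best_possible = max(true_scores.values())
    best_in_top_k = max(true_scores[task] for task in top_k)
    return (best_possible - best_in_top_k) / best_possible

# Hypothetical candidates with predicted rankings and true transfer results.
pred = {"mnli": 0.9, "squad": 0.8, "sst2": 0.4, "qqp": 0.3}
true = {"mnli": 82.1, "squad": 83.0, "sst2": 75.2, "qqp": 74.9}
print(regret_at_k(pred, true))  # 0.0 -- the best task (squad) made the top 3
```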

Learning to Reason for Text Generation from Scientific Tables

1 code implementation • 16 Apr 2021 • Nafise Sadat Moosavi, Andreas Rücklé, Dan Roth, Iryna Gurevych

In this paper, we introduce SciGen, a new challenge dataset for the task of reasoning-aware data-to-text generation consisting of tables from scientific articles and their corresponding descriptions.

Arithmetic Reasoning • Data-to-Text Generation

AdapterDrop: On the Efficiency of Adapters in Transformers

1 code implementation • EMNLP 2021 • Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, Iryna Gurevych

Massively pre-trained transformer models are computationally expensive to fine-tune, slow for inference, and have large storage requirements.
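AdapterDrop addresses this by removing adapters from the lower transformer layers during training and inference. A schematic PyTorch sketch of the core idea (not the authors' implementation):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

n_layers, drop_first_n = 12, 5
adapters = nn.ModuleList(Adapter(768) for _ in range(n_layers))

def adapted_hidden_states(hidden, layer_idx):
    # AdapterDrop: the lowest layers skip the adapter entirely, saving
    # forward compute and, during training, backward passes through them.
    return hidden if layer_idx < drop_first_n else adapters[layer_idx](hidden)
```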

MultiCQA: Zero-Shot Transfer of Self-Supervised Text Matching Models on a Massive Scale

1 code implementation • EMNLP 2020 • Andreas Rücklé, Jonas Pfeiffer, Iryna Gurevych

We investigate model performance on nine benchmarks of answer selection and question similarity tasks, and show that all 140 models transfer surprisingly well, with the large majority substantially outperforming common IR baselines.

Answer Selection • Community Question Answering +3
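One natural self-supervised signal for such forums is pairing each question's title with its own body. A toy sketch of that signal with the sentence-transformers library (data, model, and loss choices here are illustrative assumptions, not the authors' exact setup):

```python
from sentence_transformers import InputExample, SentenceTransformer, losses
from torch.utils.data import DataLoader

# Made-up (title, body) pairs standing in for a scraped StackExchange dump;
# in-batch negatives supply the contrastive signal.
pairs = [
    ("How do I revert a git commit?", "I committed to main by accident and ..."),
    ("Why does my soufflé collapse?", "Every time I open the oven door ..."),
]
examples = [InputExample(texts=[title, body]) for title, body in pairs]
loader = DataLoader(examples, shuffle=True, batch_size=32)

model = SentenceTransformer("distilbert-base-uncased")
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```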

AdapterHub: A Framework for Adapting Transformers

7 code implementations • EMNLP 2020 • Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, Iryna Gurevych

We propose AdapterHub, a framework that allows dynamic "stitching-in" of pre-trained adapters for different tasks and languages.

XLM-R
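The framework is installable as a library; a minimal sketch along the lines of the AdapterHub quickstart (the adapter identifier is the one used in their docs, and the API has shifted between the original adapter-transformers fork and the later adapters package, so treat exact names as version-dependent):

```python
from adapters import AutoAdapterModel

# Load a base model with adapter support, then "stitch in" a pre-trained
# task adapter downloaded from the Hub and activate it for inference.
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("sentiment/sst-2@ukp")
model.set_active_adapters(adapter_name)
```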

AdapterFusion: Non-Destructive Task Composition for Transfer Learning

3 code implementations • EACL 2021 • Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, Iryna Gurevych

We show that by separating the two stages, i.e., knowledge extraction and knowledge composition, the classifier can effectively exploit the representations learned from multiple tasks in a non-destructive manner.

Language Modelling • Multi-Task Learning
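The two stages map onto the adapters library's documented AdapterFusion support; a hedged sketch (adapter names are illustrative, training loops are omitted, and method names may differ across library versions):

```python
from adapters import AutoAdapterModel
from adapters.composition import Fuse

model = AutoAdapterModel.from_pretrained("bert-base-uncased")

# Stage 1, knowledge extraction: one adapter per task, each trained separately.
model.add_adapter("mnli")
model.add_adapter("qqp")

# Stage 2, knowledge composition: a fusion layer learns to attend over the
# now-frozen task adapters; only the fusion parameters receive gradients.
model.add_adapter_fusion(Fuse("mnli", "qqp"))
model.train_adapter_fusion(Fuse("mnli", "qqp"))
```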

Neural Duplicate Question Detection without Labeled Training Data

1 code implementation • IJCNLP 2019 • Andreas Rücklé, Nafise Sadat Moosavi, Iryna Gurevych

We show that our proposed approaches are more effective in many cases because they can utilize larger amounts of unlabeled data from cQA forums.

Answer Selection • Community Question Answering +1
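The abstract's point is that weak supervision scales with unlabeled forum data. A toy sketch of one such signal, treating each question's title and body as a "duplicate" pair and sampling random negatives (the data and helper are illustrative, not the paper's full method):

```python
import random

# Stand-in for an unlabeled cQA dump: every question has a title and a body.
questions = [
    {"title": "How to merge two dicts in Python?", "body": "I have two dicts ..."},
    {"title": "Undo the last git commit", "body": "I pushed a commit by mistake ..."},
    {"title": "Why is my bread so dense?", "body": "My sourdough never rises ..."},
]

def weak_training_triples(questions, seed=0):
    """(title, positive body, random negative body) triples for a ranking loss."""
    rng = random.Random(seed)
    triples = []
    for i, q in enumerate(questions):
        j = rng.choice([k for k in range(len(questions)) if k != i])
        triples.append((q["title"], q["body"], questions[j]["body"]))
    return triples

triples = weak_training_triples(questions)
```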

Improving Generalization by Incorporating Coverage in Natural Language Inference

no code implementations • 19 Sep 2019 • Nafise Sadat Moosavi, Prasetya Ajie Utama, Andreas Rücklé, Iryna Gurevych

Finally, we show that the benefits of using the coverage information are not limited to improving performance across different datasets of the same task.

Natural Language Inference • Relation

Pitfalls in the Evaluation of Sentence Embeddings

no code implementations • WS 2019 • Steffen Eger, Andreas Rücklé, Iryna Gurevych

Our motivation is to challenge the current evaluation of sentence embeddings and to provide an easy-to-access reference for future research.

Sentence • Sentence Embeddings

Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems

1 code implementation • NAACL 2019 • Steffen Eger, Gözde Gül Şahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, Iryna Gurevych

Visual modifications to text are often used to obfuscate offensive comments in social media (e.g., "!d10t") or as a writing style ("1337" in "leet speak"), among other scenarios.

Adversarial Attack • Sentence
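A toy sketch of the kind of character-level visual perturbation the paper studies; the homoglyph table below is a hand-picked illustration, not the paper's embedding-based VIPER attack:

```python
import random

# Hand-picked lookalike characters (the 'е' under "e" is Cyrillic).
HOMOGLYPHS = {
    "a": ["@", "à", "α"],
    "e": ["3", "é", "е"],
    "i": ["1", "!", "í"],
    "o": ["0", "ο", "ö"],
    "t": ["7", "+"],
}

def visual_attack(text: str, p: float = 0.5, seed: int = 0) -> str:
    """Swap each character for a visual lookalike with probability p."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        subs = HOMOGLYPHS.get(ch.lower())
        out.append(rng.choice(subs) if subs and rng.random() < p else ch)
    return "".join(out)

print(visual_attack("idiot"))  # perturbations in the spirit of "!d10t"
```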
