Search Results for author: Andreas Rücklé

Found 14 papers, 9 papers with code

BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models

1 code implementation • 17 Apr 2021 • Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych

Neural IR models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their generalization capabilities.

Argument Retrieval, Biomedical Information Retrieval, +9 more
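
The released beir package keeps the zero-shot evaluation loop short. A minimal sketch following the package's quick-start (dataset URL and model name are illustrative and the API may have shifted across versions):

```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval import models
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES

# Download one of the benchmark datasets (SciFact here) and load its splits.
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_path).load(split="test")

# Evaluate a dense retriever zero-shot: no training on the target dataset.
model = DRES(models.SentenceBERT("msmarco-distilbert-base-tas-b"))
retriever = EvaluateRetrieval(model, score_function="dot")
results = retriever.retrieve(corpus, queries)
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
```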

Learning to Reason for Text Generation from Scientific Tables

1 code implementation • 16 Apr 2021 • Nafise Sadat Moosavi, Andreas Rücklé, Dan Roth, Iryna Gurevych

In this paper, we introduce SciGen, a new challenge dataset for the task of reasoning-aware data-to-text generation consisting of tables from scientific articles and their corresponding descriptions.

Data-to-Text Generation

What to Pre-Train on? Efficient Intermediate Task Selection

no code implementations • 16 Apr 2021 • Clifton Poth, Jonas Pfeiffer, Andreas Rücklé, Iryna Gurevych

Our best methods achieve an average Regret@3 of less than 1% across all target tasks, demonstrating that we are able to efficiently identify the best datasets for intermediate training.

Question Answering, Transfer Learning
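
Regret@k measures how much transfer performance is lost by committing to the best of the top-k ranked intermediate tasks instead of the overall best task. A hedged sketch of one natural reading of the metric (the paper's exact normalization may differ; all scores below are hypothetical):

```python
def regret_at_k(ranking, scores, k=3):
    """Relative regret (in %): gap between the best achievable transfer
    score and the best score among the top-k ranked candidate tasks.
    One plausible reading of Regret@k, not the paper's verbatim formula."""
    best = max(scores.values())
    best_of_topk = max(scores[task] for task in ranking[:k])
    return 100.0 * (best - best_of_topk) / best

# Hypothetical example: the selector ranks "mnli" first, but "squad" is best.
scores = {"mnli": 88.0, "squad": 90.0, "sst2": 85.0}
ranking = ["mnli", "squad", "sst2"]
print(regret_at_k(ranking, scores, k=3))  # 0.0: the best task is within the top-3
```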

TWEAC: Transformer with Extendable QA Agent Classifiers

1 code implementation • 14 Apr 2021 • Gregor Geigle, Nils Reimers, Andreas Rücklé, Iryna Gurevych

Question answering systems should help users to access knowledge on a broad range of topics and to answer a wide array of different questions.

Question Answering

AdapterDrop: On the Efficiency of Adapters in Transformers

no code implementations • 22 Oct 2020 • Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, Iryna Gurevych

Massively pre-trained transformer models are computationally expensive to fine-tune, slow for inference, and have large storage requirements.
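
The core idea of AdapterDrop is to skip adapter modules in the lower transformer layers, which trims the per-layer overhead with little accuracy loss. A self-contained illustrative sketch, not the authors' implementation (the layer stack and adapter class are stand-ins):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, hidden_size, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

def forward_with_adapterdrop(layers, adapters, x, drop_first_n=3):
    """Run a transformer stack but skip the adapters in the lowest
    `drop_first_n` layers, trading a small accuracy drop for faster
    training and inference (the idea behind AdapterDrop)."""
    for i, (layer, adapter) in enumerate(zip(layers, adapters)):
        x = layer(x)
        if i >= drop_first_n:  # adapters remain only in the upper layers
            x = adapter(x)
    return x
```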

MultiCQA: Zero-Shot Transfer of Self-Supervised Text Matching Models on a Massive Scale

1 code implementation • EMNLP 2020 • Andreas Rücklé, Jonas Pfeiffer, Iryna Gurevych

We investigate the model performances on nine benchmarks of answer selection and question similarity tasks, and show that all 140 models transfer surprisingly well, with the large majority substantially outperforming common IR baselines.

Answer Selection, Community Question Answering, +3 more

AdapterHub: A Framework for Adapting Transformers

3 code implementations • EMNLP 2020 • Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, Iryna Gurevych

We propose AdapterHub, a framework that allows dynamic "stitching-in" of pre-trained adapters for different tasks and languages.
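
In the accompanying adapter-transformers library (a drop-in fork of HuggingFace transformers), the stitching-in amounts to a few lines. A sketch along the lines of the library's quick-start; the adapter identifier is illustrative and the API has evolved in later releases:

```python
from transformers import AutoModelWithHeads, AutoTokenizer  # adapter-transformers fork

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")

# Pull a pre-trained task adapter from the hub and activate it; only the
# small adapter weights are downloaded and stitched into the model.
adapter_name = model.load_adapter("sentiment/sst-2@ukp")
model.set_active_adapters(adapter_name)

inputs = tokenizer("AdapterHub makes adapter reuse easy.", return_tensors="pt")
logits = model(**inputs).logits
```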

AdapterFusion: Non-Destructive Task Composition for Transfer Learning

1 code implementation • EACL 2021 • Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, Iryna Gurevych

We show that by separating the two stages, i.e., knowledge extraction and knowledge composition, the classifier can effectively exploit the representations learned from multiple tasks in a non-destructive manner.

Language Modelling, Multi-Task Learning
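
In the composition stage, a fusion layer learns attention over the outputs of several frozen task adapters, so no single task's knowledge is overwritten. A minimal from-scratch sketch of that attention (illustrative only; not the authors' exact parameterization):

```python
import torch
import torch.nn as nn

class AdapterFusion(nn.Module):
    """Attention over the outputs of several frozen task adapters."""
    def __init__(self, hidden_size):
        super().__init__()
        self.query = nn.Linear(hidden_size, hidden_size)
        self.key = nn.Linear(hidden_size, hidden_size)
        self.value = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden, adapter_outputs):
        # hidden: (batch, seq, hidden); adapter_outputs: (batch, seq, n_adapters, hidden)
        q = self.query(hidden).unsqueeze(2)             # (b, s, 1, h)
        k = self.key(adapter_outputs)                   # (b, s, n, h)
        v = self.value(adapter_outputs)                 # (b, s, n, h)
        scores = (q * k).sum(-1) / k.size(-1) ** 0.5    # (b, s, n)
        weights = scores.softmax(dim=-1).unsqueeze(-1)  # attention over adapters
        return hidden + (weights * v).sum(dim=2)        # weighted mix, residual
```

Because only the fusion parameters are trained while the per-task adapters stay frozen, adding a new task cannot destroy what the others learned, which is the non-destructive property the abstract refers to.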

Neural Duplicate Question Detection without Labeled Training Data

1 code implementation • IJCNLP 2019 • Andreas Rücklé, Nafise Sadat Moosavi, Iryna Gurevych

We show that our proposed approaches are more effective in many cases because they can utilize larger amounts of unlabeled data from cQA forums.

Answer Selection, Community Question Answering, +1 more

Improving Generalization by Incorporating Coverage in Natural Language Inference

no code implementations • 19 Sep 2019 • Nafise Sadat Moosavi, Prasetya Ajie Utama, Andreas Rücklé, Iryna Gurevych

Finally, we show that using the coverage information is beneficial for improving performance across different datasets of the same task.

Natural Language Inference

Pitfalls in the Evaluation of Sentence Embeddings

no code implementations • WS 2019 • Steffen Eger, Andreas Rücklé, Iryna Gurevych

Our motivation is to challenge the current evaluation of sentence embeddings and to provide an easy-to-access reference for future research.

Sentence Embeddings

Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems

no code implementations • NAACL 2019 • Steffen Eger, Gözde Gül Şahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, Iryna Gurevych

Visual modifications to text are often used to obfuscate offensive comments in social media (e.g., "!d10t") or as a writing style ("1337" in "leet speak"), among other scenarios.

Adversarial Attack
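
The attack class the paper studies swaps characters for visually similar ones. A toy sketch of that kind of perturbation; the hand-made homoglyph table below is purely illustrative, whereas the paper's VIPER attack derives neighbors from visual character embeddings:

```python
import random

# Tiny illustrative homoglyph table (not from the paper).
HOMOGLYPHS = {"i": "1íì", "d": "đɗ", "o": "0öθ", "t": "+ţ7"}

def visually_perturb(text, p=0.5, seed=0):
    """Swap each character for a visually similar one with probability p."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        subs = HOMOGLYPHS.get(ch.lower())
        out.append(rng.choice(subs) if subs and rng.random() < p else ch)
    return "".join(out)

print(visually_perturb("idiot"))  # e.g. a "1d10t"-style obfuscation
```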
