Search Results for author: Sascha Rothe

Found 19 papers, 6 papers with code

A Thorough Evaluation of Task-Specific Pretraining for Summarization

no code implementations EMNLP 2021 Sascha Rothe, Joshua Maynez, Shashi Narayan

Task-agnostic pretraining objectives like masked language models or corrupted span prediction are applicable to a wide range of NLP downstream tasks (Raffel et al., 2019), but are outperformed by task-specific pretraining objectives like predicting extracted gap sentences on summarization (Zhang et al., 2020).
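
The gap-sentence objective referenced here can be sketched in a few lines: score each sentence by its overlap with the rest of the document, mask the top-scoring ones, and train the model to generate them. A minimal illustration, using Jaccard word overlap as a stand-in for the ROUGE-based scoring of Zhang et al. (2020); all names are illustrative:

```python
def select_gap_sentences(sentences, k=1, mask_token="<mask_sent>"):
    # Score each sentence by word overlap with the rest of the document
    # (a crude proxy for the ROUGE-1 selection used by PEGASUS).
    def overlap(a, b):
        a, b = set(a.lower().split()), set(b.lower().split())
        return len(a & b) / max(len(a | b), 1)

    scores = [overlap(s, " ".join(sentences[:i] + sentences[i + 1:]))
              for i, s in enumerate(sentences)]
    picked = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:k])
    # The masked document is the encoder input; the removed sentences are the target.
    inputs = [mask_token if i in picked else s for i, s in enumerate(sentences)]
    target = " ".join(sentences[i] for i in picked)
    return " ".join(inputs), target

doc = ["The storm hit the coast overnight.",
       "Thousands lost power across the region.",
       "Officials expect repairs to take days."]
print(select_gap_sentences(doc, k=1))
```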

LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning

1 code implementation 29 May 2023 Amirhossein Abaskohi, Sascha Rothe, Yadollah Yaghoobzadeh

This paper proposes LM-CPPF, Contrastive Paraphrasing-guided Prompt-based Fine-tuning of Language Models, which leverages prompt-based few-shot paraphrasing using generative language models, especially large language models such as GPT-3 and OPT-175B, for data augmentation.

Contrastive Learning, Data Augmentation +5
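
The contrastive side of this setup can be illustrated with a small InfoNCE-style loss in which the LLM-generated paraphrase of each sentence serves as its positive view. A hedged sketch, not the paper's exact objective; the function name and temperature are illustrative:

```python
import torch
import torch.nn.functional as F

def paraphrase_contrastive_loss(z_orig, z_para, temperature=0.1):
    """z_orig, z_para: (batch, dim) embeddings of each sentence and its
    LLM-generated paraphrase; row i of both tensors is the same example."""
    z_orig = F.normalize(z_orig, dim=-1)
    z_para = F.normalize(z_para, dim=-1)
    logits = z_orig @ z_para.t() / temperature  # (batch, batch) cosine similarities
    labels = torch.arange(z_orig.size(0))       # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

loss = paraphrase_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```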

Zero-Shot Retrieval with Search Agents and Hybrid Environments

no code implementations 30 Sep 2022 Michelle Chen Huebscher, Christian Buck, Massimiliano Ciaramita, Sascha Rothe

We extend the previous learning-to-search setup to a hybrid environment that accepts discrete query refinement operations after a first-pass retrieval step via a dual encoder.

Retrieval
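
A minimal sketch of such a hybrid episode, assuming user-supplied `encode` and `agent` functions and a simplified operation set (the paper's actual environment interface may differ):

```python
def dual_encoder_search(query, docs, encode):
    # First-pass retrieval: rank documents by dot product with the query vector.
    q = encode(query)
    return sorted(docs, key=lambda d: -sum(a * b for a, b in zip(q, encode(d))))

def refine(query, op):
    kind, term = op
    if kind == "add":                   # append a refinement term
        return f"{query} {term}"
    if kind == "drop":                  # remove a term from the query
        return " ".join(w for w in query.split() if w != term)
    return query

def episode(query, docs, encode, agent, steps=3):
    results = dual_encoder_search(query, docs, encode)
    for _ in range(steps):
        op = agent(query, results)      # the agent picks a discrete operation
        if op is None:                   # or stops early
            break
        query = refine(query, op)
        results = dual_encoder_search(query, docs, encode)
    return query, results
```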

A Simple Recipe for Multilingual Grammatical Error Correction

2 code implementations ACL 2021 Sascha Rothe, Jonathan Mallinson, Eric Malmi, Sebastian Krause, Aliaksei Severyn

This paper presents a simple recipe to train state-of-the-art multilingual Grammatical Error Correction (GEC) models.

 Ranked #1 on Grammatical Error Correction on Falko-MERLIN (using extra training data)

Grammatical Error Correction
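
The recipe pairs large-scale pretraining on synthetically corrupted text with finetuning on gold GEC corpora. Below is a hedged sketch of a language-agnostic corruption step (drop, duplicate, or character-swap tokens); the paper's exact corruption scheme may differ:

```python
import random

def corrupt(sentence, p=0.1, rng=random.Random(0)):
    out = []
    for tok in sentence.split():
        r = rng.random()
        if r < p:                          # drop the token
            continue
        if r < 2 * p:                      # duplicate the token
            out += [tok, tok]
        elif r < 3 * p and len(tok) > 1:   # swap two adjacent characters
            i = rng.randrange(len(tok) - 1)
            out.append(tok[:i] + tok[i + 1] + tok[i] + tok[i + 2:])
        else:
            out.append(tok)
    return " ".join(out)

print(corrupt("This is a clean sentence for pretraining ."))
```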

Unsupervised Text Style Transfer with Padded Masked Language Models

no code implementations EMNLP 2020 Eric Malmi, Aliaksei Severyn, Sascha Rothe

This allows us to identify the source tokens to delete to transform the source text to match the style of the target domain.

Sentence, Sentence Fusion +3
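
The deletion step can be sketched as a likelihood-ratio test between two masked LMs, one trained on each domain: a token that is much more probable under the source-domain MLM than under the target-domain MLM is likely style-bearing. `mlm_logprob_src` and `mlm_logprob_tgt` are assumed helpers, not the paper's API:

```python
def tokens_to_delete(tokens, mlm_logprob_src, mlm_logprob_tgt, threshold=2.0):
    """Each helper is assumed to return log p(token | context) with the
    token at position i masked, under the respective domain's MLM."""
    delete = []
    for i, tok in enumerate(tokens):
        ratio = mlm_logprob_src(tokens, i) - mlm_logprob_tgt(tokens, i)
        if ratio > threshold:   # far more plausible in the source style
            delete.append(i)
    return [t for i, t in enumerate(tokens) if i not in delete]
```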

A Generative Approach to Titling and Clustering Wikipedia Sections

no code implementations WS 2020 Anjalie Field, Sascha Rothe, Simon Baumgartner, Cong Yu, Abe Ittycheriah

We evaluate the performance of transformer encoders with various decoders for information organization through a new task: generation of section headings for Wikipedia articles.

Clustering
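
An illustrative (not the paper's) encoder-decoder setup for the titling task, using an off-the-shelf pretrained seq2seq model as a stand-in:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Model and prompt prefix are stand-ins, not the checkpoints used in the paper.
tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

section = "summarize: The 1906 earthquake destroyed much of the city ..."
ids = tok(section, return_tensors="pt").input_ids
heading = tok.decode(model.generate(ids, max_new_tokens=8)[0],
                     skip_special_tokens=True)
print(heading)
```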

Sentence-Level Fluency Evaluation: References Help, But Can Be Spared!

no code implementations CoNLL 2018 Katharina Kann, Sascha Rothe, Katja Filippova

Motivated by recent findings on the probabilistic modeling of acceptability judgments, we propose syntactic log-odds ratio (SLOR), a normalized language model score, as a metric for referenceless fluency evaluation of natural language generation output at the sentence level.

Language Modelling, Sentence +1
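
SLOR itself is simple to compute: the language-model log probability of the sentence, minus its unigram log probability (to discount rare words), divided by sentence length. A direct implementation:

```python
def slor(token_logprobs, unigram_logprobs):
    """Both arguments: per-token log probabilities of the same sentence,
    one list from the language model, one from a unigram model."""
    return (sum(token_logprobs) - sum(unigram_logprobs)) / len(token_logprobs)

# Toy example with a 3-token sentence.
print(slor([-2.1, -0.7, -1.4], [-6.0, -3.5, -4.2]))
```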

Learning to Learn from Weak Supervision by Full Supervision

1 code implementation 30 Nov 2017 Mostafa Dehghani, Aliaksei Severyn, Sascha Rothe, Jaap Kamps

In this paper, we propose a method for training neural networks when we have a large set of data with weak labels and a small amount of data with true labels.
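
A hedged PyTorch sketch of the idea: a target network learns from weak labels while a confidence network, trained on the small true-labeled set, down-weights unreliable weak examples. Architectures and the gradient-scaling details are simplifications of the paper's method:

```python
import torch
import torch.nn as nn

target_net = nn.Linear(16, 1)                                   # learns the task
confidence_net = nn.Sequential(nn.Linear(16, 1), nn.Sigmoid())  # weights examples

opt = torch.optim.SGD(target_net.parameters(), lr=0.1)

def weak_step(x, weak_y):
    per_example = nn.functional.mse_loss(target_net(x), weak_y, reduction="none")
    with torch.no_grad():
        w = confidence_net(x)        # confidence in each weak label
    loss = (w * per_example).mean()  # scale the update by confidence
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

weak_step(torch.randn(4, 16), torch.randn(4, 1))
```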

AutoExtend: Combining Word Embeddings with Semantic Resources

no code implementations CL 2017 Sascha Rothe, Hinrich Schütze

We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource.

Learning Word Embeddings, Sentiment Analysis +1
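
AutoExtend's core constraint is that words decompose into lexemes and synsets are composed of lexemes. The crude baseline below (synset vector as the mean of its member-word vectors) illustrates the composition idea only, not the paper's autoencoder formulation:

```python
import numpy as np

word_vecs = {"car": np.array([0.9, 0.1]),
             "auto": np.array([0.8, 0.2]),
             "automobile": np.array([0.85, 0.15])}

def synset_embedding(member_words, word_vecs):
    # Compose a synset vector from the embeddings of its member words.
    return np.mean([word_vecs[w] for w in member_words], axis=0)

print(synset_embedding(["car", "auto", "automobile"], word_vecs))
```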

Learning to Attend, Copy, and Generate for Session-Based Query Suggestion

no code implementations 11 Aug 2017 Mostafa Dehghani, Sascha Rothe, Enrique Alfonseca, Pascal Fleury

The results suggest that our model outperforms the baselines both in generating queries and in scoring candidate queries for the task of query suggestion.
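
The copy mechanism can be sketched as a pointer-generator mixture: the output distribution blends a vocabulary generation distribution with a copy distribution induced by attention over previous query tokens in the session. Shapes and names below are illustrative, not the paper's model:

```python
import torch

def mix_generate_and_copy(p_vocab, attn, src_ids, p_gen, vocab_size):
    """p_vocab: (batch, vocab) generation probs; attn: (batch, src_len)
    attention over source tokens; src_ids: (batch, src_len) token ids."""
    copy = torch.zeros(p_vocab.size(0), vocab_size)
    copy.scatter_add_(1, src_ids, attn)  # sum attention mass per token id
    return p_gen * p_vocab + (1 - p_gen) * copy

p = mix_generate_and_copy(torch.softmax(torch.randn(2, 10), -1),
                          torch.softmax(torch.randn(2, 5), -1),
                          torch.randint(0, 10, (2, 5)),
                          p_gen=0.7, vocab_size=10)
```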
