no code implementations • EMNLP 2021 • Sascha Rothe, Joshua Maynez, Shashi Narayan
Task-agnostic pretraining objectives like masked language models or corrupted span prediction are applicable to a wide range of NLP downstream tasks (Raffel et al., 2019), but are outperformed by task-specific pretraining objectives like predicting extracted gap sentences on summarization (Zhang et al., 2020).
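To make the contrast concrete, here is a minimal sketch of the two objective families (illustrative only, not the authors' implementation; the mask tokens are placeholders, and gap sentences are chosen at random here, whereas PEGASUS selects them by importance):

import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]"):
    # Task-agnostic objective: corrupt individual tokens and train
    # the model to reconstruct them (masked LM style).
    corrupted, targets = [], []
    for tok in tokens:
        if random.random() < mask_rate:
            corrupted.append(mask_token)
            targets.append(tok)
        else:
            corrupted.append(tok)
    return corrupted, targets

def mask_gap_sentences(sentences, gap_rate=0.3, gap_token="[GAP]"):
    # Task-specific objective for summarization: remove whole
    # sentences and train the model to generate them, which
    # mimics abstractive summarization.
    n_gaps = min(len(sentences), max(1, int(len(sentences) * gap_rate)))
    gap_ids = set(random.sample(range(len(sentences)), n_gaps))
    inputs = [gap_token if i in gap_ids else s for i, s in enumerate(sentences)]
    targets = [sentences[i] for i in sorted(gap_ids)]
    return inputs, targets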
no code implementations • 30 Sep 2022 • Michelle Chen Huebscher, Christian Buck, Massimiliano Ciaramita, Sascha Rothe
We extend the previous learning-to-search setup to a hybrid environment, which accepts discrete query refinement operations after a first-pass retrieval step performed by a dual encoder.
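A minimal sketch of such a hybrid loop, assuming hypothetical callables dual_encoder_retrieve and refine_policy (these names are placeholders, not the paper's API):

def hybrid_search(query, dual_encoder_retrieve, refine_policy, max_steps=3):
    # First-pass retrieval with a dual encoder.
    results = dual_encoder_retrieve(query)
    for _ in range(max_steps):
        # The policy inspects the results and picks a discrete
        # refinement operation (e.g. add or drop a query term),
        # or None to stop.
        op = refine_policy(query, results)
        if op is None:
            break
        query = op(query)
        results = dual_encoder_retrieve(query)
    return query, results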
no code implementations • 1 Sep 2021 • Leonard Adolphs, Benjamin Boerschinger, Christian Buck, Michelle Chen Huebscher, Massimiliano Ciaramita, Lasse Espeholt, Thomas Hofmann, Yannic Kilcher, Sascha Rothe, Pier Giuseppe Sessa, Lierni Sestorain Saralegui
This paper presents first successful steps in designing search agents that learn meta-strategies for iterative query refinement in information-seeking tasks.
1 code implementation • ACL 2021 • Sascha Rothe, Jonathan Mallinson, Eric Malmi, Sebastian Krause, Aliaksei Severyn
This paper presents a simple recipe to train state-of-the-art multilingual Grammatical Error Correction (GEC) models.
Ranked #1 on Grammatical Error Correction on Falko-MERLIN (using extra training data)
no code implementations • ACL 2021 • Rahul Aralikatte, Shashi Narayan, Joshua Maynez, Sascha Rothe, Ryan McDonald
Professional summaries are written with document-level information, such as the theme of the document, in mind.
no code implementations • EMNLP 2020 • Eric Malmi, Aliaksei Severyn, Sascha Rothe
This allows us to identify which source tokens to delete in order to transform the source text to match the style of the target domain.
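One generic way to realize this idea — not necessarily the paper's exact procedure — is to flag tokens that a source-domain language model scores much higher than a target-domain one; src_lm_logprob and tgt_lm_logprob below are hypothetical callables returning a token's in-context log-probability:

def tokens_to_delete(tokens, src_lm_logprob, tgt_lm_logprob, threshold=1.0):
    # A token is a deletion candidate when the source-domain LM
    # likes it much more than the target-domain LM, i.e. it is
    # likely to carry source-specific style.
    deletions = []
    for i in range(len(tokens)):
        gap = src_lm_logprob(tokens, i) - tgt_lm_logprob(tokens, i)
        if gap > threshold:
            deletions.append(i)
    return deletions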
no code implementations • WS 2020 • Anjalie Field, Sascha Rothe, Simon Baumgartner, Cong Yu, Abe Ittycheriah
We evaluate the performance of transformer encoders with various decoders for information organization through a new task: generation of section headings for Wikipedia articles.
5 code implementations • IJCNLP 2019 • Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, Aliaksei Severyn
We propose LaserTagger, a sequence tagging approach that casts text generation as a text editing task (see the sketch below).
Ranked #1 on Sentence Fusion on DiscoFuse
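To make the editing view concrete, here is a minimal sketch of applying KEEP/DELETE tags with optional added phrases, a simplified form of the paper's tag set:

def apply_edit_tags(source_tokens, tags):
    # Each tag is (action, added_phrase): the phrase (possibly
    # empty) is inserted before the token; the token itself is
    # kept or deleted according to the action.
    output = []
    for token, (action, added) in zip(source_tokens, tags):
        if added:
            output.append(added)
        if action == "KEEP":
            output.append(token)
    return " ".join(output)

# Sentence fusion: delete the period and "He", insert "and".
src = ["Turing", "was", "born", "in", "1912", ".", "He", "died", "in", "1954", "."]
tags = [("KEEP", "")] * 5 + [("DELETE", ""), ("DELETE", "and")] + [("KEEP", "")] * 4
print(apply_edit_tags(src, tags))  # Turing was born in 1912 and died in 1954 .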
6 code implementations • TACL 2020 • Sascha Rothe, Shashi Narayan, Aliaksei Severyn
Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing.
Ranked #1 on Split and Rephrase on WikiSplit
no code implementations • CoNLL 2018 • Katharina Kann, Sascha Rothe, Katja Filippova
Motivated by recent findings on the probabilistic modeling of acceptability judgments, we propose syntactic log-odds ratio (SLOR), a normalized language model score, as a metric for referenceless fluency evaluation of natural language generation output at the sentence level.
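Concretely, SLOR divides the difference between the sentence's language model log-probability and its unigram log-probability by the sentence length; a minimal sketch, with the two log-probability functions left as placeholders:

def slor(tokens, sent_logprob, unigram_logprob):
    # SLOR(S) = (log p_M(S) - log p_u(S)) / |S|: subtracting the
    # unigram log-probability log p_u removes the penalty for
    # containing rare words, and dividing by the length |S|
    # removes the penalty for being long, isolating fluency.
    lp_sent = sent_logprob(tokens)                    # log p_M(S)
    lp_uni = sum(unigram_logprob(t) for t in tokens)  # log p_u(S)
    return (lp_sent - lp_uni) / len(tokens)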
1 code implementation • 30 Nov 2017 • Mostafa Dehghani, Aliaksei Severyn, Sascha Rothe, Jaap Kamps
In this paper, we propose a method for training neural networks when we have a large set of data with weak labels and a small amount of data with true labels.
no code implementations • 1 Nov 2017 • Mostafa Dehghani, Aliaksei Severyn, Sascha Rothe, Jaap Kamps
Thus we prevent weight updates computed from noisy labels from harming the quality of the target network model.
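A generic sketch of this idea — scaling each example's loss by an estimated label confidence so that likely-noisy labels yield small gradients; the confidence estimator itself (e.g. a model fit on the small true-labeled set) is assumed, and the names are placeholders rather than the paper's exact architecture:

import torch
import torch.nn.functional as F

def confidence_weighted_loss(logits, weak_labels, confidence):
    # confidence: one value in [0, 1] per example, e.g. produced
    # by a separate model fit on the small true-labeled set.
    per_example = F.cross_entropy(logits, weak_labels, reduction="none")
    # Low-confidence (likely noisy) labels are down-weighted, so
    # their gradients barely move the target network.
    return (confidence * per_example).mean()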
no code implementations • CL 2017 • Sascha Rothe, Hinrich Schütze
We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource.
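The paper's central modeling assumption is additive: a word embedding is the sum of its lexeme embeddings, and a synset embedding is likewise the sum of its lexemes' embeddings. A minimal numpy sketch under that assumption (the data structures here are illustrative):

import numpy as np

def synset_vectors(synset_to_lexemes, lexeme_vecs):
    # Synset embedding = sum of its lexeme embeddings.
    return {syn: np.sum([lexeme_vecs[l] for l in lexemes], axis=0)
            for syn, lexemes in synset_to_lexemes.items()}

def word_constraint_residual(word, word_vecs, word_to_lexemes, lexeme_vecs):
    # Word embedding = sum of its lexeme embeddings; the residual
    # should be near zero once the lexeme vectors are learned.
    recon = np.sum([lexeme_vecs[l] for l in word_to_lexemes[word]], axis=0)
    return word_vecs[word] - recon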
no code implementations • 11 Aug 2017 • Mostafa Dehghani, Sascha Rothe, Enrique Alfonseca, Pascal Fleury
The results suggest that our model outperforms the baselines both in generating queries and in scoring candidate queries for the task of query suggestion.
1 code implementation • NAACL 2016 • Sascha Rothe, Sebastian Ebert, Hinrich Schütze
Embeddings are generic representations that are useful for many NLP tasks.
no code implementations • IJCNLP 2015 • Sascha Rothe, Hinrich Schütze
We present AutoExtend, a system to learn embeddings for synsets and lexemes.