Text Infilling
23 papers with code • 0 benchmarks • 1 dataset
Text Infilling is the task of predicting missing spans of text that are consistent with the preceding and subsequent text. It generalizes the cloze task, which historically refers to infilling individual words.
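As a rough illustration of the task itself (not of any specific method listed below), here is a minimal span-infilling sketch assuming the Hugging Face `transformers` library and the public `t5-small` checkpoint, whose pre-training uses sentinel tokens to mark missing spans.

```python
# Minimal sketch of span infilling with a pre-trained seq2seq model.
# Assumes the Hugging Face `transformers` library and the `t5-small` checkpoint.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 marks missing spans with sentinel tokens such as <extra_id_0>;
# generation fills in a candidate span for each sentinel.
text = "The chef <extra_id_0> the pasta and served it with <extra_id_1>."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```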
Most implemented papers
Enabling Language Models to Fill in the Blanks
We show that this approach, which we call infilling by language modeling, can enable LMs to infill entire sentences effectively on three different domains: short stories, scientific abstracts, and lyrics.
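The excerpt above refers to an infilling-by-language-modeling (ILM) framing, in which blanked context and the corresponding answers are concatenated into a single sequence that a standard left-to-right LM can be trained on. The sketch below shows one plausible way to build such a training example; the special tokens `[blank]`, `[sep]`, `[answer]` and the helper `make_ilm_example` are illustrative assumptions, not code from the paper.

```python
# Hypothetical ILM-style data construction: blank out spans in the context,
# then append the answers so a left-to-right LM learns to generate them
# conditioned on text both before and after each blank.
def make_ilm_example(tokens, spans):
    """tokens: list of words; spans: list of (start, end) word indices to blank out."""
    masked, answers = [], []
    last = 0
    for start, end in sorted(spans):
        masked.extend(tokens[last:start])
        masked.append("[blank]")
        answers.extend(tokens[start:end] + ["[answer]"])
        last = end
    masked.extend(tokens[last:])
    return " ".join(masked + ["[sep]"] + answers)

print(make_ilm_example(
    "She ate leftover pasta for breakfast".split(),
    [(2, 4), (5, 6)],
))
# -> She ate [blank] for [blank] [sep] leftover pasta [answer] breakfast [answer]
```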
Nutribullets Hybrid: Multi-document Health Summarization
We present a method for generating comparative summaries that highlights similarities and contradictions in input documents.
LOT: A Story-Centric Benchmark for Evaluating Chinese Long Text Understanding and Generation
We propose a story-centric benchmark named LOT for evaluating Chinese long text modeling, which aggregates two understanding tasks and two generation tasks.
Text Infilling
Recent years have seen remarkable progress in text generation across different contexts, such as the most common setting of generating text from scratch and the emerging paradigm of retrieval-and-rewriting.
TIGS: An Inference Algorithm for Text Infilling with Gradient Search
Text infilling is defined as the task of filling in the missing part of a sentence or paragraph, which suits many real-world natural language generation scenarios.
Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic Text Exchange
In this paper, we present a novel method for measurably adjusting the semantics of text while preserving its sentiment and fluency, a task we call semantic text exchange.
Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning
Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the relative past and future.
Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting
In this paper, we generalize text infilling (e.g., masked language models) by proposing Sequence Span Rewriting (SSR) as a self-supervised sequence-to-sequence (seq2seq) pre-training objective.
Show Me How To Revise: Improving Lexically Constrained Sentence Generation with XLNet
To overcome this challenge, we used a classifier to instruct the MCMC-based models where and how to refine the candidate sentences.
Conformal prediction for text infilling and part-of-speech prediction
In our paper, we propose inductive conformal prediction (ICP) algorithms for the tasks of text infilling and part-of-speech (POS) prediction for natural language data.
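For background on the excerpt above, the sketch below shows a generic split (inductive) conformal procedure over a classifier's softmax outputs; it is not the paper's specific algorithm, and the 1 − p(true label) nonconformity score, array names, and toy data are illustrative assumptions.

```python
import numpy as np

# Generic split/inductive conformal prediction sketch over softmax scores.
# Nonconformity score: 1 - probability assigned to the true label.
def conformal_prediction_sets(calib_probs, calib_labels, test_probs, alpha=0.1):
    n = len(calib_labels)
    # Nonconformity of each calibration example.
    scores = 1.0 - calib_probs[np.arange(n), calib_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, min(q_level, 1.0), method="higher")
    # Prediction set: every label whose nonconformity is at most the threshold.
    return test_probs >= 1.0 - qhat

# Toy usage with a made-up 3-class vocabulary.
rng = np.random.default_rng(0)
calib_probs = rng.dirichlet(np.ones(3), size=50)
calib_labels = rng.integers(0, 3, size=50)
test_probs = rng.dirichlet(np.ones(3), size=4)
print(conformal_prediction_sets(calib_probs, calib_labels, test_probs))
```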