Text Infilling

20 papers with code • 0 benchmarks • 1 dataset

Text Infilling is the task of predicting missing spans of text that are consistent with the preceding and subsequent text. It is a generalization of the cloze task; historically, cloze refers to infilling individual words.

Source: Enabling Language Models to Fill in the Blanks
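
To make the task concrete, here is a minimal span-infilling sketch using the Hugging Face transformers library and T5's sentinel tokens. It is only an illustration of the task itself, not the method of the source paper or of any paper listed below.

    from transformers import T5ForConditionalGeneration, T5Tokenizer

    # T5 is pre-trained with a span-infilling objective: sentinel tokens such as
    # <extra_id_0> mark the missing spans, and the model generates their contents.
    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    text = "Text infilling predicts <extra_id_0> that are consistent with the <extra_id_1> text."
    input_ids = tokenizer(text, return_tensors="pt").input_ids

    # The decoded output interleaves sentinel tokens with the predicted spans.
    outputs = model.generate(input_ids, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=False))

When every missing span is a single word, this reduces to the classic cloze setting, which masked language models such as BERT handle directly.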

Most implemented papers

Language modeling via stochastic processes

rosewang2008/language_modeling_via_stochastic_processes ICLR 2022

Recent work in self-supervised learning suggests that models can learn good latent representations via contrastive learning, which can be effective for discriminative tasks.

CTRLEval: An Unsupervised Reference-Free Metric for Evaluating Controlled Text Generation

thu-coai/ctrleval ACL 2022

Existing reference-free metrics have obvious limitations for evaluating controlled text generation models.

Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models

facebookresearch/electra-fewshot-learning 30 May 2022

In this work, we adapt prompt-based few-shot learning to ELECTRA and show that it outperforms masked language models in a wide range of tasks.

Reprogramming Pretrained Language Models for Antibody Sequence Infilling

ibm/reprogbert 5 Oct 2022

Results on antibody design benchmarks show that, on a low-resourced antibody sequence dataset, our model provides highly diverse CDR sequences, with up to a more-than-two-fold increase in diversity over the baselines, without losing structural integrity and naturalness.

MetaFill: Text Infilling for Meta-Path Generation on Heterogeneous Information Networks

zequnl/metafill 14 Oct 2022

A meta-path, a sequence of node types and edge types, is the core technique for embedding HINs.

Generative Prompt Tuning for Relation Classification

hanjiale/genpt 22 Oct 2022

Current prompt tuning methods mostly convert the downstream tasks to masked language modeling problems by adding cloze-style phrases and mapping all labels to verbalizations with fixed length, which has proven effective for tasks with simple label spaces.
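
As a brief illustration of the cloze-style prompting this snippet refers to (not the GenPT method itself), class labels can be mapped to single-token verbalizers and a masked language model asked to fill the blank. The model name, prompt, and verbalizer below are assumptions chosen only for the example.

    from transformers import pipeline

    # Hypothetical sentiment example: each label is verbalized as one token.
    verbalizer = {"great": "positive", "terrible": "negative"}

    fill = pipeline("fill-mask", model="bert-base-uncased")
    review = "The plot was predictable and the acting felt flat."
    prompt = f"{review} Overall, it was [MASK]."

    # Restrict predictions to the verbalizer tokens and read off label scores.
    for pred in fill(prompt, targets=list(verbalizer)):
        print(verbalizer[pred["token_str"]], round(pred["score"], 4))

As the snippet above notes, this recipe assumes a simple label space with fixed-length verbalizations, which motivates the paper's generative formulation for relation classification.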

Model-tuning Via Prompts Makes NLP Models Adversarially Robust

acmi-lab/mvp 13 Mar 2023

Across 5 NLP datasets, 4 adversarial attacks, and 3 different models, MVP improves performance against adversarial substitutions by an average of 8% over standard methods and even outperforms adversarial training-based state-of-the-art defenses by 3.5%.

MAGVLT: Masked Generative Vision-and-Language Transformer

kakaobrain/magvlt CVPR 2023

Notably, MAGVLT achieves competitive results on both zero-shot image-to-text and text-to-image generation on MS-COCO with a single moderate-sized model (fewer than 500M parameters), even without the use of monomodal data and networks.

A Simple yet Effective Framework for Few-Shot Aspect-Based Sentiment Analysis

NUSTM/FS-ABSA SIGIR 2023

In this work, we argue that two kinds of gaps, i.e., the domain gap and the objective gap, hinder the transfer of knowledge from pre-trained language models (PLMs) to ABSA tasks.

Probabilistically-sound beam search with masked language models

rcalef/hcb_infilling 22 Feb 2024

Beam search with masked language models (MLMs) is challenging in part because joint probability distributions over sequences are not readily available, unlike for autoregressive models.
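
For context only (this is not the paper's proposed method), a common heuristic for scoring a candidate sequence with an MLM is the pseudo-log-likelihood: mask each position in turn and sum the log-probabilities the model assigns to the true tokens. The sketch below assumes bert-base-uncased and is meant to show why such scores are an approximation rather than a proper joint distribution.

    import torch
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

    def pseudo_log_likelihood(text: str) -> float:
        # Mask one token at a time and accumulate the log-probability that
        # the MLM assigns to the original token at the masked position.
        ids = tok(text, return_tensors="pt").input_ids[0]
        total = 0.0
        for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
            masked = ids.clone()
            masked[i] = tok.mask_token_id
            with torch.no_grad():
                logits = mlm(masked.unsqueeze(0)).logits[0, i]
            total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
        return total

    # Rank two candidate infills for "The cat ___ on the mat."
    print(pseudo_log_likelihood("The cat sat on the mat."))
    print(pseudo_log_likelihood("The cat quantum on the mat."))

Because each position is scored conditioned on all the others, these per-token probabilities do not multiply out into a joint distribution the way an autoregressive chain rule does, which is the difficulty this paper addresses.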