Sequence Editing Models

Seq2Edits

Introduced by Stahlberg et al. in Seq2Edits: Sequence Transduction Using Span-level Edit Operations

Seq2Edits is an open-vocabulary approach to sequence editing for natural language processing (NLP) tasks with a high degree of overlap between input and output texts. In this approach, each sequence-to-sequence transduction is represented as a sequence of edit operations, where each operation either replaces an entire source span with target tokens or keeps it unchanged. Across five tasks (text normalization, sentence fusion, sentence splitting & rephrasing, text simplification, and grammatical error correction), the approach improves explainability by associating each edit operation with a human-readable tag.
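For concreteness, a minimal sketch of this representation is shown below, assuming each edit is written as a (tag, source span end, replacement) triple. The tag names (e.g. SELF, VERB:SVA) and the exact triple layout are illustrative, not the paper's released data structures.

```python
# Hypothetical grammatical-error-correction example:
#   source: "He go to school yesterday ."
#   target: "He went to school yesterday ."
# Each edit covers the source span from the previous edit's end up to span_end.
edits = [
    ("SELF",     1, None),    # keep source[0:1] = "He" unchanged
    ("VERB:SVA", 2, "went"),  # replace source[1:2] = "go" with "went"
    ("SELF",     6, None),    # keep source[2:6] = "to school yesterday ." unchanged
]
```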

Rather than generating the target sentence as a series of tokens, the model predicts a sequence of edit operations that, when applied to the source sentence, yields the target sentence. Each edit operates on a span in the source sentence and either copies, deletes, or replaces it with one or more target tokens. Edits are generated auto-regressively from left to right using a modified Transformer architecture to facilitate learning of long-range dependencies.
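Applying such an edit sequence to the source sentence is a deterministic, purely mechanical step; only the edit sequence itself has to be predicted by the model. A sketch under the same illustrative assumptions as above:

```python
# Walk the edits left to right, copying, deleting, or replacing each source span.
def apply_edits(source_tokens, edits):
    output, cursor = [], 0
    for tag, span_end, replacement in edits:
        span = source_tokens[cursor:span_end]
        if replacement is None:
            output.extend(span)                 # copy the span unchanged
        elif replacement:
            output.extend(replacement.split())  # replace the span with target tokens
        # replacement == "" means the span is deleted
        cursor = span_end
    return output

source = "He go to school yesterday .".split()
edits = [("SELF", 1, None), ("VERB:SVA", 2, "went"), ("SELF", 6, None)]
print(" ".join(apply_edits(source, edits)))  # -> He went to school yesterday .
```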

Source: Seq2Edits: Sequence Transduction Using Span-level Edit Operations

Tasks


Task Papers Share
Grammatical Error Correction 1 33.33%
Sentence 1 33.33%
Text Simplification 1 33.33%
