Enhanced Seq2Seq Autoencoder via Contrastive Learning

Introduced by Zheng et al. in Enhanced Seq2Seq Autoencoder via Contrastive Learning for Abstractive Text Summarization

ESACL (Enhanced Seq2Seq Autoencoder via Contrastive Learning) is a denoising sequence-to-sequence (seq2seq) autoencoder for abstractive text summarization. The model adopts a standard Transformer-based architecture with a multi-layer bidirectional encoder and an autoregressive decoder. To enhance its denoising ability, self-supervised contrastive learning is incorporated along with several sentence-level document augmentations.
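
As a rough illustration of the two mechanisms named above, the PyTorch sketch below pairs a simple sentence-level augmentation (random sentence deletion and shuffling) with a SimCLR-style NT-Xent contrastive loss over pooled encoder representations of two augmented views. This is a minimal sketch under assumed design choices, not the authors' implementation: the function names, the specific augmentations, and the temperature value are all illustrative.

```python
import random

import torch
import torch.nn.functional as F


def augment_document(sentences, p_delete=0.1, shuffle=True):
    """Sentence-level augmentation: randomly drop and reorder sentences."""
    kept = [s for s in sentences if random.random() > p_delete] or sentences[:1]
    if shuffle:
        kept = kept[:]
        random.shuffle(kept)
    return kept


def contrastive_loss(z1, z2, temperature=0.5):
    """NT-Xent loss between two batches of pooled encoder representations.

    z1, z2: (batch, dim) embeddings of two augmented views of the same
    documents; matching rows form the positive pairs.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, dim)
    sim = z @ z.t() / temperature                        # cosine similarities
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, float("-inf"))           # exclude self-pairs
    batch = z1.size(0)
    # The positive for row i is row i + B, and vice versa.
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets.to(sim.device))


if __name__ == "__main__":
    doc = ["First sentence.", "Second sentence.", "Third sentence."]
    view_a, view_b = augment_document(doc), augment_document(doc)
    # Stand-ins for the encoder's pooled outputs of the two views.
    z1, z2 = torch.randn(4, 768), torch.randn(4, 768)
    print(contrastive_loss(z1, z2))
```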

Tasks

Task                             Papers   Share
Continual Learning               1        20.00%
Abstractive Text Summarization   1        20.00%
Denoising                        1        20.00%
Sentence                         1        20.00%
Text Summarization               1        20.00%

Categories

Transformers